Olive
0.2.0

OVERVIEW

  • Olive
  • Design
  • Quick Tour
  • Olive Options

GET STARTED

  • Installation
  • Quickstart Examples

TUTORIALS

  • Configuring OliveSystem
  • Configuring Metric
  • Configuring Pass
  • Configuring HW-dependent optimizations
  • Advanced User Tour
  • How to add new Pass
  • How to write user_script
  • Packaging Olive artifacts

EXAMPLES

  • Inception model optimization on Qualcomm NPU
  • Cifar10 optimization with OpenVINO for Intel HW
  • BERT optimization with QAT Customized Training Loop on CPU
  • ResNet optimization with QAT Default Training Loop on CPU
  • ResNet optimization with QAT PyTorch Lightning Module on CPU
  • SqueezeNet latency optimization with DirectML
  • Stable Diffusion optimization with DirectML
  • BERT optimization with Intel® Neural Compressor Post Training quantization on CPU
  • Whisper optimization using ORT toolchain

API REFERENCE

  • OliveModels
  • OliveSystems
  • OliveEvaluator
  • Metric
  • SearchAlgorithms
  • Engine
  • Passes

Welcome to Olive’s documentation!

Important

Olive is currently in pre-release. Its features and usage will change frequently as development continues. Feedback and suggestions are welcome.

This document introduces Olive and provides some examples to get you started.


© Copyright 2023, olivedevteam@microsoft.com.
