# Acceleration on CPU

- [Whisper optimization with PTQ and pre/post processing](https://github.com/microsoft/Olive/tree/main/examples/whisper)
- [BERT optimization with Intel® Neural Compressor Post Training Quantization](https://github.com/microsoft/Olive/tree/main/examples/bert#bert-optimization-with-intel-neural-compressor-ptq-on-cpu)
- [BERT optimization with QAT Customized Training Loop](https://github.com/microsoft/Olive/tree/main/examples/bert#bert-optimization-with-qat-customized-training-loop-on-cpu)
- [ResNet optimization with QAT Default Training Loop](https://github.com/microsoft/Olive/tree/main/examples/resnet#resnet-optimization-with-qat-default-training-loop-on-cpu)
- [ResNet optimization with QAT PyTorch Lightning Module](https://github.com/microsoft/Olive/tree/main/examples/resnet#resnet-optimization-with-qat-pytorch-lightning-module-on-cpu)
- [Cifar10 optimization with OpenVINO for Intel HW](https://github.com/microsoft/Olive/tree/main/examples/cifar10_openvino_intel_hw)

# Acceleration on GPU

- [BERT optimization with CUDA/TensorRT](https://github.com/microsoft/Olive/tree/main/examples/bert/#bert-optimization-with-cudatensorrt)
- [SqueezeNet latency optimization with DirectML](https://github.com/microsoft/Olive/tree/main/examples/directml/squeezenet)
- [Stable Diffusion optimization with DirectML](https://github.com/microsoft/Olive/tree/main/examples/directml/stable_diffusion)
- [Dolly V2 optimization with DirectML](https://github.com/microsoft/Olive/tree/main/examples/directml/dolly_v2)

# Acceleration on NPU

- [Inception model optimization on Qualcomm NPU](https://github.com/microsoft/Olive/tree/main/examples/snpe/inception_snpe_qualcomm_npu)

# Acceleration on DPU

- [ResNet optimization with Vitis-AI Post Training Quantization for AMD DPU](https://github.com/microsoft/Olive/tree/main/examples/resnet#resnet-optimization-with-vitis-ai-ptq-on-amd-dpu)
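
Each linked example is driven by a JSON workflow config that Olive executes end to end. As a minimal sketch of how such an example is typically launched programmatically, assuming Olive's `olive.workflows.run` entry point and using `whisper_cpu_int8.json` as an illustrative config name from the Whisper example (check each example's README for its exact configs and extra requirements):

```python
# Minimal sketch: run one of the linked example workflows from Python.
# The config filename below is illustrative; each example directory
# documents its own workflow configs and prerequisites.
import json

from olive.workflows import run as olive_run

with open("whisper_cpu_int8.json") as f:
    workflow_config = json.load(f)

# Executes the optimization workflow (model conversion, optimization
# passes, quantization, evaluation) described by the config.
olive_run(workflow_config)
```

The same workflow can usually be launched from the command line inside the example directory with `python -m olive.workflows.run --config <config>.json`.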