Blogs
Olive MCP Server: Optimizing AI Models Through Natural Conversation
Learn how to use the Olive MCP Server to optimize, quantize, and fine-tune AI models through natural language — directly from VS Code Copilot, Claude, Cursor, and more.
Introducing olive init: An Interactive Wizard for Model Optimization
Get started with Olive in seconds — the new olive init wizard guides you step-by-step from model selection to a ready-to-run optimization command.
Exploring Optimal Quantization Settings for Small Language Models
An exploration of how Olive applies different quantization strategies, such as GPTQ, mixed precision, and QuaRot, to optimize small language models for efficiency and accuracy.
Fine-Tuning Diffusion Models with Olive
Learn how to train LoRA adapters for Stable Diffusion and Flux models using the Olive CLI or a JSON configuration.