Engine

Engine

class olive.engine.Engine(search_strategy: Dict[str, Any] | SearchStrategyConfig | None = None, host: Dict[str, Any] | SystemConfig | None = None, target: Dict[str, Any] | SystemConfig | None = None, evaluator: Dict[str, Any] | OliveEvaluatorConfig | None = None, cache_dir='.olive-cache', clean_cache=False, clean_evaluation_cache=False, plot_pareto_frontier=False, *, azureml_client_config=None)[source]

The engine executes the registered Olive passes.

It facilitates evaluation of the output models using the provided evaluation criteria and produces the output model(s).
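
For example, the engine can be constructed with plain dictionaries in place of the config objects (a minimal sketch, assuming a local run with the default cache settings; the dictionary keys follow SearchStrategyConfig, documented below):

    from olive.engine import Engine

    # Sketch: the search_strategy dictionary is validated against
    # SearchStrategyConfig; cache settings mirror the constructor defaults.
    engine = Engine(
        search_strategy={
            "execution_order": "joint",
            "search_algorithm": "exhaustive",
        },
        cache_dir=".olive-cache",
        clean_cache=False,
    )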

register(pass_type: Type[Pass], config: Dict[str, Any] = None, disable_search=False, name: str = None, host: OliveSystem = None, evaluator_config: OliveEvaluatorConfig = None, clean_run_cache: bool = False, output_name: str = None)[source]

Register a pass configuration so that it can be instantiated and executed later.
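
A registration sketch, assuming the OnnxConversion pass and its target_opset option from Olive's ONNX passes (substitute the pass you actually use):

    from olive.passes.onnx.conversion import OnnxConversion

    # Sketch: register a PyTorch -> ONNX conversion pass; the config dict
    # is validated against the pass's own configuration class.
    engine.register(
        OnnxConversion,
        config={"target_opset": 14},
        name="conversion",
        clean_run_cache=False,
    )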

run(input_model_config: ModelConfig, accelerator_specs: List[AcceleratorSpec], data_root: str = None, packaging_config: PackagingConfig | List[PackagingConfig] | None = None, output_dir: str = None, output_name: str = None, evaluate_input_model: bool = True)[source]

Run all the registered Olive passes on the input model and produce one or more candidate models.

Parameters:
  • input_model_config – input Olive model configuration

  • accelerator_specs – list of accelerator specs

  • data_root – data root for the input data

  • packaging_config – packaging configuration; if provided, the output model(s) will be packaged into a zip file.

  • output_dir – output directory for the output model

  • output_name – output name for the output model; if provided, the output model is saved to the engine’s output_dir with output_name as the file name prefix.

  • evaluate_input_model – if True, also run the evaluation on the input model.

Returns:

If the search strategy is None, all passes are run in the order they were registered, and the outputs are:
  1. Final model -> {output_dir}/{output_name}_{AcceleratorSpec}_model.onnx

  2. JSON file -> {output_dir}/{output_name}_{AcceleratorSpec}_model.json

  3. Evaluation results of the final model -> {output_dir}/{output_name}_{AcceleratorSpec}_metrics.json

In this case, the return value is the footprint (or a zip file, if packaging_config is provided) of the final model and its evaluation results.

If the search strategy is not None, the search strategy is run to find candidate models, and the return value is the footprint (or zip file) of the candidate models and their evaluation results.
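
A usage sketch, assuming a local ONNX model on disk and a CPU target (the AcceleratorSpec arguments and model type names may vary across Olive versions):

    from olive.hardware.accelerator import AcceleratorSpec
    from olive.model import ModelConfig

    # Sketch: run all registered passes on a local ONNX model for CPU.
    input_model = ModelConfig.parse_obj(
        {"type": "ONNXModel", "config": {"model_path": "model.onnx"}}
    )
    footprint = engine.run(
        input_model_config=input_model,
        accelerator_specs=[AcceleratorSpec("cpu", "CPUExecutionProvider")],
        output_dir="outputs",
        output_name="optimized",
        evaluate_input_model=False,
    )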

Note

All parameters of type ...Config or ConfigBase can be assigned dictionaries with keys corresponding to the fields of the class.
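
For example, the following two engine constructions are equivalent (sketch):

    from olive.engine import Engine
    from olive.evaluator.olive_evaluator import OliveEvaluatorConfig

    # Sketch: a config object and an equivalent plain dictionary.
    engine = Engine(evaluator=OliveEvaluatorConfig(metrics=[]))
    engine = Engine(evaluator={"metrics": []})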

EngineConfig

pydantic settings olive.engine.EngineConfig[source]
field search_strategy: SearchStrategyConfig | bool = None
field host: SystemConfig = None
field target: SystemConfig = None
field evaluator: OliveEvaluatorConfig = None
field cache_dir: Path | str = '.olive-cache'
field clean_cache: bool = False
field clean_evaluation_cache: bool = False
field plot_pareto_frontier: bool = False
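
A construction sketch using only the fields documented above (parse_obj is pydantic's dictionary-validation entry point):

    from olive.engine import EngineConfig

    # Sketch: EngineConfig validates the same fields the Engine
    # constructor accepts.
    config = EngineConfig.parse_obj(
        {
            "search_strategy": {
                "execution_order": "joint",
                "search_algorithm": "random",
            },
            "cache_dir": ".olive-cache",
            "plot_pareto_frontier": False,
        }
    )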

SearchStrategyConfig

pydantic settings olive.strategy.search_strategy.SearchStrategyConfig[source]
field execution_order: str [Required]
field search_algorithm: str [Required]
field search_algorithm_config: ConfigBase = None
field output_model_num: int = None
field stop_when_goals_met: bool = False
field max_iter: int = None
field max_time: int = None
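
A sketch of a search strategy that stops once all metric goals are met, assuming the "joint" execution order and the "tpe" search algorithm shipped with Olive:

    from olive.strategy.search_strategy import SearchStrategyConfig

    # Sketch: cap the search at 50 iterations and stop early if the
    # metric goals are already satisfied.
    strategy_config = SearchStrategyConfig(
        execution_order="joint",
        search_algorithm="tpe",
        stop_when_goals_met=True,
        max_iter=50,
    )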

SystemConfig

pydantic settings olive.systems.system_config.SystemConfig[source]
field type: SystemType [Required]
field config: TargetUserConfig = None
create_system()[source]
property olive_managed_env
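
A sketch for a local system, assuming the "LocalSystem" SystemType value; create_system() instantiates the OliveSystem the config describes:

    from olive.systems.system_config import SystemConfig

    # Sketch: a local host/target with the default (empty) user config.
    system_config = SystemConfig(type="LocalSystem")
    local_system = system_config.create_system()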

OliveEvaluatorConfig

pydantic settings olive.evaluator.olive_evaluator.OliveEvaluatorConfig[source]
field metrics: List[Metric] = []
property is_accuracy_drop_tolerance
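
A sketch with a single latency metric; the exact Metric fields (name, type, sub_types) are version-dependent, so treat the dictionary below as illustrative:

    from olive.evaluator.olive_evaluator import OliveEvaluatorConfig

    # Sketch: metric dictionaries are validated against the Metric class.
    evaluator_config = OliveEvaluatorConfig(
        metrics=[
            {
                "name": "latency",
                "type": "latency",
                "sub_types": [{"name": "avg"}],
            }
        ]
    )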

SearchStrategy

class olive.strategy.search_strategy.SearchStrategy(config: Dict[str, Any] | SearchStrategyConfig)[source]
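
Per the signature above, a SearchStrategy can be built from either a SearchStrategyConfig or a plain dictionary (sketch):

    from olive.strategy.search_strategy import SearchStrategy

    # Sketch: a pass-by-pass exhaustive search.
    strategy = SearchStrategy(
        {"execution_order": "pass-by-pass", "search_algorithm": "exhaustive"}
    )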