Engine#

class olive.engine.Engine(workflow_id: str = 'default_workflow', search_strategy: Dict[str, Any] | SearchStrategyConfig | None = None, host: Dict[str, Any] | SystemConfig | None = None, target: Dict[str, Any] | SystemConfig | None = None, evaluator: Dict[str, Any] | OliveEvaluatorConfig | None = None, cache_config: Dict[str, Any] | CacheConfig | None = None, plot_pareto_frontier: bool = False, no_artifacts: bool = False, *, azureml_client_config=None)[source]#

The engine executes the registered Olive Steps.

It facilitates evaluation of the output models using the provided evaluation criteria and produces the output model(s).
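
For illustration, a minimal sketch of constructing an engine from plain dictionaries (per the note under run() below, dictionaries are accepted wherever a ...Config is expected). The specific field values here ("joint", "exhaustive", "LocalSystem") are assumptions for a local workflow, not requirements:

    from olive.engine import Engine

    # All values below are illustrative; any field of SearchStrategyConfig
    # or SystemConfig can be supplied through these dictionaries.
    engine = Engine(
        search_strategy={
            "execution_order": "joint",        # SearchStrategyConfig.execution_order
            "search_algorithm": "exhaustive",  # SearchStrategyConfig.search_algorithm
        },
        host={"type": "LocalSystem"},    # assumed SystemConfig.type value
        target={"type": "LocalSystem"},
    )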

register(pass_type: Type[Pass], config: Dict[str, Any] = None, name: str = None, host: OliveSystem = None, evaluator_config: OliveEvaluatorConfig = None)[source]#

Register a pass configuration so that it can be instantiated and executed later.
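
A sketch of registering a pass; OnnxQuantization, its import path, and the quant_mode option are assumed examples and may differ between Olive versions:

    from olive.engine import Engine
    from olive.passes.onnx.quantization import OnnxQuantization  # assumed import path

    engine = Engine()
    # config keys must match the chosen pass's own configuration schema;
    # "quant_mode" is an assumed OnnxQuantization option.
    engine.register(OnnxQuantization, config={"quant_mode": "static"}, name="quantize")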

run(input_model_config: ModelConfig, accelerator_specs: List[AcceleratorSpec], packaging_config: PackagingConfig | List[PackagingConfig] | None = None, output_dir: str = None, evaluate_input_model: bool = True, log_to_file: bool = False, log_severity_level: int = 1)[source]#

Run all the registered Olive passes on the input model and produce one or more candidate models.

Parameters:
  • input_model_config – input Olive model configuration

  • accelerator_specs – list of accelerator specs

  • packaging_config – packaging configuration. If provided, the output model(s) will be packaged into a zip file.

  • output_dir – output directory for the output model

  • evaluate_input_model – if True, evaluate the input model before running the passes.

  • log_to_file – whether to save logs to a file.

  • log_severity_level – severity level of the logger.

Returns:

  Search mode:

    1. One accelerator spec:

      output_dir/footprints.json: footprint of the run
      output_dir/pareto_frontier_footprints.json: Pareto frontier footprints
      output_dir/run_history.txt: run history
      output_dir/input_model_metrics.json: evaluation results of the input model
      output_dir/…: output model files

    2. Multiple accelerator specs:

      output_dir/{accelerator_spec}/…: same as 1, but for each accelerator spec
      output_dir/…: output model files

  No search mode:

    1. One accelerator spec:

      output_dir/footprints.json: footprint of the run
      output_dir/run_history.txt: run history
      output_dir/input_model_metrics.json: evaluation results of the input model
      output_dir/output_footprints.json: footprint of the output models
      output_dir/…: output model files

      1. One pass flow:

        output_dir/metrics.json: evaluation results of the output model
        output_dir/…: output model files

    2. Multiple accelerator specs:

      output_dir/{accelerator_spec}/…: same as 1, but for each accelerator spec
      output_dir/…: output model files

Note

All parameters of type ...Config or ConfigBase can be assigned dictionaries with keys corresponding to the fields of the class.
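
Putting the pieces together, a hypothetical end-to-end invocation of run(); the model path, accelerator values, and import paths are assumptions to adapt to your setup:

    from olive.engine import Engine
    from olive.hardware.accelerator import AcceleratorSpec  # assumed import path
    from olive.model import ModelConfig

    engine = Engine()
    # ... passes registered via engine.register(...) as shown above ...

    input_model = ModelConfig.parse_obj(
        {"type": "ONNXModel", "config": {"model_path": "model.onnx"}}  # hypothetical model
    )
    footprints = engine.run(
        input_model_config=input_model,
        accelerator_specs=[
            AcceleratorSpec(accelerator_type="cpu", execution_provider="CPUExecutionProvider")
        ],
        output_dir="outputs",
        evaluate_input_model=True,
    )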

EngineConfig#

pydantic settings olive.engine.EngineConfig[source]#
field search_strategy: SearchStrategyConfig | bool = None#
field host: SystemConfig = None#
field target: SystemConfig = None#
field evaluator: OliveEvaluatorConfig = None#
field plot_pareto_frontier: bool = False#
field no_artifacts: bool = False#
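
As with the engine itself, an EngineConfig can be hydrated from a dictionary; a minimal sketch with illustrative field values:

    from olive.engine import EngineConfig

    config = EngineConfig.parse_obj({
        "search_strategy": {"execution_order": "joint", "search_algorithm": "random"},
        "plot_pareto_frontier": True,
    })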

SearchStrategyConfig#

pydantic settings olive.strategy.search_strategy.SearchStrategyConfig[source]#
field execution_order: str [Required]#
field search_algorithm: str [Required]#
field search_algorithm_config: ConfigBase = None#
field output_model_num: int = None#
field stop_when_goals_met: bool = False#
field max_iter: int = None#
field max_time: int = None#
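
A sketch of a bounded search configuration; "pass-by-pass" and "tpe" are assumed values for the two required fields:

    from olive.strategy.search_strategy import SearchStrategyConfig

    strategy_config = SearchStrategyConfig(
        execution_order="pass-by-pass",  # assumed value
        search_algorithm="tpe",          # assumed value
        stop_when_goals_met=True,        # stop once all metric goals are met
        max_iter=20,                     # cap the number of search iterations
    )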

SystemConfig#

pydantic settings olive.systems.system_config.SystemConfig[source]#
field type: SystemType [Required]#
field config: TargetUserConfig = None#
create_system()[source]#
property olive_managed_env#
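
A sketch of instantiating a concrete system from its config; "LocalSystem" is an assumed type value:

    from olive.systems.system_config import SystemConfig

    host_config = SystemConfig.parse_obj({"type": "LocalSystem"})  # assumed type
    host_system = host_config.create_system()  # builds the concrete OliveSystem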

OliveEvaluatorConfig#

pydantic settings olive.evaluator.olive_evaluator.OliveEvaluatorConfig[source]#
field type: str = None#
field type_args: Dict [Optional]#
field user_script: Path | str = None#
field script_dir: Path | str = None#
field metrics: List[Metric] = []#
property is_accuracy_drop_tolerance#
create_evaluator(model: OliveModelHandler = None) OliveEvaluator[source]#
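
A sketch of an evaluator config with a single latency metric; the metric dictionary shape is an assumption based on the Metric fields and may differ by version:

    from olive.evaluator.olive_evaluator import OliveEvaluatorConfig

    evaluator_config = OliveEvaluatorConfig.parse_obj({
        "metrics": [
            {
                "name": "latency",
                "type": "latency",               # assumed metric type
                "sub_types": [{"name": "avg"}],  # assumed sub-metric shape
            }
        ]
    })
    evaluator = evaluator_config.create_evaluator()  # model defaults to None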

SearchStrategy#

class olive.strategy.search_strategy.SearchStrategy(config: Dict[str, Any] | SearchStrategyConfig)[source]#
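
Per the signature above, a SearchStrategy accepts either a SearchStrategyConfig instance or an equivalent dictionary; a minimal sketch with assumed field values:

    from olive.strategy.search_strategy import SearchStrategy

    strategy = SearchStrategy(
        {"execution_order": "joint", "search_algorithm": "exhaustive"}
    )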