mlos_bench.optimizers.base_optimizer
Base class for an interface between the benchmarking framework and mlos_core optimizers.
Classes
Optimizer – An abstract interface between the benchmarking framework and mlos_core optimizers.
Module Contents
- class mlos_bench.optimizers.base_optimizer.Optimizer(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, config: dict, global_config: dict | None = None, service: mlos_bench.services.base_service.Service | None = None)[source]
An abstract interface between the benchmarking framework and mlos_core optimizers.
Create a new optimizer for the given configuration space defined by the tunables.
- Parameters:
tunables (TunableGroups) – The tunables to optimize.
config (dict) – Free-format key/value pairs of configuration parameters to pass to the optimizer.
global_config (Optional[dict])
service (Optional[Service])
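A minimal construction sketch. MockOptimizer is assumed here as a concrete subclass shipped with mlos_bench, the config keys are illustrative (config is free-format), and the TunableGroups instance is assumed to be loaded elsewhere, e.g., by the mlos_bench config loader::

    from mlos_bench.optimizers.mock_optimizer import MockOptimizer  # assumed concrete subclass
    from mlos_bench.tunables.tunable_groups import TunableGroups

    tunables: TunableGroups = ...  # assumed: loaded elsewhere from the experiment config

    opt = MockOptimizer(
        tunables=tunables,
        config={"max_suggestions": 10, "seed": 42},  # free-format; keys shown are illustrative
        global_config=None,
        service=None,
    )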
- __exit__(ex_type: Type[BaseException] | None, ex_val: BaseException | None, ex_tb: types.TracebackType | None) Literal[False] [source]
Exit the context of the optimizer.
- Parameters:
ex_type (Optional[Type[BaseException]])
ex_val (Optional[BaseException])
ex_tb (Optional[types.TracebackType])
- Return type:
Literal[False]
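Because __exit__ returns Literal[False], exceptions raised inside the with block are not suppressed. A usage sketch, assuming the matching __enter__ is defined on the class and opt is an Optimizer instance (see the construction sketch above)::

    with opt as optimizer:
        suggestion = optimizer.suggest()
    # Any exception raised in the block propagates, since __exit__ returns False.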
- abstract bulk_register(configs: Sequence[dict], scores: Sequence[Dict[str, mlos_bench.tunables.tunable.TunableValue] | None], status: Sequence[mlos_bench.environments.status.Status] | None = None) bool [source]
Pre-load the optimizer with the bulk data from previous experiments.
- Parameters:
configs (Sequence[dict]) – Records of tunable values from other experiments.
scores (Sequence[Optional[Dict[str, TunableValue]]]) – Benchmark results from experiments that correspond to configs.
status (Optional[Sequence[Status]]) – Status of the experiments that correspond to configs.
- Returns:
is_not_empty – True if there is data to register, False otherwise.
- Return type:
bool
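A pre-loading sketch, guarded by the supports_preload property documented below. The tunable names, metric name, and scores are illustrative, and opt is assumed to be an Optimizer instance::

    from mlos_bench.environments.status import Status

    configs = [{"vm_size": "Standard_B2s"}, {"vm_size": "Standard_B4ms"}]  # illustrative
    scores = [{"score": 0.85}, None]  # None for the failed trial
    status = [Status.SUCCEEDED, Status.FAILED]

    if opt.supports_preload:
        loaded = opt.bulk_register(configs, scores, status)  # True if there was data to register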
- abstract get_best_observation() Tuple[Dict[str, float], mlos_bench.tunables.tunable_groups.TunableGroups] | Tuple[None, None] [source]
Get the best observation so far.
- Returns:
(value, tunables) – The best value and the corresponding configuration. (None, None) if no successful observation has been registered yet.
- Return type:
Tuple[Dict[str, float], TunableGroups] | Tuple[None, None]
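A retrieval sketch, assuming opt is an Optimizer instance with some registered observations; the metric name is illustrative::

    best_score, best_config = opt.get_best_observation()
    if best_score is None:
        print("No successful observation has been registered yet.")
    else:
        print(best_score)   # e.g., {"score": 0.85}, keyed by optimization target
        print(best_config)  # the corresponding TunableGroups configuration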
- not_converged() bool [source]
Return True if not converged, False otherwise.
Base implementation just checks the iteration count.
- Return type:
bool
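A typical optimization loop built on not_converged(), suggest(), and register(). Here run_benchmark() is a hypothetical helper that applies the suggested tunable values and returns a (Status, score dict) pair::

    while opt.not_converged():
        tunables = opt.suggest()
        # Hypothetical helper: runs the benchmark with the suggested values and
        # returns (Status, Dict[str, TunableValue]) or (Status.FAILED, None).
        status, score = run_benchmark(tunables)
        opt.register(tunables, status, score)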
- abstract register(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, status: mlos_bench.environments.status.Status, score: Dict[str, mlos_bench.tunables.tunable.TunableValue] | None = None) Dict[str, float] | None [source]
Register the observation for the given configuration.
- Parameters:
tunables (TunableGroups) – The configuration that has been benchmarked. Usually it’s the same config that the .suggest() method returned.
status (Status) – Final status of the experiment (e.g., SUCCEEDED or FAILED).
score (Optional[Dict[str, TunableValue]]) – A dict with the final benchmark results. None if the experiment was not successful.
- Returns:
value – The benchmark scores extracted (and possibly transformed) from the registered results; these are the values being MINIMIZED.
- Return type:
Dict[str, float] | None
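A registration sketch for both outcomes, assuming opt and tunables come from the loop sketch above; the metric name is illustrative::

    from mlos_bench.environments.status import Status

    # Successful trial: pass the final benchmark results.
    minimized = opt.register(tunables, Status.SUCCEEDED, {"score": 0.85})

    # Failed trial: no results are available, so pass None for the score.
    opt.register(tunables, Status.FAILED, None)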
- suggest() mlos_bench.tunables.tunable_groups.TunableGroups [source]
Generate the next suggestion. Base class’ implementation increments the iteration count and returns the current values of the tunables.
- Returns:
tunables – The next configuration to benchmark. These are the same tunables passed to the constructor, but with their values set to the next suggestion.
- Return type:
TunableGroups
- property config_space: ConfigSpace.ConfigurationSpace[source]
Get the tunable parameters of the optimizer as a ConfigurationSpace.
- Returns:
The ConfigSpace representation of the tunable parameters.
- Return type:
ConfigSpace.ConfigurationSpace
- property current_iteration: int[source]
The current number of iterations (suggestions) registered.
Note: this may or may not be the same as the number of configurations. See Also: Scheduler.trial_config_repeat_count and Scheduler.max_trials.
- Return type:
int
- property max_suggestions: int[source]
The maximum number of iterations (suggestions) to run.
Note: this may or may not be the same as the number of configurations. See Also: Scheduler.trial_config_repeat_count and Scheduler.max_trials.
- Return type:
int
- property name: str[source]
The name of the optimizer.
We save this information in mlos_bench storage to track the source of each configuration.
- Return type:
str
- property start_with_defaults: bool[source]
Return True if the optimizer should start with the default values.
Note: This parameter is mutable and will be reset to False after the defaults are first suggested.
- Return type:
bool
- property supports_preload: bool[source]
Return True if the optimizer supports pre-loading the data from previous experiments.
- Return type:
bool
- property targets: Dict[str, Literal['min', 'max']][source]
Returns a dictionary of optimization targets and their direction.
- Return type:
Dict[str, Literal['min', 'max']]
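A short sketch of iterating the targets, assuming opt is an Optimizer instance::

    for metric, direction in opt.targets.items():
        print(f"{metric}: {direction}")  # direction is either 'min' or 'max'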
- property tunable_params: mlos_bench.tunables.tunable_groups.TunableGroups[source]
Get the tunable parameters of the optimizer as TunableGroups.
- Returns:
tunables – A collection of covariant groups of tunable parameters.
- Return type:
TunableGroups
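An inspection sketch covering both search-space views and the optimizer name, assuming opt is an Optimizer instance::

    print(opt.name)            # optimizer name, recorded in mlos_bench storage
    print(opt.tunable_params)  # TunableGroups: covariant groups of tunable parameters
    print(opt.config_space)    # the equivalent ConfigSpace.ConfigurationSpace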