mlos_bench.optimizers.base_optimizer

Base class for an interface between the benchmarking framework and mlos_core optimizers.

Classes

Optimizer

An abstract interface between the benchmarking framework and mlos_core optimizers.

Module Contents

class mlos_bench.optimizers.base_optimizer.Optimizer(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, config: dict, global_config: dict | None = None, service: mlos_bench.services.base_service.Service | None = None)[source]

An abstract interface between the benchmarking framework and mlos_core optimizers.

Create a new optimizer for the given configuration space defined by the tunables.

Parameters:
  • tunables (TunableGroups) – The tunables to optimize.

  • config (dict) – Free-format key/value pairs of configuration parameters to pass to the optimizer.

  • global_config (Optional[dict]) – Free-format dict of global configuration parameters shared across the benchmarking framework.

  • service (Optional[Service]) – An optional service object for the optimizer to use.
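For illustration, the free-format `config` dict might carry keys corresponding to the properties documented below (`max_suggestions`, `seed`, `start_with_defaults`, and the optimization targets). This is a sketch, not a schema; the exact keys accepted depend on the concrete optimizer implementation:

```python
# Illustrative free-format `config` dict for an Optimizer. Key names
# mirror the properties documented on this page; treat them as an
# assumption, not a definitive schema.
config = {
    "max_suggestions": 100,       # stop after this many suggestions
    "seed": 42,                   # random seed for reproducibility
    "start_with_defaults": True,  # suggest the default config first
    "optimization_targets": {"latency": "min"},  # metric -> direction
}
```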

__enter__() → Optimizer[source]

Enter the optimizer’s context.

Return type:

Optimizer

__exit__(ex_type: Type[BaseException] | None, ex_val: BaseException | None, ex_tb: types.TracebackType | None) → Literal[False][source]

Exit the context of the optimizer.

Always returns False, so exceptions raised inside the context are never suppressed.

Parameters:
  • ex_type (Optional[Type[BaseException]]) – Type of the exception raised in the context, if any.

  • ex_val (Optional[BaseException]) – The exception instance, if any.

  • ex_tb (Optional[TracebackType]) – The traceback of the exception, if any.

Return type:

Literal[False]
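Because `__exit__` returns `Literal[False]`, exceptions raised inside the context always propagate to the caller. A minimal self-contained sketch of the same protocol (using a hypothetical `SketchOptimizer`, not the real mlos_bench class):

```python
# Schematic stand-in for the Optimizer context-manager protocol.
# SketchOptimizer is hypothetical; it only demonstrates the
# __enter__/__exit__ contract described above.
from types import TracebackType
from typing import Literal, Optional, Type


class SketchOptimizer:
    def __enter__(self) -> "SketchOptimizer":
        # Acquire any resources (e.g., connect to backing services) here.
        return self

    def __exit__(
        self,
        ex_type: Optional[Type[BaseException]],
        ex_val: Optional[BaseException],
        ex_tb: Optional[TracebackType],
    ) -> Literal[False]:
        # Release resources; returning False means exceptions raised
        # inside the `with` block are never suppressed.
        return False


try:
    with SketchOptimizer():
        raise ValueError("benchmark failed")
except ValueError as ex:
    propagated = str(ex)
```

Here the `ValueError` escapes the `with` block unchanged, which is exactly what a `Literal[False]` return type guarantees.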

__repr__() → str[source]

Return type:

str

abstract bulk_register(configs: Sequence[dict], scores: Sequence[Dict[str, mlos_bench.tunables.tunable.TunableValue] | None], status: Sequence[mlos_bench.environments.status.Status] | None = None) → bool[source]

Pre-load the optimizer with the bulk data from previous experiments.

Parameters:
  • configs (Sequence[dict]) – Records of tunable values from other experiments.

  • scores (Sequence[Optional[Dict[str, TunableValue]]]) – Benchmark results from experiments that correspond to configs.

  • status (Optional[Sequence[Status]]) – Status of the experiments that correspond to configs.

Returns:

is_not_empty – True if there is data to register, False otherwise.

Return type:

bool
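The contract above can be sketched with a self-contained toy implementation. `MiniOptimizer`, its `_observations` field, and the inlined `Status` enum are illustrative stand-ins, not mlos_bench code:

```python
# Hypothetical sketch of the bulk_register contract: pre-load prior
# observations and report whether any data was supplied.
from enum import Enum
from typing import Dict, Optional, Sequence


class Status(Enum):
    SUCCEEDED = 0
    FAILED = 1


class MiniOptimizer:
    def __init__(self) -> None:
        self._observations = []

    def bulk_register(
        self,
        configs: Sequence[dict],
        scores: Sequence[Optional[Dict[str, float]]],
        status: Optional[Sequence[Status]] = None,
    ) -> bool:
        # Pair each config with its score (and status, if given),
        # skipping failed or score-less trials.
        status = status or [Status.SUCCEEDED] * len(configs)
        for config, score, st in zip(configs, scores, status):
            if st == Status.SUCCEEDED and score is not None:
                self._observations.append((config, score))
        # True if there was any data to register, False otherwise.
        return len(configs) > 0


opt = MiniOptimizer()
is_not_empty = opt.bulk_register(
    configs=[{"vm_size": "small"}, {"vm_size": "large"}],
    scores=[{"latency": 12.5}, None],
    status=[Status.SUCCEEDED, Status.FAILED],
)
```

Note that the failed trial contributes no observation, but the method still reports that data was provided.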

abstract get_best_observation() → Tuple[Dict[str, float], mlos_bench.tunables.tunable_groups.TunableGroups] | Tuple[None, None][source]

Get the best observation so far.

Returns:

(value, tunables) – The best value and the corresponding configuration. (None, None) if no successful observation has been registered yet.

Return type:

Tuple[Dict[str, float], TunableGroups] | Tuple[None, None]

not_converged() → bool[source]

Return True if not converged, False otherwise.

Base implementation just checks the iteration count.

Return type:

bool

abstract register(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, status: mlos_bench.environments.status.Status, score: Dict[str, mlos_bench.tunables.tunable.TunableValue] | None = None) → Dict[str, float] | None[source]

Register the observation for the given configuration.

Parameters:
  • tunables (TunableGroups) – The configuration that has been benchmarked. Usually it’s the same config that the .suggest() method returned.

  • status (Status) – Final status of the experiment (e.g., SUCCEEDED or FAILED).

  • score (Optional[Dict[str, TunableValue]]) – A dict with the final benchmark results. None if the experiment was not successful.

Returns:

value – Benchmark scores extracted (and possibly transformed) from the raw results; these are the values being MINIMIZED.

Return type:

Optional[Dict[str, float]]

suggest() → mlos_bench.tunables.tunable_groups.TunableGroups[source]

Generate the next suggestion. Base class’ implementation increments the iteration count and returns the current values of the tunables.

Returns:

tunables – The next configuration to benchmark. These are the same tunables we pass to the constructor, but with the values set to the next suggestion.

Return type:

TunableGroups
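The methods above combine into the usual suggest/register loop. The following self-contained sketch mimics that loop with a made-up `RandomSketchOptimizer` and a stand-in benchmark function; the real classes live in mlos_bench/mlos_core and operate on `TunableGroups` rather than plain dicts:

```python
# Schematic optimization loop over the Optimizer interface.
# RandomSketchOptimizer, run_benchmark, and the config space are
# illustrative assumptions, not mlos_bench code.
import random
from typing import Dict, Optional, Tuple


class RandomSketchOptimizer:
    def __init__(self, max_suggestions: int, seed: int = 42) -> None:
        self._max_suggestions = max_suggestions
        self._iter = 0
        self._rng = random.Random(seed)
        self._best: Optional[Tuple[Dict[str, float], dict]] = None

    def not_converged(self) -> bool:
        # Base implementation just checks the iteration count.
        return self._iter < self._max_suggestions

    def suggest(self) -> dict:
        # Increment the iteration count and propose the next config.
        self._iter += 1
        return {"vm_size": self._rng.choice(["small", "medium", "large"])}

    def register(self, tunables: dict, score: Dict[str, float]) -> None:
        # Keep the observation with the lowest (minimized) score.
        if self._best is None or score["latency"] < self._best[0]["latency"]:
            self._best = (score, tunables)

    def get_best_observation(self):
        # (None, None) if no observation has been registered yet.
        return self._best if self._best is not None else (None, None)


def run_benchmark(config: dict) -> Dict[str, float]:
    # Stand-in for an actual benchmark run.
    cost = {"small": 30.0, "medium": 20.0, "large": 10.0}
    return {"latency": cost[config["vm_size"]]}


opt = RandomSketchOptimizer(max_suggestions=10)
while opt.not_converged():
    tunables = opt.suggest()
    opt.register(tunables, run_benchmark(tunables))
best_score, best_config = opt.get_best_observation()
```

The loop stops once `not_converged()` turns False, after which `get_best_observation()` yields the lowest registered score and its configuration.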

BASE_SUPPORTED_CONFIG_PROPS[source]

property config_space: ConfigSpace.ConfigurationSpace[source]

Get the tunable parameters of the optimizer as a ConfigurationSpace.

Returns:

The ConfigSpace representation of the tunable parameters.

Return type:

ConfigSpace.ConfigurationSpace

property current_iteration: int[source]

The current number of iterations (suggestions) registered.

Note: this may or may not be the same as the number of configurations. See Also: Scheduler.trial_config_repeat_count and Scheduler.max_trials.

Return type:

int

experiment_id = ''[source]

property max_suggestions: int[source]

The maximum number of iterations (suggestions) to run.

Note: this may or may not be the same as the number of configurations. See Also: Scheduler.trial_config_repeat_count and Scheduler.max_trials.

Return type:

int

property name: str[source]

The name of the optimizer.

We save this information in mlos_bench storage to track the source of each configuration.

Return type:

str

property seed: int[source]

The random seed for the optimizer.

Return type:

int

property start_with_defaults: bool[source]

Return True if the optimizer should start with the default values.

Note: This parameter is mutable and will be reset to False after the defaults are first suggested.

Return type:

bool

property supports_preload: bool[source]

Return True if the optimizer supports pre-loading the data from previous experiments.

Return type:

bool

property targets: Dict[str, Literal['min', 'max']][source]

Returns a dictionary of optimization targets and their directions ('min' or 'max').

Return type:

Dict[str, Literal['min', 'max']]
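Since register() returns values being MINIMIZED, a common way to honor a `targets` dict like this is to negate maximization metrics before handing scores to the optimizer. A sketch of that transformation (the `to_minimized_scores` helper is hypothetical, not part of mlos_bench):

```python
# Illustrative sketch: convert raw benchmark results into scores to
# MINIMIZE, given a `targets` dict of optimization directions.
from typing import Dict, Literal


def to_minimized_scores(
    targets: Dict[str, Literal["min", "max"]],
    results: Dict[str, float],
) -> Dict[str, float]:
    # Negate maximization targets so every score is minimized.
    return {
        metric: results[metric] if direction == "min" else -results[metric]
        for metric, direction in targets.items()
    }


targets: Dict[str, Literal["min", "max"]] = {
    "latency": "min",
    "throughput": "max",
}
scores = to_minimized_scores(targets, {"latency": 12.5, "throughput": 900.0})
```

Flipping the sign keeps the optimizer's interface uniform: it always minimizes, regardless of each metric's declared direction.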

property tunable_params: mlos_bench.tunables.tunable_groups.TunableGroups[source]

Get the tunable parameters of the optimizer as TunableGroups.

Returns:

tunables – A collection of covariant groups of tunable parameters.

Return type:

TunableGroups