mlos_bench.optimizers.mlos_core_optimizer

A wrapper for mlos_core optimizers for mlos_bench.

Classes

MlosCoreOptimizer

A wrapper class for the mlos_core optimizers.

Module Contents

class mlos_bench.optimizers.mlos_core_optimizer.MlosCoreOptimizer(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, config: dict, global_config: dict | None = None, service: mlos_bench.services.base_service.Service | None = None)[source]

Bases: mlos_bench.optimizers.base_optimizer.Optimizer

A wrapper class for the mlos_core optimizers.

Create a new optimizer for the given configuration space defined by the tunables.

Parameters:
  • tunables (TunableGroups) – The tunables to optimize.

  • config (dict) – Free-format key/value pairs of configuration parameters to pass to the optimizer.

  • global_config (Optional[dict])

  • service (Optional[Service])
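The config argument is a free-format dict. The sketch below is a hypothetical example of what such a dict might contain; the key names shown ("optimizer_type", "max_suggestions", "seed") are assumptions for illustration, not confirmed by this reference page.

```python
# Hypothetical config dict for MlosCoreOptimizer; the keys shown are
# assumptions about the free-format config, not documented on this page.
config = {
    "optimizer_type": "SMAC",  # which mlos_core optimizer to wrap
    "max_suggestions": 100,    # stop after this many .suggest() calls
    "seed": 42,                # RNG seed for reproducibility
}
```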

__exit__(ex_type: Type[BaseException] | None, ex_val: BaseException | None, ex_tb: types.TracebackType | None) Literal[False][source]

Exit the context of the optimizer.

Parameters:
  • ex_type (Optional[Type[BaseException]]) – Exception type, if an exception was raised inside the context.

  • ex_val (Optional[BaseException]) – Exception value, if an exception was raised inside the context.

  • ex_tb (Optional[types.TracebackType]) – Exception traceback, if an exception was raised inside the context.

Return type:

Literal[False] – Always False, so exceptions raised inside the context are never suppressed.

bulk_register(configs: Sequence[dict], scores: Sequence[Dict[str, mlos_bench.tunables.tunable.TunableValue] | None], status: Sequence[mlos_bench.environments.status.Status] | None = None) bool[source]

Pre-load the optimizer with the bulk data from previous experiments.

Parameters:
  • configs (Sequence[dict]) – Records of tunable values from other experiments.

  • scores (Sequence[Optional[Dict[str, TunableValue]]]) – Benchmark results from experiments that correspond to configs.

  • status (Optional[Sequence[Status]]) – Status of the experiments that correspond to configs.

Returns:

is_not_empty – True if there is data to register, False otherwise.

Return type:

bool
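The three sequences passed to bulk_register are parallel: entry i of each describes the same prior experiment. A hypothetical sketch of the expected shapes, with the tunable name ("cache_mb") and metric name ("latency") invented for illustration:

```python
# Hypothetical data shapes for bulk_register(); tunable and metric
# names are invented, and plain strings stand in for Status members.
configs = [
    {"cache_mb": 64},    # tunable values from a prior experiment
    {"cache_mb": 256},
]
scores = [
    {"latency": 1.56},   # benchmark result corresponding to configs[0]
    None,                # configs[1] failed, so it has no score
]
statuses = ["SUCCEEDED", "FAILED"]  # stand-ins for Status enum members

# All three sequences must line up element-by-element.
assert len(configs) == len(scores) == len(statuses)
```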

get_best_observation() Tuple[Dict[str, float], mlos_bench.tunables.tunable_groups.TunableGroups] | Tuple[None, None][source]

Get the best observation so far.

Returns:

(value, tunables) – The best value and the corresponding configuration. (None, None) if no successful observation has been registered yet.

Return type:

Tuple[Dict[str, float], TunableGroups] or Tuple[None, None]

register(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, status: mlos_bench.environments.status.Status, score: Dict[str, mlos_bench.tunables.tunable.TunableValue] | None = None) Dict[str, float] | None[source]

Register the observation for the given configuration.

Parameters:
  • tunables (TunableGroups) – The configuration that has been benchmarked. Usually it’s the same config that the .suggest() method returned.

  • status (Status) – Final status of the experiment (e.g., SUCCEEDED or FAILED).

  • score (Optional[Dict[str, TunableValue]]) – A dict with the final benchmark results. None if the experiment was not successful.

Returns:

value – Benchmark scores that are being MINIMIZED, extracted (and possibly transformed) from the results dataframe.

Return type:

Optional[Dict[str, float]]

suggest() mlos_bench.tunables.tunable_groups.TunableGroups[source]

Generate the next suggestion. (The base class' default implementation merely increments the iteration count and returns the current values of the tunables; this wrapper delegates to the underlying mlos_core optimizer.)

Returns:

tunables – The next configuration to benchmark. These are the same tunables we pass to the constructor, but with the values set to the next suggestion.

Return type:

TunableGroups

property name: str[source]

The name of the optimizer.

We save this information in mlos_bench storage to track the source of each configuration.

Return type:

str