mlos_bench.optimizers.track_best_optimizer
Optimizer base class for mlos_bench that keeps track of the best score and configuration.
Classes
TrackBestOptimizer | Base Optimizer class that keeps track of the best score and configuration.
Module Contents
- class mlos_bench.optimizers.track_best_optimizer.TrackBestOptimizer(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, config: dict, global_config: dict | None = None, service: mlos_bench.services.base_service.Service | None = None)[source]
Bases:
mlos_bench.optimizers.base_optimizer.Optimizer
Base Optimizer class that keeps track of the best score and configuration.
Create a new optimizer for the given configuration space defined by the tunables.
- Parameters:
tunables (TunableGroups) – The tunables to optimize.
config (dict) – Free-format key/value pairs of configuration parameters to pass to the optimizer.
global_config (Optional[dict])
service (Optional[Service])
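TrackBestOptimizer leaves the suggest() step to its subclasses, so it is normally used through a concrete optimizer such as MockOptimizer. The sketch below is a minimal, illustrative construction; the tunable-group layout and the "seed" config key are assumptions about the free-format config, not a fixed schema.

```python
from mlos_bench.optimizers.mock_optimizer import MockOptimizer
from mlos_bench.tunables.tunable_groups import TunableGroups

# One small covariant group with a single integer tunable (illustrative layout).
tunables = TunableGroups({
    "demo_group": {
        "cost": 1,
        "params": {
            "batch_size": {"type": "int", "range": [1, 64], "default": 16},
        },
    },
})

# config is free-format; "seed" is shown only as an example key --
# which keys are actually honored depends on the concrete optimizer.
opt = MockOptimizer(tunables=tunables, config={"seed": 42})
```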
- get_best_observation() → Tuple[Dict[str, float], mlos_bench.tunables.tunable_groups.TunableGroups] | Tuple[None, None] [source]
Get the best observation so far.
- Returns:
(value, tunables) – The best value and the corresponding configuration. (None, None) if no successful observation has been registered yet.
- Return type:
Tuple[Dict[str, float], TunableGroups] | Tuple[None, None]
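A short usage sketch (assuming the opt instance constructed above); the (None, None) case must be handled explicitly when no successful observation has been registered yet.

```python
best_score, best_config = opt.get_best_observation()
if best_score is None:
    print("No successful observation registered yet.")
else:
    # best_score maps metric names to the (minimized) values;
    # best_config is the TunableGroups snapshot that produced them.
    print(best_score, best_config)
```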
- register(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, status: mlos_bench.environments.status.Status, score: Dict[str, mlos_bench.tunables.tunable.TunableValue] | None = None) → Dict[str, float] | None [source]
Register the observation for the given configuration.
- Parameters:
tunables (TunableGroups) – The configuration that has been benchmarked. Usually it’s the same config that the .suggest() method returned.
status (Status) – Final status of the experiment (e.g., SUCCEEDED or FAILED).
score (Optional[Dict[str, TunableValue]]) – A dict with the final benchmark results. None if the experiment was not successful.
- Returns:
value – Benchmark scores extracted (and possibly transformed) from the raw results; these are the values being MINIMIZED. None if the experiment was not successful.
- Return type:
Dict[str, float] | None
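A hedged sketch of the suggest/register loop (opt as constructed above; run_trial is a hypothetical benchmark call that returns a dict of results, or None on failure):

```python
from mlos_bench.environments.status import Status

for _ in range(5):
    suggestion = opt.suggest()        # TunableGroups for the next trial
    results = run_trial(suggestion)   # hypothetical: e.g. {"score": 0.85} or None
    if results is not None:
        opt.register(suggestion, Status.SUCCEEDED, results)
    else:
        # Unsuccessful experiments are still registered, but without a score.
        opt.register(suggestion, Status.FAILED)
```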