mlos_bench.schedulers.base_scheduler
Base class for the optimization loop scheduling policies.
Classes
Scheduler – Base class for the optimization loop scheduling policies.
Module Contents
- class mlos_bench.schedulers.base_scheduler.Scheduler(*, config: Dict[str, Any], global_config: Dict[str, Any], environment: mlos_bench.environments.base_environment.Environment, optimizer: mlos_bench.optimizers.base_optimizer.Optimizer, storage: mlos_bench.storage.base_storage.Storage, root_env_config: str)[source]
Base class for the optimization loop scheduling policies.
Create a new instance of the scheduler. The constructor of this and the derived classes is called by the persistence service after reading the class JSON configuration. Other objects like the Environment and Optimizer are provided by the Launcher.
- Parameters:
config (dict) – The configuration for the scheduler.
global_config (dict) – The global configuration for the experiment.
environment (Environment) – The environment to benchmark/optimize.
optimizer (Optimizer) – The optimizer to use.
storage (Storage) – The storage to use.
root_env_config (str) – Path to the root environment configuration.
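A minimal construction sketch, assuming a concrete subclass such as SyncScheduler and pre-built Environment, Optimizer, and Storage objects (normally assembled by the Launcher from JSON configs); the config keys and path shown are illustrative assumptions:

```python
from mlos_bench.schedulers.sync_scheduler import SyncScheduler

# `environment`, `optimizer`, and `storage` are assumed to have been
# constructed already (the Launcher normally does this from JSON configs).
scheduler = SyncScheduler(
    config={"max_trials": 10},                        # illustrative config key
    global_config={"experiment_id": "MyExperiment"},  # illustrative global key
    environment=environment,
    optimizer=optimizer,
    storage=storage,
    root_env_config="environments/root/root.jsonc",   # illustrative path
)
```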
- __exit__(ex_type: Type[BaseException] | None, ex_val: BaseException | None, ex_tb: types.TracebackType | None) Literal[False] [source]
Exit the context of the scheduler.
- Parameters:
ex_type (Optional[Type[BaseException]])
ex_val (Optional[BaseException])
ex_tb (Optional[types.TracebackType])
- Return type:
Literal[False]
- __repr__() str [source]
Produce a human-readable version of the Scheduler (mostly for logging).
- Returns:
string – A human-readable version of the Scheduler.
- Return type:
str
- get_best_observation() Tuple[Dict[str, float] | None, mlos_bench.tunables.tunable_groups.TunableGroups | None] [source]
Get the best observation from the optimizer.
- Return type:
Tuple[Optional[Dict[str, float]], Optional[mlos_bench.tunables.tunable_groups.TunableGroups]]
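A short usage sketch, assuming the optimization loop has already completed; either element of the returned pair may be None if no successful trials were recorded:

```python
best_score, best_config = scheduler.get_best_observation()
if best_score is None:
    print("No successful trials recorded.")
else:
    print(f"Best scores: {best_score}")
    print(f"Best configuration:\n{best_config}")
```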
- load_config(config_id: int) mlos_bench.tunables.tunable_groups.TunableGroups [source]
Load the existing tunable configuration from the storage.
- Parameters:
config_id (int)
- Return type:
mlos_bench.tunables.tunable_groups.TunableGroups
- not_done() bool [source]
Check the stopping conditions.
By default, stop when the optimizer converges or the maximum number of trials is reached.
- Return type:
bool
- abstract run_trial(trial: mlos_bench.storage.base_storage.Storage.Trial) None [source]
Set up and run a single trial.
Save the results in the storage.
- Parameters:
trial (Storage.Trial) – The trial to set up and run.
- Return type:
None
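A hypothetical sketch of a derived scheduler implementing run_trial(). The attributes self.environment and self.global_config, the calls on the trial (config(), update()) and the environment (setup(), run()), and the Status import path are all assumptions about the surrounding mlos_bench APIs, not a copy of any shipped scheduler:

```python
from datetime import datetime, timezone

from mlos_bench.environments.status import Status   # assumed location of Status
from mlos_bench.schedulers.base_scheduler import Scheduler
from mlos_bench.storage.base_storage import Storage


class OneShotScheduler(Scheduler):
    """Illustrative scheduler that runs each scheduled trial exactly once."""

    def run_trial(self, trial: Storage.Trial) -> None:
        # Set up the environment for this trial's tunable values
        # (setup()/config()/update() signatures are assumptions).
        if not self.environment.setup(trial.tunables, trial.config(self.global_config)):
            trial.update(Status.FAILED, datetime.now(timezone.utc))
            return
        (status, timestamp, results) = self.environment.run()
        trial.update(status, timestamp, results)  # save the results in storage
```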
- schedule_trial(tunables: mlos_bench.tunables.tunable_groups.TunableGroups) None [source]
Add a configuration to the queue of trials.
- Parameters:
tunables (TunableGroups) – The tunable configuration to add to the queue.
- Return type:
None
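A minimal sketch of the driving loop inside a derived scheduler, assuming the Optimizer passed to the constructor is available as self.optimizer and that Optimizer.suggest() returns the next TunableGroups to try:

```python
# Inside the main loop of a derived scheduler (e.g. in its start() method).
# `self.optimizer` and the suggest() call are assumptions about the
# surrounding mlos_bench APIs.
while self.not_done():
    tunables = self.optimizer.suggest()  # next configuration to evaluate
    self.schedule_trial(tunables)        # queue it for execution
```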
- teardown() None [source]
Tear down the environment.
Call it after .start() completes, while still inside the scheduler context.
- Return type:
None
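Tying the lifecycle together, a hedged usage sketch of the pattern the docstring describes: enter the scheduler context, run the loop via .start(), then call teardown() before leaving the context (the __enter__ counterpart to __exit__ above is assumed):

```python
# `scheduler` is a concrete Scheduler instance (see the construction sketch above).
with scheduler:           # enter the scheduler context
    scheduler.start()     # run the optimization loop
    scheduler.teardown()  # tear down the environment after the loop completes
```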
- experiment: mlos_bench.storage.base_storage.Storage.Experiment | None = None[source]
- property max_trials: int[source]
Gets the maximum number of trials to run for a given experiment, or -1 for no limit.
- Return type:
int
- property ran_trials: List[mlos_bench.storage.base_storage.Storage.Trial][source]
Get the list of trials that were run.
- Return type:
List[mlos_bench.storage.base_storage.Storage.Trial]
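A small sketch of inspecting the trials after a run; the trial_id and tunables attributes read from each Storage.Trial are assumptions about that interface:

```python
for trial in scheduler.ran_trials:
    # `trial_id` and `tunables` are assumed Storage.Trial properties.
    print(f"Trial {trial.trial_id}: {trial.tunables}")
```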