mlos_bench.schedulers.base_scheduler

Base class for the optimization loop scheduling policies.

Classes

Scheduler

Base class for the optimization loop scheduling policies.

Module Contents

class mlos_bench.schedulers.base_scheduler.Scheduler(*, config: Dict[str, Any], global_config: Dict[str, Any], environment: mlos_bench.environments.base_environment.Environment, optimizer: mlos_bench.optimizers.base_optimizer.Optimizer, storage: mlos_bench.storage.base_storage.Storage, root_env_config: str)[source]

Base class for the optimization loop scheduling policies.

Create a new instance of the scheduler. The constructor of this class and its derived classes is called by the persistence service after reading the class JSON configuration. Other objects, like the Environment and Optimizer, are provided by the Launcher.

Parameters:
  • config (dict) – The configuration for the scheduler.

  • global_config (dict) – The global configuration for the experiment.

  • environment (Environment) – The environment to benchmark/optimize.

  • optimizer (Optimizer) – The optimizer to use.

  • storage (Storage) – The storage to use.

  • root_env_config (str) – Path to the root environment configuration.
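A Scheduler is used as a context manager: enter the context, run the loop, tear down, and exit. The sketch below illustrates that lifecycle with a minimal stand-in class rather than the real Scheduler (which needs an Environment, Optimizer, and Storage to construct); the `ToyScheduler` class and its `_in_context` flag are illustrative, not part of mlos_bench.

```python
from types import TracebackType
from typing import Literal, Optional, Type


class ToyScheduler:
    """Minimal stand-in mirroring the Scheduler lifecycle documented here."""

    def __init__(self, max_trials: int = 3) -> None:
        self.max_trials = max_trials
        self.trial_count = 0
        self._in_context = False

    def __enter__(self) -> "ToyScheduler":
        # Enter the scheduler's context (e.g., open storage connections).
        self._in_context = True
        return self

    def __exit__(self,
                 ex_type: Optional[Type[BaseException]],
                 ex_val: Optional[BaseException],
                 ex_tb: Optional[TracebackType]) -> Literal[False]:
        # Returning False means any exception propagates to the caller,
        # matching the documented Literal[False] return type.
        self._in_context = False
        return False

    def not_done(self) -> bool:
        return self.trial_count < self.max_trials

    def start(self) -> None:
        while self.not_done():
            self.trial_count += 1  # a real scheduler runs a trial here

    def teardown(self) -> None:
        # Called after .start() completes, still inside the context.
        assert self._in_context


with ToyScheduler(max_trials=3) as scheduler:
    scheduler.start()
    scheduler.teardown()

print(scheduler.trial_count)  # → 3
```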

__enter__() Scheduler[source]

Enter the scheduler’s context.

Return type:

Scheduler

__exit__(ex_type: Type[BaseException] | None, ex_val: BaseException | None, ex_tb: types.TracebackType | None) Literal[False][source]

Exit the context of the scheduler.

Parameters:
  • ex_type (Type[BaseException] | None) – Exception type, if one was raised in the context.

  • ex_val (BaseException | None) – Exception value, if any.

  • ex_tb (types.TracebackType | None) – Exception traceback, if any.

Return type:

Literal[False]

__repr__() str[source]

Produce a human-readable version of the Scheduler (mostly for logging).

Returns:

string – A human-readable version of the Scheduler.

Return type:

str

get_best_observation() Tuple[Dict[str, float] | None, mlos_bench.tunables.tunable_groups.TunableGroups | None][source]

Get the best observation from the optimizer.

Return type:

Tuple[Optional[Dict[str, float]], Optional[mlos_bench.tunables.tunable_groups.TunableGroups]]

load_config(config_id: int) mlos_bench.tunables.tunable_groups.TunableGroups[source]

Load the existing tunable configuration from the storage.

Parameters:

config_id (int)

Return type:

mlos_bench.tunables.tunable_groups.TunableGroups

not_done() bool[source]

Check the stopping conditions.

By default, stop when the optimizer converges or the maximum number of trials is reached.

Return type:

bool
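The documented stopping rule can be sketched as a pure function. This is a hedged illustration of the default behavior only (the helper name and the `converged` flag are hypothetical); it also reflects the `max_trials` convention stated below, where -1 means no trial limit.

```python
def not_done_sketch(trial_count: int, max_trials: int, converged: bool) -> bool:
    """Illustrative stopping condition: keep going until the optimizer
    converges or the trial limit is reached (max_trials == -1 => no limit)."""
    if converged:
        return False
    return max_trials < 0 or trial_count < max_trials


print(not_done_sketch(trial_count=5, max_trials=-1, converged=False))  # → True
print(not_done_sketch(trial_count=3, max_trials=3, converged=False))  # → False
```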

abstract run_trial(trial: mlos_bench.storage.base_storage.Storage.Trial) None[source]

Set up and run a single trial.

Save the results in the storage.

Parameters:

trial (mlos_bench.storage.base_storage.Storage.Trial)

Return type:

None
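Since run_trial() and start() are abstract, every concrete scheduler supplies both. The sketch below shows the shape such a subclass takes; `SchedulerSketch`, `SyncScheduler`, and the string-valued trials are illustrative stand-ins, not mlos_bench classes (a real run_trial() receives a Storage.Trial and persists its results).

```python
import abc


class SchedulerSketch(abc.ABC):
    """Abstract shape of the Scheduler's trial-running contract."""

    def __init__(self) -> None:
        self.results: list[str] = []
        self._queue: list[str] = []

    @abc.abstractmethod
    def run_trial(self, trial: str) -> None:
        """Set up and run a single trial; save the results in storage."""

    def schedule_trial(self, trial: str) -> None:
        # Add a configuration to the queue of trials.
        self._queue.append(trial)

    def start(self) -> None:
        # A simple synchronous scheduler drains its queue in order.
        while self._queue:
            self.run_trial(self._queue.pop(0))


class SyncScheduler(SchedulerSketch):
    def run_trial(self, trial: str) -> None:
        # A real implementation would run the environment and store results.
        self.results.append(trial)


sched = SyncScheduler()
sched.schedule_trial("trial-1")
sched.schedule_trial("trial-2")
sched.start()
print(sched.results)  # → ['trial-1', 'trial-2']
```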

schedule_trial(tunables: mlos_bench.tunables.tunable_groups.TunableGroups) None[source]

Add a configuration to the queue of trials.

Parameters:

tunables (mlos_bench.tunables.tunable_groups.TunableGroups)

Return type:

None

abstract start() None[source]

Start the optimization loop.

Return type:

None

teardown() None[source]

Tear down the environment.

Call it after .start() completes, while still inside the scheduler's context.

Return type:

None

environment[source]
experiment: mlos_bench.storage.base_storage.Storage.Experiment | None = None[source]
global_config[source]
property max_trials: int[source]

Gets the maximum number of trials to run for a given experiment, or -1 for no limit.

Return type:

int

optimizer[source]
property ran_trials: List[mlos_bench.storage.base_storage.Storage.Trial][source]

Get the list of trials that were run.

Return type:

List[mlos_bench.storage.base_storage.Storage.Trial]

storage[source]
property trial_config_repeat_count: int[source]

Gets the number of trials to run for a given config.

Return type:

int
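With trial_config_repeat_count, each scheduled configuration fans out into several trials. A hypothetical helper sketching that expansion (the function name and the (config_id, repeat) tuple shape are assumptions for illustration, not the library's internal representation):

```python
def expand_repeats(config_id: int, repeat_count: int) -> list[tuple[int, int]]:
    """Expand one config into repeat_count trials, tagged with a
    1-based repeat index (illustrative only)."""
    return [(config_id, i) for i in range(1, repeat_count + 1)]


print(expand_repeats(config_id=7, repeat_count=3))  # → [(7, 1), (7, 2), (7, 3)]
```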

property trial_count: int[source]

Gets the current number of trials run for the experiment.

Return type:

int