mlos_bench.optimizers.one_shot_optimizer

No-op optimizer for mlos_bench that proposes a single configuration.

Explicit configs (partial or full) are possible using configuration files.

Examples

Load tunables from a JSON string. Note: normally these would be automatically loaded from the Environment’s include_tunables config parameter.

>>> import json5 as json
>>> from mlos_bench.environments.status import Status
>>> from mlos_bench.services.config_persistence import ConfigPersistenceService
>>> service = ConfigPersistenceService()
>>> json_config = '''
... {
...   "group_1": {
...     "cost": 1,
...     "params": {
...       "colors": {
...         "type": "categorical",
...         "values": ["red", "blue", "green"],
...         "default": "green",
...       },
...       "int_param": {
...         "type": "int",
...         "range": [1, 3],
...         "default": 2,
...       },
...       "float_param": {
...         "type": "float",
...         "range": [0, 1],
...         "default": 0.5,
...         // Quantize the range into 3 bins
...         "quantization_bins": 3,
...       }
...     }
...   }
... }
... '''
>>> tunables = service.load_tunables(jsons=[json_config])
>>> # Check the defaults:
>>> tunables.get_param_values()
{'colors': 'green', 'int_param': 2, 'float_param': 0.5}
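
Note that float_param is quantized into 3 bins over the range [0, 1], so the search space only contains the values 0, 0.5, and 1 for it. Also, since nothing has been assigned yet, the tunables still report that they hold their defaults:

>>> assert tunables.is_defaults()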

Load a JSON config of some tunable values to explicitly test. Normally these would be provided by the mlos_bench.run CLI’s --tunable-values option.

>>> tunable_values_json = '''
... {
...   "colors": "red",
...   "int_param": 1,
...   "float_param": 0.0
... }
... '''
>>> tunable_values = json.loads(tunable_values_json)
>>> tunables.assign(tunable_values).get_param_values()
{'colors': 'red', 'int_param': 1, 'float_param': 0.0}
>>> assert not tunables.is_defaults()
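
As the module summary notes, partial configs are also possible: assign() merges the given values into the current ones and leaves the rest untouched. A small sketch (re-assigning the full set afterwards so the remainder of the example is unaffected):

>>> tunables.assign({"colors": "blue"}).get_param_values()
{'colors': 'blue', 'int_param': 1, 'float_param': 0.0}
>>> tunables.assign(tunable_values).get_param_values()
{'colors': 'red', 'int_param': 1, 'float_param': 0.0}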

Now create a OneShotOptimizer from a JSON config string.

>>> optimizer_json_config = '''
... {
...   "class": "mlos_bench.optimizers.one_shot_optimizer.OneShotOptimizer",
... }
... '''
>>> config = json.loads(optimizer_json_config)
>>> optimizer = service.build_optimizer(
...   tunables=tunables,
...   service=service,
...   config=config,
... )
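
A quick sanity check that the "class" entry in the config resolved to the expected optimizer type:

>>> type(optimizer).__name__
'OneShotOptimizer'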

Run the optimizer.

>>> # Note that it will only run for a single iteration and return the values we set.
>>> while optimizer.not_converged():
...     suggestion = optimizer.suggest()
...     print(suggestion.get_param_values())
{'colors': 'red', 'int_param': 1, 'float_param': 0.0}
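
Optionally, a trial result can be registered back with the optimizer; this is what the Status import above is for. A minimal sketch, assuming a single "score" metric (the default optimization target) and that get_best_observation() returns a (score, config) pair:

>>> _ = optimizer.register(suggestion, Status.SUCCEEDED, {"score": 42.0})
>>> score, best_config = optimizer.get_best_observation()
>>> best_config.get_param_values()
{'colors': 'red', 'int_param': 1, 'float_param': 0.0}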

Classes

OneShotOptimizer

No-op optimizer that proposes a single configuration and returns.

Module Contents

class mlos_bench.optimizers.one_shot_optimizer.OneShotOptimizer(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, config: dict, global_config: dict | None = None, service: mlos_bench.services.base_service.Service | None = None)

Bases: mlos_bench.optimizers.mock_optimizer.MockOptimizer

No-op optimizer that proposes a single configuration and returns.

Explicit configs (partial or full) are possible using configuration files.

Create a new optimizer for the given configuration space defined by the tunables.

Parameters:
  • tunables (TunableGroups) – The tunables to optimize.

  • config (dict) – Free-format key/value pairs of configuration parameters to pass to the optimizer.

  • global_config (dict | None) – Global configuration parameters (optional).

  • service (Service | None) – An optional parent service providing helper methods (e.g., for loading config files).
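
For reference, the class can also be instantiated directly instead of going through ConfigPersistenceService.build_optimizer(); a minimal sketch, reusing the tunables and service objects from the Examples above, which should again suggest the currently assigned values:

>>> from mlos_bench.optimizers.one_shot_optimizer import OneShotOptimizer
>>> direct_optimizer = OneShotOptimizer(tunables, config={}, service=service)
>>> direct_optimizer.suggest().get_param_values()
{'colors': 'red', 'int_param': 1, 'float_param': 0.0}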

suggest() → mlos_bench.tunables.tunable_groups.TunableGroups

Always produce the same (initial) suggestion.

Return type:

mlos_bench.tunables.tunable_groups.TunableGroups
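
Since the suggestion never changes, repeated calls return the same configuration (continuing the session from the Examples above):

>>> assert optimizer.suggest().get_param_values() == optimizer.suggest().get_param_values()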

property supports_preload: bool

Return True if the optimizer supports pre-loading the data from previous experiments.

Return type:

bool
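
OneShotOptimizer overrides this to return False, since a single fixed suggestion has no prior experiment data to pre-load:

>>> optimizer.supports_preload
False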