mlos_bench.optimizers.mlos_core_optimizer

A wrapper for mlos_core.optimizers for mlos_bench.

Config

mlos_bench.optimizers has an overview of the configuration options for
the MlosCoreOptimizer.
See also

mlos_bench.optimizers
    Another working example of an MlosCoreOptimizer.
mlos_core.optimizers
    Documentation on the underlying mlos_core Optimizers.
mlos_core.spaces.adapters
    Documentation on the underlying mlos_core SpaceAdapters.
Examples

Load tunables from a JSON string.
Note: normally these would be automatically loaded from the Environment's
include_tunables config parameter.
>>> import json5 as json
>>> import mlos_core.optimizers
>>> from mlos_bench.environments.status import Status
>>> from mlos_bench.services.config_persistence import ConfigPersistenceService
>>> service = ConfigPersistenceService()
>>> json_config = '''
... {
... "group_1": {
... "cost": 1,
... "params": {
... "flags": {
... "type": "categorical",
... "values": ["on", "off", "auto"],
... "default": "auto",
... },
... "int_param": {
... "type": "int",
... "range": [1, 100],
... "default": 10,
... },
... "float_param": {
... "type": "float",
... "range": [0, 100],
... "default": 50.0,
... }
... }
... }
... }
... '''
>>> tunables = service.load_tunables(jsons=[json_config])
>>> # Here's the defaults:
>>> tunables.get_param_values()
{'flags': 'auto', 'int_param': 10, 'float_param': 50.0}
When using the MlosCoreOptimizer, we can also specify some additional
properties, for instance the optimizer_type, which is one of the mlos_core
OptimizerType enum values:
>>> import mlos_core.optimizers
>>> print([member.name for member in mlos_core.optimizers.OptimizerType])
['RANDOM', 'FLAML', 'SMAC']
These may also include their own configuration options, which can be specified
as additional key-value pairs in the config section, where each key-value pair
corresponds to an argument to the respective OptimizerType's constructor.
See mlos_core.optimizers.OptimizerFactory.create() for more details.
Other Optimizers may also have their own configuration options. See each
class' documentation for details.
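As a rough illustration of that flow (a simplified sketch, not the actual mlos_bench implementation; the key set and function name below are hypothetical), the config section can be thought of as splitting into the keys mlos_bench consumes itself and pass-through keyword arguments for the underlying optimizer's constructor:

```python
# Illustrative sketch only: separate the keys mlos_bench handles itself from
# the extra key-value pairs forwarded to the underlying mlos_core optimizer.
BENCH_KEYS = {
    "max_suggestions", "optimization_targets", "start_with_defaults",
    "seed", "optimizer_type", "space_adapter_type", "space_adapter_config",
}

def split_optimizer_config(config: dict) -> tuple[dict, dict]:
    """Split a config dict into mlos_bench settings and optimizer kwargs."""
    bench_settings = {k: v for k, v in config.items() if k in BENCH_KEYS}
    optimizer_kwargs = {k: v for k, v in config.items() if k not in BENCH_KEYS}
    return bench_settings, optimizer_kwargs

bench_settings, optimizer_kwargs = split_optimizer_config(
    {"seed": 42, "optimizer_type": "SMAC", "n_random_init": 25}
)
print(optimizer_kwargs)  # {'n_random_init': 25}
```

Unrecognized keys (like n_random_init here) would end up as constructor arguments for the chosen optimizer, which is why they are optimizer-specific.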
When using the MlosCoreOptimizer, we can also specify an optional
space_adapter_type, which can sometimes help manipulate the configuration
space into something more manageable. It should be one of the following
SpaceAdapterType enum values:
>>> import mlos_core.spaces.adapters
>>> print([member.name for member in mlos_core.spaces.adapters.SpaceAdapterType])
['IDENTITY', 'LLAMATUNE']
These may also include their own configuration options, which can be specified
as additional key-value pairs in the optional space_adapter_config section,
where each key-value pair corresponds to an argument to the respective
SpaceAdapterType's constructor. See
mlos_core.spaces.adapters.SpaceAdapterFactory.create() for more details.
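To build some intuition for what LlamaTune-style space reduction does, here is a heavily simplified sketch (hypothetical names throughout; the real algorithm uses a more sophisticated randomized projection): each original parameter is tied to one of a small number of synthetic dimensions, and low-dimensional suggestions are broadcast back into the full space.

```python
import random

# Simplified, hypothetical sketch of low-dimensional space reduction.
random.seed(42)
original_params = ["flags", "int_param", "float_param"]
num_low_dims = 2  # analogous to the "num_low_dims" config option

# Tie each original parameter to one synthetic low dimension.
assignment = {p: random.randrange(num_low_dims) for p in original_params}

def project_up(low_dim_point: list) -> dict:
    """Expand a low-dimensional suggestion back into the original space."""
    return {p: low_dim_point[d] for p, d in assignment.items()}

full_config = project_up([0.25, 0.75])
# Every original parameter now carries a value drawn from one of the
# num_low_dims synthetic dimensions.
```

The optimizer then only has to search the small synthetic space, which can help when the original space has many parameters.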
Here's an example JSON config for an MlosCoreOptimizer.
>>> optimizer_json_config = '''
... {
... "class": "mlos_bench.optimizers.mlos_core_optimizer.MlosCoreOptimizer",
... "description": "MlosCoreOptimizer",
... "config": {
... "max_suggestions": 1000,
... "optimization_targets": {
... "throughput": "max",
... "cost": "min",
... },
... "start_with_defaults": true,
... "seed": 42,
... // Override the default optimizer type
... // Must be one of the mlos_core OptimizerType enum values.
... "optimizer_type": "SMAC",
... // Optionally provide some additional configuration options for the optimizer.
... // Note: these are optimizer-specific and may not be supported by all optimizers.
... "n_random_init": 25,
... "n_random_probability": 0.01,
... // Optionally override the default space adapter type
... // Must be one of the mlos_core SpaceAdapterType enum values.
... // LlamaTune is a method for automatically doing space reduction
... // from the original space.
... "space_adapter_type": "LLAMATUNE",
... "space_adapter_config": {
... // Note: these values are probably too low,
... // but it's just for demonstration.
... "num_low_dims": 2,
... "max_unique_values_per_param": 10,
... }
... }
... }
... '''
That config will typically be loaded via the --optimizer command-line
argument to the mlos_bench CLI.
However, for demonstration purposes, we can load it directly here:
>>> config = json.loads(optimizer_json_config)
>>> optimizer = service.build_optimizer(
... tunables=tunables,
... service=service,
... config=config,
... )
Internally the Scheduler will call the Optimizer’s methods to suggest configurations, like so:
>>> suggested_config_1 = optimizer.suggest()
>>> # Normally default values should be suggested first, per json config.
>>> # However, since LlamaTune is being employed here, the first suggestion may
>>> # be projected to a slightly different space.
>>> suggested_config_1.get_param_values()
{'flags': 'auto', 'int_param': 1, 'float_param': 55.5555555555556}
>>> # Get another suggestion.
>>> # Note that multiple suggestions can be pending prior to
>>> # registering their scores, supporting parallel trial execution.
>>> suggested_config_2 = optimizer.suggest()
>>> suggested_config_2.get_param_values()
{'flags': 'on', 'int_param': 78, 'float_param': 88.8888888888889}
>>> # Register some scores.
>>> # Note: Maximization problems track negative scores to produce a minimization problem.
>>> optimizer.register(suggested_config_1, Status.SUCCEEDED, {"throughput": 42, "cost": 19})
{'throughput': -42.0, 'cost': 19.0}
>>> optimizer.register(suggested_config_2, Status.SUCCEEDED, {"throughput": 7, "cost": 17.2})
{'throughput': -7.0, 'cost': 17.2}
>>> (best_score, best_config) = optimizer.get_best_observation()
>>> best_score
{'throughput': 42.0, 'cost': 19.0}
>>> assert best_config == suggested_config_1
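The sign-flip convention noted above (maximization targets are tracked as negative scores) can be sketched in plain Python. This is a simplified illustration, not the internal implementation:

```python
def to_minimization(scores: dict, targets: dict) -> dict:
    """Flip the sign of metrics being maximized so everything is minimized."""
    return {
        metric: -float(scores[metric]) if direction == "max" else float(scores[metric])
        for metric, direction in targets.items()
    }

targets = {"throughput": "max", "cost": "min"}
print(to_minimization({"throughput": 42, "cost": 19}, targets))
# {'throughput': -42.0, 'cost': 19.0}
```

This matches the values returned by register() in the example above, and explains why a higher throughput yields a more negative (i.e., better) tracked score.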
Classes

MlosCoreOptimizer
    A wrapper class for the mlos_core.optimizers.

Module Contents
- class mlos_bench.optimizers.mlos_core_optimizer.MlosCoreOptimizer(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, config: dict, global_config: dict | None = None, service: mlos_bench.services.base_service.Service | None = None)[source]
Bases:
mlos_bench.optimizers.base_optimizer.Optimizer
A wrapper class for the mlos_core.optimizers.
Create a new optimizer for the given configuration space defined by the tunables.
- Parameters:
tunables (TunableGroups) – The tunables to optimize.
config (dict) – Free-format key/value pairs of configuration parameters to pass to the optimizer.
global_config (dict | None)
service (Service | None)
- __exit__(ex_type: type[BaseException] | None, ex_val: BaseException | None, ex_tb: types.TracebackType | None) Literal[False] [source]
Exit the context of the optimizer.
- Parameters:
ex_type (type[BaseException] | None)
ex_val (BaseException | None)
ex_tb (types.TracebackType | None)
- Return type:
Literal[False]
- bulk_register(configs: collections.abc.Sequence[dict], scores: collections.abc.Sequence[dict[str, mlos_bench.tunables.tunable_types.TunableValue] | None], status: collections.abc.Sequence[mlos_bench.environments.status.Status] | None = None) bool [source]
Pre-load the optimizer with the bulk data from previous experiments.
- Parameters:
configs (Sequence[dict]) – Records of tunable values from other experiments.
scores (Sequence[Optional[dict[str, TunableValue]]]) – Benchmark results from experiments that correspond to configs.
status (Optional[Sequence[Status]]) – Status of the experiments that correspond to configs.
- Returns:
is_not_empty – True if there is data to register, false otherwise.
- Return type:
bool
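As a rough illustration of the kind of pre-filtering such bulk registration involves (hypothetical data and logic, not the actual implementation): only records with a successful status and a non-None score are usable for warm-starting the optimizer.

```python
# Hypothetical bulk data from previous experiments.
configs = [{"int_param": 10}, {"int_param": 50}, {"int_param": 90}]
scores = [{"cost": 19.0}, None, {"cost": 12.5}]
statuses = ["SUCCEEDED", "FAILED", "SUCCEEDED"]

# Keep only successful trials that actually produced a score.
usable = [
    (config, score)
    for config, score, status in zip(configs, scores, statuses)
    if status == "SUCCEEDED" and score is not None
]
print(len(usable))  # 2
```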
- get_best_observation() tuple[dict[str, float], mlos_bench.tunables.tunable_groups.TunableGroups] | tuple[None, None] [source]
Get the best observation so far.
- Returns:
(value, tunables) – The best value and the corresponding configuration. (None, None) if no successful observation has been registered yet.
- Return type:
tuple[dict[str, float], TunableGroups]
- register(tunables: mlos_bench.tunables.tunable_groups.TunableGroups, status: mlos_bench.environments.status.Status, score: dict[str, mlos_bench.tunables.tunable_types.TunableValue] | None = None) dict[str, float] | None [source]
Register the observation for the given configuration.
- Parameters:
tunables (TunableGroups) – The configuration that has been benchmarked. Usually it’s the same config that the .suggest() method returned.
status (Status) – Final status of the experiment (e.g., SUCCEEDED or FAILED).
score (Optional[dict[str, TunableValue]]) – A dict with the final benchmark results. None if the experiment was not successful.
- Returns:
value – Benchmark scores extracted (and possibly transformed) from the benchmark results; these are the values being MINIMIZED.
- Return type:
dict[str, float] | None
- suggest() mlos_bench.tunables.tunable_groups.TunableGroups [source]
Generate the next suggestion. Base class’ implementation increments the iteration count and returns the current values of the tunables.
- Returns:
tunables – The next configuration to benchmark. These are the same tunables we pass to the constructor, but with the values set to the next suggestion.
- Return type:
TunableGroups