mlos_bench.optimizers.base_optimizer module

Base class for an interface between the benchmarking framework and mlos_core optimizers.

class mlos_bench.optimizers.base_optimizer.Optimizer(tunables: TunableGroups, config: dict, global_config: dict | None = None, service: Service | None = None)

Bases: object

An abstract interface between the benchmarking framework and mlos_core optimizers.
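In typical use (e.g., when driven by the mlos_bench Scheduler), a concrete Optimizer is exercised in a suggest/run/register loop. The sketch below is illustrative only: opt is assumed to be an instance of a concrete subclass, and run_trial is a hypothetical callable that executes one benchmark trial and returns a Status plus an optional score dictionary.

    from typing import Callable, Dict, Optional, Tuple

    from mlos_bench.environments.status import Status
    from mlos_bench.optimizers.base_optimizer import Optimizer
    from mlos_bench.tunables.tunable_groups import TunableGroups

    # Hypothetical signature of a single-trial runner (not part of mlos_bench):
    TrialRunner = Callable[[TunableGroups], Tuple[Status, Optional[Dict[str, float]]]]

    def run_loop(opt: Optimizer, run_trial: TrialRunner) -> None:
        """Drive an Optimizer through its suggest/register loop (illustrative sketch)."""
        while opt.not_converged():
            tunables = opt.suggest()               # next configuration to try
            status, score = run_trial(tunables)    # hypothetical trial runner
            opt.register(tunables, status, score)  # score may be None for failed trials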

Attributes:
config_space

Get the tunable parameters of the optimizer as a ConfigurationSpace.

current_iteration

The current number of iterations (suggestions) registered.

max_suggestions

The maximum number of iterations (suggestions) to run.

name

The name of the optimizer.

seed

The random seed for the optimizer.

start_with_defaults

Return True if the optimizer should start with the default values.

supports_preload

Return True if the optimizer supports pre-loading the data from previous experiments.

targets

A dictionary of {target: direction} of optimization targets.

tunable_params

Get the tunable parameters of the optimizer as TunableGroups.

Methods

bulk_register(configs, scores[, status])

Pre-load the optimizer with the bulk data from previous experiments.

get_best_observation()

Get the best observation so far.

not_converged()

Return True if not converged, False otherwise.

register(tunables, status[, score])

Register the observation for the given configuration.

suggest()

Generate the next suggestion.

BASE_SUPPORTED_CONFIG_PROPS = {'max_suggestions', 'optimization_targets', 'seed', 'start_with_defaults'}
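These are the configuration keys recognized by the base class; concrete optimizers typically accept additional, optimizer-specific keys. A minimal construction sketch, assuming MockOptimizer as the concrete subclass and an already-loaded TunableGroups instance:

    from mlos_bench.optimizers.mock_optimizer import MockOptimizer  # assumed concrete subclass
    from mlos_bench.tunables.tunable_groups import TunableGroups

    def make_optimizer(tunables: TunableGroups) -> MockOptimizer:
        """Construct an optimizer from the base config properties only (illustrative sketch)."""
        config = {
            "optimization_targets": {"score": "min"},  # {metric: direction}, see the targets property
            "max_suggestions": 10,
            "seed": 42,
            "start_with_defaults": True,
        }
        return MockOptimizer(tunables=tunables, config=config)
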
abstract bulk_register(configs: Sequence[dict], scores: Sequence[Dict[str, int | float | str | None] | None], status: Sequence[Status] | None = None) → bool

Pre-load the optimizer with the bulk data from previous experiments.

Parameters:
configs : Sequence[dict]

Records of tunable values from other experiments.

scores : Sequence[Optional[Dict[str, TunableValue]]]

Benchmark results from experiments that correspond to configs.

status : Optional[Sequence[Status]]

Status of the experiments that correspond to configs.

Returns:
is_not_empty : bool

True if there is data to register, False otherwise.
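A sketch of pre-loading an optimizer with data from earlier trials, guarded by supports_preload. The tunable values, scores, and statuses below are made up; in practice they would come from mlos_bench storage.

    from mlos_bench.environments.status import Status
    from mlos_bench.optimizers.base_optimizer import Optimizer

    def preload_history(opt: Optimizer) -> bool:
        """Bulk-register (made-up) results from previous experiments (illustrative sketch)."""
        if not opt.supports_preload:
            return False
        configs = [{"vm_size": "Standard_B2s"}, {"vm_size": "Standard_D2s_v5"}]  # hypothetical tunables
        scores = [{"score": 112.5}, None]               # None for the failed trial
        status = [Status.SUCCEEDED, Status.FAILED]
        return opt.bulk_register(configs, scores, status)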

property config_space: ConfigurationSpace

Get the tunable parameters of the optimizer as a ConfigurationSpace.

Returns:
ConfigurationSpace

The ConfigSpace representation of the tunable parameters.

property current_iteration: int

The current number of iterations (suggestions) registered.

Note: this may or may not be the same as the number of configurations. See Also: Scheduler.trial_config_repeat_count and Scheduler.max_trials.

abstract get_best_observation() → Tuple[Dict[str, float], TunableGroups] | Tuple[None, None]

Get the best observation so far.

Returns:
(value, tunables) : Tuple[Dict[str, float], TunableGroups]

The best value and the corresponding configuration. (None, None) if no successful observation has been registered yet.
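A sketch of retrieving the best result, handling the (None, None) case:

    from mlos_bench.optimizers.base_optimizer import Optimizer

    def report_best(opt: Optimizer) -> None:
        """Print the best observation so far, if any (illustrative sketch)."""
        best_score, best_config = opt.get_best_observation()
        if best_score is None:
            print("No successful observations registered yet.")
        else:
            print("Best score:", best_score)
            print("Best config:", best_config)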

property max_suggestions: int

The maximum number of iterations (suggestions) to run.

Note: this may or may not be the same as the number of configurations. See Also: Scheduler.trial_config_repeat_count and Scheduler.max_trials.

property name: str

The name of the optimizer.

We save this information in mlos_bench storage to track the source of each configuration.

not_converged() → bool

Return True if not converged, False otherwise.

Base implementation just checks the iteration count.

abstract register(tunables: TunableGroups, status: Status, score: Dict[str, int | float | str | None] | None = None) → Dict[str, float] | None

Register the observation for the given configuration.

Parameters:
tunables : TunableGroups

The configuration that has been benchmarked. Usually it’s the same config that the .suggest() method returned.

status : Status

Final status of the experiment (e.g., SUCCEEDED or FAILED).

score : Optional[Dict[str, TunableValue]]

A dict with the final benchmark results. None if the experiment was not successful.

Returns:
value : Optional[Dict[str, float]]

Benchmark scores extracted (and possibly transformed) from the registered results, expressed in the form that is being MINIMIZED.
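A sketch of registering both a successful and a failed trial; the metric names and score values are made up:

    from mlos_bench.environments.status import Status
    from mlos_bench.optimizers.base_optimizer import Optimizer
    from mlos_bench.tunables.tunable_groups import TunableGroups

    def register_outcome(opt: Optimizer, tunables: TunableGroups, succeeded: bool) -> None:
        """Register one trial outcome with the optimizer (illustrative sketch)."""
        if succeeded:
            # Hypothetical benchmark results for this configuration.
            minimized = opt.register(tunables, Status.SUCCEEDED, {"score": 98.7, "latency": 12.3})
            print("Scores in minimization form:", minimized)
        else:
            # Failed trials carry no score.
            opt.register(tunables, Status.FAILED, None)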

property seed: int

The random seed for the optimizer.

property start_with_defaults: bool

Return True if the optimizer should start with the default values.

Note: This parameter is mutable and will be reset to False after the defaults are first suggested.

suggest() → TunableGroups

Generate the next suggestion.

The base class implementation increments the iteration count and returns the current values of the tunables.

Returns:
tunables : TunableGroups

The next configuration to benchmark. These are the same tunables we pass to the constructor, but with the values set to the next suggestion.
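A sketch of turning a suggestion into a flat parameter dictionary; TunableGroups.get_param_values() is assumed here to be the flat {name: value} accessor:

    from typing import Dict

    from mlos_bench.optimizers.base_optimizer import Optimizer

    def next_params(opt: Optimizer) -> Dict[str, object]:
        """Get the next suggested configuration as a flat dict (illustrative sketch)."""
        suggestion = opt.suggest()
        # get_param_values() is assumed to return {tunable name: current value}.
        return suggestion.get_param_values()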

property supports_preload: bool

Return True if the optimizer supports pre-loading the data from previous experiments.

property targets: Dict[str, Literal['min', 'max']]

A dictionary of {target: direction} of optimization targets.
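For example, an optimizer that minimizes score and maximizes throughput exposes {'score': 'min', 'throughput': 'max'}. A minimal sketch of using targets to flip raw results into minimization form (the metric names are made up):

    from typing import Dict

    from mlos_bench.optimizers.base_optimizer import Optimizer

    def to_minimization_form(opt: Optimizer, results: Dict[str, float]) -> Dict[str, float]:
        """Reorient raw benchmark results so that every target is minimized (illustrative sketch)."""
        return {
            metric: results[metric] if direction == "min" else -results[metric]
            for metric, direction in opt.targets.items()
        }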

property tunable_params: TunableGroups

Get the tunable parameters of the optimizer as TunableGroups.

Returns:
tunables : TunableGroups

A collection of covariant groups of tunable parameters.