mlos_bench.optimizers.mlos_core_optimizer module

A wrapper for mlos_core optimizers for mlos_bench.

class mlos_bench.optimizers.mlos_core_optimizer.MlosCoreOptimizer(tunables: TunableGroups, config: dict, global_config: dict | None = None, service: Service | None = None)

Bases: Optimizer

A wrapper class for the mlos_core optimizers.
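
A minimal construction sketch follows; the tunable definition and the optimizer config keys shown (optimization_targets, max_suggestions, seed) are illustrative assumptions rather than the authoritative mlos_bench config schema:

    from mlos_bench.optimizers.mlos_core_optimizer import MlosCoreOptimizer
    from mlos_bench.tunables.tunable_groups import TunableGroups

    # Illustrative tunable definition (assumed layout: group -> cost/params).
    tunables = TunableGroups({
        "kernel": {
            "cost": 1,
            "params": {
                "sched_latency_ns": {
                    "type": "int",
                    "default": 6000000,
                    "range": [1000000, 20000000],
                },
            },
        },
    })

    # Illustrative optimizer config; the key names are assumptions.
    opt = MlosCoreOptimizer(
        tunables,
        config={
            "optimization_targets": {"score": "min"},
            "max_suggestions": 10,
            "seed": 42,
        },
    )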

Attributes:
config_space

Get the tunable parameters of the optimizer as a ConfigurationSpace.

current_iteration

The current number of iterations (suggestions) registered.

max_suggestions

The maximum number of iterations (suggestions) to run.

name

The name of the optimizer.

seed

The random seed for the optimizer.

start_with_defaults

Return True if the optimizer should start with the default values.

supports_preload

Return True if the optimizer supports pre-loading the data from previous experiments.

targets

A dictionary of {target: direction} of optimization targets.

tunable_params

Get the tunable parameters of the optimizer as TunableGroups.

Methods

bulk_register(configs, scores[, status])

Pre-load the optimizer with the bulk data from previous experiments.

get_best_observation()

Get the best observation so far.

not_converged()

Return True if not converged, False otherwise.

register(tunables, status[, score])

Register the observation for the given configuration.

suggest()

Generate the next suggestion.
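
Taken together, these methods are typically driven in a suggest/register loop. A simplified sketch, continuing from the construction example above (run_benchmark() is a hypothetical stand-in for whatever executes the experiment and returns a Status plus a score dict):

    def run_benchmark(tunables):
        """Hypothetical helper: run one trial and return (Status, score dict)."""
        raise NotImplementedError

    while opt.not_converged():
        tunables = opt.suggest()               # next configuration to try
        status, score = run_benchmark(tunables)
        opt.register(tunables, status, score)  # feed the result back

    best_score, best_config = opt.get_best_observation()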

bulk_register(configs: Sequence[dict], scores: Sequence[Dict[str, int | float | str | None] | None], status: Sequence[Status] | None = None) → bool

Pre-load the optimizer with the bulk data from previous experiments.

Parameters:
configs : Sequence[dict]

Records of tunable values from other experiments.

scores : Sequence[Optional[Dict[str, TunableValue]]]

Benchmark results from experiments that correspond to configs.

status : Optional[Sequence[Status]]

Status of the experiments that correspond to configs.

Returns:
is_not_empty : bool

True if there is data to register, False otherwise.
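
For instance, observations restored from a previous experiment might be replayed into the optimizer like this (the records are illustrative; opt is the instance from the construction sketch above):

    from mlos_bench.environments.status import Status

    configs = [
        {"sched_latency_ns": 6000000},
        {"sched_latency_ns": 12000000},
    ]
    scores = [
        {"score": 88.8},
        None,  # the second run failed, so it has no results
    ]
    status = [Status.SUCCEEDED, Status.FAILED]

    if opt.bulk_register(configs, scores, status):
        print("Optimizer warm-started from prior observations.")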

get_best_observation() → Tuple[Dict[str, float], TunableGroups] | Tuple[None, None]

Get the best observation so far.

Returns:
(value, tunables) : Tuple[Dict[str, float], TunableGroups]

The best value and the corresponding configuration. (None, None) if no successful observation has been registered yet.
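
A short sketch of handling the empty case (opt as constructed in the example above):

    best_score, best_tunables = opt.get_best_observation()
    if best_score is None:
        print("No successful observations registered yet.")
    else:
        print(f"Best score so far: {best_score}")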

property name: str

The name of the optimizer.

We save this information in mlos_bench storage to track the source of each configuration.

register(tunables: TunableGroups, status: Status, score: Dict[str, int | float | str | None] | None = None) → Dict[str, float] | None

Register the observation for the given configuration.

Parameters:
tunables : TunableGroups

The configuration that has been benchmarked. Usually it’s the same config that the .suggest() method returned.

status : Status

Final status of the experiment (e.g., SUCCEEDED or FAILED).

score : Optional[Dict[str, TunableValue]]

A dict with the final benchmark results. None if the experiment was not successful.

Returns:
value : Optional[Dict[str, float]]

Benchmark scores extracted (and possibly transformed) from the reported results, expressed as values to be MINIMIZED.
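
A hedged example of registering a single result, assuming one optimization target named "score" (as configured in the construction sketch above):

    from mlos_bench.environments.status import Status

    tunables = opt.suggest()
    # ... run the benchmark with this configuration ...
    registered = opt.register(tunables, Status.SUCCEEDED, {"score": 88.8})
    # `registered` holds the values being minimized (possibly sign-flipped
    # for maximization targets), or None when there are no scores to record.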

suggest() → TunableGroups

Generate the next suggestion. The base class implementation increments the iteration count and returns the current values of the tunables.

Returns:
tunablesTunableGroups

The next configuration to benchmark. These are the same tunables that were passed to the constructor, but with their values set to the next suggestion.
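
For example, one might inspect the suggested values before launching the benchmark (the get_param_values() accessor on TunableGroups is assumed here):

    tunables = opt.suggest()
    # Assumed accessor: dump the suggested parameter values as a plain dict.
    print(tunables.get_param_values())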