mlos_bench.optimizers.grid_search_optimizer module

Grid search optimizer for mlos_bench.

class mlos_bench.optimizers.grid_search_optimizer.GridSearchOptimizer(tunables: TunableGroups, config: dict, global_config: dict | None = None, service: Service | None = None)

Bases: TrackBestOptimizer

Grid search optimizer that exhaustively iterates over the cross-product of all tunable values.
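The sketch below shows one way the class might be wired up by hand. It is illustrative only: the tunables layout and the optimizer config keys (max_suggestions, seed, optimization_targets) are assumptions inferred from the attributes documented below, not a verbatim schema.

from mlos_bench.tunables.tunable_groups import TunableGroups
from mlos_bench.optimizers.grid_search_optimizer import GridSearchOptimizer

# A single covariant group with one categorical tunable (layout assumed).
tunables = TunableGroups({
    "disk": {
        "cost": 1,
        "params": {
            "io_scheduler": {
                "type": "categorical",
                "values": ["mq-deadline", "none", "kyber"],
                "default": "none",
            },
        },
    },
})

# Optimizer config keys assumed from the attributes listed below.
opt = GridSearchOptimizer(
    tunables,
    config={
        "max_suggestions": 100,
        "seed": 42,
        "optimization_targets": {"score": "min"},
    },
)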

Attributes:
config_space

Get the tunable parameters of the optimizer as a ConfigurationSpace.

current_iteration

The current number of iterations (suggestions) registered.

max_suggestions

The maximum number of iterations (suggestions) to run.

name

The name of the optimizer.

pending_configs

Gets the set of pending configs in this grid search optimizer.

seed

The random seed for the optimizer.

start_with_defaults

Return True if the optimizer should start with the default values.

suggested_configs

Gets the set of configs that have been suggested but not yet registered.

supports_preload

Return True if the optimizer supports pre-loading the data from previous experiments.

targets

A dictionary of {target: direction} of optimization targets.

tunable_params

Get the tunable parameters of the optimizer as TunableGroups.

Methods

bulk_register(configs, scores[, status])

Pre-load the optimizer with the bulk data from previous experiments.

get_best_observation()

Get the best observation so far.

not_converged()

Return True if not converged, False otherwise.

register(tunables, status[, score])

Register the observation for the given configuration.

suggest()

Generate the next grid search suggestion.
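Together these methods form the usual suggest / evaluate / register loop. A minimal sketch, assuming the opt instance from the constructor example above and a hypothetical run_benchmark callable that returns a score dict; the (score, config) ordering of get_best_observation() is assumed from the TrackBestOptimizer base class.

from mlos_bench.environments.status import Status

def run_benchmark(tunables):
    # Hypothetical stand-in for actually running the experiment.
    return {"score": 42.0}

while opt.not_converged():
    tunables = opt.suggest()              # next point on the grid
    score = run_benchmark(tunables)       # evaluate it (hypothetical helper)
    opt.register(tunables, Status.SUCCEEDED, score)

best_score, best_config = opt.get_best_observation()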

bulk_register(configs: Sequence[dict], scores: Sequence[Dict[str, int | float | str | None] | None], status: Sequence[Status] | None = None) → bool

Pre-load the optimizer with the bulk data from previous experiments.

Parameters:
configs : Sequence[dict]

Records of tunable values from other experiments.

scores : Sequence[Optional[Dict[str, TunableValue]]]

Benchmark results from experiments that correspond to configs.

status : Optional[Sequence[Status]]

Status of the experiments that correspond to configs.

Returns:
is_not_empty : bool

True if there is data to register, False otherwise.
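For example, replaying the results of an earlier experiment could look like the sketch below; the configs, scores, and statuses are made up, and opt is the instance from the example above.

from mlos_bench.environments.status import Status

configs = [
    {"io_scheduler": "none"},
    {"io_scheduler": "kyber"},
]
scores = [
    {"score": 87.4},
    None,   # the second run produced no usable result
]
status = [Status.SUCCEEDED, Status.FAILED]

has_data = opt.bulk_register(configs, scores, status)   # True: data was registered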

not_converged() → bool

Return True if not converged, False otherwise.

Base implementation just checks the iteration count.

property pending_configs: Iterable[Dict[str, int | float | str | None]]

Gets the set of pending configs in this grid search optimizer.

Returns:
Iterable[Dict[str, TunableValue]]
register(tunables: TunableGroups, status: Status, score: Dict[str, int | float | str | None] | None = None) → Dict[str, float] | None

Register the observation for the given configuration.

Parameters:
tunables : TunableGroups

The configuration that has been benchmarked. Usually it’s the same config that the .suggest() method returned.

status : Status

Final status of the experiment (e.g., SUCCEEDED or FAILED).

score : Optional[Dict[str, TunableValue]]

A dict with the final benchmark results. None if the experiment was not successful.

Returns:
value : Optional[Dict[str, float]]

Benchmark scores extracted (and possibly transformed) from the results, expressed as values to be MINIMIZED.
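A failed run is registered with score=None; a small sketch, again assuming the opt instance from the earlier examples.

from mlos_bench.environments.status import Status

tunables = opt.suggest()
# The benchmark crashed, so there is no score to report.
result = opt.register(tunables, Status.FAILED, None)
# result is expected to be None for unsuccessful runs; for SUCCEEDED runs
# it holds the (possibly sign-flipped) scores being minimized.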

suggest() → TunableGroups

Generate the next grid search suggestion.

property suggested_configs: Iterable[Dict[str, int | float | str | None]]

Gets the set of configs that have been suggested but not yet registered.

Returns:
Iterable[Dict[str, TunableValue]]
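These two properties make it easy to track grid coverage; a brief sketch, assuming the opt instance from the examples above.

remaining = sum(1 for _ in opt.pending_configs)     # grid points not yet suggested
in_flight = sum(1 for _ in opt.suggested_configs)   # suggested but not yet registered
print(f"{remaining} grid points pending, {in_flight} awaiting results")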