mlos_bench.storage.base_storage module

Base interface for saving and restoring the benchmark data.

class mlos_bench.storage.base_storage.Storage(config: Dict[str, Any], global_config: dict | None = None, service: Service | None = None)

Bases: object

An abstract interface between the benchmarking framework and storage systems (e.g., SQLite or MLflow).

Attributes:
experiments

Retrieve the experiments’ data from the storage.

Methods

Experiment(*, tunables, experiment_id, ...)

Base interface for storing the results of the experiment.

Trial(*, tunables, experiment_id, trial_id, ...)

Base interface for storing the results of a single run of the experiment.

experiment(*, experiment_id, trial_id, ...)

Create a new experiment in the storage.

class Experiment(*, tunables: TunableGroups, experiment_id: str, trial_id: int, root_env_config: str, description: str, opt_targets: Dict[str, Literal['min', 'max']])

Bases: object

Base interface for storing the results of the experiment.

This class is instantiated in the Storage.experiment() method.

Attributes:
description

Get the Experiment’s description.

experiment_id

Get the Experiment’s ID.

opt_targets

Get the Experiment’s optimization targets and directions.

trial_id

Get the current Trial ID.

tunables

Get the Experiment’s tunables.

Methods

load([last_trial_id])

Load (tunable values, benchmark scores, status) to warm up the optimizer.

load_telemetry(trial_id)

Retrieve the telemetry data for a given trial.

load_tunable_config(config_id)

Load tunable values for a given config ID.

merge(experiment_ids)

Merge in the results of other (compatible) experiments' trials.

new_trial(tunables[, ts_start, config])

Create a new experiment run in the storage.

pending_trials(timestamp, *, running)

Return an iterator over the pending trials that are scheduled to run on or before the specified timestamp.

property description: str

Get the Experiment’s description.

property experiment_id: str

Get the Experiment’s ID.

abstract load(last_trial_id: int = -1) → Tuple[List[int], List[dict], List[Dict[str, Any] | None], List[Status]]

Load (tunable values, benchmark scores, status) to warm up the optimizer.

If last_trial_id is present, load only the data from the (completed) trials that were scheduled after the given trial ID. Otherwise, return data from ALL merged-in experiments and attempt to impute the missing tunable values.

Parameters:
last_trial_id : int

(Optional) Trial ID to start from.

Returns:
(trial_ids, configs, scores, status) : ([int], [dict], [Optional[dict]], [Status])

Trial IDs, tunable values, benchmark scores, and status of the trials.
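The four parallel lists that load() returns are typically replayed into an optimizer before new trials are suggested. The sketch below illustrates that warm-up pattern with stand-in data and a plain callback; it is not the real mlos_bench optimizer API, just the shape of the loop that consumers of load() are expected to write.

```python
from typing import List, Tuple

def warm_up(register, load_result) -> int:
    """Replay completed trials into an optimizer callback; return the count."""
    trial_ids, configs, scores, statuses = load_result
    count = 0
    for trial_id, config, score, status in zip(trial_ids, configs, scores, statuses):
        if score is not None:  # failed/incomplete trials carry no scores
            register(config, score)
            count += 1
    return count

# Fake data in the 4-tuple-of-lists shape that load() returns:
fake = (
    [1, 2, 3],
    [{"x": 1}, {"x": 2}, {"x": 3}],
    [{"score": 0.5}, None, {"score": 0.3}],
    ["SUCCEEDED", "FAILED", "SUCCEEDED"],
)
seen: List[Tuple[dict, dict]] = []
n = warm_up(lambda cfg, score: seen.append((cfg, score)), fake)
```

Note that the score entry is Optional: only completed trials contribute data points to the optimizer.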

abstract load_telemetry(trial_id: int) → List[Tuple[datetime, str, Any]]

Retrieve the telemetry data for a given trial.

Parameters:
trial_id : int

Trial ID.

Returns:
metrics : List[Tuple[datetime, str, Any]]

Telemetry data.

abstract load_tunable_config(config_id: int) → Dict[str, Any]

Load tunable values for a given config ID.

abstract merge(experiment_ids: List[str]) → None

Merge in the results of other (compatible) experiments' trials. Used to help warm up the optimizer for this experiment.

Parameters:
experiment_ids : List[str]

List of IDs of the experiments to merge in.

new_trial(tunables: TunableGroups, ts_start: datetime | None = None, config: Dict[str, Any] | None = None) → Trial

Create a new experiment run in the storage.

Parameters:
tunables : TunableGroups

Tunable parameters to use for the trial.

ts_start : Optional[datetime]

Timestamp of the trial start (can be in the future).

config : dict

Key/value pairs of additional non-tunable parameters of the trial.

Returns:
trial : Storage.Trial

An object that allows updating the storage with the results of the trial run.

property opt_targets: Dict[str, Literal['min', 'max']]

Get the Experiment’s optimization targets and directions.

abstract pending_trials(timestamp: datetime, *, running: bool) → Iterator[Trial]

Return an iterator over the pending trials that are scheduled to run on or before the specified timestamp.

Parameters:
timestamp : datetime

The time in UTC to check for scheduled trials.

running : bool

If True, include trials that are already running; otherwise, return only the scheduled trials.

Returns:
trials : Iterator[Storage.Trial]

An iterator over the scheduled (and possibly running) trials.
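A scheduler polls pending_trials() in a loop to pick up work that is due. The stand-in below mimics the described filtering semantics (scheduled on or before the timestamp, with an opt-in for already-running trials) using a simplified trial record; it is an illustration of the contract, not the real implementation.

```python
from datetime import datetime, timedelta
from typing import Iterator, List, NamedTuple

class FakeTrial(NamedTuple):
    # Simplified stand-in for Storage.Trial, for illustration only.
    trial_id: int
    ts_start: datetime
    running: bool

def pending_trials(trials: List[FakeTrial], timestamp: datetime,
                   *, running: bool) -> Iterator[FakeTrial]:
    # Yield trials scheduled on or before `timestamp`;
    # include already-running ones only when `running=True`.
    for trial in trials:
        if trial.ts_start <= timestamp and (running or not trial.running):
            yield trial

now = datetime(2024, 1, 1, 12, 0)
trials = [
    FakeTrial(1, now - timedelta(hours=1), running=False),
    FakeTrial(2, now - timedelta(minutes=5), running=True),
    FakeTrial(3, now + timedelta(hours=1), running=False),
]
scheduled_only = [t.trial_id for t in pending_trials(trials, now, running=False)]
with_running = [t.trial_id for t in pending_trials(trials, now, running=True)]
```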

property trial_id: int

Get the current Trial ID.

property tunables: TunableGroups

Get the Experiment’s tunables.

class Trial(*, tunables: TunableGroups, experiment_id: str, trial_id: int, tunable_config_id: int, opt_targets: Dict[str, Literal['min', 'max']], config: Dict[str, Any] | None = None)

Bases: object

Base interface for storing the results of a single run of the experiment.

This class is instantiated in the Storage.Experiment.new_trial() method.

Attributes:
opt_targets

Get the Trial’s optimization targets and directions.

status

Get the status of the current trial.

trial_id

ID of the current trial.

tunable_config_id

ID of the current trial (tunable) configuration.

tunables

Tunable parameters of the current trial.

Methods

config([global_config])

Produce a copy of the global configuration updated with the parameters of the current trial.

update(status, timestamp[, metrics])

Update the storage with the results of the trial run.

update_telemetry(status, timestamp, metrics)

Save the experiment's telemetry data and intermediate status.

config(global_config: Dict[str, Any] | None = None) → Dict[str, Any]

Produce a copy of the global configuration updated with the parameters of the current trial.

Note: this is not the target Environment's "config" (i.e., the tunable params), but rather the internal "config", which combines the more static variables defined in the JSON config files.
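In other words, config() overlays the trial's own parameters and identifiers onto a copy of the global configuration. The sketch below shows that merge order with plain dicts; the helper name and the exact identifier keys are illustrative assumptions, not the library's internals.

```python
from typing import Any, Dict, Optional

def trial_config(trial_params: Dict[str, Any], experiment_id: str, trial_id: int,
                 global_config: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # Copy first, so the caller's global config is never mutated.
    config = dict(global_config or {})
    # Trial-specific parameters win over global defaults.
    config.update(trial_params)
    # Identifier keys shown here are hypothetical placeholders.
    config["experiment_id"] = experiment_id
    config["trial_id"] = trial_id
    return config

glob = {"storage": "sqlite", "timeout": 60}
cfg = trial_config({"timeout": 120}, "exp-1", 7, glob)
```

Because the global config is copied before being updated, repeated calls for different trials stay independent.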

property opt_targets: Dict[str, Literal['min', 'max']]

Get the Trial’s optimization targets and directions.

property status: Status

Get the status of the current trial.

property trial_id: int

ID of the current trial.

property tunable_config_id: int

ID of the current trial (tunable) configuration.

property tunables: TunableGroups

Tunable parameters of the current trial.

(e.g., application Environment’s “config”)

abstract update(status: Status, timestamp: datetime, metrics: Dict[str, Any] | None = None) → Dict[str, Any] | None

Update the storage with the results of the trial run.

Parameters:
status : Status

Status of the trial run.

timestamp : datetime

Timestamp of the status and metrics.

metrics : Optional[Dict[str, Any]]

One or several metrics of the trial run. Must contain the (float) optimization target if the status is SUCCEEDED.

Returns:
metrics : Optional[Dict[str, Any]]

Same as metrics, but always in the dict format.
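A minimal sketch of the contract described above: on SUCCEEDED, the metrics must include every optimization target, and the metrics are returned in dict form. The Status enum and validation logic here are simplified stand-ins, not the real mlos_bench implementation.

```python
from datetime import datetime
from enum import Enum
from typing import Any, Dict, Optional

class Status(Enum):
    # Simplified stand-in for mlos_bench's Status enum.
    SUCCEEDED = 1
    FAILED = 2

def update(opt_targets: Dict[str, str], status: Status, timestamp: datetime,
           metrics: Optional[Dict[str, Any]] = None) -> Optional[Dict[str, Any]]:
    # A SUCCEEDED trial must report a value for each optimization target.
    if status == Status.SUCCEEDED:
        missing = set(opt_targets) - set(metrics or {})
        if missing:
            raise ValueError(f"Missing optimization targets: {missing}")
    # ... a real implementation would persist (status, timestamp, metrics) here ...
    return metrics

result = update({"score": "min"}, Status.SUCCEEDED, datetime(2024, 1, 1), {"score": 0.42})
```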

abstract update_telemetry(status: Status, timestamp: datetime, metrics: List[Tuple[datetime, str, Any]]) → None

Save the experiment’s telemetry data and intermediate status.

Parameters:
status : Status

Current status of the trial.

timestamp : datetime

Timestamp of the status (but not the metrics).

metrics : List[Tuple[datetime, str, Any]]

Telemetry data.
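The telemetry payload is a flat list of (timestamp, metric-name, value) tuples, so the same metric can appear many times with different timestamps. The metric names below are made up for illustration.

```python
from datetime import datetime, timedelta

ts = datetime(2024, 1, 1, 12, 0, 0)
telemetry = [
    (ts, "cpu_util", 0.83),
    (ts + timedelta(seconds=30), "cpu_util", 0.91),
    (ts + timedelta(seconds=30), "mem_used_mb", 2048),
]
# Distinct metric names recorded in this batch:
names = sorted({name for (_, name, _) in telemetry})
```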

abstract experiment(*, experiment_id: str, trial_id: int, root_env_config: str, description: str, tunables: TunableGroups, opt_targets: Dict[str, Literal['min', 'max']]) → Experiment

Create a new experiment in the storage.

We need the opt_targets parameter here to know which metrics to retrieve when we load the data from previous trials. Later we will replace it with full metadata about the optimization direction, multiple objectives, etc.

Parameters:
experiment_id : str

Unique identifier of the experiment.

trial_id : int

Starting number of the trial.

root_env_config : str

A path to the root JSON configuration file of the benchmarking environment.

description : str

Human-readable description of the experiment.

tunables : TunableGroups
opt_targets : Dict[str, Literal["min", "max"]]

Names of metrics we're optimizing for and the optimization direction {min, max}.

Returns:
experiment : Storage.Experiment

An object that allows updating the storage with the results of the experiment and related data.
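Putting the pieces together, a typical lifecycle is Storage.experiment() → Experiment.new_trial() → Trial.update(). The toy in-memory classes below mirror that flow so it can be run end to end; they are simplified stand-ins, not the real mlos_bench storage backend.

```python
from datetime import datetime
from typing import Any, Dict, List, Optional, Tuple

class FakeTrial:
    # Stand-in for Storage.Trial: records results into the experiment's log.
    def __init__(self, trial_id: int, tunables: Dict[str, Any],
                 results: List[Tuple[int, str, Optional[Dict[str, Any]]]]):
        self.trial_id = trial_id
        self._tunables = tunables
        self._results = results

    def update(self, status: str, timestamp: datetime,
               metrics: Optional[Dict[str, Any]] = None) -> Optional[Dict[str, Any]]:
        self._results.append((self.trial_id, status, metrics))
        return metrics  # always returned in dict form (or None)

class FakeExperiment:
    # Stand-in for Storage.Experiment: hands out sequentially numbered trials.
    def __init__(self, experiment_id: str, trial_id: int):
        self.experiment_id = experiment_id
        self._next_trial_id = trial_id
        self._results: List[Tuple[int, str, Optional[Dict[str, Any]]]] = []

    def new_trial(self, tunables: Dict[str, Any]) -> FakeTrial:
        trial = FakeTrial(self._next_trial_id, tunables, self._results)
        self._next_trial_id += 1
        return trial

exp = FakeExperiment("my-experiment", trial_id=1)
trial = exp.new_trial({"cache_mb": 512})
trial.update("SUCCEEDED", datetime(2024, 1, 1), {"latency_ms": 12.5})
```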

abstract property experiments: Dict[str, ExperimentData]

Retrieve the experiments’ data from the storage.

Returns:
experiments : Dict[str, ExperimentData]

A dictionary of the experiments' data, keyed by experiment ID.