mlos_bench.environments.MockEnv

class mlos_bench.environments.MockEnv(*, name: str, config: dict, global_config: dict | None = None, tunables: TunableGroups | None = None, service: Service | None = None)

Scheduler-side environment that produces mock benchmark results.

Attributes:
parameters

Key/value pairs of all environment parameters (i.e., const_args and tunable_params).

tunable_params

Get the configuration space (tunable parameters) of this environment.

Methods

new(*, env_name, class_name, config[, ...])

Factory method for a new environment with a given config.

pprint([indent, level])

Pretty-print the environment configuration.

run()

Produce mock benchmark data for one experiment.

setup(tunables[, global_config])

Set up a new benchmark environment, if necessary.

status()

Check the status of the benchmark environment.

teardown()

Tear down the benchmark environment.

__init__(*, name: str, config: dict, global_config: dict | None = None, tunables: TunableGroups | None = None, service: Service | None = None)

Create a new environment that produces mock benchmark data.

Parameters:
name: str

Human-readable name of the environment.

config: dict

Free-format dictionary that contains the benchmark environment configuration.

global_config: dict

Free-format dictionary of global parameters (e.g., security credentials) to be mixed into the “const_args” section of the local config. Optional arguments are mock_env_seed, mock_env_range, and mock_env_metrics. Set mock_env_seed to -1 for deterministic behavior, 0 for default randomness.

tunables: TunableGroups

A collection of tunable parameters for all environments.

service: Service

An optional service object. Not used by this class.
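
For illustration, a minimal construction sketch using the mock_env_* settings described above. The environment name, the values, the empty tunable groups, and placing the mock_env_* keys in the local config dict are assumptions made for this example, not part of the documented API.

from mlos_bench.environments.mock_env import MockEnv
from mlos_bench.tunables.tunable_groups import TunableGroups

# Illustrative values only.
env = MockEnv(
    name="mock_env_example",
    config={
        "mock_env_seed": 42,            # -1 = deterministic, 0 = default randomness
        "mock_env_range": [60, 120],    # assumed range for the generated metric values
        "mock_env_metrics": ["score"],  # names of the metrics to report
    },
    tunables=TunableGroups(),           # no tunable parameters in this sketch
)
print(env.parameters)                   # const_args and tunable_params key/value pairs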

run() → Tuple[Status, datetime, Dict[str, int | float | str | None] | None]

Produce mock benchmark data for one experiment.

Returns:
(status, timestamp, output): (Status, datetime, dict)

3-tuple of (Status, timestamp, output) values, where output is a dict with the results, or None if the status is not COMPLETED. The keys of the output dict are the names of the metrics specified in the config; by default, it is a single metric named “score”. All output metrics have the same value.
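
A hedged end-to-end sketch of the setup/run/teardown lifecycle and of unpacking the 3-tuple above. The tunable group, its parameter, and all values are invented for this example; only the method names, the return shape, and the default “score” metric come from this page.

from mlos_bench.environments.mock_env import MockEnv
from mlos_bench.tunables.tunable_groups import TunableGroups

# One made-up tunable group so the mock score has something to derive from.
tunables = TunableGroups({
    "example_group": {
        "cost": 1,
        "params": {
            "x": {"type": "int", "default": 50, "range": [0, 100]},
        },
    },
})
env = MockEnv(
    name="mock_env_example",
    config={"tunable_params": ["example_group"]},
    tunables=tunables,
)

try:
    env.setup(tunables)                      # prepare the (mock) environment
    (status, timestamp, output) = env.run()  # one experiment's worth of mock data
    if output is not None:                   # output is None unless the run completed
        print(f"[{timestamp}] {status}: score={output['score']}")
finally:
    env.teardown()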