mlos_bench.environments.mock_env module

Scheduler-side environment that mocks benchmark results.

class mlos_bench.environments.mock_env.MockEnv(*, name: str, config: dict, global_config: dict | None = None, tunables: TunableGroups | None = None, service: Service | None = None)

Bases: Environment

Scheduler-side environment that mocks benchmark results.
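
A minimal construction sketch. The config keys shown ("mock_env_seed", "mock_env_range", "mock_env_metrics") are illustrative assumptions; consult the mlos_bench config schema for the exact names and defaults:

    from mlos_bench.environments.mock_env import MockEnv
    from mlos_bench.tunables.tunable_groups import TunableGroups

    # Hypothetical config values: a fixed RNG seed, a range for the
    # generated scores, and a single output metric name.
    env = MockEnv(
        name="mock_env",
        config={
            "mock_env_seed": 42,            # assumed key: seed for reproducible scores
            "mock_env_range": [60, 120],    # assumed key: bounds of generated values
            "mock_env_metrics": ["score"],  # assumed key: defaults to ["score"]
        },
        tunables=TunableGroups(),
    )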

Attributes:
parameters

Key/value pairs of all environment parameters (i.e., const_args and tunable_params).

tunable_params

Get the configuration space of the given environment.
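
A quick sketch of reading these attributes, assuming the env constructed above:

    print(env.parameters)        # const_args merged with current tunable values
    groups = env.tunable_params  # TunableGroups for this environment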

Methods

new(*, env_name, class_name, config[, ...])

Factory method for a new environment with a given config.

pprint([indent, level])

Pretty-print the environment configuration.

run()

Produce mock benchmark data for one experiment.

setup(tunables[, global_config])

Set up a new benchmark environment, if necessary.

status()

Check the status of the benchmark environment.

teardown()

Tear down the benchmark environment.

run() → Tuple[Status, datetime, Dict[str, int | float | str | None] | None]

Produce mock benchmark data for one experiment.

Returns:
(status, timestamp, output) : (Status, datetime, dict)

3-tuple of (Status, timestamp, output) values, where output is a dict with the results or None if the status is not COMPLETED. The keys of the output dict are the names of the metrics specified in the config; by default it’s just one metric named “score”. All output metrics have the same value.
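
A hedged end-to-end sketch of calling run(): the environment must be set up first, per the base Environment contract, and the 3-tuple is unpacked as documented above. Status.SUCCEEDED is assumed to be the completed-run value from mlos_bench.environments.status:

    from mlos_bench.environments.status import Status

    assert env.setup(TunableGroups())        # must succeed before run()
    (status, timestamp, output) = env.run()
    if status == Status.SUCCEEDED:           # assumption: the COMPLETED case
        print(timestamp, output["score"])    # default single metric "score"
    env.teardown()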