mlos_bench.schedulers.base_scheduler
Base class for the optimization loop scheduling policies.
Classes
Scheduler: Base class for the optimization loop scheduling policies.
Module Contents
- class mlos_bench.schedulers.base_scheduler.Scheduler(*, config: dict[str, Any], global_config: dict[str, Any], trial_runners: collections.abc.Iterable[mlos_bench.schedulers.trial_runner.TrialRunner], optimizer: mlos_bench.optimizers.base_optimizer.Optimizer, storage: mlos_bench.storage.base_storage.Storage, root_env_config: str)[source]
Bases:
contextlib.AbstractContextManager
Base class for the optimization loop scheduling policies.
Create a new instance of the scheduler. The constructor of this and the derived classes is called by the persistence service after reading the class JSON configuration. Other objects like the TrialRunner(s) and their Environment(s) and Optimizer are provided by the Launcher.
- Parameters:
config (dict) – The configuration for the Scheduler.
global_config (dict) – The global configuration for the Experiment.
trial_runners (Iterable[TrialRunner]) – The set of TrialRunner(s) (and associated Environment(s)) to benchmark/optimize.
optimizer (Optimizer) – The Optimizer to use.
storage (Storage) – The Storage to use.
root_env_config (str) – Path to the root Environment configuration.
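For illustration, a hedged sketch of what the config and global_config dictionaries might contain, assuming scheduler-level keys named after the max_trials and trial_config_repeat_count properties documented below; the exact set of accepted keys is defined by the scheduler JSON config schema, not this page:

    # Hypothetical values only; the keys are assumptions based on the
    # Scheduler properties documented below.
    config = {
        "max_trials": 100,               # -1 would mean no limit (see max_trials)
        "trial_config_repeat_count": 3,  # see trial_config_repeat_count
    }
    global_config = {
        "experiment_id": "MyExperiment",  # hypothetical experiment-wide setting
    }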
- __exit__(ex_type: type[BaseException] | None, ex_val: BaseException | None, ex_tb: types.TracebackType | None) Literal[False] [source]
Exit the context of the scheduler.
- Parameters:
ex_type (type[BaseException] | None)
ex_val (BaseException | None)
ex_tb (types.TracebackType | None)
- Return type:
Literal[False]
- __repr__() str [source]
Produce a human-readable version of the Scheduler (mostly for logging).
- Returns:
string – A human-readable version of the Scheduler.
- Return type:
str
- assign_trial_runners(trials: collections.abc.Iterable[mlos_bench.storage.base_storage.Storage.Trial]) None [source]
Assigns TrialRunners to the given Trials in batch.
The base class implements a simple round-robin scheduling algorithm for each Trial in sequence.
Subclasses can override this method to implement a more sophisticated policy. For instance:
    def assign_trial_runners(
        self,
        trials: Iterable[Storage.Trial],
    ) -> None:
        trial_runners_map = {}
        # Implement a more sophisticated policy here.
        # For example, assign the Trial to the TrialRunner with the least
        # number of running Trials, or to the TrialRunner that hasn't
        # executed this TunableValues config yet.
        for (trial, trial_runner) in trial_runners_map.items():
            # Call the base class method to assign the TrialRunner in the Trial's metadata.
            trial.set_trial_runner(trial_runner)
        ...
- Parameters:
trials (Iterable[Storage.Trial]) – The trials to assign TrialRunners to.
- Return type:
None
- get_best_observation() tuple[dict[str, float] | None, mlos_bench.tunables.tunable_groups.TunableGroups | None] [source]
Get the best observation from the optimizer.
- Return type:
tuple[dict[str, float] | None, mlos_bench.tunables.tunable_groups.TunableGroups | None]
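A hedged usage sketch, assuming scheduler is an already-constructed Scheduler (subclass) instance whose optimization run has completed:

    # Query the best (score, config) pair seen so far; either may be None
    # if no trials completed successfully.
    best_score, best_config = scheduler.get_best_observation()
    if best_score is not None:
        print(f"Best scores: {best_score}")           # dict[str, float] of objective values
        print(f"Best tunable values: {best_config}")  # TunableGroups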
- get_trial_runner(trial: mlos_bench.storage.base_storage.Storage.Trial) mlos_bench.schedulers.trial_runner.TrialRunner [source]
Gets the TrialRunner associated with the given Trial.
- Parameters:
trial (Storage.Trial) – The trial to get the associated TrialRunner for.
- Return type:
mlos_bench.schedulers.trial_runner.TrialRunner
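A hedged sketch of how assign_trial_runners() and get_trial_runner() relate, assuming scheduler is a Scheduler instance and pending_trials is a list of Storage.Trial objects obtained elsewhere:

    # Assign runners to a batch of pending trials (round-robin by default),
    # then look up the runner recorded for each trial.
    scheduler.assign_trial_runners(pending_trials)
    for trial in pending_trials:
        runner = scheduler.get_trial_runner(trial)
        # ... dispatch the trial to its runner (scheduler-specific) ...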
- load_tunable_config(config_id: int) mlos_bench.tunables.tunable_groups.TunableGroups [source]
Load the existing tunable configuration from the storage.
- Parameters:
config_id (int)
- Return type:
mlos_bench.tunables.tunable_groups.TunableGroups
- not_done() bool [source]
Check the stopping conditions.
By default, stop when the optimizer converges or the maximum number of trials is reached.
- Return type:
bool
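A hedged sketch of the kind of loop a derived Scheduler might run inside its start() method, using not_done() as the stopping condition; the optimizer.suggest() call and the overall flow are assumptions about a typical subclass, not a description of the base class:

    # Keep going until the optimizer converges or the trial limit is hit.
    while scheduler.not_done():
        tunables = scheduler.optimizer.suggest()  # ask the Optimizer for a new config (assumed API)
        scheduler.schedule_trial(tunables)        # queue trial(s) for that config
        # ... assign TrialRunners and execute the queued trials via run_trial() ...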
- abstract run_trial(trial: mlos_bench.storage.base_storage.Storage.Trial) None [source]
Set up and run a single trial.
Save the results in the storage.
- Parameters:
trial (Storage.Trial) – The trial to set up and run.
- Return type:
None
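A hedged sketch of what a concrete subclass's run_trial() override might do; the TrialRunner.run_trial() call is an assumption about the TrialRunner API, which is not documented on this page:

    from mlos_bench.schedulers.base_scheduler import Scheduler
    from mlos_bench.storage.base_storage import Storage

    class MyScheduler(Scheduler):
        # Hypothetical subclass, for illustration only.

        def run_trial(self, trial: Storage.Trial) -> None:
            # Look up the TrialRunner previously assigned to this trial.
            trial_runner = self.get_trial_runner(trial)
            # Delegate execution (and saving of results) to the runner
            # (assumed TrialRunner method).
            trial_runner.run_trial(trial)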
- schedule_trial(tunables: mlos_bench.tunables.tunable_groups.TunableGroups) None [source]
Add a configuration to the queue of trials.
- Parameters:
tunables (TunableGroups) – The tunable configuration to add to the queue of trials.
- Return type:
None
- teardown() None [source]
Tear down the TrialRunners/Environment(s).
Call it after .start() completes, while still inside the scheduler context.
- Return type:
None
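Since Scheduler derives from contextlib.AbstractContextManager, a hedged usage sketch might look like the following; the start() method is referenced above but not documented on this page, so treating it as a no-argument call is an assumption:

    # Enter the scheduler context, run the scheduling loop, then tear down.
    with scheduler:
        scheduler.start()     # run the scheduling loop (signature assumed)
        scheduler.teardown()  # tear down TrialRunners/Environments after start() completes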
- property environments: collections.abc.Iterable[mlos_bench.environments.base_environment.Environment][source]
Gets the Environment from each of the TrialRunners.
- property experiment: mlos_bench.storage.base_storage.Storage.Experiment | None[source]
Gets the Experiment Storage.
- Return type:
mlos_bench.storage.base_storage.Storage.Experiment | None
- property max_trials: int[source]
Gets the maximum number of trials to run for a given experiment, or -1 for no limit.
- Return type:
int
- property optimizer: mlos_bench.optimizers.base_optimizer.Optimizer[source]
Gets the Optimizer.
- Return type:
mlos_bench.optimizers.base_optimizer.Optimizer
- property ran_trials: list[mlos_bench.storage.base_storage.Storage.Trial][source]
Get the list of trials that were run.
- Return type:
list[mlos_bench.storage.base_storage.Storage.Trial]
- property root_environment: mlos_bench.environments.base_environment.Environment[source]
Gets the root (prototypical) Environment from the first TrialRunner.
Notes
All TrialRunners share the same Environment config; each is made unique by the unique trial_runner_id assigned to its Environment's global_config.
- Return type:
mlos_bench.environments.base_environment.Environment
- property storage: mlos_bench.storage.base_storage.Storage[source]
Gets the Storage.
- Return type:
mlos_bench.storage.base_storage.Storage
- property trial_config_repeat_count: int[source]
Gets the number of trials to run for a given config.
- Return type:
int
- property trial_count: int[source]
Gets the current number of trials run for the experiment.
- Return type:
int
- property trial_runners: dict[int, mlos_bench.schedulers.trial_runner.TrialRunner][source]
Gets the TrialRunners, keyed by their trial runner IDs.
- Return type:
dict[int, mlos_bench.schedulers.trial_runner.TrialRunner]
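A hedged sketch of inspecting the runners and environments exposed by these properties, assuming scheduler is a constructed Scheduler instance:

    # trial_runners maps integer trial runner IDs to TrialRunner instances.
    for runner_id, runner in scheduler.trial_runners.items():
        print(f"TrialRunner {runner_id}: {runner}")
    # environments yields the Environment of each TrialRunner.
    for env in scheduler.environments:
        print(f"Environment: {env}")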