tune.analysis
ExperimentAnalysis Objects
class ExperimentAnalysis()
Analyze results from a Tune experiment.
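An ExperimentAnalysis object is typically obtained as the return value of tune.run(). A minimal sketch, assuming a function-based trainable; train_fn, the "score" metric name, and the search space are illustrative:

from ray import tune

def train_fn(config):
    # Report one result back to Tune; "score" is an illustrative metric name.
    tune.report(score=config["lr"] * 2)

analysis = tune.run(
    train_fn,
    config={"lr": tune.grid_search([0.01, 0.1])},
    metric="score",
    mode="max",
)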
best_trial
@property
def best_trial() -> Trial
Get the best trial of the experiment.
The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().
If you didn't pass these parameters, use get_best_trial(metric, mode, scope) instead.
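A minimal sketch using the analysis object from the example above (assumes metric and mode were passed to tune.run()):

best_trial = analysis.best_trial  # a Trial object
print(best_trial.config)
print(best_trial.last_result["score"])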
best_config
@property
def best_config() -> Dict
Get the config of the best trial of the experiment.
The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().
If you didn't pass these parameters, use get_best_config(metric, mode, scope) instead.
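Continuing the sketch above, best_config returns just the hyperparameter dictionary of that trial:

best_config = analysis.best_config  # e.g. {"lr": 0.1}
print(best_config)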
results
@property
def results() -> Dict[str, Dict]
Get the last result of all the trials of the experiment.
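A brief sketch; each dictionary key identifies a single trial (the exact key format is an assumption here), and each value is that trial's last reported result:

for trial_key, last_result in analysis.results.items():
    print(trial_key, last_result.get("score"))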
get_best_trial
def get_best_trial(metric: Optional[str] = None, mode: Optional[str] = None, scope: str = "last", filter_nan_and_inf: bool = True) -> Optional[Trial]
Retrieve the best trial object.
Compares all trials' scores on metric.
If metric is not specified, self.default_metric will be used.
If mode is not specified, self.default_mode will be used.
These values are usually initialized by passing the metric and mode parameters to tune.run().
Arguments:
metric
str - Key for trial info to order on. Defaults to self.default_metric.
mode
str - One of [min, max]. Defaults to self.default_mode.
scope
str - One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial's final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial's min/max score for metric based on mode, and compare trials based on mode=[min,max].
filter_nan_and_inf
bool - If True (default), NaN or infinite values are disregarded and these trials are never selected as the best trial.
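A hedged usage sketch, e.g. when metric and mode were not passed to tune.run(); the "score" metric name is illustrative:

best_trial = analysis.get_best_trial(metric="score", mode="max", scope="last")
if best_trial is not None:  # may be None, e.g. if every trial's metric was NaN/inf and filtered out
    print(best_trial.last_result["score"])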
get_best_config
def get_best_config(metric: Optional[str] = None, mode: Optional[str] = None, scope: str = "last") -> Optional[Dict]
Retrieve the config of the best trial.
Compares all trials' scores on metric.
If metric is not specified, self.default_metric will be used.
If mode is not specified, self.default_mode will be used.
These values are usually initialized by passing the metric and mode parameters to tune.run().
Arguments:
metric
str - Key for trial info to order on. Defaults to self.default_metric.
mode
str - One of [min, max]. Defaults to self.default_mode.
scope
str - One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, only look at each trial's final step for metric, and compare across trials based on mode=[min,max]. If scope=avg, consider the simple average over all steps for metric and compare across trials based on mode=[min,max]. If scope=last-5-avg or scope=last-10-avg, consider the simple average over the last 5 or 10 steps for metric and compare across trials based on mode=[min,max]. If scope=all, find each trial's min/max score for metric based on mode, and compare trials based on mode=[min,max].
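A similar sketch for get_best_config, here averaging the last 5 reported steps of the illustrative "score" metric:

best_config = analysis.get_best_config(metric="score", mode="max", scope="last-5-avg")
print(best_config)  # may be None if no trial qualified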
best_result
@property
def best_result() -> Dict
Get the last result of the best trial of the experiment.
The best trial is determined by comparing the last trial results using the metric and mode parameters passed to tune.run().
If you didn't pass these parameters, use get_best_trial(metric, mode, scope).last_result instead.
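A final sketch tying the two access paths together (illustrative "score" metric):

print(analysis.best_result)
# Roughly equivalent when metric/mode were not passed to tune.run():
trial = analysis.get_best_trial(metric="score", mode="max", scope="last")
print(trial.last_result)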