tune.analysis

ExperimentAnalysis Objects

class ExperimentAnalysis()

Analyze results from a Tune experiment.

best_trial

@property
def best_trial() -> Trial

Get the best trial of the experiment. The best trial is determined by comparing each trial's last result using the metric and mode parameters passed to tune.run(). If you didn't pass these parameters, use get_best_trial(metric, mode, scope) instead.
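
A minimal sketch of the intended usage (the trainable function, the "lr" search space, and the "loss" metric below are hypothetical, not part of this API):

from ray import tune

def trainable(config):
    # Hypothetical training loop: report a "loss" value at each step.
    for step in range(10):
        tune.report(loss=config["lr"] * (10 - step))

analysis = tune.run(
    trainable,
    config={"lr": tune.grid_search([0.01, 0.1])},
    metric="loss",
    mode="min",
)

print(analysis.best_trial)  # the Trial with the lowest final "loss"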

best_config

@property
def best_config() -> Dict

Get the config of the best trial of the experiment. The best trial is determined by comparing each trial's last result using the metric and mode parameters passed to tune.run(). If you didn't pass these parameters, use get_best_config(metric, mode, scope) instead.
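
Continuing the sketch above, best_config returns the hyperparameter dict of that same best trial:

best_config = analysis.best_config  # e.g. {"lr": 0.01} in the sketch above
print(best_config["lr"])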

results

@property
def results() -> Dict[str, Dict]

Get the last result of all the trials of the experiment, keyed by trial ID.
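
Assuming the analysis object from the first sketch, results maps each trial ID to that trial's last reported result dict:

for trial_id, last_result in analysis.results.items():
    print(trial_id, last_result["loss"])  # "loss" is the hypothetical metric above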

get_best_trial

def get_best_trial(metric: Optional[str] = None, mode: Optional[str] = None, scope: str = "last", filter_nan_and_inf: bool = True) -> Optional[Trial]

Retrieve the best trial object. Compares all trials' scores on metric. If metric is not specified, self.default_metric will be used. If mode is not specified, self.default_mode will be used. These values are usually initialized by passing the metric and mode parameters to tune.run().

Arguments:

  • metric str - Key for trial info to order on. Defaults to self.default_metric.
  • mode str - One of [min, max]. Defaults to self.default_mode.
  • scope str - One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, compare each trial's final reported value of metric. If scope=avg, compare the simple average of metric over all of a trial's steps. If scope=last-5-avg or scope=last-10-avg, compare the simple average of metric over the last 5 or 10 steps. If scope=all, take each trial's own min/max of metric (per mode) and compare those. In every case, trials are compared according to mode=[min,max]. An example follows this list.
  • filter_nan_and_inf bool - If True (default), NaN or infinite values are disregarded and these trials are never selected as the best trial.
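
For example (still assuming the hypothetical "loss" metric from the first sketch), to compare trials on the average of their last five reported steps rather than the final step alone:

trial = analysis.get_best_trial(metric="loss", mode="min", scope="last-5-avg")
if trial is not None:  # None when filter_nan_and_inf discards every trial
    print(trial.trial_id)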

get_best_config

def get_best_config(metric: Optional[str] = None, mode: Optional[str] = None, scope: str = "last") -> Optional[Dict]

Retrieve the config of the best trial. Compares all trials' scores on metric. If metric is not specified, self.default_metric will be used. If mode is not specified, self.default_mode will be used. These values are usually initialized by passing the metric and mode parameters to tune.run().

Arguments:

  • metric str - Key for trial info to order on. Defaults to self.default_metric.
  • mode str - One of [min, max]. Defaults to self.default_mode.
  • scope str - One of [all, last, avg, last-5-avg, last-10-avg]. If scope=last, compare each trial's final reported value of metric. If scope=avg, compare the simple average of metric over all of a trial's steps. If scope=last-5-avg or scope=last-10-avg, compare the simple average of metric over the last 5 or 10 steps. If scope=all, take each trial's own min/max of metric (per mode) and compare those. In every case, trials are compared according to mode=[min,max]. An example follows this list.
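
This is the method to reach for when metric and mode were not passed to tune.run(). A sketch, reusing the hypothetical trainable and "loss" metric from the first example:

analysis = tune.run(trainable, config={"lr": tune.grid_search([0.01, 0.1])})
best_config = analysis.get_best_config(metric="loss", mode="min", scope="last")
print(best_config)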

best_result

@property
def best_result() -> Dict

Get the last result of the best trial of the experiment. The best trial is determined by comparing each trial's last result using the metric and mode parameters passed to tune.run(). If you didn't pass these parameters, use get_best_trial(metric, mode, scope).last_result instead.
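
Assuming the analysis object from the first sketch, best_result is simply the last result dict of the best trial:

print(analysis.best_result["loss"])  # same as analysis.best_trial.last_result["loss"]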