Create ensemble model forecasts
ensemble_models(
  run_info,
  parallel_processing = NULL,
  inner_parallel = FALSE,
  num_cores = NULL,
  seed = 123
)
run_info: Run info created by the set_run_info() function.
parallel_processing: Default of NULL runs no parallel processing and forecasts each individual time series one after another. 'local_machine' leverages all cores on the machine Finn is running on. 'spark' runs time series in parallel on a Spark cluster in Azure Databricks or Azure Synapse. See the usage sketch after these argument descriptions.
inner_parallel: Run components of the forecast process inside a specific time series in parallel. Can only be used if parallel_processing is set to NULL or 'spark'.
num_cores: Number of cores to use when parallel processing is enabled, either on a local machine or within Azure. Default of NULL uses the total number of cores on the machine minus one; the value cannot exceed that amount.
seed: Seed for the random number generator. Numeric value.
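A minimal usage sketch combining these arguments; the chosen values (local-machine parallelism, four cores) are illustrative assumptions, not requirements.

# Sketch only: run the ensemble step in parallel on the local machine.
# Argument values here are assumptions to adapt to your own hardware.
ensemble_models(
  run_info,
  parallel_processing = "local_machine",  # use the local machine's cores
  num_cores = 4,                          # cap the number of worker cores
  seed = 123
)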
Ensemble model outputs are written to disk.
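Nothing is returned to the R session. A hedged sketch of reading the results back, assuming the get_trained_models() retrieval helper documented elsewhere in finnts (verify the name against your installed version):

# Assumed helper from the wider finnts API (not defined on this page):
# pull the individual and ensemble model results for this run back into R.
ensemble_results <- get_trained_models(run_info)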
# \donttest{
data_tbl <- timetk::m4_monthly %>%
  dplyr::rename(Date = date) %>%
  dplyr::mutate(id = as.character(id)) %>%
  dplyr::filter(
    Date >= "2013-01-01",
    Date <= "2015-06-01",
    id == "M750"
  )
run_info <- set_run_info()
#> Finn Submission Info
#> • Experiment Name: finn_fcst
#> • Run Name: finn_fcst-20241029T144740Z
#>
prep_data(run_info,
  input_data = data_tbl,
  combo_variables = c("id"),
  target_variable = "value",
  date_type = "month",
  forecast_horizon = 3
)
#> ℹ Prepping Data
#> ✔ Prepping Data [1.7s]
#>
prep_models(run_info,
  models_to_run = c("arima", "glmnet"),
  num_hyperparameters = 2
)
#> ℹ Creating Model Workflows
#> ✔ Creating Model Workflows [217ms]
#>
#> ℹ Creating Model Hyperparameters
#> ✔ Creating Model Hyperparameters [220ms]
#>
#> ℹ Creating Train Test Splits
#> ✔ Creating Train Test Splits [320ms]
#>
train_models(run_info,
  run_global_models = FALSE
)
#> ℹ Training Individual Models
#> ✔ Training Individual Models [11.7s]
#>
ensemble_models(run_info)
#> ℹ Training Ensemble Models
#> ✔ Training Ensemble Models [1.8s]
#>
# }
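In the broader Finn workflow, ensembling is typically followed by selecting the final models and retrieving the forecast. A hedged sketch of that continuation, assuming the final_models() and get_forecast_data() functions documented elsewhere in finnts:

# Continuation sketch (functions assumed from the wider finnts API):
final_models(run_info)                  # select the best model per time series
fcst_tbl <- get_forecast_data(run_info) # read the forecast output back from disk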