Prepares data with various feature engineering recipes to create features before training models.
prep_data(
  run_info,
  input_data,
  combo_variables,
  target_variable,
  date_type,
  forecast_horizon,
  external_regressors = NULL,
  hist_start_date = NULL,
  hist_end_date = NULL,
  combo_cleanup_date = NULL,
  fiscal_year_start = 1,
  clean_missing_values = TRUE,
  clean_outliers = FALSE,
  box_cox = FALSE,
  stationary = TRUE,
  forecast_approach = "bottoms_up",
  parallel_processing = NULL,
  num_cores = NULL,
  target_log_transformation = FALSE,
  fourier_periods = NULL,
  lag_periods = NULL,
  rolling_window_periods = NULL,
  recipes_to_run = NULL,
  multistep_horizon = FALSE
)
run_info: Run info created with set_run_info().
input_data: A standard data frame, tibble, or sparklyr Spark data frame of historical time series data. Can also include external regressors for both historical and future periods.
combo_variables: List of column headers within input data used to separate individual time series.
target_variable: The column header within input data of the variable you want to forecast, formatted as a character value.
date_type: The date granularity of the input data. Finn accepts the following as a character string: day, week, month, quarter, year.
forecast_horizon: Number of periods to forecast into the future.
external_regressors: List of column headers within input data to be used as features in multivariate models.
hist_start_date: Date value of when your input_data starts. Default of NULL uses the earliest date value in input_data.
hist_end_date: Date value of when your input_data ends. Default of NULL uses the latest date value in input_data; see the sketch after this argument list for a call that sets it explicitly.
combo_cleanup_date: Date value used to remove individual time series that do not contain non-zero values after that date. Default of NULL does not remove any time series and attempts to forecast all of them.
fiscal_year_start: Month number of the start of the fiscal year of the input data, which aids in building out date features. Formatted as a numeric value. Default of 1 assumes the fiscal year starts in January.
clean_missing_values: If TRUE, cleans missing values. Values are only imputed for gaps within an existing series; missing periods at the beginning or end of a series are not imputed but are instead filled with 0.
clean_outliers: If TRUE, outliers are cleaned and imputed with values more in line with historical data.
box_cox: If TRUE, applies a Box-Cox transformation to normalize variance in the data.
stationary: If TRUE, applies differencing to make the data stationary.
forecast_approach: How the forecast is created. The default of 'bottoms_up' trains models for each individual time series. A value of 'grouped_hierarchy' creates a grouped time series to forecast, while 'standard_hierarchy' creates a more traditional hierarchical time series; both are based on the hts package.
parallel_processing: Default of NULL runs no parallel processing and forecasts each individual time series one after another. A value of 'local_machine' leverages all cores of the machine Finn is running on. A value of 'spark' runs time series in parallel on a Spark cluster in Azure Databricks/Synapse.
num_cores: Number of cores to use when parallel processing is set up. Used when running parallel computations on a local machine or within Azure. Default of NULL uses the total number of cores on the machine minus one, which is also the maximum allowed value.
target_log_transformation: If TRUE, log transforms the target variable before training models.
fourier_periods: List of values to use in creating Fourier series features. Default of NULL automatically chooses these values based on the date_type.
lag_periods: List of values to use in creating lag features. Default of NULL automatically chooses these values based on date_type.
rolling_window_periods: List of values to use in creating rolling window features. Default of NULL automatically chooses these values based on date_type.
recipes_to_run: List of recipes to run on multivariate models that can run different recipes. A value of NULL runs all recipes, but only runs the R1 recipe for weekly and daily date types. A value of "all" runs all recipes regardless of date type. A list like c("R1") or c("R2") runs only models with the R1 or R2 recipe.
multistep_horizon: If TRUE, uses a multistep horizon approach when training multivariate models with the R1 recipe.
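For orientation, here is a sketch of a fuller call that exercises several of the optional arguments above. It assumes run_info and data_tbl are created as in the example at the bottom of this page; the specific dates, horizon, and period values are illustrative assumptions for monthly data, not package defaults.

# Sketch only: custom history window, outlier cleaning, and explicit
# feature-engineering periods. All values below are illustrative assumptions.
prep_data(run_info,
  input_data = data_tbl,
  combo_variables = c("id"),
  target_variable = "value",
  date_type = "month",
  forecast_horizon = 6,
  hist_end_date = as.Date("2015-06-01"),
  combo_cleanup_date = as.Date("2014-01-01"),
  clean_outliers = TRUE,
  fourier_periods = c(3, 6, 12),
  lag_periods = c(1, 2, 3, 6, 12),
  rolling_window_periods = c(3, 6, 12),
  recipes_to_run = c("R1", "R2")
)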
No return object. Feature-engineered data is written to disk based on the output locations provided in set_run_info().
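Since nothing is returned to the R session, downstream steps read the prepped features back from those output locations. A minimal retrieval sketch, assuming a companion helper named get_prepped_data() with a recipe argument (an assumption not documented on this page):

# Assumed helper and arguments; confirm against the package reference.
prepped_tbl <- get_prepped_data(run_info, recipe = "R1")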
# \donttest{
library(finnts) # package assumed from context; provides set_run_info() and prep_data()

data_tbl <- timetk::m4_monthly %>%
  dplyr::rename(Date = date) %>%
  dplyr::mutate(id = as.character(id)) %>%
  dplyr::filter(
    Date >= "2013-01-01",
    Date <= "2015-06-01"
  )
run_info <- set_run_info()
#> Finn Submission Info
#> • Experiment Name: finn_fcst
#> • Run Name: finn_fcst-20241029T144920Z
#>
prep_data(run_info,
  input_data = data_tbl,
  combo_variables = c("id"),
  target_variable = "value",
  date_type = "month",
  forecast_horizon = 3,
  recipes_to_run = "R1"
)
#> ℹ Prepping Data
#> ✔ Prepping Data [1.4s]
#>
# }
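When there are many individual time series, the same call can be parallelized on the local machine. A sketch reusing run_info and data_tbl from the example above; parallel_processing and num_cores are the documented arguments, and the core count of 4 is an arbitrary assumption for illustration.

# Sketch: run the feature engineering across local cores.
# num_cores = 4 is an assumption; adjust to your machine.
prep_data(run_info,
  input_data = data_tbl,
  combo_variables = c("id"),
  target_variable = "value",
  date_type = "month",
  forecast_horizon = 3,
  recipes_to_run = "R1",
  parallel_processing = "local_machine",
  num_cores = 4
)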