This function updates the forecast agent with the latest data and inputs.
If new time series are detected in the data (up to 20% of existing series,
with a floor of 10), simple forecasts are automatically created for them
using default local model inputs, without LLM involvement. If the number
of new series exceeds the cap, an error directs the user to run
iterate_forecast() instead.
update_forecast(
  agent_info,
  weighted_mape_goal = 0.1,
  allow_iterate_forecast = FALSE,
  max_iter = 3,
  parallel_processing = NULL,
  inner_parallel = FALSE,
  num_cores = NULL,
  seed = 123
)

Arguments:

agent_info: Agent info from set_agent_info().

weighted_mape_goal: Weighted MAPE goal the agent is trying to achieve for each time series.

allow_iterate_forecast: Logical indicating whether forecast iteration is allowed when poor performance is detected, meaning more than 40% of time series have a weighted MAPE over 20% worse than the previous agent run.

max_iter: Numeric indicating the maximum number of iterations if iterate_forecast() is run.

parallel_processing: Default of NULL runs no parallel processing and forecasts each individual time series one after another. 'local_machine' leverages all cores on the machine Finn is running on. 'spark' runs time series in parallel on a Spark cluster in Azure Databricks or Azure Synapse.

inner_parallel: Run components of the forecast process inside a specific time series in parallel. Can only be used when parallel_processing is set to NULL or 'spark'.

num_cores: Number of cores to use when parallel processing is enabled, either on a local machine or within Azure. Default of NULL uses the total number of cores on the machine minus one; the value cannot exceed that.

seed: Numeric value; seed for the random number generator.
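For example, the parallel options described above could be combined as follows. This is a sketch, not output from the package: the core count is illustrative, and `agent_info` is assumed to already exist from a prior set_agent_info() call.

```r
# forecast each time series on its own core of the local machine,
# capping usage at 4 cores (must be at most machine cores minus one)
update_forecast(
  agent_info = agent_info,
  weighted_mape_goal = 0.1,
  parallel_processing = "local_machine",
  num_cores = 4,
  seed = 123
)
```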
Returns: Nothing.
If individual time series fail during the global or local model update
process, they are automatically re-forecast using default local model
inputs (the same treatment as new time series). If more than 20% of
existing series (with a floor of 10) fail to update, an error is raised
directing the user to run iterate_forecast() instead.
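A sketch of opting into the iteration fallback explicitly rather than waiting for the error above; the parameter values are illustrative and `agent_info` is assumed to come from set_agent_info().

```r
# allow the agent to fall back to full iteration when more than 40% of
# series come back with a weighted MAPE over 20% worse than the prior run
update_forecast(
  agent_info = agent_info,
  weighted_mape_goal = 0.1,
  allow_iterate_forecast = TRUE,
  max_iter = 3
)
```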
if (FALSE) { # \dontrun{
# load example data
hist_data <- timetk::m4_monthly %>%
  dplyr::filter(date >= "2013-01-01") %>%
  dplyr::rename(Date = date) %>%
  dplyr::mutate(id = as.character(id))

# set up Finn project
project <- set_project_info(
  project_name = "Demo_Project",
  combo_variables = c("id"),
  target_variable = "value",
  date_type = "month"
)

# set up LLM
driver_llm <- ellmer::chat_azure_openai(model = "gpt-4o-mini")

# set up agent info
agent_info <- set_agent_info(
  project_info = project,
  driver_llm = driver_llm,
  input_data = hist_data,
  forecast_horizon = 6,
  hist_end_date = as.Date("2014-12-01")
)

# run the forecast iteration process
iterate_forecast(
  agent_info = agent_info,
  max_iter = 3,
  weighted_mape_goal = 0.03
)

# update the forecast with latest data and inputs
agent_info <- set_agent_info(
  project_info = project,
  driver_llm = driver_llm,
  input_data = hist_data,
  forecast_horizon = 6,
  hist_end_date = as.Date("2014-12-01"),
  overwrite = TRUE # required to update the agent with the latest data and inputs
)

update_forecast(
  agent_info = agent_info,
  weighted_mape_goal = 0.03
)
} # }