Generative AI for forecasting is powered by transformer architectures: neural networks designed to model long-range dependencies in sequences. A time-series transformer is pre-trained on massive, diverse datasets, learning a general “language of time series” (recurring structures, seasonality, and signal vs. noise). Because this knowledge is broadly applicable, the resulting foundation model can forecast new, unseen series immediately, often with strong accuracy, without task-specific training. This is zero-shot forecasting: you bring the data, the model brings general knowledge.
When you need domain‑specific nuance, you can fine‑tune the foundation model on your own historical data. Fine‑tuning adapts the pre‑trained knowledge to your context, typically improving accuracy and stability for your use cases while keeping training cost far lower than training from scratch.
Chronos2 is a foundation model for time series forecasting developed by Amazon.
Finnts leverages Chronos2 via a self-deployed Azure model: the model weights were downloaded from huggingface/chronos2 and deployed as an Azure ML endpoint. For detailed hosting steps, refer to the Azure ML documentation.
After the endpoint is up and running, set the base URL and API key as environment variables:
# In R (or add to .Renviron)
Sys.setenv(CHRONOS_API_URL = "https://xyz.inference.ml.azure.com/score")
Sys.setenv(CHRONOS_API_TOKEN = "your api key here")

Chronos2 is available as a model within the finnts unified workflow and can run as either a local (per time series) or a global (across all time series) model. When running as a global model, Chronos2 learns patterns across all your time series simultaneously.
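Before running the workflow, you can optionally verify that the endpoint variables are visible to your R session. This is a minimal sketch using base R only; it is not part of finnts itself:

```r
# Confirm both endpoint variables are set and non-empty before forecasting.
ok <- nzchar(Sys.getenv("CHRONOS_API_URL")) &&
  nzchar(Sys.getenv("CHRONOS_API_TOKEN"))
if (!ok) {
  stop("Set CHRONOS_API_URL and CHRONOS_API_TOKEN before training.")
}
```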
library(finnts)
# setup chronos2 api keys
hist_data <- timetk::m4_monthly %>%
  dplyr::filter(
    date >= "2010-01-01",
    id == "M2"
  ) %>%
  dplyr::rename(Date = date) %>%
  dplyr::mutate(id = as.character(id))
run_info <- set_run_info(
  project_name = "finnts_fcst",
  run_name = "finn_sub_component_run"
)
prep_data(
  run_info = run_info,
  input_data = hist_data,
  combo_variables = c("id"),
  target_variable = "value",
  date_type = "month",
  forecast_horizon = 6
)
prep_models(
  run_info = run_info,
  models_to_run = c("chronos2")
)
train_models(
  run_info = run_info,
  run_global_models = FALSE
)
final_models(run_info = run_info)
finn_output_tbl <- get_forecast_data(run_info = run_info)
head(finn_output_tbl)
# A tibble: 6 × 17
# Combo id Model_ID Model_Name Model_Type Recipe_ID Run_Type Train_Test_ID Best_Model Horizon Date Target Forecast lo_95 lo_80 hi_80 hi_95
# <chr> <chr> <chr> <chr> <chr> <chr> <chr> <dbl> <chr> <dbl> <date> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
# 1 M2 M2 chronos2--local--R1 chronos2 local R1 Future_Forecast 1 Yes 1 2015-07-01 NA 2342. 1573. 1840. 2844. 3111.
# 2 M2 M2 chronos2--local--R1 chronos2 local R1 Future_Forecast 1 Yes 2 2015-08-01 NA 2128. 1359. 1626. 2630. 2897.
# 3 M2 M2 chronos2--local--R1 chronos2 local R1 Future_Forecast 1 Yes 3 2015-09-01 NA 1849. 1080. 1347. 2351. 2618.
# 4 M2 M2 chronos2--local--R1 chronos2 local R1 Future_Forecast 1 Yes 4 2015-10-01 NA 1890. 1121. 1388. 2393. 2659.
# 5 M2 M2 chronos2--local--R1 chronos2 local R1 Future_Forecast 1 Yes 5 2015-11-01 NA 1868. 1098. 1365. 2370. 2637.
# 6 M2 M2 chronos2--local--R1 chronos2 local R1 Future_Forecast 1 Yes 6 2015-12-01 NA 1731. 961. 1228. 2233. 2500.

Azure ML Chronos2 endpoints have a minimum data size requirement of 3 rows, irrespective of data frequency. When using Azure ML endpoints, finnts automatically pads your data with zeros (backward from the earliest date) to meet this requirement, so your forecasts work even with smaller datasets. The padding only affects the data sent to the API; your original training data remains unchanged in the model fit object.
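The backward zero-padding can be illustrated with a small sketch. This assumes a monthly series and the 3-row minimum; `pad_history` is a hypothetical helper for illustration, and the real finnts implementation may differ in detail:

```r
# Minimal sketch: pad a too-short monthly series with zero-value rows,
# stepping backward one month from the earliest observed date.
pad_history <- function(df, min_rows = 3) {
  if (nrow(df) >= min_rows) return(df)
  n_pad <- min_rows - nrow(df)
  earliest <- min(df$Date)
  # seq.Date steps backward; drop the first element (the earliest date itself)
  pad_dates <- seq(earliest, by = "-1 month", length.out = n_pad + 1)[-1]
  pad_rows <- data.frame(Date = rev(pad_dates), value = 0)
  rbind(pad_rows, df)
}

short_df <- data.frame(Date = as.Date("2024-06-01"), value = 100)
padded <- pad_history(short_df)
# 3 rows: two zero-padded months, then the original observation
```

Only the padded copy would be sent to the endpoint; the original `short_df` is untouched.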
Chronos2 can run as either a local (per time series) or a global (across all time series) model. When running as a global model, Chronos2 learns patterns across all your time series simultaneously.
# Run Chronos2 as a global model
prep_models(
  run_info = run_info,
  models_to_run = c("chronos2")
)

train_models(
  run_info = run_info,
  run_global_models = TRUE # Chronos2 will train on all combos together
)

TimeGPT is a transformer-based foundation model for time series developed by Nixtla.
You can set up TimeGPT via either of the following:
Set the Azure base URL and API key.
# In R (or add to .Renviron)
Sys.setenv(NIXTLA_BASE_URL = "https://your-azure-deployed-timegen.azure.com/")
Sys.setenv(NIXTLA_API_KEY = "your_api_key_here")
# Note that the current version of nixtlar requires base url to end with "/"
# devtools::install_github("Nixtla/nixtlar")
library(nixtlar)
nixtla_client_setup(
  base_url = "Base URL here",
  api_key = "API key here"
)

If you use the API directly from Nixtla, the base URL is not required.
Sys.setenv(NIXTLA_API_KEY = "your_api_key_here")
# devtools::install_github("Nixtla/nixtlar")
library(nixtlar)
nixtla_set_api_key(api_key = "Your API key here")
# devtools::install_github("Nixtla/nixtlar")
library(nixtlar)
df <- nixtlar::electricity
# Forecast next 8 steps
fcst <- nixtla_client_forecast(
  df,
  h = 8,
  level = c(80, 95)
  # if using azure deployed timegen:
  # model = "azureai"
)
head(fcst)

TimeGPT is available as a model within the finnts unified workflow and can run as either a local (per time series) or a global (across all time series) model. When running as a global model, TimeGPT learns patterns across all your time series simultaneously.
library(finnts)
# setup timegpt api keys
# Checkout these data requirements provided by nixtla
# https://www.nixtla.io/docs/data_requirements/data_requirements
hist_data <- timetk::m4_monthly %>%
  dplyr::filter(
    date >= "2010-01-01",
    id == "M2"
  ) %>%
  dplyr::rename(Date = date) %>%
  dplyr::mutate(id = as.character(id))
run_info <- set_run_info(
  project_name = "finnts_fcst",
  run_name = "finn_sub_component_run"
)
prep_data(
  run_info = run_info,
  input_data = hist_data,
  combo_variables = c("id"),
  target_variable = "value",
  date_type = "month",
  forecast_horizon = 6
)
prep_models(
  run_info = run_info,
  models_to_run = c("timegpt")
)
train_models(
  run_info = run_info,
  run_global_models = FALSE
)
final_models(run_info = run_info)
finn_output_tbl <- get_forecast_data(run_info = run_info)
head(finn_output_tbl)
# A tibble: 6 x 17
# Combo id Model_ID Model_Name Model_Type Recipe_ID Run_Type Train_Test_ID Best_Model Horizon
# <chr> <chr> <chr> <chr> <chr> <chr> <chr> <dbl> <chr> <dbl>
# 1 M2 M2 timegpt--loc~ timegpt local R1 Future_~ 1 Yes 1
# 2 M2 M2 timegpt--loc~ timegpt local R1 Future_~ 1 Yes 2
# 3 M2 M2 timegpt--loc~ timegpt local R1 Future_~ 1 Yes 3
# 4 M2 M2 timegpt--loc~ timegpt local R1 Future_~ 1 Yes 4
# 5 M2 M2 timegpt--loc~ timegpt local R1 Future_~ 1 Yes 5
# 6 M2 M2 timegpt--loc~ timegpt local R1 Future_~ 1 Yes 6
# i 7 more variables: Date <date>, Target <dbl>, Forecast <dbl>, lo_95 <dbl>, lo_80 <dbl>,
# hi_80 <dbl>, hi_95 <dbl>

Azure AI TimeGPT endpoints have minimum data size requirements that depend on data frequency.
When using Azure AI endpoints, finnts automatically pads your data with zeros (backward from the earliest date) to meet these requirements, so your forecasts work even with smaller datasets. The padding only affects the data sent to the API; your original training data remains unchanged in the model fit object.
Note: Padding is only applied for Azure AI endpoints. The default Nixtla API does not have these constraints.
For forecasts exceeding two seasonal periods, TimeGPT automatically uses the timegpt-1-long-horizon model, which is optimized for longer forecast horizons. The threshold for “long horizon” depends on your data frequency. This selection happens automatically based on your forecast_horizon and date_type parameters; no additional configuration is needed.
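The "two seasonal periods" rule can be sketched in a few lines of R. The seasonal-period values per date_type below are my assumptions for illustration (e.g. 12 for monthly data), not confirmed finnts or TimeGPT internals:

```r
# Assumed seasonal periods per date_type (illustrative only)
seasonal_period <- c(day = 7, week = 52, month = 12, quarter = 4, year = 1)

# TRUE when the horizon exceeds two seasonal periods, i.e. when the
# long-horizon model would be selected under this rule
uses_long_horizon <- function(forecast_horizon, date_type) {
  forecast_horizon > 2 * seasonal_period[[date_type]]
}

uses_long_horizon(6, "month")  # 6 <= 24, standard model
uses_long_horizon(30, "month") # 30 > 24, long-horizon model
```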
TimeGPT supports hyperparameter tuning for its fine-tuning parameters:

- finetune_steps: number of fine-tuning steps (range: 0-200)
- finetune_depth: fine-tuning depth/layers (range: 1-5)

These parameters are automatically tuned when you set num_hyperparameters in prep_models(). The tuning process uses validation splits to find optimal values, then refits the model with the best parameters.
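As a rough illustration of what num_hyperparameters = 4 means, a tuner might sample four candidate combinations from the two documented ranges and keep the one with the lowest validation error. The grid values and sampling scheme below are hypothetical; finnts's actual search strategy may differ:

```r
# Hypothetical candidate grid over the documented parameter ranges
set.seed(123)
grid <- expand.grid(
  finetune_steps = c(0, 50, 100, 200), # documented range: 0-200
  finetune_depth = 1:5                 # documented range: 1-5
)

# num_hyperparameters = 4 -> evaluate four sampled candidates
candidates <- grid[sample(nrow(grid), 4), ]
# Each candidate would then be scored on validation splits,
# and the best one used for the final refit.
```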
prep_models(
  run_info = run_info,
  models_to_run = c("timegpt"),
  num_hyperparameters = 4 # Will tune finetune_steps and finetune_depth
)

TimeGPT can run as either a local (per time series) or a global (across all time series) model. When running as a global model, TimeGPT learns patterns across all your time series simultaneously.
# Run TimeGPT as a global model
prep_models(
  run_info = run_info,
  models_to_run = c("timegpt")
)

train_models(
  run_info = run_info,
  run_global_models = TRUE # TimeGPT will train on all combos together
)