# Usage
## Installation
For the latest official release, install the pip package:

```bash
pip install microsoft-aurora
```
Or install from conda-forge with `conda` / `mamba`:

```bash
mamba install microsoft-aurora -c conda-forge
```
You can also install directly from GitHub:

```bash
git clone https://github.com/microsoft/aurora.git
cd aurora
```

Then create a new virtual environment and install the requirements:

```bash
virtualenv venv -p python3.10
source venv/bin/activate
make install
```
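To check that the installation succeeded, a quick sanity check is to import the model classes used on this page:

```python
# If this runs without error, the package is installed correctly.
from aurora import Aurora, AuroraSmall
```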
## One-Step Predictions
Making predictions with the model involves three steps:

1. prepare a batch of data,
2. construct the model and load a checkpoint, and
3. run the model on the batch.

We walk through these steps in order.
First, you must construct a batch of data.
A batch of data contains surface-level variables, static variables, atmospheric variables, and associated metadata.
This batch must be an `aurora.Batch`.
Here is an example that constructs a random batch:
```python
from datetime import datetime

import torch

from aurora import Batch, Metadata

batch = Batch(
    # Surface-level variables have shape (batch, history, lat, lon).
    surf_vars={k: torch.randn(1, 2, 17, 32) for k in ("2t", "10u", "10v", "msl")},
    # Static variables have shape (lat, lon).
    static_vars={k: torch.randn(17, 32) for k in ("lsm", "z", "slt")},
    # Atmospheric variables have shape (batch, history, level, lat, lon).
    atmos_vars={k: torch.randn(1, 2, 4, 17, 32) for k in ("z", "u", "v", "t", "q")},
    metadata=Metadata(
        # Latitudes run from 90 to -90 degrees.
        lat=torch.linspace(90, -90, 17),
        # Longitudes run from 0 up to, but excluding, 360 degrees.
        lon=torch.linspace(0, 360, 32 + 1)[:-1],
        # The time of the most recent input time step, for every batch element.
        time=(datetime(2020, 6, 1, 12, 0),),
        # Pressure levels of the atmospheric variables, in hPa.
        atmos_levels=(100, 250, 500, 850),
    ),
)
```
The exact form of a `Batch` will be explained in detail on the next page.
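As a quick illustration, you can inspect the tensors in the batch directly; the keys and shapes below simply follow the random batch constructed above:

```python
# Two-meter temperature input: (batch, history, lat, lon).
print(batch.surf_vars["2t"].shape)  # torch.Size([1, 2, 17, 32])

# Geopotential at all four pressure levels: (batch, history, level, lat, lon).
print(batch.atmos_vars["z"].shape)  # torch.Size([1, 2, 4, 17, 32])
```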
Second, you need to construct a model and load a checkpoint.
There are various versions of Aurora, which differ both in model size and in the data the model was trained on.
The regular version of Aurora can be constructed as follows:
```python
from aurora import Aurora

model = Aurora()
```
In this example, however, we use a smaller version:
```python
from aurora import AuroraSmall

model = AuroraSmall()
```
A checkpoint can then be loaded with `model.load_checkpoint` by specifying a checkpoint file in a HuggingFace repository.
In this case, we use `aurora-0.25-small-pretrained.ckpt` from the repository `microsoft/aurora`:
```python
model.load_checkpoint("microsoft/aurora", "aurora-0.25-small-pretrained.ckpt")
```
Typically, you will want to set the model to evaluation mode, which disables mechanisms such as dropout:
```python
model.eval()
```
A detailed overview of all available models is given here.
Finally, you are ready to run the model!
```python
model = model.to("cuda")

with torch.inference_mode():
    pred = model.forward(batch)
```
Predictions are also of the form of an `aurora.Batch`.
For example, `pred.surf_vars["2t"]` gives the predictions for two-meter temperature.
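For instance, to get the predicted two-meter temperature onto the CPU as a NumPy array for downstream use (a small sketch; the variable name is ours):

```python
# Move the predicted two-meter temperature to the CPU and convert to NumPy.
t2m_pred = pred.surf_vars["2t"].to("cpu").numpy()
```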
You will need approximately 40 GB of GPU memory to run the regular model on global 0.25-degree data.
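If no GPU is available, a simple fallback is to select the device dynamically (a sketch; the small model will still run on the CPU, just much more slowly):

```python
# Fall back to the CPU when no GPU is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
```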
## Autoregressive Roll-Outs
To make predictions for more than one step ahead, you can apply the model autoregressively, feeding every prediction back in as the next input.
This can be done with `aurora.rollout`:
```python
from aurora import rollout

model = model.to("cuda")

with torch.inference_mode():
    preds = [pred.to("cpu") for pred in rollout(model, batch, steps=10)]
```
In the list comprehension, we immediately move every prediction to the CPU to prevent GPU memory from building up.
Every element of `preds` is again of the form of an `aurora.Batch`.
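To treat the roll-out as one time series, you could, for example, concatenate the per-step predictions along the time dimension (a sketch, assuming each prediction carries a single time step; `t2m_series` is our own name):

```python
import torch

# Stack the ten two-meter temperature predictions along the time dimension,
# giving a tensor of shape (batch, steps, lat, lon).
t2m_series = torch.cat([p.surf_vars["2t"] for p in preds], dim=1)
print(t2m_series.shape)  # e.g. torch.Size([1, 10, 17, 32])
```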