Configuration
You will need to configure the LLM connection and authorization secrets.
Model selection
The model used by the script is configured through the `model` field in the `script` function. The model name is formatted as `provider:model-name`, where `provider` is the LLM provider and `model-name` is provider-specific.
```js
script({ model: "openai:gpt-4" })
```
The model can also be overridden from the CLI run command.
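As a sketch of such an override, assuming the CLI's `run` command accepts a `--model` flag (the script name below is a placeholder):

```sh
# Override the script's model for a single run; "my-script" is illustrative
genaiscript run my-script --model openai:gpt-4o
```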
.env file
GenAIScript uses a `.env` file to store the secrets.
Create or update a `.gitignore` file in the root of your project and make sure it includes `.env`. This ensures that you do not accidentally commit your secrets to your source control.

```text
...
.env
```

Create a `.env` file in the root of your project.

- .gitignore
- .env

Update the `.env` file with the configuration information (see below).
OpenAI
This provider, `openai`, is the default provider. It uses the `OPENAI_API_...` environment variables.
Create a new secret key from the OpenAI API Keys portal.
Update the `.env` file with the secret key.

```text
OPENAI_API_KEY=sk_...
```

Set the `model` field in `script` to the model you want to use.

```js
script({
    model: "openai:gpt-4o",
    ...
})
```
Azure OpenAI
The Azure OpenAI provider, `azure`, uses the `AZURE_OPENAI_...` environment variables.
You can use a managed identity (recommended) or an API key to authenticate with the Azure OpenAI service.
Managed Identity (Entra ID)
Open your Azure OpenAI resource.
Navigate to Access Control, then View My Access. Make sure your user or service principal has the Cognitive Services OpenAI User/Contributor role. If you get a `401` error, this is typically where you will fix it.
Navigate to Resource Management, then Keys and Endpoint.
Update the `.env` file with the endpoint.

```text
AZURE_OPENAI_ENDPOINT=https://....openai.azure.com
```

Navigate to Deployments and make sure your LLM is deployed, then copy the `deployment-id`; you will need it in the script.
Update the `model` field in the `script` function to match the model deployment name in your Azure resource.

```js
script({
    model: "azure:deployment-id",
    ...
})
```
Visual Studio Code
Visual Studio Code will ask you to allow using the Microsoft account and will then open a browser where you can choose the user or service principal.
- If you are getting `401` errors after a while, try signing out in the user menu (lower left in Visual Studio Code) and signing back in.
CLI
Log in with the Azure CLI, then use the CLI as usual.

```sh
az login
```
API Key
Open your Azure OpenAI resource and navigate to Resource Management, then Keys and Endpoint.
Update the `.env` file with the secret key (Key 1 or Key 2) and the endpoint.

```text
AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_API_ENDPOINT=https://....openai.azure.com
```

Open Azure AI Studio, select Deployments, and make sure your LLM is deployed, then copy the `deployment-id`; you will need it in the script.
Update the `model` field in the `script` function to match the model deployment name in your Azure resource.

```js
script({
    model: "azure:deployment-id",
    ...
})
```
Local Models
There are many projects that allow you to run models locally on your machine, or in a container.
LocalAI
LocalAI acts as a drop-in replacement REST API compatible with the OpenAI API specification for local inferencing. It uses free open-source models and runs on CPUs.
Because LocalAI acts as an OpenAI replacement, you can see the model name mapping used in the container; for example, `gpt-4` is mapped to `phi-2`.
Install Docker. See the LocalAI documentation for more information.
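A minimal sketch of starting LocalAI with Docker; the image tag below (`localai/localai:latest-aio-cpu`) is an assumption based on the LocalAI docs, so verify the current tag there:

```sh
# Start LocalAI on port 8080 (image tag is an assumption; check the LocalAI docs)
docker run -p 8080:8080 localai/localai:latest-aio-cpu
```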
Update the `.env` file and set the API type to `localai`.

```text
OPENAI_API_TYPE=localai
```
Ollama
Ollama is a desktop application that lets you download and run models locally.
Running tools locally may require additional GPU resources depending on the model you are using.
Use the `ollama` provider to access Ollama models.
Start the Ollama application, or run:

```sh
ollama serve
```

Update your script to use the `ollama:phi3` model.

```js
script({
    ...,
    model: "ollama:phi3",
})
```
Llamafile
Llamafile (https://llamafile.ai/) is a single-file desktop application that allows you to run an LLM locally.
The provider is `llamafile` and the model name is ignored.
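For example, a script using the `llamafile` provider might look like this; since the model name is ignored, the name below is an arbitrary placeholder:

```js
script({
    model: "llamafile:anything",
})
```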
Jan, LMStudio, LLaMA.cpp
Jan, LMStudio, and LLaMA.cpp also allow running models locally or interfacing with other LLM vendors.
Update the `.env` file with the local server information.

```text
OPENAI_API_BASE=http://localhost:...
```
Model specific environment variables
You can provide different environment variables for each named model by using the `PROVIDER_MODEL_API_...` or `PROVIDER_API_...` prefix. The model name is capitalized and all non-alphanumeric characters are converted to `_`.
This allows you to use different sources of LLM computation for different models. For example, to enable the `ollama:phi3` model running locally while keeping the default `openai` model connection information:

```text
OLLAMA_PHI3_API_BASE=http://localhost:11434/v1
```
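To illustrate the naming rule, here is a small sketch; the `modelEnvPrefix` helper is hypothetical, not part of GenAIScript, and only mirrors the capitalization and `_` substitution described above:

```javascript
// Hypothetical helper: derive the environment-variable prefix for a model name.
// Capitalizes each part and replaces non-alphanumeric characters with "_".
function modelEnvPrefix(model) {
  const normalize = (s) => s.toUpperCase().replace(/[^A-Z0-9]/g, "_")
  const [provider, name] = model.split(":")
  return `${normalize(provider)}_${normalize(name)}_API_`
}

console.log(modelEnvPrefix("ollama:phi3")) // OLLAMA_PHI3_API_
```

With this rule, `ollama:phi3` yields variables such as `OLLAMA_PHI3_API_BASE`, matching the example above.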
Next steps
Write your first script.