Configuration

You will need to configure the LLM connection and authorization secrets.

Model selection

The model used by the script is configured through the model field in the script function. The model name is formatted as provider:model-name, where provider is the LLM provider and model-name is provider-specific.

script({
  model: "openai:gpt-4",
})

The model can also be overridden from the CLI run command.
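
For example, assuming the genaiscript CLI is installed and your script file is named myscript (a placeholder), a run along these lines overrides the model declared in the script:

Terminal window
npx genaiscript run myscript --model openai:gpt-4o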

.env file

GenAIScript uses a .env file to store the secrets.

  1. Create or update a .gitignore file in the root of your project and make sure it includes .env. This ensures that you do not accidentally commit your secrets to your source control.

    .gitignore
    ...
    .env
  2. Create a .env file in the root of your project.

    • .gitignore
    • .env
  3. Update the .env file with the configuration information (see below).

OpenAI

The openai provider is the default. It uses the OPENAI_API_... environment variables.

  1. Create a new secret key from the OpenAI API Keys portal.

  2. Update the .env file with the secret key.

    .env
    OPENAI_API_KEY=sk-...
  3. Set the model field in script to the model you want to use.

    script({
      model: "openai:gpt-4o",
      ...
    })

Azure OpenAI

The Azure OpenAI provider, azure, uses the AZURE_OPENAI_... environment variables. You can use a managed identity (recommended) or an API key to authenticate with the Azure OpenAI service.

Managed Identity (Entra ID)

  1. Open your Azure OpenAI resource.

  2. Navigate to Access Control, then View My Access. Make sure your user or service principal has the Cognitive Services OpenAI User/Contributor role. If you get a 401 error, a missing role assignment here is typically the cause.

  3. Navigate to Resource Management, then Keys and Endpoint.

  4. Update the .env file with the endpoint.

    .env
    AZURE_OPENAI_ENDPOINT=https://....openai.azure.com
  5. Navigate to Deployments and make sure that your LLM is deployed; copy the deployment-id, as you will need it in the script.

  6. Update the model field in the script function to match the model deployment name in your Azure resource.

    script({
      model: "azure:deployment-id",
      ...
    })

Visual Studio Code

Visual Studio Code will ask you to allow it to use your Microsoft account and then open a browser where you can choose the user or service principal.

  • If you are getting 401 errors after a while, try signing out in the user menu (lower left in Visual Studio Code) and back in.

CLI

Log in with the Azure CLI, then use the CLI as usual.

Terminal window
az login
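
Once logged in, the same run command should work, with the Azure credential picked up automatically; a sketch, where myscript is a placeholder and deployment-id must match your Azure deployment:

Terminal window
npx genaiscript run myscript --model azure:deployment-id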

API Key

  1. Open your Azure OpenAI resource and navigate to Resource Management, then Keys and Endpoint.

  2. Update the .env file with the secret key (Key 1 or Key 2) and the endpoint.

    .env
    AZURE_OPENAI_API_KEY=...
    AZURE_OPENAI_API_ENDPOINT=https://....openai.azure.com
  3. Open Azure AI Studio, select Deployments, and make sure that your LLM is deployed; copy the deployment-id, as you will need it in the script.

  4. Update the model field in the script function to match the model deployment name in your Azure resource.

    script({
      model: "azure:deployment-id",
      ...
    })

Local Models

There are many projects that allow you to run models locally on your machine, or in a container.

LocalAI

LocalAI acts as a drop-in replacement REST API that is compatible with the OpenAI API specifications for local inferencing. It uses free open-source models and runs on CPUs.

Because LocalAI acts as an OpenAI replacement, model names are mapped inside the container; for example, gpt-4 is mapped to phi-2.

  1. Install Docker. See the LocalAI documentation for more information.

  2. Update the .env file and set the API type to localai.

    .env
    OPENAI_API_TYPE=localai
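
With this setting, scripts keep using openai model names while requests are served locally. A minimal sketch, assuming the default LocalAI mapping in which gpt-4 resolves to phi-2:

script({
  // with OPENAI_API_TYPE=localai, this request is served locally by the
  // model mapped to "gpt-4" in the LocalAI container (phi-2 by default)
  model: "openai:gpt-4",
})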

Ollama

Ollama is a desktop application that lets you download and run models locally.

Running tools locally may require additional GPU resources depending on the model you are using.

Use the ollama provider to access Ollama models.

  1. Start the Ollama application, or start the server from a terminal:

    Terminal window
    ollama serve
  2. Update your script to use the ollama:phi3 model.

    script({
      ...,
      model: "ollama:phi3",
    })
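
If the phi3 model has not been downloaded yet, you may need to pull it first:

Terminal window
ollama pull phi3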

Llamafile

Llamafile (https://llamafile.ai/) is a single-file desktop application that allows you to run an LLM locally.

The provider is llamafile and the model name is ignored.
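
Since the model segment is ignored, any placeholder name works; a sketch (the name default is arbitrary):

script({
  // the llamafile provider ignores the model name, so "default" is a placeholder
  model: "llamafile:default",
})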

Jan, LMStudio, LLaMA.cpp

Jan, LMStudio, and LLaMA.cpp also allow running models locally or interfacing with other LLM vendors.

  1. Update the .env file with the local server information.

    .env
    OPENAI_API_BASE=http://localhost:...
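
For example, LMStudio's local server listens on port 1234 by default (an assumption; check the port reported in the tool's server settings), so the entry might look like:

.env
OPENAI_API_BASE=http://localhost:1234/v1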

Model-specific environment variables

You can provide different environment variables for each named model by using the PROVIDER_MODEL_API_... prefix or PROVIDER_API_... prefix. The model name is capitalized and all non-alphanumeric characters are converted to _.

This allows you to use different LLM sources for different models. For example, you can run the ollama:phi3 model locally while keeping the default openai model connection information.

.env
OLLAMA_PHI3_API_BASE=http://localhost:11434/v1
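
Per the naming rule above, ollama:phi3 becomes the OLLAMA_PHI3_ prefix; a hypothetical model name with other characters, such as ollama:llama3.1, would map to OLLAMA_LLAMA3_1_API_BASE. The script itself only needs the model name:

script({
  // the connection is resolved from OLLAMA_PHI3_API_BASE in .env
  model: "ollama:phi3",
})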

Next steps

Write your first script.
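
As a preview, a first script can be little more than a model choice and a prompt; a minimal sketch, assuming GenAIScript's standard $ prompt template helper:

script({
  model: "openai:gpt-4o",
})
// the $ template writes the prompt sent to the model
$`Say "hello world".`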