Overview
You will need to configure the LLM connection and authorization secrets. You can use remote models (such as OpenAI, Azure, etc.) and local models (such as Ollama, Jan, LMStudio, etc.) with GenAIScript.
Model selection
The model used by the script is configured through the model field in the script function. The model name is formatted as provider:model-name, where provider is the LLM provider and model-name is provider specific.
script({ model: "openai:gpt-4o" })
Large, small, vision models
You can also use the small, large, and vision model aliases to use the default configured small, large, and vision-enabled models.
Large models are typically in the OpenAI gpt-4 reasoning range and can be used for more complex tasks.
Small models are in the OpenAI gpt-4o-mini range, and are useful for quick and simple tasks.
script({ model: "small" });
script({ model: "large" });
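The vision alias works the same way; as a minimal sketch, a script that needs image understanding can simply request whichever vision-enabled model your configuration resolves the alias to:
script({ model: "vision" })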
The model aliases can also be overridden from the CLI run command, through environment variables, or in a configuration file. Learn more about model aliases.
genaiscript run ... --model large_model_id --small-model small_model_id
or by adding the GENAISCRIPT_MODEL_LARGE and GENAISCRIPT_MODEL_SMALL environment variables.
GENAISCRIPT_MODEL_LARGE="azure_serverless:..."
GENAISCRIPT_MODEL_SMALL="azure_serverless:..."
GENAISCRIPT_MODEL_VISION="azure_serverless:..."
You can also configure the default aliases for a given LLM provider by using the provider argument. The defaults are documented in this page and printed to the console output.
script({ provider: "openai" });
genaiscript run ... --provider openai
Model aliases
You can define any alias for your model (only alphanumeric characters are allowed) through environment variables named GENAISCRIPT_MODEL_ALIAS, where ALIAS is the alias you want to use.
GENAISCRIPT_MODEL_TINY=...
Model aliases are always lowercased when used in the script.
script({ model: "tiny" });
.env file and .env.genaiscript file
GenAIScript uses a .env file (and .env.genaiscript) to load secrets and configuration information into the process environment variables. GenAIScript supports multiple .env files to load configuration information.
- Create or update a .gitignore file in the root of your project and make sure it includes .env and .env.genaiscript. This ensures that you do not accidentally commit your secrets to your source control.
.gitignore
...
.env
.env.genaiscript
- Create a .env file in the root of your project, next to the .gitignore file.
- Update the .env file with the configuration information (see below).
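For example, a minimal .env file for the OpenAI provider could contain a single entry; the value shown here is a placeholder, and other providers use their own variables:
OPENAI_API_KEY="<your OpenAI API key>"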
Custom .env file location
You can specify a custom .env file location through the CLI or an environment variable.
- By default, GenAIScript loads the following .env files in order:
~/.env.genaiscript
./.env.genaiscript
./.env
- by adding the --env <...files> argument to the CLI. Each .env file is imported in order and may override previous values.
npx genaiscript ... --env .env .env.debug
- by setting the GENAISCRIPT_ENV_FILE environment variable.
GENAISCRIPT_ENV_FILE=".env.local" npx genaiscript ...
- by specifying the .env file location in a configuration file.
{
  "$schema": "https://microsoft.github.io/genaiscript/schemas/config.json",
  "envFile": [".env.local", ".env.another"]
}
No .env file
If you do not want to use a .env file, make sure to populate the environment variables of the genaiscript process with the configuration values.
Here are some common examples:
- Using bash syntax
OPENAI_API_KEY="value" npx --yes genaiscript run ...
- GitHub Action configuration
run: npx --yes genaiscript run ...
env:
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
configure command
The configure command is an interactive command to configure and validate the LLM connections.
npx genaiscript configure
Dev Containers on Windows
You can use Dev Containers to easily create a containerized development environment.
- Install WSL2
- Install Docker Desktop. Make sure the Docker service is running.
- Open Visual Studio Code
- Install the Dev Containers extension in VS Code
- Open the command palette (Ctrl+Shift+P) and type New Dev Container…
- Select the Node.JS & TypeScript image.
echo
The echo provider is a dry run LLM provider that returns the messages without calling any LLM. It is most useful for debugging when you want to see the resulting LLM request without sending it.
script({ model: "echo" })
Echo replies with the chat messages as markdown and JSON, which can be helpful for debugging.
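Since echo is addressed like any other model id, you can also trigger a dry run without editing the script by overriding the model from the run command; this sketch assumes the run command accepts the provider name as the model value, as with the other aliases shown above:
npx genaiscript run ... --model echo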
none
The none provider prevents the execution of the LLM. It is typically used on a top-level script that exclusively uses inline prompts.
script({ model: "none" })
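As a sketch of that pattern, the top-level script below declares the none provider and delegates the actual LLM call to an inline prompt through runPrompt; the small alias and the prompt text are illustrative assumptions:
script({ model: "none" })
// the inner prompt runs with its own model, while the outer script never calls an LLM
const { text } = await runPrompt(
  (_) => {
    _.$`Summarize the README in one sentence.`
  },
  { model: "small" } // illustrative alias; any configured model works
)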
Custom Provider (OpenAI compatible)
You can use a custom provider that is compatible with the OpenAI text generation API. This is useful for running LLMs on a local server or a different cloud provider.
For example, to define an ollizard provider, you need to set the OLLIZARD_API_BASE environment variable to the custom provider URL, and OLLIZARD_API_KEY if needed.
OLLIZARD_API_BASE=http://localhost:1234/v1
#OLLIZARD_API_KEY=...
Then you can use this provider like any other provider.
script({ model: "ollizard:llama3.2:1b" })
Model specific environment variables
You can provide different environment variables for each named model by using the PROVIDER_MODEL_API_... prefix or the PROVIDER_API_... prefix. The model name is capitalized and all non-alphanumeric characters are converted to _.
This allows you to have various sources of LLM computations for different models. For example, you can enable the ollama:phi3 model running locally while keeping the default openai model connection information.
OLLAMA_PHI3_API_BASE=http://localhost:11434/v1
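With that variable set, a script can then select the locally served model explicitly, following the provider:model-name convention described above (a minimal sketch):
script({ model: "ollama:phi3" })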
Running behind a proxy
You can set the HTTP_PROXY and/or HTTPS_PROXY environment variables to run GenAIScript behind a proxy.
HTTP_PROXY=http://proxy.example.com:8080
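If HTTPS traffic should also go through the proxy, set the companion variable as well (same placeholder host, shown for illustration):
HTTPS_PROXY=http://proxy.example.com:8080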
Checking your configuration
You can check your configuration by running the genaiscript info env command. It will display the current configuration information parsed by GenAIScript.
genaiscript info env