GitHub Models

The GitHub Models provider, `github`, runs models through the GitHub Marketplace. This provider is useful for prototyping and is subject to rate limits depending on your subscription.

```js
script({ model: "github:openai/gpt-4o" });
```

If you are running from a GitHub Codespace, the token is already configured for you and everything just works.

As of April 2025, you can use the GitHub Actions token (GITHUB_TOKEN) to call AI models directly inside your workflows.

  1. Ensure that the `models` permission is enabled in your workflow configuration.

     ```yaml
     # genai.yml
     permissions:
       models: read
     ```

  2. Pass the `GITHUB_TOKEN` when running genaiscript.

     ```yaml
     # genai.yml
     run: npx -y genaiscript run ...
     env:
       GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
     ```
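Put together, a minimal workflow file might look like the following sketch; the workflow, job, step, and script names here are illustrative assumptions, not part of the official docs:

```yaml
# genai.yml — illustrative sketch; job, step, and script names are assumptions
name: genai
on: push
permissions:
  models: read
jobs:
  genai:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run GenAIScript
        run: npx -y genaiscript run myscript
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```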

Read more in the GitHub Documentation

If you are not using GitHub Actions or Codespaces, you can use your own token to access the models.

  1. Create a GitHub personal access token. The token should not have any scopes or permissions.

  2. Update the `.env` file with the token.

     ```txt
     # .env
     GITHUB_TOKEN=...
     ```

To configure a specific model:

  1. Open the GitHub Marketplace and find the model you want to use.

  2. Copy the model name from the JavaScript/Python samples

     ```js
     const modelName = "microsoft/Phi-3-mini-4k-instruct";
     ```

     to configure your script.

     ```js
     script({
       model: "github:microsoft/Phi-3-mini-4k-instruct",
     });
     ```
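The model string follows a `provider:publisher/model` shape. A small sketch of splitting it apart (this helper is hypothetical, not part of GenAIScript):

```javascript
// Hypothetical helper (not part of GenAIScript): split a
// "github:publisher/model" string into its parts.
function parseModelId(id) {
  const [provider, rest] = id.split(":"); // e.g. "github", "microsoft/Phi-3-mini-4k-instruct"
  const slash = rest.indexOf("/");
  return {
    provider,
    publisher: rest.slice(0, slash),
    name: rest.slice(slash + 1),
  };
}

// Logs the parsed parts of the identifier.
console.log(parseModelId("github:microsoft/Phi-3-mini-4k-instruct"));
```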

If you are already using the `GITHUB_TOKEN` variable in your script and need a different one for GitHub Models, you can use the `GITHUB_MODELS_TOKEN` variable instead.

If you don’t have environment variables configured, GenAIScript will attempt to use the GitHub CLI (gh) to retrieve your authentication token.

  1. Install the GitHub CLI from https://cli.github.com/ and ensure it’s available in your PATH.

  2. Authenticate with GitHub using the CLI:

     ```sh
     gh auth login
     ```

This approach is convenient for local development but requires that you have the GitHub CLI installed and authenticated.

By default, GitHub Models uses the current actor to run inference. You can specify an organization in the `GITHUB_MODELS_ORG` environment variable to run inference on behalf of that organization instead.

```txt
# .env
GITHUB_MODELS_ORG=my-org
```

The actor must be a member of the organization, and models must be enabled for that organization (see the documentation).

Some models, such as OpenAI's o1 family, do not currently support streaming or system prompts. GenAIScript handles this internally.

```js
script({
  model: "github:openai/o1-mini",
});
```
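In a full script, the model declaration is followed by the prompt itself. A minimal sketch using GenAIScript's `$` template helper (the prompt text is illustrative; this file runs only under the genaiscript CLI, not plain Node):

```js
// Illustrative GenAIScript file; the prompt text is an assumption.
script({ model: "github:openai/o1-mini" });

// $ appends text to the prompt sent to the model.
$`Summarize the following file in one paragraph.`
```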

Aliases

The following model aliases are attempted by default in GenAIScript.

| Alias           | Model identifier               |
| --------------- | ------------------------------ |
| large           | openai/gpt-4.1                 |
| small           | openai/gpt-4.1-mini            |
| tiny            | openai/gpt-4.1-nano            |
| vision          | openai/gpt-4.1                 |
| reasoning       | openai/o3                      |
| reasoning_small | openai/o3-mini                 |
| embeddings      | openai/text-embedding-3-small  |
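The table above can be read as a plain lookup from alias to model identifier. A sketch of that resolution (the actual logic lives inside GenAIScript; this map only restates the table):

```javascript
// Default GitHub Models aliases, copied from the table above.
const GITHUB_ALIASES = {
  large: "openai/gpt-4.1",
  small: "openai/gpt-4.1-mini",
  tiny: "openai/gpt-4.1-nano",
  vision: "openai/gpt-4.1",
  reasoning: "openai/o3",
  reasoning_small: "openai/o3-mini",
  embeddings: "openai/text-embedding-3-small",
};

// Resolve an alias; pass through anything that is already a full identifier.
function resolveModel(id) {
  return GITHUB_ALIASES[id] ?? id;
}

console.log(resolveModel("small")); // prints openai/gpt-4.1-mini
```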

Limitations

  • Smaller context windows and rate limiting in the free tier. See https://docs.github.com/en/github-models/use-github-models/prototyping-with-ai-models.
  • `listModels` is not supported.
  • `logprobs` and `topLogprobs` are ignored.
  • Prediction of output tokens is ignored.