GitHub Copilot Chat

If you have access to GitHub Copilot Chat in Visual Studio Code, GenAIScript will be able to leverage those language models as well.

This mode lets you run your scripts without configuring a separate LLM provider or local models. However, these models are not available from the command line, and they have additional limitations and rate limits defined by the GitHub Copilot platform.

There is no configuration needed as long as you have GitHub Copilot installed and configured in Visual Studio Code.

You can force this provider by using `github_copilot_chat:*` as the model name, or by setting **GenAIScript > Language Chat Models Provider** to true, which makes GenAIScript default to this provider for model aliases.
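For example, a script can request this provider directly through its model name. This is a minimal sketch; the model id after the colon (`gpt-4o` here) is an assumption — use whichever model your Copilot subscription exposes:

```js
// Hypothetical script file, e.g. genaisrc/summarize.genai.mjs.
// The "github_copilot_chat:" prefix forces the Copilot Chat provider;
// the model id after the colon is illustrative.
script({
    model: "github_copilot_chat:gpt-4o",
})
$`Summarize the contents of ${env.files}.`
```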

  1. Install GitHub Copilot Chat (emphasis on Chat).
  2. Run your script.
  3. Confirm that you are allowing GenAIScript to use the GitHub Copilot Chat models.
  4. Select the chat model that best matches the one in your script.

    (Screenshot: a dropdown titled 'Pick a Language Chat Model for openai:gpt-4' with options including 'GPT 3.5 Turbo', 'GPT 4', 'GPT 4 Turbo (2024-01-25 Preview)', and 'GPT 4o (2024-05-13)'.)

    (This step is skipped if you already have a mapping in your settings.)

The mapping of GenAIScript model names to Visual Studio Code language models is stored in the settings.
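The stored mapping might look like the sketch below in your VS Code `settings.json`. The setting key and value format here are assumptions for illustration, not the authoritative schema — check your own settings after picking a model to see the exact entries GenAIScript writes:

```jsonc
{
    // Hypothetical example: maps the model name used in a script (left)
    // to the Visual Studio Code chat model chosen in the picker (right).
    "genaiscript.languageChatModels": {
        "openai:gpt-4": "GPT 4"
    }
}
```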