
GenAIScript

GenAIScript is a tool for generating and executing scripts using LLMs. PromptPex uses it to run its test-generation scripts.

  • Install Node.js v22+
  • Configure your LLM credentials in .env. You can use OpenAI, Azure OpenAI, or Ollama.
npx --yes genaiscript configure
  • Launch PromptPex locally
npx --yes genaiscript@latest serve --remote microsoft/promptpex --remote-branch dev
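The contents of .env depend on your provider. As a sketch, a minimal file for OpenAI might contain only an API key (the value below is a placeholder, not a real key):

```
OPENAI_API_KEY=sk-...
```

Azure OpenAI and Ollama use their own credential variables; running npx --yes genaiscript configure walks you through the options interactively.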

To launch PromptPex in a Docker container, first build an image with the following command:

docker build -t genaiscript -<<EOF
FROM node:lts-alpine
RUN apk add --no-cache git && npm install -g genaiscript
EOF
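The trailing - tells docker build to read the Dockerfile from stdin, which the <<EOF heredoc supplies. The heredoc pattern itself is plain shell; a minimal illustration using a stand-in file name (Dockerfile.tmp is hypothetical, used here only to show the mechanism):

```shell
# <<EOF feeds everything up to the EOF line to the command's stdin,
# so no Dockerfile needs to exist on disk
cat > Dockerfile.tmp <<EOF
FROM node:lts-alpine
RUN apk add --no-cache git && npm install -g genaiscript
EOF
cat Dockerfile.tmp
```

With docker build, the same lines go straight to the daemon instead of a file.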

Launch PromptPex using the genaiscript image:

docker run --env GITHUB_TOKEN --env-file .env --name genaiscript --rm -it --expose 8003 -p 8003:8003 -v ${PWD}:/workspace -w /workspace genaiscript genaiscript serve --network --remote microsoft/promptpex --remote-branch dev

Use GitHub Codespaces or the dev container to get a fully configured environment, including access to LLMs through GitHub Marketplace Models. Open this repository in GitHub Codespaces, then launch the server:

npm run serve
  • Clone this repository
  • Install Node.js v22+
  • Install dependencies
npm install

Configure the eval, rules, baseline aliases

PromptPex defines the following model aliases for the different phases of test generation:

  • rules: rule, inverse rules, test generation
  • eval: rule and test quality evaluations
  • baseline: baseline test generation

If you are using a specific set of models, you can override the eval/rules/baseline aliases in a .env file:

GENAISCRIPT_MODEL_EVAL="azure:gpt-4o_2024-11-20"
GENAISCRIPT_MODEL_RULES="azure:gpt-4o_2024-11-20"
GENAISCRIPT_MODEL_BASELINE="azure:gpt-4o_2024-11-20"
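The aliases accept any provider-prefixed model identifier that GenAIScript understands. As an assumption, a local Ollama setup might instead look like this (the model name is illustrative):

```
GENAISCRIPT_MODEL_EVAL="ollama:llama3.3"
GENAISCRIPT_MODEL_RULES="ollama:llama3.3"
GENAISCRIPT_MODEL_BASELINE="ollama:llama3.3"
```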
  • Launch the web interface
npm run serve
  • Open the localhost URL printed by the server

The development of PromptPex is done using GenAIScript.

  • Install Node.js v22+
  • Configure your LLM credentials in .env

Use Visual Studio Code to get built-in TypeScript type checking, or run:

npm run build

For convenience, the following shortcut is provided:

npm run gcm
  • Open a JavaScript Debug Terminal in Visual Studio Code
  • Put a breakpoint in your script
  • Launch the script

To upgrade dependencies:

npm run upgrade

Set the DEBUG=promptpex:* environment variable to enable additional logging.

DEBUG=promptpex:* npm run ...
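The VAR=value prefix sets the variable only for the single command that follows; it does not leak into the rest of the shell session. A quick sketch of that scoping:

```shell
# DEBUG is set only in the environment of this one command
DEBUG=promptpex:* sh -c 'echo "DEBUG is: $DEBUG"'
# Afterwards the surrounding shell does not have it set
echo "after: ${DEBUG:-unset}"
```

This is why the prefix form is safe to use ad hoc without polluting later commands.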

To pipe stderr and stdout to a file:

DEBUG=* npm run ... > output.txt 2>&1
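Note that 2>&1 must come after the file redirection: redirections are processed left to right, so stdout is first pointed at the file and stderr is then duplicated onto it. A minimal illustration with a stand-in command that writes to both streams:

```shell
# Stand-in for the npm command: writes one line to each stream
sh -c 'echo "to stdout"; echo "to stderr" >&2' > output.txt 2>&1
# Both lines end up in the same file
cat output.txt
```

Writing 2>&1 before the > redirection would instead send stderr to the terminal.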

Add --vars cache=true to the command line to enable caching of LLM calls.