
Overview

PromptPex is packaged as an npm command-line tool that uses GenAIScript.

To use PromptPex locally, you need to have Node.js installed and set up your environment. Follow these steps:

  • Install Node.js v22 or later.
  • Make sure you have the right version of Node.js:
node --version
  • Run PromptPex configuration to set up your .env file:
npx promptpex configure

PromptPex supports many LLM providers, such as OpenAI, Azure OpenAI, GitHub Models, Ollama, and more. The configuration will prompt you to select the LLM provider you want to use and set up the necessary environment variables in a .env file.
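As a sketch, if you selected the OpenAI provider, the generated .env file might contain something like the following. The variable name OPENAI_API_KEY is an assumption based on common OpenAI configuration; the configure command writes the exact keys your chosen provider needs:

```
# Hypothetical .env written by `npx promptpex configure` (OpenAI provider assumed)
OPENAI_API_KEY=sk-...   # your OpenAI API key (placeholder value)
```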

  • Run PromptPex on your prompt file(s):
npx promptpex my_prompt.prompty

PromptPex also supports the following file formats:

  • .md, .txt: treated as a Jinja2-templated string (Markdown)
  • .prompty: Prompty file format (default)
  • .prompt.yml: GitHub Models prompt format
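As a sketch, a minimal .prompty file might look like the following (field names follow the public Prompty format; adjust the metadata and messages to your model and use case):

```
---
name: My Prompt
description: A short demo prompt
model:
  api: chat
---
system:
You are a helpful assistant.

user:
{{question}}
```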

If you prefer to run PromptPex in a Docker container, you can use the following commands. This assumes you have Docker installed and running on your machine.

  • Run the configuration command to set up your .env file.
docker run -e GITHUB_TOKEN="$GITHUB_TOKEN" --rm -it -v "$PWD":/app -w /app node:lts-alpine npx --yes promptpex configure
  • Run PromptPex on your prompt file(s) using Docker:
docker run -e GITHUB_TOKEN="$GITHUB_TOKEN" --rm -it -v "$PWD":/app -w /app node:lts-alpine npx --yes promptpex my_prompt.prompty

You might need to pass more environment variables depending on your shell configuration.
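For example, each variable your provider needs must be forwarded into the container with its own -e flag. The sketch below assumes an OpenAI-style OPENAI_API_KEY in addition to GITHUB_TOKEN; substitute whatever variables your .env configuration uses:

```
docker run -e GITHUB_TOKEN="$GITHUB_TOKEN" -e OPENAI_API_KEY="$OPENAI_API_KEY" \
  --rm -it -v "$PWD":/app -w /app node:lts-alpine \
  npx --yes promptpex my_prompt.prompty
```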

PromptPex supports different effort levels for test generation, specified with the --vars effort=<level> option. The available effort levels are:

  • min: Minimal effort, generates a small number of simple tests.
  • low: Low effort, generates a moderate number of tests with some complexity.
  • medium: Medium effort, generates a larger number of more complex tests.
  • high: High effort, generates the maximum number of tests with the highest complexity.
npx promptpex my_prompt.prompty --vars effort=min

We start with simple examples of using PromptPex. Assume your prompt is in a file called my_prompt.prompty and you want to generate tests, run them, and evaluate the results. More details about all the parameters you can specify can be found in the CLI parameter documentation.

Suppose you want to generate tests, run them, and evaluate the results using the minimum effort level:

npx promptpex my_prompt.prompty --vars effort=min out=results evals=true modelsUnderTest="ollama:llama3.3" evalModel="ollama:llama3.3"

Suppose you only want to generate tests and not run them:

npx promptpex my_prompt.prompty --vars effort=min out=results evals=false

Generate Only Tests with Groundtruth Outputs


Suppose you only want to generate tests and add groundtruth outputs from a specific model and not run them:

npx promptpex my_prompt.prompty --vars effort=min out=results evals=false "groundtruthModel=ollama:llama3.3"

Run and Evaluate Tests from a Context File


Suppose you just ran the above command and it created the file results/my_prompt/promptpex_context.json (see saving and restoring). You can now load this context file to run and evaluate the tests:

npx promptpex results/my_prompt/promptpex_context.json --vars evals=true "modelsUnderTest=ollama:llama3.3" "evalModel=ollama:llama3.3"
  • For more details on prompt format and advanced usage, see the overview.