
# Parameters

This page documents all parameters available to the PromptPex CLI and script interface. Each parameter can be provided as a CLI flag (e.g., `--param value`) or via environment/configuration files. Default values and accepted types are indicated where applicable.

The first argument can be a Prompty file containing the prompt, or a JSON file containing a saved PromptPex context, which includes the tests, test runs, and other artifacts saved by a previous invocation of PromptPex. If no argument is provided, the `--prompt` parameter must be specified.
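For instance, the three invocation styles look like this (file names below are placeholders):

```sh
# Analyze a prompt passed as the first argument
promptpex myprompt.prompty

# Resume from a context saved by a previous invocation
promptpex previous_run.json

# No positional argument: the prompt must come from --prompt
promptpex --prompt myprompt.prompty
```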

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| `--prompt` | string | | Prompt template to analyze. Provide inline or via file. Supports the Prompty markdown format. |
| `--effort` | string | | Effort level for test generation. One of `min`, `low`, `medium`, `high`. Influences test count and complexity. |
| `--out` | string | | Output folder for generated files. |
| `--cache` | boolean | | Cache all LLM calls for faster experimentation. |
| `--testRunCache` | boolean | | Cache test run results in files. |
| `--evalCache` | boolean | | Cache evaluation results in files. |
| `--evals` | boolean | `false` | Evaluate the test results. |
| `--testsPerRule` | integer | `3` | Number of tests to generate per rule (1-10). |
| `--splitRules` | boolean | `true` | Split rules and inverse rules into separate prompts for test generation. |
| `--maxRulesPerTestGeneration` | integer | `3` | Maximum number of rules per test generation (affects test complexity). |
| `--testGenerations` | integer | `2` | Number of times to amplify test generation (1-10). |
| `--runsPerTest` | integer | `2` | Number of runs per test during evaluation (1-100). |
| `--disableSafety` | boolean | `false` | Disable safety system prompts and content safety checks. |
| `--rateTests` | boolean | `false` | Generate a report rating the quality of the test set. |
| `--rulesModel` | string | | Model used to generate rules (can override the `rules` alias). |
| `--baselineModel` | string | | Model used to generate baseline tests. |
| `--modelsUnderTest` | string | | Semicolon-separated list of models to run the prompt against. |
| `--evalModel` | string | | Semicolon-separated list of models to use for test evaluation. |
| `--compliance` | boolean | `false` | Evaluate test result compliance. |
| `--maxTestsToRun` | number | | Maximum number of tests to run. |
| `--inputSpecInstructions` | string | | Additional instructions for input specification generation. |
| `--outputRulesInstructions` | string | | Additional instructions for output rules generation. |
| `--inverseOutputRulesInstructions` | string | | Additional instructions for inverse output rules generation. |
| `--testExpansionInstructions` | string | | Additional instructions for test expansion generation. |
| `--storeCompletions` | boolean | | Store chat completions using Azure OpenAI stored completions. |
| `--storeModel` | string | | Model used to create stored completions (can override the `store` alias). |
| `--groundtruthModel` | string | | Model used to generate groundtruth outputs. |
| `--customMetric` | string | | Custom test evaluation template (as a prompt). |
| `--createEvalRuns` | boolean | | Create an evaluation run in OpenAI Evals (requires `OPENAI_API_KEY`). |
| `--testExpansions` | integer | `0` | Number of test expansion phases (0-5). |
| `--testSamplesCount` | integer | | Number of test samples to include for rules/test generation. |
| `--testSamplesShuffle` | boolean | | Shuffle test samples before generating tests. |
| `--filterTestCount` | integer | `5` | Number of tests to include in the filtered output of `evalTestCollection`. |
| `--loadContext` | boolean | `false` | Load context from a file. |
| `--loadContextFile` | string | `promptPex_context.json` | File from which to load the PromptPexContext before running. |
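The prompt itself is a Prompty file: YAML frontmatter followed by chat messages. A minimal sketch of such a file is shown below (field names follow the public Prompty spec; the prompt content is illustrative, so trim or extend it to your setup):

```markdown
---
name: Sentiment Classifier
description: Classify the sentiment of a customer review.
inputs:
  review:
    type: string
sample:
  review: The checkout flow was fast and painless.
---
system:
You are a sentiment classifier. Answer with exactly one word:
positive, negative, or neutral.

user:
{{review}}
```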
Example invocation:

```sh
promptpex --prompt myprompt.prompty --effort=medium --out=results/ --evals=true \
  --modelsUnderTest="openai:gpt-4o;ollama:llama3.3:70b" \
  --evalModel="openai:gpt-4o" --rateTests=true
```
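Because PromptPex can persist its full context, a common pattern is to generate tests once and then re-run evaluation against the saved context. A sketch of that workflow, assuming the context file is written with its default name (adjust the path to wherever your run actually writes it):

```sh
# First run: generate rules and tests, caching LLM calls for reuse
promptpex myprompt.prompty --out=results/ --cache=true --testRunCache=true

# Later run: reload the saved context instead of regenerating tests
# (file name matches the --loadContextFile default)
promptpex --prompt myprompt.prompty --loadContext=true \
  --loadContextFile=promptPex_context.json --evals=true
```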
- For more details on prompt format and advanced usage, see the main documentation.