PyRIT (Python Risk Identification Tool for generative AI) is an open-source framework that helps security professionals proactively identify risks in generative AI systems. The scanner is the primary way to run security assessments — it executes Scenarios against a target AI system and reports results.
How It Works¶
A PyRIT scan has three key ingredients:
- **A Scenario** — defines what to test (e.g., content harms, jailbreaks, encoding probes). Scenarios bundle attack strategies, datasets, and scoring into a reusable package.
- **A Target** — the AI system you're testing (e.g., an OpenAI endpoint, an Azure OpenAI deployment, a custom HTTP endpoint).
- **Configuration** — connects the scanner to your target and registers the components it needs (targets, scorers, datasets). See Configuration.
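These three ingredients map directly onto the scanner's command line. A minimal sketch, reusing the scenario and target names from the Quick Example later on this page (substitute the names you have registered in your own configuration):

```shell
# Scenario: what to test (here, the Foundry RedTeamAgent scenario)
# Target:   which system to test (here, a registered "openai_chat" target)
# Config:   initializers that register targets, scorers, and datasets
pyrit_scan foundry.red_team_agent \
    --target openai_chat \
    --initializers target load_default_datasets
```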
Running Scans¶
PyRIT provides two command-line interfaces:
| Tool | Best For | Documentation |
|---|---|---|
| `pyrit_scan` | Automated, single-command execution: CI/CD pipelines, batch processing, reproducible runs. | pyrit_scan |
| `pyrit_shell` | Interactive exploration: rapid iteration, comparing results across runs, debugging scenarios. | pyrit_shell |
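For automated runs, `pyrit_scan`'s single-command form drops straight into a pipeline step. A minimal CI sketch, reusing the scenario and target names from the Quick Example; the assumption that the command exits nonzero on failure should be verified against your PyRIT version:

```shell
#!/usr/bin/env sh
# Hypothetical CI step: fail the build if the scan errors out.
# Flags reused from this page's Quick Example; nonzero-exit-on-failure
# behavior is an assumption, not confirmed by this page.
if ! pyrit_scan foundry.red_team_agent \
      --target openai_chat \
      --initializers target load_default_datasets; then
  echo "pyrit_scan failed" >&2
  exit 1
fi
```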
Quick Example¶
```shell
# Run the Foundry RedTeamAgent scenario against your configured target
pyrit_scan foundry.red_team_agent --target openai_chat --initializers target load_default_datasets --strategies base64
```

Built-in Scenarios¶
PyRIT ships with scenarios organized into three families:
| Family | Scenarios | Documentation |
|---|---|---|
| AIRT | ContentHarms, Psychosocial, Cyber, Jailbreak, Leakage, Scam | AIRT Scenarios |
| Foundry | RedTeamAgent | Foundry Scenarios |
| Garak | Encoding | Garak Scenarios |
Each scenario page shows how to run it with minimal configuration.
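For instance, each family is run the same way, by passing a scenario identifier to `pyrit_scan`. A sketch using the `family.scenario` naming pattern from the Quick Example; the `airt` and `garak` identifiers below are hypothetical guesses inferred from the tables above, so confirm the exact names on each scenario's documentation page:

```shell
# Foundry family (identifier taken from this page's Quick Example)
pyrit_scan foundry.red_team_agent --target openai_chat --initializers target load_default_datasets

# AIRT and Garak families -- identifiers below are hypothetical,
# inferred from the family/scenario tables; check each scenario's page
pyrit_scan airt.content_harms --target openai_chat --initializers target load_default_datasets
pyrit_scan garak.encoding --target openai_chat --initializers target load_default_datasets
```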
For Developers¶
If you want to build custom scenarios or understand the programming model behind scenarios, see the Scenarios Programming Guide. For details on attack strategies, dataset configuration, and advanced programmatic usage, see Scenario Parameters.