Task 01 - Perform a red team evaluation via UI
Introduction
With the widespread adoption of AI agents in various applications, ensuring their security and reliability has become paramount. Red teaming is a proactive approach to identify vulnerabilities and weaknesses in AI systems by simulating real-world attacks and adversarial scenarios. By conducting red teaming exercises, organizations can uncover potential risks, assess the robustness of their AI models, and implement necessary safeguards to mitigate threats.
Description
In this task, you will create an AI Red Teaming Agent using Microsoft Foundry and run it in the Microsoft Foundry UI. You will configure the agent to perform a scan against your deployed AI agents, using a set of predefined attack prompts. You will also establish a baseline for your models’ performance and behavior, which will help you identify any anomalies or vulnerabilities during the red teaming exercises.
Success Criteria
- You have run a red team evaluation in Microsoft Foundry.
- You have configured the agent with a set of attack prompts.
- You have established a baseline for your AI models.
Learning Resources
- AI Red Teaming Agent
- Run AI Red Teaming Agent in the cloud
Key Tasks
01: Perform a red team evaluation
The Azure AI Red Teaming Agent is a powerful tool that can help you identify potential risks and vulnerabilities in your AI models.
Expand this section to view the solution
To perform an evaluation, navigate to Microsoft Foundry and open the Build menu. From there, open the Evaluations menu and select the Red team tab. On this page, select the Create button to create a new evaluation.

On the Create red teaming run page, select Agent as the target and then choose the cora agent. Then, select Next to continue to the next page.

Leave the Run configuration settings at their defaults and choose Next. On the Review prohibited actions page, you will need to configure the prohibited actions for the cora agent. Select the cora button with a pencil icon to continue.

Scroll through the set of default actions. You may add further actions by using the + Add action button. When you are done, select Save to complete the configuration and then select Next to continue to the fourth step.
On the Review page, you may choose to rename the red team run. When you are done, select Submit to begin the evaluation.

This will open an evaluation page, where the red team run's status should read In progress. You can obtain more details by selecting the run link.

This run will take several minutes to complete.
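The steps above use the Microsoft Foundry UI. If you would rather script an equivalent scan, the Azure AI Evaluation SDK exposes a red teaming capability. The sketch below is a minimal example and not part of this lab: it assumes the azure-ai-evaluation and azure-identity packages are installed, a PROJECT_ENDPOINT environment variable pointing at your Foundry project, and a placeholder callback standing in for the deployed cora agent. The scan name, attack strategies, and output path are illustrative choices, not values required by this exercise.

```python
import asyncio
import os

from azure.identity import DefaultAzureCredential
from azure.ai.evaluation.red_team import RedTeam, RiskCategory, AttackStrategy


def agent_callback(query: str) -> str:
    # Placeholder target: replace with a call to your deployed cora agent.
    return "I'm sorry, I can't help with that request."


async def main():
    red_team = RedTeam(
        azure_ai_project=os.environ["PROJECT_ENDPOINT"],  # illustrative env var for your Foundry project
        credential=DefaultAzureCredential(),
        risk_categories=[RiskCategory.Violence, RiskCategory.HateUnfairness],
        num_objectives=5,  # number of attack prompts generated per risk category
    )

    result = await red_team.scan(
        target=agent_callback,
        scan_name="cora-baseline-scan",        # illustrative name for the baseline run
        attack_strategies=[AttackStrategy.Base64, AttackStrategy.Flip],
        output_path="red-team-results.json",   # keep a local copy of the results
    )
    print(result)


if __name__ == "__main__":
    asyncio.run(main())
```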
02: Review the results in Microsoft Foundry
Each execution of the AI red teaming service generates a report that summarizes the findings of the scan. The report includes information about the attack prompts used, the responses generated by the target model, and any risks or vulnerabilities identified during the scan.
Expand this section to view the solution
Once your scan in the prior task has completed, select the link for the run.

Review the overall report. For each example, you can see what the cora agent returned and whether the response violated the guidelines you created.

You can use these evaluation results to modify agent prompts and then add a new run to determine whether your changes have a positive impact on these results.
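If you ran a scan from code and passed an output_path (as in the earlier sketch), a JSON copy of the results is written locally. The result schema can vary between SDK versions, so the sketch below simply walks the structure to show what is available rather than assuming specific field names; the file name matches the hypothetical path used in the earlier example.

```python
import json

# Path used in the earlier scan sketch; adjust to wherever your results were saved.
with open("red-team-results.json", encoding="utf-8") as f:
    results = json.load(f)


def summarize(node, depth=0, max_depth=2):
    """Print the first few levels of the result structure."""
    if depth > max_depth:
        return
    if isinstance(node, dict):
        for key, value in node.items():
            print("  " * depth + str(key))
            summarize(value, depth + 1, max_depth)
    elif isinstance(node, list) and node:
        print("  " * depth + f"[{len(node)} item(s)]")
        summarize(node[0], depth + 1, max_depth)


summarize(results)
```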