Deploy Risky AI and Risky Agents Policies
Implementation Effort: Low – One-click deployment available from DSPM for AI, or manual configuration through Insider Risk Management.
User Impact: Low – Policies operate in detection mode; no user-facing enforcement until Adaptive Protection is enabled.
Overview
Insider Risk Management provides purpose-built policy templates to detect risky AI usage by both users and agents. Deploying these policies establishes visibility into potentially harmful AI interactions before they result in data exposure or compliance violations. This supports the Zero Trust principle of Assume Breach by continuously monitoring AI activity for anomalous behavior, and Verify Explicitly by scoring risk signals across all AI interactions.
Without these policies, organizations lack visibility into users submitting sensitive data to AI apps, agents accessing protected content inappropriately, or patterns of risky behavior that accumulate over time. The Risky AI usage policy detects concerning user behavior, while the Risky Agents policy monitors agent-specific risks from Copilot Studio and Foundry agents.
Key activities include:
- Deploy Risky AI usage policy: Detect user browsing to generative AI sites, sensitive information in prompts and responses, and risky patterns in Microsoft 365 Copilot and other AI apps
- Deploy Risky Agents policy: Detect agents exposed to risky prompts, generating sensitive responses, accessing sensitive SharePoint files, or sharing content externally
- Configure detection indicators: Select which AI-related signals contribute to risk scoring
- Enable DSPM for AI integration: Deploy via one-click recommendation for preconfigured detection coverage
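After deployment, you can confirm the policies exist from Security & Compliance PowerShell. The sketch below is illustrative only: the admin UPN is a placeholder, the name filter assumes the default template names contain "AI", and cmdlet output varies by tenant and licensing.

```powershell
# Requires the ExchangeOnlineManagement module and an account with
# Insider Risk Management admin permissions (placeholder UPN below).
Connect-IPPSSession -UserPrincipalName admin@contoso.com

# List Insider Risk Management policies and filter for the
# AI-related templates deployed above (name match is an assumption).
Get-InsiderRiskPolicy |
    Where-Object { $_.Name -like '*AI*' } |
    Select-Object Name
```

If the one-click DSPM for AI recommendation was used, the preconfigured policies should appear in this list once provisioning completes; otherwise, verify them in the Insider Risk Management policy dashboard in the Purview portal.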