Deploy One-Click DLP and IRM Policies from DSPM for AI

Implementation Effort: Low – One-click policies are preconfigured in the DSPM for AI console and deploy with minimal administrator input; policy scoping and tuning can follow after initial deployment.
User Impact: Medium – DLP policies may block or warn users when sharing sensitive information with AI apps, and Insider Risk Management policies begin calculating user risk based on AI interaction patterns.

Overview

As organizations adopt Microsoft 365 Copilot, agents, and third-party generative AI apps, sensitive data flows through AI channels that traditional DLP and Insider Risk Management policies were not designed to cover. Microsoft Purview Data Security Posture Management (DSPM) for AI addresses this gap with preconfigured one-click policies that deploy DLP, Insider Risk Management, and Communication Compliance protections across AI interaction locations, so security teams do not have to build each policy from scratch.

These one-click policies fall into two categories:

- Data discovery policies detect sensitive information flowing into AI apps, monitor browser visits to AI sites, identify risky prompts and responses, and flag unethical behavior in AI interactions.
- Data protection policies actively block or restrict sensitive data sharing with AI services using Adaptive Protection, sensitivity label enforcement, and inline content inspection.

The data protection policies integrate with Adaptive Protection to adjust enforcement dynamically based on each user's calculated risk level: elevated-risk users receive block-with-override or hard-block actions when they attempt to paste or upload sensitive content to AI apps, while lower-risk users may proceed with fewer restrictions. Through this integration, the Insider Risk Management discovery policies feed risk signals into the DLP enforcement policies, creating a closed loop in which risky AI behavior is both detected and acted upon. DSPM for AI also provisions default sensitivity labels and label policies if they do not already exist, ensuring the classification foundation that DLP policies depend on is in place.

This activity directly supports Verify Explicitly because enforcement decisions are based on each user's demonstrated risk level and the sensitivity classification of the data involved, rather than static allow-or-block rules. It supports Assume Breach by deploying detection controls that identify sensitive data exposure through AI channels before a data loss event escalates, and by ensuring that risky AI usage patterns generate signals that feed into the organization's broader threat detection workflows. It supports Use Least Privilege Access by restricting AI workloads from processing sensitivity-labeled content beyond what the user's current risk posture warrants.

Without these policies, organizations must manually build equivalent DLP, Insider Risk Management, and Communication Compliance policies for AI locations, a process that introduces delay, increases the risk of configuration gaps, and leaves AI workloads operating without data security controls while policies are being developed. During that gap, threat actors or careless users can exfiltrate classified content through AI prompts, and risky AI interaction patterns go unmonitored.
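After one-click deployment, administrators can confirm the resulting policies and default labels from Security & Compliance PowerShell. The following is a minimal verification sketch, not a definitive procedure: it assumes the ExchangeOnlineManagement module is installed and the signed-in account holds sufficient Purview roles, and the "*AI*" name filter is only an assumption, since the display names of the policies DSPM for AI creates can vary by tenant and release.

```powershell
# Connect to Security & Compliance PowerShell
# (requires the ExchangeOnlineManagement module).
Connect-IPPSSession

# List DLP policies and their mode; the one-click AI policies
# typically include "AI" in their display names (assumption --
# adjust the filter to match your tenant).
Get-DlpCompliancePolicy |
    Where-Object { $_.Name -like '*AI*' } |
    Select-Object Name, Mode

# Confirm the default sensitivity labels and label policies
# that DSPM for AI provisions when none already exist.
Get-Label | Select-Object DisplayName, Priority
Get-LabelPolicy | Select-Object Name
```

Running these checks immediately after deployment helps catch scoping gaps before tuning the policies for specific groups or locations.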

Reference