Configure DLP Policies for M365 Copilot and Agent 365 Locations

Implementation Effort: Medium – Requires designing DLP policy rules for the Microsoft 365 Copilot location, configuring blocking rules for highly classified sensitivity labels, and extending policy scope to include Agent 365 instances — all of which demand cross-team coordination between compliance, security, and data governance teams.
User Impact: High – Users will experience Copilot responses being blocked or redacted when DLP policies prevent processing of content that matches highly classified sensitivity labels or sensitive information types, and agents scoped under DLP policies will have their data interactions restricted.

Overview

Data Loss Prevention (DLP) policies are the enforcement mechanism that prevents Microsoft 365 Copilot and Agent 365 instances from processing, summarizing, or surfacing content that should not flow through AI interactions. Microsoft Purview DLP supports a dedicated Microsoft 365 Copilot location that allows administrators to create policies specifically targeting AI interactions — separate from traditional DLP policies that cover Exchange, SharePoint, OneDrive, and Teams. This separation is important because the risk profile of AI-mediated data access is different from direct user access: Copilot can synthesize content from multiple sources in a single response, which means a DLP violation in an AI context can expose sensitive data in ways that traditional channel-based DLP does not cover.

The core configuration involves creating DLP rules that block Copilot from processing content bearing specific sensitivity labels — typically the organization's most restricted classifications such as "Highly Confidential" or "Top Secret." When a user's Copilot query would retrieve content matching a blocked label, the DLP policy prevents the content from being included in the AI response and surfaces a policy tip explaining why the result was restricted. This is a direct implementation of Use Least Privilege Access — even though the user may have permissions to view the document directly, the organization has determined that AI-mediated access to that classification level is not appropriate. The distinction matters because Copilot responses can be copied, shared, or referenced in ways that bypass the original document's access controls.
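The enforcement flow described above can be sketched as a simple decision function. This is a conceptual model only, not a Purview API: the label names, the `Document` class, and the `evaluate_copilot_request` helper are hypothetical illustrations of how a blocking rule partitions retrieved content.

```python
# Hypothetical model of a DLP rule for the Microsoft 365 Copilot location:
# content bearing a blocked sensitivity label is excluded from the AI
# response and replaced with a policy tip, even though the user may be
# permitted to open the document directly.

from dataclasses import dataclass

# Labels the organization has decided must never flow through Copilot
# (example names; substitute your own label taxonomy).
BLOCKED_LABELS = {"Highly Confidential", "Top Secret"}

@dataclass
class Document:
    title: str
    sensitivity_label: str

def evaluate_copilot_request(retrieved: list[Document]) -> dict:
    """Partition retrieved content into allowed context and blocked items."""
    allowed = [d for d in retrieved if d.sensitivity_label not in BLOCKED_LABELS]
    blocked = [d for d in retrieved if d.sensitivity_label in BLOCKED_LABELS]
    policy_tip = (
        f"{len(blocked)} result(s) were excluded by your organization's "
        "DLP policy for Microsoft 365 Copilot."
    ) if blocked else None
    return {"context": allowed, "policy_tip": policy_tip}
```

The key design point the sketch captures is that the rule evaluates the label on each retrieved item, not the user's permissions: a "Highly Confidential" document is dropped from the AI context even when direct access would succeed.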

Beyond the Copilot location, organizations must also scope DLP policies to include Agent 365 instances. Custom agents interact with organizational data through their configured data sources, and without DLP coverage, an agent could retrieve and present highly classified content to users who query it. Extending DLP policy scope to Agent 365 ensures that the same data protection rules that govern Copilot interactions also apply to custom-built agents — preventing a governance gap where agents operate outside the DLP perimeter.
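The scoping requirement above amounts to a coverage check: one policy, one blocked-label set, applied to the Copilot location and to every agent instance. The sketch below is a hypothetical model (the `DlpPolicy` class, location identifiers, and `find_coverage_gaps` helper are illustrative, not Purview objects) of how an administrator might audit for agents operating outside the DLP perimeter.

```python
# Hypothetical model of DLP policy scope: the same blocked-label rules
# should cover both the Copilot location and every Agent 365 instance,
# so no agent can retrieve classified content outside the DLP perimeter.

from dataclasses import dataclass, field

@dataclass
class DlpPolicy:
    name: str
    blocked_labels: set[str]
    # Locations and agent instances the policy is scoped to
    # (example identifiers).
    scoped_locations: set[str] = field(default_factory=set)

    def covers(self, location: str) -> bool:
        return location in self.scoped_locations

def find_coverage_gaps(policy: DlpPolicy, agent_instances: list[str]) -> list[str]:
    """Return Agent 365 instances operating outside the DLP policy's scope."""
    return [a for a in agent_instances if not policy.covers(a)]
```

In this model, any agent returned by `find_coverage_gaps` represents the governance gap the section describes: an agent whose data interactions are not constrained by the rules that govern Copilot.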

This activity supports Verify Explicitly by enforcing data classification decisions at the point of AI interaction rather than relying on users to self-govern what they do with Copilot responses. It also supports Assume Breach by ensuring that even if a threat actor gains access to a user account with broad permissions, DLP policies prevent the most sensitive content from being surfaced through AI-assisted reconnaissance. Without these policies, Copilot and agents process every document the user can access, regardless of classification — meaning a single compromised account with a Copilot license becomes an AI-powered data exfiltration tool against the organization's most sensitive content.

Reference