Configure DSPM for AI Activity Explorer and Observability

Implementation Effort: Low – Activity explorer is a built-in capability in Microsoft Purview DSPM for AI that requires minimal configuration; extending observability to Agent 365 instances uses the same console with preview features enabled.
User Impact: Low – Admin-only activity; activity explorer provides operational visibility to security and compliance teams without any changes to end-user workflows.

Overview

Once Microsoft 365 Copilot and Agent 365 instances are active in the environment, security teams need continuous visibility into how AI workloads interact with organizational data. Microsoft Purview Data Security Posture Management (DSPM) for AI provides an activity explorer that surfaces telemetry about AI interactions — what content was accessed, which users triggered the interactions, what sensitivity labels were present on the accessed content, and whether any data protection policies were triggered during the interaction. This observability is essential because AI workloads generate a high volume of data access events whose context, such as the sensitivity labels involved and the policies matched, is difficult to reconstruct from raw audit logs alone.

The activity explorer consolidates AI interaction telemetry into a single view. Security teams can filter by user, by sensitivity label, by content location, or by policy match to investigate specific scenarios — for example, identifying how often Copilot surfaces content labeled as "Highly Confidential," or which users are generating the most AI interactions against restricted SharePoint sites. This telemetry also extends to Agent 365 instances, where the preview observability capabilities allow administrators to monitor how custom agents interact with organizational data, which data sources they query, and whether their interactions trigger DLP or sensitivity label policy matches. Consolidating Copilot and Agent 365 observability into the same console ensures that security teams do not maintain separate monitoring workflows for different AI workload types.
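The filter-and-investigate pattern described above can be reproduced offline against telemetry exported from the activity explorer. A minimal sketch, assuming a hypothetical CSV export whose column names are illustrative (the actual export schema may differ):

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical activity explorer export; real column names may differ.
SAMPLE_EXPORT = """user,app,sensitivity_label,location,policy_matched
alice@contoso.com,Copilot,Highly Confidential,https://contoso.sharepoint.com/sites/finance,True
bob@contoso.com,Copilot,General,https://contoso.sharepoint.com/sites/hr,False
alice@contoso.com,Agent365,Highly Confidential,https://contoso.sharepoint.com/sites/finance,True
"""

def interactions_by_label(rows, label):
    """Filter interaction records to those touching a given sensitivity label."""
    return [r for r in rows if r["sensitivity_label"] == label]

rows = list(csv.DictReader(StringIO(SAMPLE_EXPORT)))
confidential = interactions_by_label(rows, "Highly Confidential")
top_users = Counter(r["user"] for r in confidential)
print(len(confidential))          # count of interactions touching labeled content
print(top_users.most_common(1))   # heaviest user of that content via AI workloads
```

The same row structure supports the other filters mentioned above (by user, by location, by policy match) with the same one-line predicate pattern.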

Beyond the activity explorer, DSPM for AI generates reports that summarize AI interaction patterns, policy matches, and data exposure trends across the tenant. Organizations should establish a recurring review cadence for these reports — weekly or biweekly — so that emerging risks are caught before they escalate. A report showing a sudden increase in Copilot interactions with highly classified content, or a spike in DLP policy matches from a specific department, is a leading indicator that requires investigation. Treating these reports as a periodic operational task rather than a one-time configuration ensures that monitoring stays current as the AI environment evolves.
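A recurring review can be partly automated: the "sudden increase" signal is a simple baseline comparison over report data. A sketch of that check, where the department names, counts, and threshold factor are all illustrative assumptions:

```python
# Hypothetical weekly DLP policy-match counts per department, as might be
# compiled from DSPM for AI report exports; numbers are illustrative.
weekly_matches = {
    "Finance": [4, 5, 3, 21],   # last value is the current week
    "HR":      [2, 2, 3, 2],
    "Legal":   [7, 6, 8, 9],
}

def flag_spikes(history, factor=2.0):
    """Flag departments whose current week exceeds factor x their prior average."""
    flagged = {}
    for dept, counts in history.items():
        *prior, current = counts
        baseline = sum(prior) / len(prior)
        if baseline > 0 and current > factor * baseline:
            flagged[dept] = (baseline, current)
    return flagged

print(flag_spikes(weekly_matches))  # departments needing investigation this cycle
```

Running this as part of the weekly or biweekly review turns the reports from a passive artifact into a leading-indicator check.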

The observability picture is incomplete without eDiscovery coverage. Microsoft Purview eDiscovery supports searching and placing legal holds on AI interaction data — the prompts users submitted to Copilot, the responses returned, and the content referenced during each interaction. Configuring eDiscovery for AI interactions means adding the Copilot location to eDiscovery search scopes and hold policies so that when a legal or compliance event triggers a search, AI interaction data is included alongside email, chat, and document content. Without this configuration, eDiscovery searches return traditional communication records but miss the AI interactions that may contain the most relevant evidence — particularly in cases involving data misuse or information leakage through AI-assisted workflows.
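Whether existing searches and holds actually cover AI interaction data can be checked mechanically once their scopes are enumerated. A small hypothetical validation helper; the location name `CopilotInteractions` and the scope structure are assumptions for illustration, not the product's actual schema:

```python
# Hypothetical representation of eDiscovery search scopes; the real
# configuration lives in the Purview portal and its eDiscovery tooling.
REQUIRED_AI_LOCATION = "CopilotInteractions"  # assumed location name

searches = [
    {"name": "litigation-2025-01", "locations": ["ExchangeMailboxes", "SharePointSites"]},
    {"name": "hr-investigation",   "locations": ["ExchangeMailboxes", "CopilotInteractions"]},
]

def missing_ai_scope(searches, required=REQUIRED_AI_LOCATION):
    """Return names of searches whose scope omits the AI interaction location."""
    return [s["name"] for s in searches if required not in s["locations"]]

print(missing_ai_scope(searches))  # searches that would miss Copilot evidence
```

A check like this, run against exported search configurations, catches the gap described above before a legal event exposes it.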

This activity directly supports Assume Breach — continuous monitoring of AI data access patterns allows security teams to detect anomalous behavior early. A compromised user account generating unusual Copilot queries against sensitive repositories, or an agent instance accessing data outside its expected scope, produces telemetry signals that the activity explorer surfaces. The recurring review of DSPM reports catches trends that individual alerts miss, and eDiscovery readiness ensures that when an incident escalates to a legal or compliance investigation, the evidence is already preserved. Without this observability, those interactions go undetected until a data loss event forces a retroactive investigation. The activity also supports Verify Explicitly by providing evidence-based visibility into what AI workloads are actually doing, rather than relying on assumptions about how they should be operating.

Organizations that skip this step are operating AI workloads without instrumentation. They cannot answer basic questions — which sensitive documents are being surfaced by Copilot, whether agents are accessing data outside their intended scope, or whether data protection policies are being triggered at scale. They also cannot produce AI interaction records when legal or compliance teams need them. This blind spot leaves security teams reactive rather than proactive, discovering problems only after they have caused damage.

Reference