Enable Diagnostic Logging for AI Services and Agents

Implementation Effort: Medium – Requires creating centralized logging infrastructure and configuring logging across multiple AI platforms.
User Impact: Low – Logging is transparent to users and does not affect agent functionality.

Overview

Comprehensive logging of AI service activity is foundational to security detection and response for AI workloads. This supports the Zero Trust principle of Assume Breach by ensuring all agent interactions are captured for threat detection and forensic investigation. Without centralized logging, security teams have no visibility into what agents are doing, what data they're accessing, or whether they're behaving anomalously.

A dedicated Log Analytics workspace provides the central collection point for all AI security telemetry. AI activity logging must span all platforms where agents operate—Azure AI Services, Azure OpenAI, and Copilot Studio. Each platform has distinct logging mechanisms that must be configured to capture the telemetry needed for security monitoring. Diagnostic settings route logs to the central workspace where they can be correlated with other security signals and analyzed by Microsoft Sentinel.
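As a concrete sketch, a diagnostic setting that routes an AI resource's logs to the central workspace can be created with the Azure CLI. The resource group, workspace, and resource names below are placeholders for illustration, and the available log category groups vary by resource type — check your resource's supported categories before relying on `allLogs`:

```shell
# Hypothetical names for illustration; substitute your own resource group,
# workspace, and Azure OpenAI (Cognitive Services) resource.
WORKSPACE_ID=$(az monitor log-analytics workspace show \
  --resource-group ai-security-rg \
  --workspace-name ai-security-logs \
  --query id -o tsv)

AI_RESOURCE_ID=$(az cognitiveservices account show \
  --resource-group ai-workloads-rg \
  --name my-openai-resource \
  --query id -o tsv)

# Route all resource logs and metrics to the central workspace
# so Microsoft Sentinel can correlate them with other signals.
az monitor diagnostic-settings create \
  --name ai-security-diagnostics \
  --resource "$AI_RESOURCE_ID" \
  --workspace "$WORKSPACE_ID" \
  --logs '[{"categoryGroup": "allLogs", "enabled": true}]' \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```

The same pattern repeats for each Azure AI Services and Azure OpenAI resource in scope; Copilot Studio audit data flows through Microsoft Purview rather than diagnostic settings.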

Once telemetry flows into the workspace, retention policies determine how long that data remains available for interactive queries and how long it is preserved at lower cost for forensic and compliance purposes. Without explicit retention configuration, all tables default to 30 days — which is insufficient for most security investigation and compliance requirements. If a threat actor compromises an AI agent and the organization needs to reconstruct activity from three months ago, a 30-day default means that evidence no longer exists.

Log Analytics offers two retention tiers: analytics retention keeps data available for real-time queries, alerts, and workbooks, while long-term retention preserves data at reduced cost, accessible through search jobs when needed for incident investigation or regulatory response. Total retention can extend up to 12 years.

AI security data spans multiple tables — diagnostic logs from Azure AI Services and Azure OpenAI, Defender for Cloud alerts in the SecurityAlert table, and analytic rule outputs in SecurityIncident — and each table can have independent retention settings. Security teams need analytics retention long enough to support active threat hunting (typically 90–180 days) and total retention aligned with the organization's compliance and legal-hold obligations.
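A minimal sketch of the two retention tiers with the Azure CLI, assuming a workspace named `ai-security-logs` in resource group `ai-security-rg` (both hypothetical) — the workspace default covers most tables, and per-table overrides extend retention where investigations demand it:

```shell
# Workspace-level default: 90 days of analytics (interactive) retention,
# within the recommended 90-180 day threat-hunting window.
az monitor log-analytics workspace update \
  --resource-group ai-security-rg \
  --workspace-name ai-security-logs \
  --retention-time 90

# Per-table override: keep SecurityAlert interactively queryable for
# 180 days, and preserve it for 730 days total (the difference is held
# in lower-cost long-term retention, reachable via search jobs).
az monitor log-analytics workspace table update \
  --resource-group ai-security-rg \
  --workspace-name ai-security-logs \
  --name SecurityAlert \
  --retention-time 180 \
  --total-retention-time 730
```

The total-retention value should come from the organization's compliance and legal-hold requirements rather than the illustrative 730 days used here.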

This activity also supports Verify Explicitly by maintaining the audit trail that validates whether detection rules, access policies, and response playbooks operated as intended during an incident. Organizations in regulated industries — financial services, healthcare, government — face additional obligations to retain security records for defined minimum periods; failing to configure retention before AI workloads go live risks non-compliance that surfaces only when regulators or legal teams request data that no longer exists.

A complementary activity in the Secure AI Data Access functional area addresses Microsoft Purview retention policies for Copilot and agent interaction content — user prompts, AI responses, and referenced documents. The two activities target different platforms and different data types but should be configured together to avoid gaps in the organization's overall AI data retention posture.

Key activities include:

  • Create Log Analytics workspace: Deploy a dedicated workspace for AI security monitoring with appropriate retention and access controls
  • Azure AI Services diagnostics: Configure diagnostic settings on Azure AI resources to capture request metrics, audit logs, and trace data
  • Azure OpenAI logging: Enable request and response logging to capture prompts, completions, token usage, and content filter results
  • Copilot Studio audit logging: Enable audit logging in Microsoft Purview to capture agent conversations, plugin invocations, and user interactions
  • Route logs to workspace: Configure all AI service diagnostic settings to send logs to the central Log Analytics workspace
  • Export Global Secure Access traffic logs to SIEM: Configure GSA traffic log export so that network-level AI app usage—including shadow AI access patterns—flows into Microsoft Sentinel or your SIEM for correlation with service-level diagnostics
  • Configure retention policies: Set analytics retention (90–180 days recommended) and long-term retention per table to meet security investigation and compliance requirements
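The first activity above — deploying the dedicated workspace — might look like the following sketch, with a hypothetical resource group, workspace name, and region; access controls would then be layered on with Azure RBAC:

```shell
# Hypothetical names and region for illustration.
az group create \
  --name ai-security-rg \
  --location eastus

# Dedicated workspace for AI security telemetry, with a 90-day
# analytics retention default set at creation time.
az monitor log-analytics workspace create \
  --resource-group ai-security-rg \
  --workspace-name ai-security-logs \
  --location eastus \
  --retention-time 90
```

Keeping AI security telemetry in its own workspace simplifies access control (only the security team needs reader rights) and makes it straightforward to onboard the workspace to Microsoft Sentinel.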

Reference