📄️ Require Users to Use Entra ID Authentication to Interact with Agents
Implementation Effort: Medium – IT must configure agent authentication flows, consent, and app registrations that tie interactive agents to Entra ID, but this is a one‑time project rather than an ongoing program.
📄️ Discover and Inventory Existing Agents in Agent 365 Registry
Implementation Effort: Low – Agent Registry is available in the Microsoft 365 admin center; Microsoft-built and Copilot Studio agents are automatically registered and require no manual onboarding.
📄️ Triage Discovered Agents and Establish Ownership
Implementation Effort: Medium – Requires cross-team coordination to assess agents and assign accountability.
📄️ Design Conditional Access Posture for Agents
Implementation Effort: Medium – Requires mapping all agent access patterns across the Microsoft Entra Agent ID architecture and defining a policy structure before any enforcement begins.
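One low-risk way to start the design phase is with report-only policies scoped to agent service principals. The sketch below builds such a policy body for the Microsoft Graph conditional access API, assuming the workload-identity scoping surface (`clientApplications`) applies to agent service principals in your tenant; the service principal ID and the `AllTrusted` location value are placeholders, not a definitive configuration.

```python
import json

GRAPH_CA_ENDPOINT = "https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies"

def build_agent_block_policy(agent_sp_ids, allowed_locations):
    """Build a report-only Conditional Access policy body scoped to
    specific agent service principals (workload identity CA)."""
    return {
        "displayName": "Agents - block outside trusted locations (report-only)",
        # Start in report-only mode so no legitimate agent flow breaks
        # while the posture is still being designed.
        "state": "enabledForReportingButNotEnforced",
        "conditions": {
            "clientApplications": {
                # Scope the policy to the agent service principals.
                "includeServicePrincipals": agent_sp_ids,
                "excludeServicePrincipals": [],
            },
            "applications": {"includeApplications": ["All"]},
            "locations": {
                "includeLocations": ["All"],
                "excludeLocations": allowed_locations,
            },
        },
        "grantControls": {"operator": "OR", "builtInControls": ["block"]},
    }

policy = build_agent_block_policy(
    ["00000000-0000-0000-0000-000000000001"],  # hypothetical agent SP id
    ["AllTrusted"],
)
print(json.dumps(policy, indent=2))
# POST this body to GRAPH_CA_ENDPOINT with a token that holds
# Policy.ReadWrite.ConditionalAccess to create the policy.
```

Keeping the policy in `enabledForReportingButNotEnforced` until sign-in logs confirm no legitimate agent traffic matches the block condition is the main design safeguard here.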
📄️ Design Identity Governance Access Controls for Agents
Implementation Effort: Medium – Requires cross-team alignment between identity governance administrators, security architects, and business stakeholders to define the governance model before any operational controls are deployed.
📄️ Create Custom Security Attributes for Agent and Resource Classification
Implementation Effort: Medium – Requires defining an attribute taxonomy across agent identities and target resources, coordinating with application owners to tag resources, and assigning attributes to agents through the Agent Registry or Entra admin center.
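Once the taxonomy is defined, attribute assignment is a PATCH against the agent's service principal object in Microsoft Graph. The body below is a minimal sketch: the attribute set name `AgentClassification` and the attributes `DataSensitivity` and `AgentTier` are hypothetical examples of a taxonomy, and the set and its attributes must already exist in Entra before assignment.

```python
import json

# PATCH this body to
#   https://graph.microsoft.com/v1.0/servicePrincipals/{id}
# after defining the "AgentClassification" attribute set (hypothetical
# name) and its attributes in the Entra admin center.
patch_body = {
    "customSecurityAttributes": {
        "AgentClassification": {
            "@odata.type": "#Microsoft.DirectoryServices.CustomSecurityAttributeValue",
            "DataSensitivity": "Confidential",
            "AgentTier": "Production",
        }
    }
}
print(json.dumps(patch_body, indent=2))
```

The same attribute names can then be referenced from attribute-based Conditional Access policies, which is why agreeing on the taxonomy before tagging begins matters.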
📄️ Enable ID Protection and Deploy Risk-Based Conditional Access for Agents
Implementation Effort: Low – ID Protection for agents is enabled at the tenant level with minimal configuration; the risk-based CA policy follows a standard template.
📄️ Deploy Attribute-Based Conditional Access Policies for Agents
Implementation Effort: Medium – Requires custom security attributes to be in place on both agents and resources, and careful policy design to avoid blocking legitimate agent flows.
📄️ Establish Agent Publishing and Certification Standards
Implementation Effort: Medium – Requires cross-team alignment between security, platform engineering, and development teams to define publishing standards, certification criteria, and observability instrumentation requirements.
📄️ Publish Discovered Agents to Agent Registry
Implementation Effort: Low – Registry publication is a straightforward administrative action per agent once discovery is complete.
📄️ Organize Discovered Agents with Registry Collections
Implementation Effort: Low – Collections are a lightweight organizational feature that can be configured quickly once agents are published.
📄️ Assign Sponsors and Owners to Agent Identities
Implementation Effort: Medium – Requires identifying the right human accountable for each agent identity across potentially many teams, and establishing an operational process for ongoing sponsor assignment as new agents are deployed.
📄️ Create Access Packages for Agent Resource Assignments
Implementation Effort: Medium – Requires defining catalogs, selecting resource roles, configuring approval policies, and setting expiration and lifecycle rules for each access package.
📄️ Configure Lifecycle Workflows for Sponsor Mover/Leaver Scenarios
Implementation Effort: Medium – Requires configuring lifecycle workflow templates in Microsoft Entra ID Governance and integrating them with the organization's existing joiner/mover/leaver processes.
📄️ Inventory Workload Identities with Agent-Like Behavior
Implementation Effort: Medium – Requires reviewing existing managed identities and service principals across Azure subscriptions to identify those exhibiting autonomous, agent-like behavior patterns.
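The review can be seeded from a Microsoft Graph query that lists managed identities for triage. The query below is a starting sketch; the `looks_agent_like` heuristic is purely illustrative (any set of broad application permissions could stand in for the ones shown) and is not a definitive classification rule.

```python
from urllib.parse import urlencode

GRAPH = "https://graph.microsoft.com/v1.0/servicePrincipals"

# List managed identities for review; $select keeps the payload small.
# Re-run with "servicePrincipalType eq 'Application'" to also cover
# app-registration-based workloads.
params = {
    "$filter": "servicePrincipalType eq 'ManagedIdentity'",
    "$select": "id,displayName,appId,servicePrincipalType",
    "$top": "999",
}
inventory_url = f"{GRAPH}?{urlencode(params)}"
print(inventory_url)

def looks_agent_like(app_role_names):
    """Illustrative triage heuristic only: flag identities holding the
    broad application permissions often seen in autonomous workloads."""
    broad = {"Mail.Send", "Files.ReadWrite.All", "Sites.ReadWrite.All"}
    return bool(broad & set(app_role_names))
```

Flagged identities still need human review against sign-in activity before being treated as agent candidates; the heuristic only narrows the list.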
📄️ Triage Workload Identities as Agent Candidates
Implementation Effort: Medium – Requires cross-team evaluation of inventoried workload identities against migration criteria, including risk level, permission scope, and operational autonomy.
📄️ Migrate Agent-Candidate Workloads to Agent Identities
Implementation Effort: High – Requires coordinated migration of workload identities to the agent identity framework, including re-registration in Agent Registry, permission reassignment, and validation of downstream service dependencies.
📄️ Enable Global Secure Access for Copilot Studio Agents
Implementation Effort: Low – Requires enabling the Global Secure Access for Agents toggle in the Power Platform admin center for the target environment or environment group.
📄️ Update Copilot Studio Connectors to Route Through Global Secure Access
Implementation Effort: Medium – Requires identifying all existing Copilot Studio custom connectors and editing each one to apply the Global Secure Access routing configuration; new connectors inherit routing automatically.
📄️ Configure GSA Dashboard for Generative AI App Visibility
Implementation Effort: Low – Dashboard is available in the Microsoft Entra admin center once Global Secure Access is configured; requires reviewing widgets and applying the generative AI filter.
📄️ Configure Web Content Filtering for AI App Categories
Implementation Effort: Low – Requires creating a web content filtering policy in the Microsoft Entra admin center with rules targeting AI-related web categories; minimal infrastructure changes.
📄️ Link Filtering Policies to Baseline Profile for Agent Traffic
Implementation Effort: Low – Requires linking existing web content filtering policies to the baseline security profile in the Microsoft Entra admin center; a few clicks per policy.
📄️ Configure Prompt Shield for AI Traffic Inspection
Implementation Effort: Medium – Requires configuring prompt policies, conversation schemes for target LLMs, linking policies to security profiles, and creating a Conditional Access policy to scope enforcement.
📄️ Extend Prompt Shield to Custom Enterprise LLM Endpoints
Implementation Effort: Medium – Requires understanding the request and response format of each custom LLM endpoint to define conversation schemes that Prompt Shield can parse.
📄️ Enable Microsoft Purview Audit for AI Interactions
Implementation Effort: Low – Audit logging is on by default for most Microsoft 365 organizations; verification and enablement require a single PowerShell command or portal action.
📄️ Deploy DSPM for AI Overview and Get Started Prerequisites
Implementation Effort: Low – DSPM for AI is a built-in Microsoft Purview capability that requires license activation and minimal configuration; no agents or infrastructure deployment needed.
📄️ Assess and Remediate Data Oversharing for Copilot Readiness
Implementation Effort: Medium – Requires running oversharing assessments in Microsoft Purview DSPM for AI, creating custom assessments for priority sites, and coordinating remediation actions across site owners, data stewards, and security teams.
📄️ Configure DSPM for AI Activity Explorer and Observability
Implementation Effort: Low – Activity explorer is a built-in capability in Microsoft Purview DSPM for AI that requires minimal configuration; extending observability to Agent 365 instances uses the same console with preview features enabled.
📄️ Deploy Collection Policies for AI Interaction Locations
Implementation Effort: Low – Collection policies are deployed through the DSPM for AI console with minimal configuration; extending coverage to enterprise AI apps may require coordination with network teams if third-party SASE/SSE providers are in use.
📄️ Configure DLP Policies for M365 Copilot and Agent 365 Locations
Implementation Effort: Medium – Requires designing DLP policy rules for the Microsoft 365 Copilot location, configuring blocking rules for highly classified sensitivity labels, and extending policy scope to include Agent 365 instances; all of these steps demand cross-team coordination between compliance, security, and data governance teams.
📄️ Deploy Endpoint and Browser DLP Policies for AI Data Locations
Implementation Effort: Medium – Requires configuring endpoint DLP policies scoped to AI-accessible data locations, deploying browser DLP for AI interactions in Microsoft Edge, and optionally leveraging DSPM for AI one-click policy deployment to accelerate coverage.
📄️ Deploy Risky AI and Risky Agents Policies
Implementation Effort: Low – One-click deployment is available from DSPM for AI; alternatively, configure the policies manually through Insider Risk Management.
📄️ Configure Adaptive Protection and Priority User Groups
Implementation Effort: Medium – Requires integration between Insider Risk Management and DLP policies, plus identification of priority users.
📄️ Establish Recurring Triage Process for AI Insider Risk Alerts
Implementation Effort: Medium – Requires defining triage workflows, assigning SOC analysts to review AI-specific insider risk alerts on a regular cadence, and coordinating escalation paths with legal, HR, and compliance teams.
📄️ Define Isolation Requirements for Agent Infrastructure
Implementation Effort: Medium – Requires architectural decisions and documentation across network and data domains.
📄️ Require Content Safety SDK Integration for All Agent Inputs Across Hosting Platforms
Implementation Effort: Low – Establishing the requirement is a policy and documentation task; SDK integration per agent is lightweight with available client libraries.
📄️ Deploy and Configure the AI Gateway
Implementation Effort: High – Requires provisioning Azure API Management infrastructure, defining API policies for token rate limiting and content safety, configuring networking, and integrating with identity providers and backend AI services.
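For the token-rate-limiting piece specifically, APIM ships a GenAI gateway policy that can be attached to the inbound section of an Azure OpenAI-fronting API. The fragment below is a hedged sketch: the values are placeholders, and attribute names should be verified against the current APIM policy reference before use.

```xml
<!-- Illustrative APIM inbound policy fragment: per-subscription token
     rate limiting for an Azure OpenAI backend. Values are placeholders. -->
<inbound>
    <base />
    <azure-openai-token-limit
        counter-key="@(context.Subscription.Id)"
        tokens-per-minute="5000"
        estimate-prompt-tokens="true" />
</inbound>
```

Keying the counter to the APIM subscription gives each consuming team or agent its own budget, which is usually the point of fronting AI services with a gateway.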
📄️ Establish APIM Gateway Requirement for MCP Servers
Implementation Effort: Low – Establishing the requirement is a policy decision; APIM already supports MCP server exposure with minimal configuration.
📄️ Define Sensitivity Label Inheritance Requirements for AI Outputs
Implementation Effort: Low – Defining the requirements is a policy task; Microsoft Purview already supports label inheritance behavior for AI interactions that administrators configure centrally.
📄️ Configure AI Red Teaming Agent in Foundry
Implementation Effort: Medium – Requires provisioning the AI Red Teaming Agent in Azure AI Foundry, configuring attack scenarios and target endpoints, and interpreting the results to prioritize remediation.
📄️ Establish Red Teaming Requirement for New Agent Deployments
Implementation Effort: Low – Establishing the requirement is a policy and process task; the underlying tooling (AI Red Teaming Agent) is already provisioned.
📄️ Establish Recurring AI Red Teaming Validation Cadence
Implementation Effort: Medium – Requires defining a recurring schedule, assigning ownership, integrating red teaming results into remediation workflows, and tracking posture changes over time.
📄️ Establish Identity Requirements for Agent Development
Implementation Effort: Low – Requires documenting standards and integrating into development processes.
📄️ Enable Diagnostic Logging for AI Services and Agents
Implementation Effort: Medium – Requires creating centralized logging infrastructure and configuring logging across multiple AI platforms.
📄️ Deploy Microsoft Sentinel Workspace for AI Threat Detection
Implementation Effort: Medium – Requires provisioning a Log Analytics workspace, onboarding Microsoft Sentinel, configuring data connectors for AI workload telemetry, and setting retention policies appropriate for security investigation timelines.
📄️ Enable AI-Specific Analytics Rules for Prompt Injection Detection
Implementation Effort: Medium – Requires identifying and enabling the relevant built-in analytics rule templates from the Content hub, mapping entity types to AI workload identities, and tuning thresholds to the organization's AI usage patterns.
📄️ Create Custom Analytics Rules for Agent Anomaly Detection
Implementation Effort: Medium – Requires writing KQL queries against AI workload telemetry tables, defining baseline behavior for agent identities, and iterating on thresholds through testing against real data.
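A custom rule typically starts life as an ad hoc KQL query run against the workspace before being promoted to a scheduled analytics rule. The sketch below uses the `azure-monitor-query` SDK; the table choice, 5-minute bin, and request threshold are illustrative placeholders to be tuned against your own agent sign-in baseline, not recommended values.

```python
# Sketch of an agent sign-in spike query for the Sentinel workspace.
KQL_AGENT_SPIKE = """
AADServicePrincipalSignInLogs
| where TimeGenerated > ago(1h)
| summarize Requests = count() by ServicePrincipalId, bin(TimeGenerated, 5m)
| where Requests > 500  // placeholder threshold; derive from baseline
"""

def run_query(workspace_id):
    """Run the query against a Log Analytics workspace.
    Requires: pip install azure-monitor-query azure-identity"""
    from datetime import timedelta

    from azure.identity import DefaultAzureCredential
    from azure.monitor.query import LogsQueryClient

    client = LogsQueryClient(DefaultAzureCredential())
    return client.query_workspace(
        workspace_id, KQL_AGENT_SPIKE, timespan=timedelta(hours=1)
    )
```

Once the thresholds stabilize, the same KQL can be pasted into a scheduled analytics rule so incidents are raised automatically instead of being found by ad hoc runs.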
📄️ Configure AI Threat Detection Workbooks
Implementation Effort: Low – Involves deploying workbook templates from the Content hub and customizing visualizations to focus on AI-specific analytics rule outputs and incident data.
📄️ Create AI Incident Response Playbooks in Sentinel
Implementation Effort: Medium – Requires designing response workflows for AI-specific incident types and coordinating with SOC teams on containment actions for AI workloads.
📄️ Configure Automated Response Rules for High-Risk AI Activity
Implementation Effort: Medium – Requires defining triage and escalation logic for AI-specific incident types and testing automation rules against real AI incident data.
📄️ Integrate AI Threat Response with Defender XDR
Implementation Effort: Medium – Requires onboarding the Sentinel workspace to the Defender portal, validating bi-directional incident sync, and reconciling automation rules that may behave differently in the unified portal.
📄️ Configure Scheduled AI Threat Review Queries and Dashboards
Implementation Effort: Low – Involves creating saved KQL queries and configuring a Sentinel workbook with AI-specific visualizations; no cross-team coordination is required.
📄️ Configure Retention Policies for AI Prompts and Responses
Implementation Effort: Medium – Requires defining retention periods for AI interaction data, coordinating with legal and compliance teams on regulatory requirements, and configuring policies across Copilot and agent workloads.
📄️ Configure Defender for Cloud Apps GenAI App Discovery
Implementation Effort: Medium – Requires Defender for Cloud Apps and Defender for Endpoint to be deployed; involves configuring cloud app discovery policies for the generative AI category and setting up monitoring or blocking rules.
📄️ Block Access to Unsanctioned AI Apps with Defender for Cloud Apps
Implementation Effort: Low – Policy creation in Defender for Cloud Apps is straightforward once GenAI app discovery is configured.
📄️ Configure Data Quality Rules for AI Grounding Data Sources
Implementation Effort: Medium – Requires identifying the data sources used for AI grounding, defining data quality rules in Microsoft Purview, and establishing ongoing monitoring to detect quality degradation over time.