
Enable Defender for Cloud AI security posture management

Implementation Effort: Medium – Requires the Defender CSPM plan to be enabled on Azure subscriptions hosting AI workloads; for AWS accounts, permissions must be reconfigured to enable AI posture capabilities.
User Impact: Low – Infrastructure-level security assessment; end users are not affected.

Overview

Organizations deploying Azure OpenAI, Azure AI Foundry, Azure Machine Learning, and multi-cloud AI workloads on Amazon Bedrock or Google Vertex AI need to know what AI components exist in their environment and whether those components are securely configured. Without AI security posture management, the security team cannot systematically discover deployed AI models, identify vulnerable AI library dependencies in container images, detect Infrastructure as Code misconfigurations in AI service deployments, or analyze attack paths that could expose AI workloads to compromise. Each of these gaps represents a category of risk that remains invisible until exploited.

Defender for Cloud's AI security posture management, part of the Defender CSPM plan, provides continuous discovery of the AI Bill of Materials — application components, data sources, and AI artifacts — from code to cloud. It discovers AI workloads across Azure OpenAI, Azure AI Foundry, Azure Machine Learning, Amazon Bedrock, and Google Vertex AI, and extends to AI agent workloads deployed through Azure AI Foundry and Copilot Studio. The capability surfaces security recommendations on identity, data security, and internet exposure, detects IaC misconfigurations early in the development cycle, identifies vulnerable generative AI library dependencies, and provides attack path analysis that maps how threat actors could move from an initial foothold to AI workload compromise.
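Enabling the capability amounts to turning on the Defender CSPM plan for each subscription that hosts AI workloads. This can be done in the Azure portal, and the same setting is exposed through the Microsoft.Security pricings API, where Defender CSPM is the plan named CloudPosture. The Python sketch below illustrates that call; it assumes the azure-identity and requests packages, sufficient rights to change Defender plans, and a placeholder subscription ID, and the api-version shown is one published version rather than the only valid one.

    # Sketch: enable the Defender CSPM plan ("CloudPosture") on a single
    # subscription through the Microsoft.Security pricings REST API.
    # Assumptions: azure-identity and requests are installed, the caller can
    # modify Defender plans, and SUBSCRIPTION_ID is replaced with a real ID.
    from azure.identity import DefaultAzureCredential
    import requests

    SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
    API_VERSION = "2023-01-01"  # one published pricings api-version

    # Acquire an ARM token via the default credential chain (Azure CLI,
    # managed identity, environment variables, and so on).
    token = DefaultAzureCredential().get_token(
        "https://management.azure.com/.default"
    ).token

    url = (
        f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
        "/providers/Microsoft.Security/pricings/CloudPosture"
    )
    resp = requests.put(
        url,
        params={"api-version": API_VERSION},
        headers={"Authorization": f"Bearer {token}"},
        json={"properties": {"pricingTier": "Standard"}},  # "Free" turns it off
    )
    resp.raise_for_status()
    print(resp.json()["properties"]["pricingTier"])  # expect "Standard"

For AWS accounts, the corresponding step is reconfiguring the connector's permissions so the AI posture capabilities are granted, rather than a pricing call on the subscription.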

This supports Assume breach by continuously analyzing how AI workloads could be attacked and what data would be exposed, enabling the organization to remediate the highest-risk paths before they are exploited. It supports Verify explicitly by assessing each AI workload's configuration against security recommendations and surfacing deviations from secure baselines. Without this capability, AI workloads are deployed and operated without systematic security assessment, misconfigurations persist undetected, and the organization's AI attack surface grows with each new model deployment.
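As a quick check that assessments are actually being produced once the plan is on, the resulting recommendations can be queried from the securityresources table in Azure Resource Graph. The sketch below assumes the azure-identity and azure-mgmt-resourcegraph packages; the "AI" display-name filter is an illustrative assumption rather than an official recommendation name, so adjust it to the recommendations you expect to see.

    # Sketch: list Defender for Cloud assessments whose display name
    # mentions AI, via Azure Resource Graph. The filter string is an
    # illustrative assumption, not an exact recommendation name.
    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resourcegraph import ResourceGraphClient
    from azure.mgmt.resourcegraph.models import QueryRequest

    SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder

    client = ResourceGraphClient(DefaultAzureCredential())
    request = QueryRequest(
        subscriptions=[SUBSCRIPTION_ID],
        query="""
        securityresources
        | where type == 'microsoft.security/assessments'
        | where properties.displayName contains 'AI'
        | project displayName = tostring(properties.displayName),
                  status = tostring(properties.status.code)
        """,
    )
    result = client.resources(request)
    # Recent SDK versions return rows as a list of dicts.
    for row in result.data:
        print(f"{row['status']:10} {row['displayName']}")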
