Configure Defender for Cloud Apps GenAI app discovery

Implementation Effort: Medium – Requires Defender for Cloud Apps and Defender for Endpoint to be deployed; involves configuring cloud app discovery policies for the generative AI category and setting up monitoring or blocking rules.
User Impact: Low – Discovery and monitoring are transparent to end users unless blocking actions are applied to specific apps.

Overview

Users adopt generative AI applications faster than security teams can evaluate them. Consumer AI tools, industry-specific AI services, and unvetted third-party generative AI apps appear in the organization's network traffic without any formal approval process. Without a discovery mechanism, the security team cannot distinguish between sanctioned AI tools like Microsoft 365 Copilot and unsanctioned ones that may leak sensitive data, lack enterprise security controls, or violate compliance requirements. This is the AI-specific version of shadow IT, and it requires the same discovery-first approach.

Microsoft Defender for Cloud Apps provides a cloud app catalog with security and compliance risk scores for generative AI applications, and a discovery engine that identifies which of those apps are actually in use across the organization. The generative AI category filter isolates AI-specific apps from the broader cloud app landscape. From there, the security team can create monitoring policies that alert on new generative AI app usage, or blocking policies that mark specific apps as unsanctioned — which, when integrated with Defender for Endpoint, prevents those apps from running on onboarded devices. Microsoft Purview DSPM for AI extends this further with data security visibility into AI interactions.
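The sanction/unsanction decision described above can be sketched as a simple triage over discovered apps. This is an illustrative model only, not the Defender for Cloud Apps API: the app names, the score threshold, and the data shape are hypothetical, while the real catalog risk scores (0–10) and the "Generative AI" category filter come from the product itself.

```python
# Illustrative sketch: models the sanction/unsanction triage that a security
# team performs over Defender for Cloud Apps discovery results. All names and
# the threshold are assumptions for the example; real scores come from the
# cloud app catalog (0 = highest risk, 10 = most trusted).
from dataclasses import dataclass


@dataclass
class DiscoveredApp:
    name: str
    category: str
    risk_score: int  # catalog risk score, 0 (risky) to 10 (trusted)


MIN_SANCTIONED_SCORE = 8  # hypothetical organizational policy threshold


def triage(apps: list[DiscoveredApp]) -> dict[str, list[str]]:
    """Split discovered generative AI apps into sanctioned and unsanctioned."""
    result: dict[str, list[str]] = {"sanctioned": [], "unsanctioned": []}
    for app in apps:
        if app.category != "Generative AI":
            continue  # the category filter isolates AI apps from other cloud apps
        bucket = (
            "sanctioned"
            if app.risk_score >= MIN_SANCTIONED_SCORE
            else "unsanctioned"
        )
        result[bucket].append(app.name)
    return result


if __name__ == "__main__":
    discovered = [
        DiscoveredApp("ExampleCopilot", "Generative AI", 9),  # hypothetical
        DiscoveredApp("UnvettedChatAI", "Generative AI", 3),  # hypothetical
        DiscoveredApp("FileShareTool", "Cloud storage", 5),   # filtered out
    ]
    print(triage(discovered))
```

In the product, the "unsanctioned" bucket corresponds to tagging an app as unsanctioned, which Defender for Endpoint then enforces as a block on onboarded devices; the sketch only shows the classification step that precedes that enforcement.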

This supports Verify explicitly by identifying every generative AI application in use and assessing its risk score before granting or denying organizational access. It supports Use least privilege access by enabling the security team to restrict AI app usage to only those apps that meet security and compliance requirements, rather than allowing unrestricted access by default. Without this discovery capability, unsanctioned AI apps operate undetected, sensitive data flows to uncontrolled services, and the organization cannot enforce its AI governance policies.
