Block Access to Unsanctioned AI Apps with Defender for Cloud Apps

Implementation Effort: Low – Policy creation in Defender for Cloud Apps is straightforward once GenAI app discovery has been configured.
User Impact: Medium – Users attempting to access blocked AI apps will be denied and may need guidance on sanctioned alternatives.

Overview

Discovering which generative AI apps users are accessing is only the first step. Without enforcement, discovery data becomes a report that no one acts on — users continue sending prompts, uploading files, and sharing enterprise data with AI services the organization has not vetted. This task closes the gap between visibility and control by creating Defender for Cloud Apps policies that block access to AI apps the organization has classified as unsanctioned.

Defender for Cloud Apps maintains a catalog of generative AI applications with risk scores based on factors such as data handling practices, compliance certifications, and security posture. Once GenAI app discovery is in place, administrators can review discovered apps, mark them as sanctioned or unsanctioned, and create policies that enforce those decisions. Blocking policies prevent users from reaching unsanctioned AI destinations, complementing the URL-category-level filtering provided by Global Secure Access (GSA) web content filtering. Where GSA blocks at the network layer by AI URL category, Defender for Cloud Apps blocks at the application layer with per-app granularity, allowing the organization to sanction one AI coding assistant while blocking another even when both share the same URL category.
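For organizations that also enforce blocking on their own network appliances, the unsanctioned tags can be exported as a ready-made block script through the Cloud Discovery API. The Python sketch below assumes the documented legacy API token authentication and the "Generate block script" endpoint; the tenant URL, the token, and the Palo Alto format code (106) are illustrative placeholders that should be verified against your tenant and appliance type before use.

```python
"""Minimal sketch: export a firewall block script for unsanctioned apps
via the Defender for Cloud Apps Cloud Discovery API.

Assumptions to verify against your tenant:
  - TENANT_URL and API_TOKEN are hypothetical placeholders; the real
    portal URL and legacy API token come from Settings > API tokens.
  - The /api/discovery_block_scripts/ endpoint and its format/type
    parameters follow the documented "Generate block script" API;
    format code 106 (Palo Alto) is one example of an appliance type.
"""
import requests

TENANT_URL = "https://<tenant>.<region>.portal.cloudappsecurity.com"  # placeholder
API_TOKEN = "<api-token>"  # placeholder; do not hard-code in production

def fetch_block_script(appliance_format: int = 106) -> str:
    """Return a block script covering every app tagged as unsanctioned."""
    response = requests.get(
        f"{TENANT_URL}/api/discovery_block_scripts/",
        params={"format": appliance_format, "type": "banned"},
        headers={"Authorization": f"Token {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.text  # appliance-ready rules for the blocked apps

if __name__ == "__main__":
    print(fetch_block_script())
```

The returned script can then be imported into the appliance, keeping network-level blocking in sync with the sanctioned/unsanctioned tags set in the portal.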

This task supports Verify explicitly by ensuring every AI application a user accesses has been evaluated and approved against the organization's risk criteria, rather than allowing access by default to any AI service that is not explicitly malicious. It supports Assume breach by reducing the attack surface: unsanctioned AI apps may lack enterprise-grade data handling, making them potential exfiltration paths for sensitive data, whether through intentional misuse or prompt manipulation. If this task is not completed, the organization has visibility into shadow AI usage but no mechanism to stop it, leaving the decision to use unapproved AI tools entirely in the hands of individual users.
