Configure Scheduled AI Threat Review Queries and Dashboards

Implementation Effort: Low – Involves creating saved KQL queries and configuring a Sentinel workbook with AI-specific visualizations. Does not require cross-team coordination.
User Impact: Low – Admin-only activity; dashboards are consumed by SOC analysts and security leadership, not by end users.

Overview

Detection rules and automation handle individual AI incidents as they occur, but point-in-time response does not reveal trends — whether prompt injection attempts are increasing week over week, whether specific agent identities generate disproportionate alert volumes, or whether mean time to resolve AI incidents is improving. Without a structured review mechanism, the SOC operates reactively and lacks the data to assess whether its AI detection coverage is adequate.
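A week-over-week trend such as the prompt injection volume described above can be sketched in KQL against the built-in `SecurityAlert` table. The `"AI -"` title prefix below is a hypothetical naming convention for AI analytics rules — substitute whatever convention your workspace actually uses:

```kql
// Weekly alert volume per AI analytics rule (sketch).
// Assumes AI-related rules share an "AI -" title prefix — adjust to your naming.
SecurityAlert
| where TimeGenerated > ago(90d)
| where AlertName startswith "AI -"
| summarize AlertCount = count() by Week = startofweek(TimeGenerated), AlertName
| order by Week asc
```

Rendered as a time chart in a workbook, this makes a sustained rise in any single rule's weekly count visible at a glance.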

This task creates the review artifact: a Sentinel workbook that aggregates AI-specific incident data into trend lines, entity breakdowns, and detection gap indicators, paired with saved KQL queries that analysts can run on demand to investigate emerging patterns. The queries also serve as the starting point for new analytics rules when reviews identify coverage gaps — closing the feedback loop between incident response and detection engineering.
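One of the saved queries might break alert volume out by agent identity, as a minimal sketch. This assumes agent identities surface as account entities on the alert; the `Entities` payload shape varies by alert provider, so treat the field paths as illustrative:

```kql
// Top agent identities by AI alert volume (sketch).
// Assumes AI rules use an "AI -" title prefix and that agent identities
// appear as "account" entities in the alert's Entities payload.
SecurityAlert
| where TimeGenerated > ago(30d)
| where AlertName startswith "AI -"
| mv-expand Entity = todynamic(Entities)
| where tostring(Entity.Type) == "account"
| summarize AlertCount = count() by AgentIdentity = tostring(Entity.Name)
| top 10 by AlertCount desc
```

An agent identity that dominates this breakdown is a candidate either for a tuned detection or for an investigation into why it trips rules so often.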

This supports Assume breach by giving the SOC continuous visibility into AI threat trends, enabling the team to identify and close detection gaps before a threat actor exploits an unmonitored attack vector. It supports Verify explicitly by surfacing patterns that are invisible at the individual incident level — for example, a gradual increase in low-severity prompt injection attempts that individually do not warrant escalation but collectively signal a probing campaign. Without this task, AI threat detection remains static: rules fire when threats occur, but nobody systematically evaluates whether those rules are catching what they should.
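The probing-campaign pattern described above — low-severity alerts that matter only in aggregate — can be surfaced with a week-over-week delta. The `has "prompt injection"` filter assumes the rule titles contain that phrase; adjust to your rule naming:

```kql
// Week-over-week change in low-severity prompt injection alerts (sketch).
// Individually these alerts do not escalate; a steady climb suggests probing.
SecurityAlert
| where TimeGenerated > ago(56d)
| where AlertName has "prompt injection"   // assumed rule naming
| where AlertSeverity == "Low"
| summarize LowSevCount = count() by Week = startofweek(TimeGenerated)
| order by Week asc
| extend WeekOverWeekDelta = LowSevCount - prev(LowSevCount)
```

A consistently positive `WeekOverWeekDelta` over several weeks is exactly the collective signal that is invisible at the individual incident level.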

Reference