RAI Planning

Responsible AI assessment should be as structured and repeatable as security planning. When every AI system goes through the same principled evaluation, risk coverage improves, stakeholder trust increases, and compliance gaps surface before they become costly.

Why Use RAI Planning?

The RAI Planner agent transforms informal AI ethics reviews into a repeatable, evidence-backed assessment:

  • 🔍 Systematic coverage evaluates each AI component against six RAI principles and seven threat categories, eliminating the guesswork of ad-hoc reviews
  • 📊 Quantified outcomes produce an RAI scorecard scored across five dimensions, so stakeholders can compare assessment quality across projects
  • 🔗 Security plan integration picks up where the Security Planner leaves off, inheriting AI component data and continuing threat ID sequences without duplication

TIP

If you have already completed a security plan, the from-security-plan entry mode is recommended. It pre-populates AI system scope from the security plan's state.json and starts RAI threat IDs at the next sequence after the security plan's threat count.
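That sequencing rule can be sketched in a few lines of Python. This is an illustrative assumption, not the agent's actual implementation: it assumes state.json keeps its threats in a top-level "threats" list, and the real schema may differ.

```python
import json
from pathlib import Path

def load_security_state(path: str) -> dict:
    """Load the security plan's state.json."""
    return json.loads(Path(path).read_text())

def next_rai_sequence(state: dict) -> int:
    """Sequence number for the first RAI threat ID, continuing
    after the security plan's threat count.

    Assumes (hypothetically) a top-level "threats" list in the
    security plan state; the agent's real schema may differ.
    """
    return len(state.get("threats", [])) + 1
```

For example, a security plan with 12 threats would start RAI threat IDs at sequence 13.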

How It Works

The RAI Planner follows six sequential phases, each mapped to NIST AI RMF functions. Every phase produces artifacts, and the agent never advances without your confirmation.

Phase 1: AI System Scoping

Discover the AI system's purpose, technology stack, deployment model, and stakeholder roles. Classify AI components and establish assessment boundaries. Maps to the NIST AI RMF Govern and Map functions.
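One possible shape for the Phase 1 scoping record is sketched below. The field names are illustrative assumptions for this page, not the agent's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemScope:
    """Sketch of a Phase 1 output record (hypothetical field names)."""
    purpose: str                    # what the AI system is for
    technology_stack: list[str]     # e.g. ["LLM", "RAG", "vector DB"]
    deployment_model: str           # e.g. "cloud", "edge", "hybrid"
    stakeholder_roles: list[str]    # who interacts with or is affected by it
    ai_components: list[str] = field(default_factory=list)  # assessment units
```

Whatever the real schema looks like, the key point is that these fields become the assessment boundary that Phases 2 through 6 operate within.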

Phase 2: Sensitive Uses Assessment

Screen the AI system against Microsoft's sensitive uses categories. Identify restricted uses requiring escalation. Map vulnerable populations and downstream effects with harm severity ratings.

Phase 3: RAI Standards Mapping

Map AI system components and behaviors to the six RAI principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Cross-reference with NIST AI RMF subcategories and applicable regulations.

Phase 4: RAI Security Model Analysis

Apply AI-specific threat analysis per component using seven threat categories: data poisoning, model evasion, prompt injection, output manipulation, bias amplification, privacy leakage, and misuse escalation. Threats follow the RAI-T-{CATEGORY}-{NNN} format.
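The ID scheme above can be sketched as follows. The category short codes here are assumed abbreviations of the seven categories, not the agent's confirmed codes.

```python
import re

# Hypothetical short codes for the seven threat categories;
# the agent's actual abbreviations may differ.
CATEGORIES = {"POISONING", "EVASION", "INJECTION", "OUTPUT",
              "BIAS", "PRIVACY", "MISUSE"}

THREAT_ID = re.compile(r"^RAI-T-(?P<cat>[A-Z]+)-(?P<seq>\d{3})$")

def make_threat_id(category: str, seq: int) -> str:
    """Format a threat ID as RAI-T-{CATEGORY}-{NNN}."""
    if category not in CATEGORIES:
        raise ValueError(f"unknown category: {category}")
    return f"RAI-T-{category}-{seq:03d}"

def parse_threat_id(threat_id: str):
    """Return (category, sequence) for a valid ID, else None."""
    m = THREAT_ID.match(threat_id)
    if not m or m.group("cat") not in CATEGORIES:
        return None
    return m.group("cat"), int(m.group("seq"))
```

Zero-padding the sequence to three digits keeps IDs sortable when threats from the security plan and the RAI assessment are tracked together.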

Phase 5: RAI Impact Assessment

Evaluate control surface completeness for each identified threat. Document existing mitigations, identify gaps, analyze tradeoffs between competing RAI principles, and generate the control surface catalog and evidence register.
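A toy version of the completeness check might look like this. Both the threat schema and the coverage metric are illustrative assumptions, not the agent's real logic.

```python
def control_surface_coverage(threats: dict[str, dict]) -> dict[str, float]:
    """Per-threat gap metric: fraction of required controls that
    already have an existing mitigation (schema is hypothetical)."""
    coverage = {}
    for tid, t in threats.items():
        required = t.get("required_controls", [])
        existing = set(t.get("existing_mitigations", []))
        coverage[tid] = (
            sum(1 for c in required if c in existing) / len(required)
            if required else 1.0
        )
    return coverage
```

Threats scoring below 1.0 are the gaps that feed the control surface catalog and, eventually, the Phase 6 backlog.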

Phase 6: Review and Handoff

Present the RAI scorecard summarizing all findings. Generate backlog items for identified gaps and hand off to the ADO or GitHub backlog system. Optionally dispatch findings back to the Security Planner for integrated tracking.
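As a rough sketch of how scorecard scores could become backlog candidates (the dimension names, the 0-1 scale, and the threshold are all illustrative assumptions):

```python
def scorecard_gaps(scores: dict[str, float],
                   threshold: float = 0.8) -> list[str]:
    """Return scorecard dimensions scoring below the threshold,
    as candidates for backlog items (scale and cutoff are assumed)."""
    return sorted(d for d, s in scores.items() if s < threshold)
```

Each returned dimension would then be turned into a work item in the ADO or GitHub backlog, keeping RAI gaps visible alongside security findings.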

Entry Modes

Three entry modes determine how Phase 1 begins. All converge at Phase 2 once AI system scoping completes.

| Mode | Source | Best for |
| --- | --- | --- |
| capture | Fresh interview | New AI projects without prior artifacts |
| from-prd | PRD/BRD documents | Projects with product definition artifacts |
| from-security-plan | Security plan state | Projects that completed security planning first (recommended) |

See entry modes for detailed guidance on when to choose each mode and what each mode pre-populates.

| Page | Description |
| --- | --- |
| Why RAI planning? | The case for structured RAI assessment over ad-hoc reviews |
| Agent overview | Architecture, state management, and interaction model |
| Entry modes | Choosing between capture, from-prd, and from-security-plan |
| Phase reference | Detailed inputs, outputs, and state transitions for all six phases |
| Handoff pipeline | Scorecard generation, backlog output, and the Security-to-RAI pipeline |
| Security planning overview | The Security Planner agent that feeds into RAI assessment |

🤖 Crafted with precision by ✨Copilot following brilliant human instruction, then carefully refined by our team of discerning human reviewers.