Rapid Agent Prototype Pattern (RAPP)
Compress 3-6 month development cycles into days using the "Buzz Saw + Human Taste" methodology.
Executive Summary
The Rapid Agent Prototype Pattern (RAPP) represents a fundamental shift in how AI agents are built, tested, and deployed at enterprise scale. Traditional software development is like a handsaw - slow, manual, laborious. RAPP is the buzz saw - fast, tireless, scalable.
The Core Philosophy: Let AI do the heavy lifting (the "buzz saw"), but humans provide the judgment that separates good code from production-ready solutions (the "taste").
Comprehensive Documentation
This documentation provides a complete view of the Rapid Agent Prototype Pattern (RAPP), combining:
- Technical Implementation - Code examples, architecture details, and deployment specifics for developers
- Business Context - ROI metrics, process workflows, and strategic value for executives
- Practical Guidance - Step-by-step instructions and real-world examples for all team members
All content is presented in a unified, comprehensive format to give you the complete picture.
What RAPP Delivers
- Discovery to MVP in Days: Convert customer conversations into working prototypes overnight
- Infinite Scalability: Work on 20+ projects simultaneously with the same team
- Zero Code Dependency: Agents hot-load from Azure File Storage - no redeployment needed
- Universal Architecture: Works in Teams, web, mobile, voice - same AI, same context
- Production-Ready Quality: Most generated agents pass human review on first attempt
- Streamlined Video Generation: Turn prototypes into customer-ready demos (manual process with automation roadmap)
RAPP 14-Step Process
Every project follows this standardized workflow, with 6 human quality gates preventing scope creep and ensuring value:
| Step | Type | Duration | Owner |
|---|---|---|---|
| 1. Discovery Call | MANUAL | 30-60 min | Field Team |
| 2. Transcript Analysis | AUDIT | 15 min | Delivery Team |
| 3. Generate MVP "Poke" | MANUAL | 1-2 hours | Delivery Team |
| 4. Customer Validation | AUDIT | 1-2 days | Field Team |
| 5. Generate Agent Code | MANUAL | Days-Weeks | Technical Team |
| 6. Code Quality Review | AUDIT | 30-60 min | Technical Team |
| 7. Deploy Prototype | MANUAL | 1-2 hours | Technical Team |
| 8. Demo Review | AUDIT | 30 min | Delivery Lead |
| 9. Generate Video Demo | MANUAL | 3-4 hours | Delivery Team |
| 10. Feedback Processing | AUDIT | 1-3 days | Field Team |
| 11. Iteration Loop | MANUAL | Variable | Full Team |
| 12. Production Deployment | MANUAL | 1 day | Technical Team |
| 13. Post-Deployment Audit | AUDIT | Ongoing | Success Team |
| 14. Scale & Maintain | MANUAL | Ongoing | Technical Team |
Key Insight: The Risk Elimination Pattern
Traditional software development's biggest risk: spending 6 months on a problem no one cares about. This framework is designed to eliminate that risk.
How? Multiple customer validation checkpoints before significant time investment. The MVP "poke" at Step 3 gets customer feedback before any real work begins. By Step 6, you have code locked in scope with customer sign-off.
System Architecture
A 4-layer universal intelligence system designed for infinite scale and zero-downtime updates.
Why Architecture Matters (Business View)
The architecture enables rapid agent updates with zero downtime, infinite horizontal scaling for concurrent users, and universal memory so context persists across Teams, Web, and Mobile. This translates to faster iteration cycles, lower operational costs, and seamless user experiences.
The Four Layers
The architecture separates concerns into four distinct, independently scalable layers. This design allows agents to hot-load without redeployment, context to persist across all channels, and the system to degrade gracefully when components fail.
| Layer | Responsibility | Technology |
|---|---|---|
| 1. Interface Layer | User interaction channels | M365 Copilot, Teams, Web, Mobile, Voice |
| 2. Orchestration Layer | Conversation management, routing, history | Copilot Studio + Power Automate |
| 3. Intelligence Layer | Agent execution, decision-making, tool calling | Azure Function App (500 lines of code) |
| 4. Storage Layer | Agent code, user memory, workflow definitions | Azure File Storage |
Why This Architecture Wins
1. Hot-Loading Without Redeployment
Problem: Traditional deployments require downtime, testing pipelines, and approval processes. Making a small agent tweak could take days.
Solution: Agents live in Azure File Storage at memory/{user-guid}/agents/. The Intelligence Layer dynamically loads them on each request. Drop in a new Python file, and it's live instantly.
Real-World Example: Field Sales Route Optimization
Scenario: Customer requests a change to route optimization logic at 2 PM.
Traditional Approach: 3-5 days for code change β testing β approval β deployment.
This Architecture:
- Update dynamic-route-planning-agent.py locally
- Upload to the Azure File Storage /agents/ directory
- Next request automatically loads the updated agent (instant)
Result: Customer sees the change in minutes with zero downtime.
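Implementation-wise, a hot-load can be as small as the sketch below: download the agent file from the share and import it at request time. This is illustrative, not the framework's actual helper - it assumes the azure-storage-file-share SDK, and the function name, share layout, and temp-file approach are stand-ins.

# Minimal hot-loading sketch (illustrative, not the framework's real code).
# Assumes the azure-storage-file-share SDK; share and path names are examples.
import importlib.util
import os
import tempfile

from azure.storage.fileshare import ShareFileClient

def load_agent_from_storage(conn_str: str, share: str, agent_path: str):
    """Download one agent .py file and import it as a live module."""
    file_client = ShareFileClient.from_connection_string(
        conn_str, share_name=share, file_path=agent_path
    )
    source = file_client.download_file().readall()

    # Write to a temp file so importlib can load it like ordinary code
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as tmp:
        tmp.write(source)
        tmp_path = tmp.name

    name = os.path.basename(agent_path).removesuffix(".py").replace("-", "_")
    spec = importlib.util.spec_from_file_location(name, tmp_path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)  # executes the uploaded agent file
    return module

Because every request re-imports from storage, uploading a new .py file is the whole deployment; production use would add the caching described under graceful degradation below.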
2. Universal Context Across All Channels
Problem: User tells AI something at work. They go home and use a different app. The AI has amnesia.
Solution: Memory is stored by user_guid in Azure File Storage, not tied to any interface. Whether they're in Teams at work, using a web app at home, or talking on their Apple Watch in the car, the AI has full context.
memory/
├── abc123-user-guid/
│   ├── conversation_history.json
│   ├── preferences.json
│   ├── agents/
│   │   ├── custom-agent-1.py
│   │   └── custom-agent-2.py
│   └── workflows/
│       └── daily-routine.json
└── xyz456-user-guid/
    ├── conversation_history.json
    └── ...
3. Stateless Intelligence Layer = Infinite Scale
The Azure Function that powers the Intelligence Layer is completely stateless. Every request is self-contained:
- Load user memory from Storage Layer
- Load required agents from Storage Layer
- Execute agent logic
- Save updated memory back to Storage Layer
- Return response
Benefit: Horizontal scaling is trivial. Need to handle 10,000 concurrent users? Spin up 10,000 function instances. Azure handles it automatically.
4. Graceful Degradation
Because agents are modular and loaded independently, failures are isolated:
- One agent breaks? Only that capability is lost. Other agents still work.
- Copilot Studio goes down? Backend still works. Connect via direct API.
- File Storage has latency? Implement local caching in the function.
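The local-caching idea in that last bullet can be a few lines. A minimal sketch, assuming the framework's storage client exposes read_file; the 60-second TTL and the module-level dict are arbitrary choices:

# Illustrative TTL cache so a slow File Storage call doesn't hit every request
import time

_cache: dict = {}   # path -> (fetched_at, data)
TTL_SECONDS = 60

def read_file_cached(storage, path: str) -> bytes:
    """Return cached file contents if fresh, otherwise re-fetch from storage."""
    hit = _cache.get(path)
    if hit and time.time() - hit[0] < TTL_SECONDS:
        return hit[1]
    data = storage.read_file(path)  # framework storage client (assumed)
    _cache[path] = (time.time(), data)
    return data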
Real-World Example: Production Resilience
Every morning, the developer runs diagnostic tests to verify system health - asking "how capable are you today?" like the Blade Runner test. If an expected test doesn't trigger, the modular architecture makes it immediately clear which layer has failed: is the agent not loading from storage? Is the Copilot routing broken? Or is there a Power Automate connection issue?
The user can diagnose exactly which layer failed:
- Assistant working, memory working, but agent not loading? → Storage Layer issue
- Agents loading but wrong responses? → Intelligence Layer prompt issue
- Can't route to Teams? → Orchestration Layer configuration
Technical Implementation Details
Intelligence Layer: The "Waiter" Pattern
The core intelligence is modeled like a restaurant:
- The Assistant is the waiter - talks to the customer (user)
- The Agents are the kitchen staff - make the pizza, burger, etc.
- The User doesn't need to know how the kitchen works
Why this matters: Context efficiency. The waiter's conversation with the kitchen (agent calls) is completely separate from the conversation with the customer. The user doesn't see "Calling DynamicRoutePlanningAgent..." debug logs.
# Stateless Azure Function core (~500 lines total in the real framework;
# storage, openai_client, dynamically_import, and parse_tool_call are
# framework helpers)
import json

def handle_request(user_message, user_guid):
    # 1. Load context (conversation history stored as JSON per user)
    memory = json.loads(
        storage.read_file(f"memory/{user_guid}/conversation_history.json")
    )

    # 2. Dynamically load this user's agents from File Storage
    available_agents = []
    for agent_file in storage.list_files(f"memory/{user_guid}/agents/"):
        available_agents.append(dynamically_import(agent_file))

    # 3. Build system prompt with agent metadata
    system_prompt = f"""
    You are an AI assistant. You have access to these tools:
    {json.dumps([agent.metadata for agent in available_agents])}
    Use tools when needed by outputting:
    TOOL_CALL: agent_name(param1="value1", param2="value2")
    """

    # 4. LLM decides which agent to call
    llm_response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": system_prompt}]
                 + memory
                 + [{"role": "user", "content": user_message}],
    ).choices[0].message.content

    final_response = llm_response

    # 5. If a tool call is detected, execute the matching agent
    if "TOOL_CALL:" in llm_response:
        agent_call = parse_tool_call(llm_response)
        agent = next(a for a in available_agents if a.name == agent_call.name)
        result = agent.perform(**agent_call.params)

        # 6. Feed the result back to the LLM for a user-friendly response
        # (the tool output is passed in a plain user message to keep this simple)
        final_response = openai_client.chat.completions.create(
            model="gpt-4",
            messages=memory + [
                {"role": "assistant", "content": llm_response},
                {"role": "user",
                 "content": f"Tool result: {result}\n\nProvide a user-friendly summary"},
            ],
        ).choices[0].message.content

    # 7. Save updated conversation
    memory.append({"role": "user", "content": user_message})
    memory.append({"role": "assistant", "content": final_response})
    storage.write_file(
        f"memory/{user_guid}/conversation_history.json", json.dumps(memory)
    )
    return final_response
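parse_tool_call is referenced above but never shown. A minimal sketch, assuming the LLM follows the TOOL_CALL format from the system prompt exactly; real output would need more defensive parsing:

import re
from types import SimpleNamespace

def parse_tool_call(llm_text: str) -> SimpleNamespace:
    """Parse 'TOOL_CALL: AgentName(key="value", ...)' into name + params."""
    match = re.search(r'TOOL_CALL:\s*(\w+)\((.*)\)', llm_text)
    if match is None:
        raise ValueError("no TOOL_CALL found in LLM output")
    # Extract key="value" pairs; all parameter values arrive as strings
    params = dict(re.findall(r'(\w+)\s*=\s*"([^"]*)"', match.group(2)))
    return SimpleNamespace(name=match.group(1), params=params)

For example, parse_tool_call('TOOL_CALL: DynamicRoutePlanning(salesperson_id="s1", date="2025-01-06")') yields name DynamicRoutePlanning and params {'salesperson_id': 's1', 'date': '2025-01-06'}.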
Agent Metadata: Self-Documenting Tools
Every agent describes itself using JSON Schema. The Intelligence Layer reads this metadata and generates the system prompt dynamically.
from agents.basic_agent import BasicAgent

class DynamicRoutePlanningAgent(BasicAgent):
    def __init__(self):
        self.name = "DynamicRoutePlanning"
        self.metadata = {
            "name": self.name,
            "description": "Optimizes field sales routes based on priority, location, and time constraints",
            "parameters": {
                "type": "object",
                "properties": {
                    "salesperson_id": {
                        "type": "string",
                        "description": "ID of the salesperson"
                    },
                    "date": {
                        "type": "string",
                        "description": "Date for route planning (YYYY-MM-DD)"
                    },
                    "priority_accounts": {
                        "type": "array",
                        "description": "List of high-priority account IDs",
                        "items": {"type": "string"}
                    }
                },
                "required": ["salesperson_id", "date"]
            }
        }
        super().__init__(name=self.name, metadata=self.metadata)

    def perform(self, **kwargs):
        salesperson = kwargs.get('salesperson_id')
        date = kwargs.get('date')
        priorities = kwargs.get('priority_accounts', [])

        # Business logic: optimize route (calculate_optimal_route is a
        # helper defined elsewhere in the agent file)
        optimized_route = calculate_optimal_route(salesperson, date, priorities)
        return f"Optimized route for {date}: {optimized_route}"
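BasicAgent itself never appears in this documentation; judging by how subclasses call super().__init__ and implement perform, the base class is presumably little more than this sketch:

class BasicAgent:
    """Assumed base class: holds the identity and schema the LLM sees."""

    def __init__(self, name: str, metadata: dict):
        self.name = name          # identifier used in TOOL_CALL routing
        self.metadata = metadata  # JSON Schema advertised in the system prompt

    def perform(self, **kwargs) -> str:
        """Subclasses implement the work; must always return a string."""
        raise NotImplementedError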
Key Insight: Because agents self-document, the LLM automatically knows when to call them. You can add 100 new agents and the system adapts with zero code changes to the Intelligence Layer.
Why Not Use Copilot Studio's Built-In AI Capabilities?
Problem: Copilot Studio doesn't give you access to the full system message, so you can't control the prompt engineering that makes agents perform reliably in this RAPP framework.
Solution: Use Copilot Studio for what it's great at (channels, UI, routing) and keep intelligence in a custom Azure Function where you have full control.
Core Principles
The "Buzz Saw + Human Taste" philosophy that makes this framework work at scale.
The Buzz Saw: Heavy Lifting (Building Toward Automation)
Current State
This framework is currently manual-first with a roadmap to automate key steps. We're building the "buzz saw" step by step, proving each process manually before automating it.
Traditional development is like a handsaw. Slow, manual, requires constant human energy. You can only cut so fast, and you get tired.
The vision: AI automation as a buzz saw. Fast, tireless, and you can just "lay on it" with more work. Feed it transcripts, feed it MVPs, feed it feedback - it keeps processing.
Real-World Example: The 60 Agent Test
Challenge: Generate 60 agents to prove the automation works.
Timeline: When leadership provided a list of 60 agents to test the automation capability, the developer generated all 60 production-ready agents during a lunch break - proving the system could scale agent creation from months to hours.
Result: 60 production-ready agents in ~1 hour while eating lunch. Manual development would have taken months.
Where We're Building Automation (Automation Roadmap):
- Transcript → Project JSON: Extract customer name, stakeholders, MVP, competing solutions, timeline from raw transcript (currently manual)
- MVP → Agent Code: Generate 6-10 Python agents from a validated MVP use case (currently manual)
- Prototype → Video Demo: Create narrated demo video with Azure TTS from prototype walkthrough (currently manual)
- Customer Feedback → Code Updates: Adjust agent logic based on feedback without rewriting from scratch (currently manual)
- Parallel Processing: Work on 20+ projects simultaneously with tooling and documentation (building capacity)
Human Taste: The Critical Quality Gates
The Problem: AI can generate code fast, but it doesn't have judgment. It doesn't know if the code solves a real problem, if it's secure, or if customers will actually pay for it.
The Solution: 6 human "quality gates" where experienced team members apply taste, judgment, and experience to keep the project on track.
Key Insight
It's not about avoiding manual work - it's about avoiding tedious, low-value manual tasks. What's critical is maintaining control over quality and judgment calls. Having good taste matters. The key questions are: Is this actually useful? Based on experience, does this truly solve the problem we're facing?
The 6 Quality Gates:
| Gate | Human Decision | Why It Matters |
|---|---|---|
| Gate 1: Transcript Audit | Did we extract the right problem? | Prevents building the wrong thing from day one |
| Gate 2: Customer MVP Validation | Does the customer agree this is the MVP? | Locks scope before any code is written - eliminates 3-week wasted cycles |
| Gate 3: Code Quality Review | Are the generated agents secure and correct? | Catches logic errors, security issues, integration problems (high pass rate) |
| Gate 4: Demo Review | Is this ready to show the customer? | "First impressions matter" - polish messaging and flow before customer sees it |
| Gate 5: Feedback Processing | Is this feedback in scope or scope creep? | Prevents endless iterations - decide what's in/out of MVP |
| Gate 6: Post-Deployment Audit | Are we delivering the promised value? | Usage metrics, bug tracking, user satisfaction - ensure ROI |
The Magic: Each gate takes 15-60 minutes of human time, but prevents days or weeks of wasted AI cycles.
The Risk Elimination Pattern
The biggest risk in traditional software development: spending 6 months on a problem no one cares about.
How This Framework Eliminates That Risk:
Traditional Approach
- Week 1-2: Requirements gathering
- Week 3-6: Design and architecture
- Week 7-20: Development
- Week 21-24: Testing and bug fixes
- Week 25: Customer says "This isn't what we meant"
This Framework
- Day 1: Discovery call (transcript)
- Day 2: Send MVP "poke" to customer
- Day 3-4: Customer validates or provides feedback
- Day 5: Generate agents based on locked MVP
- Day 6: Customer sees working prototype
- Day 7-10: Iterations based on real feedback
Key Insight: By getting customer validation at Step 3 (MVP "poke") and again at Step 8 (working demo), you've had 2 major checkpoints before investing serious time. If they don't like it, you've only spent a few days, not months.
Real-World Example: The "Poke" Methodology
What is a "poke"? A deliberately lightweight MVP summary sent to the customer before any code is written.
The concept: Create a stripped-down MVP summary to send to the customer - a "poke" that says "this is what I think I heard." This lightweight validation serves two purposes: it confirms understanding before any code is written, and the customer's response provides feedback that informs what the AI will build. This early engagement catches misunderstandings before they become expensive mistakes.
Why it works: Customer can respond with "Yes that's it" or "No, you misunderstood - here's what we actually need." Either way, you know BEFORE writing code.
Iterations ≠ Experience - It's About Cycles
Traditional Thinking: "I've been doing this for 10 years, so I have 10 years of experience."
Reality: Experience isn't measured by time spent - it's measured by the number of learning cycles completed. Ten years with few iterations is less valuable than two years with hundreds of rapid feedback loops.
The Iteration Advantage
Scenario A: Developer with 10 years experience, working on 1 project per year = 10 iterations total
Scenario B: Developer using this framework, working on 20 projects in parallel per month = 240 iterations per year
Result: Scenario B completes 240 iterations per year versus one per year in Scenario A - roughly 240x the learning cycles over the same period, and the lessons compound.
How This Framework Maximizes Iterations:
- Parallel Processing: Work on 20+ projects simultaneously
- Fast Feedback Loops: Days instead of months per cycle
- Structured Quality Gates: Spot issues early, iterate faster
- Pattern Recognition: See the same problems across multiple industries, learn universal solutions
Real Impact: In 6 months using this framework, you'll encounter more edge cases, customer objections, integration challenges, and solution patterns than most developers see in 5 years of traditional development.
Common Pitfall: Skipping the Quality Gates
Temptation: "The AI generated the code in 5 minutes, let's just ship it!"
Reality: Without human quality gates, you'll ship code that:
- Solves the wrong problem (no MVP validation)
- Has security vulnerabilities (no code review)
- Fails customer expectations (no demo review)
- Never gets used (no post-deployment metrics)
The Fix: Think of quality gates as "compression points" where you invest 15-60 minutes to save days or weeks of rework.
Step 1: Discovery Call
Capture the customer's biggest problems in their own words - this transcript becomes the foundation for everything.
Type: MANUAL

Objective
Record and transcribe a customer conversation that identifies: (1) their biggest business problems, (2) the data sources they have access to, and (3) how they currently solve the problem manually.
Duration
30-60 minutes - One focused conversation with the decision maker(s)
Owner
Field Team (Team Members A, B, C, D) - Anyone who interfaces with customers
The Three Critical Questions

Discovery Philosophy
The approach is to start with open-ended business problem discovery: "What are the biggest problems with your business?" This question creates a crucial mindset shift - focusing on business pain points rather than technology solutions. By leading with problems instead of AI capabilities, you ensure the automation serves real business needs.
Question 1: What are your biggest problems?
Why this matters: Focuses on business value, not technology. Customer describes pain in their own language.
Real-World Example: Field Sales Operations
Customer's problem statement: Field sales representatives struggle with three critical questions every day: Where should we go? What order should we visit our stops? And what should we do when we get there?
Why this is perfect: Specific, measurable, impacts daily operations. Not "we need AI" or "we want to innovate."
Question 2: What data sources do you have?
Why this matters: AI without data is useless. Need to know what's realistic to access.
Key point: Identify what data sources are required to solve the problem, with a clear caveat up front: if data access isn't available, the project's potential is severely limited. AI systems need data to function effectively - this is a non-negotiable requirement that must be established during discovery.
- CRM systems: Dynamics 365, Salesforce, custom systems
- Document repositories: SharePoint, OneDrive, file shares
- Operational systems: Inventory, scheduling, ticketing systems
- External APIs: Weather, traffic, market data
Question 3: How do you solve this manually today?
Why this matters: The manual process is the blueprint for automation. You're scaling what they already do, not inventing something new.
Key point: The core principle is scaling and automating the existing manual process - not inventing an entirely new workflow. This approach reduces risk and increases adoption because users recognize their familiar process, just faster and more consistent.
Detailed Example: Job Matching Process
Customer description: The company offers numerous job opportunities and uses WhatsApp as their primary applicant communication channel. They need a bot that can analyze applicant resumes (from LinkedIn or uploaded files) and match them with the most suitable job opportunities from their extensive catalog.
Manual process breakdown:
- Receive resume via WhatsApp
- HR person reviews resume manually
- Searches job database for matching criteria
- Sends 3-5 job options back to candidate
- Candidate selects one, HR submits to internal system
This becomes the automation blueprint.
How to Conduct the Call
Before the Call:
- Ensure recording is enabled (Teams auto-transcribe, or use Otter.ai)
- Inform participants the call will be recorded
- Have 1-2 key decision makers on the call (not 10 people)
During the Call:
- Let them talk: Your job is to extract their knowledge, not pitch technology
- Ask clarifying questions: "Walk me through exactly what the end user does when they get this request"
- Get specific: Not "we have inefficiencies" but "it takes 20 hours per week to process NDAs manually"
- Note competing solutions: "We looked at Salesforce Agentforce" - this goes into project tracking
- Confirm data access: "Can you give us read access to your Dynamics environment?"
Red Flags to Watch For:
Warning Signs This Won't Work
- "We just want to try AI" - No specific problem = no MVP
- "We can't share any data" - AI needs data to work
- "We have 50 different problems" - Need ONE focused MVP to start
- "We're not sure who the decision maker is" - Get the right person on the call
Output

Deliverable
- Recording file: MP4 or audio file from Teams/Zoom
- Transcript file: Auto-generated .txt or .docx from meeting platform
- Key stakeholders identified: Names and roles mentioned in call
File format: Plain text transcript is ideal. If using Teams, the .vtt file works. Just needs to be readable by the next step's automation.
Pro Tip: The "Can We Solve This?" Litmus Test
At the end of the call, ask yourself: "If I had 6 Python agents and access to their data, could I build a prototype that solves this problem?"
If yes → Move to Step 2 (Audit)
If no → Schedule follow-up to get more specific requirements
Step 2: Audit - Transcript Analysis
Quality Gate #1: Verify we extracted the right problem before building anything.
Type: QUALITY GATE #1

Objective
Use the Transcript-to-Project Agent to automatically extract structured project data from the raw transcript, then human audit to verify accuracy before proceeding.
Duration
15 minutes - AI processes transcript instantly, human reviews output
Owner
Delivery Team (Team Members A, B) - People who will build the solution need to confirm they understand the problem
Automation Tool: Transcript-to-Project Agent
Type: M365 Copilot Studio Declarative Agent
Location: Open Transcript-to-Project Agent
What it does: Extracts customer name, stakeholders, competing solutions, MVP use case, timeline, and generates structured JSON for project tracking.
How to Use:
- Open Copilot Studio and load the Transcript-to-Project Agent
- Paste the entire meeting transcript into the chat
- Agent returns structured JSON with all extracted fields
- Copy the JSON to the project tracking system or save it as [customer]-project.json
Example input:

Analyze this transcript: Met with Acme Insurance today about their legal operations. VP Legal and Legal Operations Manager discussed replacing their current Salesforce contract system. They have an EA with 150 Copilot Studio licenses and want to start with NDA automation as the MVP. Budget approved at $450K ACV. Main concern is contract routing and approval workflows. They want a POC in 6 weeks focusing on contract generation, analysis, and automated routing.

Example output:
{
"id": "1735840999",
"customerName": "Acme Insurance",
"status": "planning",
"type": "legal",
"description": "Legal operations AI bot for contract automation",
"stakeholders": "VP Legal, Legal Operations Manager",
"competingSolution": "Salesforce",
"contractDetails": "EA with 150 Copilot Studio licenses, $450K ACV",
"agents": ["ContractTemplate", "ContractAnalysis", "ContractRouting"],
"mvpUseCase": "NDA Automation Bot",
"mvpDescription": "Automated NDA generation, analysis, and routing with approval workflows",
"mvpTimeline": "6 weeks POC",
"notes": "Competing with Salesforce, focus on contract automation. Latency requirement under 2 seconds.",
"createdDate": "2025-10-15",
"updatedDate": "2025-10-15"
}
Human Audit Checklist
The AI extracted data - now verify it's correct:
- Customer name: Is this the correct legal entity name?
- Stakeholders: Are all decision makers captured with correct titles?
- MVP use case: Does this match what the customer actually said?
- Competing solutions: Did we note who they're comparing us to?
- Timeline: Is the deadline realistic and confirmed by customer?
- Data sources: Are the systems they mentioned captured?
Common Corrections Needed
- AI says: "ContractManagement" agent | You correct: Need 3 separate agents (Generation, Analysis, Routing)
- AI says: Status = "active" | You correct: Status = "planning" (not building yet)
- AI says: Timeline "ASAP" | You correct: Timeline "6 weeks for POC, Q1 2025 for production"
Decision Point
Pass: JSON accurately represents the customer problem → Proceed to Step 3
Fail: Key information missing or incorrect → Schedule follow-up call to clarify
Do NOT Proceed If:
- MVP use case is vague ("improve efficiency" - not specific enough)
- No data sources identified (AI needs data)
- Decision maker not on the call (need buy-in from the person who signs contracts)
- Problem is "nice to have" not "must solve" (won't get budget/priority)
Better to stop here than waste 3 weeks building the wrong thing.
Output

Deliverable
- Validated project JSON: Saved as [customer]-project.json
- Audit sign-off: Team member initials + timestamp confirming accuracy
- Go/No-Go decision: Documented for next step
Why This Gate Matters
Time saved: 15 minutes here prevents 3 weeks of building the wrong solution.
Real impact: Traditional software development risks spending months building solutions for problems that don't actually matter to stakeholders. The audit gate eliminates this risk by validating stakeholder commitment and problem importance before any development begins.
Step 3: Generate MVP "Poke"
Send the customer a lightweight MVP summary to confirm we heard them correctly - BEFORE writing any code.
Type: MANUAL

Objective
Generate a clear, concise MVP summary and send it to the customer for validation. This "poke" tests if we understood their problem correctly before investing time in development.
Duration
1-2 hours (manual) - Team member drafts MVP document from project JSON using templates and examples
Owner
Delivery Team - Manual process with plan to automate using LLM-based generation
What is a "Poke"?
A poke is a deliberately lightweight MVP summary - usually 1-2 pages - that describes what you think the customer needs in plain business language.
The "Poke" Concept
The "poke" is a deliberately lightweight MVP summary sent to validate understanding: "This is what I think I heard - is this correct?" This simple document serves as both a customer validation checkpoint and the exact specification that guides AI prototype generation. Getting customer confirmation on the poke ensures the AI builds the right solution.
Why "Poke" Instead of "Requirements Document"?
- Lightweight: 1-2 pages, not 20 pages of requirements
- Plain language: Written for business stakeholders, not engineers
- Invites feedback: "Did we hear you right?" not "Sign off on these requirements"
- Fast turnaround: Sent within 24 hours of discovery call
What Goes In The Poke
Section 1: Problem Statement (2-3 sentences)
Example: Field Sales Route Planning
"Your field sales teams currently spend 2-3 hours per day planning routes manually and deciding which accounts to visit in what order. This results in inefficient routes, missed high-priority accounts, and inconsistent coverage. We heard that solving this routing problem would save each salesperson 10-15 hours per week."
Section 2: Proposed MVP Solution (3-4 bullet points)
- What it does: AI agent that generates optimized daily routes based on account priority, location, and time constraints
- Key capabilities:
- Pulls account data from your CRM automatically
- Prioritizes high-value accounts and urgent visits
- Generates turn-by-turn route with estimated times
- Accessible via Teams mobile app (no new app to install)
- What it doesn't do (scope boundaries): Not replacing your CRM, not doing GPS navigation, not scheduling appointments (just optimizing visits)
Section 3: Data Requirements (simple list)
- Read access to Dynamics 365 Sales (account locations, priority flags)
- Last visit date per account
- Salesperson territories and working hours
Section 4: Success Metrics (how we'll measure success)
- Reduce route planning time from 2-3 hours/day to 5 minutes/day
- Increase daily store visits from average 6 to 8-9 per salesperson
- 100% coverage of high-priority accounts weekly (currently 70%)
Section 5: Timeline & Next Steps (simple roadmap)
- Week 1-2: Build prototype with 1 salesperson territory as test
- Week 3: Demo to your team, gather feedback
- Week 4-6: Refine based on feedback, pilot with 5 salespeople
- Week 7-8: Full rollout to all field sales teams
Automation Process
This step uses the project JSON from Step 2 and an LLM prompt template to generate the poke document:
Generate an MVP "poke" document based on this customer project JSON:

[Insert validated project JSON from Step 2]

The document should be:
- 1-2 pages maximum
- Written in plain business language (not technical jargon)
- Structured as: Problem Statement, Proposed Solution, Data Requirements, Success Metrics, Timeline
- Specific and measurable (not vague "improve efficiency")
- Include scope boundaries (what we're NOT building)
- End with clear next steps for customer to respond

Tone: Professional but approachable. We're confirming we heard them correctly, not pitching a solution.
Output: LLM generates a formatted document (Word .docx or PDF) ready to send to customer
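As a sketch of that planned automation, the prompt template can be wired to the OpenAI chat completions API in a few lines. The draft_poke helper, model choice, and file path are assumptions, not the team's actual tooling:

import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_poke(project_json_path: str) -> str:
    """Generate a first-draft poke document from the validated project JSON."""
    with open(project_json_path) as f:
        project = json.load(f)
    prompt = (
        'Generate an MVP "poke" document based on this customer project JSON:\n'
        + json.dumps(project, indent=2)
        + "\n\nKeep it to 1-2 pages of plain business language, structured as: "
          "Problem Statement, Proposed Solution, Data Requirements, "
          "Success Metrics, Timeline. Include scope boundaries."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# e.g. print(draft_poke("acme-insurance-project.json"))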
Delivery to Customer
Field team (whoever did the discovery call) sends the poke via email:
Example Email Template
Subject: Retail Field Sales Routing - MVP Summary for Review
Body:
Hi [Stakeholder],
Thanks for the great discussion yesterday about optimizing your field sales routes. I've attached a brief summary of what I heard as the MVP we'd build to solve this problem.
Can you review and let me know:
- Did we capture the problem correctly?
- Is this MVP scope what you had in mind, or should we adjust?
- Are the data sources we listed accessible?
- Any concerns or questions before we proceed?
If this looks good, we can have a working prototype ready to demo in 2 weeks.
Best,
[Your name]
Output

Deliverable
- MVP Poke Document: 1-2 page summary in Word/PDF format
- Email to customer: Sent with document attached, awaiting response
- Timeline started: Clock starts on customer response SLA (usually 2-3 days)
Why This Step is Critical
Time investment so far: 1 hour discovery call + 15 min audit + 1-2 hours drafting the poke = roughly 3 hours of human time
Potential time saved: If customer says "no that's not what we meant," you've saved 3-6 weeks of building the wrong thing
Important principle: The customer's response to the poke becomes crucial feedback for agent development. Any additional details they provide - like "our HR system is actually split across 3 instances" - represents the first feedback cycle before a single line of code is written. This early correction prevents building on faulty assumptions.
Step 4: Audit - Customer Validation
Quality Gate #2: Customer signs off on MVP scope before any code is written.
Type: QUALITY GATE #2

Objective
Get explicit customer confirmation that the MVP poke accurately describes what they need. Lock in scope with stakeholder sign-off.
Duration
1-2 days - Waiting for customer to review and respond to the poke
Owner
Field Team - Manages customer communication and captures feedback
Three Possible Outcomes
Outcome 1: "Yes, that's exactly right"
Next steps:
- Document customer sign-off (email confirmation + stakeholder names)
- Lock MVP scope in project tracker (mark as "Approved - Ready for Development")
- Proceed immediately to Step 5 (Generate Agent Code)
Perfect Customer Response Example
Email reply: "Yes, this MVP captures exactly what we discussed. The scope looks right, data access is no problem, and the 2-week timeline for a prototype works for us. Go ahead and build it!"
Why this is ideal: Explicit approval, no scope changes, confirmed data access, agreed timeline. Ready to build.
Outcome 2: "Close, but here's what needs to change"
Next steps:
- Capture ALL feedback in writing (don't rely on verbal/phone conversation)
- Feed feedback back to LLM: "Update MVP poke with these customer corrections: [paste feedback]"
- Generate revised poke v2 and re-send to customer
- Repeat Step 4 until you get Outcome 1 (customer approval)
Feedback Handling Example
Customer feedback: "The route optimization part is right, but we also need the agent to check inventory levels at each store and suggest which products to stock up. Can that be in the MVP?"
Your decision tree:
- Option A: "Yes, we can add inventory checking to MVP" → Update poke, re-send
- Option B: "That's a great idea for Phase 2, but let's start with just route optimization to get you value faster" → Negotiate scope
Key principle: New requirements often change the agent architecture. What started as 5 agents might become 6 agents, with a new specialized agent handling the additional requirement. This modular flexibility allows you to adapt the design based on customer feedback without rearchitecting the entire system.
Outcome 3: "This isn't what we need"
Next steps:
- Schedule follow-up discovery call to re-clarify requirements
- Go back to Step 1 - you missed something fundamental in the initial conversation
- DO NOT proceed to development - you'll build the wrong thing
Red Flag: Scope Creep Starts Here
Warning signs:
- "Can we also add [10 more features]?"
- "Actually, now that I think about it, we need [completely different thing]"
- "My boss just told me we also need [out-of-scope requirement]"
Solution: Use the poke as a scope boundary tool. Say: "Great ideas for Phase 2, but let's lock in this MVP first, get it deployed and working, then we can expand."
Audit Checklist Before Proceeding
Before marking this step complete and moving to Step 5, verify:
- Written confirmation: Customer email/message explicitly approving MVP scope
- Stakeholder sign-off: Decision maker (person who approves budget) has approved, not just end users
- Data access confirmed: Customer confirmed we'll get access to the data sources listed
- Timeline agreed: Customer understands and accepts the prototype delivery timeline
- Scope locked: No vague "we'll figure it out later" items - everything is defined
Output

Deliverable
- Customer approval email: Saved to project folder as proof of scope agreement
- Final MVP poke v2/v3: With all customer feedback incorporated
- Updated project JSON: Status changed to "mvp-locked" with approval date
- Green light for development: Official go-ahead to generate agent code
This is The Most Important Gate
Why: Every subsequent step builds on the assumption that the MVP is correct. If the MVP is wrong, everything downstream is wasted effort.
Time invested through Step 4: ~3-4 hours of human time + 1-2 days of customer response time
Risk eliminated: Building the wrong solution for 3-6 weeks = hundreds of hours saved
Risk mitigation: By Step 6, you have working code with locked scope and customer approval on the requirements. If the customer ultimately decides the solution isn't right, you've only invested a few days rather than months - dramatically reducing the cost of failure.
Step 5: Generate Agent Code
Use the RAPP Agent Generator to create production-ready Python agents from the locked MVP.
Type: MANUAL

Business Summary
Current state: Team manually creates agent code using templates and patterns. Vision: AI-assisted generation to accelerate development.
Goal: Reduce agent development time from 2-3 weeks per agent to hours with reusable templates and tooling. Build library of patterns for consistency.
Objective
Create 6-10 Python agent files that implement the MVP use case. Each agent is self-documenting, modular, and ready to drop into the Azure Function. (Currently manual development with templates; building toward AI-assisted generation)
Duration
Days to weeks (manual) - Team develops agents using templates and patterns. Goal: Reduce to hours with AI-assisted tooling
Owner
Technical Team - Manual development using templates. Vision: AI-assisted generation with human review
Development Tool: RAPP Agent Generator
Location: Open RAPP Agent Generator
Planned Type: M365 Copilot Studio Declarative Agent
Vision: Generate complete Python agent code following the BasicAgent pattern with JSON Schema metadata, error handling, and Azure integration.
Manual Process:
- Review the approved MVP poke + project JSON from Step 4
- Identify required agent capabilities from MVP description
- Use existing agent templates from Customers/ directory as starting point
- Manually code each agent following the BasicAgent pattern
- Save each file as [agent-name]-agent.py in the customer directory
- Test agents locally before proceeding to Step 6
Future Automation Vision:
The RAPP Agent Generator will auto-generate agent code from MVP JSON, reducing development time from days to minutes.
Generate Python agents for this MVP:

**Use Case:** Field sales route optimization

**Capabilities Needed:**
1. Dynamic route planning based on account priority and location
2. Shelf recognition using computer vision to check product placement
3. Promotion suggestions based on sales data
4. Next best action recommendations for sales reps

**Data Sources:**
- Dynamics 365 CRM (accounts, locations, visit history)
- Azure File Storage (shelf images from mobile uploads)
- Sales transaction database

**Requirements:**
- Each agent must follow BasicAgent pattern
- Include JSON Schema metadata for self-documentation
- Handle errors gracefully (return string, never throw exceptions)
- Support Azure File Storage for image processing agents
- All agents must return strings, not dicts or None
Generated Agent Structure
Each agent follows this standard pattern (the entire framework is ~500 lines of code, agents are ~100-300 lines each):
from agents.basic_agent import BasicAgent
import logging
import json

class DynamicRoutePlanningAgent(BasicAgent):
    def __init__(self):
        self.name = "DynamicRoutePlanning"  # No spaces!
        self.metadata = {
            "name": self.name,
            "description": "Optimizes field sales routes based on priority, location, and time constraints. Call this when a salesperson needs their daily route planned.",
            "parameters": {
                "type": "object",
                "properties": {
                    "salesperson_id": {
                        "type": "string",
                        "description": "ID of the salesperson needing route optimization"
                    },
                    "date": {
                        "type": "string",
                        "description": "Date for route planning (YYYY-MM-DD format)"
                    },
                    "priority_accounts": {
                        "type": "array",
                        "description": "List of high-priority account IDs to prioritize",
                        "items": {"type": "string"}
                    },
                    "max_visits": {
                        "type": "integer",
                        "description": "Maximum number of visits for the day",
                        "minimum": 1,
                        "maximum": 20
                    }
                },
                "required": ["salesperson_id", "date"]
            }
        }
        super().__init__(name=self.name, metadata=self.metadata)

    def perform(self, **kwargs):
        """
        Generates optimized route for a salesperson's day.
        Returns: String with route details
        """
        # Extract parameters
        salesperson_id = kwargs.get('salesperson_id')
        date = kwargs.get('date')
        priority_accounts = kwargs.get('priority_accounts', [])
        max_visits = kwargs.get('max_visits', 10)

        # Validate required parameters
        if not salesperson_id or not date:
            return "Error: salesperson_id and date are required parameters"

        try:
            # 1. Fetch accounts for this salesperson's territory
            accounts = self._get_territory_accounts(salesperson_id)

            # 2. Filter by priority if specified
            if priority_accounts:
                accounts = [a for a in accounts if a['id'] in priority_accounts]

            # 3. Optimize route (calls internal optimization logic)
            optimized_route = self._calculate_optimal_route(
                accounts, max_visits, date
            )

            # 4. Format response as string (NEVER return dict or None!)
            result = f"Optimized route for {salesperson_id} on {date}:\n"
            result += f"Total stops: {len(optimized_route)}\n"
            result += f"Estimated time: {self._estimate_route_time(optimized_route)} hours\n\n"
            for idx, stop in enumerate(optimized_route, 1):
                result += f"{idx}. {stop['account_name']} ({stop['address']})\n"
                result += f"   Priority: {stop['priority']} | ETA: {stop['eta']}\n"
            return result

        except Exception as e:
            logging.error(f"Route planning failed for {salesperson_id}: {str(e)}")
            return f"Error generating route: {str(e)}"

    def _get_territory_accounts(self, salesperson_id):
        """Fetch accounts from CRM for this salesperson's territory"""
        # Implementation: Query Dynamics 365 or mock data
        return []

    def _calculate_optimal_route(self, accounts, max_visits, date):
        """Optimize visit order based on location and priority"""
        # Implementation: TSP algorithm or heuristic
        return []

    def _estimate_route_time(self, route):
        """Calculate total travel + visit time"""
        # Implementation: Distance/time calculations
        return 0
Key Features the Generator Includes:
- JSON Schema metadata: Self-documenting for the LLM
- Parameter validation: Required params checked upfront
- Error handling: Try/except with logging, always returns string
- Helper methods: Private methods (_method_name) for complex logic
- Clear descriptions: Explains when to call the agent
Common Customizations After Generation
The generated code is largely production-ready, but you may need to:
Customization Example: Dynamics 365 Integration
Generated (generic):
def _get_territory_accounts(self, salesperson_id):
    # TODO: Implement Dynamics 365 query
    return []
Customized (specific to customer):
def _get_territory_accounts(self, salesperson_id):
    import os
    from azure.identity import DefaultAzureCredential
    from dynamics_client import DynamicsClient  # customer-specific wrapper

    client = DynamicsClient(
        endpoint=os.environ['DYNAMICS_ENDPOINT'],
        credential=DefaultAzureCredential()
    )
    # Customer-specific: Query accounts in territory with last_visit field
    accounts = client.query(
        "accounts",
        filter=f"ownerid eq '{salesperson_id}' and statecode eq 0",
        select="accountid,name,address1_composite,priority,last_visit_date"
    )
    return accounts
The real work: "The hard work for us is basically connecting it to their system, right? And testing of course."
Typical Customizations:
- API endpoints: Replace placeholders with actual customer URLs
- Authentication: Add customer-specific auth (OAuth, API keys, etc.)
- Data schemas: Adjust field names to match customer's CRM/database
- Business logic: Fine-tune algorithms based on customer feedback
Output

Deliverable
- 6-10 Python agent files: One per capability, named [capability]-agent.py
- Agent manifest JSON: List of all agents with descriptions
- README: How to deploy these agents to Azure Function
File structure:
agents/
├── dynamic-route-planning-agent.py
├── shelf-recognition-agent.py
├── promotion-suggestions-agent.py
├── next-best-action-agent.py
├── dynamics-crud-agent.py    (helper for CRM operations)
└── manage-memory-agent.py    (default agent, always included)
The 60 Agent Test
Real example: When leadership provided a list of 60 agents to test the automation capability, the developer accepted the challenge and generated all 60 production-ready agents during a lunch break - demonstrating that the system could scale agent creation from months to hours.
Result: Successfully generated all 60 agents. Most just need minor API endpoint customization.
Step 6: Audit - Code Quality Review
Quality Gate #3: Human review of generated code for logic errors, security issues, and integration problems.
Type: QUALITY GATE #3

Objective
Technical team reviews the 6-10 generated agents to verify they're secure, follow best practices, and will integrate correctly with customer systems.
Duration
30-60 minutes - Experienced developer can review 6-10 agents in one sitting
Owner
Technical Team Lead - Someone with Python + Azure experience
Code Review Checklist
1. Security Review
- No hardcoded credentials: All API keys/secrets use environment variables
- Input validation: Required parameters are checked before use
- SQL injection protection: If querying databases, use parameterized queries
- File path sanitization: If reading files, validate paths to prevent directory traversal
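To make the SQL injection item concrete, this is the pattern reviewers look for, sketched here with sqlite3 placeholders (the same idea applies with pyodbc or SQLAlchemy; the table and function are hypothetical):

import sqlite3

def get_account(conn: sqlite3.Connection, account_id: str):
    # Rejected in review: f-string interpolation is injectable
    #   conn.execute(f"SELECT * FROM accounts WHERE id = '{account_id}'")
    # Approved: let the driver bind the value as a parameter
    cur = conn.execute("SELECT * FROM accounts WHERE id = ?", (account_id,))
    return cur.fetchone()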
2. Error Handling
- Try/except blocks: All external API calls wrapped in error handling
- Logging: Errors logged with context (agent name, parameters)
- Return strings always: Never returns None, dict, or throws exceptions to caller
- Graceful degradation: If one agent fails, others still work
3. Integration Check
- API endpoints: Customer-specific URLs are correct
- Data schema: Field names match customer's actual system
- Authentication: Auth method matches customer's setup (OAuth, API key, cert)
- Dependencies: Any required Python packages listed in requirements.txt
4. Metadata Quality
- Clear descriptions: LLM will understand when to call this agent
- Required params marked: "required" array in JSON Schema is accurate
- Examples in descriptions: Parameter descriptions include example values
- Enum constraints: If param has fixed options, use enum to constrain it
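The metadata checks are mechanical enough to script before a human reads the code. A sketch of such a pre-review helper - the thresholds and messages are arbitrary, and the metadata shape follows the Step 5 examples:

def audit_metadata(metadata: dict) -> list:
    """Flag mechanical metadata issues before a human reads the code."""
    issues = []
    if len(metadata.get("description", "")) < 40:
        issues.append("description too short for reliable LLM routing")
    params = metadata.get("parameters", {})
    if params.get("type") != "object":
        issues.append('parameters.type should be "object" (JSON Schema)')
    declared = set(params.get("properties", {}))
    undeclared = set(params.get("required", [])) - declared
    if undeclared:
        issues.append(f"required params never declared: {sorted(undeclared)}")
    return issues

# e.g. audit_metadata(DynamicRoutePlanningAgent().metadata) -> []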
Common Issues to Fix

Issue #1: Returns Dict Instead of String
Generated code (wrong):
return {"status": "success", "route": optimized_route}
Fixed:
return f"Route generated successfully: {json.dumps(optimized_route)}"
Why: The Intelligence Layer expects strings. Dicts cause errors.
Issue #2: Missing Error Context
Generated code (insufficient):
except Exception as e:
    return "Error occurred"
Fixed:
except Exception as e:
    logging.error(f"Route planning failed for {salesperson_id} on {date}: {str(e)}")
    return f"Error generating route for {salesperson_id}: {str(e)}"
Why: Need context for debugging. "Error occurred" doesn't help diagnose issues.
Issue #3: Vague Agent Description
Generated metadata (bad):
"description": "Handles route planning"
Fixed:
"description": "Optimizes daily sales routes for field reps. Call this when a salesperson says 'plan my route' or 'where should I go today'. Considers account priority, location proximity, and visit history to generate optimal stop order."
Why: LLM needs to know WHEN to call the agent and with what parameters.
Iterative Refinement
Iterative refinement: The code review process involves an interactive conversation with the AI: request consolidation of redundant agents, ask for functionality splits, adjust implementation details - iteratively refining until the agent files meet quality standards.
Decision Point
Pass: All agents reviewed, minor fixes applied → Proceed to Step 7 (Deploy Prototype)
Fail: Major logic errors or missing integrations → Send feedback to RAPP Agent Generator and regenerate
Output

Deliverable
- Reviewed agent code: All 6-10 agents marked as "Approved" with reviewer initials
- Issue log: Any bugs found + fixes applied documented
- Integration notes: Customer-specific config details for deployment
- Green light for deployment: Code ready to run in Azure
Why High Pass Rates?
The framework enforces patterns: BasicAgent template + JSON Schema metadata + error handling requirements = consistent quality
AI learns from examples: RAPP Agent Generator is trained on working agents from previous projects
Modularity helps: Each agent does ONE thing well. Easier to review than monolithic code.
Step 7: Deploy Prototype
Get working agents into Azure and M365 Copilot in minutes
Objective
Deploy approved, reviewed agent code to Azure Function App and connect to M365 Copilot Studio channels. This makes the prototype accessible to the customer for real testing.
Time Investment: 1-2 hours (manual deployment with scripts)
Human Role: Manual Azure resource setup, agent deployment, connectivity verification, testing
Automation Vision: ARM templates and deployment scripts to reduce setup time to minutes
Input from Step 6

What You Have
- Reviewed agent code: 6-10 Python files approved and ready
- Project JSON: Customer context, MVP scope, data sources
- Integration notes: Any customer-specific config (Dynamics on-prem, custom APIs, etc.)
- GitHub repo: Copilot Agent 365 template forked and ready
Deployment Process (4-Step Setup)
Step 7.1: Deploy Azure Infrastructure
Use the Azure ARM template to create all required resources with one click:
Example: One-Click Azure Deployment
- Go to the GitHub repo: Copilot Agent 365 template
- Click the "Deploy to Azure" button in the README
- ARM template creates:
- Azure Function App (stateless intelligence layer)
- Azure File Storage (agent code + user memory)
- Application Insights (logging and monitoring)
- All networking and security configs
- Deployment completes in 5-10 minutes
One-click deployment: The Azure ARM template automatically provisions all required Azure infrastructure - Function App, Storage, App Insights, networking, and security configurations - without requiring manual Azure portal navigation. The deployment is fully automated and customer-facing users don't need to understand the underlying infrastructure.
Step 7.2: Run Setup Script
Execute the bash/PowerShell script that configures the environment:
# Option 1: Local setup (for testing)
git clone [copilot-agent-365-repo]
cd copilot-agent-365
./setup.sh # or setup.ps1 on Windows
# Script installs:
# - Python dependencies
# - Azure SDK
# - Connects to Azure Function App
# - Uploads default agents (memory, email)
# Option 2: Cloud setup (GitHub Codespaces)
# Click "Open in Codespaces" button
# Runs automatically in cloud - no local install needed
Dynamic loading: The setup script automates the entire environment configuration process - downloading dependencies, installing Python packages, connecting to Azure services, and uploading default agents. It handles all the technical plumbing automatically, regardless of whether running locally or in cloud environments.
Step 7.3: Deploy Your Agents
Copy the approved agents from Step 6 into the deployment:
# Local deployment (VS Code)
1. Copy your 6-10 reviewed agent files
2. Paste into /agents directory
3. Right-click → "Deploy to Function App"
4. Select your Azure subscription
5. Deployment completes in 2-3 minutes
# Cloud deployment (drag-and-drop)
1. Open Azure Portal β Your Function App
2. Navigate to File Storage β /agents folder
3. Drag approved .py files directly into folder
4. Agents hot-load automatically (zero downtime)
Hot-Loading = Zero Downtime
The stateless architecture means you can update agents WITHOUT restarting the Function App. Drop new agent files into Azure File Storage, and they're available instantly.
Example: Retail customer route optimization agent updated during live demo in minutes - no service interruption.
Step 7.4: Connect to M365 Copilot Studio
Make your agents accessible through Teams, Outlook, and web chat:
- Download Copilot Studio solution: Pre-built solution package in the repo (/solution/copilot-agent-solution.zip)
- Import to Power Platform:
- Go to copilotstudio.microsoft.com
- Select "Import solution"
- Upload the .zip file
- Configure environment variables (Azure Function URL, API key)
- Enable channels: Check boxes for Teams, Web, Outlook, etc.
- Publish: One-click publish to M365
M365 integration: The process involves downloading the pre-built Copilot Studio solution, importing it into Power Platform, configuring the desired channels (Teams, Web, Outlook), and publishing - which exposes the agents across all Microsoft 365 interfaces where users work.
Example: Banking Customer Service Bot Deployment
Deployed agents: CheckIn, Onboarding, Research, Risk, Scheduler, plus the default memory agent (6 agents total)
Deployment time: 45 minutes from code review approval to live in Teams
Process:
- Azure ARM template deployed → 8 minutes
- Setup script run → 5 minutes
- 6 agents uploaded to /agents folder → 2 minutes
- Copilot Studio solution imported → 10 minutes
- Teams channel enabled and tested → 20 minutes
Result: Customer can now chat with banker bot in Teams and test all 6 MVP scenarios
Deployment Verification Checklist
Before calling the customer, verify these items:
| Check Item | How to Verify | Expected Result |
|---|---|---|
| Azure Function App running | Azure Portal → Function App → Overview | Status: "Running" |
| Agents loaded | File Storage → /agents folder | See your 6-10 .py files |
| Function responds to HTTP | Postman test: POST to Function URL | Returns "Available agents: [list]" |
| Copilot Studio connected | Copilot Studio → Test chat | Bot responds to "Hello" |
| Agent calls working | Ask bot to trigger one agent | Agent executes, returns result |
| Memory persistence | Tell bot something, ask it to recall | Bot remembers across messages |
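Several of these rows can be scripted. A sketch of the HTTP check, assuming the Function exposes an endpoint taking the same payload shape as handle_request in the architecture section; the URL, route, and key are placeholders:

import requests

FUNCTION_URL = "https://<your-function-app>.azurewebsites.net/api/handle_request"
FUNCTION_KEY = "<function-key>"  # from Azure Portal, App keys

def smoke_test() -> None:
    """POST one message and fail loudly if the Function isn't healthy."""
    resp = requests.post(
        FUNCTION_URL,
        params={"code": FUNCTION_KEY},
        json={"user_message": "Hello", "user_guid": "smoke-test-user"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.text)  # expect a greeting, not a stack trace

if __name__ == "__main__":
    smoke_test()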
Common Deployment Issues
Issue #1: Azure Function App timeout errors
Cause: Default timeout is 5 minutes, some agents take longer for first cold start
Fix: In Azure Portal → Function App → Configuration → General Settings → "Function timeout" → Set to 10 minutes
Issue #2: Copilot Studio can't reach Azure Function
Cause: CORS not configured to allow copilotstudio.microsoft.com
Fix: Azure Portal → Function App → CORS → Add https://*.copilotstudio.microsoft.com
Issue #3: Agents not loading (File Storage connection error)
Cause: Connection string not configured in Function App environment variables
Fix: Configuration → Application Settings → Add AZURE_STORAGE_CONNECTION_STRING
Testing Before Customer Demo
Role-play the MVP scenarios to ensure everything works:
Example: Rehearsal Testing (Banking Customer)
Scenario 1: Customer Check-In
You: "Hi, I'm here for my 2pm appointment"
Bot: *calls CheckInAgent*
Bot: "Welcome back! I see you're scheduled with your advisor at 2pm for account review. They'll be right with you."
Verify: CheckInAgent logs show correct customer lookup
Scenario 2: Research 529 Plans
You: "I'd like to understand what 529 options are available"
Bot: *calls ResearchAgent*
Bot: "We offer three 529 college savings plans: [details]. Would you like me to compare them?"
Verify: ResearchAgent returns relevant financial product info
Scenario 3: Open New Account
You: "Great, let's open an account"
Bot: *calls OnboardingAgent*
Bot: "I'll help you get started. First, let me verify your information..."
Verify: OnboardingAgent initiates correct workflow
Real example: "So this is the use case for [Banking Customer]. So I basically loaded in those agents... And my prototyper is so good, you can basically rehearse with it."
What to check during rehearsal:
- Correct agent calls: LLM selects the right agent for each request
- Parameters passed correctly: Agent receives proper input (customer ID, date, etc.)
- Response quality: Agent returns useful, formatted output
- Error handling: Try invalid inputs - does it fail gracefully?
- Memory works: Bot remembers context across conversation
Optional: IT Firewall Testing
For security-sensitive customers, you can run the prototype locally:
# Run on single IT laptop (no cloud deployment needed)
python3 -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt
# Start local Azure Function
func start
# Prototype runs on localhost:7071
# Customer can test without any data leaving their network
Security option: "It could run locally, so they don't even need to really hook it into their systems. You know, they could firewall it to one person's laptop in IT."
Output

Deliverable
- Live prototype: Agents running in Azure Function App
- M365 access: Customer can test in Teams, Outlook, or web chat
- Agent logs: Application Insights capturing all calls for review
- Deployment documentation: Screenshots of successful deployment steps
- Test results: Rehearsal testing notes showing all MVP scenarios work
π‘ Why Deployment is So Fast
ARM template automation: No manual Azure resource creation - one click provisions everything
Stateless architecture: 500 lines of code, minimal dependencies, deploys in seconds
Hot-loading agents: Update agents without redeploying the Function App
Pre-built Copilot solution: No custom Copilot Studio configuration needed
Speed example: "Cape team member had his own business set up by the next day using this. That's how fast you can move with this. That's what rapid means."
Step 8: Audit - Demo Review π¦
Quality Gate #4: Customer sees working prototype and provides feedback
π― Objective
Film the working prototype solving the MVP scenarios and send to customer for validation. This is the FIRST time the customer sees their agents actually working (not just a document or diagram).
Time Investment: 2-4 hours total
- Record demo: 30-60 minutes
- Edit video (optional): 30 minutes
- Customer review: 1-2 days
- Feedback session: 30-60 minutes
Human Role: Record screen, narrate what's happening, collect customer feedback
Automation Role: Agents execute the MVP scenarios in real-time (no smoke and mirrors)
π₯ Input from Step 7
π What You Have
- Live prototype: Agents deployed and tested in Azure
- M365 Copilot access: Agents accessible via Teams, Outlook, web
- Rehearsal notes: Tested MVP scenarios, know what works
- Approved MVP doc: Customer-signed scope from Step 4
π¬ Demo Recording Process
Step 8.1: Prepare the Recording Environment
- Open M365 Copilot: Teams chat or web interface (whichever is cleaner)
- Load MVP scenarios: Have customer-approved use cases ready to demonstrate
- Start screen recorder: OBS Studio, Loom, Camtasia, or built-in OS recorder
- Optional: Enable webcam for personal touch (see yourself explaining)
Step 8.2: Record Each MVP Scenario
Show the agents solving the problem - no slides, no mockups, real execution:
Example: Retail Field Sales Demo Recording
MVP Scenario 1: Route Planning
[Screen recording in Teams]
You: "I need my route for today in Los Angeles"
*Agent indicator shows DynamicRoutePlanningAgent is called*
Bot: "Analyzing 47 accounts in Los Angeles area...
Optimized route generated:
Stop 1: Retail Store A (8:30am) - High priority, last visit 14 days ago
Stop 2: Retail Store B (10:00am) - Promotion check needed
Stop 3: Retail Store C (11:30am) - New seasonal display
Stop 4: Retail Store D (1:30pm) - Restocking required
Estimated drive time: 2.5 hours
Would you like me to send this to your calendar?"
✅ Customer sees: Real agent, real data sources, real routing logic
MVP Scenario 2: Shelf Insights
You: "What should I do at Retail Store A?"
*ShelfRecognitionAgent called*
Bot: "At Retail Store A (Store #4521):
- Your brand share of shelf: 18% (target: 25%)
- Competitor gap: Competitor X display blocking your product section
- Action: Request 2 additional shelf facings for your product line
- Promo opportunity: Labor Day end cap available Sept 1-5"
✅ Customer sees: Vision AI, competitive analysis, actionable recommendations
The process: Deploy the agents with the working prototype, record a narrated video demonstration walking through all MVP scenarios, and deliver it to the customer for their review and feedback.
Step 8.3: Show "Behind the Scenes" (Optional but Powerful)
Briefly show the agent being called in Application Insights logs:
[Switch to Azure Portal]
Application Insights → Live Metrics
Agent Calls (last 60 seconds):
- DynamicRoutePlanningAgent: 1.2s response time ✅
  - Inputs: {salesperson_id: "DS-4421", date: "2024-10-15", region: "LA"}
  - Output: 4 stops, optimized route returned
[Switch back to Teams]
"As you can see, the agent executed in real-time, pulling from your Dynamics 365 data."
Why show logs? Proves it's not a mockup. Customer sees actual agent execution with their data sources.
Step 8.4: Narrate What's Happening
Don't just show it - explain the value:
- Before each scenario: "This scenario solves [business problem] that you described in our discovery call..."
- During agent execution: "The agent is now connecting to your Dynamics 365 instance to pull account data..."
- After result: "This route saves your field rep approximately 45 minutes per day based on your current manual process..."
π‘ The "Waiter" Pattern in Action
From the customer's perspective, they're having a natural conversation with the bot. Behind the scenes:
- Waiter (LLM) takes the order: "I need my route for today"
- Waiter goes to kitchen: Calls DynamicRoutePlanningAgent with parameters
- Chef (Agent) cooks: Queries Dynamics, runs optimization algorithm, formats output
- Waiter serves the meal: "Here's your optimized route with 4 stops..."
Customer sees: Simple conversation
System does: Complex orchestration with data sources, agents, memory
The "waiter" pattern: The LLM acts as an intermediary, managing conversations with specialized agents behind the scenes while presenting a simple, unified interface to the user. The end user doesn't see or need to understand the complex orchestration happening in the background - they just get clean, contextualized responses.
π§ Sending the Demo to Customer
Email template for demo delivery:
Example: Demo Delivery Email
Subject: [Customer] Field Sales Agent - Working Prototype Demo
Hi [Customer Contact],
Great news - your AI agents are up and running!
I've recorded a 12-minute demo showing the prototype solving all 4 MVP scenarios we agreed on:
1. Dynamic route planning (0:00-3:30)
2. Shelf recognition and insights (3:30-6:45)
3. Next-best-action recommendations (6:45-9:20)
4. Promotion suggestions (9:20-12:00)
π₯ Watch the demo: [Loom/YouTube link]
This is a fully functional prototype running on your Dynamics 365 data (test environment).
You can see the agents executing in real-time - no mockups or slide decks.
**Next Steps:**
1. Watch the demo and note any feedback
2. We'll schedule a 30-minute call to discuss your thoughts
3. Based on your input, we can iterate the agents or proceed to production deployment
**Two Questions for You:**
1. Do these agents solve the business problem as you described it?
2. Is there anything you'd like adjusted before we move to production?
Looking forward to your feedback!
Best regards,
[Your Name]
π Customer Feedback Session
Schedule a 30-60 minute call to walk through their reactions:
| Question to Ask | Why It Matters | What to Listen For |
|---|---|---|
| "Did the agents solve the problem as you described it?" | Confirms MVP scope alignment | Yes β Proceed to Step 9 No β Back to Step 5 with feedback |
| "Were the results accurate and useful?" | Validates data integration quality | Issues? β May need data source adjustments |
| "Is there anything you'd change or add?" | Scope creep detector | Minor tweaks OK. Major changes β Renegotiate MVP |
| "Can you see this being used by your team?" | Adoption readiness check | Hesitation? β May need training plan |
| "What would production deployment look like?" | Sets expectations for Step 11-14 | Security, compliance, rollout timeline |
π¦ Decision Point: Quality Gate #4
Three possible outcomes from this gate:
Customer says: "This is exactly what we need. Let's move forward."
Next step: Proceed to Step 9 (Generate Video Demo)
Customer says: "Close, but can you adjust [specific thing]?"
Next step: Loop back to Step 5, regenerate affected agents, redeploy, re-record
Customer says: "Great, but now we also need [whole new capability]..."
Next step: STOP. Explain this is outside MVP scope. Either stick to original or redefine project (back to Step 1)
β οΈ Warning: The "While We're At It" Trap
Scenario: Customer loves the demo and says "This is amazing! While we're at it, could you also make it integrate with our inventory system and send automated reports to executives?"
Why this is dangerous:
- You're 80% through the workflow - almost at the finish line
- Adding new capabilities = new discovery, new MVP, new testing
- Timeline blows out from "days" to "weeks"
- Original momentum is lost
How to respond:
"I'm glad you see the value! Those are great ideas for the NEXT phase.
Let's complete this MVP first, get it into production, and then we can
scope a Phase 2 project with those additional capabilities. Sound good?"
Scope creep warning: "Scope creep warning signs... Picture the AI agent library, right? So they would do the one click install... But if they come back with major changes, you need to renegotiate MVP scope."
β Approval Checklist
Before proceeding to Step 9, confirm:
| Check Item | Verification Method | Status |
|---|---|---|
| Customer watched full demo | Loom/YouTube analytics show completion | ☐ Confirmed |
| All MVP scenarios demonstrated | Demo video covers every use case from Step 4 | ☐ Confirmed |
| Customer provided written approval | Email or Slack message saying "approved" | ☐ Confirmed |
| No major scope changes requested | Feedback is minor tweaks, not new features | ☐ Confirmed |
| Production deployment timeline discussed | Customer knows next steps and timeline | ☐ Confirmed |
π€ Output
π― Deliverable
- Demo video: 10-15 minute recording showing all MVP scenarios
- Customer feedback notes: Documented reactions and any requested tweaks
- Approval email: Written confirmation from customer to proceed
- Iteration log (if needed): List of changes made based on feedback
- Green light for production: Customer-approved prototype ready for final polish
π‘ Why This Gate Matters So Much
First moment of truth: Customer sees their problem actually being solved (not just described in a doc)
Validates the Buzz Saw approach: AI did heavy lifting, human ensured taste/quality
Builds credibility: You went from conversation to working prototype in days, not months
Risk elimination: You're iterating on WORKING CODE, not arguing about design documents
Iteration philosophy: "And they're gonna love it or they're gonna hate it, right? And we go through the same process of, OK, here's the feedback, let's keep iterating."
Step 9: Generate Video Demo
Create polished, shareable demo video for executive presentations and marketing
π― Objective
Generate a professional, self-contained demo video with Azure TTS narration that can be shared with executives, embedded in sales decks, or used for internal evangelism. This is different from Step 8's screen recording - this is a PRODUCED demo with scripted narration and polished visuals.
Time Investment: 3-4 hours (manual)
- Create demo JSON manually: 1-2 hours using the Local-First Chat Animation Studio
- Script narration and timing: 1 hour
- Test and refine: 1 hour
Human Role: Manual creation of demo structure, scripting, narration, and timing adjustments
Automation Vision: Video Demo Generator to auto-create demo JSON from prototype walkthrough, reducing time to 30 minutes
π₯ Input from Step 8
π What You Have
- Customer-approved prototype: Working agents validated in Step 8
- Screen recording demo: Raw footage of agent execution
- MVP documentation: Business value, metrics, use cases
- Project JSON: Customer name, stakeholders, timeline
π€ Automation: Video Demo Generator
π€ Automation Tool: Video Demo Generator
Type: M365 Copilot Studio Declarative Agent
Location: Open Video Demo Generator
Local Tool: Open Local-First Chat Animation Studio
What It Does:
Generates structured JSON demo configurations with:
- Demo steps: 7-10 choreographed steps with timing (60-90 seconds total)
- Azure TTS narration: Professional voice-over text for each step
- Chat interactions: Simulated user → agent conversations
- Agent cards: Visual overlays showing agent status, metrics, processing
- Narrative arc: Problem → Solution → Business Impact structure
How to Use:
- Open Copilot Studio and load the Video Demo Generator agent
- Provide context: Paste your MVP documentation + project JSON
- Prompt:
"Generate a demo JSON for [CustomerName]'s AI agent prototype.
Industry: [Industry]
Use Case: [Brief description from MVP]
Agents: [List 3-5 key agents]
Key Metrics: [Time saved, accuracy improved, cost reduced]
Target audience: [Executives / Technical teams / Field users]
Include:
- Opening hook showing the business problem
- 3-4 steps demonstrating agent capabilities
- Specific metrics and ROI
- Chat interactions showing natural language use
- Closing with scalability vision"
- Agent returns: Complete JSON file with all demo steps and narration
- Save JSON: Copy output to demos/[customer-name]-demo.json
π Example: Retail Field Sales Video Demo Generation
Input Prompt to Video Demo Generator:
Generate a demo JSON for Retail Customer's Field Sales AI Agent system.
Industry: Field Sales
Use Case: Field sales reps need to know where to go, what order to visit accounts,
and what actions to take at each location. Currently done manually, taking 45
minutes per day. AI agents solve this in real-time.
Agents:
1. DynamicRoutePlanningAgent - Optimizes daily routes
2. ShelfRecognitionAgent - Vision AI analyzes shelf photos
3. NextBestActionAgent - Recommends actions at each store
4. PromotionSuggestionsAgent - Identifies promo opportunities
Key Metrics:
- 45 minutes saved per rep per day
- 18% share-of-shelf → 25% (target achievement)
- 95% planogram compliance (up from 67%)
- Real-time Dynamics 365 integration
Target audience: Customer sales leadership and field operations VPs
Include:
- Opening hook: Field rep standing in parking lot, unsure where to go
- Route planning demo with LA map and 4 optimized stops
- Shelf photo analysis at Retail Store A showing competitor gap
- Next-best-action recommendations with specific tasks
- ROI: 200+ hours saved per month across team
- Closing: Scales to all markets, all regions globally
Generated Demo JSON (excerpt):
{
"name": "Field Sales Intelligence System",
"headerTitle": "AI-Powered Field Sales Optimization",
"userName": "Sales Rep - Senior Territory Manager",
"azureTTS": {
"key": "[TTS_KEY]",
"region": "eastus",
"voiceName": "en-US-GuyNeural"
},
"metadata": {
"industry": "Retail - Beverage Distribution",
"useCase": "Field Sales Route Optimization",
"targetAudience": "Sales Leadership",
"integration": "Microsoft Dynamics 365, Azure Vision AI"
},
"demoSteps": [
{
"id": "step1-hook",
"name": "The Daily Challenge",
"duration": 7000,
"voiceText": "Every morning, 200 field reps face the same question: where do I go today, and what do I do when I get there? This manual process wastes 45 minutes per day per rep. Let's see how AI changes everything.",
"narrator": {
"title": "The $2M Problem",
"text": "200 reps Γ 45 min/day Γ 250 days = 37,500 hours wasted annually",
"subtitle": "Manual route planning β’ No prioritization β’ Missed opportunities",
"position": "center"
}
},
{
"id": "step2-route",
"name": "AI Route Planning",
"duration": 6000,
"voiceText": "The rep opens Copilot and asks for their route. In 1.2 seconds, the DynamicRoutePlanningAgent analyzes 47 LA accounts, considers last visit dates, priority scores, and traffic patterns, and generates an optimized 4-stop route.",
"narrator": {
"title": "Instant Route Optimization",
"text": "47 accounts analyzed β’ 4 stops prioritized β’ 2.5 hours estimated",
"subtitle": "Saves 45 minutes of planning time",
"position": "left"
}
},
{
"id": "step3-shelf",
"name": "Vision AI Shelf Analysis",
"duration": 6000,
"voiceText": "At the retail location, the rep takes a quick shelf photo. The ShelfRecognitionAgent uses Azure Vision AI to detect the brand's 18% share, identifies a competitor display blocking the product section, and recommends requesting 2 additional facings.",
"narrator": {
"title": "Real-Time Competitive Intelligence",
"text": "Current: 18% share β’ Target: 25% β’ Gap: Competitor blocking",
"subtitle": "95% detection accuracy β’ Instant recommendations",
"position": "right"
}
},
...more steps...
{
"id": "step9-impact",
"name": "Business Impact",
"duration": 8000,
"voiceText": "Across 200 field reps, this system saves 37,500 hours annually, improves share-of-shelf by 7 percentage points, and increases planogram compliance from 67% to 95%. That's a 340% ROI in the first year.",
"narrator": {
"title": "Transformational Results",
"text": "$2.3M annual savings β’ 7% share gain β’ 28% compliance improvement",
"subtitle": "Scales globally to all markets",
"position": "center"
}
}
],
"chatInteractions": [
{
"stepId": "step2-route",
"messages": [
{"type": "user", "text": "I need my route for today in Los Angeles", "voice": false},
{"type": "system", "text": "π Analyzing accounts...", "timestamp": "9:03am"},
{"type": "agent", "text": "Route optimized for 4 high-priority stops. Estimated time: 2.5 hours.", "voice": false}
]
}
],
"agentCards": [
{
"stepId": "step2-route",
"card": {
"title": "DynamicRoutePlanningAgent",
"status": "ACTIVE",
"details": {
"Accounts Analyzed": "47",
"Stops Generated": "4",
"Processing Time": "1.2s",
"Optimization": "Traffic + Priority + History"
}
}
}
]
}
What the agent generated:
- 9 demo steps with complete narration (85 seconds total)
- Problem → Solution → Impact narrative arc
- Specific metrics: 45 min saved, 18% → 25% share, $2.3M savings
- Chat interactions showing natural language queries
- Agent cards showing processing details and confidence scores
- Professional narrator overlays for each step
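Before loading a generated file into the animation studio, a quick sanity check catches broken references and runaway length. This sketch follows the field names in the excerpt above; adjust it if your schema differs.

```python
# Hedged validator for a generated demo JSON (field names per the excerpt above).
import json

with open("demos/customer-demo.json") as f:  # placeholder path
    demo = json.load(f)

steps = demo["demoSteps"]
assert steps, "demo needs at least one step"
total_s = sum(step["duration"] for step in steps) / 1000
print(f"{len(steps)} steps, {total_s:.0f}s total")  # target: 7-10 steps, 60-90s

step_ids = {step["id"] for step in steps}
for chat in demo.get("chatInteractions", []):
    assert chat["stepId"] in step_ids, f"chat references unknown step {chat['stepId']}"
for card in demo.get("agentCards", []):
    assert card["stepId"] in step_ids, f"card references unknown step {card['stepId']}"
```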
π¨ Customizing the Generated Demo
Review the JSON and adjust:
1. Narration Tone
- For executives: Focus on ROI, strategic value, competitive advantage
- For technical teams: Emphasize architecture, integrations, scalability
- For field users: Highlight ease of use, time savings, day-to-day benefits
2. Voice Selection (Azure TTS)
Change the voiceName field to match your audience:
// Professional Male (default)
"voiceName": "en-US-GuyNeural"
// Professional Female
"voiceName": "en-US-JennyNeural"
// Enthusiastic Male
"voiceName": "en-US-DavisNeural"
// Warm Female
"voiceName": "en-US-AriaNeural"
3. Step Timing
Adjust duration values if narration feels rushed or too slow:
// Calculate: ~150 words per minute for natural speech
// Example: 25 words of narration = ~10,000ms (10 seconds)
{
"id": "step3-shelf",
"duration": 6000, // β Increase if narration cuts off
"voiceText": "At Retail Store A, the sales rep takes a quick shelf photo..."
}
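The 150-words-per-minute rule is easy to automate. A hypothetical helper like the one below can pre-fill `duration` values so narration never cuts off:

```python
# Hypothetical helper: estimate a step's duration (ms) from its narration text,
# using the ~150 words-per-minute rule of thumb above, plus a little padding.
def estimate_duration_ms(voice_text: str, wpm: int = 150, padding_ms: int = 500) -> int:
    words = len(voice_text.split())
    return int(words / wpm * 60_000) + padding_ms

# 25 words -> ~10,500 ms, matching the worked example above
print(estimate_duration_ms(" ".join(["word"] * 25)))
```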
4. Metrics Verification
Double-check all numbers match your project JSON:
- Time savings (45 min/day in retail example)
- Accuracy improvements (67% → 95% compliance)
- ROI calculations ($2.3M savings)
- Volume metrics (200 reps, 47 accounts)
π₯ Running the Demo
The generated JSON works with the Local-First Chat Animation Studio:
Option 1: Use Local-First Tool (Recommended)
1. Open tools/localfirst_chat_animation_studio_tool.html in your browser
2. Click "Import Demo JSON" button
3. Paste your generated JSON or upload the .json file
4. Configure Azure TTS credentials (or use mock TTS for testing)
5. Click "Preview Demo" to test
6. Press SPACE to start auto-play, arrow keys to navigate steps
Benefits:
- Works offline, no server required
- Built-in JSON editor and validator
- Save/load demos from browser storage
- Export standalone HTML files
Option 2: Generate Standalone HTML
Ask the LLM to wrap your JSON in a complete HTML file:
Prompt: "Take this demo JSON and create a self-contained HTML file
with embedded JavaScript, CSS styling, Azure TTS integration, and
keyboard controls. Include copy button for code blocks."
[Paste your generated JSON]
Testing the Demo:
- Press SPACE to start auto-play mode
- Watch narration overlays appear with timing
- Listen to Azure TTS read voiceText (or simulate if no key)
- Verify agent cards display at correct steps
- Check chat interactions appear with proper sequencing
π§ Sharing the Demo
Multiple distribution options:
Option 1: Share HTML File
- Email the standalone HTML file to customer
- They open it in any browser - works offline
- No installation or accounts needed
- Azure TTS requires internet connection for narration
Option 2: Host on GitHub Pages
1. Push HTML to GitHub repo
2. Enable GitHub Pages in repo settings
3. Share URL: https://[username].github.io/[repo]/[demo].html
4. Works on mobile, tablet, desktop
Option 3: Record and Export Video
1. Use OBS Studio / Loom to record browser playthrough
2. Export as MP4 (1920x1080 recommended)
3. Upload to YouTube / SharePoint / Vimeo
4. Embed in PowerPoint decks or send direct link
Example: Email to Executive Stakeholders
Subject: Field Sales AI System - Interactive Demo Ready
Hi [Executive Name],
Following our prototype approval, I've created an interactive demo showcasing
the AI agents in action. This is a polished, 90-second walkthrough with
professional narration.
π₯ Interactive Demo: [Attach HTML or link]
Key highlights:
- 45 minutes saved per rep per day
- Real-time route optimization with LA market example
- Vision AI shelf analysis at Retail Store A
- $2.3M annual savings across 200-rep team
How to view:
1. Download the HTML file
2. Open in Chrome/Edge/Safari
3. Press SPACE to start auto-play
4. Use arrow keys to navigate steps manually
This demo is ready to share with:
- Sales leadership for budget approval
- Field teams for change management training
- IT/Security for technical review
Let me know if you'd like any adjustments to messaging or metrics.
Best regards,
[Your Name]
π‘ Why Video Demos Matter
Async evangelism: Executives can watch on their schedule, share with peers
Consistent messaging: Same story told every time, no live demo variability
Internal evangelism: Field teams can see what's coming, build excitement
Sales enablement: Account teams can use for similar customers in same industry
No technical dependencies: Works offline, no Azure credits needed to view
β οΈ Common Mistakes
Mistake #1: Too many steps (12+ steps)
Why it's bad: Demo drags on, loses audience attention after 2 minutes
Fix: Keep to 7-10 steps, 60-90 seconds total. Focus on 3-4 key capabilities.
Mistake #2: Generic narration ("This agent is powerful...")
Why it's bad: No differentiation, sounds like marketing fluff
Fix: Use specific metrics: "45 minutes saved" not "saves time". "18% → 25% share" not "improves results".
Mistake #3: No narrative arc (just feature list)
Why it's bad: Audience doesn't understand the transformation
Fix: Structure as Problem → Solution → Impact. Start with pain, end with ROI.
β Quality Checklist
Before sharing the demo, verify:
| Check Item | How to Verify | Status |
|---|---|---|
| All metrics match project JSON | Cross-reference numbers with MVP doc | ☐ Confirmed |
| Narration timing feels natural | Listen to Azure TTS playback, no cut-offs | ☐ Confirmed |
| Agent cards appear at correct steps | Watch demo, verify stepId references | ☐ Confirmed |
| Chat interactions make sense | Read conversations, check for logic flow | ☐ Confirmed |
| Customer name spelled correctly | Check all references (header, steps, narration) | ☐ Confirmed |
| Demo works on mobile/tablet | Test on iOS Safari, Android Chrome | ☐ Confirmed |
π€ Output
π― Deliverable
- Demo JSON file: demos/[customer-name]-demo.json
- Standalone HTML demo: Self-contained file with narration
- Hosted URL (optional): GitHub Pages or internal server link
- MP4 video (optional): Recorded playthrough for PowerPoint embedding
- Usage instructions: How to view, navigate, and share
Step 10: Audit - Final Demo Review π¦
Quality Gate #5: Customer confirms video demo is ready to share with executives
π― Objective
Get final approval from customer that the polished video demo accurately represents their solution and is ready to share with executive stakeholders, board members, or field teams.
Time Investment: 30-60 minutes (review call with customer)
Decision: Approve demo for production use OR request adjustments
π₯ Input from Step 9
π What You Have
- Video demo (HTML or MP4): Polished with Azure TTS narration
- Demo JSON: Complete script with steps, metrics, narration
- Distribution plan: How demo will be shared (email, hosted URL, embedded in deck)
β Review Checklist with Customer
| Review Item | What to Check |
|---|---|
| Metrics accuracy | All numbers match reality (time saved, ROI, accuracy improvements) |
| Messaging tone | Narration appropriate for target audience (executives vs. field users) |
| Company branding | Customer name spelled correctly, no competitor references |
| Technical accuracy | Integration points, systems mentioned are correct |
| Shareability | Customer comfortable with internal distribution |
π¦ Decision Point
✅ APPROVED: Demo is locked and ready for production deployment planning → Proceed to Step 11
🔄 ITERATE: Minor adjustments needed (narration, metrics, branding) → Fix and re-review
π€ Output
π― Deliverable
- Approved demo files: Final HTML/MP4 locked for distribution
- Distribution permission: Customer sign-off for executive sharing
- Usage guidelines: Who can see it, how to present it
Step 11: Production Planning
Define security, compliance, rollout strategy, and production requirements
π― Objective
Work with customer's IT, Security, and Compliance teams to define production deployment requirements. Transform prototype into production-ready system.
Time Investment: 3-7 days (meetings + documentation)
π Security & Compliance Planning
Questions for IT/Security Team:
- Data residency: Where can data be stored? (US, EU, on-prem)
- Authentication: SSO required? Azure AD integration?
- Network access: Public internet OK or private endpoints needed?
- Audit logging: What events need to be logged for compliance?
- Data encryption: At rest and in transit requirements
- Compliance frameworks: HIPAA, SOC 2, GDPR, FedRAMP?
Common Requirements:
// Production vs. Prototype
Prototype: Single Azure Function App, public endpoints, dev Azure subscription
Production: Redundant deployments, private endpoints, customer Azure subscription
Security additions needed:
- Azure AD authentication (no anonymous access)
- Private Link for Azure Function App
- Azure Key Vault for secrets (not environment variables)
- Application Gateway with WAF
- DDoS protection
- Managed Identity for service-to-service auth
- Azure Monitor alerts for security events
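As one concrete example of the Key Vault and Managed Identity items above, production agent code would stop reading connection strings from environment variables and fetch them like this. The vault URL and secret name are placeholders.

```python
# Hedged sketch: fetch a secret via Managed Identity + Key Vault in production.
# pip install azure-identity azure-keyvault-secrets
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

credential = DefaultAzureCredential()  # resolves to the Function App's Managed Identity
client = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",  # placeholder vault URL
    credential=credential,
)
storage_conn = client.get_secret("AzureStorageConnectionString").value  # placeholder name
```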
π Rollout Strategy
Phased Deployment Plan:
| Phase | Users | Duration | Success Criteria |
|---|---|---|---|
| Phase 1: Pilot | 5-10 power users | 2 weeks | 80% adoption, no critical bugs |
| Phase 2: Early Adopters | 25-50 users | 4 weeks | Positive NPS, feature requests captured |
| Phase 3: General Availability | All users | 8 weeks | ROI metrics achieved, support tickets low |
π€ Output
π― Deliverable
- Production requirements doc: Security, compliance, infrastructure needs
- Rollout plan: Phased deployment schedule with success metrics
- Support plan: Who handles user questions and bugs
- Training materials: User guides, video tutorials, FAQ
Step 12: Audit - Security & Compliance Sign-off π¦
Quality Gate #6: IT/Security/Compliance approve production deployment
π― Objective
Get formal approval from customer's IT, Security, and Compliance teams that the system meets all production requirements.
Time Investment: 1-3 weeks (review process, pen testing, compliance audit)
π Security Review Checklist
- Penetration testing: Third-party security audit completed
- Vulnerability scan: No critical or high-severity issues
- Code review: Security team reviewed agent code
- Access controls: RBAC configured, least privilege enforced
- Secrets management: All credentials in Azure Key Vault
- Logging/monitoring: Security events captured and alerted
π Compliance Review Checklist
- Data classification: PII/PHI handling documented
- Data retention: Policies configured for user memory and logs
- Audit trail: All agent actions logged with user context
- Compliance frameworks: SOC 2, HIPAA, GDPR requirements met
- Third-party integrations: Vendor BAAs signed where needed
π¦ Decision Point
✅ APPROVED: All security and compliance requirements met → Proceed to Step 13 (Production Deployment)
⏸️ CONDITIONAL APPROVAL: Minor fixes needed → Address issues and re-submit
🚫 BLOCKED: Major security/compliance issues → Back to Step 11 for redesign
π€ Output
π― Deliverable
- Security sign-off: Written approval from CISO or Security team
- Compliance sign-off: Approval from Compliance/Legal team
- Pen test report: Results from third-party security audit
- Production deployment authorization: Go-live approval from stakeholders
Step 13: Production Deployment
Move from prototype to production with phased rollout
π― Objective
Deploy the production-ready system to customer's Azure subscription with all security and compliance controls. Execute phased rollout to pilot users.
Time Investment: 2-3 days (deployment) + 2-4 weeks (pilot phase)
π Production Deployment Steps
1. Infrastructure Deployment
// Deploy to customer's production Azure subscription
1. Run production ARM template (with security additions from Step 11)
2. Configure Azure AD authentication
3. Set up Private Link endpoints
4. Configure Application Gateway + WAF
5. Enable Azure Monitor alerts
6. Configure backup and disaster recovery
2. Agent Deployment
// Deploy approved agents to production
1. Copy reviewed agents from prototype to production File Storage
2. Verify agents load correctly in production Function App
3. Test one agent call with production data
4. Configure production Dynamics 365 / data source connections
5. Validate memory persistence with production Azure Storage
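The copy step above is scriptable. A hedged sketch, assuming both environments use an "agents" file share as in the prototype; connection strings are placeholders:

```python
# Hedged sketch: promote reviewed agents from the prototype share to production.
from azure.storage.fileshare import ShareClient

src = ShareClient.from_connection_string("<prototype-conn-string>", "agents")
dst = ShareClient.from_connection_string("<production-conn-string>", "agents")

for item in src.list_directories_and_files():
    if item["name"].endswith(".py"):
        data = src.get_file_client(item["name"]).download_file().readall()
        dst.get_file_client(item["name"]).upload_file(data)  # hot-loaded on next request
        print(f"promoted {item['name']}")
```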
3. M365 Integration
// Connect to production M365 tenant
1. Import Copilot Studio solution to production environment
2. Configure production Azure Function URL
3. Enable Teams channel for pilot user group
4. Test end-to-end: Teams → Copilot → Function → Agents → Data sources
5. Verify SSO authentication works
4. Pilot User Onboarding
- Send welcome email: Instructions for accessing agent in Teams
- Conduct training session: 30-minute walkthrough of key scenarios
- Provide support channel: Slack/Teams channel for questions
- Set expectations: This is Phase 1, feedback will shape improvements
π Monitoring & Support
Week 1-2 (Pilot Phase):
- Daily check-ins: Review Application Insights logs for errors
- User feedback: Weekly survey to pilot users (NPS, issues, feature requests)
- Performance metrics: Agent response time, success rate, usage frequency
- Hot fixes: Address critical bugs within 24 hours
π€ Output
π― Deliverable
- Production system: Agents running in customer's Azure with all security controls
- Pilot users active: 5-10 users using agents in daily work
- Monitoring dashboard: Real-time metrics on usage and performance
- Support process: Defined escalation path for issues
Step 14: Post-Deployment Review
Measure success, gather learnings, plan Phase 2 improvements
π― Objective
After 2-4 weeks of pilot usage, review metrics, collect user feedback, validate ROI, and plan next phase of rollout or feature enhancements.
Time Investment: 1-2 hours (review meeting with customer)
π Success Metrics Review
Quantitative Metrics:
| Metric | Target (from MVP) | Actual | Status |
|---|---|---|---|
| Time saved per user per day | 45 minutes | [Measured] | ✅ / ⚠️ / ❌ |
| Agent accuracy | 95% | [Measured] | ✅ / ⚠️ / ❌ |
| User adoption rate | 80% | [Measured] | ✅ / ⚠️ / ❌ |
| Agent calls per day | 20+ per user | [Measured] | ✅ / ⚠️ / ❌ |
Qualitative Feedback:
- User satisfaction (NPS): Survey results from pilot users
- Top feature requests: What users want added in Phase 2
- Pain points: What's not working well
- Unexpected use cases: Creative ways users are leveraging agents
π Lessons Learned
What Worked Well:
- Which agents got the most usage?
- What aspects of training resonated with users?
- Which integration points worked smoothly?
What Needs Improvement:
- Which agents had accuracy issues?
- Where did users get confused or stuck?
- What data source connections were unreliable?
π Phase 2 Planning
Expansion Options:
- Wider rollout: Expand from 10 pilot users to 50+ users
- New agents: Add capabilities based on user requests
- Integrations: Connect to additional data sources or systems
- Advanced features: Multi-agent workflows, proactive alerts, scheduled tasks
ROI Validation:
Example: Field Sales System ROI After 4 Weeks
Projected savings: $2.3M annually (from Step 9 demo)
Actual savings (pilot):
- 10 pilot users × 42 min saved/day (slightly below the 45 min target)
- 4 weeks × 5 days/week = 20 days
- Total: 140 hours saved across 10 users in 4 weeks
- Extrapolated to 200 users: 35,000 hours/year saved ✅ (close to the 37,500-hour target)
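For transparency, the pilot arithmetic above as a short script (numbers taken from this example):

```python
# Pilot savings math from the example above
pilot_hours = 10 * 42 * 20 / 60     # users x min/day x days = 140 hours
annual_hours = 200 * 42 * 250 / 60  # full rollout, 250 working days = 35,000 hours/year
print(pilot_hours, annual_hours)
```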
User feedback highlights:
- "Route planning agent saves me 30-60 min every morning" - Sales Rep A
- "Shelf recognition catches things I would have missed" - Sales Rep B
- "I want this for all my accounts, not just LA market" - Feature request
Decision: Proceed with Phase 2 rollout to all 200 reps in Q2
π€ Output
π― Deliverable
- Success metrics report: Quantitative data on adoption, usage, time saved
- User feedback summary: NPS scores, quotes, feature requests
- ROI validation: Actual savings vs. projected savings
- Phase 2 proposal: Roadmap for next 3-6 months of enhancements
- Case study (optional): Document learnings for similar customers
π You've Completed the RAPP 14-Step Framework!
Discovery to production in days, not months:
- ✅ Discovery Call → MVP Definition (Steps 1-4): 2-3 days
- ✅ Agent Generation → Prototype Demo (Steps 5-8): 1-2 days
- ✅ Video Demo → Final Approval (Steps 9-10): 1 day
- ✅ Production Planning → Deployment (Steps 11-13): 1-3 weeks
- ✅ Post-Deployment Review (Step 14): 2-4 weeks after launch
Total timeline: Working prototype in 3-5 days, production deployment in 3-5 weeks (mostly waiting for security/compliance approvals)
Compare to traditional development: 3-6 months for same scope
Result: 20x faster, 6 human quality gates preventing scope creep, validated ROI with real user data
Team Setup Guide
How to onboard Team Members A, B, and future team members to run this workflow autonomously
π― Objective
Enable multiple team members to run the 14-step framework in parallel, handling 20+ concurrent projects without bottlenecks.
Parallel execution: "[Team Member A], go and go and have this session, right? [Team Member B], go and have this session, get the get the inputs that we need... And then I'm going to let this work overnight and then I'm going to get to the gates of audit."
π₯ Station-Based Workflow Distribution
Divide the workflow across team members based on skill sets:
Station 1: Discovery & MVP Definition (Steps 1-4)
Owner: Team Member A + Team Member B (Sales/Solutions Architects)
Skills needed: Customer communication, requirements gathering, MVP scoping
Tools:
- Transcript-to-Project Agent (Copilot Studio)
- Customer project tracker HTML tool
- MVP "poke" template
Deliverables: Approved MVP document, locked project JSON
Time per project: 2-3 days
Station 2: Agent Generation & Code Review (Steps 5-6)
Owner: Team Member B + Team Member C (Technical leads)
Skills needed: Python knowledge (basic), code review, integration planning
Tools:
- RAPP Agent Generator (Copilot Studio)
- VS Code for code review
- Azure OpenAI for agent customization
Deliverables: 6-10 reviewed Python agents ready for deployment
Time per project: 1-2 days
Station 3: Deployment & Demo (Steps 7-10)
Owner: Team Member C (Technical deployment lead)
Skills needed: Azure deployment, Copilot Studio, video production
Tools:
- Azure CLI / Portal for deployment
- Video Demo Generator (Copilot Studio)
- Local-First Chat Animation Studio (tools/localfirst_chat_animation_studio_tool.html)
- Loom/OBS for screen recording
Deliverables: Working prototype in Azure, customer-approved video demo
Time per project: 1-2 days
Station 4: Production & Post-Launch (Steps 11-14)
Owner: Customer IT team + Team Member C (support escalation)
Skills needed: Production Azure management, security/compliance knowledge
Tools:
- Azure Portal (customer subscription)
- Application Insights for monitoring
- Teams for pilot user support
Deliverables: Production system, ROI validation, Phase 2 roadmap
Time per project: 3-5 weeks (mostly waiting for approvals)
π Team Member Onboarding Checklist
For Team Members A/B (Station 1):
| Setup Task | How To | Status |
|---|---|---|
| Copilot Studio access | Admin grants access to Copilot Studio environment | ☐ Complete |
| Load Transcript-to-Project Agent | Import declarativeAgent_0.json from /Transcript-to-Project Agent (1)/ | ☐ Complete |
| Test project tracker tool | Open customer-project-tracker.html, add test project | ☐ Complete |
| Practice MVP "poke" creation | Run through Step 3 with sample transcript | ☐ Complete |
| Shadow 1 discovery call | Observe Team Member C running Steps 1-4 with real customer | ☐ Complete |
For Team Member B (Station 2):
| Setup Task | How To | Status |
|---|---|---|
| Azure OpenAI access | Get API key + endpoint from Azure Portal | ☐ Complete |
| Load RAPP Agent Generator | Import declarativeAgent_0.json from /RAPP Agent Generator/ | ☐ Complete |
| VS Code setup | Install Python extension, configure linter | ☐ Complete |
| Review BasicAgent template | Read Customers/*/agent examples to understand pattern | ☐ Complete |
| Run code review on sample agent | Use Step 6 checklist on existing agent | ☐ Complete |
For Team Member C (Station 3):
Already proficient - focus on scaling capacity, not learning
π Handoff Protocol Between Stations
Station 1 → Station 2 Handoff:
What Team Members A/B Pass to Team Members B/C:
Slack message format:
π Project: [Customer Name] - [Project Name]
✅ Status: MVP Approved by customer (Step 4 complete)
Deliverables:
- Customer transcript: [Link to recording + transcript file]
- Project JSON: [Attachment: customer-project-json.json]
- Approved MVP "poke": [Link to customer approval email]
- Stakeholder contacts: [Customer PM name + email]
Next actions:
- Generate agents using RAPP Agent Generator (Step 5)
- Target completion: [Date]
Questions: @TeamMemberB @TeamMemberC
Station 2 → Station 3 Handoff:
What Team Member B Passes to Team Member C:
Slack message format:
π Project: [Customer Name] - [Project Name]
✅ Status: Agent code reviewed and approved (Step 6 complete)
Deliverables:
- Agent code: [Folder with 6-10 .py files]
- Code review notes: [Any issues fixed, customizations made]
- Integration requirements: [Dynamics version, data sources, API keys needed]
- Project JSON: [Updated with any scope adjustments]
Next actions:
- Deploy to Azure prototype environment (Step 7)
- Record demo and send to customer (Step 8)
- Target demo delivery: [Date]
Questions: @TeamMemberC
βοΈ Automation Tools for Team Coordination
Slack Bot for Status Updates:
// Auto-post to #ai-projects channel when stage completes
Team Member A completes Step 4 → Bot posts:
"π [Customer Name] MVP approved! @TeamMemberB ready for agent generation (Station 2)"
Team Member B completes Step 6 → Bot posts:
"π [Customer Name] agents reviewed! @TeamMemberC ready for deployment (Station 3)"
Team Member C completes Step 10 → Bot posts:
"π [Customer Name] demo approved! Production planning starts Monday"
Shared Project Tracker:
Use the customer-project-tracker.html tool to visualize pipeline:
- Column 1: Discovery (Team Members A/B) - Projects in Steps 1-4
- Column 2: Development (Team Members B/C) - Projects in Steps 5-6
- Column 3: Deployment (Team Member C) - Projects in Steps 7-10
- Column 4: Production (Customer IT) - Projects in Steps 11-14
Team can see at a glance: 5 projects in Discovery, 3 in Development, 2 in Deployment, 8 in Production
π‘ Key to Scaling: Freeze Frame the Process
Process standardization: The key to scaling is documenting each station with crystal-clear specifications - defining exact inputs, expected outputs, audit criteria, and handoff procedures to the next stage. This "freeze frame" documentation allows team members to operate their stations autonomously with consistent quality.
What this means: Each station has clear inputs, clear outputs, and clear quality criteria. No ambiguity, no "figure it out yourself."
Result: Team Members A and B can run their stations autonomously while Team Member C focuses on deployment bottleneck
Scaling to 20+ Concurrent Projects
How the framework handles massive parallelization
π° Business Impact at Scale
3-person team capacity: 20-25 concurrent MVP projects, 50+ total projects including production.
Cost structure: ~$50-100/month per prototype environment (auto-deleted after customer handoff), zero marginal cost for production (runs in customer Azure).
Revenue model: Each project = $100K-500K ACV. Team of 3 can manage $2M-10M ARR pipeline simultaneously.
π― Objective
Run 20+ customer projects simultaneously using the same automation workflow, same agents, same team.
Scaling philosophy: Once the standardized process is refined and documented, the framework can handle unlimited concurrent MVPs for unlimited customers - the only constraint being data access. The automation and standardization enable infinite horizontal scaling.
π Current Capacity Analysis
Track the pipeline in customer-project-tracker.html as capacity grows from the current single-person operation to the full Team Members A + B + C station model.
Bottleneck Analysis:
| Stage | Current Bottleneck | Solution |
|---|---|---|
| Steps 1-4 (Discovery) | Customer meeting scheduling | Team Members A + B run in parallel (5-10 projects/week) |
| Steps 5-6 (Agent Gen) | Code review takes 30-60 min/project | Team Member B handles, 5-8 reviews/day possible |
| Steps 7-10 (Deploy/Demo) | Team Member C deployment bandwidth | Automate deployment scripts (ARM templates), batch demos |
| Steps 11-14 (Production) | Customer security reviews (1-3 weeks) | Not a blocker - runs async, doesn't consume team time |
π Scaling Strategies
1. Batch Processing
Example: Monday Morning Batch
Scenario: 10 projects reached Step 7 (Deploy Prototype) over the weekend
Old way (sequential): Deploy 1 project, test, move to next = 10 hours
New way (batch):
- Run ARM template deployment for all 10 projects in parallel (8 min each, run simultaneously)
- Use single setup script with loop to configure all 10 Function Apps (30 min total)
- Drag approved agents into Azure File Storage for each (5 min per project = 50 min total)
- Batch test: Run one agent call for each project to verify (10 min total)
Total time: 1.5 hours (vs. 10 hours sequential)
Result: 6.7x faster through parallelization
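A hedged sketch of the parallel deployment step, shelling out to the Azure CLI from Python; the template path, resource-group naming, and parameter name are placeholders for your own ARM setup:

```python
# Hedged sketch: run the ARM deployment for every waiting project in parallel.
import subprocess
from concurrent.futures import ThreadPoolExecutor

PROJECTS = ["customer-a", "customer-b", "customer-c"]  # projects waiting at Step 7

def deploy(project: str) -> str:
    subprocess.run(
        ["az", "deployment", "group", "create",
         "--resource-group", f"rg-{project}-prototype",  # placeholder naming
         "--template-file", "azuredeploy.json",          # placeholder template
         "--parameters", f"projectName={project}"],      # placeholder parameter
        check=True,
    )
    return project

with ThreadPoolExecutor(max_workers=10) as pool:
    for done in pool.map(deploy, PROJECTS):
        print(f"{done}: prototype environment deployed")
```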
2. Overnight Automation
Overnight automation: "And then I'm going to let this work overnight and then I'm going to get to the gates of audit and it might take four or five days to get through."
Automatable tasks to run overnight:
- Agent generation (Step 5): Queue 20 agent generation requests to RAPP Agent Generator, let them run overnight
- Code review prep (Step 6): Auto-lint checks, security scans run async
- Demo video rendering (Step 9): Generate demo JSON for all approved projects, render videos in batch
3. Template Reuse Across Similar Customers
Example: Retail Field Sales Template
Problem: 5 different retail customers all need route planning + shelf recognition agents
Solution:
- Create "Retail Field Sales Template" with 4 core agents (industry standard pattern)
- For each new customer, customize agent parameters (not logic):
- Change Dynamics instance URL
- Adjust store visit frequency rules
- Customize product categories
- Skip Step 5 (agent generation) - go straight to Step 6 (review customizations)
Time savings: Steps 5-6 compress from 2 days to 4 hours
Quality improvement: Using battle-tested agents, not generating from scratch
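In code, "customize parameters, not logic" can be as simple as a per-customer config layered over template defaults. The keys below are illustrative, not the template's actual schema:

```python
# Hedged sketch: one retail template, per-customer parameters (keys illustrative).
RETAIL_TEMPLATE_DEFAULTS = {
    "dynamics_url": None,          # per-customer Dynamics instance URL
    "visit_frequency_days": 14,    # store visit frequency rules
    "product_categories": ["beverage"],
}

def customize(overrides: dict) -> dict:
    config = {**RETAIL_TEMPLATE_DEFAULTS, **overrides}
    assert config["dynamics_url"], "customer Dynamics URL is required"
    return config

customer_a = customize({"dynamics_url": "https://customer-a.crm.dynamics.com"})
```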
π Capacity Model: 3-Person Team
Weekly Throughput:
| Team Member | Station | Projects/Week |
|---|---|---|
| Team Member A | Station 1 (Discovery) | 3-5 new MVPs defined |
| Team Member B | Station 1 + 2 (Discovery + Agent Gen) | 3-5 MVPs + 5-8 code reviews |
| Team Member C | Station 3 (Deploy/Demo) | 8-10 deployments + demos |
Pipeline Visualization:
Week 1:
- Team Members A/B start 8 new projects (Step 1-4)
- Team Member B reviews 5 agent codebases (Step 6)
- Team Member C deploys 10 prototypes, records 10 demos (Step 7-8)
- 15 projects in production phase (Step 11-14, customer-managed)
Week 2:
- 8 projects from Week 1 move to Team Member B for agent generation
- Team Members A/B start 8 MORE new projects
- Team Member C deploys 8 prototypes from previous week
- 23 projects in production phase
Week 3:
- Steady state: 8 in Discovery, 8 in Development, 8 in Deployment, 25+ in Production
- Total: 49 active projects across all stages
Steady-state capacity: 20-25 projects in active development (Steps 1-10), 50+ total projects including production
π§ Infrastructure Scaling Requirements
Azure Resources (per project):
- Dev/Prototype: 1 Function App, 1 File Storage account (Hot tier), Application Insights
- Production: Customer's Azure subscription (not our cost)
- Cost per prototype: ~$50-100/month while active, delete after Step 10
Cost Optimization:
Automated cleanup script:
# Run weekly: delete prototype environments not used in >30 days
# (the cutoff date below is an example - compute it from today's date)
az functionapp list \
  --query "[?tags.Stage=='Prototype' && tags.LastUsed<'2024-09-15'].[name,resourceGroup]" -o tsv |
while read -r name rg; do az functionapp delete --name "$name" --resource-group "$rg"; done
# Saves: $50-100 per deleted environment
# With 20 projects/month, 10 completing → $500-1000/month savings
π‘ The Secret to Infinite Scale
Stateless architecture + Modular agents = No shared state between projects
Each project is completely independent:
- Separate Azure Function App (no cross-project interference)
- Separate File Storage (agents + user memory isolated)
- Separate Copilot Studio solution (customer-specific branding)
Result: Adding Project #21 is identical to adding Project #1. No complexity explosion.
Scaling capability: "This scales infinitely. This scales in parallel... So you could have infinite agents working say for a book factory using the one endpoint."
β οΈ Warning: Quality Gates Still Required
Don't skip gates for speed: Even at 20+ projects, each project MUST pass all 6 quality gates
Why: One bad MVP that wasn't properly scoped = 3-6 weeks of wasted effort fixing scope creep
Better: Take 30 extra minutes at Gate #2 (Step 4) to lock MVP scope than lose 3 weeks later
Roadmap - EOY 2025
Next release wave priorities and strategic focus areas
π― Strategic Shift: From Prototypes to Production
The strategic imperative is to transform conceptual ideas into production-ready solutions that field teams can deploy immediately. The focus is on creating demos that not only impress in presentations but can also be reliably reproduced and reused across multiple customer engagements to accelerate deal closure.
π₯ Top 3 Priorities (In Order)
1. SPUR Committed Work - Agents & Videos
What: Deliver committed agents and videos for SPUR engagement
Impact: Revenue-generating, already committed to customer
Status to Track: Completion of agent builds and video production
2. Enable Team Members - Documentation & Tools
What: Create "station-based" guide enabling team to work independently on 2+ process stations
Deliverables:
- 14-step process guide (the "Bible") with audit trails
- M365 agents for discovery calls and MVP generation
- Clear input/output documentation for each station
Impact: Team scalability - enable team members to handle 20+ customer engagements concurrently
Key Requirement: "Less code-dependent, more tool-based" - team uses tools, not raw code
3. AI Agents Library Infrastructure
What: Azure environment architecture where field can demo live agents
Deliverables:
- Shared Azure tenant (non-expiring) for all agents
- Live endpoint demos accessible by entire field
- No per-demo deployment needed
Impact: Field enablement at scale - eliminate deployment friction for demos
β‘ Specific Action Items
Immediate (Next 1-2 Weeks)
| Action | Description | Owner |
|---|---|---|
| Hardware Procurement | Procure 2TB external drive for video storage (via Juke Room Store or procurement) | Infrastructure Team |
| Video Automation | Implement speech-to-text solution for audio timing (double API call for duration data) | Automation Team |
| PII Gate | Create review process and tagging system (PII/PII-removed/Public-ready) for agent library | Security/Compliance |
Short-term (This Cycle)
| Action | Description | Owner |
|---|---|---|
| Documentation | Complete station-based guide with examples and prompts | Documentation Team |
| Agent Deployment | Simplify ARM scripts for one-click deployment | Technical Team |
| Library Architecture | Design structure for field-accessible agent demos with live endpoints | Architecture Team |
π Success Metrics
Team Independence
Metric: Number of stations team members can operate independently
Target: 2+ stations by end of cycle
Process Efficiency
Metric: Time reduction from manual process to tool-based process
Target: 50% reduction in manual steps
Field Adoption
Metric: Number of demos running from shared Azure environment
Target: 10+ field-accessible demos
Customer Conversion
Metric: Demos converted to deployed agents
Target: 30% conversion rate
π― Connect Format Alignment
Results You Deliver:
- Business Impact: Accelerating deals through reproducible AI agent demos
- Team Impact: Enabling team members to scale from individual work to managing 20+ concurrent engagements
- Customer Impact: Moving from "amazing demos" to deployed, production-ready solutions
How You Deliver:
- From Research to Production: Grounding prototypes into field-usable tools
- Incremental Delivery: "Don't wait for the whole project" - ship station 1 & 2, then expand
- Team Enablement: Growth mindset - building framework that helps others succeed, not just solo innovation
π Security & Governance Goals
- PII Detection & Tagging: Implement systematic PII detection and classification for agent library
- Governance Gates: Establish review gates before public consumption of agents
- Data Handling Best Practices: Document security protocols in team documentation
- Compliance Alignment: Ensure all agents meet enterprise security standards before field deployment
π‘ Pro Tip for Connect
When writing your Connect, emphasize the shift from "200 amazing prototypes" to "2 production-ready tools the field can use tomorrow." This directly addresses the strategic feedback about moving from ideation to realization.
Tool Browser
Interactive tools for project management and agent development
π οΈ Available Tools
These self-contained HTML tools help you manage projects, track implementations, and create demonstrations. Each tool runs entirely in your browser with local-first data storage.
Project Tracker
Local-First Data Storage
Track customer projects, manage AI agent implementations, and maintain detailed records of your RAPP workflow. Features include project timelines, agent libraries, and export capabilities.
Chat Animation Studio
Demo Creation Tool
Create engaging animated chat demonstrations for your AI agents. Design realistic conversation flows, customize timing, and export professional demo videos for customer presentations.
π About These Tools
These are local-first HTML tools that run entirely in your browser. Your data is stored locally using browser localStorage, ensuring privacy and offline functionality.
- No Server Required: All tools work offline after initial load
- Data Privacy: Your data never leaves your browser
- Export Capabilities: Download your data as JSON or formatted documents
- Shareable: Open these HTML files in any modern browser
π§ Tool Development
Need a custom tool for your workflow? These tools are built as standalone HTML files that can be easily modified or extended. Check the tools/ directory in the repository for source code and templates.
π‘ Creating Your Own Tools
All tools follow a simple pattern:
- Single HTML file with embedded CSS and JavaScript
- Local storage for data persistence
- Export/import functionality for data portability
- Responsive design for mobile and desktop use