# Prompts

Prompts are the building blocks of APM — focused, reusable AI instructions that accomplish specific tasks. They follow the `.prompt.md` convention and are distributed as shareable packages.
## How Prompts Work in APM

APM treats prompts as deployable artifacts:
- **Prompts** (`.prompt.md` files) contain AI instructions with parameter placeholders
- **Packages** bundle prompts for sharing via `apm publish` and `apm install`
- **Deployment** places prompts into well-known directories (e.g., `.github/prompts/`) where tools like GitHub Copilot can discover and use them
- **Compilation** resolves parameter placeholders, cross-file references, and link transforms at install time
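As a rough sketch of the compilation idea (this is illustrative only, not APM's actual implementation), resolving `${input:...}` placeholders against known defaults while leaving the rest for runtime might look like:

```python
import re

def compile_prompt(source: str, defaults: dict[str, str]) -> str:
    """Replace ${input:name} placeholders with default values where known.

    Hypothetical illustration of a compile step: placeholders without a
    default are left intact for the runtime to fill in later.
    """
    def resolve(match: re.Match) -> str:
        name = match.group(1)
        return defaults.get(name, match.group(0))  # keep unresolved as-is

    return re.sub(r"\$\{input:(\w+)\}", resolve, source)

compiled = compile_prompt(
    "Analyze logs for ${input:service_name} over ${input:time_window}",
    {"service_name": "checkout-api"},
)
print(compiled)
# Analyze logs for checkout-api over ${input:time_window}
```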
```
# Deployment flow
apm install owner/my-prompt-package
        ↓
APM compiles .prompt.md files (parameter defaults, link resolution)
        ↓
Prompts land in .github/prompts/ for Copilot to discover
```

## What are Prompts?

A prompt is a single-purpose AI instruction stored in a `.prompt.md` file. Prompts are:
- Focused: Each prompt does one thing well
- Reusable: Can be used across multiple scripts
- Parameterized: Accept inputs to customize behavior
- Testable: Easy to run and validate independently
## Prompt File Structure

Prompts follow the VSCode `.prompt.md` convention with YAML frontmatter:

```markdown
---
description: Analyzes application logs to identify errors and patterns
author: DevOps Team
mcp:
  - logs-analyzer
input:
  - service_name
  - time_window
  - log_level
---

# Analyze Application Logs

You are an expert DevOps engineer analyzing application logs to identify issues and patterns.

## Context

- Service: ${input:service_name}
- Time window: ${input:time_window}
- Log level: ${input:log_level}

## Task

1. Retrieve logs for the specified service and time window
2. Identify any ERROR or FATAL level messages
3. Look for patterns in warnings that might indicate emerging issues
4. Summarize findings with:
   - Critical issues requiring immediate attention
   - Trends or patterns worth monitoring
   - Recommended next steps

## Output Format

Provide a structured summary with:

- **Status**: CRITICAL | WARNING | NORMAL
- **Issues Found**: List of specific problems
- **Patterns**: Recurring themes or trends
- **Recommendations**: Suggested actions
```

## Key Components

### YAML Frontmatter
Section titled “YAML Frontmatter”- description: Clear explanation of what the prompt does
- author: Who created/maintains this prompt
- mcp: Required MCP servers for tool access
- input: Parameters the prompt expects
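To make these fields concrete, here is a minimal, hypothetical validator that checks which top-level keys a prompt file declares in its frontmatter (a real implementation would use a proper YAML parser, and the required-key policy shown is an assumption, not an APM rule):

```python
REQUIRED_KEYS = {"description", "input"}  # hypothetical policy, not an APM rule

def frontmatter_keys(prompt_text: str) -> set[str]:
    """Extract top-level keys from the YAML frontmatter block."""
    parts = prompt_text.split("---")
    if len(parts) < 3:
        return set()  # no frontmatter delimiters found
    keys = set()
    for line in parts[1].splitlines():
        # Top-level keys are unindented "key:" lines
        if line and not line.startswith((" ", "-")) and ":" in line:
            keys.add(line.split(":", 1)[0].strip())
    return keys

doc = """---
description: Analyzes application logs
author: DevOps Team
input:
  - service_name
---

# Analyze Application Logs
"""
missing = REQUIRED_KEYS - frontmatter_keys(doc)
print(missing)  # set() -> all required keys are present
```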
### Prompt Body

- Clear instructions: Tell the AI exactly what to do
- Context section: Provide relevant background information
- Input references: Use `${input:parameter_name}` for dynamic values
- Output format: Specify how results should be structured
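The input references in a prompt body can be enumerated with a simple scan, which is handy for checking the body against the declared `input:` list (an illustrative sketch, not APM tooling):

```python
import re

def referenced_inputs(body: str) -> set[str]:
    """Collect every parameter name used via ${input:name} in the body."""
    return set(re.findall(r"\$\{input:(\w+)\}", body))

body = """
- Service: ${input:service_name}
- Time window: ${input:time_window}
"""
declared = {"service_name", "time_window", "log_level"}
unused = declared - referenced_inputs(body)
print(unused)  # {'log_level'} -> declared but never referenced
```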
### Input Parameters

Reference script inputs using the `${input:name}` syntax:

```markdown
## Analysis Target

- Service: ${input:service_name}
- Environment: ${input:environment}
- Start time: ${input:start_time}
```

## MCP Tool Integration (Phase 2 - Coming Soon)

**Note**: MCP integration is planned work. Currently, prompts work with natural language instructions only.
**Future capability**: Prompts will be able to use MCP servers for external tools:

```yaml
---
description: Future MCP-enabled prompt
mcp:
  - kubernetes-mcp   # For cluster access
  - github-mcp       # For repository operations
  - slack-mcp        # For team communication
---
```

**Current workaround**: Use detailed natural language instructions:
```markdown
---
description: Current approach without MCP tools
---

# Kubernetes Analysis

Please analyze the Kubernetes cluster by:

1. Examining the deployment configurations I'll provide
2. Reviewing resource usage patterns
3. Suggesting optimization opportunities

[Include relevant data in the prompt or as context]
```

See MCP Integration for MCP server configuration and usage.
## Writing Effective Prompts

### Be Specific

```markdown
# Good
Analyze the last 24 hours of application logs for service ${input:service_name},
focusing on ERROR and FATAL messages, and identify any patterns that might
indicate performance degradation.

# Avoid
Look at some logs and tell me if there are problems.
```

### Structure Your Instructions

```markdown
## Task

1. First, do this specific thing
2. Then, analyze the results looking for X, Y, and Z
3. Finally, summarize findings in the specified format

## Success Criteria

- All ERROR messages are categorized
- Performance trends are identified
- Clear recommendations are provided
```

### Specify Output Format

```markdown
## Output Format

**Summary**: One-line status
**Critical Issues**: Numbered list of immediate concerns
**Recommendations**: Specific next steps with priority levels
```

## Example Prompts

### Code Review Prompt
```markdown
---
description: Reviews code changes for best practices and potential issues
author: Engineering Team
input:
  - pull_request_url
  - focus_areas
---

# Code Review Assistant

Review the code changes in pull request ${input:pull_request_url} with focus on ${input:focus_areas}.

## Review Criteria

1. **Security**: Check for potential vulnerabilities
2. **Performance**: Identify optimization opportunities
3. **Maintainability**: Assess code clarity and structure
4. **Testing**: Evaluate test coverage and quality

## Output

Provide feedback in standard PR review format with:

- Specific line comments for issues
- Overall assessment score (1-10)
- Required changes vs suggestions
```

### Deployment Health Check
```markdown
---
description: Verifies deployment success and system health
author: Platform Team
mcp:
  - kubernetes-tools
  - monitoring-api
input:
  - service_name
  - deployment_version
---

# Deployment Health Check

Verify the successful deployment of ${input:service_name} version ${input:deployment_version}.

## Health Check Steps

1. Confirm pods are running and ready
2. Check service endpoints are responding
3. Verify metrics show normal operation
4. Test critical user flows

## Success Criteria

- All pods STATUS = Running
- Health endpoint returns 200
- Error rate < 1%
- Response time < 500ms
```

## Running Prompts

Prompts can be executed locally using APM’s experimental agent workflow system.
Define scripts in your `apm.yml` or let APM auto-discover `.prompt.md` files as runnable workflows.
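For illustration, a script entry in `apm.yml` might look something like the following (the field names here are assumptions for the sketch, not the documented schema):

```yaml
# Hypothetical apm.yml fragment -- field names are illustrative
scripts:
  analyze-logs: analyze-logs.prompt.md
```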
See the Agent Workflows guide for setup instructions, runtime configuration, and execution examples.
## Best Practices

### 1. Single Responsibility
Each prompt should do one thing well. Break complex operations into multiple prompts.
### 2. Clear Naming

Use descriptive names that indicate the prompt’s purpose:
```
analyze-performance-metrics.prompt.md
create-incident-ticket.prompt.md
validate-deployment-config.prompt.md
```
### 3. Document Inputs

Always specify what inputs are required and their expected format:
```yaml
input:
  - service_name     # String: name of the service to analyze
  - time_window      # String: time range (e.g., "1h", "24h", "7d")
  - severity_level   # String: minimum log level ("ERROR", "WARN", "INFO")
```

### 4. Version Control
Keep prompts in version control alongside scripts. Use semantic versioning for breaking changes.
## Next Steps

- Learn about Agent Workflows to run prompts locally with AI runtimes
- See CLI Reference for complete command documentation
- Check Development Guide for local development setup