AI Artifacts Common Standards

This document defines shared standards, conventions, and quality gates that apply to all AI artifact contributions to hve-core (agents, prompts, and instructions files).

Agents Not Accepted

The following agent types will likely be rejected or closed automatically because equivalent agents already exist in hve-core:

Duplicate Agent Categories

Research or Discovery Agents

Agents that search for, gather, or discover information.

  • ❌ Reason: Existing agents already handle research and discovery workflows
  • ✅ Alternative: Use existing research-focused agents in .github/agents/

Indexing or Referencing Agents

Agents that catalog, index, or create references to existing projects.

  • ❌ Reason: Existing agents already provide indexing and referencing capabilities
  • ❌ Tool integration: existing hve-core agents already use the widely supported tools built into VS Code GitHub Copilot and broadly adopted MCP tools
  • ✅ Alternative: Use existing reference management agents that use standard VS Code GitHub Copilot tools and widely-adopted MCP tools

Planning Agents

Agents that plan work, break down tasks, or organize backlog items.

  • ❌ Reason: Existing agents already handle work planning and task organization
  • ✅ Alternative: Use existing planning-focused agents in .github/agents/

Implementation Agents

General-purpose coding agents that implement features.

  • ❌ Reason: Existing agents already provide implementation guidance
  • ✅ Alternative: Use existing implementation-focused agents

Rationale for Rejection

These agent types are rejected because:

  1. Existing agents are hardened and heavily used: the hve-core library already contains production-tested agents in these categories
  2. Consistency and maintenance: coalescing around existing agents reduces fragmentation and maintenance burden
  3. Avoid duplication: multiple agents serving the same purpose create confusion and divergent behavior
  4. Standard tooling already integrated: VS Code GitHub Copilot built-in tools and widely-adopted MCP tools are already used by existing agents

Before Submitting

When planning to submit an agent that falls into these categories:

  1. Question necessity: does your use case truly require a new agent, or can existing agents meet your needs?
  2. Review existing agents: examine .github/agents/ to identify agents that already serve your purpose
  3. Check tool integration: verify whether the VS Code GitHub Copilot tools or MCP tools you need are already used by existing agents
  4. Consider enhancement over creation: if existing agents don't fully meet your requirements, evaluate whether your changes are generic enough to benefit all users and valuable enough to justify modifying the existing agent
  5. Propose enhancements: submit a PR to enhance an existing agent rather than creating a duplicate

What Makes a Good New Agent

Focus on agents that:

| Criterion | Description |
| --- | --- |
| Fill gaps | Address use cases not covered by existing agents |
| Provide unique value | Offer specialized domain expertise or workflow patterns not present in the library |
| Are non-overlapping | Have clearly distinct purposes from existing agents |
| Cannot be merged | Represent functionality too specialized or divergent to integrate into existing agents |
| Use standard tooling | Use widely-supported VS Code GitHub Copilot tools and MCP tools rather than custom integrations |

Model Version Requirements

All AI artifacts (agents, instructions, prompts) MUST target the latest available models from Anthropic and OpenAI only.

Accepted Models

| Provider | Models |
| --- | --- |
| Anthropic | Latest Claude models (e.g., Claude Sonnet 4, Claude Opus 4) |
| OpenAI | Latest GPT models (e.g., GPT-5, GPT-5.1-Codex) |

Not Accepted

  • ❌ Older model versions (e.g., GPT-4o, Claude 3.5 Sonnet)
  • ❌ Models from other providers
  • ❌ Custom or fine-tuned models
  • ❌ Deprecated model versions

Rationale

  1. Feature parity: latest models support the most advanced features and capabilities
  2. Maintenance burden: supporting multiple model versions creates testing and compatibility overhead
  3. Performance: latest models provide superior reasoning, accuracy, and efficiency
  4. Future-proofing: older models will be deprecated and removed from service

Collections

Collection manifests in collections/*.collection.yml are the source of truth for artifact selection and distribution.

Collection Purpose

Collection manifests serve three primary functions:

  1. Selection — determine which artifacts are included in each collection via items[]
  2. Maturity filtering — control channel inclusion with items[].maturity (defaults to stable)
  3. Packaging inputs — provide canonical manifest data used by build and distribution flows
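The selection and default-maturity behavior described above can be sketched in Python. This is an illustrative sketch, not the repository's actual build code; the function name `effective_items` is hypothetical:

```python
def effective_items(manifest: dict) -> list[tuple[str, str]]:
    """Return (path, effective_maturity) pairs for a collection manifest.

    Per the rules above, items[].maturity defaults to 'stable' when omitted.
    """
    results = []
    for item in manifest.get("items", []):
        maturity = item.get("maturity", "stable")
        results.append((item["path"], maturity))
    return results


manifest = {
    "id": "coding-standards",
    "items": [
        {"path": ".github/instructions/coding-standards/python-script.instructions.md",
         "kind": "instruction"},
        {"path": ".github/prompts/hve-core/task-plan.prompt.md",
         "kind": "prompt", "maturity": "preview"},
    ],
}
for path, maturity in effective_items(manifest):
    print(path, maturity)
```

The first item omits `maturity` and therefore resolves to `stable`; the second is explicitly `preview`.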

Collection Structure

Each manifest contains top-level collection metadata and an items array:

```yaml
id: coding-standards
name: Coding Standards
description: Language-specific coding instructions
tags:
  - coding-standards
  - bash
  - python
items:
  - path: .github/instructions/coding-standards/python-script.instructions.md
    kind: instruction
    maturity: stable
  - path: .github/prompts/hve-core/task-plan.prompt.md
    kind: prompt
    maturity: preview
```

Collection Tags

Each collection manifest declares a top-level tags array for categorization and discoverability. Tags exist only at the collection level, not on individual items.

| Collection | Tags |
| --- | --- |
| hve-core-all | hve, complete, bundle |
| ado | azure-devops, ado, work-items, builds, pull-requests |
| coding-standards | coding-standards, code-review, pull-request, quality, bash, bicep, csharp, powershell, python, rust, terraform, uv |
| data-science | data, jupyter, streamlit, dashboards, visualization, data-science |
| design-thinking | design-thinking, coaching, methodology, ux |
| experimental | experimental, media, utilities |
| github | github, issues, backlog, triage, sprint |
| installer | installer, setup, onboarding |
| project-planning | documentation, architecture, adr, brd, prd, diagrams, planning |
| hve-core | workflow, rpi, planning, research, implementation, review |
| security-planning | security, incident-response, risk, planning |

When creating a new collection, choose tags that describe the domain, technologies, and workflows covered. Use lowercase kebab-case and prefer existing tags before introducing new ones.

Collection Item Format

Each items[] entry follows this structure:

```yaml
- path: .github/agents/hve-core/rpi-agent.agent.md
  kind: agent
  maturity: stable
```

| Field | Required | Description |
| --- | --- | --- |
| path | Yes | Repository-relative path to the artifact source |
| kind | Yes | Artifact type (agent, prompt, instruction, skill, or hook) |
| maturity | No | Release readiness level; when omitted, effective maturity defaults to stable |

Adding Artifacts to a Collection

When contributing a new artifact:

  1. Create the artifact file in the appropriate directory
  2. Add a matching items[] entry in one or more collections/*.collection.yml files
  3. Set maturity when the artifact should be preview, experimental, or deprecated
  4. Update the collection's tags array if your artifact introduces a new technology or domain not yet represented
  5. Run npm run lint:yaml to validate manifest syntax and schema compliance
  6. Run npm run plugin:validate to validate collection manifests
  7. Run npm run plugin:generate to regenerate plugin directories

Repo-Specific Artifact Exclusion

Artifacts placed at the root of .github/agents/, .github/instructions/, .github/prompts/, or .github/skills/ (without a subdirectory) are repo-specific and MUST NOT be added to collection manifests. These files govern internal repository concerns (CI/CD workflows, repo-specific conventions) that do not apply outside this repository.

Deprecated Artifact Placement

Artifacts that have been superseded or are scheduled for removal MUST be moved to .github/deprecated/{type}/ (e.g., .github/deprecated/agents/, .github/deprecated/prompts/). The build system automatically excludes this subtree from collection manifests, plugin generation, and extension packaging.

When to Move an Artifact to Deprecated

  • A newer artifact fully replaces the existing one
  • The artifact is no longer maintained or tested
  • The artifact targets a retired platform or workflow

How to Deprecate an Artifact

  1. Move the file with git mv to preserve history: git mv .github/agents/{collection}/old.agent.md .github/deprecated/agents/old.agent.md
  2. Add a note in the deprecated file's frontmatter or body identifying its replacement
  3. Remove the artifact's entry from all collections/*.collection.yml files
  4. Run npm run plugin:generate to regenerate plugin outputs
  5. Update any documentation that references the old artifact path

Exclusion Scope

Artifacts at the root of .github/agents/, .github/instructions/, .github/prompts/, or .github/skills/ are excluded from:

  • Collection manifests (collections/*.collection.yml items)
  • Plugin generation (plugins/ directory contents)
  • Extension packaging and distribution
  • Collection builds and bundles
  • Artifact selection for published releases

Validation Enforcement

The plugin generation and validation tooling actively enforces this exclusion:

  • Collection validation fails if root-level repo-specific paths appear in items[]
  • Plugin generation skips root-level artifacts
  • Extension packaging filters out these files during build

Placement Guidelines

| Scope | Location | Included in Plugins |
| --- | --- | --- |
| Repository-specific | .github/instructions/ (root, no subdirectory) | ❌ No |
| Collection-scoped | .github/instructions/{collection-id}/ (by convention) | ✅ Yes |
| Language/tech-specific | .github/instructions/coding-standards/{language}/ | ✅ Yes |
| Shared cross-collection | .github/instructions/shared/ | ✅ Yes |

If your instructions apply only to this repository and are not intended for distribution to consumers, place them at the root of .github/instructions/. Otherwise, by convention, place them in .github/instructions/{collection-id}/ or a language-specific subdirectory under coding-standards/ (e.g., coding-standards/csharp/, coding-standards/bash/). Shared cross-collection artifacts go in .github/instructions/shared/.
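The root-versus-subdirectory distinction is mechanical, so it can be sketched as a small path check. This is an illustrative sketch, assuming POSIX-style repository paths; the function name `is_repo_specific` is hypothetical, not part of the actual validation tooling:

```python
from pathlib import PurePosixPath

def is_repo_specific(path: str) -> bool:
    """True when an artifact sits at the root of a .github artifact directory
    (no subdirectory), meaning it MUST NOT appear in collection manifests."""
    parts = PurePosixPath(path).parts
    artifact_dirs = {"agents", "instructions", "prompts", "skills"}
    # e.g. ('.github', 'instructions', 'ci.instructions.md') -> root-level
    return (len(parts) == 3
            and parts[0] == ".github"
            and parts[1] in artifact_dirs)

print(is_repo_specific(".github/instructions/ci-workflow.instructions.md"))           # True
print(is_repo_specific(".github/instructions/coding-standards/bash.instructions.md")) # False
```

Any path with a subdirectory under the artifact type (collection-scoped, language-specific, or shared) is eligible for collection manifests; only the three-part root paths are excluded.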

Collection Taxonomy

Collections represent role-targeted artifact packages for HVE-Core artifacts. The collection system enables role-specific artifact distribution without fragmenting the codebase.

Defined Collections

| Collection | Identifier | Description |
| --- | --- | --- |
| All | hve-core-all | Full bundle of all stable HVE Core agents, prompts, instructions, and skills |
| Azure DevOps | ado | Azure DevOps work item management, build monitoring, and pull request creation |
| Coding Standards | coding-standards | Language-specific coding instructions for bash, Bicep, C#, PowerShell, Python, and Terraform |
| Data Science | data-science | Data specification generation, Jupyter notebooks, and Streamlit dashboards |
| Design Thinking | design-thinking | Design Thinking coaching identity, quality constraints, and methodology instructions |
| Experimental | experimental | Experimental skills and utilities in early development |
| GitHub Backlog | github | GitHub issue discovery, triage, sprint planning, and backlog execution |
| Installer | installer | HVE Core installation and environment setup |
| Project Planning | project-planning | PRDs, BRDs, ADRs, architecture diagrams, and documentation operations |
| HVE Core Workflow | hve-core | Research, Plan, Implement, Review workflow agents and prompts |
| Security Planning | security-planning | Security plan creation, incident response, and risk assessment |

Collection Assignment Guidelines

When assigning collections to artifacts:

  • For universal artifacts, include hve-core-all plus any role-specific collections that particularly benefit
  • Include only the relevant collections for role-specific artifacts (omit hve-core-all for highly specialized artifacts)
  • Assign cross-cutting tools like RPI workflow artifacts (task-researcher, task-planner) to multiple relevant collections

Example Collection Assignments

Adding an artifact to multiple collections means adding its items[] entry in each relevant collections/*.collection.yml:

```yaml
# In collections/hve-core-all.collection.yml - Universal
- path: .github/instructions/hve-core/markdown.instructions.md
  kind: instruction

# In collections/coding-standards.collection.yml - Coding standards
- path: .github/instructions/hve-core/markdown.instructions.md
  kind: instruction

# In collections/hve-core.collection.yml - Core workflow
- path: .github/agents/hve-core/rpi-agent.agent.md
  kind: agent
```

Selecting Collections for New Artifacts

Answer these questions when determining collection assignments:

  1. Who is the primary user? Identify the main role that benefits from this artifact
  2. Who else benefits? Consider secondary roles that may find value
  3. Is it foundational? Core workflow artifacts should include multiple collections
  4. Is it specialized? Domain-specific artifacts may target fewer collections

When in doubt, include hve-core-all to ensure the artifact appears in the full collection while still enabling targeted distribution.

Extension Packaging

Collections are consumed during VS Code Extension packaging to determine which artifacts are included in stable and pre-release extension channels.

Agent Handoff Dependencies

During VS Code Extension packaging, agent handoff dependencies are automatically resolved to ensure UI navigation buttons work correctly.

How Handoff Resolution Works

The extension packaging process (scripts/extension/Prepare-Extension.ps1) includes the Resolve-HandoffDependencies function:

  1. Seed agents: Starts with agents listed in the collection manifest
  2. Parse frontmatter: Reads the handoffs field from each agent's frontmatter
  3. BFS traversal: Performs breadth-first search to find all reachable agents through handoff chains
  4. Include all: Adds all discovered agents to the extension package
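The steps above amount to a reachability computation over the handoff graph. The actual implementation is the PowerShell function `Resolve-HandoffDependencies`; the following is a minimal Python sketch of the same BFS traversal, with illustrative agent names:

```python
from collections import deque

def resolve_handoffs(seed_agents, handoffs_by_agent):
    """Breadth-first traversal over agent handoff declarations.

    handoffs_by_agent maps an agent to the list of agents named in its
    frontmatter `handoffs` field. Returns every reachable agent.
    """
    included = set(seed_agents)
    queue = deque(seed_agents)
    while queue:
        agent = queue.popleft()
        for target in handoffs_by_agent.get(agent, []):
            if target not in included:
                included.add(target)
                queue.append(target)
    return included

# Hypothetical handoff chains for illustration
handoffs = {
    "rpi-agent": ["task-researcher", "task-planner"],
    "task-planner": ["task-implementer"],
}
print(sorted(resolve_handoffs(["rpi-agent"], handoffs)))
# ['rpi-agent', 'task-implementer', 'task-planner', 'task-researcher']
```

Because the traversal is transitive, an agent two handoffs away from a seed agent is still packaged, which is what keeps the UI navigation buttons working.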

Collection Manifests and Dependencies

Collection manifests do NOT declare dependencies. They only specify:

  • path: Repository-relative path to the artifact
  • kind: Artifact type (agent, prompt, instruction, skill, hook)
  • maturity: Release readiness level (optional, defaults to stable)

Dependencies are resolved through agent frontmatter handoffs declarations during extension packaging, not through collection manifest fields.

Creating Artifacts with Dependencies

When creating artifacts that reference other artifacts:

| Guideline | Description |
| --- | --- |
| Agent handoffs | Use the handoffs frontmatter field in agents to declare UI navigation buttons |
| Document relationships | Clearly describe dependencies in artifact documentation |
| Test in isolation | Verify your artifact works when only its collection is installed |
| Keep coupling minimal | Avoid unnecessary dependencies between artifacts |

For agent handoff configuration details, see Contributing Custom Agents - Frontmatter Requirements.

Maturity Field Requirements

Maturity is defined in collections/*.collection.yml under items[].maturity and MUST NOT appear in artifact frontmatter.

Purpose

The maturity field controls which extension channel includes the artifact:

| Channel | Description |
| --- | --- |
| Stable channel | Only artifacts with maturity: stable |
| Pre-release channel | Artifacts with stable, preview, or experimental maturity |

Valid Values

| Value | Description | Stable Channel | Pre-release Channel |
| --- | --- | --- | --- |
| stable | Production-ready, fully tested | ✅ Included | ✅ Included |
| preview | Feature-complete, may have rough edges | ❌ Excluded | ✅ Included |
| experimental | Early development, may change significantly | ❌ Excluded | ✅ Included |
| deprecated | Scheduled for removal | ❌ Excluded | ❌ Excluded |

When items[].maturity is omitted, the effective maturity defaults to stable.

Default for New Contributions

New collection items SHOULD use maturity: stable unless:

  • The artifact is a proof-of-concept or experimental feature
  • The artifact requires additional testing or feedback before wide release
  • The contributor explicitly intends to target early adopters

Setting Maturity

Add or update the maturity value on each collection item in collections/*.collection.yml:

```yaml
items:
  # path can reference artifacts from any subfolder
  - path: .github/agents/{collection-id}/example.agent.md
    kind: agent
    maturity: stable
```

For detailed channel and lifecycle information, see Release Process - Extension Channels.

Before submitting: Verify your artifact targets the current latest model versions from Anthropic or OpenAI. Contributions targeting older or alternative models will be automatically rejected.

Plugin Generation

The plugins/ directory contains auto-generated plugin bundles created from collection manifests for use with GitHub Copilot CLI. These plugin directories are outputs of the build process and MUST NOT be edited directly.

Generation Workflow

When you add an artifact to a collection manifest:

  1. Author artifact: Create your agent, prompt, instruction, or skill in .github/
  2. Update collection: Add an items[] entry to one or more collections/*.collection.yml files
  3. Validate collections: Run npm run plugin:validate to check manifest correctness
  4. Generate plugins: Run npm run plugin:generate to regenerate all plugin directories
  5. Commit both: Commit the source artifact, collection manifest updates, AND generated plugin outputs together

Plugin Directory Structure

Each generated plugin directory contains:

| Content | Description |
| --- | --- |
| Symlinked artifacts | Direct symlinks to source files in .github/ (preserves single source of truth) |
| Generated README | Auto-generated documentation listing all included artifacts |
| Plugin manifest | plugin.json file for GitHub Copilot CLI plugin system |
| Marketplace metadata | Aggregated data for extension distribution |

Critical Rules for Plugin Files

WARNING

Files under plugins/ are generated outputs and MUST NOT be edited directly.

| Rule | Description |
| --- | --- |
| Regenerate after changes | Always run npm run plugin:generate after modifying collection manifests or artifacts |
| Symlinked files | Markdown artifacts are symlinked, so edits to plugin files modify source artifacts |
| Generated files | README and JSON files are generated fresh on each run |
| Durable edits | Direct edits to plugin files will be overwritten or cause conflicts |
| Source of truth | Always edit the source artifact in .github/, not the plugin copy |

When to Regenerate Plugins

Run npm run plugin:generate whenever you:

  • Add a new artifact to a collection manifest
  • Remove an artifact from a collection manifest
  • Modify artifact frontmatter (description, dependencies, handoffs)
  • Update artifact file content that affects generated README documentation
  • Change collection manifest metadata (tags, description, name)
  • Update the hve-core-all collection (auto-updated during generation)

Validating Collection Manifests

Before generating plugins, validate collection YAML files to catch errors early:

```bash
npm run plugin:validate
```

This command checks:

| Check | Description |
| --- | --- |
| YAML syntax | Valid YAML structure and formatting |
| Required fields | Presence of id, name, description, items |
| Path references | All artifact paths exist and are accessible |
| Kind values | Valid artifact kinds (agent, prompt, instruction, skill, hook) |
| Maturity values | Valid maturity levels (stable, preview, experimental, deprecated) |
| Duplicate paths | No duplicate artifact entries within a collection |
| Root-level exclusions | No repo-specific artifacts from .github/{type}/ root |
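A few of these checks are simple enough to sketch. The following Python is illustrative only (the real validator is the npm script); `validate_items` is a hypothetical name, and it covers just the kind, maturity, and duplicate-path checks:

```python
VALID_KINDS = {"agent", "prompt", "instruction", "skill", "hook"}
VALID_MATURITY = {"stable", "preview", "experimental", "deprecated"}

def validate_items(items):
    """Return a list of error strings for a collection's items[] entries."""
    errors = []
    seen_paths = set()
    for item in items:
        path = item.get("path")
        if not path:
            errors.append("item missing required 'path'")
            continue
        if path in seen_paths:
            errors.append(f"duplicate path: {path}")
        seen_paths.add(path)
        if item.get("kind") not in VALID_KINDS:
            errors.append(f"invalid kind for {path}")
        # An omitted maturity is valid and defaults to 'stable'
        if item.get("maturity", "stable") not in VALID_MATURITY:
            errors.append(f"invalid maturity for {path}")
    return errors
```

A clean manifest yields an empty error list; each problem is reported individually so contributors can fix everything in one pass.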

Always validate before generating plugins:

```bash
# Recommended workflow
npm run plugin:validate   # Validate collections first
npm run plugin:generate   # Then regenerate plugins
```

Validation errors will prevent successful plugin generation, so fixing validation issues first saves time and prevents incomplete plugin outputs.

Plugin Generation Reference

For detailed documentation on the plugin generation system, including:

  • Generation script implementation details
  • Collection validation rules
  • Plugin directory structure specifications
  • Troubleshooting generation errors

See the Plugin Scripts README.

XML-Style Block Standards

All AI artifacts use XML-style HTML comment blocks to wrap examples, schemas, templates, and critical instructions. This enables automated extraction, better navigation, and consistency.

Requirements

| Rule | Description |
| --- | --- |
| Tag naming | Use kebab-case (e.g., `<!-- <example-valid-frontmatter> -->`) |
| Matching pairs | Opening and closing tags MUST match exactly |
| Unique names | Each tag name MUST be unique within the file (no duplicates) |
| Code fence placement | Place code fences inside blocks, never outside |
| Nested blocks | Use 4-backtick outer fence when demonstrating blocks with code fences |
| Single lines | Opening and closing tags on their own lines |

Valid XML-Style Block Structure

````markdown
<!-- <example-configuration> -->
```json
{
  "enabled": true,
  "timeout": 30
}
```
<!-- </example-configuration> -->
````

Demonstrating Blocks with Nested Fences

When showing examples that contain XML blocks with code fences, use 4-backtick outer fence:

````markdown
<!-- <example-bash-script> -->
```bash
#!/bin/bash
echo "Hello World"
```
<!-- </example-bash-script> -->
````

Common Tag Patterns

  • `<!-- <example-*> -->` - Code examples
  • `<!-- <schema-*> -->` - Schema definitions
  • `<!-- <pattern-*> -->` - Coding patterns
  • `<!-- <convention-*> -->` - Convention blocks
  • `<!-- <anti-pattern-*> -->` - Things to avoid
  • `<!-- <reference-sources> -->` - External documentation links
  • `<!-- <validation-checklist> -->` - Validation steps
  • `<!-- <file-structure> -->` - File organization

Common XML Block Issues

Missing Closing Tag

XML-style comment blocks opened but never closed. Always include matching closing tags (`<!-- </block-name> -->`) for all opened blocks.

Duplicate Tag Names

Using the same XML block tag name multiple times in a file. Make each tag name unique (e.g., `<example-python-function>` and `<example-bash-script>` instead of multiple `<example-code>` blocks).
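Both issues can be caught mechanically. The following is a minimal sketch of such a checker, assuming kebab-case tag names as required above; `check_xml_blocks` is an illustrative name, not part of the repository's lint tooling:

```python
import re

# Opening tags: <!-- <name> -->; closing tags: <!-- </name> -->
OPEN = re.compile(r"<!--\s*<([a-z0-9-]+)>\s*-->")
CLOSE = re.compile(r"<!--\s*</([a-z0-9-]+)>\s*-->")

def check_xml_blocks(text):
    """Report unclosed and duplicate XML-style comment block tags."""
    opened = [m.group(1) for m in OPEN.finditer(text)]
    closed = {m.group(1) for m in CLOSE.finditer(text)}
    errors = []
    seen = set()
    for name in opened:
        if name in seen:
            errors.append(f"duplicate tag: {name}")
        seen.add(name)
        if name not in closed:
            errors.append(f"missing closing tag: {name}")
    return errors
```

The `OPEN` pattern cannot match a closing tag because of the `/` after `<`, so each tag is counted on exactly one side.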

Markdown Quality Standards

All AI artifacts MUST follow these markdown quality requirements:

Heading Hierarchy

  • Start with H1 title
  • No skipped levels (H1 → H2 → H3, not H1 → H3)
  • Use H1 for document title only
  • Use H2 for major sections, H3 for subsections
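The hierarchy rules above can be checked with a short script. This is an illustrative sketch assuming ATX-style (`#`) headings, not the repository's actual markdownlint configuration; `heading_issues` is a hypothetical name:

```python
import re

def heading_issues(markdown: str):
    """Flag skipped heading levels (e.g. H1 -> H3) and multiple H1 titles."""
    levels = [len(m.group(1))
              for m in re.finditer(r"^(#{1,6})\s", markdown, re.MULTILINE)]
    issues = []
    if levels.count(1) > 1:
        issues.append("multiple H1 headings")
    for prev, cur in zip(levels, levels[1:]):
        if cur > prev + 1:  # a level may only increase by one at a time
            issues.append(f"skipped level: H{prev} -> H{cur}")
    return issues
```

Decreasing levels (H3 back to H2) are fine; only jumps that skip a level going deeper, or a second H1, are flagged.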

Code Blocks

  • All code blocks MUST have language tags
  • Use proper language identifiers: bash, python, json, yaml, markdown, text, plaintext
  • No naked code blocks without language specification

❌ Bad:

```
code without language tag
```

✅ Good:

```python
def example(): pass
```

URL Formatting

  • No bare URLs in prose
  • Wrap in angle brackets: <https://example.com>
  • Use markdown links: [text](https://example.com)

❌ Bad:

```text
See https://example.com for details.
```

✅ Good:

```text
See <https://example.com> for details.
# OR
See [official documentation](https://example.com) for details.
```

List Formatting

  • Use consistent list markers (prefer * for bullets)
  • Use - for nested lists or alternatives
  • Numbered lists use 1., 2., 3. etc.

Line Length

  • Target ~500 characters per line
  • Exceptions: code blocks, tables, URLs, long technical terms
  • Not a hard limit, but improves readability

Whitespace

  • No hard tabs (use spaces)
  • No trailing whitespace (except 2 spaces for intentional line breaks)
  • File ends with single newline character

File Structure

  • Starts with frontmatter (YAML between --- delimiters)
  • Followed by markdown content
  • Includes attribution in frontmatter description field
  • Single newline at EOF

RFC 2119 Directive Language

Use standardized keywords for clarity and enforceability:

Required Behavior

MUST, WILL, MANDATORY, REQUIRED, and CRITICAL indicate absolute requirements. Non-compliance is a defect.

Example: Required Behavior

```text
All functions MUST include type hints for parameters and return values.
You WILL validate frontmatter before proceeding (MANDATORY).
```

Strong Recommendations

SHOULD and RECOMMENDED indicate best practices. Valid reasons may exist for exceptions, but non-compliance requires justification.

Example: Strong Recommendations

```text
Examples SHOULD be wrapped in XML-style blocks for reusability.
Functions SHOULD include docstrings with parameter descriptions.
```

Optional/Permitted

MAY, OPTIONAL, and CAN indicate permitted but not required behavior. The choice is left to the implementer.

Example: Optional Behavior

```text
You MAY include version fields in frontmatter.
Contributors CAN organize examples by complexity level.
```

Avoid Ambiguous Language

❌ Ambiguous (Never Use):

```text
You might want to validate the input...
It could be helpful to add docstrings...
Perhaps consider wrapping examples...
Try to follow the pattern...
Maybe include tests...
```

✅ Clear (Always Use):

```text
You MUST validate all input before processing.
Functions SHOULD include docstrings.
Examples SHOULD be wrapped in XML-style blocks.
You MAY include additional examples.
```

Common Validation Standards

All AI artifacts are validated using these automated tools:

Validation Commands

Run these commands before submitting:

```bash
# Validate frontmatter against schemas
npm run lint:frontmatter

# Check markdown quality
npm run lint:md

# Spell check
npm run spell-check

# Validate all links
npm run lint:md-links

# PowerShell analysis (if applicable)
npm run lint:ps

# Validate skill structure (if applicable)
npm run validate:skills
```

Quality Gates

All submissions MUST pass:

| Gate | Description |
| --- | --- |
| Frontmatter Schema | Valid YAML with required fields |
| Markdown Linting | No markdown rule violations |
| Spell Check | No spelling errors (or added to dictionary) |
| Link Validation | All links accessible and valid |
| File Format | Correct fences and structure |

Validation Checklist Template

Use this checklist structure in type-specific guides:

```markdown
### Validation Checklist

#### Frontmatter
- [ ] Valid YAML between `---` delimiters
- [ ] All required fields present and valid
- [ ] No trailing whitespace
- [ ] Single newline at EOF

#### Markdown Quality
- [ ] Heading hierarchy correct
- [ ] Code blocks have language tags
- [ ] No bare URLs
- [ ] Consistent list markers

#### XML-Style Blocks
- [ ] All blocks closed properly
- [ ] Unique tag names
- [ ] Code fences inside blocks

#### Technical
- [ ] File references valid
- [ ] External links accessible
- [ ] No conflicts with existing files
```

Common Testing Practices

Before submitting any AI artifact:

1. Manual Testing

  • Execute the artifact manually with realistic scenarios
  • Verify outputs match expectations
  • Check edge cases (missing data, invalid inputs, errors)

2. Example Verification

  • All code examples are syntactically correct
  • Examples run without errors
  • Examples demonstrate intended patterns

3. Tool Validation

  • Specified tools/commands exist and work
  • Tool outputs match documentation
  • Error messages are clear

4. Documentation Review

  • All sections complete and coherent
  • Cross-references valid
  • No contradictory guidance

Common Issues and Fixes

Ambiguous Directives

Using vague, non-committal language that doesn't clearly indicate requirements. Use RFC 2119 keywords (MUST, SHOULD, MAY) to specify clear requirements.

Missing XML Block Closures

XML-style comment blocks opened but never closed. Always include matching closing tags for all XML-style comment blocks.

Code Blocks Without Language Tags

Code blocks missing language identifiers for syntax highlighting. Always specify the language for code blocks (python, bash, json, yaml, markdown, text, plaintext).

Bare URLs

URLs placed directly in text without proper markdown formatting. Wrap URLs in angle brackets (`<https://example.com>`) or use markdown link syntax (`[text](url)`).

Inconsistent List Markers

Mixing different bullet point markers (* and -) in the same list. Use consistent markers throughout (prefer * for bullets, - for nested or alternatives).

Trailing Whitespace

Extra spaces at the end of lines (except intentional 2-space line breaks). Remove all trailing whitespace from lines.

Skipped Heading Levels

Jumping from H1 to H3 without an H2, breaking document hierarchy. Follow proper heading sequence (H1 → H2 → H3) without skipping levels.

Attribution Requirements

All AI artifacts MUST include attribution as a suffix in the frontmatter description field:

```yaml
description: 'Tests prompt files in a sandbox environment - Brought to you by microsoft/hve-core'
```

Format: `- Brought to you by organization/repository-name` appended to the description value.
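This requirement is easy to check or apply programmatically. The following Python is a minimal sketch under the format above; the function names are illustrative, not part of the repository's tooling:

```python
ATTRIBUTION_SUFFIX = " - Brought to you by microsoft/hve-core"

def has_attribution(description: str) -> bool:
    """True when a frontmatter description ends with the required suffix."""
    return description.endswith(ATTRIBUTION_SUFFIX)

def with_attribution(description: str) -> str:
    """Append the suffix when it is missing; leave compliant values alone."""
    return description if has_attribution(description) else description + ATTRIBUTION_SUFFIX
```

Checking the suffix rather than a substring avoids false positives when the phrase appears mid-description.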

Skill files (SKILL.md) additionally include a blockquote attribution footer as the last line of body content:

```markdown
> Brought to you by microsoft/hve-core
```

GitHub Issue Title Conventions

When filing issues against hve-core, use Conventional Commit-style title prefixes that match the repository's commit message format.

Issue Title Format

| Issue Type | Title Prefix | Example |
| --- | --- | --- |
| Bug reports | fix: | fix: validation script fails on Windows paths |
| Agent requests | feat(agents): | feat(agents): add Azure cost analysis agent |
| Prompt requests | feat(prompts): | feat(prompts): add PR description generator |
| Instruction requests | feat(instructions): | feat(instructions): add Go language standards |
| Skill requests | feat(skills): | feat(skills): add diagram generation skill |
| General features | feat: | feat: support multi-root workspaces |
| Documentation | docs: | docs: clarify installation steps |
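A title following these conventions can be validated with a small regex. This is an illustrative sketch covering only the prefixes tabled above; the complete type and scope list lives in commit-message.instructions.md, so the sets below are an assumption:

```python
import re

# Types and scopes are a plausible subset for illustration only
TITLE_RE = re.compile(
    r"^(feat|fix|docs|chore|refactor|test|ci)"       # type
    r"(\((agents|prompts|instructions|skills)\))?"   # optional scope
    r": \S")                                         # colon, space, non-empty subject

def valid_issue_title(title: str) -> bool:
    return bool(TITLE_RE.match(title))
```

An unknown scope such as `feat(widgets):` fails because the optional group does not match and the bare `feat` is then not followed by `: `.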

Benefits

  • Issue titles align with commit and PR title conventions
  • Automated changelog generation works correctly
  • Scopes clearly identify affected artifact categories
  • Consistent formatting across all project tracking

Reference

See commit-message.instructions.md for the complete list of types and scopes.

Getting Help

When contributing AI artifacts:

Review Examples

| Artifact Type | Location |
| --- | --- |
| Agents | Files in .github/agents/{collection-id}/ (the conventional location) |
| Prompts | Files in .github/prompts/{collection-id}/ (the conventional location) |
| Instructions | Files in .github/instructions/{collection-id}/ (the conventional location) |

Check Repository Standards

  • Read .github/copilot-instructions.md for repository-wide conventions
  • Review existing files in same category for patterns
  • Use the prompt-builder agent (prompt-builder.agent.md) for guided assistance

Ask Questions

  • Open draft PR and ask in comments
  • Reference specific validation errors
  • Provide context about your use case

🤖 Crafted with precision by ✨Copilot following brilliant human instruction, then carefully refined by our team of discerning human reviewers.