# Integration Testing

This document describes APM's integration testing strategy to ensure runtime setup scripts work correctly and the golden scenario from the README functions as expected.
## Testing Strategy

APM uses a tiered approach to integration testing:
### 1. Smoke Tests (Every CI run)

- Location: `tests/integration/test_runtime_smoke.py`
- Purpose: Fast verification that runtime setup scripts work
- Scope:
  - Runtime installation (codex, llm)
  - Binary functionality (`--version`, `--help`)
  - APM runtime detection
  - Workflow compilation without execution
- Duration: ~2-3 minutes per platform
- Trigger: Every push/PR
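The checks above can be sketched as a minimal smoke-test helper. This is a hypothetical illustration, not the contents of `tests/integration/test_runtime_smoke.py`; the helper names are made up:

```python
# Hypothetical sketch of a runtime smoke check; the real tests in
# tests/integration/test_runtime_smoke.py may be structured differently.
import shutil
import subprocess

RUNTIMES = ["codex", "llm"]  # runtimes the smoke tests install

def runtime_on_path(binary: str) -> bool:
    """Runtime installation check: the binary must be discoverable on PATH."""
    return shutil.which(binary) is not None

def runtime_answers(binary: str, flag: str = "--version") -> bool:
    """Binary functionality check: `<binary> --version` (or --help) must exit 0."""
    try:
        result = subprocess.run([binary, flag], capture_output=True, timeout=30)
    except (FileNotFoundError, subprocess.TimeoutExpired):
        return False
    return result.returncode == 0
```

A real smoke test would assert both helpers for every entry in `RUNTIMES` after running the setup scripts, without ever executing a workflow.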
### 2. End-to-End Golden Scenario Tests (Releases only)

- Location: `tests/integration/test_golden_scenario_e2e.py`
- Purpose: Complete verification of the README golden scenario
- Scope:
  - Full runtime setup and configuration
  - Project initialization (`apm init`)
  - Dependency installation (`apm install`)
  - Real API calls to GitHub Models
  - Both Codex and LLM runtime execution
- Duration: ~10-15 minutes per platform (with a 20-minute timeout)
- Trigger: Only on version tags (releases)
## Running Tests Locally

### Smoke Tests

```bash
# Run all smoke tests
pytest tests/integration/test_runtime_smoke.py -v

# Run specific test
pytest tests/integration/test_runtime_smoke.py::TestRuntimeSmoke::test_codex_runtime_setup -v
```

### E2E Tests

#### Option 1: Complete CI Process Simulation (Recommended)

```bash
export GITHUB_TOKEN=your_token_here
./scripts/test-integration.sh
```

This unified script (`scripts/test-integration.sh`) automatically adapts to your environment:
**Local mode** (no existing binary):

1. **Builds binary** with PyInstaller (like CI build job)
2. **Sets up symlink and PATH** (like CI artifacts download)
3. **Installs runtimes** (codex/llm setup)
4. **Installs test dependencies** (like CI test setup)
5. **Runs integration tests** with the built binary (like CI integration-tests job)
**CI mode** (binary exists in `./dist/`):

1. **Uses existing binary** from CI build artifacts
2. **Sets up symlink and PATH** (standard CI process)
3. **Installs runtimes** (codex/llm setup)
4. **Installs test dependencies** (like CI test setup)
5. **Runs E2E tests** with the pre-built binary
#### Option 2: Direct pytest execution

```bash
# Set up environment
export APM_E2E_TESTS=1
export GITHUB_TOKEN=your_github_token_here
export GITHUB_MODELS_KEY=your_github_token_here # LLM runtime expects this specific env var

# Run E2E tests
pytest tests/integration/test_golden_scenario_e2e.py -v -s

# Run specific E2E test
pytest tests/integration/test_golden_scenario_e2e.py::TestGoldenScenarioE2E::test_complete_golden_scenario_codex -v -s
```

Note: Both `GITHUB_TOKEN` and `GITHUB_MODELS_KEY` should contain the same GitHub token value, but different runtimes expect different environment variable names.
## CI/CD Integration

### GitHub Actions Workflow

On every push/PR:
- Unit tests + Smoke tests (runtime installation verification)
On version tag releases:
- Unit tests + Smoke tests
- Build binaries (cross-platform)
- E2E golden scenario tests (using built binaries)
- Create GitHub Release
- Publish to PyPI
- Update Homebrew Formula
Manual workflow dispatch:
- Test builds (uploads as workflow artifacts)
- Allows testing the full build pipeline without creating a release
- Useful for validating changes before tagging
### GitHub Actions Authentication

E2E tests require proper GitHub Models API access:

Required permissions:

- `contents: read` - for repository access
- `models: read` - required for GitHub Models API access

Environment variables:

```yaml
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}       # for Codex runtime
GITHUB_MODELS_KEY: ${{ secrets.GITHUB_TOKEN }}  # for LLM runtime (expects different env var name)
```

Both runtimes authenticate against GitHub Models but expect different environment variable names.
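Because both variables carry the same token, test setup can simply mirror whichever one is set into the other. A hedged sketch; the helper name is invented for illustration:

```python
# Illustrative helper: both runtimes read the same GitHub token, just under
# different variable names, so setup can mirror whichever one is present.
def mirror_models_token(env: dict) -> dict:
    """Return a copy of env where GITHUB_TOKEN and GITHUB_MODELS_KEY agree."""
    env = dict(env)
    token = env.get("GITHUB_TOKEN") or env.get("GITHUB_MODELS_KEY")
    if token:
        env.setdefault("GITHUB_TOKEN", token)       # Codex runtime reads this
        env.setdefault("GITHUB_MODELS_KEY", token)  # LLM runtime reads this
    return env
```

Working on a plain dict (rather than mutating `os.environ`) keeps the helper easy to test and side-effect free.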
### Release Pipeline Sequencing

The workflow ensures quality gates at each step:
1. `test` job - Unit tests + smoke tests (all platforms)
2. `build` job - Binary compilation (depends on test success)
3. `integration-tests` job - Comprehensive runtime scenarios (depends on build success)
4. `create-release` job - GitHub release creation (depends on integration-tests success)
5. `publish-pypi` job - PyPI package publication (depends on release creation)
6. `update-homebrew` job - Homebrew formula update (depends on PyPI publication)
Each stage must succeed before proceeding to the next, ensuring only fully validated releases reach users.
## Test Matrix

All integration tests run on:
- Linux: ubuntu-24.04 (x86_64)
- macOS Intel: macos-13 (x86_64)
- macOS Apple Silicon: macos-14 (arm64)
Python version: 3.12 (standardized across all environments)

Package manager: uv (for fast dependency management and virtual environments)
## What the Tests Verify

### Smoke Tests Verify:

- ✅ Runtime setup scripts execute successfully
- ✅ Binaries are downloaded and installed correctly
- ✅ Binaries respond to basic commands
- ✅ APM can detect installed runtimes
- ✅ Configuration files are created properly
- ✅ Workflow compilation works (without execution)
### E2E Tests Verify:

- ✅ Complete golden scenario from README works
- ✅ `apm runtime setup copilot` installs and configures GitHub Copilot CLI
- ✅ `apm runtime setup codex` installs and configures Codex
- ✅ `apm runtime setup llm` installs and configures LLM
- ✅ `apm init my-hello-world` creates project correctly
- ✅ `apm install` handles dependencies
- ✅ `apm run start --param name="Tester"` executes successfully
- ✅ Real API calls to GitHub Models work
- ✅ Parameter substitution works correctly
- ✅ MCP integration functions (GitHub tools)
- ✅ Binary artifacts work across platforms
- ✅ Release pipeline integrity (GitHub Release → PyPI → Homebrew)
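The golden scenario boils down to a fixed sequence of `apm` commands, which an E2E test can drive via subprocess. A simplified sketch; the real test in `tests/integration/test_golden_scenario_e2e.py` additionally handles working directories, timeouts, and output assertions:

```python
# Simplified sketch of driving the golden scenario end to end.
import subprocess

GOLDEN_SCENARIO = [
    ["apm", "runtime", "setup", "codex"],
    ["apm", "init", "my-hello-world"],
    ["apm", "install"],
    ["apm", "run", "start", "--param", "name=Tester"],
]

def run_golden_scenario(runner=subprocess.run):
    """Run each step in order, failing fast on the first nonzero exit."""
    for cmd in GOLDEN_SCENARIO:
        result = runner(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"step failed: {' '.join(cmd)}")
```

Injecting `runner` keeps the sequencing logic testable even without the `apm` binary installed.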
## Benefits

### Speed vs Confidence Balance

- Smoke tests: Fast feedback (2-3 min) on every change
- E2E tests: High confidence (15 min) only when shipping
### Cost Efficiency

- Smoke tests use no API credits
- E2E tests only run on releases (minimizing API usage)
- Manual workflow dispatch for test builds without publishing
### Platform Coverage

- Tests run on all supported platforms
- Catches platform-specific runtime issues
### Release Confidence

- E2E tests must pass before any publishing steps
- Multi-stage release pipeline ensures quality gates
- Guarantees shipped releases work end-to-end
- Users can trust the README golden scenario
- Cross-platform binary verification
- Automatic Homebrew formula updates
## Debugging Test Failures

### Smoke Test Failures

- Check runtime setup script output
- Verify platform compatibility
- Check network connectivity for downloads
### E2E Test Failures

- Use the unified integration script first: run `./scripts/test-integration.sh` to reproduce the exact CI environment locally
- Verify `GITHUB_TOKEN` has the required permissions (`models: read`)
- Ensure both `GITHUB_TOKEN` and `GITHUB_MODELS_KEY` environment variables are set
- Check GitHub Models API availability
- Review actual vs expected output
- Test locally with the same environment
- For hanging issues: check command transformation in the script runner (codex expects prompt content, not file paths)
## Adding New Tests

### For New Runtime Support:

- Add smoke test for runtime setup
- Add E2E test for golden scenario with new runtime
- Update the CI matrix if adding new platform support
### For New Features:

- Add smoke test for compilation/validation
- Add E2E test if feature requires API calls
- Keep tests focused and fast
This testing strategy ensures we ship with confidence while maintaining fast development cycles.