# Testing Guide

Tests are auto-generated for the runtime libraries from `@sample` decorators in TypeSpec. This guide covers running tests and understanding the test architecture.
## Running Tests

```sh
# C#
cd runtime/csharp
dotnet test                        # All tests
dotnet test --filter "ClassName"   # Specific class

# Python
cd runtime/python/agentschema
uv run pytest tests/                    # All tests
uv run pytest tests/test_load_model.py  # Specific file
uv run pytest -v                        # Verbose output

# TypeScript
cd runtime/typescript/agentschema
npm test                     # All tests
npm test -- --grep "Model"   # Filter by name

# Go
cd runtime/go/agentschema
go test ./...              # All tests
go test -run "TestModel"   # Filter by name
go test -v ./...           # Verbose output
```

## Test Architecture

Tests are generated from `@sample` decorators in TypeSpec files. For each model:
```mermaid
flowchart LR
    A["@sample decorator"] --> B[Emitter]
    B --> C[test_load_*.py]
    B --> D[*ConversionTests.cs]
    B --> E[*.test.ts]
    B --> I[*_test.go]
    C --> F[pytest]
    D --> G[dotnet test]
    E --> H[vitest]
    I --> J[go test]
```
### What Gets Tested

- JSON Loading - Parse JSON into a model instance
- YAML Loading - Parse YAML into a model instance
- JSON Roundtrip - Load → Save → Load produces the same data
- YAML Roundtrip - Load → Save → Load produces the same data
- Shorthand Loading - Scalar → Full model (if `@shorthand` is defined)
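The shorthand case can be pictured with a toy model: the generated test checks that a bare scalar expands to the same instance as the full form. The `Model` class, its fields, and its `load` signature below are illustrative, not the actual generated code:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class Model:
    """Toy stand-in for a generated model whose shorthand maps to `id`."""
    id: str
    provider: str = "azure"

    @classmethod
    def load(cls, data: Any) -> "Model":
        if isinstance(data, str):
            # Shorthand: a bare scalar expands to the full model
            return cls(id=data)
        return cls(**data)


# Shorthand and full forms load to equal instances
assert Model.load("gpt-4") == Model.load({"id": "gpt-4", "provider": "azure"})
```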
## Test Context Standardization

All language emitters use a shared `buildBaseTestContext()` function from `test-context.ts` to ensure consistent test generation. This provides:

### Standardized Interface

```ts
interface BaseTestContext {
  node: TypeNode;              // The model being tested
  isAbstract: boolean;         // Skip direct instantiation for polymorphic bases
  package?: string;            // Package/namespace for imports
  examples: TestExample[];     // From @sample decorators
  alternates: AlternateTest[]; // From @shorthand decorators
}
```

### Language-Specific Options

Each language defines options for casing, escaping, and delimiters:
| Language | Key Casing | Boolean Literals | String Delimiters |
|---|---|---|---|
| Python | `snake_case` | `True`/`False` | `"` or `"""` |
| Go | `PascalCase` | `true`/`false` | `"` |
| TypeScript | `camelCase` | `true`/`false` | `"` |
| C# | `PascalCase` | `true`/`false` | `"` or `@"` |
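The key-casing column can be sketched with small conversion helpers. These functions are illustrative; the emitters' actual implementations may differ:

```python
import re


def snake_case(name: str) -> str:
    """lower_snake_case, as used for Python keys: modelProvider -> model_provider."""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()


def pascal_case(name: str) -> str:
    """PascalCase, as used for Go and C# keys: model_provider -> ModelProvider."""
    parts = re.split(r"[_\s]+", name)
    return "".join(p[:1].upper() + p[1:] for p in parts if p)


def camel_case(name: str) -> str:
    """camelCase, as used for TypeScript keys: model_provider -> modelProvider."""
    p = pascal_case(name)
    return p[:1].lower() + p[1:]


assert snake_case("modelProvider") == "model_provider"
assert pascal_case("model_provider") == "ModelProvider"
assert camel_case("model_provider") == "modelProvider"
```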
### Consistent Field Names

All test templates use standardized field names:

- `validations` (not `validation`) - Array of property assertions
- `delimiter` (not `delimeter`) - Quote character for strings
- `scalarType` (not `scalar`) - The scalar type name
- `isOptional` (not `isPointer`) - Whether the property is optional
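As an illustration of how these names fit together, a template might receive validation entries shaped roughly like this. The exact structure is hypothetical; see `test-context.ts` for the real types:

```python
# Hypothetical validation entries using the standardized field names
validations = [
    {"name": "id", "value": "gpt-4", "scalarType": "string",
     "delimiter": '"', "isOptional": False},
    {"name": "count", "value": 42, "scalarType": "int32",
     "delimiter": None, "isOptional": True},
]

for v in validations:
    # String values get wrapped in the configured delimiter when rendered
    if v["delimiter"]:
        rendered = f'{v["delimiter"]}{v["value"]}{v["delimiter"]}'
    else:
        rendered = str(v["value"])
    print(f'{v["name"]} == {rendered}')
```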
## Example Generated Test (Python)

```python
def test_load_json_model():
    json_data = '''
    {
        "id": "gpt-4",
        "provider": "azure"
    }
    '''
    data = json.loads(json_data)
    instance = Model.load(data)
    assert instance is not None
    assert instance.id == "gpt-4"
    assert instance.provider == "azure"


def test_roundtrip_json_model():
    json_data = '''{"id": "gpt-4", "provider": "azure"}'''
    original = json.loads(json_data)
    instance = Model.load(original)
    saved = instance.save()
    reloaded = Model.load(saved)
    assert reloaded.id == instance.id
```

## Adding Test Coverage
### Via @sample Decorators

The best way to add test coverage is through `@sample` decorators:

```tsp
model MyModel {
  @doc("Name of the model")
  @sample(#{ name: "test-model" })    // Generates a test with this value
  @sample(#{ name: "another-model" }) // Multiple samples = multiple tests
  name: string;
}
```

### Test Combinations
Section titled “Test Combinations”When a model has multiple properties with @sample, the emitter generates test combinations:
model Example { @sample(#{ a: "value1" }) @sample(#{ a: "value2" }) a: string;
@sample(#{ b: 1 }) @sample(#{ b: 2 }) b: int32;}// Generates 4 tests: (value1, 1), (value1, 2), (value2, 1), (value2, 2)Manual Test Files
Section titled “Manual Test Files”Some tests are not auto-generated and can be edited:
| Runtime | Location | Notes |
|---|---|---|
| C# | `runtime/csharp/AgentSchema.Tests/` | Project files are preserved |
| Python | `runtime/python/agentschema/tests/` | `conftest.py` is preserved |
| TypeScript | `runtime/typescript/agentschema/tests/` | Config files are preserved |
| Go | `runtime/go/agentschema/` | `go.mod` is preserved |
## Test Validation Patterns

### Property Assertions

Tests validate scalar properties from `@sample` values:

```python
# Python
assert instance.name == "expected-value"
assert instance.count == 42
assert instance.enabled == True
```

```ts
// TypeScript
expect(instance.name).toEqual("expected-value");
```

```csharp
// C#
Assert.Equal("expected-value", instance.Name);
```

```go
// Go
if instance.Name != "expected-value" {
    t.Errorf("Expected name to be 'expected-value', got %v", instance.Name)
}
```

### Type Checks
Section titled “Type Checks”# Pythonassert isinstance(instance.items, list)
# TypeScriptexpect(Array.isArray(instance.items)).toBe(true);
# C#Assert.IsType<List<string>>(instance.Items);
# Goif len(instance.Items) == 0 { t.Error("Expected items to be non-empty")}Debugging Test Failures
Section titled “Debugging Test Failures”1. Check the Sample Values
Section titled “1. Check the Sample Values”Ensure @sample values in TypeSpec are valid:
// ❌ Wrong - sample doesn't match property@sample(#{ wrongName: "value" })name: string;
// ✅ Correct@sample(#{ name: "value" })name: string;2. Regenerate Tests
Section titled “2. Regenerate Tests”cd agentschema && npm run generate3. Check Generated Test File
Look at the actual generated test to understand what's being tested:

```sh
# Python
cat runtime/python/agentschema/tests/test_load_model.py

# TypeScript
cat runtime/typescript/agentschema/tests/model.test.ts
```

### 4. Run Single Test with Verbose Output
```sh
# Python
uv run pytest tests/test_load_model.py::test_load_json_model -v

# TypeScript
npm test -- --grep "Model" --reporter verbose

# C#
dotnet test --filter "ModelConversionTests" --logger "console;verbosity=detailed"

# Go
go test -v -run "TestModel" ./...
```

## Continuous Integration

Tests run automatically on:
- Pull requests
- Pushes to main branch
See `.github/workflows/` for the CI configuration.
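For orientation, a minimal workflow covering the Python runtime might look like the sketch below. This is illustrative only, not the repository's actual workflow file; consult `.github/workflows/` for the real configuration:

```yaml
# Illustrative sketch - not the repo's actual workflow
name: tests
on:
  pull_request:
  push:
    branches: [main]
jobs:
  python:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv run pytest tests/
        working-directory: runtime/python/agentschema
```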