This project is under active development and subject to breaking changes. See the changelog for release notes.

Comprehension Checkpoints

Research shows that developers who delegate code to AI without cognitive engagement retain 17% less knowledge, and that the skill most eroded by passive AI assistance is debugging.

Sustainable velocity = code generation × comprehension

Before executing medium or high impact tasks, the devsquad.implement agent requests confirmation of understanding:

Before implementing, confirm that you understand what will be done:
Task: [ID and description]
Affected files: [list]
Approach: [summary]
Briefly describe what this change does, or say "reviewed, proceed"
if you have already analyzed the plan.

Generic responses (“ok”, “go”, “do it”) trigger a request for more specific confirmation. The goal is to encourage the developer to process what will happen, not merely authorize it.
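The filtering described above can be sketched in a few lines. This is a hypothetical illustration, not devsquad's actual implementation; the phrase list, the `ALLOWED_SHORTCUTS` opt-out, and the word-count heuristic are all assumptions:

```python
# Sketch: filter confirmations that authorize a task without demonstrating
# understanding. Phrase lists and thresholds are illustrative assumptions.

GENERIC_REPLIES = {"ok", "go", "do it", "sure", "yes", "proceed"}
ALLOWED_SHORTCUTS = {"reviewed, proceed"}  # explicit shortcut from the prompt above

def is_generic_confirmation(reply: str) -> bool:
    """True when the reply should trigger a request for more specific confirmation."""
    normalized = reply.strip().lower().rstrip(".!")
    if normalized in ALLOWED_SHORTCUTS:
        return False
    # Very short replies are treated as rubber-stamping, not processing.
    return normalized in GENERIC_REPLIES or len(normalized.split()) < 3
```

A reply like "this change adds retry logic to the payment client" passes, while "ok" or "go!" would trigger the follow-up request.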

Post-implementation questions verify the developer knows how to navigate the generated code:

Implementation complete. To ensure knowledge transfer:
1. Where is the entry point for this feature?
2. Which test covers the main error scenario?
3. What happens if [critical dependency] fails?
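One way to build that checklist is as a template parameterized on the critical dependency. A minimal sketch, assuming the questions are fixed and only the dependency varies (the function name is hypothetical):

```python
def knowledge_transfer_questions(critical_dependency: str) -> list[str]:
    """Sketch: fill the post-implementation checklist for a given dependency."""
    return [
        "Where is the entry point for this feature?",
        "Which test covers the main error scenario?",
        f"What happens if {critical_dependency} fails?",
    ]
```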

The devsquad.implement agent detects signs that the developer did not understand the code:

| Anti-pattern | Agent response |
| --- | --- |
| “It works but I don’t know why” | “Let’s understand together. What do you think each part does?” |
| “I copied it from somewhere else” | “Is that context the same as yours? What might be different?” |
| “Copilot generated this” | “Right, but do you understand what this code does? Walk me through it.” |
| Trial-and-error debugging | “Before trying more things, let’s understand what’s happening.” |
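The phrase-based anti-patterns above could be detected with simple pattern matching. This is an illustrative sketch only; the real detection logic is not documented here, and the patterns and function name are assumptions (behavioral signals like trial-and-error debugging would need session-level tracking, not text matching):

```python
import re
from typing import Optional

# Hypothetical phrase -> response map for the text-based anti-patterns.
ANTI_PATTERNS = {
    r"works but .*(don't|do not) know why":
        "Let's understand together. What do you think each part does?",
    r"copied it from":
        "Is that context the same as yours? What might be different?",
    r"copilot generated":
        "Right, but do you understand what this code does? Walk me through it.",
}

def respond_to_anti_pattern(message: str) -> Optional[str]:
    """Return the coaching response for a matched anti-pattern, else None."""
    lowered = message.lower()
    for pattern, response in ANTI_PATTERNS.items():
        if re.search(pattern, lowered):
            return response
    return None
```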

If the developer asks to modify code generated in the same session, the agent identifies the pattern:

You are asking to modify code that was generated earlier in this session.
Before proceeding, consider what this may indicate:
1. Requirement was unclear. Go back to spec?
2. Chosen approach was not adequate. Review trade-offs?
3. Legitimate requirement change. Document the reason?

If the pattern repeats (3+ modifications to the same code), the agent suggests pausing to review the spec or plan. This prevents the generate-fix-generate cycle that consumes attention without real progress.
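The "3+ modifications" threshold amounts to a per-artifact counter. A minimal sketch, assuming in-memory session state (class and method names are hypothetical):

```python
from collections import Counter

MODIFICATION_THRESHOLD = 3  # "3+ modifications to the same code"

class ModificationTracker:
    """Sketch: count modifications per generated artifact within a session."""

    def __init__(self) -> None:
        self._counts: Counter[str] = Counter()

    def record(self, artifact_id: str) -> bool:
        """Record one modification; True means 'suggest pausing to review the spec'."""
        self._counts[artifact_id] += 1
        return self._counts[artifact_id] >= MODIFICATION_THRESHOLD
```

The third `record("auth.py")` call would return `True`, prompting the review suggestion instead of another generate-fix round.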

When the developer has questions, agents guide through questions instead of giving ready-made answers:

  1. Clarify the problem: “What do you expect to happen? What is actually happening?”
  2. Guide investigation: “Have you checked the value of [variable] at this point?”
  3. Point to existing resources: “In [file:line], there is an example of how this is done.”
  4. Verify understanding: “In your own words, what is causing the problem?”

The developer receives a direct implementation only after demonstrating understanding.

The agent refuses requests like “fix this” or “debug this” without sufficient context:

I cannot implement fixes without understanding the problem. Please provide:
1. Expected behavior vs observed behavior
2. Complete error message (if any)
3. What you have already tried

This protects against the cycle of generating code to fix poorly defined problems.
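Enforcing that checklist can be reduced to validating the three required fields before any fix is attempted. A sketch under the assumption that fix requests arrive as structured dictionaries (field names are illustrative):

```python
# Hypothetical required fields mirroring the three-item checklist above.
REQUIRED_CONTEXT = ("expected behavior", "observed behavior", "attempted fixes")

def missing_context(report: dict) -> list[str]:
    """Sketch: list which required fields are absent or empty in a fix request."""
    return [field for field in REQUIRED_CONTEXT if not report.get(field)]
```

A bare "fix this" maps to an empty report, so all three fields come back missing and the agent asks for them before writing any code.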