Review Gates
Review gates are quality checkpoints that evaluate phase artifacts before the lifecycle advances. They enforce standards and catch problems early -- before bad decisions cascade through later phases.
Gate Configuration
In v3, gates are configured per phase inside protocol YAML files using a boolean `human_approval` field:
```yaml
# Example from a protocol YAML
phases:
  plan:
    agents: [architect, product-manager]
    spawn_strategy: team
    gate:
      human_approval: true   # true = human must approve; false = auto-advance
      checklist: plan.yaml
```

When `human_approval: true`
The most rigorous mode. The human must explicitly approve advancement.
Behavior with no failures:
- The review report is presented
- You are asked to approve, request revisions, or reject
- No advancement occurs without your explicit "yes"
Behavior with warnings:
- Warnings are listed with justifications
- You decide whether to approve despite warnings
Behavior with failures:
- Failures are listed as mandatory action items
- Advancement is blocked -- no override option
- Fix the issues and run `/sniper-review` again
This is the recommended setting for plan and review phases because these are high-risk transitions.
When `human_approval: false`
Auto-advances when quality is acceptable, with async review.
Behavior with no failures:
- Auto-advances to the next phase
- Report is printed for your records
Behavior with warnings:
- Auto-advances despite warnings
- Warnings are noted for async review
Behavior with failures:
- Failures are presented and you are asked to choose:
- Have the agents fix the issues
- Override and advance anyway
- Stop and review manually
This is suitable for discover and implement phases where output can be refined in later iterations.
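Mirroring the earlier example, an auto-advancing gate for the implement phase might be configured like this (the agent names are illustrative, not prescribed by the framework):

```yaml
# Example from a protocol YAML -- auto-advancing gate
phases:
  implement:
    agents: [developer, test-engineer]  # illustrative agent names
    spawn_strategy: team
    gate:
      human_approval: false  # auto-advance; failures still prompt a choice
      checklist: implement.yaml
```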
WARNING
Setting `human_approval: false` on plan or review phases is strongly discouraged. Bad architecture decisions cascade through the entire project, and unreviewed code can introduce bugs and security issues.
How Gates Work
Gates are enforced by Claude Code hooks, not by convention. The step-by-step flow:
- All tasks in a phase complete.
- The lead orchestrator's turn ends.
- A `Stop` hook fires the `gate-reviewer` agent.
- The gate-reviewer reads the phase checklist and validates each item.
- PASS (exit 0) -- lead advances to next phase.
- FAIL (exit 2) -- lead is blocked. It reads the failure report and routes feedback to the failing agent.
- PENDING_APPROVAL -- human must approve before advancing.
Approval Gates by Protocol
| Protocol | Approval Gates |
|---|---|
| full | plan-approval, final-review |
| feature | plan-approval |
| patch | final-review |
| hotfix | none |
| explore | none |
| refactor | final-review |
| ingest | none |
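In config terms, a protocol's approval gates are simply the phases where `human_approval` is true. A sketch of how the full protocol's two gates could map to phase config (phase names assumed, agent lists and other keys omitted):

```yaml
# Sketch: the full protocol's two approval gates (other keys omitted)
phases:
  plan:
    gate:
      human_approval: true    # the plan-approval gate
  implement:
    gate:
      human_approval: false   # auto-advances
  review:
    gate:
      human_approval: true    # the final-review gate
```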
Checklist Item Types
Each phase has a checklist in `packages/core/checklists/`. Items can be one of four types:
| Type | Description | Pass Condition |
|---|---|---|
| Command | Run a shell command (e.g., `pnpm test`) | Exit code 0 |
| Artifact | Verify a file exists at an expected path | File exists |
| Glob | Verify files matching a pattern exist | At least one match |
| Grep | Search the diff for patterns (e.g., `TODO`/`FIXME`) | Pattern found (or not found, depending on rule) |
Blocking vs. Non-Blocking Items
Checklist items are either blocking or non-blocking:
- Blocking failures halt the protocol. The gate returns FAIL and the lead must resolve the issue before the protocol can advance.
- Non-blocking failures are recorded as warnings in the gate report. The protocol can still advance, but the warnings are surfaced for human review.
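A hypothetical checklist fragment combining the four item types with the blocking flag might look like this (the field names are illustrative, not taken from the actual checklist schema):

```yaml
# Illustrative only -- field names are not the actual checklist schema
items:
  - type: command
    run: pnpm test               # PASS on exit code 0
    blocking: true               # failure halts the protocol
  - type: artifact
    path: docs/architecture.md   # PASS if the file exists
    blocking: true
  - type: glob
    pattern: "src/**/*.test.ts"  # PASS if at least one file matches
    blocking: false              # failure is recorded as a warning
  - type: grep
    pattern: "TODO|FIXME"        # searched for in the diff
    blocking: false
```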
How Evaluation Works
When `/sniper-review` runs, it:
- Determines the current active phase from the protocol checkpoint
- Loads the phase-specific checklist from `.sniper/checklists/`
- Identifies the artifacts to review based on the phase
- Evaluates each checklist criterion against the actual artifact content
- Assigns PASS, WARN, or FAIL to each item
- Applies the gate policy based on the `human_approval` setting
Evaluation Criteria
Each checklist item receives one of three statuses:
| Status | Meaning | Criteria |
|---|---|---|
| PASS | Criterion is clearly met | Substantive content, specific (not generic), actionable depth |
| WARN | Partially met or needs improvement | Content exists but lacks specificity, vague language, incomplete sections |
| FAIL | Not met | Content missing entirely, only placeholder text, contradicts criterion |
The evaluator reads the full artifact content and checks each criterion. Template placeholders (`TODO`, `<!-- -->`) are treated as FAILs, not WARNs.
Phase-Specific Checklists
Discovery Checklist
Located at `.sniper/checklists/discover.yaml`. Evaluates discovery artifacts:
Project Brief:
- Problem statement is specific and evidence-based
- At least 3 direct competitors identified with features and pricing
- Unique value proposition clearly differentiates
- Target market segment defined with size estimates
- Key assumptions listed explicitly
- v1 scope separates in-scope from out-of-scope
Risk Assessment:
- Technical feasibility risks identified with specifics
- Integration and compliance risks documented
- Each risk has a mitigation strategy
- At least 2 devil's advocate findings
User Personas:
- At least 2 distinct personas defined
- Each has role, goals, pain points, workflows
- Primary user journey mapped
- Personas are realistic, not idealized
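These prose criteria map to entries in discover.yaml; a hypothetical fragment (the structure and field names are illustrative) could look like:

```yaml
# Hypothetical fragment of discover.yaml -- structure is illustrative
project_brief:
  - criterion: "Problem statement is specific and evidence-based"
    blocking: true
  - criterion: "At least 3 direct competitors identified with features and pricing"
    blocking: false
risk_assessment:
  - criterion: "Each risk has a mitigation strategy"
    blocking: true
```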
Planning Checklist
Located at `.sniper/checklists/plan.yaml`. The most detailed checklist. Evaluates plan artifacts and cross-document consistency:
PRD: testable acceptance criteria, prioritized requirements (P0/P1/P2), measurable success metrics, no duplicates
Architecture: technology choices with rationale and alternatives, component diagram with boundaries, data models with field types, API contracts specific enough for independent implementation
UX Spec: information architecture covering all pages, user flows including error paths, component states (default/hover/active/disabled/loading/error), accessibility requirements
Security: auth model, authorization model, encryption strategy, compliance with named regulations, threat model
Cross-Document Consistency: API contracts match UX data needs, security is implementable within architecture, PRD requirements fully covered by architecture
Implement Checklist
Located at `.sniper/checklists/implement.yaml`. Evaluates code and tests:
- Code quality, linting, type safety
- Test existence and pass rates
- Acceptance criteria verification
- Architecture compliance
- Security review
Memory Compliance
When the memory system is active, review gates also check compliance with learned conventions, anti-patterns, and decisions:
- Convention checks -- verify that code follows codified conventions (e.g., "all API routes use Zod validation")
- Anti-pattern scanning -- search for known anti-patterns in changed files
- Decision consistency -- ensure changes do not contradict active architectural decisions
Memory compliance findings are advisory when `human_approval: false` and enforcement-level when `human_approval: true`.
Domain Pack Checklists
Domain packs can provide additional checklist items. If `.sniper/packs/*/checklists/` contains any markdown files, those items are evaluated after the framework checklist.
For example, the sales-dialer pack adds a telephony review checklist that verifies TCPA compliance and call recording requirements.
Running Reviews Manually
You can run a review at any time with:

```
/sniper-review
```

This evaluates the current active phase. The command reads the phase from the protocol checkpoint, loads the appropriate checklist, and produces a full report.
Structured Decision Prompts
Review gates are quality checkpoints between phases. For mid-phase questions and decision points that arise during agent execution, see Structured Decision Prompts. SDPs complement gates -- they handle ambiguity within a phase, while gates enforce quality at phase boundaries.
Next Steps
- Structured Decision Prompts -- how agents surface mid-phase questions
- Configuration -- configure gates and protocols
- Full Lifecycle -- see gates in action across the lifecycle
- Reference: Checklists -- browse all available checklists
