/sniper-review -- Run Review Gate for the Current Phase
You are executing the /sniper-review command. Your job is to evaluate the current phase's artifacts against its review checklist and enforce the appropriate gate policy. Follow every step below precisely.
Step 0: Pre-Flight -- Determine Current Phase
Read `.sniper/config.yaml`.

Determine the current active phase: find the last entry in `state.phase_log` where `completed_at` is null.

If there is no active phase (all phases completed, or the log is empty), print:

ERROR: No active phase. The SNIPER lifecycle has not been started or all phases are complete.

Current state:
  Phase: not started
  Sprint: 0

To begin, run one of these phase commands:
  /sniper-ingest   -- Ingest an existing codebase
  /sniper-discover -- Start Phase 1: Discovery & Analysis
  /sniper-plan     -- Start Phase 2: Planning & Architecture
  /sniper-solve    -- Start Phase 3: Epic Sharding & Story Creation
  /sniper-sprint   -- Start Phase 4: Implementation Sprint

Then STOP.
Store the current phase name. It must be one of:
`ingest`, `discover`, `plan`, `solve`, `sprint`
Step 1: Map Phase to Checklist and Gate Mode
Use this mapping to determine which checklist to load and what gate mode to enforce:
| Phase | Checklist File | Config Gate Key |
|---|---|---|
| ingest | `.sniper/checklists/ingest-review.md` | `review_gates.after_ingest` |
| discover | `.sniper/checklists/discover-review.md` | `review_gates.after_discover` |
| plan | `.sniper/checklists/plan-review.md` | `review_gates.after_plan` |
| solve | `.sniper/checklists/story-review.md` | `review_gates.after_solve` |
| sprint | `.sniper/checklists/sprint-review.md` | `review_gates.after_sprint` |
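The mapping above can be expressed as a small lookup table; a sketch in Python where the paths and keys mirror the table and the function name is illustrative:

```python
PHASE_GATES = {
    "ingest":   (".sniper/checklists/ingest-review.md",   "review_gates.after_ingest"),
    "discover": (".sniper/checklists/discover-review.md", "review_gates.after_discover"),
    "plan":     (".sniper/checklists/plan-review.md",     "review_gates.after_plan"),
    "solve":    (".sniper/checklists/story-review.md",    "review_gates.after_solve"),
    "sprint":   (".sniper/checklists/sprint-review.md",   "review_gates.after_sprint"),
}

def gate_for(phase: str) -> tuple[str, str]:
    """Return (checklist_path, config_gate_key) for a phase."""
    return PHASE_GATES[phase]
```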
- Read the gate mode from `config.yaml` using the appropriate key
- Read the checklist file
- Check for domain pack checklists: scan `.sniper/packs/*/checklists/` for any `.md` files. If found, these will be evaluated as additional checklist items after the framework checklist (Step 3b).
If the checklist file does not exist:
ERROR: Checklist file not found: {path}
The framework installation may be incomplete. Check .sniper/checklists/ for available checklists.

Then STOP.
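The domain pack scan in Step 1 can be sketched with a glob over the `.sniper/packs` layout described above (the function name and sorting choice are illustrative):

```python
from pathlib import Path

def pack_checklists(root: str = ".") -> list[Path]:
    """Return all domain pack checklist .md files, sorted for stable ordering."""
    return sorted(Path(root).glob(".sniper/packs/*/checklists/*.md"))
```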
Step 2: Identify Artifacts to Review
Based on the current phase, identify which artifact files need to be reviewed:
Phase: ingest
| Artifact | Expected Path |
|---|---|
| Project Brief | docs/brief.md |
| System Architecture | docs/architecture.md |
| Coding Conventions | docs/conventions.md |
Phase: discover
| Artifact | Expected Path |
|---|---|
| Project Brief | docs/brief.md |
| Risk Assessment | docs/risks.md |
| User Personas | docs/personas.md |
Phase: plan
| Artifact | Expected Path |
|---|---|
| PRD | docs/prd.md |
| Architecture | docs/architecture.md |
| UX Specification | docs/ux-spec.md |
| Security Requirements | docs/security.md |
Phase: solve
| Artifact | Expected Path |
|---|---|
| Epics | docs/epics/*.md |
| Stories | docs/stories/*.md |
Phase: sprint
| Artifact | Expected Path |
|---|---|
| Source Code | Files in ownership directories |
| Tests | Files in test directories |
| Sprint Stories | Stories assigned to current sprint |
For each expected artifact:
- Check if the file exists
- If it does NOT exist, record an immediate FAIL for that artifact: `FAIL: {artifact_name} not found at {path}`
- If it exists, read its content for evaluation in Step 3
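A sketch of the existence check, assuming expected paths may be glob patterns such as `docs/epics/*.md` (the helper name is hypothetical):

```python
from pathlib import Path

def check_artifacts(expected: dict[str, str], root: str = ".") -> list[str]:
    """Return FAIL messages for artifacts whose path/pattern matches no file."""
    failures = []
    for name, pattern in expected.items():
        if not list(Path(root).glob(pattern)):
            failures.append(f"FAIL: {name} not found at {pattern}")
    return failures
```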
Step 3: Evaluate Each Checklist Item
Parse the checklist file. Each line starting with `- [ ]` is a checklist item.
Group the checklist items by their section headers (`##` headers in the checklist file).
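The parsing and grouping steps can be sketched as follows; the `General` bucket for items that appear before the first header is an assumption, not documented framework behavior:

```python
def parse_checklist(text: str) -> dict[str, list[str]]:
    """Group '- [ ]' checklist items under their '## ' section headers."""
    sections: dict[str, list[str]] = {}
    current = "General"  # fallback section for items before any header
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("## "):
            current = line[3:].strip()
            sections.setdefault(current, [])
        elif line.startswith("- [ ]"):
            sections.setdefault(current, []).append(line[5:].strip())
    return sections
```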
For each checklist item, evaluate it against the relevant artifact content:
Evaluation Criteria
For each item, assign one of three statuses:
PASS -- The criterion is clearly met in the artifact:
- The artifact contains substantive content addressing the criterion
- The content is specific, not generic placeholder text
- The content has enough depth to be actionable
WARN -- The criterion is partially met or needs improvement:
- The artifact addresses the topic but lacks specificity
- The content exists but is thin or uses vague language
- The artifact has the right structure but some sections are incomplete
FAIL -- The criterion is not met:
- The artifact does not address the criterion at all
- The relevant section is empty or contains only template placeholders
- The content contradicts the criterion
- The artifact file does not exist
Evaluation Process
For each checklist section:
- Read the relevant artifact
- For each checklist item in that section:
  a. Search the artifact for content related to the criterion
  b. Assess whether the content meets the criterion (PASS/WARN/FAIL)
  c. Write a brief (one-line) justification for the assessment
- Record the result
Be thorough but fair:
- Do NOT fail items just because they could be better -- that is a WARN
- Do NOT pass items that only have placeholder text (template markers like `<!-- -->` or `TODO`)
- For cross-document consistency checks, read ALL referenced documents and compare
Step 3c: Memory Compliance Checks
After evaluating the phase checklist, check project memory for compliance if memory files exist.
3c-1: Load Memory
Check if .sniper/memory/ directory exists. If not, skip this step entirely.
Read:
- `.sniper/memory/conventions.yaml` — filter for entries with `enforcement: review_gate` or `enforcement: both`
- `.sniper/memory/anti-patterns.yaml` — all entries
- `.sniper/memory/decisions.yaml` — active entries only
If workspace memory exists (check config), also load workspace-level files.
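The enforcement filter for conventions might look like this, assuming the YAML file has already been parsed into a list of dicts with an `enforcement` field:

```python
def gate_enforced(conventions: list[dict]) -> list[dict]:
    """Keep only conventions whose enforcement applies at the review gate."""
    return [c for c in conventions
            if c.get("enforcement") in ("review_gate", "both")]
```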
3c-2: Convention Compliance
For each convention with review gate enforcement:
- Read the convention's `rule` and `detection_hint` (if present)
- Examine the sprint output / artifacts being reviewed
- Check whether the convention was followed
- Report as PASS (compliant) or WARN (violation with details)
3c-3: Anti-Pattern Scanning
For each anti-pattern:
- Read the `detection_hint`
- If a detection hint is present, search the changed files for matches
- If matches found, report as WARN with file locations
- If no detection hint, skip automated detection (will be caught in manual review)
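A sketch of the automated scan, assuming each `detection_hint` is a regular expression applied line by line (the framework may treat hints differently, e.g. as plain substrings):

```python
import re

def scan_anti_patterns(patterns: list[dict], files: dict[str, str]) -> list[str]:
    """Return WARN lines for every hint match; files maps path -> content."""
    warnings = []
    for ap in patterns:
        hint = ap.get("detection_hint")
        if not hint:
            continue  # no automated detection; left for manual review
        for path, content in files.items():
            for lineno, line in enumerate(content.splitlines(), 1):
                if re.search(hint, line):
                    warnings.append(f"WARN {ap['id']}: match in {path}:{lineno}")
    return warnings
```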
3c-4: Decision Consistency
For each active decision:
- Check if the sprint output contradicts the decision
- Example: if decision says "Use PostgreSQL" but new code imports MongoDB, flag it
- Report any contradictions
3c-5: Report Memory Compliance
Add a "Memory Compliance" section to the review output:
## Memory Compliance
### Convention Checks
PASS conv-001: Zod validation — all new routes use validation middleware
WARN conv-003: Barrel exports — 2 new directories missing index.ts
### Anti-Pattern Checks
PASS ap-001: No direct DB queries in handlers — clean
WARN ap-002: Silent error catch found in lib/webhook-delivery.ts:42
### Decision Consistency
PASS All decisions consistent
### Summary
{N} conventions checked, {M} violations
{N} anti-patterns checked, {M} matches found
{N} decisions checked, {M} contradictions

If there are violations, these count as review findings but do NOT block the gate by themselves (memory compliance is advisory unless the gate mode is strict AND the convention enforcement is review_gate).
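The advisory rule can be reduced to a small predicate; treating `enforcement: both` as review-gate enforced is an assumption consistent with the filter in Step 3c-1:

```python
def violation_blocks(gate_mode: str, enforcement: str) -> bool:
    """A memory violation blocks only under a strict gate with
    review-gate enforcement; otherwise it is advisory."""
    return gate_mode == "strict" and enforcement in ("review_gate", "both")
```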
Step 4: Present Results
Print a formatted review report:
============================================
SNIPER Review Gate: {phase} Phase
============================================
Gate Mode: {strict|flexible|auto}
Checklist: {checklist_file}
Date: {today's date}
--------------------------------------------
{Section Name from Checklist}
--------------------------------------------
{PASS|WARN|FAIL} {checklist item text}
-> {brief justification}
{PASS|WARN|FAIL} {checklist item text}
-> {brief justification}
... (repeat for all items in section)
--------------------------------------------
{Next Section Name}
--------------------------------------------
... (repeat for all sections)
============================================
Summary
============================================
Total Items: {count}
PASS: {count} ({percentage}%)
WARN: {count} ({percentage}%)
FAIL: {count} ({percentage}%)
Overall: {ALL PASS | HAS WARNINGS | HAS FAILURES}

Step 5: Apply Gate Policy
Based on the gate mode and results, take the appropriate action:
Gate Mode: strict
If ALL items are PASS (no WARN, no FAIL):
- Print: "All review criteria passed. This gate requires human approval to advance."
- Ask the user: "Do you approve advancing from the {phase} phase to the next phase? (yes/no)"
- If YES: proceed to Step 6 (update state)
- If NO: print "Phase advancement blocked by reviewer. Address feedback and run /sniper-review again."
If ANY items are WARN (but no FAIL):
- Print: "Review found warnings. This gate requires human approval."
- List all WARN items with their justifications
- Ask: "There are {count} warnings. Do you want to approve advancement despite these warnings? (yes/no)"
- If YES: proceed to Step 6
- If NO: print "Phase advancement blocked. Address warnings and run /sniper-review again."
If ANY items are FAIL:
- Print: "Review found failures. This gate BLOCKS advancement."
- List all FAIL items with their justifications
- Print: "The following items MUST be addressed before this phase can be approved:"
- List each FAIL item as an action item
- Print: "Fix these issues and run /sniper-review again."
- Do NOT advance. Do NOT ask for override. STOP here.
Gate Mode: flexible
If ALL items are PASS:
- Print: "All review criteria passed. Auto-advancing to next phase."
- Proceed to Step 6 (update state)
If ANY items are WARN (but no FAIL):
- Print: "Review found warnings. Auto-advancing (flexible gate)."
- List WARN items briefly
- Print: "These items are noted for async review. Proceeding to next phase."
- Proceed to Step 6
If ANY items are FAIL:
- Print: "Review found failures in a flexible gate."
- List all FAIL items
- Ask: "There are {count} failures. This is a flexible gate -- do you want to advance anyway? (yes/no)"
- If YES: proceed to Step 6 with a note that failures were accepted
- If NO: print "Address failures and run /sniper-review again."
Gate Mode: auto
- Print: "Auto gate -- no review required. Advancing to next phase."
- Print any FAIL or WARN items as informational notes
- Proceed to Step 6
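The three gate modes collapse into a small decision function; `ask`, `advance`, and `block` are illustrative action labels, where `ask` means prompt the human per the scripts above:

```python
def gate_action(mode: str, fail_count: int) -> str:
    """Map gate mode and FAIL count to the next action."""
    if mode == "auto":
        return "advance"                           # never blocks; findings are informational
    if mode == "strict":
        return "block" if fail_count else "ask"    # strict always needs human approval
    if mode == "flexible":
        return "ask" if fail_count else "advance"  # flexible only asks on failures
    raise ValueError(f"Unknown gate mode: {mode}")
```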
Step 6: Update Lifecycle State
When a phase is approved for advancement:
Read the current `.sniper/config.yaml`.

Find the active phase_log entry (the one where `completed_at` is null) and update it:

```yaml
completed_at: "{current ISO timestamp}"
approved_by: "{human or auto-flexible or auto}"
```

Update artifact statuses based on what was reviewed:
- If all items for an artifact passed -> set status to `approved`
- If any items warn but no fails -> keep status as `draft`
- If the artifact exists but has fails -> keep status as `draft`
- Keep existing values for artifacts not reviewed in this phase
If the completed phase is `sprint`, increment `state.current_sprint` by 1.

Write the updated config back to `.sniper/config.yaml`.

Suggest the next command based on what was just completed:

| Completed Phase | Suggested Next Commands |
|---|---|
| ingest | /sniper-feature, /sniper-discover, /sniper-audit |
| discover | /sniper-plan |
| plan | /sniper-solve |
| solve | /sniper-sprint |
| sprint | /sniper-sprint (next sprint), /sniper-review |

Print the completion confirmation:

============================================
Phase Review Complete
============================================
Completed: {phase} ({context})
Artifacts updated in config.yaml

Suggested next: {next_command}
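Step 6's state update might be sketched like this, assuming the config is a parsed dict that the caller serializes back to `config.yaml` with all other keys preserved:

```python
from datetime import datetime, timezone

def complete_phase(config: dict, approved_by: str) -> dict:
    """Mark the active phase complete and bump the sprint counter if needed."""
    active = [e for e in config["state"]["phase_log"]
              if e.get("completed_at") is None]
    entry = active[-1]  # last entry with completed_at == null
    entry["completed_at"] = datetime.now(timezone.utc).isoformat()
    entry["approved_by"] = approved_by  # "human", "auto-flexible", or "auto"
    if entry["phase"] == "sprint":
        config["state"]["current_sprint"] = config["state"].get("current_sprint", 0) + 1
    return entry
```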
IMPORTANT RULES
- NEVER skip evaluation -- read every artifact and assess every checklist item.
- NEVER auto-approve a strict gate. Always require explicit human input.
- NEVER modify artifact files -- this command is a review tool, not an editor.
- Be honest in assessments. Do not inflate passes to speed things along.
- If artifacts contain only template placeholders, that is a FAIL, not a WARN.
- For cross-document consistency checks, you MUST read all referenced documents.
- When updating config.yaml, preserve ALL existing content -- only modify the state section.
- Always show the full formatted report before applying gate logic.
- If the user's config.yaml is malformed or unreadable, report an error and STOP.
