/sniper-sprint -- Phase 4: Implementation Sprint (Parallel Team)
You are executing the /sniper-sprint command. Your job is to run an implementation sprint by spawning a development team to implement selected stories. You are the team lead -- you coordinate, facilitate API contract alignment, and ensure quality. You do NOT write code yourself. Follow every step below precisely.
Arguments: $ARGUMENTS
Step 0: Pre-Flight Checks
Perform ALL checks before proceeding. If any critical check fails, STOP.
0a. Verify SNIPER Is Initialized
- Read `.sniper/config.yaml`.
- If the file does not exist or `project.name` is empty:
  - STOP. Print: "SNIPER is not initialized. Run `/sniper-init` first."
0b. Check for Feature Flag
- If `$ARGUMENTS` contains `--feature SNPR-{XXXX}`:
  - Store the feature ID.
  - Read `state.features[]` from config to find the feature.
  - If not found, STOP: "Feature SNPR-{XXXX} not found. Run `/sniper-feature --list` to see active features."
  - Set story directory to `docs/features/SNPR-{XXXX}/stories/`.
  - Set team name prefix to `sniper-feature-sprint-{feature_id}`.
  - Note: Feature sprints do NOT increment `state.current_sprint`.
- If no `--feature` flag:
  - Set story directory to `docs/stories/`.
  - Set team name prefix to `sniper-sprint`.
0c. Verify Stories Exist
- List files in the story directory (set in 0b).
- If the directory does not exist or contains no `.md` files:
  - If feature mode: STOP. Print: "No stories found for SNPR-{XXXX}. The feature may not have reached the solving phase yet."
  - If normal mode: STOP. Print: "No stories found in `docs/stories/`. Run `/sniper-solve` first to create stories."
0d. Config Migration Check
- Read `schema_version` from `.sniper/config.yaml`.
- If `schema_version` is absent or less than 2, run the v1→v2 migration. Write the updated config before proceeding.
0e. Verify Phase State
- Check that `state.artifacts.stories.status` is not null (stories have been created).
- If `state.artifacts.stories.status` is null but story files exist, print a warning and continue.
0f. Verify Framework Files
Check that these files exist:
- `.sniper/teams/sprint.yaml`
- `.sniper/spawn-prompts/_template.md`
- `.sniper/checklists/sprint-review.md`
- `.sniper/personas/process/developer.md`
- `.sniper/personas/process/qa-engineer.md`
- `.sniper/personas/technical/backend.md`
- `.sniper/personas/technical/frontend.md`
- `.sniper/personas/technical/infrastructure.md`
- `.sniper/personas/technical/ai-ml.md`
- `.sniper/personas/cognitive/systems-thinker.md`
- `.sniper/personas/cognitive/user-empathetic.md`
- `.sniper/personas/cognitive/security-first.md`
- `.sniper/personas/cognitive/performance-focused.md`
- `.sniper/personas/cognitive/devils-advocate.md`
Report any missing files as warnings.
Step 1: Increment Sprint Number and Update State
Edit .sniper/config.yaml:
If normal mode (no --feature flag):
- Increment `state.current_sprint` by 1 (e.g., 0 -> 1, 1 -> 2).
- Store the new sprint number as `{sprint_number}` for use throughout.
- Append to `state.phase_log`:

```yaml
- phase: sprint
  context: "sprint-{sprint_number}"
  started_at: "{current ISO timestamp}"
  completed_at: null
  approved_by: null
```
If feature mode (--feature SNPR-{XXXX}):
- Do NOT increment `state.current_sprint`.
- Use the feature ID as the sprint identifier.
- Append to `state.phase_log`:

```yaml
- phase: sprint
  context: "feature-sprint-SNPR-{XXXX}"
  started_at: "{current ISO timestamp}"
  completed_at: null
  approved_by: null
```
Step 2: Read Team Definition and Config
- Read `.sniper/teams/sprint.yaml` in full. Parse:
  - `available_teammates`: the pool of possible teammates (not all will be needed)
  - `sprint_rules`: rules that apply to all sprint execution
  - `coordination`: pairs that need to communicate
  - `review_gate`: should be `strict`
- Read `.sniper/config.yaml` for:
  - `ownership` section (file ownership mappings)
  - `stack` section (technology details)
  - `agent_teams.max_teammates` (maximum concurrent teammates)
  - `agent_teams.default_model` and `agent_teams.planning_model`
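For orientation, a team definition following this schema might look like the sketch below. The top-level keys come from the parse list above; the specific teammate, rules, and coordination pair shown are illustrative examples, not the actual file contents.

```yaml
# Hypothetical sketch of .sniper/teams/sprint.yaml.
# Top-level keys are the ones parsed in Step 2; values are illustrative.
available_teammates:
  - name: backend-dev
    model: sonnet
    owns_from_config: backend
    compose:
      process: developer
      technical: backend
      cognitive: systems-thinker
sprint_rules:
  - "Stay inside your file ownership boundaries."
coordination:
  - pair: [backend-dev, frontend-dev]
    topic: "API contracts"
review_gate: strict
```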
Step 3: Present Available Stories and Select Sprint Backlog
3a. Inventory All Stories
- Read every `.md` file in the story directory (set in 0b).
- For each story, extract:
- Story ID and title (from filename and header)
- Epic reference
- Complexity (S/M/L)
- Priority (P0/P1/P2)
- File ownership (which directories it touches)
- Dependencies (which other stories must complete first)
- Status: check if the story has been implemented in a previous sprint (look for a "Status: Complete" marker or check if the files it would create already exist)
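As a reference for the extraction above, a story file's header might carry metadata like the following. This is a hedged sketch of the fields Step 3a looks for; the exact layout depends on how the stories were written.

```yaml
# Hypothetical story header fields extracted in Step 3a.
# Layout is illustrative; actual stories are Markdown files carrying
# these fields in their header.
id: S03
title: "Create task API endpoints"
epic: E02
complexity: M          # S / M / L
priority: P0           # P0 / P1 / P2
ownership: [backend, tests]
dependencies: [S01]
status: null           # becomes "Complete (Sprint {N})" after approval
```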
3b. Identify Available Stories
Filter to stories that:
- Have NOT been completed in a previous sprint
- Have all dependencies satisfied (dependent stories are already completed, OR the dependent story is also being selected for this sprint)
3c. Check for Sprint Backlog Argument
If $ARGUMENTS contains story IDs (e.g., "S01 S02 S03" or "S01-S05"), use those as the sprint backlog directly. Verify they exist and their dependencies are met.
3d. Present to User for Selection
If no stories were specified in arguments, present the available stories to the user:
============================================
Sprint {sprint_number} -- Story Selection
============================================
Available stories (not yet implemented, dependencies met):
| # | Story | Epic | Size | Priority | Ownership | Deps |
|-----|------------------------------|---------|------|----------|------------------|--------|
| S01 | {title} | E01 | M | P0 | backend, tests | None |
| S02 | {title} | E01 | S | P0 | infra | None |
| S03 | {title} | E02 | M | P0 | backend, tests | S01 |
| ... | ... | ... | ... | ... | ... | ... |
Stories blocked (dependencies not met):
| S15 | {title} | E05 | L | P1 | frontend, tests | S09 |
Recommended: Start with P0 stories that have no dependencies.
Select stories for this sprint (e.g., "S01 S02 S03 S04 S05"):

Wait for the user to respond with their selection.
3e. Validate Selection
- Verify all selected stories exist.
- Verify dependencies are met (either already completed in a previous sprint, or another selected story satisfies the dependency).
- If dependencies are unmet, warn the user and suggest adding the dependency stories.
- Check that the total workload is reasonable for one sprint (suggest limiting to 5-10 stories per sprint).
Step 4: Determine Required Teammates
Based on the selected stories' file ownership, determine which teammates to spawn.
Ownership-to-Teammate Mapping
Read the `owns_from_config` field from each available teammate in sprint.yaml, and cross-reference with the config.yaml ownership rules:
| Story touches directories in... | Teammate needed |
|---|---|
| `ownership.backend` paths (src/backend/, src/api/, src/services/, src/db/, src/workers/) | backend-dev |
| `ownership.frontend` paths (src/frontend/, src/components/, src/hooks/, src/styles/, src/pages/) | frontend-dev |
| `ownership.infrastructure` paths (docker/, .github/, infra/, terraform/, scripts/) | infra-dev |
| AI/ML features mentioned in story | ai-dev |
| Always included | qa-engineer |
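The left column of this table corresponds to the `ownership` section of config.yaml. A minimal sketch of that section, using the example paths from the table (real projects will configure their own):

```yaml
# Sketch of the ownership section in .sniper/config.yaml,
# using the example paths from the table above.
ownership:
  backend: [src/backend/, src/api/, src/services/, src/db/, src/workers/]
  frontend: [src/frontend/, src/components/, src/hooks/, src/styles/, src/pages/]
  infrastructure: [docker/, .github/, infra/, terraform/, scripts/]
```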
Rules
- Scan each selected story's "File Ownership" field.
- Map each ownership area to the corresponding teammate.
- `qa-engineer` is ALWAYS included -- they test everything.
- `ai-dev` is only needed if stories explicitly mention AI/ML features (check story content, not just ownership).
- Do NOT exceed `agent_teams.max_teammates` from config.yaml. If too many teammates would be needed, inform the user and suggest splitting the sprint.
Teammate Model Selection
From sprint.yaml, note the model field for each teammate:
- Most teammates use `sonnet` (the default model)
- `ai-dev` uses `opus` (complex AI work)
Store the list of required teammates for the next steps.
Step 5: Assign Stories to Teammates
Each story is assigned to exactly ONE implementation teammate (plus QA gets a testing task for each story).
Assignment Rules
- Backend stories -> `backend-dev`
- Frontend stories -> `frontend-dev`
- Infrastructure stories -> `infra-dev`
- AI/ML stories -> `ai-dev`
- Full-stack stories (touch both backend and frontend) -> assign to the teammate whose area has the heavier lift. Note the cross-boundary work in their task description and set up coordination.
- QA: `qa-engineer` gets a test task for each implementation story, blocked by that story's completion.
Balance Check
Try to distribute stories roughly evenly across teammates. If one teammate has 5 stories and another has 1, suggest rebalancing to the user.
Step 6: Compose Spawn Prompts
For each needed teammate, compose a spawn prompt by reading persona layers and assembling them into the template.
Reading Persona Layers
For each teammate in the sprint.yaml available_teammates list that is needed:
1. Read the persona files specified in their `compose` section:
   - Process layer: `.sniper/personas/process/{compose.process}.md`
   - Technical layer: `.sniper/personas/technical/{compose.technical}.md` (skip if null)
   - Cognitive layer: `.sniper/personas/cognitive/{compose.cognitive}.md`
   - Domain layer: domain pack context if configured
2. Read the spawn template: `.sniper/spawn-prompts/_template.md`
3. Look up the ownership paths from config.yaml using the `owns_from_config` field:
   - e.g., if `owns_from_config: backend`, get the paths from `config.yaml` -> `ownership.backend`
Assembling the Spawn Prompt
For each teammate, fill the spawn template:
- `{name}` = teammate name from sprint.yaml
- `{process_layer}` = contents of the process persona file
- `{technical_layer}` = contents of the technical persona file (or "No specific technical lens" if null)
- `{cognitive_layer}` = contents of the cognitive persona file
- `{domain_layer}` = domain context or "No domain pack configured."
- `{ownership}` = the actual directory paths from config.yaml
Then append the sprint-specific context:
## Sprint Context
**Sprint:** {sprint_number}
**Project:** {project.name}
**Stack:** {full stack details from config.yaml}
## Sprint Rules
{copy all sprint_rules from sprint.yaml}
## Your Assigned Stories
{For each story assigned to this teammate, include the FULL story file content.
Read each story file and embed it completely.}
### Story 1: {story ID} - {story title}
{full content of the story file}
### Story 2: {story ID} - {story title}
{full content of the story file}
...
## Architecture Reference
Read `docs/architecture.md` for the full system architecture.
The relevant sections are embedded in each story above.
{If feature mode: "Also read `docs/features/SNPR-{XXXX}/arch-delta.md` for architecture changes specific to this feature. The delta takes precedence for this feature's scope."}
{If conventions doc exists: "Also read `docs/conventions.md` for the project's coding patterns and conventions."}
## Coordination
{If this teammate has coordination pairs from sprint.yaml, list them:}
- Coordinate with `{other teammate}` on: {topic from coordination section}
- Message your coordination partner BEFORE implementing shared interfaces
## Instructions
1. Read ALL assigned story files completely before writing any code.
2. If you have coordination partners, message them to align on shared interfaces BEFORE coding.
3. Implement each story following the architecture patterns and acceptance criteria.
4. Write tests for every piece of functionality.
5. Verify all acceptance criteria are met.
6. Message the team lead when each story is complete.
7. If you are blocked, message the team lead IMMEDIATELY.
QA Engineer Spawn Prompt
The QA engineer's prompt is special -- they test ALL the sprint's stories:
## Sprint Context
**Sprint:** {sprint_number}
**Project:** {project.name}
**Stack:** {test_runner from config.yaml}
## Sprint Rules
{copy all sprint_rules}
## Stories to Test
{For each story in the sprint, include the FULL story file content.
The QA engineer needs all stories to write comprehensive tests.}
### Story: {story ID} - {story title}
**Implemented by:** {teammate name}
**Status:** WAIT for implementation to complete before testing this story.
{full content of the story file}
...
## Instructions
1. Read ALL story files to understand the full scope of this sprint.
2. Your tasks are BLOCKED until the corresponding implementation tasks complete.
3. When an implementation task completes, write tests for that story:
- Unit tests for individual functions
- Integration tests for API endpoints
- E2E tests for user-facing flows (if specified in the story)
4. Verify every acceptance criterion from the story.
5. Run the full test suite and report results.
6. If you find bugs or deviations from acceptance criteria, message the implementing teammate directly.
7. Message the team lead with test results for each story.
Step 7: Create the Agent Team
Use TeamCreate:
TeamCreate:
  team_name: "sniper-sprint-{sprint_number}"
  description: "SNIPER Sprint {sprint_number} for {project.name}. Stories: {list of story IDs}."

Step 8: Create Tasks with Dependencies
Create tasks in the shared task list.
Implementation Tasks (can run in parallel)
For each implementation teammate, create one task per assigned story:
TaskCreate:
  subject: "Implement {story ID}: {story title}"
  description: "{Full story description including acceptance criteria. Include the file path to the story: docs/stories/{story file}. Mention file ownership boundaries.}"
  activeForm: "Implementing {story ID}: {story title}"

If stories within the same teammate have inter-story dependencies, set `addBlockedBy` accordingly.
QA Tasks (blocked by implementation)
For each story, create a QA testing task that is blocked by the implementation task:
TaskCreate:
  subject: "Test {story ID}: {story title}"
  description: "Write and run tests for {story ID}. Verify all acceptance criteria. Story file: docs/stories/{story file}."
  activeForm: "Testing {story ID}: {story title}"

Set dependencies:

TaskUpdate:
  taskId: "{qa task id}"
  addBlockedBy: ["{implementation task id for this story}"]

Step 9: Spawn Teammates
Spawn each required teammate:
- team_name: "sniper-sprint-{sprint_number}"
- name: teammate name from sprint.yaml
- The full composed spawn prompt from Step 6
Spawn order:
- Spawn implementation teammates first (backend-dev, frontend-dev, infra-dev, ai-dev as needed).
- Spawn qa-engineer last (their tasks are blocked anyway).
Assign tasks using TaskUpdate:
- Each implementation task -> owner: corresponding teammate name, status: "in_progress"
- Each QA task -> owner: "qa-engineer" (stays `pending` until implementation completes)
Step 10: Enter Delegate Mode
You are the team lead. You coordinate. You do NOT write code.
10a: API Contract Alignment (Critical)
If BOTH backend-dev and frontend-dev are in this sprint:
- Immediately after spawning, message both:
"Before implementing, align on API contracts. backend-dev: share your planned endpoint specs. frontend-dev: share your expected data shapes. Agree on the contract before coding."
- Monitor their conversation. If they are not communicating within 5 minutes, prompt them again.
- If there are conflicts in the contract, help mediate.
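To make the alignment concrete, the pair might record an agreed contract like the sketch below. The endpoint and fields are purely illustrative examples, not project requirements; the point is that both sides commit to the same shape before writing code.

```yaml
# Illustrative API contract a backend/frontend pair might agree on.
# The endpoint and field names are hypothetical.
endpoint: GET /api/tasks
response:
  tasks:
    - id: string
      title: string
      status: string   # "pending" | "in_progress" | "completed"
agreed_by: [backend-dev, frontend-dev]
```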
10b: Other Coordination Pairs
From sprint.yaml coordination section, facilitate:
- backend-dev <-> ai-dev: AI pipeline integration points, data flow, WebSocket events, API boundaries
- backend-dev <-> qa-engineer: Share testable endpoints as completed
Message relevant teammates if coordination is not happening organically.
10c: Progress Monitoring
Track progress throughout execution:
- Check TaskList periodically.
- When an implementation teammate completes a story:
- Verify the code was written (check that new files exist in the relevant directories).
- Mark the implementation task as `completed`.
- The corresponding QA task is now unblocked.
- Message qa-engineer: "Implementation of {story ID} is complete. You can begin testing."
- Update the QA task to `in_progress`.
- When QA completes testing a story:
- Ask for test results (pass/fail count).
- If tests fail, message the implementing teammate with the failure details.
- If tests pass, mark the QA task as `completed`.
- If a teammate has not messaged in 10 minutes, check on them:
"Checking in -- how is progress on {task}? Are you blocked on anything?"
10d: Handling Blockers
If a teammate reports a blocker:
- Determine if it is a dependency issue (waiting on another teammate) or a technical issue.
- For dependency issues: message the blocking teammate and prioritize.
- For technical issues: provide guidance from the architecture doc or escalate to the user.
- If a blocker cannot be resolved, inform the user and ask for direction.
Wait for ALL tasks (implementation AND QA) to complete before proceeding.
Step 11: Verify Sprint Output
Once all tasks are complete:
- Verify code exists: Check that new files were created in the expected directories based on story file ownership.
- Verify tests exist: Check that test files were created.
- Run tests (if possible): Execute the test runner command from config.yaml:

  {package_manager} run test

  or the equivalent command for the project's test runner. Capture the results.
- Collect results from QA: If the QA engineer reported test results via messaging, compile them.
If any stories are incomplete or tests are failing, do NOT proceed. Message the relevant teammates and resolve issues first.
Step 12: Run Review Gate (STRICT -- Human Must Review Code)
This is a STRICT gate. Human review is NON-NEGOTIABLE for code.
Read the review checklist at `.sniper/checklists/sprint-review.md`.
For each checklist section, evaluate:
- Code Quality: Check for linting issues, type errors, hardcoded secrets, error handling.
- Testing: Verify tests exist and pass.
- Acceptance Criteria: Cross-reference each story's criteria with what was implemented.
- Architecture Compliance: Verify code follows architecture patterns.
- Security: Check for obvious security issues.
Prepare a sprint review report:
============================================
SNIPER Sprint {sprint_number} Review
============================================
Gate Mode: STRICT (human review required)
Stories Implemented:
{story ID}: {title} -- {IMPLEMENTED / PARTIAL / MISSING}
...
Test Results:
Total: {count}
Passed: {count}
Failed: {count}
Skipped: {count}
Code Quality:
[PASS] / [ATTENTION] / [FAIL] for each checklist item
Acceptance Criteria Verification:
{story ID}: {X}/{Y} criteria met
...
Architecture Compliance:
[PASS] / [ATTENTION] / [FAIL] for each checklist item
Security:
[PASS] / [ATTENTION] / [FAIL] for each checklist item
Files Changed:
{summary of new/modified files by directory}
============================================
- Present to the user and WAIT for approval.
Print to the user:
"Sprint {sprint_number} review is complete. Please review the code changes and test results above."
"Your options:"
- Approve -- mark sprint stories as complete
- Request revisions -- specify what needs to change
- Reject -- discard sprint output
- WAIT for the user to respond. Do not auto-advance.
If User Requests Revisions
- Parse feedback to determine which stories need changes.
- Message the relevant teammates with specific revision instructions.
- Wait for revisions and re-testing.
- Re-run the checklist and present again.
If User Approves
Proceed to Step 13.
If User Rejects
Print: "Sprint {sprint_number} rejected. Code remains in place but stories are not marked complete. Review and address issues manually." Update state and STOP.
Step 13: Update State and Shut Down Team
Update Lifecycle State
Edit .sniper/config.yaml:
- Update the sprint entry in `state.phase_log`:
  - Set `completed_at: "{current ISO timestamp}"`
  - Set `approved_by: "human"`
Mark Stories Complete
For each story that was implemented and approved, add a completion marker:
- Add `> **Status:** Complete (Sprint {sprint_number})` to the top of each story file
If feature mode: Also update `state.features[]` for this feature:
- Increment `stories_complete` by the number of completed stories
- If `stories_complete == stories_total`, the feature is ready for merge-back
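For reference, the `state.features[]` entry being updated might look like this sketch. Only the fields named in this step are shown; the feature record may carry others.

```yaml
# Sketch of a state.features[] entry in .sniper/config.yaml after a
# feature sprint. Only fields referenced in this step are shown.
state:
  features:
    - id: SNPR-0042
      stories_total: 6
      stories_complete: 6   # == stories_total -> ready for merge-back
```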
Shut Down Teammates
Send shutdown requests to each teammate:
- Send shutdown_request to each spawned teammate by name
- Wait for all to acknowledge
Step 14: Trigger Sprint Retrospective
After the review gate passes, automatically trigger a sprint retrospective if memory is enabled.
14-1: Check Memory Configuration
Read .sniper/config.yaml:
- If `memory.enabled` is false or not set, skip the retrospective.
- If `memory.auto_retro` is false, skip the retrospective but print: "Sprint retrospective skipped (auto_retro is disabled). To run manually: /sniper-memory --retro"
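The memory flags checked here live in config.yaml. A sketch of the relevant section, showing only the keys referenced in this step and in 14-4:

```yaml
# Sketch of the memory section in .sniper/config.yaml.
# Only the keys referenced in Steps 14-1 and 14-4 are shown.
memory:
  enabled: true
  auto_retro: true
  auto_codify: true
```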
14-2: Read Retro Team and Compose Agent
- Read `.sniper/teams/retro.yaml` for the team definition
- Parse the teammate entry: `retro-analyst` with compose layers from the YAML
- Compose the retro-analyst spawn prompt using `/sniper-compose` with the layers from the team YAML:

  /sniper-compose --process {compose.process} --cognitive {compose.cognitive} --name "Retro Analyst"
14-3: Run Retrospective
Spawn the retro agent with these context files:
- All completed story files from this sprint (from `docs/stories/`)
- The review gate output from Step 12
- Existing memory files (`.sniper/memory/conventions.yaml`, `.sniper/memory/anti-patterns.yaml`)
- The code changes from this sprint (git diff summary)
The retro agent should produce: .sniper/memory/retros/sprint-{N}-retro.yaml
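The retro file's exact schema is defined elsewhere; based on how Step 14-4 consumes it, each finding presumably carries at least a `recommendation` and a `confidence`. A hedged sketch:

```yaml
# Hypothetical sketch of .sniper/memory/retros/sprint-{N}-retro.yaml.
# Field names recommendation/confidence are inferred from Step 14-4;
# everything else is illustrative.
findings:
  - type: convention
    rule: "Use the repository pattern for all DB access"
    recommendation: codify
    confidence: high
  - type: anti-pattern
    description: "Hardcoded API base URLs in components"
    recommendation: codify
    confidence: medium
```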
14-4: Auto-Codify Findings
If `memory.auto_codify` is true in config:
- Read the retro output.
- For each finding with `recommendation: codify` AND `confidence: high`:
  - If it's a convention: append to `.sniper/memory/conventions.yaml` with status `confirmed`
  - If it's an anti-pattern: append to `.sniper/memory/anti-patterns.yaml` with status `confirmed`
- For findings with `confidence: medium`:
  - Append with status `candidate`
- Regenerate `.sniper/memory/summary.md`
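Appended entries might take the following shape. This is illustrative only; the actual schema of conventions.yaml is owned by the memory subsystem, and only the status values (`confirmed` / `candidate`) come from this step.

```yaml
# Illustrative entry appended to .sniper/memory/conventions.yaml.
# Field names other than status are hypothetical.
- id: conv-014
  rule: "Use the repository pattern for all DB access"
  status: confirmed    # or "candidate" for medium-confidence findings
  source: sprint-3-retro
```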
14-5: Show Retro Summary
Display the retrospective results:
============================================
Sprint {sprint_number} Retrospective
============================================
Stories analyzed: {count}
New Conventions (auto-codified):
conv-{XXX}: {rule}
New Anti-Patterns (auto-codified):
ap-{XXX}: {description}
Candidates (need confirmation):
{rule/description}
Estimation Calibration:
Overestimates: {stories}
Underestimates: {stories}
Pattern: {description}
Positive Patterns:
{pattern}
============================================
Print: "Review auto-codified entries with: /sniper-memory --conventions"
Print: "Promote candidates with: /sniper-memory --promote {id}"
Step 15: Present Results and Next Steps
============================================
SNIPER Sprint {sprint_number} Complete
============================================
Stories Completed: {count}/{total selected}
{story ID}: {title} [COMPLETE]
...
Test Results: {passed}/{total} passing
Remaining Stories (not yet implemented):
{count} stories remaining across {count} epics
Sprint Duration: {time elapsed}
============================================
Next Steps
============================================
1. Review the implemented code in your editor
2. Run `/sniper-sprint` again to start the next sprint
3. Run `/sniper-status` to see overall project progress
4. If all stories are complete, the project is ready for release
Remaining work estimate:
{count} stories, approximately {count} more sprints
============================================
IMPORTANT RULES
- You are the LEAD. You coordinate. You do NOT write code.
- ALWAYS let the user select which stories go into the sprint. Do not auto-select.
- Each story is assigned to exactly ONE implementation teammate. QA tests everything.
- QA tasks are ALWAYS blocked by their corresponding implementation tasks.
- API contract alignment between backend and frontend is CRITICAL. Facilitate it proactively.
- The review gate is STRICT. Do NOT auto-advance. ALWAYS wait for human review.
- If `$ARGUMENTS` contains "dry-run", perform Steps 0-5 only (plan the sprint without spawning) and present the plan.
- If `$ARGUMENTS` contains story IDs, use them as the sprint backlog without prompting for selection.
- If `$ARGUMENTS` contains "skip-review", IGNORE IT. The sprint gate is strict and cannot be skipped.
- Do NOT exceed `max_teammates` from config.yaml. Suggest splitting the sprint if too many would be needed.
- Honor `model_override` from sprint.yaml (ai-dev uses opus, others use sonnet).
- All file paths are relative to the project root.
- Do NOT automatically start the next sprint -- let the user initiate it.
- If this is not the first sprint, check previous sprint history and completed stories to avoid re-implementing.
