feat: add custom agents and quick-flow workflows, remove tech-spec track

Major Changes:
- Add sample custom agents demonstrating installable agent system
  - commit-poet: Generates semantic commit messages (BMAD Method repo sample)
  - toolsmith: Development tooling expert with knowledge base covering bundlers, deployment, docs, installers, modules, and tests (BMAD Method repo sample)
  - Both agents demonstrate custom agent architecture and are installable to projects via BMAD installer system
  - Include comprehensive installation guides and sidecar knowledge bases

- Add bmad-quick-flow methodology for rapid development
  - create-tech-spec: Direct technical specification workflow
  - quick-dev: Flexible execution workflow supporting both tech-spec-driven and direct instruction development
  - quick-flow-solo-dev (Barry): one-man-show agent specializing in the bmad-quick-flow methodology
  - Comprehensive documentation for quick-flow approach and solo development

- Remove deprecated tech-spec workflow track
  - Delete entire tech-spec workflow directory and templates
  - Remove quick-spec-flow.md documentation (replaced by quick-flow docs)
  - Clean up unused epic and story templates

- Fix custom agent installation across IDE installers
  - Repair antigravity and multiple IDE installers to properly support custom agents
  - Enable custom agent installation via quick installer, agent installer, regular installer, and special agent installer
  - All installation methods now accessible via npx with full documentation

Infrastructure:
- Update BMM module configurations and team setups
- Modify workflow status paths to support quick-flow integration
- Reorganize documentation with new agent and workflow guides
- Add custom/ directory for user customizations
- Update platform codes and installer configurations
Author: Brian Madison
Date: 2025-11-23 08:50:36 -06:00
Parent: 6907d44810
Commit: 4308b36d4d
63 changed files with 2556 additions and 3019 deletions


@@ -1,217 +0,0 @@
# Tech-Spec Workflow Validation Checklist
**Purpose**: Validate tech-spec workflow outputs are context-rich, definitive, complete, and implementation-ready.
**Scope**: Quick-flow software projects (1-5 stories)
**Expected Outputs**: tech-spec.md + epics.md + story files (1-5 stories)
**New Standard**: Tech-spec should be comprehensive enough to replace story-context for most quick-flow projects
---
## 1. Output Files Exist
- [ ] tech-spec.md created in output folder
- [ ] epics.md created (minimal for 1 story, detailed for multiple)
- [ ] Story file(s) created in sprint_artifacts
- Naming convention: story-{epic-slug}-N.md (where N = 1 to story_count)
- 1 story: story-{epic-slug}-1.md
- Multiple stories: story-{epic-slug}-1.md through story-{epic-slug}-N.md
- [ ] bmm-workflow-status.yaml updated (if not standalone mode)
- [ ] No unfilled {{template_variables}} in any files
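
As an illustrative aside (the function name is ours, not part of the workflow), the filename convention above can be sketched as:

```python
def story_filenames(epic_slug: str, story_count: int) -> list[str]:
    # Follows the convention story-{epic-slug}-N.md, with N = 1..story_count
    return [f"story-{epic_slug}-{n}.md" for n in range(1, story_count + 1)]
```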
---
## 2. Context Gathering (NEW - CRITICAL)
### Document Discovery
- [ ] **Existing documents loaded**: Product brief, research docs found and incorporated (if they exist)
- [ ] **Document-project output**: Checked for {output_folder}/index.md (brownfield codebase map)
- [ ] **Sharded documents**: If sharded versions found, ALL sections loaded and synthesized
- [ ] **Context summary**: loaded_documents_summary lists all sources used
### Project Stack Detection
- [ ] **Setup files identified**: package.json, requirements.txt, or equivalent found and parsed
- [ ] **Framework detected**: Exact framework name and version captured (e.g., "Express 4.18.2")
- [ ] **Dependencies extracted**: All production dependencies with specific versions
- [ ] **Dev tools identified**: TypeScript, Jest, ESLint, pytest, etc. with versions
- [ ] **Scripts documented**: Available npm/pip/etc scripts identified
- [ ] **Stack summary**: project_stack_summary is complete and accurate
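
A minimal sketch of that detection for a Node project (the function name and output shape are assumptions, not the installer's actual API):

```python
import json
from pathlib import Path

def summarize_node_stack(package_json_path: str) -> dict:
    # Parse package.json and surface exact versions for the stack summary
    data = json.loads(Path(package_json_path).read_text())
    return {
        "dependencies": data.get("dependencies", {}),
        "dev_tools": data.get("devDependencies", {}),
        "scripts": sorted(data.get("scripts", {})),
    }
```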
### Brownfield Analysis (if applicable)
- [ ] **Directory structure**: Main code directories identified and documented
- [ ] **Code patterns**: Dominant patterns identified (class-based, functional, MVC, etc.)
- [ ] **Naming conventions**: Existing conventions documented (camelCase, snake_case, etc.)
- [ ] **Key modules**: Important existing modules/services identified
- [ ] **Testing patterns**: Test framework and patterns documented
- [ ] **Structure summary**: existing_structure_summary is comprehensive
---
## 3. Tech-Spec Definitiveness (CRITICAL)
### No Ambiguity Allowed
- [ ] **Zero "or" statements**: NO "use X or Y", "either A or B", "options include"
- [ ] **Specific versions**: All frameworks, libraries, tools have EXACT versions
- ✅ GOOD: "Python 3.11", "React 18.2.0", "winston v3.8.2 (from package.json)"
- ❌ BAD: "Python 2 or 3", "React 18+", "a logger like pino or winston"
- [ ] **Definitive decisions**: Every technical choice is final, not a proposal
- [ ] **Stack-aligned**: Decisions reference detected project stack
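
A rough lint for this check might look like the following (the pattern list is a sketch and will miss cases; it is not part of the workflow):

```python
import re

# Flags common non-definitive phrasing: "X or Y", "either", "18+", "options include"
AMBIGUOUS = re.compile(r"\b(?:or|either)\b|options include|\d+\+", re.IGNORECASE)

def is_definitive(line: str) -> bool:
    return AMBIGUOUS.search(line) is None
```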
### Implementation Clarity
- [ ] **Source tree changes**: EXACT file paths with CREATE/MODIFY/DELETE actions
- ✅ GOOD: "src/services/UserService.ts - MODIFY - Add validateEmail() method"
- ❌ BAD: "Update some files in the services folder"
- [ ] **Technical approach**: Describes SPECIFIC implementation using detected stack
- [ ] **Existing patterns**: Documents brownfield patterns to follow (if applicable)
- [ ] **Integration points**: Specific modules, APIs, services identified
---
## 4. Context-Rich Content (NEW)
### Context Section
- [ ] **Available Documents**: Lists all loaded documents
- [ ] **Project Stack**: Complete framework and dependency information
- [ ] **Existing Codebase Structure**: Brownfield analysis or greenfield notation
### The Change Section
- [ ] **Problem Statement**: Clear, specific problem definition
- [ ] **Proposed Solution**: Concrete solution approach
- [ ] **Scope In/Out**: Clear boundaries defined
### Development Context Section
- [ ] **Relevant Existing Code**: References to specific files and line numbers (brownfield)
- [ ] **Framework Dependencies**: Complete list with exact versions from project
- [ ] **Internal Dependencies**: Internal modules listed
- [ ] **Configuration Changes**: Specific config file updates identified
### Developer Resources Section
- [ ] **File Paths Reference**: Complete list of all files involved
- [ ] **Key Code Locations**: Functions, classes, modules with file:line references
- [ ] **Testing Locations**: Specific test directories and patterns
- [ ] **Documentation Updates**: Docs that need updating identified
---
## 5. Story Quality
### Story Format
- [ ] All stories use "As a [role], I want [capability], so that [benefit]" format
- [ ] Each story has numbered acceptance criteria
- [ ] Tasks reference AC numbers: (AC: #1), (AC: #2)
- [ ] Dev Notes section links to tech-spec.md
### Story Context Integration (NEW)
- [ ] **Tech-Spec Reference**: Story explicitly references tech-spec.md as primary context
- [ ] **Dev Agent Record**: Includes all required sections (Context Reference, Agent Model, etc.)
- [ ] **Test Results section**: Placeholder ready for dev execution
- [ ] **Review Notes section**: Placeholder ready for code review
### Story Sequencing (If Level 1)
- [ ] **Vertical slices**: Each story delivers complete, testable functionality
- [ ] **Sequential ordering**: Stories in logical progression
- [ ] **No forward dependencies**: No story depends on later work
- [ ] Each story leaves system in working state
### Coverage
- [ ] Story acceptance criteria derived from tech-spec
- [ ] Story tasks map to tech-spec implementation guide
- [ ] Files in stories match tech-spec source tree
- [ ] Key code references align with tech-spec Developer Resources
---
## 6. Epic Quality (All Projects)
- [ ] **Epic title**: User-focused outcome (not implementation detail)
- [ ] **Epic slug**: Clean kebab-case slug (2-3 words)
- [ ] **Epic goal**: Clear purpose and value statement
- [ ] **Epic scope**: Boundaries clearly defined
- [ ] **Success criteria**: Measurable outcomes
- [ ] **Story map** (if multiple stories): Visual representation of epic → stories
- [ ] **Implementation sequence** (if multiple stories): Logical story ordering with dependencies
- [ ] **Tech-spec reference**: Links back to tech-spec.md
- [ ] **Detail level appropriate**: Minimal for 1 story, detailed for multiple
---
## 7. Workflow Status Integration
- [ ] bmm-workflow-status.yaml updated (if exists)
- [ ] Current phase reflects tech-spec completion
- [ ] Progress percentage updated appropriately
- [ ] Next workflow clearly identified
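
For example, a completed run might leave a fragment like this in bmm-workflow-status.yaml (field names here are illustrative, not a guaranteed schema):

```yaml
workflow_status:
  tech-spec: docs/tech-spec.md   # a file path in place of a status marks completion
  current_phase: tech-spec-complete
  progress_percent: 60
  next_workflow: dev-story
```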
---
## 8. Implementation Readiness (NEW - ENHANCED)
### Can Developer Start Immediately?
- [ ] **All context available**: Brownfield analysis + stack details + existing patterns
- [ ] **No research needed**: Developer doesn't need to hunt for framework versions or patterns
- [ ] **Specific file paths**: Developer knows exactly which files to create/modify
- [ ] **Code references**: Can find similar code to reference (brownfield)
- [ ] **Testing clear**: Knows what to test and how
- [ ] **Deployment documented**: Knows how to deploy and rollback
### Tech-Spec Replaces Story-Context?
- [ ] **Comprehensive enough**: Contains all info typically in story-context XML
- [ ] **Brownfield analysis**: If applicable, includes codebase reconnaissance
- [ ] **Framework specifics**: Exact versions and usage patterns
- [ ] **Pattern guidance**: Shows examples of existing patterns to follow
---
## 9. Critical Failures (Auto-Fail)
- [ ] **Non-definitive technical decisions** (any "option A or B" or vague choices)
- [ ] **Missing versions** (framework/library without specific version)
- [ ] **Context not gathered** (didn't check for document-project, setup files, etc.)
- [ ] **Stack mismatch** (decisions don't align with detected project stack)
- [ ] **Stories don't match template** (missing Dev Agent Record sections)
- [ ] **Missing tech-spec sections** (required section missing from enhanced template)
- [ ] **Stories have forward dependencies** (would break sequential implementation)
- [ ] **Vague source tree** (file changes not specific with actions)
- [ ] **No brownfield analysis** (when document-project output exists but wasn't used)
---
## Validation Notes
**Document any findings:**
- **Context Gathering Score**: [Comprehensive / Partial / Insufficient]
- **Definitiveness Score**: [All definitive / Some ambiguity / Significant ambiguity]
- **Brownfield Integration**: [N/A - Greenfield / Excellent / Partial / Missing]
- **Stack Alignment**: [Perfect / Good / Partial / None]
**Strengths:**
**Issues to address:**
**Recommended actions:**
**Ready for implementation?** [Yes / No - explain]
**Can skip story-context?** [Yes - tech-spec is comprehensive / No - additional context needed / N/A]
---
_The tech-spec should be a RICH CONTEXT DOCUMENT that gives developers everything they need without requiring separate context generation._


@@ -1,74 +0,0 @@
# {{project_name}} - Epic Breakdown
**Date:** {{date}}
**Project Level:** {{project_level}}
---
<!-- Repeat for each epic (N = 1, 2, 3...) -->
## Epic {{N}}: {{epic_title_N}}
**Slug:** {{epic_slug_N}}
### Goal
{{epic_goal_N}}
### Scope
{{epic_scope_N}}
### Success Criteria
{{epic_success_criteria_N}}
### Dependencies
{{epic_dependencies_N}}
---
## Story Map - Epic {{N}}
{{story_map_N}}
---
## Stories - Epic {{N}}
<!-- Repeat for each story (M = 1, 2, 3...) within epic N -->
### Story {{N}}.{{M}}: {{story_title_N_M}}
As a {{user_type}},
I want {{capability}},
So that {{value_benefit}}.
**Acceptance Criteria:**
**Given** {{precondition}}
**When** {{action}}
**Then** {{expected_outcome}}
**And** {{additional_criteria}}
**Prerequisites:** {{dependencies_on_previous_stories}}
**Technical Notes:** {{implementation_guidance}}
**Estimated Effort:** {{story_points}} points ({{time_estimate}})
<!-- End story repeat -->
---
## Implementation Timeline - Epic {{N}}
**Total Story Points:** {{total_points_N}}
**Estimated Timeline:** {{estimated_timeline_N}}
---
<!-- End epic repeat -->


@@ -1,436 +0,0 @@
# Unified Epic and Story Generation
<critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>
<workflow>
<critical>This generates epic + stories for ALL quick-flow projects</critical>
<critical>Always generates: epics.md + story files (1-5 stories based on {{story_count}})</critical>
<critical>Runs AFTER tech-spec.md completion</critical>
<critical>Story format MUST match create-story template for compatibility with story-context and dev-story workflows</critical>
<step n="1" goal="Load tech spec and extract implementation context">
<action>Read the completed tech-spec.md file from {default_output_file}</action>
<action>Load bmm-workflow-status.yaml from {workflow-status} (if exists)</action>
<action>Get story_count from workflow variables (1-5)</action>
<action>Ensure {sprint_artifacts} directory exists</action>
<action>Extract from tech-spec structure:
**From "The Change" section:**
- Problem statement and solution overview
- Scope (in/out)
**From "Implementation Details" section:**
- Source tree changes
- Technical approach
- Integration points
**From "Implementation Guide" section:**
- Implementation steps
- Testing strategy
- Acceptance criteria
- Time estimates
**From "Development Context" section:**
- Framework dependencies with versions
- Existing code references
- Internal dependencies
**From "Developer Resources" section:**
- File paths
- Key code locations
- Testing locations
Use this rich context to generate comprehensive, implementation-ready epic and stories.
</action>
</step>
<step n="2" goal="Generate epic slug and structure">
<action>Create epic based on the overall feature/change from tech-spec</action>
<action>Derive epic slug from the feature name:
- Use 2-3 words max
- Kebab-case format
- User-focused, not implementation-focused
Examples:
- "OAuth Integration" → "oauth-integration"
- "Fix Login Bug" → "login-fix"
- "User Profile Page" → "user-profile"
</action>
<action>Store as {{epic_slug}} - this will be used for all story filenames</action>
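The mechanical part of that rule can be sketched as follows (a sketch only - the judgment calls in the examples, such as reordering "Fix Login Bug" to "login-fix", still require the agent):

```python
import re

def derive_epic_slug(feature_name: str, max_words: int = 3) -> str:
    # Kebab-case, lowercased, capped at 2-3 words
    words = re.findall(r"[a-z0-9]+", feature_name.lower())
    return "-".join(words[:max_words])
```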
<action>Adapt epic detail to story count:
**For single story (story_count == 1):**
- Epic is minimal - just enough structure
- Goal: Brief statement of what's being accomplished
- Scope: High-level boundary
- Success criteria: Core outcomes
**For multiple stories (story_count > 1):**
- Epic is detailed - full breakdown
- Goal: Comprehensive purpose and value statement
- Scope: Clear boundaries with in/out examples
- Success criteria: Measurable, testable outcomes
- Story map: Visual representation of epic → stories
- Implementation sequence: Logical ordering with dependencies
</action>
</step>
<step n="3" goal="Generate epic document">
<action>Initialize {epics_file} using {epics_template}</action>
<action>Populate epic metadata from tech-spec context:
**Epic Title:** User-facing outcome (not implementation detail)
- Good: "OAuth Integration", "Login Bug Fix", "Icon Reliability"
- Bad: "Update recommendedLibraries.ts", "Refactor auth service"
**Epic Goal:** Why this matters to users/business
**Epic Scope:** Clear boundaries from tech-spec scope section
**Epic Success Criteria:** Measurable outcomes from tech-spec acceptance criteria
**Dependencies:** From tech-spec integration points and dependencies
</action>
<template-output file="{epics_file}">project_name</template-output>
<template-output file="{epics_file}">date</template-output>
<template-output file="{epics_file}">epic_title</template-output>
<template-output file="{epics_file}">epic_slug</template-output>
<template-output file="{epics_file}">epic_goal</template-output>
<template-output file="{epics_file}">epic_scope</template-output>
<template-output file="{epics_file}">epic_success_criteria</template-output>
<template-output file="{epics_file}">epic_dependencies</template-output>
</step>
<step n="4" goal="Intelligently break down into stories">
<action>Analyze tech-spec implementation steps and create story breakdown
**For story_count == 1:**
- Create single comprehensive story covering all implementation
- Title: Focused on the deliverable outcome
- Tasks: Map directly to tech-spec implementation steps
- Estimated points: Typically 1-5 points
**For story_count > 1:**
- Break implementation into logical story boundaries
- Each story must be:
- Independently valuable (delivers working functionality)
- Testable (has clear acceptance criteria)
- Sequentially ordered (no forward dependencies)
- Right-sized (prefer 2-4 stories over many tiny ones)
**Story Sequencing Rules (CRITICAL):**
1. Foundation → Build → Test → Polish
2. Database → API → UI
3. Backend → Frontend
4. Core → Enhancement
5. NO story can depend on a later story!
Validate sequence: Each story N should only depend on stories 1...N-1
</action>
<action>For each story position (1 to {{story_count}}):
1. **Determine story scope from tech-spec tasks**
- Group related implementation steps
- Ensure story leaves system in working state
2. **Create story title**
- User-focused deliverable
- Active, clear language
- Good: "OAuth Backend Integration", "OAuth UI Components"
- Bad: "Write some OAuth code", "Update files"
3. **Extract acceptance criteria**
- From tech-spec testing strategy and acceptance criteria
- Must be numbered (AC #1, AC #2, etc.)
- Must be specific and testable
- Use Given/When/Then format when applicable
4. **Map tasks to implementation steps**
- Break down tech-spec implementation steps for this story
- Create checkbox list
- Reference AC numbers: (AC: #1), (AC: #2)
5. **Estimate story points**
- 1 point = < 1 day (2-4 hours)
- 2 points = 1-2 days
- 3 points = 2-3 days
- 5 points = 3-5 days
- Total across all stories should align with tech-spec estimates
</action>
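The point scale above can be sanity-checked mechanically - a sketch, with the valid values taken from the list above:

```python
VALID_POINTS = {1, 2, 3, 5}

def validate_estimates(story_points: list[int]) -> int:
    # Reject off-scale estimates, return the epic total
    for p in story_points:
        if p not in VALID_POINTS:
            raise ValueError(f"off-scale estimate: {p}")
    return sum(story_points)
```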
</step>
<step n="5" goal="Generate story files">
<for-each story="1 to story_count">
<action>Set story_filename = "story-{{epic_slug}}-{{n}}.md"</action>
<action>Set story_path = "{sprint_artifacts}/{{story_filename}}"</action>
<action>Create story file using {user_story_template}</action>
<action>Populate story with:
**Story Header:**
- N.M format (where N is always 1 for quick-flow, M is story number)
- Title: User-focused deliverable
- Status: Draft
**User Story:**
- As a [role] (developer, user, admin, system, etc.)
- I want [capability/change]
- So that [benefit/value]
**Acceptance Criteria:**
- Numbered list (AC #1, AC #2, ...)
- Specific, measurable, testable
- Derived from tech-spec testing strategy and acceptance criteria
- Cover all success conditions for this story
**Tasks/Subtasks:**
- Checkbox list mapped to tech-spec implementation steps
- Each task references AC numbers: (AC: #1)
- Include explicit testing tasks
**Technical Summary:**
- High-level approach for this story
- Key technical decisions
- Files/modules involved
**Project Structure Notes:**
- files_to_modify: From tech-spec "Developer Resources → File Paths"
- test_locations: From tech-spec "Developer Resources → Testing Locations"
- story_points: Estimated effort
- dependencies: Prerequisites (other stories, systems, data)
**Key Code References:**
- From tech-spec "Development Context → Relevant Existing Code"
- From tech-spec "Developer Resources → Key Code Locations"
- Specific file:line references when available
**Context References:**
- Link to tech-spec.md (primary context document)
- Note: Tech-spec contains brownfield analysis, framework versions, patterns, etc.
**Dev Agent Record:**
- Empty sections (populated during dev-story execution)
- Agent Model Used
- Debug Log References
- Completion Notes
- Files Modified
- Test Results
**Review Notes:**
- Empty section (populated during code review)
</action>
<template-output file="{{story_path}}">story_number</template-output>
<template-output file="{{story_path}}">story_title</template-output>
<template-output file="{{story_path}}">user_role</template-output>
<template-output file="{{story_path}}">capability</template-output>
<template-output file="{{story_path}}">benefit</template-output>
<template-output file="{{story_path}}">acceptance_criteria</template-output>
<template-output file="{{story_path}}">tasks_subtasks</template-output>
<template-output file="{{story_path}}">technical_summary</template-output>
<template-output file="{{story_path}}">files_to_modify</template-output>
<template-output file="{{story_path}}">test_locations</template-output>
<template-output file="{{story_path}}">story_points</template-output>
<template-output file="{{story_path}}">time_estimate</template-output>
<template-output file="{{story_path}}">dependencies</template-output>
<template-output file="{{story_path}}">existing_code_references</template-output>
</for-each>
</step>
<step n="6" goal="Generate story map and finalize epic" if="story_count > 1">
<action>Create visual story map showing epic → stories hierarchy
Include:
- Epic title at top
- Stories listed with point estimates
- Dependencies noted
- Sequence validation confirmation
Example:
```
Epic: OAuth Integration (8 points)
├── Story 1.1: OAuth Backend (3 points)
│ Dependencies: None
├── Story 1.2: OAuth UI Components (3 points)
│ Dependencies: Story 1.1
└── Story 1.3: OAuth Testing & Polish (2 points)
Dependencies: Stories 1.1, 1.2
```
</action>
<action>Calculate totals:
- Total story points across all stories
- Estimated timeline (typically 1-2 points per day)
</action>
<action>Append to {epics_file}:
- Story summaries
- Story map visual
- Implementation sequence
- Total points and timeline
</action>
<template-output file="{epics_file}">story_map</template-output>
<template-output file="{epics_file}">story_summaries</template-output>
<template-output file="{epics_file}">total_points</template-output>
<template-output file="{epics_file}">estimated_timeline</template-output>
<template-output file="{epics_file}">implementation_sequence</template-output>
</step>
<step n="7" goal="Validate story quality">
<critical>Always run validation - NOT optional!</critical>
<action>Validate all stories against quality standards:
**Story Sequence Validation (CRITICAL):**
- For each story N, verify it doesn't depend on story N+1 or later
- Check: Can stories be implemented in order 1→2→3→...?
- If sequence invalid: Identify problem, propose reordering, ask user to confirm
**Acceptance Criteria Quality:**
- All AC are numbered (AC #1, AC #2, ...)
- Each AC is specific and testable (no "works well", "is good", "performs fast")
- AC use Given/When/Then or equivalent structure
- All success conditions are covered
**Story Completeness:**
- All stories map to tech-spec implementation steps
- Story points align with tech-spec time estimates
- Dependencies are clearly documented
- Each story has testable AC
- Files and locations reference tech-spec developer resources
**Template Compliance:**
- All required sections present
- Dev Agent Record sections exist (even if empty)
- Context references link to tech-spec.md
- Story numbering follows N.M format
</action>
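The forward-dependency check reduces to a simple rule - story N may depend only on stories 1..N-1 - which could be sketched as:

```python
def find_forward_dependencies(deps: dict[int, list[int]]) -> list[tuple[int, int]]:
    # Returns (story, bad_dependency) pairs where a story depends on itself or later work
    return [(s, d) for s, ds in sorted(deps.items()) for d in ds if d >= s]
```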
<check if="validation issues found">
<output>⚠️ **Story Validation Issues:**
{{issues_list}}
**Recommended Fixes:**
{{fixes}}
</output>
<ask>Apply these fixes automatically? (yes/no)</ask>
<check if="yes">
<action>Apply fixes (reorder stories, rewrite vague AC, add missing details)</action>
<action>Re-validate</action>
<output>✅ Validation passed after fixes!</output>
</check>
</check>
<check if="validation passes">
<output>✅ **Story Validation Passed!**
**Quality Scores:**
- Sequence: ✅ Valid (no forward dependencies)
- AC Quality: ✅ All specific and testable
- Completeness: ✅ All tech spec tasks covered
- Template Compliance: ✅ All sections present
Stories are implementation-ready!</output>
</check>
</step>
<step n="8" goal="Update workflow status and finalize">
<action>Update bmm-workflow-status.yaml (if exists):
- Mark tech-spec as complete
- Initialize story sequence tracking
- Set first story as TODO
- Track epic slug and story count
</action>
<output>**✅ Epic and Stories Generated!**
**Epic:** {{epic_title}} ({{epic_slug}})
**Total Stories:** {{story_count}}
{{#if story_count > 1}}**Total Points:** {{total_points}}
**Estimated Timeline:** {{estimated_timeline}}{{/if}}
**Files Created:**
- `{epics_file}` - Epic structure{{#if story_count == 1}} (minimal){{/if}}
- `{sprint_artifacts}/story-{{epic_slug}}-1.md`{{#if story_count > 1}}
- `{sprint_artifacts}/story-{{epic_slug}}-2.md`{{/if}}{{#if story_count > 2}}
- Through story-{{epic_slug}}-{{story_count}}.md{{/if}}
**What's Next:**
All stories reference tech-spec.md as primary context. You can proceed directly to development with the DEV agent!
Story files are ready for:
- Direct implementation (dev-story workflow)
- Optional context generation (story-context workflow for complex cases)
- Sprint planning organization (sprint-planning workflow for multi-story coordination)
</output>
</step>
</workflow>


@@ -1,980 +0,0 @@
# Tech-Spec Workflow - Context-Aware Technical Planning (quick-flow)
<workflow>
<critical>The workflow execution engine is governed by: {project-root}/{bmad_folder}/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate all responses in {communication_language} and language MUST be tailored to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>
<critical>This workflow is for quick-flow efforts - tech-spec with context-rich story generation</critical>
<critical>Quick Flow: tech-spec + epic with 1-5 stories (always generates epic structure)</critical>
<critical>LIVING DOCUMENT: Write to tech-spec.md continuously as you discover - never wait until the end</critical>
<critical>CONTEXT IS KING: Gather ALL available context before generating specs</critical>
<critical>DOCUMENT OUTPUT: Technical, precise, definitive. Specific versions only. User skill level ({user_skill_level}) affects conversation style ONLY, not document content.</critical>
<critical>Input documents specified in workflow.yaml input_file_patterns - workflow engine handles fuzzy matching, whole vs sharded document discovery automatically</critical>
<critical>⚠️ ABSOLUTELY NO TIME ESTIMATES - NEVER mention hours, days, weeks, months, or ANY time-based predictions. AI has fundamentally changed development speed - what once took teams weeks/months can now be done by one person in hours. DO NOT give ANY time estimates whatsoever.</critical>
<critical>⚠️ CHECKPOINT PROTOCOL: After EVERY <template-output> tag, you MUST follow workflow.xml substep 2c: SAVE content to file immediately → SHOW checkpoint separator (━━━━━━━━━━━━━━━━━━━━━━━) → DISPLAY generated content → PRESENT options [a]Advanced Elicitation/[c]Continue/[p]Party-Mode/[y]YOLO → WAIT for user response. Never batch saves or skip checkpoints.</critical>
<step n="0" goal="Validate workflow readiness and detect project level" tag="workflow-status">
<action>Check if {output_folder}/bmm-workflow-status.yaml exists</action>
<check if="status file not found">
<output>No workflow status file found. Tech-spec workflow can run standalone or as part of BMM workflow path.</output>
<output>**Recommended:** Run `workflow-init` first for project context tracking and workflow sequencing.</output>
<output>**Quick Start:** Continue in standalone mode - perfect for rapid prototyping and quick changes!</output>
<ask>Continue in standalone mode or exit to run workflow-init? (continue/exit)</ask>
<check if="continue">
<action>Set standalone_mode = true</action>
<output>Great! Let's quickly configure your project...</output>
<ask>How many user stories do you think this work requires?
**Single Story** - Simple change (bug fix, small isolated feature, single file change)
→ Generates: tech-spec + epic (minimal) + 1 story
→ Example: "Fix login validation bug" or "Add email field to user form"
**Multiple Stories (2-5)** - Coherent feature (multiple related changes, small feature set)
→ Generates: tech-spec + epic (detailed) + 2-5 stories
→ Example: "Add OAuth integration" or "Build user profile page"
Enter **1** for single story, or **2-5** for number of stories you estimate</ask>
<action>Capture user response as story_count (1-5)</action>
<action>Validate: If not 1-5, ask for clarification. If > 5, suggest using full BMad Method instead</action>
<ask if="not already known greenfield vs brownfield">Is this a **greenfield** (new/empty codebase) or **brownfield** (existing codebase) project?
**Greenfield** - Starting fresh, no existing code aside from starter templates
**Brownfield** - Adding to or modifying existing functional code or project
Enter **greenfield** or **brownfield**:</ask>
<action>Capture user response as field_type (greenfield or brownfield)</action>
<action>Validate: If not greenfield or brownfield, ask again</action>
<output>Perfect! Running as:
- **Story Count:** {{story_count}} {{#if story_count == 1}}story (minimal epic){{else}}stories (detailed epic){{/if}}
- **Field Type:** {{field_type}}
- **Mode:** Standalone (no status file tracking)
Let's build your tech-spec!</output>
</check>
<check if="exit">
<action>Exit workflow</action>
</check>
</check>
<check if="status file found">
<action>Load the FULL file: {workflow-status}</action>
<action>Parse workflow_status section</action>
<action>Check status of "tech-spec" workflow</action>
<action>Get selected_track from YAML metadata indicating this is quick-flow-greenfield or quick-flow-brownfield</action>
<action>Get field_type from YAML metadata (greenfield or brownfield)</action>
<action>Find first non-completed workflow (next expected workflow)</action>
<check if="selected_track is NOT quick-flow-greenfield AND NOT quick-flow-brownfield">
<output>**Incorrect Workflow for Track {{selected_track}}**
Tech-spec is for quick-flow projects. **Correct workflow:** `create-prd` (PM agent). Exit at this point unless you want to force-run this workflow.
</output>
</check>
<check if="tech-spec status is file path (already completed)">
<output>⚠️ Tech-spec already completed: {{tech-spec status}}</output>
<ask>Re-running will overwrite the existing tech-spec. Continue? (y/n)</ask>
<check if="n">
<output>Exiting. Use workflow-status to see your next step.</output>
<action>Exit workflow</action>
</check>
</check>
<check if="tech-spec is not the next expected workflow">
<output>⚠️ Next expected workflow: {{next_workflow}}. Tech-spec is out of sequence.</output>
<ask>Continue with tech-spec anyway? (y/n)</ask>
<check if="n">
<output>Exiting. Run {{next_workflow}} instead.</output>
<action>Exit workflow</action>
</check>
</check>
<action>Set standalone_mode = false</action>
</check>
</step>
<step n="0.5" goal="Discover and load input documents">
<invoke-protocol name="discover_inputs" />
<note>After discovery, these content variables are available: {product_brief_content}, {research_content}, {document_project_content}</note>
</step>
<step n="1" goal="Comprehensive context discovery - gather everything available">
<action>Welcome {user_name} warmly and explain what we're about to do:
"I'm going to gather all available context about your project before we dive into the technical spec. The following content has been auto-loaded:
- Product briefs and research: {product_brief_content}, {research_content}
- Brownfield codebase documentation: {document_project_content} (loaded via INDEX_GUIDED strategy)
- Your project's tech stack and dependencies
- Existing code patterns and structure
This ensures the tech-spec is grounded in reality and gives developers everything they need."
</action>
<action>**PHASE 1: Load Existing Documents**
Search for and load (using dual-strategy: whole first, then sharded):
1. **Product Brief:**
- Search pattern: {output_folder}/*brief*.md
- Sharded: {output_folder}/*brief*/index.md
- If found: Load completely and extract key context
2. **Research Documents:**
- Search pattern: {output_folder}/*research*.md
- Sharded: {output_folder}/*research*/index.md
- If found: Load completely and extract insights
3. **Document-Project Output (CRITICAL for brownfield):**
- Always check: {output_folder}/index.md
- If found: This is the brownfield codebase map - load ALL shards!
- Extract: File structure, key modules, existing patterns, naming conventions
Create a summary of what was found and ask the user whether there are other documents or information to consider before proceeding:
- List of loaded documents
- Key insights from each
- Brownfield vs greenfield determination
</action>
<action>**PHASE 2: Intelligently Detect Project Stack**
Use your comprehensive knowledge as a coding-capable LLM to analyze the project:
**Discover Setup Files:**
- Search {project-root} for dependency manifests (package.json, requirements.txt, Gemfile, go.mod, Cargo.toml, composer.json, pom.xml, build.gradle, pyproject.toml, etc.)
- Adapt to ANY project type - you know the ecosystem conventions
**Extract Critical Information:**
1. Framework name and EXACT version (e.g., "React 18.2.0", "Django 4.2.1")
2. All production dependencies with specific versions
3. Dev tools and testing frameworks (Jest, pytest, ESLint, etc.)
4. Available build/test scripts
5. Project type (web app, API, CLI, library, etc.)
**Assess Currency:**
- Identify if major dependencies are outdated (>2 years old)
- Use WebSearch to find current recommended versions if needed
- Note migration complexity in your summary
**For Greenfield Projects:**
<check if="field_type == greenfield">
<action>Use WebSearch to discover current best practices and official starter templates</action>
<action>Recommend appropriate starters based on detected framework (or user's intended stack)</action>
<action>Present benefits conversationally: setup time saved, modern patterns, testing included</action>
<ask>Would you like to use a starter template? (yes/no/show-me-options)</ask>
<action>Capture preference and include in implementation stack if accepted</action>
</check>
**Trust Your Intelligence:**
You understand project ecosystems deeply. Adapt your analysis to any stack - don't be constrained by examples. Extract what matters for developers.
Store comprehensive findings as {{project_stack_summary}}
</action>
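The manifest-scanning pass described above can be sketched as a small script. This is illustrative only, not part of the workflow engine; it assumes a Node-style `package.json` at the project root, and the manifest list would need extending per ecosystem:

```python
import json
from pathlib import Path

# Manifest files that identify a project's ecosystem (extend as needed)
MANIFESTS = ["package.json", "requirements.txt", "pyproject.toml", "go.mod", "Cargo.toml"]

def detect_stack(project_root: str) -> dict:
    """Return detected manifests and, for package.json, exact dependency versions."""
    root = Path(project_root)
    found = [m for m in MANIFESTS if (root / m).exists()]
    summary = {"manifests": found, "dependencies": {}}
    if "package.json" in found:
        pkg = json.loads((root / "package.json").read_text())
        # Merge prod and dev dependencies, keeping the exact declared versions
        summary["dependencies"] = {
            **pkg.get("dependencies", {}),
            **pkg.get("devDependencies", {}),
        }
    return summary
```

The point of the sketch is the output shape: exact versions straight from the manifest, which is what makes the later "definitive decisions" phase possible.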
<action>**PHASE 3: Brownfield Codebase Reconnaissance** (if applicable)
<check if="field_type == brownfield OR document-project output found">
Analyze the existing project structure:
1. **Directory Structure:**
- Identify main code directories (src/, lib/, app/, components/, services/)
- Note organization patterns (feature-based, layer-based, domain-driven)
- Identify test directories and patterns
2. **Code Patterns:**
- Look for dominant patterns (class-based, functional, MVC, microservices)
- Identify naming conventions (camelCase, snake_case, PascalCase)
- Note file organization patterns
3. **Key Modules/Services:**
- Identify major modules or services already in place
- Note entry points (main.js, app.py, index.ts)
- Document important utilities or shared code
4. **Testing Patterns & Standards (CRITICAL):**
- Identify test framework in use (from package.json/requirements.txt)
- Note test file naming patterns (.test.js, test.py, .spec.ts, Test.java)
- Document test organization (tests/, __tests__, spec/, test/)
- Look for test configuration files (jest.config.js, pytest.ini, .rspec)
- Check for coverage requirements (in CI config, test scripts)
- Identify mocking/stubbing libraries (jest.mock, unittest.mock, sinon)
- Note assertion styles (expect, assert, should)
5. **Code Style & Conventions (MUST CONFORM):**
- Check for linter config (.eslintrc, .pylintrc, rubocop.yml)
- Check for formatter config (.prettierrc, .black, .editorconfig)
- Identify code style:
- Semicolons: yes/no (JavaScript/TypeScript)
- Quotes: single/double
- Indentation: spaces/tabs, size
- Line length limits
- Import/export patterns (named vs default, organization)
- Error handling patterns (try/catch, Result types, error classes)
- Logging patterns (console, winston, logging module, specific formats)
- Documentation style (JSDoc, docstrings, YARD, JavaDoc)
Store this as {{existing_structure_summary}}
**CRITICAL: Confirm Conventions with User**
<ask>I've detected these conventions in your codebase:
**Code Style:**
{{detected_code_style}}
**Test Patterns:**
{{detected_test_patterns}}
**File Organization:**
{{detected_file_organization}}
Should I follow these existing conventions for the new code?
Enter **yes** to conform to existing patterns, or **no** if you want to establish new standards:</ask>
<action>Capture user response as conform_to_conventions (yes/no)</action>
<check if="conform_to_conventions == no">
<ask>What conventions would you like to use instead? (Or should I suggest modern best practices?)</ask>
<action>Capture new conventions or use WebSearch for current best practices</action>
</check>
<action>Store confirmed conventions as {{existing_conventions}}</action>
</check>
<check if="field_type == greenfield">
<action>Note: Greenfield project - no existing code to analyze</action>
<action>Set {{existing_structure_summary}} = "Greenfield project - new codebase"</action>
</check>
</action>
<action>**PHASE 4: Synthesize Context Summary**
Create {{loaded_documents_summary}} that includes:
- Documents found and loaded
- Brownfield vs greenfield status
- Tech stack detected (or "To be determined" if greenfield)
- Existing patterns identified (or "None - greenfield" if applicable)
Present this summary to {user_name} conversationally:
"Here's what I found about your project:
**Documents Available:**
[List what was found]
**Project Type:**
[Brownfield with X framework Y version OR Greenfield - new project]
**Existing Stack:**
[Framework and dependencies OR "To be determined"]
**Code Structure:**
[Existing patterns OR "New codebase"]
This gives me a solid foundation for creating a context-rich tech spec!"
</action>
<template-output>loaded_documents_summary</template-output>
<template-output>project_stack_summary</template-output>
<template-output>existing_structure_summary</template-output>
</step>
<step n="2" goal="Conversational discovery of the change/feature">
<action>Engage {user_name} in natural, adaptive conversation to deeply understand what needs to be built.
**Discovery Approach:**
Adapt your questioning style to the complexity:
- For single-story changes: Focus on the specific problem, location, and approach
- For multi-story features: Explore user value, integration strategy, and scope boundaries
**Core Discovery Goals (accomplish through natural dialogue):**
1. **The Problem/Need**
- What user or technical problem are we solving?
- Why does this matter now?
- What's the impact if we don't do this?
2. **The Solution Approach**
- What's the proposed solution?
- How should this work from a user/system perspective?
- What alternatives were considered?
3. **Integration & Location**
- <check if="brownfield">Where does this fit in the existing codebase?</check>
- What existing code/patterns should we reference or follow?
- What are the integration points?
4. **Scope Clarity**
- What's IN scope for this work?
- What's explicitly OUT of scope (future work, not needed)?
- If multiple stories: What's MVP vs enhancement?
5. **Constraints & Dependencies**
- Technical limitations or requirements?
- Dependencies on other systems, APIs, or services?
- Performance, security, or compliance considerations?
6. **Success Criteria**
- How will we know this is done correctly?
- What does "working" look like?
- What edge cases matter?
**Conversation Style:**
- Be warm and collaborative, not interrogative
- Ask follow-up questions based on their responses
- Help them think through implications
- Reference context from Phase 1 (existing code, stack, patterns)
- Adapt depth to {{story_count}} complexity
Synthesize discoveries into clear, comprehensive specifications.
</action>
<template-output>problem_statement</template-output>
<template-output>solution_overview</template-output>
<template-output>change_type</template-output>
<template-output>scope_in</template-output>
<template-output>scope_out</template-output>
</step>
<step n="3" goal="Generate context-aware, definitive technical specification">
<critical>ALL TECHNICAL DECISIONS MUST BE DEFINITIVE - NO AMBIGUITY ALLOWED</critical>
<critical>Use existing stack info to make SPECIFIC decisions</critical>
<critical>Reference brownfield code to guide implementation</critical>
<action>Initialize tech-spec.md with the rich template</action>
<action>**Generate Context Section (already captured):**
These template variables are already populated from Step 1:
- {{loaded_documents_summary}}
- {{project_stack_summary}}
- {{existing_structure_summary}}
Just save them to the file.
</action>
<template-output file="tech-spec.md">loaded_documents_summary</template-output>
<template-output file="tech-spec.md">project_stack_summary</template-output>
<template-output file="tech-spec.md">existing_structure_summary</template-output>
<action>**Generate The Change Section:**
Already captured from Step 2:
- {{problem_statement}}
- {{solution_overview}}
- {{scope_in}}
- {{scope_out}}
Save to file.
</action>
<template-output file="tech-spec.md">problem_statement</template-output>
<template-output file="tech-spec.md">solution_overview</template-output>
<template-output file="tech-spec.md">scope_in</template-output>
<template-output file="tech-spec.md">scope_out</template-output>
<action>**Generate Implementation Details:**
Now make DEFINITIVE technical decisions using all the context gathered.
**Source Tree Changes - BE SPECIFIC:**
Bad (NEVER do this):
- "Update some files in the services folder"
- "Add tests somewhere"
Good (ALWAYS do this):
- "src/services/UserService.ts - MODIFY - Add validateEmail() method at line 45"
- "src/routes/api/users.ts - MODIFY - Add POST /users/validate endpoint"
- "tests/services/UserService.test.ts - CREATE - Test suite for email validation"
Include:
- Exact file paths
- Action: CREATE, MODIFY, DELETE
- Specific what changes (methods, classes, endpoints, components)
**Use brownfield context:**
- If modifying existing files, reference current structure
- Follow existing naming patterns
- Place new code logically based on current organization
</action>
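The CREATE/MODIFY/DELETE entries above are concrete enough to be treated as data, which also makes the "be specific" rule mechanically checkable. A hypothetical structure (not part of the workflow engine):

```python
from dataclasses import dataclass

VALID_ACTIONS = {"CREATE", "MODIFY", "DELETE"}

@dataclass
class FileChange:
    path: str    # exact file path, e.g. "src/services/UserService.ts"
    action: str  # CREATE, MODIFY, or DELETE
    detail: str  # what specifically changes (method, endpoint, test suite)

    def is_specific(self) -> bool:
        # A specific entry names a real file (path with an extension),
        # uses a valid action, and says what actually changes
        return (self.action in VALID_ACTIONS
                and "." in self.path
                and bool(self.detail.strip()))
```

Under this check, "src/services/UserService.ts - MODIFY - Add validateEmail()" passes, while "Update some files in the services folder" fails on every field.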
<template-output file="tech-spec.md">source_tree_changes</template-output>
<action>**Technical Approach - BE DEFINITIVE:**
Bad (ambiguous):
- "Use a logging library like winston or pino"
- "Use Python 2 or 3"
- "Set up some kind of validation"
Good (definitive):
- "Use winston v3.8.2 (already in package.json) for logging"
- "Implement using Python 3.11 as specified in pyproject.toml"
- "Use Joi v17.9.0 for request validation following pattern in UserController.ts"
**Use detected stack:**
- Reference exact versions from package.json/requirements.txt
- Specify frameworks already in use
- Make decisions based on what's already there
**For greenfield:**
- Make definitive choices and justify them
- Specify exact versions
- No "or" statements allowed
</action>
<template-output file="tech-spec.md">technical_approach</template-output>
<action>**Existing Patterns to Follow:**
<check if="brownfield">
Document patterns from the existing codebase:
- Class structure patterns
- Function naming conventions
- Error handling approach
- Testing patterns
- Documentation style
Example:
"Follow the service pattern established in UserService.ts:
- Export class with constructor injection
- Use async/await for all asynchronous operations
- Throw ServiceError with error codes
- Include JSDoc comments for all public methods"
</check>
<check if="greenfield">
"Greenfield project - establishing new patterns:
- [Define the patterns to establish]"
</check>
</action>
<template-output file="tech-spec.md">existing_patterns</template-output>
<action>**Integration Points:**
Identify how this change connects:
- Internal modules it depends on
- External APIs or services
- Database interactions
- Event emitters/listeners
- State management
Be specific about interfaces and contracts.
</action>
<template-output file="tech-spec.md">integration_points</template-output>
<action>**Development Context:**
**Relevant Existing Code:**
<check if="brownfield">
Reference specific files or code sections developers should review:
- "See UserService.ts lines 120-150 for similar validation pattern"
- "Reference AuthMiddleware.ts for authentication approach"
- "Follow error handling in PaymentService.ts"
</check>
**Framework/Libraries:**
List with EXACT versions from detected stack:
- Express 4.18.2 (web framework)
- winston 3.8.2 (logging)
- Joi 17.9.0 (validation)
- TypeScript 5.1.6 (language)
**Internal Modules:**
List internal dependencies:
- @/services/UserService
- @/middleware/auth
- @/utils/validation
**Configuration Changes:**
Any config files to update:
- Update .env with new SMTP settings
- Add validation schema to config/schemas.ts
- Update package.json scripts if needed
</action>
<template-output file="tech-spec.md">existing_code_references</template-output>
<template-output file="tech-spec.md">framework_dependencies</template-output>
<template-output file="tech-spec.md">internal_dependencies</template-output>
<template-output file="tech-spec.md">configuration_changes</template-output>
<check if="field_type == brownfield">
<template-output file="tech-spec.md">existing_conventions</template-output>
</check>
<check if="field_type == greenfield">
<action>Set {{existing_conventions}} = "Greenfield project - establishing new conventions per modern best practices"</action>
<template-output file="tech-spec.md">existing_conventions</template-output>
</check>
<action>**Implementation Stack:**
Comprehensive stack with versions:
- Runtime: Node.js 20.x
- Framework: Express 4.18.2
- Language: TypeScript 5.1.6
- Testing: Jest 29.5.0
- Linting: ESLint 8.42.0
- Validation: Joi 17.9.0
All from detected project setup!
</action>
<template-output file="tech-spec.md">implementation_stack</template-output>
<action>**Technical Details:**
Deep technical specifics:
- Algorithms to implement
- Data structures to use
- Performance considerations
- Security considerations
- Error scenarios and handling
- Edge cases
Be thorough - developers need details!
</action>
<template-output file="tech-spec.md">technical_details</template-output>
<action>**Development Setup:**
What does a developer need to run this locally?
Based on detected stack and scripts:
```
1. Clone repo (if not already)
2. npm install (installs all deps from package.json)
3. cp .env.example .env (configure environment)
4. npm run dev (starts development server)
5. npm test (runs test suite)
```
Or for Python:
```
1. python -m venv venv
2. source venv/bin/activate
3. pip install -r requirements.txt
4. python manage.py runserver
```
Use the actual scripts from package.json/setup files!
</action>
<template-output file="tech-spec.md">development_setup</template-output>
<action>**Implementation Guide:**
**Setup Steps:**
Pre-implementation checklist:
- Create feature branch
- Verify dev environment running
- Review existing code references
- Set up test data if needed
**Implementation Steps:**
Step-by-step breakdown:
For single-story changes:
1. [Step 1 with specific file and action]
2. [Step 2 with specific file and action]
3. [Write tests]
4. [Verify acceptance criteria]
For multi-story features:
Organize by story/phase:
1. Phase 1: [Foundation work]
2. Phase 2: [Core implementation]
3. Phase 3: [Testing and validation]
**Testing Strategy:**
- Unit tests for [specific functions]
- Integration tests for [specific flows]
- Manual testing checklist
- Performance testing if applicable
**Acceptance Criteria:**
Specific, measurable, testable criteria:
1. Given [scenario], when [action], then [outcome]
2. [Metric] meets [threshold]
3. [Feature] works in [environment]
</action>
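An acceptance criterion written in Given/When/Then form maps directly onto a test. A minimal sketch - the `validate_email` function here is a hypothetical unit under test, not something defined by this workflow:

```python
def validate_email(address: str) -> bool:
    """Hypothetical unit under test: minimal shape check on an email address."""
    local, sep, domain = address.partition("@")
    return bool(local) and sep == "@" and "." in domain

def test_valid_email_is_accepted():
    # Given a well-formed address, when validated, then it is accepted
    assert validate_email("dev@example.com") is True

def test_address_without_domain_is_rejected():
    # Given an address missing its domain, when validated, then it is rejected
    assert validate_email("dev@") is False
```

Each criterion in the spec should have a counterpart like this, so "done" is verifiable rather than asserted.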
<template-output file="tech-spec.md">setup_steps</template-output>
<template-output file="tech-spec.md">implementation_steps</template-output>
<template-output file="tech-spec.md">testing_strategy</template-output>
<template-output file="tech-spec.md">acceptance_criteria</template-output>
<action>**Developer Resources:**
**File Paths Reference:**
Complete list of all files involved:
- /src/services/UserService.ts
- /src/routes/api/users.ts
- /tests/services/UserService.test.ts
- /src/types/user.ts
**Key Code Locations:**
Important functions, classes, modules:
- UserService class (src/services/UserService.ts:15)
- validateUser function (src/utils/validation.ts:42)
- User type definition (src/types/user.ts:8)
**Testing Locations:**
Where tests go:
- Unit: tests/services/
- Integration: tests/integration/
- E2E: tests/e2e/
**Documentation to Update:**
Docs that need updating:
- README.md - Add new endpoint documentation
- API.md - Document /users/validate endpoint
- CHANGELOG.md - Note the new feature
</action>
<template-output file="tech-spec.md">file_paths_complete</template-output>
<template-output file="tech-spec.md">key_code_locations</template-output>
<template-output file="tech-spec.md">testing_locations</template-output>
<template-output file="tech-spec.md">documentation_updates</template-output>
<action>**UX/UI Considerations:**
<check if="change affects user interface OR user experience">
**Determine if this change has UI/UX impact:**
- Does it change what users see?
- Does it change how users interact?
- Does it affect user workflows?
If YES, document:
**UI Components Affected:**
- List specific components (buttons, forms, modals, pages)
- Note which need creation vs modification
**UX Flow Changes:**
- Current flow vs new flow
- User journey impact
- Navigation changes
**Visual/Interaction Patterns:**
- Follow existing design system? (check for design tokens, component library)
- New patterns needed?
- Responsive design considerations (mobile, tablet, desktop)
**Accessibility:**
- Keyboard navigation requirements
- Screen reader compatibility
- ARIA labels needed
- Color contrast standards
**User Feedback:**
- Loading states
- Error messages
- Success confirmations
- Progress indicators
</check>
<check if="no UI/UX impact">
"No UI/UX impact - backend/API/infrastructure change only"
</check>
</action>
<template-output file="tech-spec.md">ux_ui_considerations</template-output>
<action>**Testing Approach:**
Comprehensive testing strategy using {{test_framework_info}}:
**CONFORM TO EXISTING TEST STANDARDS:**
<check if="conform_to_conventions == yes">
- Follow existing test file naming: {{detected_test_patterns.file_naming}}
- Use existing test organization: {{detected_test_patterns.organization}}
- Match existing assertion style: {{detected_test_patterns.assertion_style}}
- Meet existing coverage requirements: {{detected_test_patterns.coverage}}
</check>
**Test Strategy:**
- Test framework: {{detected_test_framework}} (from project dependencies)
- Unit tests for [specific functions/methods]
- Integration tests for [specific flows/APIs]
- E2E tests if UI changes
- Mock/stub strategies (use existing patterns: {{detected_test_patterns.mocking}})
- Performance benchmarks if applicable
- Accessibility tests if UI changes
**Coverage:**
- Unit test coverage: [target %]
- Integration coverage: [critical paths]
- Ensure all acceptance criteria have corresponding tests
</action>
<template-output file="tech-spec.md">test_framework_info</template-output>
<template-output file="tech-spec.md">testing_approach</template-output>
<action>**Deployment Strategy:**
**Deployment Steps:**
How to deploy this change:
1. Merge to main branch
2. Run CI/CD pipeline
3. Deploy to staging
4. Verify in staging
5. Deploy to production
6. Monitor for issues
**Rollback Plan:**
How to undo if problems:
1. Revert commit [hash]
2. Redeploy previous version
3. Verify rollback successful
**Monitoring:**
What to watch after deployment:
- Error rates in [logging service]
- Response times for [endpoint]
- User feedback on [feature]
</action>
<template-output file="tech-spec.md">deployment_steps</template-output>
<template-output file="tech-spec.md">rollback_plan</template-output>
<template-output file="tech-spec.md">monitoring_approach</template-output>
</step>
<step n="4" goal="Auto-validate cohesion, completeness, and quality">
<critical>Always run validation - this is NOT optional!</critical>
<action>Tech-spec generation complete! Now running automatic validation...</action>
<action>Load {installed_path}/checklist.md</action>
<action>Review tech-spec.md against ALL checklist criteria:
**Section 1: Output Files Exist**
- Verify tech-spec.md created
- Check for unfilled template variables
**Section 2: Context Gathering**
- Validate all available documents were loaded
- Confirm stack detection worked
- Verify brownfield analysis (if applicable)
**Section 3: Tech-Spec Definitiveness**
- Scan for "or" statements (FAIL if found)
- Verify all versions are specific
- Check stack alignment
**Section 4: Context-Rich Content**
- Verify all new template sections populated
- Check existing code references (brownfield)
- Validate framework dependencies listed
**Section 5-6: Story Quality (deferred to Step 5)**
**Section 7: Workflow Status (if applicable)**
**Section 8: Implementation Readiness**
- Can developer start immediately?
- Is tech-spec comprehensive enough?
</action>
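Two of these checks - unfilled template variables and ambiguous "X or Y" statements - are mechanical enough to sketch as a scan. Illustrative only; the regexes are crude assumptions about what counts as a violation and will flag some legitimate uses of "or":

```python
import re

def scan_tech_spec(text: str) -> dict:
    """Flag unfilled {{variables}} and ambiguous 'X or Y' statements."""
    return {
        # Any surviving {{placeholder}} means a template section was never filled
        "unfilled_variables": re.findall(r"\{\{\s*[\w-]+\s*\}\}", text),
        # Lines like "winston or pino" signal a non-definitive choice
        "ambiguous_or": [line.strip() for line in text.splitlines()
                         if re.search(r"\b\w+ or \w+\b", line)],
    }
```

Anything either list catches goes into {{list_of_issues}} for the fix-or-proceed prompt below.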
<action>Generate validation report with specific scores:
- Context Gathering: [Comprehensive/Partial/Insufficient]
- Definitiveness: [All definitive/Some ambiguity/Major issues]
- Brownfield Integration: [N/A/Excellent/Partial/Missing]
- Stack Alignment: [Perfect/Good/Partial/None]
- Implementation Readiness: [Yes/No]
</action>
<check if="validation issues found">
<output>⚠️ **Validation Issues Detected:**
{{list_of_issues}}
I can fix these automatically. Shall I proceed? (yes/no)</output>
<ask>Fix validation issues? (yes/no)</ask>
<check if="yes">
<action>Fix each issue and re-validate</action>
<output>✅ Issues fixed! Re-validation passed.</output>
</check>
<check if="no">
<output>⚠️ Proceeding with warnings. Issues should be addressed manually.</output>
</check>
</check>
<check if="validation passes">
<output>✅ **Validation Passed!**
**Scores:**
- Context Gathering: {{context_score}}
- Definitiveness: {{definitiveness_score}}
- Brownfield Integration: {{brownfield_score}}
- Stack Alignment: {{stack_score}}
- Implementation Readiness: ✅ Ready
Tech-spec is high quality and ready for story generation!</output>
</check>
</step>
<step n="5" goal="Generate epic and context-rich stories">
<action>Invoke unified story generation workflow: {instructions_generate_stories}</action>
<action>This will generate:
- **epics.md** - Epic structure (minimal for 1 story, detailed for multiple)
- **story-{epic-slug}-N.md** - Story files (where N = 1 to {{story_count}})
All stories reference tech-spec.md as primary context - comprehensive enough that developers can often skip the story-context workflow.
</action>
</step>
<step n="6" goal="Finalize and guide next steps">
<output>**✅ Tech-Spec Complete, {user_name}!**
**Deliverables Created:**
- **tech-spec.md** - Context-rich technical specification
- Includes: brownfield analysis, framework details, existing patterns
- **epics.md** - Epic structure{{#if story_count == 1}} (minimal for single story){{else}} with {{story_count}} stories{{/if}}
- **story-{epic-slug}-1.md** - First story{{#if story_count > 1}}
- **story-{epic-slug}-2.md** - Second story{{/if}}{{#if story_count > 2}}
- **story-{epic-slug}-3.md** - Third story{{/if}}{{#if story_count > 3}}
- **Additional stories** through story-{epic-slug}-{{story_count}}.md{{/if}}
**What Makes This Tech-Spec Special:**
The tech-spec is comprehensive enough to serve as the primary context document:
- ✨ Brownfield codebase analysis (if applicable)
- ✨ Exact framework and library versions from your project
- ✨ Existing patterns and code references
- ✨ Specific file paths and integration points
- ✨ Complete developer resources
**Next Steps:**
**🎯 Recommended Path - Direct to Development:**
Since the tech-spec is CONTEXT-RICH, you can often skip story-context generation!
{{#if story_count == 1}}
**For Your Single Story:**
1. Ask DEV agent to run `dev-story`
- Select story-{epic-slug}-1.md
- Tech-spec provides all the context needed!
💡 **Optional:** Only run `story-context` (SM agent) if this is unusually complex
{{else}}
**For Your {{story_count}} Stories - Iterative Approach:**
1. **Start with Story 1:**
- Ask DEV agent to run `dev-story`
- Select story-{epic-slug}-1.md
- Tech-spec provides context
2. **After Story 1 Complete:**
- Repeat for story-{epic-slug}-2.md
- Continue through story {{story_count}}
💡 **Alternative:** Use `sprint-planning` (SM agent) to organize all stories as a coordinated sprint
💡 **Optional:** Run `story-context` (SM agent) for complex stories needing additional context
{{/if}}
**Your Tech-Spec:**
- 📄 Saved to: `{output_folder}/tech-spec.md`
- Epic & Stories: `{output_folder}/epics.md` + `{sprint_artifacts}/`
- Contains: All context, decisions, patterns, and implementation guidance
- Ready for: Direct development!
The tech-spec is your single source of truth! 🚀
</output>
</step>
</workflow>

View File

@@ -1,181 +0,0 @@
# {{project_name}} - Technical Specification
**Author:** {{user_name}}
**Date:** {{date}}
**Project Level:** {{project_level}}
**Change Type:** {{change_type}}
**Development Context:** {{development_context}}
---
## Context
### Available Documents
{{loaded_documents_summary}}
### Project Stack
{{project_stack_summary}}
### Existing Codebase Structure
{{existing_structure_summary}}
---
## The Change
### Problem Statement
{{problem_statement}}
### Proposed Solution
{{solution_overview}}
### Scope
**In Scope:**
{{scope_in}}
**Out of Scope:**
{{scope_out}}
---
## Implementation Details
### Source Tree Changes
{{source_tree_changes}}
### Technical Approach
{{technical_approach}}
### Existing Patterns to Follow
{{existing_patterns}}
### Integration Points
{{integration_points}}
---
## Development Context
### Relevant Existing Code
{{existing_code_references}}
### Dependencies
**Framework/Libraries:**
{{framework_dependencies}}
**Internal Modules:**
{{internal_dependencies}}
### Configuration Changes
{{configuration_changes}}
### Existing Conventions (Brownfield)
{{existing_conventions}}
### Test Framework & Standards
{{test_framework_info}}
---
## Implementation Stack
{{implementation_stack}}
---
## Technical Details
{{technical_details}}
---
## Development Setup
{{development_setup}}
---
## Implementation Guide
### Setup Steps
{{setup_steps}}
### Implementation Steps
{{implementation_steps}}
### Testing Strategy
{{testing_strategy}}
### Acceptance Criteria
{{acceptance_criteria}}
---
## Developer Resources
### File Paths Reference
{{file_paths_complete}}
### Key Code Locations
{{key_code_locations}}
### Testing Locations
{{testing_locations}}
### Documentation to Update
{{documentation_updates}}
---
## UX/UI Considerations
{{ux_ui_considerations}}
---
## Testing Approach
{{testing_approach}}
---
## Deployment Strategy
### Deployment Steps
{{deployment_steps}}
### Rollback Plan
{{rollback_plan}}
### Monitoring
{{monitoring_approach}}

View File

@@ -1,90 +0,0 @@
# Story {{N}}.{{M}}: {{story_title}}
**Status:** Draft
---
## User Story
As a {{user_type}},
I want {{capability}},
So that {{value_benefit}}.
---
## Acceptance Criteria
**Given** {{precondition}}
**When** {{action}}
**Then** {{expected_outcome}}
**And** {{additional_criteria}}
---
## Implementation Details
### Tasks / Subtasks
{{tasks_subtasks}}
### Technical Summary
{{technical_summary}}
### Project Structure Notes
- **Files to modify:** {{files_to_modify}}
- **Expected test locations:** {{test_locations}}
- **Estimated effort:** {{story_points}} story points ({{time_estimate}})
- **Prerequisites:** {{dependencies}}
### Key Code References
{{existing_code_references}}
---
## Context References
**Tech-Spec:** [tech-spec.md](../tech-spec.md) - Primary context document containing:
- Brownfield codebase analysis (if applicable)
- Framework and library details with versions
- Existing patterns to follow
- Integration points and dependencies
- Complete implementation guidance
**Architecture:** {{architecture_references}}
<!-- Additional context XML paths will be added here if story-context workflow is run -->
---
## Dev Agent Record
### Agent Model Used
<!-- Will be populated during dev-story execution -->
### Debug Log References
<!-- Will be populated during dev-story execution -->
### Completion Notes
<!-- Will be populated during dev-story execution -->
### Files Modified
<!-- Will be populated during dev-story execution -->
### Test Results
<!-- Will be populated during dev-story execution -->
---
## Review Notes
<!-- Will be populated during code review -->

View File

@@ -1,60 +0,0 @@
# Technical Specification
name: tech-spec
description: "Technical specification workflow for quick-flow projects. Creates focused tech spec and generates epic + stories (1 story for simple changes, 2-5 stories for features). Tech-spec only - no PRD needed."
author: "BMad"
# Critical variables from config
config_source: "{project-root}/{bmad_folder}/bmm/config.yaml"
project_name: "{config_source}:project_name"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
user_skill_level: "{config_source}:user_skill_level"
date: system-generated
workflow-status: "{output_folder}/bmm-workflow-status.yaml"
# Runtime variables (captured during workflow execution)
story_count: runtime-captured
epic_slug: runtime-captured
change_type: runtime-captured
field_type: runtime-captured
# Workflow components
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/2-plan-workflows/tech-spec"
instructions: "{installed_path}/instructions.md"
template: "{installed_path}/tech-spec-template.md"
# Story generation (unified approach - always generates epic + stories)
instructions_generate_stories: "{installed_path}/instructions-generate-stories.md"
user_story_template: "{installed_path}/user-story-template.md"
epics_template: "{installed_path}/epics-template.md"
# Output configuration
default_output_file: "{output_folder}/tech-spec.md"
epics_file: "{output_folder}/epics.md"
sprint_artifacts: "{output_folder}/sprint_artifacts"
# Smart input file references - handles both whole docs and sharded docs
# Priority: Whole document first, then sharded version
# Strategy: How to load sharded documents (FULL_LOAD, SELECTIVE_LOAD, INDEX_GUIDED)
input_file_patterns:
product_brief:
description: "Product vision and goals (optional)"
whole: "{output_folder}/*brief*.md"
sharded: "{output_folder}/*brief*/index.md"
load_strategy: "FULL_LOAD"
research:
description: "Market or domain research (optional)"
whole: "{output_folder}/*research*.md"
sharded: "{output_folder}/*research*/index.md"
load_strategy: "FULL_LOAD"
document_project:
description: "Brownfield project documentation (optional)"
sharded: "{output_folder}/index.md"
load_strategy: "INDEX_GUIDED"
standalone: true
web_bundle: false

View File

@@ -0,0 +1,115 @@
# Create Tech-Spec - Spec Engineering for AI Development
<workflow>
<critical>Communicate in {communication_language}, tailored to {user_skill_level}</critical>
<critical>Generate documents in {document_output_language}</critical>
<critical>Conversational spec engineering - ask questions, investigate code, produce complete spec</critical>
<critical>Spec must contain ALL context a fresh dev agent needs to implement it</critical>
<checkpoint-handlers>
<on-select key="a">Load and execute {advanced_elicitation}, then return to current step</on-select>
<on-select key="p">Load and execute {party_mode_workflow}, then return to current step</on-select>
<on-select key="b">Load and execute {quick_dev_workflow} with the tech-spec file</on-select>
</checkpoint-handlers>
<step n="1" goal="Understand what the user wants to build">
<action>Greet {user_name} and ask them to describe what they want to build or change.</action>
<action>Ask clarifying questions: what problem is being solved, who is affected, what is in scope, what constraints apply, and whether existing code is involved?</action>
<action>Check for existing context in {output_folder} and {sprint_artifacts}</action>
<checkpoint title="Problem Understanding">
[a] Advanced Elicitation [c] Continue [p] Party Mode
</checkpoint>
</step>
<step n="2" goal="Investigate existing code (if applicable)">
<action>If brownfield: get file paths, read code, identify patterns/conventions/dependencies</action>
<action>Document: tech stack, code patterns, files to modify, test patterns</action>
<checkpoint title="Context Gathered">
[a] Advanced Elicitation [c] Continue [p] Party Mode
</checkpoint>
</step>
<step n="3" goal="Generate the technical specification">
<action>Create tech-spec using this structure:
```markdown
# Tech-Spec: {title}
**Created:** {date}
**Status:** Ready for Development
## Overview
### Problem Statement
### Solution
### Scope (In/Out)
## Context for Development
### Codebase Patterns
### Files to Reference
### Technical Decisions
## Implementation Plan
### Tasks
- [ ] Task 1: Description
- [ ] Task 2: Description
### Acceptance Criteria
- [ ] AC 1: Given/When/Then
- [ ] AC 2: ...
## Additional Context
### Dependencies
### Testing Strategy
### Notes
```
</action>
<action>Save to {sprint_artifacts}/tech-spec-{slug}.md</action>
</step>
<step n="4" goal="Review and finalize">
<action>Present spec to {user_name}, ask if it captures intent, make changes as needed</action>
<output>**Tech-Spec Complete!**
Saved to: {sprint_artifacts}/tech-spec-{slug}.md
[a] Advanced Elicitation - refine further
[b] Begin Development (not recommended - a fresh context works better)
[d] Done - exit
[p] Party Mode - get feedback
**Recommended:** Run `dev-spec {sprint_artifacts}/tech-spec-{slug}.md` in a fresh context.
</output>
<ask>Choice (a/b/d/p):</ask>
</step>
</workflow>
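The checkpoint-handlers block at the top of this workflow maps single-key selections to sub-workflows and then returns control to the paused step. A minimal sketch of that dispatch, with hypothetical function names (the real engine is conversational, not code):

```python
def handle_checkpoint(key: str, handlers: dict, resume):
    """Dispatch one checkpoint selection.

    If the key maps to a handler (e.g. 'a' -> advanced elicitation,
    'p' -> party mode), run it first; either way, resume the step
    that raised the checkpoint. Unmapped keys such as 'c' (Continue)
    fall straight through to resume.
    """
    action = handlers.get(key)
    if action is not None:
        action()  # load and execute the mapped sub-workflow
    return resume()  # return to the current step
```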

View File

@@ -0,0 +1,26 @@
# Quick-Flow: Create Tech-Spec
name: create-tech-spec
description: "Conversational spec engineering - ask questions, investigate code, produce implementation-ready tech-spec."
author: "BMad"
# Config
config_source: "{project-root}/{bmad_folder}/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
sprint_artifacts: "{config_source}:sprint_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
user_skill_level: "{config_source}:user_skill_level"
date: system-generated
# Workflow components
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/bmad-quick-flow/create-tech-spec"
instructions: "{installed_path}/instructions.md"
# Related workflows
quick_dev_workflow: "{project-root}/{bmad_folder}/bmm/workflows/bmad-quick-flow/quick-dev/workflow.yaml"
party_mode_workflow: "{project-root}/{bmad_folder}/core/workflows/party-mode/workflow.yaml"
advanced_elicitation: "{project-root}/{bmad_folder}/core/tasks/advanced-elicitation.xml"
standalone: true
web_bundle: false

View File

@@ -0,0 +1,25 @@
# Quick-Dev Checklist
## Before Implementation
- [ ] Context loaded (tech-spec or user guidance)
- [ ] Files to modify identified
- [ ] Patterns understood
## Implementation
- [ ] All tasks completed
- [ ] Code follows existing patterns
- [ ] Error handling appropriate
## Testing
- [ ] Tests written (where appropriate)
- [ ] All tests passing
- [ ] No regressions
## Completion
- [ ] Acceptance criteria satisfied
- [ ] Tech-spec updated (if applicable)
- [ ] Summary provided to user

View File

@@ -0,0 +1,96 @@
# Quick-Dev - Flexible Development Workflow
<workflow>
<critical>Communicate in {communication_language}, tailored to {user_skill_level}</critical>
<critical>Execute continuously until COMPLETE - do not stop for milestones</critical>
<critical>Flexible - handles tech-specs OR direct instructions</critical>
<critical>ALWAYS respect {project_context} if it exists - it defines project standards</critical>
<checkpoint-handlers>
<on-select key="a">Load and execute {advanced_elicitation}, then return</on-select>
<on-select key="p">Load and execute {party_mode_workflow}, then return</on-select>
<on-select key="t">Load and execute {create_tech_spec_workflow}</on-select>
</checkpoint-handlers>
<step n="1" goal="Load project context and determine execution mode">
<action>Check if {project_context} exists. If yes, load it - this is your foundational reference for ALL implementation decisions (patterns, conventions, architecture).</action>
<action>Parse user input:
**Mode A: Tech-Spec** - e.g., `quick-dev tech-spec-auth.md`
→ Load spec, extract tasks/context/AC, goto step 3
**Mode B: Direct Instructions** - e.g., `refactor src/foo.ts...`
→ Offer planning choice
</action>
<check if="Mode A">
<action>Load tech-spec, extract tasks/context/AC</action>
<goto>step_3</goto>
</check>
<check if="Mode B">
<ask>**[t] Plan first** - Create tech-spec then implement
**[e] Execute directly** - Start now</ask>
<check if="t">
<action>Load and execute {create_tech_spec_workflow}</action>
<action>Continue to implementation after spec complete</action>
</check>
<check if="e">
<ask>Any additional guidance before I begin? (patterns, files, constraints) Or "go" to start.</ask>
<goto>step_2</goto>
</check>
</check>
</step>
<step n="2" goal="Quick context gathering (direct mode)">
<action>Identify files to modify, find relevant patterns, note dependencies</action>
<action>Create mental plan: tasks, acceptance criteria, files to touch</action>
</step>
<step n="3" goal="Execute implementation" id="step_3">
<action>For each task:
1. **Load Context** - read files from spec or relevant to change
2. **Implement** - follow patterns, handle errors, follow conventions
3. **Test** - write tests, run existing tests, verify AC
4. **Mark Complete** - check off task [x], continue
</action>
<action if="3 failures">HALT and request guidance</action>
<action if="tests fail">Fix before continuing</action>
<critical>Continue through ALL tasks without stopping</critical>
</step>
<step n="4" goal="Verify and complete">
<action>Verify: all tasks [x], tests passing, AC satisfied, patterns followed</action>
<check if="using tech-spec">
<action>Update tech-spec status to "Completed", mark all tasks [x]</action>
</check>
<output>**Implementation Complete!**
**Summary:** {{implementation_summary}}
**Files Modified:** {{files_list}}
**Tests:** {{test_summary}}
**AC Status:** {{ac_status}}
</output>
<action>Explain what was implemented, tailored to {user_skill_level}</action>
</step>
</workflow>
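Step 3's execution rule - work through every task without stopping, retry failing tests before moving on, and halt only after three failures - can be sketched like this. The `implement` callback is hypothetical; the agent's actual loop is conversational:

```python
def run_tasks(tasks, implement, max_failures=3):
    """Execute all tasks continuously, without milestone stops.

    `implement` attempts one task and returns True when its tests
    pass. Failed attempts are retried; after `max_failures` total
    failures the loop halts and requests user guidance.
    """
    failures = 0
    completed = []
    for task in tasks:
        while not implement(task):
            failures += 1
            if failures >= max_failures:
                raise RuntimeError("HALT: 3 failures - requesting guidance")
        completed.append(task)  # mark task [x]
    return completed
```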

View File

@@ -0,0 +1,29 @@
# Quick-Flow: Quick-Dev
name: quick-dev
description: "Flexible development - execute tech-specs OR direct instructions with optional planning."
author: "BMad"
# Config
config_source: "{project-root}/{bmad_folder}/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
sprint_artifacts: "{config_source}:sprint_artifacts"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
user_skill_level: "{config_source}:user_skill_level"
date: system-generated
# Project context
project_context: "{output_folder}/project-context.md"
# Workflow components
installed_path: "{project-root}/{bmad_folder}/bmm/workflows/bmad-quick-flow/quick-dev"
instructions: "{installed_path}/instructions.md"
checklist: "{installed_path}/checklist.md"
# Related workflows
create_tech_spec_workflow: "{project-root}/{bmad_folder}/bmm/workflows/bmad-quick-flow/create-tech-spec/workflow.yaml"
party_mode_workflow: "{project-root}/{bmad_folder}/core/workflows/party-mode/workflow.yaml"
advanced_elicitation: "{project-root}/{bmad_folder}/core/tasks/advanced-elicitation.xml"
standalone: true
web_bundle: false

View File

@@ -1,10 +1,10 @@
# BMad Quick Flow - Brownfield
# Fast implementation path for existing codebases (1-15 stories typically)
# Fast spec-driven development for existing codebases (1-10 stories typically)
method_name: "BMad Quick Flow"
track: "quick-flow"
field_type: "brownfield"
description: "Fast tech-spec based implementation for brownfield projects"
description: "Spec-driven development for brownfield projects - streamlined path with codebase context"
phases:
- prerequisite: true
@@ -17,7 +17,7 @@ phases:
agent: "analyst"
command: "document-project"
output: "Comprehensive project documentation"
purpose: "Understand existing codebase before planning"
purpose: "Generate codebase context for spec engineering"
- phase: 0
name: "Discovery (Optional)"
@@ -37,22 +37,31 @@ phases:
included_by: "user_choice"
- phase: 1
name: "Planning"
name: "Spec Engineering"
required: true
workflows:
- id: "tech-spec"
- id: "create-tech-spec"
required: true
agent: "pm"
command: "tech-spec"
output: "Technical Specification with stories (auto-detects epic if 2+ stories)"
note: "Integrates with existing codebase patterns from document-project"
agent: "quick-flow-solo-dev"
command: "create-tech-spec"
output: "Technical Specification with implementation-ready stories"
note: "Stories include codebase context from document-project"
- phase: 2
name: "Implementation"
required: true
note: "Barry executes all stories, optional code-review after each"
workflows:
- id: "sprint-planning"
- id: "dev-spec"
required: true
agent: "sm"
command: "sprint-planning"
note: "Creates sprint plan with all stories"
repeat: true
agent: "quick-flow-solo-dev"
command: "dev-spec"
note: "Execute stories from spec - Barry is the one-man powerhouse"
- id: "code-review"
optional: true
repeat: true
agent: "quick-flow-solo-dev"
command: "code-review"
note: "Review completed story implementation"

View File

@@ -1,10 +1,10 @@
# BMad Quick Flow - Greenfield
# Fast implementation path with tech-spec planning (1-15 stories typically)
# Fast spec-driven development path (1-10 stories typically)
method_name: "BMad Quick Flow"
track: "quick-flow"
field_type: "greenfield"
description: "Fast tech-spec based implementation for greenfield projects"
description: "Spec-driven development for greenfield projects - streamlined path without sprint overhead"
phases:
- phase: 0
@@ -26,22 +26,31 @@ phases:
note: "Can have multiple research workflows"
- phase: 1
name: "Planning"
name: "Spec Engineering"
required: true
workflows:
- id: "tech-spec"
- id: "create-tech-spec"
required: true
agent: "pm"
command: "tech-spec"
output: "Technical Specification with stories (auto-detects epic if 2+ stories)"
note: "Quick Spec Flow - implementation-focused planning"
agent: "quick-flow-solo-dev"
command: "create-tech-spec"
output: "Technical Specification with implementation-ready stories"
note: "Stories contain all context for execution"
- phase: 2
name: "Implementation"
required: true
note: "Barry executes all stories, optional code-review after each"
workflows:
- id: "sprint-planning"
- id: "dev-spec"
required: true
agent: "sm"
command: "sprint-planning"
note: "Creates sprint plan with all stories - subsequent work tracked in sprint plan output, not workflow-status"
repeat: true
agent: "quick-flow-solo-dev"
command: "dev-spec"
note: "Execute stories from spec - Barry is the one-man powerhouse"
- id: "code-review"
optional: true
repeat: true
agent: "quick-flow-solo-dev"
command: "code-review"
note: "Review completed story implementation"