mirror of https://github.com/bmadcode/BMAD-METHOD.git
synced 2025-12-29 16:14:59 +00:00
docs: massive documentation overhaul + introduce Paige (Documentation Guide agent)

## 📚 Complete Documentation Restructure

**BMM Documentation Hub Created:**

- New centralized documentation system at `src/modules/bmm/docs/`
- 18 comprehensive guides organized by topic (7000+ lines total)
- Clear learning paths for greenfield, brownfield, and quick spec flows
- Professional technical writing standards throughout

**New Documentation:**

- `README.md` - Complete documentation hub with navigation
- `quick-start.md` - 15-minute getting started guide
- `agents-guide.md` - Comprehensive 12-agent reference (45 min read)
- `party-mode.md` - Multi-agent collaboration guide (20 min read)
- `scale-adaptive-system.md` - Deep dive on Levels 0-4 (42 min read)
- `brownfield-guide.md` - Existing codebase development (53 min read)
- `quick-spec-flow.md` - Rapid Level 0-1 development (26 min read)
- `workflows-analysis.md` - Phase 1 workflows (12 min read)
- `workflows-planning.md` - Phase 2 workflows (19 min read)
- `workflows-solutioning.md` - Phase 3 workflows (13 min read)
- `workflows-implementation.md` - Phase 4 workflows (33 min read)
- `workflows-testing.md` - Testing & QA workflows (29 min read)
- `workflow-architecture-reference.md` - Architecture workflow deep-dive
- `workflow-document-project-reference.md` - Document-project workflow reference
- `enterprise-agentic-development.md` - Team collaboration patterns
- `faq.md` - Comprehensive Q&A covering all topics
- `glossary.md` - Complete terminology reference
- `troubleshooting.md` - Common issues and solutions

**Documentation Improvements:**

- Removed all version/date footers (git handles versioning)
- Agent customization docs now include full rebuild process
- Cross-referenced links between all guides
- Reading time estimates for all major docs
- Consistent professional formatting and structure

**Consolidated & Streamlined:**

- Module README (`src/modules/bmm/README.md`) streamlined to lean signpost
- Root README polished with better hierarchy and clear CTAs
- Moved docs from root `docs/` to module-specific locations
- Better separation of user docs vs. developer reference

## 🤖 New Agent: Paige (Documentation Guide)

**Role:** Technical documentation specialist and information architect

**Expertise:**

- Professional technical writing standards
- Documentation structure and organization
- Information architecture and navigation
- User-focused content design
- Style guide enforcement

**Status:** Work in progress - Paige will evolve as documentation needs grow

**Integration:**

- Listed in agents-guide.md, glossary.md, FAQ
- Available for all phases (documentation is continuous)
- Can be customized like all BMM agents

## 🔧 Additional Changes

- Updated agent manifest with Paige
- Updated workflow manifest with new documentation workflows
- Fixed workflow-to-agent mappings across all guides
- Improved root README with clearer Quick Start section
- Better module structure explanations
- Enhanced community links with Discord channel names

**Total Impact:**

- 18 new/restructured documentation files
- 7000+ lines of professional technical documentation
- Complete navigation system with cross-references
- Clear learning paths for all user types
- Foundation for knowledge base (coming in beta)

Co-Authored-By: Claude <noreply@anthropic.com>
@@ -1,38 +0,0 @@

---
last-redoc-date: 2025-10-01
---

# Game Brainstorming Workflow

This workflow employs structured ideation methodologies to generate and refine game concepts through systematic creative exploration. It leverages five distinct brainstorming techniques—SCAMPER, Mind Mapping, Lotus Blossom, Six Thinking Hats, and Random Word Association—each applied in isolation to produce diverse conceptual approaches. The workflow emphasizes iterative refinement where initial concepts are evaluated against design pillars, technical feasibility, and market positioning to identify the most promising directions.

The system operates through a game-specific context framework that considers platform constraints, target audience characteristics, monetization models, and core gameplay pillars. Each brainstorming method generates distinct artifacts: SCAMPER produces systematic modification analyses, Mind Mapping reveals conceptual hierarchies, Lotus Blossom creates radial expansion patterns, Six Thinking Hats enforces multi-perspective evaluation, and Random Word Association drives lateral thinking breakthroughs. The workflow culminates in a consolidated concept document that synthesizes the strongest elements from each method into cohesive game proposals.

Critical to this workflow is its emphasis on constraint-driven creativity. The game-context.md framework establishes technical boundaries (platform capabilities, performance targets), market parameters (genre conventions, competitive positioning), and design philosophy (accessibility requirements, monetization ethics) that ground creative exploration in practical realities. This prevents ideation from drifting into infeasible territory while maintaining creative ambition.
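
To make the constraint framework concrete, a game context document might capture fields along these lines. This is a minimal sketch assuming a YAML-style structure; the field names are illustrative, not the workflow's actual game-context.md schema:

```yaml
# Hypothetical game-context fields (illustrative only - actual schema may differ)
platform:
  targets: [PC, mobile]                  # platform capabilities
  performance_target: 60fps on mid-tier hardware
market:
  genre_conventions: roguelike deck-builder
  competitive_positioning: narrative twist on deck-builders
design_philosophy:
  accessibility: remappable controls, colorblind-safe palette
  monetization_ethics: premium purchase, no loot boxes
design_pillars: [tight combat loop, emotional narrative, replayability]
```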

## Usage

```bash
bmad bmm 1-analysis brainstorm-game
```

## Inputs

- **Game Context Document**: Platform specifications, genre preferences, technical constraints, target audience demographics, monetization approach, and core design pillars
- **Initial Concept Seed** (optional): High-level game idea or theme to guide brainstorming direction

## Outputs

- **Method-Specific Artifacts**: Five separate brainstorming documents, each applying a different ideation methodology to the concept space
- **Consolidated Concept Document**: Synthesized game concepts with feasibility assessments, unique value propositions, and recommended next steps
- **Design Pillar Alignment Matrix**: Evaluation of each concept against stated design objectives and technical constraints

## Brainstorming Methods

| Method                  | Focus                    | Output Characteristics             |
| ----------------------- | ------------------------ | ---------------------------------- |
| SCAMPER                 | Systematic modification  | Structured transformation analysis |
| Mind Mapping            | Hierarchical exploration | Visual concept relationships       |
| Lotus Blossom           | Radial expansion         | Layered thematic development       |
| Six Thinking Hats       | Multi-perspective        | Balanced evaluation framework      |
| Random Word Association | Lateral thinking         | Unexpected conceptual combinations |
@@ -1,113 +0,0 @@

# Project Brainstorming Workflow

Structured ideation for software projects that explores problem spaces, architectures, and innovative solutions beyond traditional requirements gathering.

## Table of Contents

- [Purpose](#purpose)
- [Usage](#usage)
- [Process](#process)
- [Inputs & Outputs](#inputs--outputs)
- [Integration](#integration)

## Purpose

Generate multiple solution approaches for software projects through:

- Parallel ideation tracks (architecture, UX, integration, value delivery)
- Technical-business alignment from inception
- Hidden assumption discovery
- Innovation beyond obvious solutions

## Usage

```bash
# Run brainstorming session
bmad bmm *brainstorm-project

# Or via Analyst agent
*brainstorm-project
```

## Process

### 1. Context Capture

- Business objectives and constraints
- Technical environment
- Stakeholder needs
- Success criteria

### 2. Parallel Ideation

- **Architecture Track**: Technical approaches with trade-offs
- **UX Track**: Interface paradigms and user journeys
- **Integration Track**: System connection patterns
- **Value Track**: Feature prioritization and delivery

### 3. Solution Synthesis

- Evaluate feasibility and impact
- Align with strategic objectives
- Surface hidden assumptions
- Generate recommendations

## Inputs & Outputs

### Inputs

| Input             | Type                | Purpose                                       |
| ----------------- | ------------------- | --------------------------------------------- |
| Project Context   | Document            | Business objectives, environment, constraints |
| Problem Statement | Document (optional) | Core challenge or opportunity                 |

### Outputs

| Output                   | Content                                     |
| ------------------------ | ------------------------------------------- |
| Architecture Proposals   | Multiple approaches with trade-off analysis |
| Value Framework          | Prioritized features aligned to objectives  |
| Risk Analysis            | Dependencies, challenges, opportunities     |
| Strategic Recommendation | Synthesized direction with rationale        |

## Integration

### Workflow Chain

1. **brainstorm-project** ← Current step
2. research (optional deep dive)
3. product-brief (strategic document)
4. Phase 2 planning (PRD/tech-spec)

### Feeds Into

- Product Brief development
- Architecture decisions
- PRD requirements
- Epic prioritization

## Best Practices

1. **Prepare context** - Gather business and technical background
2. **Think broadly** - Explore non-obvious approaches
3. **Document assumptions** - Capture implicit beliefs
4. **Consider constraints** - Technical, organizational, resource
5. **Focus on value** - Align to business objectives

## Configuration

```yaml
# bmad/bmm/config.yaml
output_folder: ./output
project_name: Your Project
```

## Related Workflows

- [Research](../research/README.md) - Deep investigation
- [Product Brief](../product-brief/README.md) - Strategic planning
- [PRD](../../2-plan-workflows/prd/README.md) - Requirements document

---

Part of BMad Method v6 - Phase 1 Analysis workflows
@@ -1,221 +0,0 @@

# Game Brief Workflow

## Overview

The Game Brief workflow is the starting point for game projects in the BMad Method. It's a lightweight, interactive brainstorming and planning session that captures your game vision before diving into a detailed Game Design Document (GDD).

## Purpose

**Game Brief answers:**

- What game are you making?
- Who is it for?
- What makes it unique?
- Is it feasible?

**This is NOT:**

- A full Game Design Document
- A technical specification
- A production plan
- A detailed content outline

## When to Use This Workflow

Use the game-brief workflow when:

- Starting a new game project from scratch
- Exploring a game idea before committing
- Pitching a concept to team/stakeholders
- Validating market fit and feasibility
- Preparing input for the GDD workflow

Skip it if:

- You already have a complete GDD
- You're continuing an existing project
- You're prototyping without planning needs

## Workflow Structure

### Interactive Mode (Recommended)

Work through each section collaboratively:

1. Game Vision (concept, pitch, vision statement)
2. Target Market (audience, competition, positioning)
3. Game Fundamentals (pillars, mechanics, experience goals)
4. Scope and Constraints (platforms, timeline, budget, team)
5. Reference Framework (inspiration, competitors, differentiators)
6. Content Framework (world, narrative, volume)
7. Art and Audio Direction (visual and audio style)
8. Risk Assessment (risks, challenges, mitigation)
9. Success Criteria (MVP, metrics, launch goals)
10. Next Steps (immediate actions, research, questions)

### YOLO Mode

The AI generates a complete draft, then you refine sections iteratively.

## Key Features

### Optional Inputs

The workflow can incorporate:

- Market research
- Brainstorming results
- Competitive analysis
- Design notes
- Reference game lists

### Realistic Scoping

The workflow actively helps you:

- Identify scope vs. resource mismatches
- Assess technical feasibility
- Recognize market risks
- Plan mitigation strategies

### Clear Handoff

Output is designed to feed directly into:

- GDD workflow (2-plan phase)
- Prototyping decisions
- Team discussions
- Stakeholder presentations

## Output

**game-brief-{game_name}-{date}.md** containing:

- Executive summary
- Complete game vision
- Target market analysis
- Core gameplay definition
- Scope and constraints
- Reference framework
- Art/audio direction
- Risk assessment
- Success criteria
- Next steps

## Integration with BMad Method

```
1-analysis/game-brief (You are here)
    ↓
2-plan-workflows/gdd (Game Design Document)
    ↓
2-plan-workflows/narrative (Optional: Story-heavy games)
    ↓
3-solutioning (Technical architecture, engine selection)
    ↓
4-dev-stories (Implementation stories)
```

## Comparison: Game Brief vs. GDD

| Aspect              | Game Brief                  | GDD                       |
| ------------------- | --------------------------- | ------------------------- |
| **Purpose**         | Validate concept            | Design for implementation |
| **Detail Level**    | High-level vision           | Detailed specifications   |
| **Time Investment** | 1-2 hours                   | 4-10 hours                |
| **Audience**        | Self, team, stakeholders    | Development team          |
| **Scope**           | Concept validation          | Implementation roadmap    |
| **Format**          | Conversational, exploratory | Structured, comprehensive |
| **Output**          | 3-5 pages                   | 10-30+ pages              |

## Comparison: Game Brief vs. Product Brief

| Aspect            | Game Brief                   | Product Brief                     |
| ----------------- | ---------------------------- | --------------------------------- |
| **Focus**         | Player experience, fun, feel | User problems, features, value    |
| **Metrics**       | Engagement, retention, fun   | Revenue, conversion, satisfaction |
| **Core Elements** | Gameplay pillars, mechanics  | Problem/solution, user segments   |
| **References**    | Other games                  | Competitors, market               |
| **Vision**        | Emotional experience         | Business outcomes                 |

## Example Use Case

### Scenario: Indie Roguelike Card Game

**Starting Point:**
"I want to make a roguelike card game with a twist"

**After Game Brief:**

- **Core Concept:** "A roguelike card battler where you play as emotions battling inner demons"
- **Target Audience:** Core gamers who love Slay the Spire, interested in mental health themes
- **Differentiator:** Emotional narrative system where deck composition affects story
- **MVP Scope:** 3 characters, 80 cards, 30 enemy types, 3 bosses, 6-hour first run
- **Platform:** PC (Steam) first, mobile later
- **Timeline:** 12 months with 2-person team
- **Key Risk:** Emotional theme might alienate hardcore roguelike fans
- **Mitigation:** Prototype early, test with target audience, offer "mechanical-only" mode

**Next Steps:**

1. Build card combat prototype (2 weeks)
2. Test emotional resonance with players
3. Proceed to GDD workflow if prototype validates

## Tips for Success

### Be Honest About Scope

The most common game dev failure is scope mismatch. Use this workflow to reality-check:

- Can your team actually build this?
- Is the timeline realistic?
- Do you have budget for assets?

### Focus on Player Experience

Don't think about code or implementation. Think about:

- What will players DO?
- How will they FEEL?
- Why will they CARE?

### Validate Early

The brief identifies assumptions and risks. Don't skip to GDD without:

- Prototyping risky mechanics
- Testing with target audience
- Validating market interest

### Use References Wisely

"Like X but with Y" is a starting point, not a differentiator. Push beyond:

- What specifically are you taking from reference games?
- What are you explicitly NOT doing?
- What's genuinely new?

## FAQ

**Q: Is this required before GDD?**
A: No, but highly recommended for new projects. You can start directly with GDD if you have a clear vision.

**Q: Can I use this for game jams?**
A: Yes, but use YOLO mode for speed. Focus on vision, mechanics, and MVP scope.

**Q: What if my game concept changes?**
A: Revisit and update the brief. It's a living document during early development.

**Q: How detailed should content volume estimates be?**
A: Rough orders of magnitude are fine. "~50 enemies," not "47 enemies with 3 variants each."

**Q: Should I complete this alone or with my team?**
A: Involve your team! Collaborative briefs catch blind spots and build shared vision.

## Related Workflows

- **Product Brief** (`1-analysis/product-brief`): For software products, not games
- **GDD** (`2-plan-workflows/gdd`): Next step after game brief
- **Narrative Design** (`2-plan-workflows/narrative`): For story-heavy games after GDD
- **Solutioning** (`3-solutioning`): Technical architecture after planning
@@ -1,180 +0,0 @@

# Product Brief Workflow

## Overview

An interactive product brief creation workflow that guides users through defining their product vision with multiple input sources and conversational collaboration. It supports both a structured interactive mode and a rapid "YOLO" mode for quick draft generation.

## Key Features

- **Dual Mode Operation** - Interactive step-by-step or rapid draft generation
- **Multi-Input Support** - Integrates market research, competitive analysis, and brainstorming results
- **Conversational Design** - Guides users through strategic thinking with probing questions
- **Executive Summary Generation** - Creates compelling summaries for stakeholder communication
- **Comprehensive Coverage** - Addresses all critical product planning dimensions
- **Stakeholder Ready** - Generates professional briefs suitable for PM handoff

## Usage

### Basic Invocation

```bash
workflow product-brief
```

### With Input Documents

```bash
# With market research
workflow product-brief --input market-research.md

# With multiple inputs
workflow product-brief --input market-research.md --input competitive-analysis.md
```

### Configuration

- **brief_format**: "comprehensive" (full detail) or "executive" (3-page limit)
- **autonomous**: false (requires user collaboration)
- **output_folder**: Location for the generated brief (see the sketch below)
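
As a point of reference, these options would sit in the workflow's configuration file. The layout below is an illustrative assumption, not the canonical workflow.yaml:

```yaml
# Hypothetical excerpt - actual workflow.yaml keys/structure may differ
brief_format: comprehensive   # or "executive" for the 3-page limit
autonomous: false             # requires user collaboration
output_folder: ./output       # where the generated brief is written
```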

## Workflow Structure

### Files Included

```
product-brief/
├── workflow.yaml      # Configuration and metadata
├── instructions.md    # Interactive workflow steps
├── template.md        # Product brief document structure
├── checklist.md       # Validation criteria
└── README.md          # This file
```

## Workflow Process

### Phase 1: Initialization and Context (Steps 0-2)

- **Project Setup**: Captures project name and basic context
- **Input Gathering**: Collects and analyzes available documents
- **Mode Selection**: Chooses interactive or YOLO collaboration approach
- **Context Extraction**: Identifies core problems and opportunities

### Phase 2: Interactive Development (Steps 3-12) - Interactive Mode

- **Problem Definition**: Deep dive into user pain points and market gaps
- **Solution Articulation**: Develops clear value proposition and approach
- **User Segmentation**: Defines primary and secondary target audiences
- **Success Metrics**: Establishes measurable goals and KPIs
- **MVP Scoping**: Ruthlessly defines minimum viable features
- **Financial Planning**: Assesses ROI and strategic alignment
- **Technical Context**: Captures platform and technology considerations
- **Risk Assessment**: Identifies constraints, assumptions, and unknowns

### Phase 3: Rapid Generation (Steps 3-4) - YOLO Mode

- **Complete Draft**: Generates full brief based on initial context
- **Iterative Refinement**: User-guided section improvements
- **Quality Validation**: Ensures completeness and consistency

### Phase 4: Finalization (Steps 13-15)

- **Executive Summary**: Creates compelling overview for stakeholders
- **Supporting Materials**: Compiles research summaries and references
- **Final Review**: Quality check and handoff preparation

## Output

### Generated Files

- **Primary output**: product-brief-{project_name}-{date}.md
- **Supporting files**: Research summaries and stakeholder input documentation

### Output Structure

1. **Executive Summary** - High-level product concept and value proposition
2. **Problem Statement** - Detailed problem analysis with evidence
3. **Proposed Solution** - Core approach and key differentiators
4. **Target Users** - Primary and secondary user segments with personas
5. **Goals and Success Metrics** - Business objectives and measurable KPIs
6. **MVP Scope** - Must-have features and out-of-scope items
7. **Post-MVP Vision** - Phase 2 features and long-term roadmap
8. **Financial Impact** - Investment requirements and ROI projections
9. **Strategic Alignment** - Connection to company OKRs and initiatives
10. **Technical Considerations** - Platform requirements and preferences
11. **Constraints and Assumptions** - Resource limits and key assumptions
12. **Risks and Open Questions** - Risk assessment and research needs
13. **Supporting Materials** - Research summaries and references

## Requirements

No special requirements - designed to work with or without existing documentation.

## Best Practices

### Before Starting

1. **Gather Available Research**: Collect any market research, competitive analysis, or user feedback
2. **Define Stakeholder Audience**: Know who will use this brief for decision-making
3. **Set Time Boundaries**: Interactive mode requires 60-90 minutes for quality results

### During Execution

1. **Be Specific**: Avoid generic statements - provide concrete examples and data
2. **Think Strategically**: Focus on "why" and "what" rather than "how"
3. **Challenge Assumptions**: Use the conversation to test and refine your thinking
4. **Scope Ruthlessly**: Resist feature creep in MVP definition

### After Completion

1. **Validate with Checklist**: Use included criteria to ensure completeness
2. **Stakeholder Review**: Share executive summary first, then full brief
3. **Iterate Based on Feedback**: Product briefs should evolve with new insights

## Troubleshooting

### Common Issues

**Issue**: Brief lacks specificity or contains vague statements

- **Solution**: Restart problem definition with concrete examples and measurable impacts
- **Check**: Ensure each section answers "so what?" and provides actionable insights

**Issue**: MVP scope is too large or undefined

- **Solution**: Use the "what's the minimum to validate core hypothesis?" filter
- **Check**: Verify that each MVP feature is truly essential for initial value delivery

**Issue**: Missing strategic context or business justification

- **Solution**: Return to financial impact and strategic alignment sections
- **Check**: Ensure connection to company goals and clear ROI potential

## Customization

To customize this workflow:

1. **Modify Questions**: Update instructions.md to add industry-specific or company-specific prompts
2. **Adjust Template**: Customize template.md sections for organizational brief standards
3. **Add Validation**: Extend checklist.md with company-specific quality criteria
4. **Configure Modes**: Adjust brief_format settings for different output styles

## Version History

- **v6.0.0** - Interactive conversational design with dual modes
  - Interactive and YOLO mode support
  - Multi-input document integration
  - Executive summary generation
  - Strategic alignment focus

## Support

For issues or questions:

- Review the workflow creation guide at `/bmad/bmb/workflows/create-workflow/workflow-creation-guide.md`
- Validate output using `checklist.md`
- Consider running the market research workflow first if lacking business context
- Consult BMAD documentation for product planning methodology

---

_Part of the BMad Method v6 - BMM (Method) Module_
@@ -1,454 +0,0 @@

# Research Workflow - Multi-Type Research System

## Overview

The Research Workflow is a comprehensive, adaptive research system that supports multiple research types through an intelligent router pattern. This workflow consolidates various research methodologies into a single, powerful tool that adapts to your specific research needs - from market analysis to technical evaluation to AI prompt generation.

**Version 2.0.0** - Multi-type research system with router-based architecture

## Key Features

### 🔀 Intelligent Research Router

- **6 Research Types**: Market, Deep Prompt, Technical, Competitive, User, Domain
- **Dynamic Instructions**: Loads appropriate instruction set based on research type
- **Adaptive Templates**: Selects optimal output format for research goal
- **Context-Aware**: Adjusts frameworks and methods per research type

### 🔍 Market Research (Type: `market`)

- Real-time web research for current market data
- TAM/SAM/SOM calculations with multiple methodologies
- Competitive landscape analysis and positioning
- Customer persona development and Jobs-to-be-Done
- Porter's Five Forces and strategic frameworks
- Go-to-market strategy recommendations

### 🤖 Deep Research Prompt Generation (Type: `deep_prompt`)

- **Optimized for AI Research Platforms**: ChatGPT Deep Research, Gemini, Grok DeepSearch, Claude Projects
- **Prompt Engineering Best Practices**: Multi-stage research workflows, iterative refinement
- **Platform-Specific Optimization**: Tailored prompts for each AI research tool
- **Context Packaging**: Structures background information for optimal AI understanding
- **Research Question Refinement**: Transforms vague questions into precise research prompts

### 🏗️ Technical/Architecture Research (Type: `technical`)

- Technology evaluation and comparison matrices
- Architecture pattern research and trade-off analysis
- Framework/library assessment with pros/cons
- Technical feasibility studies
- Cost-benefit analysis for technology decisions
- Architecture Decision Records (ADR) generation

### 🎯 Competitive Intelligence (Type: `competitive`)

- Deep competitor analysis and profiling
- Competitive positioning and gap analysis
- Strategic group mapping
- Feature comparison matrices
- Pricing strategy analysis
- Market share and growth tracking

### 👥 User Research (Type: `user`)

- Customer insights and behavioral analysis
- Persona development with demographics and psychographics
- Jobs-to-be-Done framework application
- Customer journey mapping
- Pain point identification
- Willingness-to-pay analysis

### 🌐 Domain/Industry Research (Type: `domain`)

- Industry deep dives and trend analysis
- Regulatory landscape assessment
- Domain expertise synthesis
- Best practices identification
- Standards and compliance requirements
- Emerging patterns and disruptions

## Usage

### Basic Invocation

```bash
workflow research
```

The workflow will prompt you to select a research type.

### Direct Research Type Selection

```bash
# Market research
workflow research --type market

# Deep research prompt generation
workflow research --type deep_prompt

# Technical evaluation
workflow research --type technical

# Competitive intelligence
workflow research --type competitive

# User research
workflow research --type user

# Domain analysis
workflow research --type domain
```

### With Input Documents

```bash
workflow research --type market --input product-brief.md --input competitor-list.md
workflow research --type technical --input requirements.md --input architecture.md
workflow research --type deep_prompt --input research-question.md
```

### Configuration Options

These can be customized through `workflow.yaml` (a sketch follows below):

- **research_depth**: `quick`, `standard`, or `comprehensive`
- **enable_web_research**: `true`/`false` for real-time data gathering
- **enable_competitor_analysis**: `true`/`false` (market/competitive types)
- **enable_financial_modeling**: `true`/`false` (market type)
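
Putting those together, a configuration block might look roughly like the following. Treat the key names as illustrative of the options listed above rather than the exact shipped schema:

```yaml
# Hypothetical excerpt - confirm key names against the shipped workflow.yaml
research_depth: standard            # quick | standard | comprehensive
enable_web_research: true           # real-time data gathering
enable_competitor_analysis: true    # market/competitive types
enable_financial_modeling: false    # market type
```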

## Workflow Structure

### Files Included

```
research/
├── workflow.yaml                  # Multi-type configuration
├── instructions-router.md         # Router logic (loads correct instructions)
├── instructions-market.md         # Market research workflow
├── instructions-deep-prompt.md    # Deep prompt generation workflow
├── instructions-technical.md      # Technical evaluation workflow
├── template-market.md             # Market research report template
├── template-deep-prompt.md        # Research prompt template
├── template-technical.md          # Technical evaluation template
├── checklist.md                   # Universal validation criteria
├── README.md                      # This file
└── claude-code/                   # Claude Code enhancements (optional)
    ├── injections.yaml            # Integration configuration
    └── sub-agents/                # Specialized research agents
        ├── bmm-market-researcher.md
        ├── bmm-trend-spotter.md
        ├── bmm-data-analyst.md
        ├── bmm-competitor-analyzer.md
        ├── bmm-user-researcher.md
        └── bmm-technical-evaluator.md
```

## Workflow Process

### Phase 1: Research Type Selection and Setup

1. Router presents research type menu
2. User selects research type (market, deep_prompt, technical, competitive, user, domain)
3. Router loads appropriate instructions and template
4. Gather research parameters and inputs

### Phase 2: Research Type-Specific Execution

**For Market Research:**

1. Define research objectives and market boundaries
2. Conduct web research across multiple sources
3. Calculate TAM/SAM/SOM with triangulation
4. Develop customer segments and personas
5. Analyze competitive landscape
6. Apply industry frameworks (Porter's Five Forces, etc.)
7. Identify trends and opportunities
8. Develop strategic recommendations
9. Create financial projections (optional)
10. Compile comprehensive report

**For Deep Prompt Generation:**

1. Analyze research question or topic
2. Identify optimal AI research platform (ChatGPT, Gemini, Grok, Claude)
3. Structure research context and background
4. Generate platform-optimized prompt
5. Create multi-stage research workflow
6. Define iteration and refinement strategy
7. Package with context documents
8. Provide execution guidance

**For Technical Research:**

1. Define technical requirements and constraints
2. Identify technologies/frameworks to evaluate
3. Research each option (documentation, community, maturity)
4. Create comparison matrix with criteria
5. Perform trade-off analysis
6. Calculate cost-benefit for each option
7. Generate Architecture Decision Record (ADR)
8. Provide recommendation with rationale

**For Competitive/User/Domain:**

- Uses market research workflow with specific focus
- Adapts questions and frameworks to research type
- Customizes output format for target audience

### Phase 3: Validation and Delivery

1. Review outputs against checklist
2. Validate completeness and quality
3. Generate final report/document
4. Provide next steps and recommendations

## Output

### Generated Files by Research Type

**Market Research:**

- `market-research-{product_name}-{date}.md`
- Comprehensive market analysis report (10+ sections)

**Deep Research Prompt:**

- `deep-research-prompt-{date}.md`
- Optimized AI research prompt with context and instructions

**Technical Research:**

- `technical-research-{date}.md`
- Technology evaluation with comparison matrix and ADR

**Competitive Intelligence:**

- `competitive-intelligence-{date}.md`
- Detailed competitor analysis and positioning

**User Research:**

- `user-research-{date}.md`
- Customer insights and persona documentation

**Domain Research:**

- `domain-research-{date}.md`
- Industry deep dive with trends and best practices

## Requirements

### All Research Types

- BMAD Core v6 project structure
- Web search capability (for real-time research)
- Access to research data sources

### Market Research

- Product or business description
- Target customer hypotheses (optional)
- Known competitors list (optional)

### Deep Prompt Research

- Research question or topic
- Background context documents (optional)
- Target AI platform preference (optional)

### Technical Research

- Technical requirements document
- Current architecture (if brownfield)
- Technical constraints list

## Best Practices

### Before Starting

1. **Know Your Research Goal**: Select the most appropriate research type
2. **Gather Context**: Collect relevant documents before starting
3. **Set Depth Level**: Choose appropriate research_depth (quick/standard/comprehensive)
4. **Define Success Criteria**: What decisions will this research inform?

### During Execution

**Market Research:**

- Provide specific product/service details
- Validate market boundaries carefully
- Review TAM/SAM/SOM assumptions
- Challenge competitive positioning

**Deep Prompt Generation:**

- Be specific about research platform target
- Provide rich context documents
- Clarify expected research outcome
- Define iteration strategy

**Technical Research:**

- List all evaluation criteria upfront
- Weight criteria by importance
- Consider long-term implications
- Include cost analysis

### After Completion

1. Review using the validation checklist
2. Update with any missing information
3. Share with stakeholders for feedback
4. Schedule follow-up research if needed
5. Document decisions made based on research

## Research Frameworks Available

### Market Research Frameworks

- TAM/SAM/SOM Analysis
- Porter's Five Forces
- Jobs-to-be-Done (JTBD)
- Technology Adoption Lifecycle
- SWOT Analysis
- Value Chain Analysis

### Technical Research Frameworks

- Trade-off Analysis Matrix
- Architecture Decision Records (ADR)
- Technology Radar
- Comparison Matrix
- Cost-Benefit Analysis
- Technical Risk Assessment

### Deep Prompt Frameworks

- ChatGPT Deep Research Best Practices
- Gemini Deep Research Framework
- Grok DeepSearch Optimization
- Claude Projects Methodology
- Iterative Prompt Refinement

## Data Sources

The workflow leverages multiple data sources:

- Industry reports and publications
- Government statistics and databases
- Financial reports and SEC filings
- News articles and press releases
- Academic research papers
- Technical documentation and RFCs
- GitHub repositories and discussions
- Stack Overflow and developer forums
- Market research firm reports
- Social media and communities
- Patent databases
- Benchmarking studies

## Claude Code Enhancements

### Available Subagents

1. **bmm-market-researcher** - Market intelligence gathering
2. **bmm-trend-spotter** - Emerging trends and weak signals
3. **bmm-data-analyst** - Quantitative analysis and modeling
4. **bmm-competitor-analyzer** - Competitive intelligence
5. **bmm-user-researcher** - Customer insights and personas
6. **bmm-technical-evaluator** - Technology assessment

These are automatically invoked during workflow execution if Claude Code integration is configured.

## Troubleshooting

### Issue: Don't know which research type to choose

- **Solution**: Start with the research question - "What do I need to know?"
  - Market viability? → `market`
  - Best technology? → `technical`
  - Need AI to research deeper? → `deep_prompt`
  - Who are competitors? → `competitive`
  - Who are users? → `user`
  - Industry understanding? → `domain`

### Issue: Market research results seem incomplete

- **Solution**: Increase research_depth to `comprehensive`
- **Check**: Enable web_research in workflow.yaml
- **Try**: Run competitive and user research separately for more depth

### Issue: Deep prompt doesn't work with target platform

- **Solution**: Review platform-specific best practices in generated prompt
- **Check**: Ensure context documents are included
- **Try**: Regenerate with different platform selection

### Issue: Technical comparison is subjective

- **Solution**: Add more objective criteria (performance metrics, cost, community size)
- **Check**: Weight criteria by business importance
- **Try**: Run pilot implementations for top 2 options

## Customization

### Adding New Research Types

1. Create new instructions file: `instructions-{type}.md`
2. Create new template file: `template-{type}.md`
3. Add research type to `workflow.yaml` `research_types` section (see the sketch below)
4. Update router logic in `instructions-router.md`
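
For instance, a new `regulatory` type might be registered along these lines. The key names and nesting here are assumptions for illustration; check the existing `workflow.yaml` entries for the real schema:

```yaml
# Hypothetical entry - mirror the shape of existing research_types entries
research_types:
  regulatory:
    name: Regulatory Research
    instructions: instructions-regulatory.md
    template: template-regulatory.md
```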

### Modifying Existing Research Types

1. Edit appropriate `instructions-{type}.md` file
2. Update corresponding `template-{type}.md` if needed
3. Adjust validation criteria in `checklist.md`

### Creating Custom Frameworks

Add to `workflow.yaml` `frameworks` section under appropriate research type.

## Version History

- **v2.0.0** - Multi-type research system with router architecture
  - Added deep_prompt research type for AI research platform optimization
  - Added technical research type for technology evaluation
  - Consolidated competitive, user, domain under market with focus variants
  - Router-based instruction loading
  - Template selection by research type
  - Enhanced Claude Code subagent support
- **v1.0.0** - Initial market-research-only implementation
  - Single-purpose market research workflow
  - Now deprecated in favor of the v2.0.0 multi-type system

## Support

For issues or questions:

- Review the workflow creation guide at `/bmad/bmb/workflows/create-workflow/workflow-creation-guide.md`
- Check validation against `checklist.md`
- Examine router logic in `instructions-router.md`
- Review research type-specific instructions
- Consult BMAD Method v6 documentation

## Migration from v1.0 market-research

If you're used to the standalone `market-research` workflow:

```bash
# Old way
workflow market-research

# New way
workflow research --type market
# Or just: workflow research (then select market)
```

All market research functionality is preserved and enhanced in v2.0.0.

---

_Part of the BMad Method v6 - BMM (BMad Method) Module - Empowering systematic research and analysis_
@@ -1,258 +0,0 @@

---
last-redoc-date: 2025-10-01
---

# Project Planning Workflow (Phase 2)

The Phase 2 Planning workflow is **scale-adaptive**, meaning it automatically determines the right level of planning documentation based on project complexity (Levels 0-4). This ensures planning overhead matches project value—from minimal tech specs for bug fixes to comprehensive PRDs for enterprise platforms.

## Scale-Adaptive Flow (Levels 0-4)

The workflow routes to different planning approaches based on project level:

### Level 0 - Single File Change / Bug Fix

**Planning:** Tech-spec only (lightweight implementation plan)
**Output:** `tech-spec.md` with single story
**Next Phase:** Direct to implementation (Phase 4)

### Level 1 - Small Feature (1-3 files, 2-5 stories)

**Planning:** Tech-spec only (implementation-focused)
**Output:** `tech-spec.md` with epic breakdown and stories
**Next Phase:** Direct to implementation (Phase 4)

### Level 2 - Feature Set / Small Project (5-15 stories, 1-2 epics)

**Planning:** PRD (product-focused) + Tech-spec (technical planning)
**Output:** `PRD.md`, `epics.md`, `tech-spec.md`
**Next Phase:** Tech-spec workflow (lightweight solutioning), then implementation (Phase 4)
**Note:** Level 2 uses tech-spec instead of full architecture to keep planning lightweight

### Level 3 - Medium Project (15-40 stories, 2-5 epics)

**Planning:** PRD (strategic product document)
**Output:** `PRD.md`, `epics.md`
**Next Phase:** create-architecture workflow (Phase 3), then implementation (Phase 4)

### Level 4 - Large/Enterprise Project (40-100+ stories, 5-10 epics)

**Planning:** PRD (comprehensive product specification)
**Output:** `PRD.md`, `epics.md`
**Next Phase:** create-architecture workflow (Phase 3), then implementation (Phase 4)

**Critical Distinction:**

- **Levels 0-1:** No PRD, tech-spec only
- **Level 2:** PRD + tech-spec (skips full architecture)
- **Levels 3-4:** PRD → full create-architecture workflow

Critical to v6's flow improvement is this workflow's integration with the bmm-workflow-status.md tracking document, which maintains project state across sessions, tracks which agents participate in each phase, and provides continuity for multi-session planning efforts. The workflow can resume from any point, intelligently detecting existing artifacts and determining next steps without redundant work. For UX-heavy projects, it can generate standalone UX specifications or AI frontend prompts from existing specs.

## Key Features

- **Scale-adaptive planning** - Automatically determines output based on project complexity
- **Intelligent routing** - Uses router system to load appropriate instruction sets
- **Continuation support** - Can resume from previous sessions and handle incremental work
- **Multi-level outputs** - Supports 5 project levels (0-4) with appropriate artifacts
- **Input integration** - Leverages product briefs and market research when available
- **Template-driven** - Uses validated templates for consistent output structure

### Configuration

The workflow adapts automatically based on project assessment, but key configuration options include (see the sketch below):

- **scale_parameters**: Defines story/epic counts for each project level
- **output_folder**: Where all generated documents are stored
- **project_name**: Used in document names and templates
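
To make that concrete, the options might look roughly like this in config.yaml. The exact keys and nesting are illustrative assumptions based on the level definitions above, not the shipped schema:

```yaml
# Hypothetical excerpt - actual config.yaml structure may differ
project_name: Your Project        # used in document names and templates
output_folder: ./output           # where generated documents are stored
scale_parameters:                 # story/epic counts per project level
  level_1: { stories: 2-5 }
  level_2: { stories: 5-15, epics: 1-2 }
  level_3: { stories: 15-40, epics: 2-5 }
```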

## Workflow Structure

### Files Included

```
2-plan-workflows/
├── README.md                          # Overview and usage details
├── checklist.md                       # Shared validation criteria
├── prd/
│   ├── epics-template.md              # Epic breakdown template
│   ├── instructions.md                # Level 2-4 PRD instructions
│   ├── prd-template.md                # Product Requirements Document template
│   └── workflow.yaml
├── tech-spec/
│   ├── epics-template.md              # Epic-to-story handoff template
│   ├── instructions-level0-story.md
│   ├── instructions-level1-stories.md
│   ├── instructions.md                # Level 0-1 tech-spec instructions
│   ├── tech-spec-template.md          # Technical Specification template
│   ├── user-story-template.md         # Story template for Level 0/1
│   └── workflow.yaml
├── gdd/
│   ├── instructions-gdd.md            # Game Design Document instructions
│   ├── gdd-template.md                # GDD template
│   ├── game-types.csv                 # Genre catalog
│   ├── game-types/                    # Genre-specific templates
│   └── workflow.yaml
├── narrative/
│   ├── instructions-narrative.md      # Narrative design instructions
│   ├── narrative-template.md          # Narrative planning template
│   └── workflow.yaml
└── create-ux-design/
    ├── instructions.md                # UX design instructions
    ├── ux-design-template.md          # UX design template
    ├── checklist.md                   # UX design validation checklist
    └── workflow.yaml
```

## Workflow Process

### Phase 1: Assessment and Routing (Steps 1-5)

- **Project Analysis**: Determines project type (greenfield/brownfield/legacy)
- **Scope Assessment**: Classifies into 5 levels based on complexity
- **Document Discovery**: Identifies existing inputs and documentation
- **Workflow Routing**: Loads appropriate instruction set based on level
- **Continuation Handling**: Resumes from previous work when applicable

### Phase 2: Level-Specific Planning (Steps vary by level)

**Level 0 (Single File Change / Bug Fix)**:

- Tech-spec only workflow
- Single story implementation plan
- Direct to Phase 4 (implementation)

**Level 1 (Small Feature)**:

- Tech-spec only workflow
- Epic breakdown with 2-5 stories
- Direct to Phase 4 (implementation)

**Level 2 (Feature Set / Small Project)**:

- PRD workflow (strategic product document)
- Generates `PRD.md` and `epics.md`
- Then runs tech-spec workflow (lightweight solutioning)
- Then to Phase 4 (implementation)

**Level 3-4 (Medium to Enterprise Projects)**:

- PRD workflow (comprehensive product specification)
- Generates `PRD.md` and `epics.md`
- Hands off to Phase 3 (create-architecture workflow)
- Full architecture design before implementation

### Phase 3: Validation and Handoff (Final steps)

- **Document Review**: Validates outputs against checklists
- **Architect Preparation**: For Level 3-4, prepares handoff materials
- **Next Steps**: Provides guidance for development phase

## Output

### Generated Files

- **Primary output**: PRD.md (except Level 0), tech-spec.md, bmm-workflow-status.md
- **Supporting files**: epics.md (Level 2-4), PRD-validation-report.md (if validation run)

### Output Structure by Level

**Level 0 - Tech Spec Only**:

- `tech-spec.md` - Single story implementation plan
- Direct to implementation

**Level 1 - Tech Spec with Epic Breakdown**:

- `tech-spec.md` - Epic breakdown with 2-5 stories
- Direct to implementation

**Level 2 - PRD + Tech Spec**:

- `PRD.md` - Strategic product document (goals, requirements, user journeys, UX principles, UI goals, epic list, scope)
- `epics.md` - Tactical implementation roadmap (detailed story breakdown)
- `tech-spec.md` - Lightweight technical planning (generated after PRD)
- Then to implementation

**Level 3-4 - PRD + Full Architecture**:

- `PRD.md` - Comprehensive product specification
- `epics.md` - Complete epic/story breakdown
- Hands off to create-architecture workflow (Phase 3)
- `architecture.md` - Generated by architect workflow
- Then to implementation

## Requirements

- **Input Documents**: Product brief and/or market research (recommended but not required)
- **Project Configuration**: Valid config.yaml with project_name and output_folder
- **Assessment Readiness**: Clear understanding of project scope and objectives

## Best Practices

### Before Starting

1. **Gather Context**: Collect any existing product briefs, market research, or requirements
2. **Define Scope**: Have a clear sense of project boundaries and complexity
3. **Prepare Stakeholders**: Ensure key stakeholders are available for input if needed

### During Execution

1. **Be Honest About Scope**: Accurate assessment ensures appropriate planning depth
2. **Leverage Existing Work**: Reference previous documents and avoid duplication
3. **Think Incrementally**: Remember that planning can evolve - start with what you know

### After Completion

1. **Validate Against Checklist**: Use included validation criteria to ensure completeness
2. **Share with Stakeholders**: Distribute appropriate documents to relevant team members
3. **Prepare for Architecture**: For Level 3-4 projects, ensure architect has complete context

## Troubleshooting

### Common Issues

**Issue**: Workflow creates wrong level of documentation

- **Solution**: Review project assessment and restart with correct scope classification
- **Check**: Verify the bmm-workflow-status.md reflects actual project complexity

**Issue**: Missing input documents cause incomplete planning

- **Solution**: Gather recommended inputs or proceed with manual context gathering
- **Check**: Ensure critical business context is captured even without formal documents

**Issue**: Continuation from previous session fails

- **Solution**: Check for existing bmm-workflow-status.md and ensure output folder is correct
- **Check**: Verify previous session completed at a valid checkpoint

## Customization

To customize this workflow:

1. **Modify Assessment Logic**: Update instructions-router.md to adjust level classification
2. **Adjust Templates**: Customize PRD, tech-spec, or epic templates for organizational needs
3. **Add Validation**: Extend checklist.md with organization-specific quality criteria
4. **Configure Outputs**: Modify workflow.yaml to change file naming or structure

## Version History

- **v6.0.0** - Scale-adaptive architecture with intelligent routing
  - Multi-level project support (0-4)
  - Continuation and resumption capabilities
  - Template-driven output generation
  - Input document integration

## Support

For issues or questions:

- Review the workflow creation guide at `/bmad/bmb/workflows/create-workflow/workflow-creation-guide.md`
- Validate output using `checklist.md`
- Consult project assessment in `bmm-workflow-status.md`
- Check continuation status in existing output documents

---

_Part of the BMad Method v6 - BMM (Method) Module_
@@ -1,222 +0,0 @@

# Game Design Document (GDD) Workflow

This folder contains the GDD workflow for game projects, replacing the traditional PRD approach with game-specific documentation.

## Overview

The GDD workflow creates a comprehensive Game Design Document that captures:

- Core gameplay mechanics and pillars
- Game type-specific elements (RPG systems, platformer movement, puzzle mechanics, etc.)
- Level design framework
- Art and audio direction
- Technical specifications (platform-agnostic)
- Development epics

## Architecture

### Universal Template

`gdd-template.md` contains sections common to ALL game types:

- Executive Summary
- Goals and Context
- Core Gameplay
- Win/Loss Conditions
- Progression and Balance
- Level Design Framework
- Art and Audio Direction
- Technical Specs
- Development Epics
- Success Metrics

### Game-Type-Specific Injection

The template includes a `{{GAME_TYPE_SPECIFIC_SECTIONS}}` placeholder that gets replaced with game-type-specific content.
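
For a sense of the mechanism, a fragment pairs genre sections with its own placeholders that the workflow elicits and fills before injection. This excerpt is a hypothetical illustration based on the action-platformer sections listed below, not an actual fragment file:

```
## Movement System

- Jump mechanics: {{jump_mechanics}}
- Air control: {{air_control}}

## Combat System

- Attack types: {{attack_types}}
```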

### Game Types Registry

`game-types.csv` defines 24+ game types with:

- **id**: Unique identifier (e.g., `action-platformer`, `rpg`, `roguelike`)
- **name**: Human-readable name
- **description**: Brief description of the game type
- **genre_tags**: Searchable tags
- **fragment_file**: Path to type-specific template fragment (an illustrative row follows below)
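
A registry row would then look something like this. The column order, tag separator, and values are assumptions based on the fields above, not a verbatim line from `game-types.csv`:

```
id,name,description,genre_tags,fragment_file
action-platformer,Action Platformer,Fast-paced jumping and combat,action;platformer,game-types/action-platformer.md
```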

### Game-Type Fragments

Located in the `game-types/` folder, these markdown files contain sections specific to each game type:

**action-platformer.md**:

- Movement System (jump mechanics, air control, special moves)
- Combat System (attack types, combos, enemy AI)
- Level Design Patterns (platforming challenges, combat arenas)
- Player Abilities and Unlocks

**rpg.md**:

- Character System (stats, classes, leveling)
- Inventory and Equipment
- Quest System
- World and Exploration
- NPC and Dialogue
- Combat System

**puzzle.md**:

- Core Puzzle Mechanics
- Puzzle Progression
- Level Structure
- Player Assistance
- Replayability

**roguelike.md**:

- Run Structure
- Procedural Generation
- Permadeath and Progression
- Item and Upgrade System
- Character Selection
- Difficulty Modifiers

...and 20+ more game types!
## Workflow Flow
|
||||
|
||||
1. **Router Detection** (instructions-router.md):
|
||||
- Step 3 asks for project type
|
||||
- If "Game" selected → sets `workflow_type = "gdd"`
|
||||
- Skips standard level classification
|
||||
- Jumps to GDD-specific assessment
|
||||
|
||||
2. **Game Type Selection** (instructions-gdd.md Step 1):
|
||||
- Presents 9 common game types + "Other"
|
||||
- Maps selection to `game-types.csv`
|
||||
- Loads corresponding fragment file
|
||||
- Stores `game_type` for injection
|
||||
|
||||
3. **Universal GDD Sections** (Steps 2-5, 7-13):
|
||||
- Platform and target audience
|
||||
- Goals and context
|
||||
- Core gameplay (pillars, loop, win/loss)
|
||||
- Mechanics and controls
|
||||
- Progression and balance
|
||||
- Level design
|
||||
- Art and audio
|
||||
- Technical specs
|
||||
- Epics and metrics
|
||||
|
||||
4. **Game-Type Injection** (Step 6):
|
||||
- Loads fragment from `game-types/{game_type}.md`
|
||||
- For each `{{placeholder}}` in fragment, elicits details
|
||||
- Injects completed sections into `{{GAME_TYPE_SPECIFIC_SECTIONS}}`
|
||||
|
||||
5. **Solutioning Handoff** (Step 14):
|
||||
- Routes to `3-solutioning` workflow
|
||||
- Platform/engine specifics handled by solutioning registry
|
||||
- Game-\* entries in solutioning `registry.csv` provide engine-specific guidance
|
||||
|
||||
## Platform vs. Game Type Separation
|
||||
|
||||
**GDD (this workflow)**: Game-type specifics
|
||||
|
||||
- What makes an RPG an RPG (stats, quests, inventory)
|
||||
- What makes a platformer a platformer (jump mechanics, level design)
|
||||
- Genre-defining mechanics and systems
|
||||
|
||||
**Solutioning (3-solutioning workflow)**: Platform/engine specifics
|
||||
|
||||
- Unity vs. Godot vs. Phaser vs. Unreal
|
||||
- 2D vs. 3D rendering
|
||||
- Physics engines
|
||||
- Input systems
|
||||
- Platform constraints (mobile, web, console)
|
||||
|
||||
This separation allows:
|
||||
|
||||
- Single universal GDD regardless of platform
|
||||
- Platform decisions made during architecture phase
|
||||
- Easy platform pivots without rewriting GDD
|
||||
|
||||
## Output
|
||||
|
||||
**GDD.md**: Complete game design document with:
|
||||
|
||||
- All universal sections filled
|
||||
- Game-type-specific sections injected
|
||||
- Ready for solutioning handoff
|
||||
|
||||
## Example Game Types
|
||||
|
||||
| ID | Name | Key Sections |
|
||||
| ----------------- | ----------------- | ------------------------------------------------- |
|
||||
| action-platformer | Action Platformer | Movement, Combat, Level Patterns, Abilities |
|
||||
| rpg | RPG | Character System, Inventory, Quests, World, NPCs |
|
||||
| puzzle | Puzzle | Puzzle Mechanics, Progression, Level Structure |
|
||||
| roguelike | Roguelike | Run Structure, Procgen, Permadeath, Items |
|
||||
| shooter | Shooter | Weapon Systems, Enemy AI, Arena Design |
|
||||
| strategy | Strategy | Resources, Units, AI, Victory Conditions |
|
||||
| metroidvania | Metroidvania | Interconnected World, Ability Gating, Exploration |
|
||||
| visual-novel | Visual Novel | Branching Story, Dialogue Trees, Choices |
|
||||
| tower-defense | Tower Defense | Waves, Tower Types, Placement, Economy |
|
||||
| card-game | Card Game | Deck Building, Card Mechanics, Turn System |
|
||||
|
||||
...and 14+ more!
|
||||
|
||||
## Adding New Game Types
|
||||
|
||||
1. Add row to `game-types.csv`:
|
||||
|
||||
```csv
|
||||
new-type,New Type Name,"Description",tags,new-type.md
|
||||
```
|
||||
|
||||
2. Create `game-types/new-type.md`:
|
||||
|
||||
```markdown
|
||||
## New Type Specific Elements
|
||||
|
||||
### System Name
|
||||
|
||||
{{system_placeholder}}
|
||||
|
||||
**Details:**
|
||||
|
||||
- Element 1
|
||||
- Element 2
|
||||
```
|
||||
|
||||
3. The workflow automatically uses it!
|
||||
|
||||
## Integration with Solutioning
|
||||
|
||||
When a game project completes the GDD and moves to solutioning:
|
||||
|
||||
1. Solutioning workflow reads `project_type == "game"`
|
||||
2. Loads GDD.md instead of PRD.md
|
||||
3. Matches game platforms to solutioning `registry.csv` game-\* entries
|
||||
4. Provides engine-specific guidance (Unity, Godot, Phaser, etc.)
|
||||
5. Generates architecture.md with platform decisions
|
||||
6. Creates per-epic tech specs
|
||||
|
||||
Example solutioning registry entries:
|
||||
|
||||
- `game-unity-2d`: Unity 2D games
|
||||
- `game-godot-3d`: Godot 3D games
|
||||
- `game-phaser`: Phaser web games
|
||||
- `game-unreal-3d`: Unreal Engine games
|
||||
- `game-custom-3d-rust`: Custom Rust game engines
|
||||
|
||||
## Philosophy
|
||||
|
||||
**Game projects are fundamentally different from software products**:
|
||||
|
||||
- Gameplay feel > feature lists
|
||||
- Playtesting > user testing
|
||||
- Game pillars > product goals
|
||||
- Mechanics > requirements
|
||||
- Fun > utility
|
||||
|
||||
The GDD workflow respects these differences while maintaining BMAD Method rigor.
|
||||
@@ -1 +0,0 @@

New Doc Incoming...

@@ -1,318 +0,0 @@

# Decision Architecture Workflow

## Overview

The Decision Architecture workflow is a complete reimagining of how architectural decisions are made in the BMAD Method. Instead of template-driven documentation, this workflow facilitates an intelligent conversation that produces a **decision-focused architecture document** optimized for preventing AI agent conflicts during implementation.

## Core Philosophy

**The Problem**: When multiple AI agents implement different parts of a system, they make conflicting technical decisions leading to incompatible implementations.

**The Solution**: A "consistency contract" that documents all critical technical decisions upfront, ensuring every agent follows the same patterns and uses the same technologies.

## Key Features

### 1. Starter Template Intelligence ⭐ NEW

- Discovers relevant starter templates (create-next-app, create-t3-app, etc.)
- Considers UX requirements when selecting templates (animations, accessibility, etc.)
- Searches for current CLI options and defaults
- Documents decisions made BY the starter template
- Makes remaining architectural decisions around the starter foundation
- First implementation story becomes "initialize with starter command"

### 2. Adaptive Facilitation

- Adjusts conversation style based on user skill level (beginner/intermediate/expert)
- Experts get rapid, technical discussions
- Beginners receive education and protection from complexity
- Everyone produces the same high-quality output

### 3. Dynamic Version Verification

- NEVER trusts hardcoded version numbers
- Uses WebSearch to find current stable versions
- Verifies versions during the conversation
- Documents only verified, current versions (see the sketch below)

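Conceptually, the verification loop looks something like the following. This is a sketch only: the `web_search` helper stands in for whatever search tooling the agent environment actually provides, and the version parsing is deliberately naive.

```python
import re

def verify_current_version(package: str, web_search) -> str:
    """Look up the current stable version rather than trusting a hardcoded one."""
    # web_search is a hypothetical helper returning a list of result snippets
    results = web_search(f"{package} latest stable version")
    # Naive extraction of a semver-looking string from the top result (illustrative only)
    match = re.search(r"\b(\d+\.\d+(?:\.\d+)?)\b", results[0])
    if match is None:
        raise ValueError(f"Could not verify a current version for {package}")
    return match.group(1)
```
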
### 4. Intelligent Discovery

- No rigid project type templates
- Analyzes PRD to identify which decisions matter for THIS project
- Uses knowledge base of decisions and patterns
- Scales to infinite project types

### 5. Collaborative Decision Making

- Facilitates discussion for each critical decision
- Presents options with trade-offs
- Integrates advanced elicitation for innovative approaches
- Ensures decisions are coherent and compatible

### 6. Consistent Output

- Structured decision collection during conversation
- Strict document generation from collected decisions
- Validated against hard requirements
- Optimized for AI agent consumption

## Workflow Structure

```
Step 0: Validate workflow and extract project configuration
Step 0.5: Validate workflow sequencing
Step 1: Load PRD and understand project context
Step 2: Discover and evaluate starter templates ⭐ NEW
Step 3: Adapt facilitation style and identify remaining decisions
Step 4: Facilitate collaborative decision making (with version verification)
Step 5: Address cross-cutting concerns
Step 6: Define project structure and boundaries
Step 7: Design novel architectural patterns (when needed) ⭐ NEW
Step 8: Define implementation patterns to prevent agent conflicts
Step 9: Validate architectural coherence
Step 10: Generate decision architecture document (with initialization commands)
Step 11: Validate document completeness
Step 12: Final review and update workflow status
```

## Files in This Workflow

- **workflow.yaml** - Configuration and metadata
- **instructions.md** - The adaptive facilitation flow
- **decision-catalog.yaml** - Knowledge base of all architectural decisions
- **architecture-patterns.yaml** - Common patterns identified from requirements
- **pattern-categories.csv** - Pattern principles that teach LLM what needs defining
- **checklist.md** - Validation requirements for the output document
- **architecture-template.md** - Strict format for the final document

## How It Differs from the Old Architecture Workflow

| Aspect               | Old Workflow                                 | New Workflow                                    |
| -------------------- | -------------------------------------------- | ----------------------------------------------- |
| **Approach**         | Template-driven                              | Conversation-driven                             |
| **Project Types**    | 11 rigid types with 22+ files                | Infinite flexibility with intelligent discovery |
| **User Interaction** | Output sections with "Continue?"             | Collaborative decision facilitation             |
| **Skill Adaptation** | One-size-fits-all                            | Adapts to beginner/intermediate/expert          |
| **Decision Making**  | Late in process (Step 5)                     | Upfront and central focus                       |
| **Output**           | Multiple documents including faux tech-specs | Single decision-focused architecture            |
| **Time**             | Confusing and slow                           | 30-90 minutes depending on skill level          |
| **Elicitation**      | Never used                                   | Integrated at decision points                   |

## Expected Inputs

- **PRD** (Product Requirements Document) with:
  - Functional Requirements
  - Non-Functional Requirements
  - Performance and compliance needs

- **Epics** file with:
  - User stories
  - Acceptance criteria
  - Dependencies

- **UX Spec** (Optional but valuable) with:
  - Interface designs and interaction patterns
  - Accessibility requirements (WCAG levels)
  - Animation and transition needs
  - Platform-specific UI requirements
  - Performance expectations for interactions

## Output Document

A single `architecture.md` file containing:

- Executive summary (2-3 sentences)
- Project initialization command (if using starter template)
- Decision summary table with verified versions and epic mapping
- Complete project structure
- Integration specifications
- Consistency rules for AI agents

## How Novel Pattern Design Works

Step 7 handles unique or complex patterns that need to be INVENTED:

1. **Detection**:
   The workflow analyzes the PRD for concepts that don't have standard solutions:
   - Novel interaction patterns (e.g., "swipe to match" when Tinder doesn't exist)
   - Complex multi-epic workflows (e.g., "viral invitation system")
   - Unique data relationships (e.g., "social graph" before Facebook)
   - New paradigms (e.g., "ephemeral messages" before Snapchat)

2. **Design Collaboration**:
   Instead of just picking technologies, the workflow helps DESIGN the solution:
   - Identifies the core problem to solve
   - Explores different approaches with the user
   - Documents how components interact
   - Creates sequence diagrams for complex flows
   - Uses elicitation to find innovative solutions

3. **Documentation**:
   Novel patterns become part of the architecture with:
   - Pattern name and purpose
   - Component interactions
   - Data flow diagrams
   - Which epics/stories are affected
   - Implementation guidance for agents

4. **Example**:

   ```
   PRD: "Users can create 'circles' of friends with overlapping membership"
   ↓
   Workflow detects: This is a novel social structure pattern
   ↓
   Designs with user: Circle membership model, permission cascading, UI patterns
   ↓
   Documents: "Circle Pattern" with component design and data flow
   ↓
   All agents understand how to implement circle-related features consistently
   ```

## How Implementation Patterns Work

Step 8 prevents agent conflicts by defining patterns for consistency:

1. **The Core Principle**:

   > "Any time multiple agents might make the SAME decision DIFFERENTLY, that's a pattern to capture"

   The LLM asks: "What could an agent encounter where they'd have to guess?"

2. **Pattern Categories** (principles, not prescriptions):
   - **Naming**: How things are named (APIs, database fields, files)
   - **Structure**: How things are organized (folders, modules, layers)
   - **Format**: How data is formatted (JSON structures, responses)
   - **Communication**: How components talk (events, messages, protocols)
   - **Lifecycle**: How states change (workflows, transitions)
   - **Location**: Where things go (URLs, paths, storage)
   - **Consistency**: Cross-cutting concerns (dates, errors, logs)

3. **LLM Intelligence**:
   - Uses the principle to identify patterns beyond the 7 categories
   - Figures out what specific patterns matter for chosen tech
   - Only asks about patterns that could cause conflicts
   - Skips obvious patterns that the tech choice determines

4. **Example**:

   ```
   Tech chosen: REST API + PostgreSQL + React
   ↓
   LLM identifies needs:
   - REST: URL structure, response format, status codes
   - PostgreSQL: table naming, column naming, FK patterns
   - React: component structure, state management, test location
   ↓
   Facilitates each with user
   ↓
   Documents as Implementation Patterns in architecture
   ```

## How Starter Templates Work

When the workflow detects a project type that has a starter template:

1. **Discovery**: Searches for relevant starter templates based on PRD
2. **Investigation**: Looks up current CLI options and defaults
3. **Presentation**: Shows user what the starter provides
4. **Integration**: Documents starter decisions as "PROVIDED BY STARTER"
5. **Continuation**: Only asks about decisions NOT made by starter
6. **Documentation**: Includes exact initialization command in architecture

### Example Flow

```
PRD says: "Next.js web application with authentication"
↓
Workflow finds: create-next-app and create-t3-app
↓
User chooses: create-t3-app (includes auth setup)
↓
Starter provides: Next.js, TypeScript, tRPC, Prisma, NextAuth, Tailwind
↓
Workflow only asks about: Database choice, deployment target, additional services
↓
First story becomes: "npx create-t3-app@latest my-app --trpc --nextauth --prisma"
```

## Usage

```bash
# In your BMAD-enabled project
workflow architecture
```

The AI agent will:

1. Load your PRD and epics
2. Identify critical decisions needed
3. Facilitate discussion on each decision
4. Generate a comprehensive architecture document
5. Validate completeness

## Design Principles

1. **Facilitation over Prescription** - Guide users to good decisions rather than imposing templates
2. **Intelligence over Templates** - Use AI understanding rather than rigid structures
3. **Decisions over Details** - Focus on what prevents agent conflicts, not implementation minutiae
4. **Adaptation over Uniformity** - Meet users where they are while ensuring quality output
5. **Collaboration over Output** - The conversation matters as much as the document

## For Developers

This workflow assumes:

- Single developer + AI agents (not teams)
- Speed matters (decisions in minutes, not days)
- AI agents need clear constraints to prevent conflicts
- The architecture document is for agents, not humans

## Migration from the Old Architecture Workflow

Projects using the old `architecture` workflow should:

1. Complete any in-progress architecture work under the old workflow
2. Use the new decision-focused `architecture` workflow for new projects
3. Note that the old workflow remains available but is deprecated

## Version

1.3.2 - UX specification integration and fuzzy file matching

- Added UX spec as optional input with fuzzy file matching
- Updated workflow.yaml with input file references
- Starter template selection now considers UX requirements
- Added UX alignment validation to checklist
- Instructions use variable references for flexible file names

1.3.1 - Workflow refinement and standardization

- Added workflow status checking at start (Steps 0 and 0.5)
- Added workflow status updating at end (Step 12)
- Reorganized step numbering for clarity (removed fractional steps)
- Enhanced with intent-based approach throughout
- Improved cohesiveness across all workflow components

1.3.0 - Novel pattern design for unique architectures

- Added novel pattern design (now Step 7, formerly Step 5.3)
- Detects novel concepts in PRD that need architectural invention
- Facilitates design collaboration with sequence diagrams
- Uses elicitation for innovative approaches
- Documents custom patterns for multi-epic consistency

1.2.0 - Implementation patterns for agent consistency

- Added implementation patterns (now Step 8, formerly Step 5.5)
- Created principle-based pattern-categories.csv (7 principles, not 118 prescriptions)
- Core principle: "What could agents decide differently?"
- LLM uses principle to identify patterns beyond the categories
- Prevents agent conflicts through intelligent pattern discovery

1.1.0 - Enhanced with starter template discovery and version verification

- Added intelligent starter template detection and integration (now Step 2)
- Added dynamic version verification via web search
- Starter decisions are documented as "PROVIDED BY STARTER"
- First implementation story uses starter initialization command

1.0.0 - Initial release replacing architecture workflow

@@ -1,177 +0,0 @@

# Implementation Ready Check Workflow

## Overview

The Implementation Ready Check workflow provides a systematic validation of all planning and solutioning artifacts before transitioning from Phase 3 (Solutioning) to Phase 4 (Implementation) in the BMad Method. This workflow ensures that PRDs, architecture documents, and story breakdowns are properly aligned with no critical gaps or contradictions.

## Purpose

This workflow is designed to:

- **Validate Completeness**: Ensure all required planning documents exist and are complete
- **Verify Alignment**: Check that PRD, architecture, and stories are cohesive and aligned
- **Identify Gaps**: Detect missing stories, unaddressed requirements, or sequencing issues
- **Assess Risks**: Find contradictions, conflicts, or potential implementation blockers
- **Provide Confidence**: Give teams confidence that planning is solid before starting development

## When to Use

This workflow should be invoked:

- At the end of Phase 3 (Solutioning) for Level 2-4 projects
- Before beginning Phase 4 (Implementation)
- After significant planning updates or architectural changes
- When validating readiness for Level 0-1 projects (simplified validation)

## Project Level Adaptations

The workflow adapts its validation based on project level:

### Level 0-1 Projects

- Validates tech spec and simple stories only
- Checks internal consistency and basic coverage
- Lighter validation appropriate for simple projects

### Level 2 Projects

- Validates PRD, tech spec (with embedded architecture), and stories
- Ensures PRD requirements are fully covered
- Verifies technical approach aligns with business goals

### Level 3-4 Projects

- Full validation of PRD, architecture document, and comprehensive stories
- Deep cross-reference checking across all artifacts
- Validates architectural decisions don't introduce scope creep
- Checks UX artifacts if applicable

## How to Invoke

### Via Scrum Master Agent

```
*solutioning-gate-check
```

### Direct Workflow Invocation

```
workflow solutioning-gate-check
```

## Expected Inputs

The workflow will automatically search your project's output folder for:

- Product Requirements Documents (PRD)
- Architecture documents
- Technical Specifications
- Epic and Story breakdowns
- UX artifacts (if applicable)

No manual input file specification needed - the workflow discovers documents automatically.

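A rough sketch of what that discovery could look like, assuming conventional BMad file names in the output folder (the exact glob patterns below are assumptions for illustration, not the workflow's definitive rules):

```python
from pathlib import Path

def discover_artifacts(output_folder: str) -> dict[str, list[Path]]:
    """Collect planning artifacts by conventional name patterns."""
    root = Path(output_folder)
    patterns = {
        "prd": ["PRD.md", "prd*.md"],
        "architecture": ["architecture*.md"],
        "tech_specs": ["tech-spec*.md"],
        "epics": ["epic*.md", "epics.md"],
        "ux": ["ux*.md"],
    }
    found: dict[str, list[Path]] = {}
    for kind, globs in patterns.items():
        # De-duplicate matches across overlapping patterns
        found[kind] = sorted({p for g in globs for p in root.glob(g)})
    return found
```
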
## Generated Output

The workflow produces a comprehensive **Implementation Readiness Report** containing:

1. **Executive Summary** - Overall readiness status
2. **Document Inventory** - What was found and reviewed
3. **Alignment Validation** - Cross-reference analysis results
4. **Gap Analysis** - Missing items and risks identified
5. **Findings by Severity** - Critical, High, Medium, Low issues
6. **Recommendations** - Specific actions to address issues
7. **Readiness Decision** - Ready, Ready with Conditions, or Not Ready

Output Location: `{output_folder}/implementation-readiness-report-{date}.md`

## Workflow Steps

1. **Initialize** - Get current workflow status and project level
2. **Document Discovery** - Find all planning artifacts
3. **Deep Analysis** - Extract requirements, decisions, and stories
4. **Cross-Reference Validation** - Check alignment between all documents
5. **Gap and Risk Analysis** - Identify issues and conflicts
6. **UX Validation** (optional) - Verify UX concerns are addressed
7. **Generate Report** - Compile comprehensive readiness assessment
8. **Status Update** (optional) - Offer to advance workflow to next phase

## Validation Criteria

The workflow uses systematic validation rules adapted to each project level:

- **Document completeness and quality**
- **Requirement to story traceability**
- **Architecture to implementation alignment**
- **Story sequencing and dependencies**
- **Greenfield project setup coverage**
- **Risk identification and mitigation**

For projects using the new architecture workflow (decision-architecture.md), additional validations include:

- **Implementation patterns defined for consistency**
- **Technology versions verified and current**
- **Starter template initialization as first story**
- **UX specification alignment (if provided)**

## Special Features

### Intelligent Adaptation

- Automatically adjusts validation based on project level
- Recognizes when UX workflow is active
- Handles greenfield vs. brownfield projects differently

### Comprehensive Coverage

- Validates not just presence but quality and alignment
- Checks for both gaps and gold-plating
- Ensures logical story sequencing

### Actionable Output

- Provides specific, actionable recommendations
- Categorizes issues by severity
- Includes positive findings and commendations

## Integration with BMad Method

This workflow integrates seamlessly with the BMad Method workflow system:

- Uses workflow-status to understand project context
- Can update workflow status to advance to next phase
- Follows standard BMad document naming conventions
- Searches standard output folders automatically

## Troubleshooting

### Documents Not Found

- Ensure documents are in the configured output folder
- Check that document names follow BMad conventions
- Verify workflow-status is properly configured

### Validation Too Strict

- The workflow adapts to project level automatically
- Level 0-1 projects get lighter validation
- Consider if your project level is set correctly

### Report Too Long

- Focus on Critical and High priority issues first
- Use the executive summary for quick decisions
- Review detailed findings only for areas of concern

## Support

For issues or questions about this workflow:

- Consult the BMad Method documentation
- Check the SM agent for workflow guidance
- Review validation-criteria.yaml for detailed rules

---

_This workflow is part of the BMad Method v6-alpha suite of planning and solutioning tools_

@@ -1,221 +0,0 @@

# Phase 4: Implementation

## Overview

Phase 4 is where planning transitions into actual development. This phase manages the iterative implementation of stories defined in the epic files, tracking their progress through a well-defined status workflow.

## Status Definitions

### Epic Status

Epics progress through a simple two-state flow:

| Status        | Description                                                                                                                                     | Next Status |
| ------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| **backlog**   | Epic exists in epic file but technical context has not been created                                                                              | contexted   |
| **contexted** | Epic technical context has been created via `epic-tech-context` workflow. This is a prerequisite before any stories in the epic can be drafted.  | -           |

**File Indicators:**

- `backlog`: No `epic-{n}-context.md` file exists
- `contexted`: `{output_folder}/epic-{n}-context.md` file exists

### Story Status

Stories progress through a six-state flow representing their journey from idea to implementation:

| Status            | Description                                                                         | Set By        | Next Status   |
| ----------------- | ----------------------------------------------------------------------------------- | ------------- | ------------- |
| **backlog**       | Story only exists in the epic file, no work has begun                               | Initial state | drafted       |
| **drafted**       | Story file has been created via `create-story` workflow                             | SM Agent      | ready-for-dev |
| **ready-for-dev** | Story has been drafted, approved, and context created via `story-context` workflow  | SM Agent      | in-progress   |
| **in-progress**   | Developer is actively implementing the story                                        | Dev Agent     | review        |
| **review**        | Implementation complete, under SM review via `code-review` workflow                 | Dev Agent     | done          |
| **done**          | Story has been reviewed and completed                                               | Dev Agent     | -             |

**File Indicators:**

- `backlog`: No story file exists
- `drafted`: `{story_dir}/{story-key}.md` file exists (e.g., `1-1-user-auth.md`)
- `ready-for-dev`: Both story file and context exist (e.g., `1-1-user-auth-context.md`)
- `in-progress`, `review`, `done`: Manual status updates in sprint-status.yaml

### Retrospective Status

Optional retrospectives can be completed after an epic:

| Status        | Description                                        |
| ------------- | -------------------------------------------------- |
| **optional**  | Retrospective can be completed but is not required |
| **completed** | Retrospective has been completed                   |

## Status Transitions

### Epic Flow

```mermaid
graph LR
    backlog --> contexted[contexted via epic-tech-context]
```

### Story Flow

```mermaid
graph LR
    backlog --> drafted[drafted via create-story]
    drafted --> ready[ready-for-dev via story-context]
    ready --> progress[in-progress - dev starts]
    progress --> review[review via code-review]
    review --> done[done - dev completes]
```

## Sprint Status Management

The `sprint-status.yaml` file is the single source of truth for tracking all work items. It contains:

- All epics with their current status
- All stories with their current status
- Retrospective placeholders for each epic
- Clear documentation of status definitions

### Example Sprint Status File

```yaml
development_status:
  epic-1: contexted
  1-1-project-foundation: done
  1-2-app-shell: done
  1-3-user-authentication: in-progress
  1-4-plant-data-model: ready-for-dev
  1-5-add-plant-manual: drafted
  1-6-photo-identification: backlog
  1-7-plant-naming: backlog
  epic-1-retrospective: optional

  epic-2: backlog
  2-1-personality-system: backlog
  2-2-chat-interface: backlog
  2-3-llm-integration: backlog
  epic-2-retrospective: optional
```

## Workflows in Phase 4

### Core Workflows

| Workflow              | Purpose                                            | Updates Status                      |
| --------------------- | -------------------------------------------------- | ----------------------------------- |
| **sprint-planning**   | Generate/update sprint-status.yaml from epic files | Auto-detects file-based statuses    |
| **epic-tech-context** | Create technical context for an epic               | epic: backlog → contexted           |
| **create-story**      | Draft a story from epics/PRD                       | story: backlog → drafted            |
| **story-context**     | Add implementation context to story                | story: drafted → ready-for-dev      |
| **dev-story**         | Developer implements the story                     | story: ready-for-dev → in-progress  |
| **code-review**       | SM reviews implementation                          | story: in-progress → review         |
| **retrospective**     | Conduct epic retrospective                         | retrospective: optional → completed |
| **correct-course**    | Course correction when needed                      | Various status updates              |

### Sprint Planning Workflow

The `sprint-planning` workflow is the foundation of Phase 4. It:

1. **Parses all epic files** (`epic*.md` or `epics.md`)
2. **Extracts all epics and stories** maintaining their order
3. **Auto-detects current status** based on existing files (see the sketch after this list):
   - Checks for epic context files
   - Checks for story files
   - Checks for story context files
4. **Generates sprint-status.yaml** with current reality
5. **Preserves manual status updates** (won't downgrade statuses)

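The file-based detection in step 3 amounts to a simple existence check per work item. A minimal sketch, assuming the file indicators described above (paths and helper names are illustrative, not the workflow's actual code):

```python
from pathlib import Path

def detect_story_status(story_key: str, story_dir: Path) -> str:
    """Derive a story's file-based status; in-progress/review/done stay manual."""
    story_file = story_dir / f"{story_key}.md"
    context_file = story_dir / f"{story_key}-context.md"
    if context_file.exists():
        return "ready-for-dev"  # story file and context both exist
    if story_file.exists():
        return "drafted"        # story file only
    return "backlog"            # nothing on disk yet

def detect_epic_status(epic_num: int, output_folder: Path) -> str:
    """An epic is contexted once its context file exists."""
    context = output_folder / f"epic-{epic_num}-context.md"
    return "contexted" if context.exists() else "backlog"
```
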
Run this workflow:

- Initially after Phase 3 completion
- After creating epic contexts
- Periodically to sync file-based status
- To verify current project state

### Workflow Guidelines

1. **Epic Context First**: Epics should be contexted before drafting their stories
2. **Flexible Parallelism**: Multiple stories can be in-progress based on team capacity
3. **Sequential Default**: Stories within an epic are typically worked in order
4. **Learning Transfer**: SM drafts next story after previous is done, incorporating learnings
5. **Review Flow**: Dev moves to review, SM reviews, Dev moves to done

## Agent Responsibilities

### SM (Scrum Master) Agent

- Run `sprint-planning` to generate initial status
- Create epic contexts (`epic-tech-context`)
- Draft stories (`create-story`)
- Create story contexts (`story-context`)
- Review completed work (`code-review`)
- Update status in sprint-status.yaml

### Developer Agent

- Check sprint-status.yaml for `ready-for-dev` stories
- Update status to `in-progress` when starting
- Implement stories (`dev-story`)
- Move to `review` when complete
- Address review feedback
- Update to `done` after approval

### Test Architect

- Monitor stories entering `review` status
- Track epic progress
- Identify when retrospectives needed
- Validate implementation quality

## Best Practices

1. **Always run sprint-planning first** to establish current state
2. **Update status immediately** as work progresses
3. **Check sprint-status.yaml** before starting any work
4. **Preserve learning** by drafting stories sequentially when possible
5. **Document decisions** in story and context files

## Naming Conventions

### Story File Naming

- Format: `{epic}-{story}-{kebab-title}.md`
- Example: `1-1-user-authentication.md`
- Avoids YAML float parsing issues (1.1 vs 1.10; see the example below)
- Makes files self-descriptive

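The float issue is easy to reproduce: in YAML, bare `1.1` and `1.10` both parse as the number 1.1, so dotted story IDs would collide, while kebab-case keys remain distinct strings:

```yaml
# Bare dotted IDs parse as floats: 1.10 becomes 1.1 and collides with story 1.1
story-a: 1.1
story-b: 1.10 # same numeric value as story-a!

# Kebab-case keys are unambiguous strings
1-1-user-authentication: drafted
1-10-reporting-dashboard: backlog
```
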
### Git Branch Naming

- Format: `feat/{epic}-{story}-{kebab-title}`
- Example: `feat/1-1-user-authentication`
- Consistent with story file naming
- Clean for branch management

## File Structure

```
{output_folder}/
├── sprint-status.yaml                     # Sprint status tracking
├── epic*.md or epics.md                   # Epic definitions
├── epic-1-context.md                      # Epic technical contexts
├── epic-2-context.md
└── stories/
    ├── 1-1-user-authentication.md         # Story drafts
    ├── 1-1-user-authentication-context.md # Story contexts
    ├── 1-2-account-management.md
    ├── 1-2-account-management-context.md
    └── ...
```

## Next Steps

After Phase 4 implementation, projects typically move to:

- Deployment and release
- User acceptance testing
- Production monitoring
- Maintenance and updates

The sprint-status.yaml file provides a complete audit trail of the development process and can be used for project reporting and retrospectives.

@@ -1,69 +0,0 @@

# Review Story (Senior Developer Code Review)

Perform an AI-driven Senior Developer Code Review on a story flagged "Ready for Review". The workflow ingests the story, its Story Context, and the epic’s Tech Spec, consults local docs, uses enabled MCP servers for up-to-date best practices (with web search fallback), and appends structured review notes to the story.

## What It Does

- Auto-discovers the target story or accepts an explicit `story_path`
- Verifies the story is in a reviewable state (e.g., Ready for Review/Review)
- Loads Story Context (from Dev Agent Record → Context Reference or auto-discovery)
- Locates the epic Tech Spec and relevant architecture/standards docs
- Uses MCP servers for best-practices and security references; falls back to web search
- Reviews implementation vs Acceptance Criteria, Tech Spec, and repo standards
- Evaluates code quality, security, and test coverage
- Appends a "Senior Developer Review (AI)" section to the story with findings and action items
- Optionally updates story Status based on outcome

## How to Invoke

- Load the Dev Agent, then run `*code-review`. This is a context-intensive operation, so do not run it in the same context as a just-completed dev session: clear the context, reload the Dev Agent, then run the review.

## Inputs and Variables

- `story_path` (optional): Explicit path to a story file
- `story_dir` (from config): `{project-root}/bmad/bmm/config.yaml` → `dev_story_location`
- `allow_status_values`: Defaults include `Ready for Review`, `Review`
- `auto_discover_context` (default: true)
- `auto_discover_tech_spec` (default: true)
- `tech_spec_glob_template`: `tech-spec-epic-{{epic_num}}*.md`
- `arch_docs_search_dirs`: Defaults to `docs/` and `outputs/`
- `enable_mcp_doc_search` (default: true)
- `enable_web_fallback` (default: true)
- `update_status_on_result` (default: false)

## Story Updates

- Appends a section titled: `Senior Developer Review (AI)` at the end
- Adds a Change Log entry: "Senior Developer Review notes appended"
- If enabled, updates `Status` based on outcome

## Persistence and Backlog

To ensure review findings become actionable work, the workflow can persist action items to multiple targets (configurable):

- Story tasks: Inserts unchecked items under `Tasks / Subtasks` in a "Review Follow-ups (AI)" subsection so `dev-story` can pick them up next.
- Story review section: Keeps a full list under "Senior Developer Review (AI) → Action Items".
- Backlog file: Appends normalized rows to `docs/backlog.md` (created if missing) for cross-cutting or longer-term improvements.
- Epic follow-ups: If an epic Tech Spec is found, appends to its `Post-Review Follow-ups` section.

Configure via `workflow.yaml` variables:

- `persist_action_items` (default: true)
- `persist_targets`: `story_tasks`, `story_review_section`, `backlog_file`, `epic_followups`
- `backlog_file` (default: `{project-root}/docs/backlog.md`)
- `update_epic_followups` (default: true)
- `epic_followups_section_title` (default: `Post-Review Follow-ups`)

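Pulled together, a configuration using the defaults above might look like this (a sketch; exact key placement within the file may differ):

```yaml
# workflow.yaml (persistence settings)
persist_action_items: true
persist_targets:
  - story_tasks
  - story_review_section
  - backlog_file
  - epic_followups
backlog_file: "{project-root}/docs/backlog.md"
update_epic_followups: true
epic_followups_section_title: "Post-Review Follow-ups"
```
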
Routing guidance:

- Put must-fix items to ship the story into the story’s tasks.
- Put same-epic but non-blocking improvements into the epic Tech Spec follow-ups.
- Put cross-cutting, future, or refactor work into the backlog file.

## Related Workflows

- `dev-story` — Implements tasks/subtasks and can act on review action items
- `story-context` — Generates Story Context for a single story
- `tech-spec` — Generates epic Tech Spec documents

_Part of the BMAD Method v6 — Implementation Phase_

@@ -1,73 +0,0 @@

---
last-redoc-date: 2025-10-01
---

# Correct Course Workflow

The correct-course workflow is v6's adaptive response mechanism for stories that encounter issues during development or review, enabling intelligent course correction without restarting from scratch. Run by the Scrum Master (SM) or Team Lead, this workflow analyzes why a story is blocked or troubled and determines whether the story scope needs adjustment, requirements need clarification, the technical approach needs revision, or the story needs to be split or reconsidered. This represents the agile principle of embracing change even late in the development cycle, but doing so in a structured way that captures learning and maintains project coherence.

Unlike simply abandoning failed work or blindly pushing forward, correct-course provides a systematic approach to diagnosing issues and determining appropriate remediation. The workflow examines the original story specification, what was actually implemented, what issues arose during development or review, and the broader epic context to recommend the best path forward. This might involve clarifying requirements, adjusting acceptance criteria, revising technical approach, splitting the story into smaller pieces, or even determining the story should be deprioritized.

The critical value of this workflow is that it prevents thrashing—endless cycles of implementation and rework without clear direction. By stepping back to analyze what went wrong and charting a clear path forward, the correct-course workflow enables teams to learn from difficulties and adapt intelligently rather than stubbornly proceeding with a flawed approach.

## Usage

```bash
# Analyze issues and recommend course correction
bmad run correct-course
```

The workflow should be run when:

- Review identifies critical issues requiring rethinking
- Developer encounters blocking issues during implementation
- Acceptance criteria prove infeasible or unclear
- Technical approach needs significant revision
- Story scope needs adjustment based on discoveries

## Inputs

**Required Context:**

- **Story Document**: Original story specification
- **Implementation Attempts**: Code changes and approaches tried
- **Review Feedback**: Issues and concerns identified
- **Epic Context**: Overall epic goals and current progress
- **Technical Constraints**: Architecture decisions and limitations discovered

**Analysis Focus:**

- Root cause of issues or blockages
- Feasibility of current story scope
- Clarity of requirements and acceptance criteria
- Appropriateness of technical approach
- Whether story should be split or revised

## Outputs

**Primary Deliverable:**

- **Course Correction Report** (`[story-id]-correction.md`): Analysis and recommendations including:
  - Issue root cause analysis
  - Impact assessment on epic and project
  - Recommended corrective actions with rationale
  - Revised story specification if applicable
  - Updated acceptance criteria if needed
  - Technical approach adjustments
  - Timeline and effort implications

**Possible Outcomes:**

1. **Clarify Requirements**: Update story with clearer acceptance criteria
2. **Revise Scope**: Adjust story scope to be more achievable
3. **Split Story**: Break into multiple smaller stories
4. **Change Approach**: Recommend different technical approach
5. **Deprioritize**: Determine story should be deferred or cancelled
6. **Unblock**: Identify and address blocking dependencies

**Artifacts:**

- Updated story document if revision needed
- New story documents if splitting story
- Updated epic backlog reflecting changes
- Lessons learned for retrospective

@@ -1,146 +0,0 @@

# Create Story Workflow

Just-in-time story generation creating one story at a time based on epic backlog state. Run by Scrum Master (SM) agent to ensure planned stories align with approved epics.

## Table of Contents

- [Usage](#usage)
- [Key Features](#key-features)
- [Inputs & Outputs](#inputs--outputs)
- [Workflow Behavior](#workflow-behavior)
- [Integration](#integration)

## Usage

```bash
# SM initiates next story creation
bmad sm *create-story
```

**When to run:**

- Sprint has capacity for new work
- Previous story is Done/Approved
- Team ready for next planned story

## Key Features

### Strict Planning Enforcement

- **Only creates stories enumerated in epics.md**
- Halts if story not found in epic plan
- Prevents scope creep through validation

### Intelligent Document Discovery

- Auto-finds tech spec: `tech-spec-epic-{N}-*.md`
- Discovers architecture docs across directories
- Builds prioritized requirement sources

### Source Document Grounding

- Every requirement traced to source
- No invention of domain facts
- Citations included in output

### Non-Interactive Mode

- Default "#yolo" mode minimizes prompts
- Smooth automated story preparation
- Only prompts when critical

## Inputs & Outputs

### Required Files

| File                     | Purpose                       | Priority |
| ------------------------ | ----------------------------- | -------- |
| epics.md                 | Story enumeration (MANDATORY) | Critical |
| tech-spec-epic-{N}-\*.md | Epic technical spec           | High     |
| PRD.md                   | Product requirements          | Medium   |
| Architecture docs        | Technical constraints         | Low      |

### Auto-Discovered Docs

- `tech-stack.md`, `unified-project-structure.md`
- `testing-strategy.md`, `backend/frontend-architecture.md`
- `data-models.md`, `database-schema.md`, `api-specs.md`

### Output

**Story Document:** `{story_dir}/story-{epic}.{story}.md`

- User story statement (role, action, benefit)
- Acceptance criteria from tech spec/epics
- Tasks mapped to ACs
- Testing requirements
- Dev notes with sources
- Status: "Draft"

## Workflow Behavior

### Story Number Management

- Auto-detects next story number (see the sketch below)
- No duplicates or skipped numbers
- Maintains epic.story convention

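A minimal sketch of that detection, assuming story files named `story-{epic}.{story}.md` in the story directory (illustrative only, not the workflow's actual code):

```python
import re
from pathlib import Path

def next_story_number(epic: int, story_dir: Path) -> int:
    """Find the highest existing story number for an epic and return the next one."""
    pattern = re.compile(rf"story-{epic}\.(\d+)\.md$")
    numbers = [
        int(m.group(1))
        for p in story_dir.glob(f"story-{epic}.*.md")
        if (m := pattern.search(p.name))
    ]
    # max(..., default=0) handles the first story of a new epic
    return max(numbers, default=0) + 1
```
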
### Update vs Create

- **If current story not Done:** Updates existing
- **If current story Done:** Creates next (if planned)

### Validation Safeguards

**No Story Found:**

```
"No planned next story found in epics.md for epic {N}.
Run *correct-course to add/modify epic stories."
```

**Missing Config:**
Ensure `dev_story_location` is set in config.yaml

## Integration

### v6 Implementation Cycle

1. **create-story** ← Current step (defines "what")
2. story-context (adds technical "how")
3. dev-story (implementation)
4. code-review (validation)
5. correct-course (if needed)
6. retrospective (after epic)

### Document Priority

1. **tech_spec_file** - Epic-specific spec
2. **epics_file** - Story breakdown
3. **prd_file** - Business requirements
4. **architecture_docs** - Technical guidance

## Configuration

```yaml
# bmad/bmm/config.yaml
dev_story_location: ./stories
output_folder: ./output

# workflow.yaml defaults
non_interactive: true
auto_run_context: true
```

## Troubleshooting

| Issue                   | Solution                                   |
| ----------------------- | ------------------------------------------ |
| "No planned next story" | Run `*correct-course` to add story to epic |
| Missing story_dir       | Set `dev_story_location` in config         |
| Tech spec not found     | Use naming: `tech-spec-epic-{N}-*.md`      |
| No architecture docs    | Add docs to docs/ or output/ folder        |

---

For workflow details, see [instructions.md](./instructions.md) and [checklist.md](./checklist.md).

@@ -1,206 +0,0 @@
|
||||
# Dev Story Workflow
|
||||
|
||||
The dev-story workflow is where v6's just-in-time context approach delivers its primary value, enabling the Developer (DEV) agent to implement stories with expert-level guidance injected directly into their context. This workflow is run EXCLUSIVELY by the DEV agent and operates on a single story that has been prepared by the SM through create-story and enhanced with story-context. The DEV loads both the story specification and the dynamically-generated story context XML, then proceeds through implementation with the combined knowledge of requirements and domain-specific expertise.
|
||||
|
||||
The workflow operates with two critical inputs: the story file (created by SM's create-story) containing acceptance criteria, tasks, and requirements; and the story-context XML (generated by SM's story-context) providing just-in-time expertise injection tailored to the story's technical needs. This dual-input approach ensures the developer has both the "what" (from the story) and the "how" (from the context) needed for successful implementation. The workflow iterates through tasks sequentially, implementing code, writing tests, and updating the story document's allowed sections until all tasks are complete.
|
||||
|
||||
A critical aspect of v6 flow is that dev-story may be run multiple times for the same story. Initially run to implement the story, it may be run again after code-review identifies issues that need correction. The workflow intelligently resumes from incomplete tasks, making it ideal for both initial implementation and post-review fixes. The DEV agent maintains strict boundaries on what can be modified in the story file—only Tasks/Subtasks checkboxes, Dev Agent Record, File List, Change Log, and Status may be updated, preserving the story's requirements integrity.
|
||||
|
||||
## Usage
|
||||
|
||||
```bash
|
||||
# Developer implements the story with injected context
|
||||
bmad dev *develop
|
||||
|
||||
# Or if returning to fix review issues
|
||||
bmad dev *develop # Will resume from incomplete tasks
|
||||
```
|
||||
|
||||
The DEV runs this workflow:
|
||||
|
||||
- After SM completes both create-story and story-context
|
||||
- When a story status is "Draft" or "Approved" (ready for development)
|
||||
- After code-review identifies issues requiring fixes
|
||||
- To resume work on a partially completed story
|
||||
|
||||
## Inputs
|
||||
|
||||
**Required Files:**
|
||||
|
||||
- **Story Document** (`{story_dir}/story-{epic}.{story}.md`): Complete specification including:
|
||||
- User story statement
|
||||
- Acceptance criteria (immutable during dev)
|
||||
- Tasks and subtasks (checkable during implementation)
|
||||
- Dev Notes section (for context and guidance)
|
||||
- Status field (Draft → In Progress → Ready for Review)
|
||||
|
||||
- **Story Context XML** (`{story_dir}/story-{epic}.{story}-context.xml`): Domain expertise including:
|
||||
- Technical patterns and best practices
|
||||
- Gotchas and common pitfalls
|
||||
- Testing strategies
|
||||
- Relevant code references
|
||||
- Architecture constraints
|
||||
|
||||
**Configuration:**
|
||||
|
||||
- `dev_story_location`: Directory containing story files (from bmm/config.yaml)
|
||||
- `story_selection_limit`: Number of recent stories to show (default: 10)
|
||||
- `run_tests_command`: Test command (default: auto-detected)
|
||||
- `strict`: Whether to halt on test failures (default: true)
|
||||
|
||||
## Outputs
|
||||
|
||||
**Code Implementation:**
|
||||
|
||||
- Production code satisfying acceptance criteria
|
||||
- Unit, integration, and E2E tests as appropriate
|
||||
- Documentation updates
|
||||
- Migration scripts if needed
|
||||
|
||||
**Story File Updates (allowed sections only):**
|
||||
|
||||
- **Tasks/Subtasks**: Checkboxes marked complete as work progresses
|
||||
- **Dev Agent Record**: Debug log and completion notes
|
||||
- **File List**: All files created or modified
|
||||
- **Change Log**: Summary of implementation changes
|
||||
- **Status**: Updated to "Ready for Review" when complete
|
||||
|
||||
**Validation:**
|
||||
|
||||
- All tests passing (existing + new)
|
||||
- Acceptance criteria verified
|
||||
- Code quality checks passed
|
||||
- No regression in existing functionality
|
||||
|
||||
## Key Features
|
||||
|
||||
**Story-Context Integration**: The workflow loads and leverages the story-context XML throughout implementation, providing expert guidance without cluttering the main conversation. This ensures best practices are followed while maintaining developer autonomy.
|
||||
|
||||
**Task-by-Task Iteration**: Implements one task at a time, marking completion before moving to the next. This granular approach enables progress tracking and clean resumption if interrupted or after review feedback.
|
||||
|
||||
**Strict File Boundaries**: Only specific sections of the story file may be modified, preserving requirement integrity. The story's acceptance criteria, main description, and other planning sections remain immutable during development.
|
||||
|
||||
**Test-Driven Approach**: For each task, the workflow emphasizes writing tests that verify acceptance criteria before or alongside implementation, ensuring requirements are actually met.
|
||||
|
||||
**Resumable Implementation**: If the workflow is run again on a story with incomplete tasks (e.g., after code-review finds issues), it intelligently resumes from where it left off rather than starting over.
|
||||
|
||||
## Integration with v6 Flow
|
||||
|
||||
The dev-story workflow is step 3 in the v6 implementation cycle:
|
||||
|
||||
1. SM: create-story (generates the story specification)
|
||||
2. SM: story-context (adds JIT technical expertise)
|
||||
3. **DEV: dev-story** ← You are here (initial implementation)
|
||||
4. DEV/SR: code-review (validates completion)
|
||||
5. If issues found: **DEV: dev-story** ← May return here for fixes
|
||||
6. After epic: retrospective (captures learnings)
|
||||
|
||||
This workflow may be executed multiple times for the same story as part of the review-fix cycle. Each execution picks up from incomplete tasks, making it efficient for iterative improvement.
|
||||
|
||||
## Workflow Process
|
||||
|
||||
### Phase 1: Story Selection
|
||||
|
||||
- If `story_path` provided: Use that story directly
|
||||
- Otherwise: List recent stories from `dev_story_location`
|
||||
- Parse story structure and validate format
|
||||
- Load corresponding story-context XML
|
||||
|
||||
### Phase 2: Task Implementation Loop
|
||||
|
||||
For each incomplete task:
|
||||
|
||||
1. **Plan**: Log approach in Dev Agent Record
|
||||
2. **Implement**: Write code following story-context guidance
|
||||
3. **Test**: Create tests verifying acceptance criteria
|
||||
4. **Validate**: Run tests and quality checks
|
||||
5. **Update**: Mark task complete, update File List

### Phase 3: Completion

- Verify all tasks completed
- Run full test suite
- Update Status to "Ready for Review"
- Add completion notes to Dev Agent Record

## Story File Structure

The workflow expects stories with this structure:

```markdown
# Story {epic}.{story}: {Title}

**Status**: Draft|In Progress|Ready for Review|Done
**Epic**: {epic_number}
**Estimated Effort**: {estimate}

## Story

As a {role}, I want to {action} so that {benefit}

## Acceptance Criteria

- [ ] AC1...
- [ ] AC2...

## Tasks and Subtasks

- [ ] Task 1
  - [ ] Subtask 1.1
- [ ] Task 2

## Dev Notes

{Context and guidance from story creation}

## Dev Agent Record

### Debug Log

{Implementation notes added during development}

### Completion Notes

{Summary added when complete}

## File List

{Files created/modified}

## Change Log

{Implementation summary}
```

## Best Practices

**Load Context First**: Always ensure the story-context XML is loaded at the start. This provides the expert guidance needed throughout implementation.

**Follow Task Order**: Implement tasks in the order listed. Dependencies are typically structured in the task sequence.

**Test Each Task**: Don't wait until the end to write tests. Test each task as you complete it to ensure it meets acceptance criteria.

**Update Incrementally**: Update the story file after each task completion rather than batching updates at the end.

**Respect Boundaries**: Never modify acceptance criteria or story description. If they seem wrong, that's a code-review or correct-course issue, not a dev-story fix.

**Use Context Guidance**: Actively reference the story-context for patterns, gotchas, and best practices. It's there to help you succeed.

## Troubleshooting

**Story Not Found**: Ensure the story exists in `dev_story_location` and follows the naming pattern `story-{epic}.{story}.md`

**Context XML Missing**: The story-context workflow must be run first. Check for `story-{epic}.{story}-context.xml`

**Tests Failing**: If strict mode is on, the workflow halts. Fix tests before continuing, or set strict=false for development iteration.

**Can't Modify Story Section**: Remember that only Tasks/Subtasks, Dev Agent Record, File List, Change Log, and Status can be modified. Other sections are immutable.

**Resuming After Review**: If code-review found issues, the workflow automatically picks up from incomplete tasks when run again.

## Related Workflows

- **create-story**: Creates the story specification (run by SM)
- **story-context**: Generates the context XML (run by SM)
- **code-review**: Validates implementation (run by SR/DEV)
- **correct-course**: Handles major issues or scope changes (run by SM)

# Technical Specification Workflow

## Overview

Generates a comprehensive Technical Specification for a single epic from the PRD, epics file, and architecture, producing a document with full acceptance criteria and traceability mapping. Creates detailed implementation guidance that bridges business requirements with technical execution.

## Key Features

- **PRD-Architecture Integration** - Synthesizes business requirements with technical constraints
- **Traceability Mapping** - Links acceptance criteria to technical components and tests
- **Multi-Input Support** - Leverages PRD, architecture, front-end specs, and brownfield notes
- **Implementation Focus** - Provides concrete technical guidance for development teams
- **Testing Integration** - Includes test strategy aligned with acceptance criteria
- **Validation Ready** - Generates specifications suitable for architectural review

## Usage

### Basic Invocation

```bash
workflow tech-spec
```

### With Input Documents

```bash
# With specific PRD and architecture
workflow tech-spec --input PRD.md --input architecture.md

# With comprehensive inputs
workflow tech-spec --input PRD.md --input architecture.md --input front-end-spec.md
```

### Configuration

- **output_folder**: Location for generated technical specification
- **epic_id**: Used in output filename (extracted from PRD or prompted)
- **recommended_inputs**: PRD, architecture, front-end spec, brownfield notes

## Workflow Structure

### Files Included

```
tech-spec/
├── workflow.yaml    # Configuration and metadata
├── instructions.md  # Step-by-step execution guide
├── template.md      # Technical specification structure
├── checklist.md     # Validation criteria
└── README.md        # This file
```

## Workflow Process

### Phase 1: Input Collection and Analysis (Steps 1-2)

- **Document Discovery**: Locates PRD and Architecture documents
- **Epic Extraction**: Identifies epic title and ID from PRD content
- **Template Initialization**: Creates tech-spec document from template
- **Context Analysis**: Reviews complete PRD and architecture for alignment

### Phase 2: Technical Design Specification (Step 3)

- **Service Architecture**: Defines services/modules with responsibilities and interfaces
- **Data Modeling**: Specifies entities, schemas, and relationships
- **API Design**: Documents endpoints, request/response models, and error handling
- **Workflow Definition**: Details sequence flows and data processing patterns

### Phase 3: Non-Functional Requirements (Step 4)

- **Performance Specs**: Defines measurable latency, throughput, and scalability targets
- **Security Requirements**: Specifies authentication, authorization, and data protection
- **Reliability Standards**: Documents availability, recovery, and degradation behavior
- **Observability Framework**: Defines logging, metrics, and tracing requirements

### Phase 4: Dependencies and Integration (Step 5)

- **Dependency Analysis**: Scans repository for package manifests and frameworks
- **Integration Mapping**: Identifies external systems and API dependencies
- **Version Management**: Documents version constraints and compatibility requirements
- **Infrastructure Needs**: Specifies deployment and runtime dependencies

### Phase 5: Acceptance and Testing (Step 6)

- **Criteria Normalization**: Extracts and refines acceptance criteria from PRD
- **Traceability Matrix**: Maps acceptance criteria to technical components (see the sketch below)
- **Test Strategy**: Defines testing approach for each acceptance criterion
- **Component Mapping**: Links requirements to specific APIs and modules
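
A hypothetical fragment of such a matrix; the column set is an illustrative assumption, not the template's exact schema:

```markdown
| AC   | Component    | API/Module       | Test                 |
| ---- | ------------ | ---------------- | -------------------- |
| AC-1 | AuthService  | POST /api/login  | auth/login.spec.ts   |
| AC-2 | SessionStore | refreshSession() | auth/session.spec.ts |
```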

### Phase 6: Risk and Validation (Steps 7-8)

- **Risk Assessment**: Identifies technical risks, assumptions, and open questions
- **Mitigation Planning**: Provides strategies for addressing identified risks
- **Quality Validation**: Ensures specification meets completeness criteria
- **Checklist Verification**: Validates against workflow quality standards

## Output

### Generated Files

- **Primary output**: tech-spec-{epic_id}-{date}.md
- **Traceability data**: Embedded mapping tables linking requirements to implementation

### Output Structure

1. **Overview and Scope** - Project context and boundaries
2. **System Architecture Alignment** - Connection to architecture
3. **Detailed Design** - Services, data models, APIs, and workflows
4. **Non-Functional Requirements** - Performance, security, reliability, observability
5. **Dependencies and Integrations** - External systems and technical dependencies
6. **Acceptance Criteria** - Testable requirements from PRD
7. **Traceability Mapping** - Requirements-to-implementation mapping
8. **Test Strategy** - Testing approach and framework alignment
9. **Risks and Assumptions** - Technical risks and mitigation strategies

## Requirements

- **PRD Document**: Product Requirements Document with clear acceptance criteria
- **Architecture Document**: System architecture or technical design document
- **Repository Access**: For dependency analysis and framework detection
- **Epic Context**: Clear epic identification and scope definition

## Best Practices

### Before Starting

1. **Validate Inputs**: Ensure PRD and architecture documents are complete and current
2. **Review Requirements**: Confirm acceptance criteria are specific and testable
3. **Check Dependencies**: Verify repository structure supports automated dependency analysis

### During Execution

1. **Maintain Traceability**: Ensure every acceptance criterion maps to implementation
2. **Be Implementation-Specific**: Provide concrete technical guidance, not high-level concepts
3. **Consider Constraints**: Factor in existing system limitations and brownfield challenges

### After Completion

1. **Architect Review**: Have technical lead validate design decisions
2. **Developer Walkthrough**: Ensure implementation team understands specifications
3. **Test Planning**: Use traceability matrix for test case development

## Troubleshooting

### Common Issues

**Issue**: Missing or incomplete technical details

- **Solution**: Review architecture document for implementation specifics
- **Check**: Ensure PRD contains sufficient functional requirements detail

**Issue**: Acceptance criteria don't map cleanly to technical components

- **Solution**: Break down complex criteria into atomic, testable statements
- **Check**: Verify PRD acceptance criteria are properly structured

**Issue**: Dependency analysis fails or is incomplete

- **Solution**: Check repository structure and manifest file locations
- **Check**: Verify repository contains standard dependency files (package.json, etc.)

**Issue**: Non-functional requirements are too vague

- **Solution**: Extract specific performance and quality targets from architecture
- **Check**: Ensure architecture document contains measurable NFR specifications

## Customization

To customize this workflow:

1. **Modify Template**: Update template.md to match organizational technical spec standards
2. **Extend Dependencies**: Add support for additional package managers or frameworks
3. **Enhance Validation**: Extend checklist.md with company-specific technical criteria
4. **Add Sections**: Include additional technical concerns like DevOps or monitoring

## Version History

- **v6.0.0** - Comprehensive technical specification with traceability
  - PRD-Architecture integration
  - Automated dependency detection
  - Traceability mapping
  - Test strategy alignment

## Support

For issues or questions:

- Review the workflow creation guide at `/bmad/bmb/workflows/create-workflow/workflow-creation-guide.md`
- Validate output using `checklist.md`
- Ensure PRD and architecture documents are complete before starting
- Consult BMAD documentation for technical specification standards

---

_Part of the BMad Method v6 - BMM (Method) Module_

---
last-redoc-date: 2025-10-01
---

# Retrospective Workflow

The retrospective workflow is v6's learning capture mechanism, run by the Scrum Master (SM) after epic completion to systematically harvest insights, patterns, and improvements discovered during implementation. Unlike traditional retrospectives that focus primarily on process and team dynamics, this workflow performs deep technical and methodological analysis of the entire epic journey—from story creation through implementation to review—identifying what worked well, what could improve, and what patterns emerged. The retrospective produces actionable intelligence that informs future epics, improves workflow templates, and evolves the team's shared knowledge.

This workflow represents a critical feedback loop in the v6 methodology. Each epic is an experiment in adaptive software development, and the retrospective is where we analyze the results of that experiment. The SM examines completed stories, review feedback, course corrections made, story-context effectiveness, technical decisions, and team collaboration patterns to extract transferable learning. This learning manifests as updates to workflow templates, new story-context patterns, refined estimation approaches, and documented best practices.

The retrospective is not just a compliance ritual but a genuine opportunity for continuous improvement. By systematically analyzing what happened during the epic, the team builds institutional knowledge that makes each subsequent epic smoother, faster, and higher quality. The insights captured here directly improve future story creation, context generation, development efficiency, and review effectiveness.

## Usage

```bash
# Conduct retrospective after epic completion
bmad run retrospective
```

The SM should run this workflow:

- After all stories in an epic are completed
- Before starting the next major epic
- When significant learning has accumulated
- As preparation for improving future epic execution

## Inputs

**Required Context:**

- **Epic Document**: Complete epic specification and goals
- **All Story Documents**: Every story created for the epic
- **Review Reports**: All review feedback and findings
- **Course Corrections**: Any correct-course actions taken
- **Story Contexts**: Generated expert guidance for each story
- **Implementation Artifacts**: Code commits, test results, deployment records

**Analysis Targets:**

- Story creation accuracy and sizing
- Story-context effectiveness and relevance
- Implementation challenges and solutions
- Review findings and patterns
- Technical decisions and outcomes
- Estimation accuracy
- Team collaboration and communication
- Workflow effectiveness

## Outputs

**Primary Deliverable:**

- **Retrospective Report** (`[epic-id]-retrospective.xml`): Comprehensive analysis (sketched below) including:
  - Executive summary of epic outcomes
  - Story-by-story analysis of what was learned
  - Technical insights and architecture learnings
  - Story-context effectiveness assessment
  - Process improvements identified
  - Patterns discovered (good and bad)
  - Recommendations for future epics
  - Metrics and statistics (velocity, cycle time, etc.)
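
A minimal sketch of how the report file might be shaped; the element names are illustrative assumptions, not the workflow's actual schema:

```xml
<retrospective epic="epic-1">
  <summary>Epic shipped in 9 stories; estimation drifted on auth work.</summary>
  <storyAnalysis storyId="1-3">Context XML missed session-handling gotchas.</storyAnalysis>
  <patterns>
    <pattern kind="good">Early test scaffolding cut review cycles.</pattern>
  </patterns>
  <recommendations>
    <recommendation>Add session patterns to the story-context library.</recommendation>
  </recommendations>
  <metrics velocity="..." cycleTime="..."/>
</retrospective>
```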

**Actionable Outputs:**

- **Template Updates**: Improvements to workflow templates based on learning
- **Pattern Library**: New story-context patterns for common scenarios
- **Best Practices**: Documented approaches that worked well
- **Gotchas**: Issues to avoid in future work
- **Estimation Refinements**: Better story sizing guidance
- **Process Changes**: Workflow adjustments for next epic

**Artifacts:**

- Epic marked as complete with retrospective attached
- Knowledge base updated with new patterns
- Workflow templates updated if needed
- Team learning captured for onboarding

# Sprint Planning Workflow

## Overview

The sprint-planning workflow generates and manages the sprint status tracking file that serves as the single source of truth for Phase 4 implementation. It extracts all epics and stories from epic files and tracks their progress through the development lifecycle.

In Agile terminology, this workflow facilitates **Sprint Planning** or **Sprint 0 Kickoff** - the transition from planning/architecture into actual development execution.

## Purpose

This workflow creates a `sprint-status.yaml` file that:

- Lists all epics, stories, and retrospectives in order
- Tracks the current status of each item
- Provides a clear view of what needs to be worked on next
- Keeps work-in-progress visible, defaulting to one story in progress at a time
- Maintains the development flow from backlog to done

## When to Use

Run this workflow:

1. **Initially** - After Phase 3 (solutioning) is complete and epics are finalized
2. **After epic context creation** - To update epic status to 'contexted'
3. **Periodically** - To auto-detect newly created story files
4. **For status checks** - To see overall project progress

## Status State Machine

### Epic Flow

```
backlog → contexted
```

### Story Flow

```
backlog → drafted → ready-for-dev → in-progress → review → done
```

### Retrospective Flow

```
optional ↔ completed
```

## Key Guidelines

1. **Epic Context Recommended**: Epics should be `contexted` before their stories can be `drafted`
2. **Flexible Parallelism**: Multiple stories can be `in-progress` based on team capacity
3. **Sequential Default**: Stories within an epic are typically worked in order, but parallel work is supported
4. **Review Flow**: Stories should go through `review` before `done`
5. **Learning Transfer**: SM typically drafts the next story after the previous one is `done`, incorporating learnings

## File Locations

### Input Files

- **Epic Files**: `{output_folder}/epic*.md` or `{output_folder}/epics.md`
- **Epic Context**: `{output_folder}/epic-{n}-context.md`
- **Story Files**: `{story_dir}/{epic}-{story}-{title}.md`
  - Example: `stories/1-1-user-authentication.md`
- **Story Context**: `{story_dir}/{epic}-{story}-{title}-context.md`
  - Example: `stories/1-1-user-authentication-context.md`

### Output File

- **Status File**: `{output_folder}/sprint-status.yaml`

## Usage by Agents

### SM (Scrum Master) Agent

```yaml
Tasks:
  - Check sprint-status.yaml for stories in 'done' status
  - Identify next 'backlog' story to draft
  - Run create-story workflow
  - Update status to 'drafted'
  - Create story context
  - Update status to 'ready-for-dev'
```

### Developer Agent

```yaml
Tasks:
  - Find stories with 'ready-for-dev' status
  - Update to 'in-progress' when starting
  - Implement the story
  - Update to 'review' when complete
  - Address review feedback
  - Update to 'done' after review
```

### Test Architect

```yaml
Tasks:
  - Monitor stories entering 'review'
  - Track epic progress
  - Identify when retrospectives are needed
```

## Example Output

```yaml
# Sprint Status
# Generated: 2025-01-20
# Project: MyPlantFamily

development_status:
  epic-1: contexted
  1-1-project-foundation: done
  1-2-app-shell: done
  1-3-user-authentication: in-progress
  1-4-plant-data-model: ready-for-dev
  1-5-add-plant-manual: drafted
  1-6-photo-identification: backlog
  epic-1-retrospective: optional

  epic-2: contexted
  2-1-personality-system: in-progress
  2-2-chat-interface: drafted
  2-3-llm-integration: backlog
  2-4-reminder-system: backlog
  epic-2-retrospective: optional
```

## Integration with BMM Workflow

This workflow is part of Phase 4 (Implementation) and integrates with:

1. **epic-tech-context** - Creates technical context for epics
2. **create-story** - Drafts individual story files
3. **story-context** - Adds implementation context to stories
4. **dev-story** - Developer implements the story
5. **code-review** - SM reviews implementation
6. **retrospective** - Optional epic retrospective

## Benefits

- **Clear Visibility**: Everyone knows what's being worked on
- **Flexible Capacity**: Supports both sequential and parallel work patterns
- **Learning Transfer**: SM can incorporate learnings when drafting the next story
- **Progress Tracking**: Easy to see overall project status
- **Automation Friendly**: Simple YAML format for agent updates

## Tips

1. **Initial Generation**: Run immediately after epics are finalized
2. **Regular Updates**: Agents should update status as they work
3. **Manual Override**: You can manually edit the file if needed (see the snippet below)
4. **Backup First**: The workflow backs up existing status before regenerating
5. **Validation**: The workflow validates legal status transitions
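
For example, a manual edit that bumps a story into review could look like this (keys borrowed from the example output above):

```yaml
development_status:
  1-3-user-authentication: review # manually bumped from 'in-progress'
```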

# Story Context Assembly Workflow

## Overview

Assembles a dynamic Story Context XML by pulling the latest documentation and existing code/library artifacts relevant to a drafted story. Creates comprehensive development context for AI agents and developers working on specific stories.

## Key Features

- **Automated Context Discovery** - Scans documentation and codebase for relevant artifacts
- **XML Output Format** - Structured context optimized for LLM consumption
- **Dependency Detection** - Identifies frameworks, packages, and technical dependencies
- **Interface Mapping** - Locates existing APIs and interfaces to reuse
- **Testing Integration** - Includes testing standards and generates test ideas
- **Status Tracking** - Updates story status and maintains context references

## Usage

### Basic Invocation

```bash
workflow story-context
```

### With Specific Story

```bash
# Process specific story file
workflow story-context --input /path/to/story.md

# Using story path variable
workflow story-context --story_path "docs/stories/1.2.feature-name.md"
```

### Configuration

- **story_path**: Path to target story markdown file (defaults to latest.md)
- **auto_update_status**: Whether to automatically update story status (default: false; illustrated below)
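
Both options would typically be set in the workflow's `workflow.yaml`; the key layout below is an assumption for illustration:

```yaml
# story-context/workflow.yaml (illustrative fragment)
story_path: docs/stories/latest.md
auto_update_status: false
```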

## Workflow Structure

### Files Included

```
story-context/
├── workflow.yaml         # Configuration and metadata
├── instructions.md       # Step-by-step execution guide
├── context-template.xml  # XML structure template
├── checklist.md          # Validation criteria
└── README.md             # This file
```

## Workflow Process

### Phase 1: Story Analysis (Steps 1-2)

- **Story Location**: Finds and loads target story markdown file
- **Content Parsing**: Extracts epic ID, story ID, title, acceptance criteria, and tasks
- **Template Initialization**: Creates initial XML context structure
- **User Story Extraction**: Parses "As a... I want... So that..." components

### Phase 2: Documentation Discovery (Step 3)

- **Keyword Analysis**: Identifies relevant terms from story content
- **Document Scanning**: Searches docs and module documentation
- **Authority Prioritization**: Prefers PRDs, architecture docs, and specs
- **Context Extraction**: Captures relevant sections with snippets

### Phase 3: Code Analysis (Step 4)

- **Symbol Search**: Finds relevant modules, functions, and components
- **Interface Identification**: Locates existing APIs and interfaces
- **Constraint Extraction**: Identifies development patterns and requirements
- **Reuse Opportunities**: Highlights existing code to leverage

### Phase 4: Dependency Analysis (Step 5)

- **Manifest Detection**: Scans for package.json, requirements.txt, go.mod, etc.
- **Framework Identification**: Identifies Unity, Node.js, Python, Go ecosystems
- **Version Tracking**: Captures dependency versions where available
- **Configuration Discovery**: Finds relevant project configurations

### Phase 5: Testing Context (Step 6)

- **Standards Extraction**: Identifies testing frameworks and patterns
- **Location Mapping**: Documents where tests should be placed
- **Test Ideas**: Generates initial test concepts for acceptance criteria
- **Framework Integration**: Links to existing test infrastructure

### Phase 6: Validation and Updates (Steps 7-8)

- **XML Validation**: Ensures proper structure and completeness
- **Status Updates**: Changes story status from Draft to ContextReadyDraft
- **Reference Tracking**: Adds context file reference to story document
- **Quality Assurance**: Validates against workflow checklist

## Output

### Generated Files

- **Primary output**: story-context-{epic_id}.{story_id}-{date}.xml
- **Story updates**: Modified original story file with context references

### Output Structure

```xml
<storyContext>
  <story>
    <basicInfo>
      <epicId>...</epicId>
      <storyId>...</storyId>
      <title>...</title>
      <status>...</status>
    </basicInfo>
    <userStory>
      <asA>...</asA>
      <iWant>...</iWant>
      <soThat>...</soThat>
    </userStory>
    <acceptanceCriteria>
      <criterion id="1">...</criterion>
    </acceptanceCriteria>
    <tasks>
      <task>...</task>
    </tasks>
  </story>

  <artifacts>
    <docs>
      <doc path="..." title="..." section="..." snippet="..."/>
    </docs>
    <code>
      <file path="..." kind="..." symbol="..." lines="..." reason="..."/>
    </code>
    <dependencies>
      <node>
        <package name="..." version="..."/>
      </node>
    </dependencies>
  </artifacts>

  <interfaces>
    <interface name="..." kind="..." signature="..." path="..."/>
  </interfaces>

  <constraints>
    <constraint>...</constraint>
  </constraints>

  <tests>
    <standards>...</standards>
    <locations>
      <location>...</location>
    </locations>
    <ideas>
      <idea ac="1">...</idea>
    </ideas>
  </tests>
</storyContext>
```

## Requirements

- **Story File**: Valid story markdown with proper structure (epic.story.title.md format)
- **Repository Access**: Ability to scan documentation and source code
- **Documentation**: Project documentation in standard locations (docs/, src/, etc.)

## Best Practices

### Before Starting

1. **Ensure Story Quality**: Verify story has clear acceptance criteria and tasks
2. **Update Documentation**: Ensure relevant docs are current and discoverable
3. **Clean Repository**: Remove obsolete code that might confuse context assembly

### During Execution

1. **Review Extracted Context**: Verify that discovered artifacts are actually relevant
2. **Check Interface Accuracy**: Ensure identified APIs and interfaces are current
3. **Validate Dependencies**: Confirm dependency information matches project state

### After Completion

1. **Review XML Output**: Validate the generated context makes sense
2. **Test with Developer**: Have a developer review context usefulness
3. **Update Story Status**: Verify story status was properly updated

## Troubleshooting

### Common Issues

**Issue**: Context contains irrelevant or outdated information

- **Solution**: Review keyword extraction and document filtering logic
- **Check**: Ensure story title and acceptance criteria are specific and clear

**Issue**: Missing relevant code or interfaces

- **Solution**: Verify code search patterns and symbol extraction
- **Check**: Ensure relevant code follows project naming conventions

**Issue**: Dependency information is incomplete or wrong

- **Solution**: Check for multiple package manifests or unusual project structure
- **Check**: Verify dependency files are in expected locations and formats

## Customization

To customize this workflow:

1. **Modify Search Patterns**: Update instructions.md to adjust code and doc discovery
2. **Extend XML Schema**: Customize context-template.xml for additional context types
3. **Add Validation**: Extend checklist.md with project-specific quality criteria
4. **Configure Dependencies**: Adjust dependency detection for specific tech stacks

## Version History

- **v6.0.0** - XML-based context assembly with comprehensive artifact discovery
  - Multi-ecosystem dependency detection
  - Interface and constraint extraction
  - Testing context integration
  - Story status management

## Support

For issues or questions:

- Review the workflow creation guide at `/bmad/bmb/workflows/create-workflow/workflow-creation-guide.md`
- Validate output using `checklist.md`
- Ensure story files follow expected markdown structure
- Check that repository structure supports automated discovery

---

_Part of the BMad Method v6 - BMM (Method) Module_

# BMM Workflows - Complete v6 Guide

Master guide for BMM's four-phase methodology that adapts to project scale (Level 0-4) and context (greenfield/brownfield).

## Table of Contents

- [Core Innovations](#core-innovations)
- [Universal Entry Point](#universal-entry-point)
- [Four Phases Overview](#four-phases-overview)
- [Phase Details](#phase-details)
  - [Documentation Prerequisite](#documentation-prerequisite)
  - [Phase 1: Analysis](#phase-1-analysis)
  - [Phase 2: Planning](#phase-2-planning)
  - [Phase 3: Solutioning](#phase-3-solutioning)
  - [Phase 4: Implementation](#phase-4-implementation)
- [Scale Levels](#scale-levels)
- [Greenfield vs Brownfield](#greenfield-vs-brownfield)
- [Critical Rules](#critical-rules)

## Core Innovations

- **Scale-Adaptive Planning** - Automatic routing based on complexity (Level 0-4)
- **Just-In-Time Design** - Tech specs created per epic during implementation
- **Dynamic Expertise Injection** - Story-specific technical guidance
- **Continuous Learning Loop** - Retrospectives improve each cycle
- **Document Sharding Support** - All workflows handle whole or sharded documents for efficiency

## Universal Entry Point

**Always start with `workflow-status` or `workflow-init`** (see the invocation sketch below)

### workflow-status

- Checks for existing workflow status file
- Displays current phase and progress
- Routes to appropriate next workflow
- Shows Phase 4 implementation state

### workflow-init

- Creates initial bmm-workflow-status.md
- Detects greenfield vs brownfield
- Routes undocumented brownfield to document-project
- Sets up workflow tracking
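
A minimal sketch of the two entry-point calls, following the invocation style used elsewhere in this guide:

```bash
# Check where you are and what comes next
workflow workflow-status

# Or, on a fresh project, set up tracking first
workflow workflow-init
```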

## Four Phases Overview

```
PREREQUISITE: document-project (brownfield without docs)
    ↓
PHASE 1: Analysis (optional)
    brainstorm → research → brief
    ↓
PHASE 2: Planning (required, scale-adaptive)
    Level 0-1: tech-spec only
    Level 2-4: PRD + epics
    ↓
PHASE 3: Solutioning (Level 2-4 only)
    architecture → validation → gate-check
    ↓
PHASE 4: Implementation (iterative)
    sprint-planning → epic-context → story cycle
```

## Phase Details

### Documentation Prerequisite

**When:** Brownfield projects without documentation OR post-completion cleanup

| Workflow         | Purpose                       | Output             |
| ---------------- | ----------------------------- | ------------------ |
| document-project | Analyze and document codebase | Comprehensive docs |

**Use Cases:**

1. **Pre-Phase 1**: Understand existing brownfield code
2. **Post-Phase 4**: Create clean documentation replacing scattered artifacts

### Phase 1: Analysis

**Optional workflows for discovery and requirements gathering**

| Workflow           | Agent         | Purpose               | Output                 |
| ------------------ | ------------- | --------------------- | ---------------------- |
| brainstorm-project | Analyst       | Software ideation     | Architecture proposals |
| brainstorm-game    | Game Designer | Game concept ideation | Concept proposals      |
| research           | Analyst       | Multi-mode research   | Research artifacts     |
| product-brief      | Analyst       | Strategic planning    | Product brief          |
| game-brief         | Game Designer | Game foundation       | Game brief             |

### Phase 2: Planning

**Required phase with scale-adaptive routing**

| Workflow         | Agent         | Output         | Levels      |
| ---------------- | ------------- | -------------- | ----------- |
| prd              | PM            | PRD.md + epics | 2-4         |
| tech-spec        | PM            | tech-spec.md   | 0-1         |
| gdd              | Game Designer | GDD.md         | Games       |
| create-ux-design | UX            | ux-design.md   | Conditional |

### Phase 3: Solutioning

**Architecture phase for Level 2-4 projects**

| Workflow               | Agent     | Purpose           | Output                 |
| ---------------------- | --------- | ----------------- | ---------------------- |
| create-architecture    | Architect | System design     | architecture.md + ADRs |
| validate-architecture  | Architect | Design validation | Validation report      |
| solutioning-gate-check | Architect | PRD/UX/arch check | Gate report            |

### Phase 4: Implementation

**Sprint-based development cycle**

#### Sprint Status System

**Epic Flow:** `backlog → contexted`

**Story Flow:** `backlog → drafted → ready-for-dev → in-progress → review → done`

#### Implementation Workflows

| Workflow          | Agent | Purpose                 | Status Updates                              |
| ----------------- | ----- | ----------------------- | ------------------------------------------- |
| sprint-planning   | SM    | Initialize tracking     | Creates sprint-status.yaml                  |
| epic-tech-context | SM    | Epic technical context  | Epic: backlog → contexted                   |
| create-story      | SM    | Draft story files       | Story: backlog → drafted                    |
| story-context     | SM    | Implementation guidance | Story: drafted → ready-for-dev              |
| dev-story         | DEV   | Implement               | Story: ready-for-dev → in-progress → review |
| code-review       | DEV   | Quality validation      | No auto update                              |
| retrospective     | SM    | Capture learnings       | Retrospective: optional → completed         |
| correct-course    | SM    | Handle issues           | Adaptive                                    |

#### Implementation Loop

```
sprint-planning (creates sprint-status.yaml)
    ↓
For each epic:
    epic-tech-context
    ↓
    For each story:
        create-story → story-context → dev-story → code-review
        ↓
        Mark done in sprint-status.yaml
    ↓
    retrospective (epic complete)
```

## Scale Levels

| Level | Scope         | Documentation         | Path            |
| ----- | ------------- | --------------------- | --------------- |
| 0     | Single change | tech-spec only        | Phase 2 → 4     |
| 1     | 1-10 stories  | tech-spec only        | Phase 2 → 4     |
| 2     | 5-15 stories  | PRD + architecture    | Phase 2 → 3 → 4 |
| 3     | 12-40 stories | PRD + full arch       | Phase 2 → 3 → 4 |
| 4     | 40+ stories   | PRD + enterprise arch | Phase 2 → 3 → 4 |

## Greenfield vs Brownfield

### Greenfield (New Projects)

```
Phase 1 (optional) → Phase 2 → Phase 3 (L2-4) → Phase 4
```

- Clean slate for design
- No existing constraints
- Direct to planning

### Brownfield (Existing Code)

```
document-project (if needed) → Phase 1 (optional) → Phase 2 → Phase 3 (L2-4) → Phase 4
```

- Must understand existing patterns
- Documentation prerequisite if undocumented
- Consider technical debt in planning

## Critical Rules

### Phase Transitions

1. **Check workflow-status** before any Phase 1-3 workflow
2. **Document brownfield** before planning if undocumented
3. **Complete planning** before solutioning
4. **Complete architecture** (L2-4) before implementation

### Implementation Rules

1. **Epic context first** - Must context epic before drafting stories
2. **Sequential by default** - Work stories in order within epic
3. **Learning transfer** - Draft next story after previous done
4. **Manual status updates** - Update sprint-status.yaml as needed

### Story Management

1. **Single source of truth** - sprint-status.yaml tracks everything
2. **No story search** - Agents read exact story from status file
3. **Context injection** - Each story gets specific technical guidance
4. **Retrospective learning** - Capture improvements per epic

### Best Practices

1. **Trust the process** - Let workflows guide you
2. **Respect scale** - Don't over-document small projects
3. **Use status tracking** - Always know where you are
4. **Iterate and learn** - Each epic improves the next
5. **Consider sharding** - Split large documents (PRD, epics, architecture) for efficiency

## Document Sharding

For large multi-epic projects, consider sharding planning documents to improve workflow efficiency.

### What is Sharding?

Sharding splits large markdown files into smaller section-based files:

- PRD with 15 epics → `prd/epic-1.md`, `prd/epic-2.md`, etc.
- Large epics file → Individual epic files
- Architecture layers → Separate layer files

### Benefits

**Phase 1-3 Workflows:**

- Workflows load entire sharded documents (transparent to user)
- Better organization for large projects

**Phase 4 Workflows:**

- **Selective loading** - Only load the epic/section needed
- **Massive efficiency** - 90%+ token reduction for 10+ epic projects
- Examples: `epic-tech-context`, `create-story`, `story-context`, `code-review`

### Usage

```
Load bmad-master or analyst agent:
/shard-doc

Source: docs/PRD.md
Destination: docs/prd/
```

**All BMM workflows automatically support both whole and sharded documents.**

**[→ Complete Sharding Guide](../../../docs/document-sharding-guide.md)**

---

For specific workflow details, see individual workflow READMEs in their respective directories.

# Document Project Workflow

**Version:** 1.2.0
**Module:** BMM (BMAD Method Module)
**Type:** Action Workflow (Documentation Generator)

## Purpose

Analyzes and documents brownfield projects by scanning the codebase, architecture, and patterns to create comprehensive reference documentation for AI-assisted development. Generates a master index and multiple documentation files tailored to project structure and type.

**NEW in v1.2.0:** Context-safe architecture with scan levels, resumability, and a write-as-you-go pattern to prevent context exhaustion.

## Key Features

- **Multi-Project Type Support**: Handles web, backend, mobile, CLI, game, embedded, data, infra, library, desktop, and extension projects
- **Multi-Part Detection**: Automatically detects and documents projects with separate client/server or multiple services
- **Three Scan Levels** (NEW v1.2.0): Quick (2-5 min), Deep (10-30 min), Exhaustive (30-120 min)
- **Resumability** (NEW v1.2.0): Interrupt and resume workflows without losing progress
- **Write-as-you-go** (NEW v1.2.0): Documents written immediately to prevent context exhaustion
- **Intelligent Batching** (NEW v1.2.0): Subfolder-based processing for deep/exhaustive scans
- **Data-Driven Analysis**: Uses CSV-based project type detection and documentation requirements
- **Comprehensive Scanning**: Analyzes APIs, data models, UI components, configuration, security patterns, and more
- **Architecture Matching**: Matches projects to 170+ architecture templates from the solutioning registry
- **Brownfield PRD Ready**: Generates documentation specifically designed for AI agents planning new features

## How to Invoke

```bash
workflow document-project
```

Or from BMAD CLI:

```bash
/bmad:bmm:workflows:document-project
```

## Scan Levels (NEW in v1.2.0)

Choose the right scan depth for your needs:

### 1. Quick Scan (Default)

**Duration:** 2-5 minutes
**What it does:** Pattern-based analysis without reading source files
**Reads:** Config files, package manifests, directory structure, README
**Use when:**

- You need a fast project overview
- Initial understanding of project structure
- Planning next steps before deeper analysis

**Does NOT read:** Source code files (`*.js`, `*.ts`, `*.py`, `*.go`, etc.)

### 2. Deep Scan

**Duration:** 10-30 minutes
**What it does:** Reads files in critical directories based on project type
**Reads:** Files in critical paths defined by documentation requirements
**Use when:**

- Creating comprehensive documentation for brownfield PRD
- Need detailed analysis of key areas
- Want balance between depth and speed

**Example:** For a web app, reads controllers/, models/, components/, but not every utility file

### 3. Exhaustive Scan

**Duration:** 30-120 minutes
**What it does:** Reads ALL source files in project
**Reads:** Every source file (excludes node_modules, dist, build, .git)
**Use when:**

- Complete project analysis needed
- Migration planning requires full understanding
- Detailed audit of entire codebase
- Deep technical debt assessment

**Note:** Deep-dive mode ALWAYS uses exhaustive scan (no choice)

## Resumability (NEW in v1.2.0)

The workflow can be interrupted and resumed without losing progress:

- **State Tracking:** Progress saved in `project-scan-report.json`
- **Auto-Detection:** Workflow detects incomplete runs (<24 hours old)
- **Resume Prompt:** Choose to resume or start fresh
- **Step-by-Step:** Resume from exact step where interrupted
- **Archiving:** Old state files automatically archived

**Example Resume Flow:**

```
> workflow document-project

I found an in-progress workflow state from 2025-10-11 14:32:15.

Current Progress:
- Mode: initial_scan
- Scan Level: deep
- Completed Steps: 5/12
- Last Step: step_5

Would you like to:
1. Resume from where we left off - Continue from step 6
2. Start fresh - Archive old state and begin new scan
3. Cancel - Exit without changes

Your choice [1/2/3]:
```

## What It Does

### Step-by-Step Process

1. **Detects Project Structure** - Identifies if project is single-part or multi-part (client/server/etc.)
2. **Classifies Project Type** - Matches against 12 project types (web, backend, mobile, etc.)
3. **Discovers Documentation** - Finds existing README, CONTRIBUTING, ARCHITECTURE files
4. **Analyzes Tech Stack** - Parses package files, identifies frameworks, versions, dependencies
5. **Conditional Scanning** - Performs targeted analysis based on project type requirements:
   - API routes and endpoints
   - Database models and schemas
   - State management patterns
   - UI component libraries
   - Configuration and security
   - CI/CD and deployment configs
6. **Generates Source Tree** - Creates annotated directory structure with critical paths
7. **Extracts Dev Instructions** - Documents setup, build, run, and test commands
8. **Creates Architecture Docs** - Generates detailed architecture using matched templates
9. **Builds Master Index** - Creates comprehensive index.md as primary AI retrieval source
10. **Validates Output** - Runs 140+ point checklist to ensure completeness

### Output Files

**Single-Part Projects:**

- `index.md` - Master index
- `project-overview.md` - Executive summary
- `architecture.md` - Detailed architecture
- `source-tree-analysis.md` - Annotated directory tree
- `component-inventory.md` - Component catalog (if applicable)
- `development-guide.md` - Local dev instructions
- `api-contracts.md` - API documentation (if applicable)
- `data-models.md` - Database schema (if applicable)
- `deployment-guide.md` - Deployment process (optional)
- `contribution-guide.md` - Contributing guidelines (optional)
- `project-scan-report.json` - State file for resumability (NEW v1.2.0)

**Multi-Part Projects (e.g., client + server):**

- `index.md` - Master index with part navigation
- `project-overview.md` - Multi-part summary
- `architecture-{part_id}.md` - Per-part architecture docs
- `source-tree-analysis.md` - Full tree with part annotations
- `component-inventory-{part_id}.md` - Per-part components
- `development-guide-{part_id}.md` - Per-part dev guides
- `integration-architecture.md` - How parts communicate
- `project-parts.json` - Machine-readable metadata
- `project-scan-report.json` - State file for resumability (NEW v1.2.0)
- Additional conditional files per part (API, data models, etc.)

## Data Files

The workflow uses a single comprehensive CSV file:

**documentation-requirements.csv** - Complete project analysis guide

- Location: `/bmad/bmm/workflows/document-project/documentation-requirements.csv`
- 12 project types (web, mobile, backend, cli, library, desktop, game, data, extension, infra, embedded)
- 24 columns combining:
  - **Detection columns**: `project_type_id`, `key_file_patterns` (identifies project type from codebase)
  - **Requirement columns**: `requires_api_scan`, `requires_data_models`, `requires_ui_components`, etc.
  - **Pattern columns**: `critical_directories`, `test_file_patterns`, `config_patterns`, etc.
- Self-contained: All project detection AND scanning requirements in one file
- Architecture patterns inferred from tech stack (no external registry needed)

## Use Cases

### Primary Use Case: Brownfield PRD Creation

After running this workflow, use the generated `index.md` as input to brownfield PRD workflows:

```
User: "I want to add a new dashboard feature"
PRD Workflow: Loads docs/index.md
  → Understands existing architecture
  → Identifies reusable components
  → Plans integration with existing APIs
  → Creates contextual PRD with epics and stories
```

### Other Use Cases

- **Onboarding New Developers** - Comprehensive project documentation
- **Architecture Review** - Structured analysis of existing system
- **Technical Debt Assessment** - Identify patterns and anti-patterns
- **Migration Planning** - Understand current state before refactoring

## Requirements

### Recommended Inputs (Optional)

- Project root directory (defaults to current directory)
- README.md or similar docs (auto-discovered if present)
- User guidance on key areas to focus on (workflow will ask)

### Tools Used

- File system scanning (Glob, Read, Grep)
- Code analysis
- Git repository analysis (optional)

## Configuration

### Default Output Location

Files are saved to: `{output_folder}` (from config.yaml)

Default: `/docs/` folder in project root

### Customization

- Modify `documentation-requirements.csv` to adjust scanning patterns for project types
- Add new project types to `project-types.csv`
- Add new architecture templates to `registry.csv`

## Example: Multi-Part Web App

**Input:**

```
my-app/
├── client/    # React frontend
├── server/    # Express backend
└── README.md
```

**Detection Result:**

- Repository Type: Monorepo
- Part 1: client (web/React)
- Part 2: server (backend/Express)

**Output (10+ files):**

```
docs/
├── index.md
├── project-overview.md
├── architecture-client.md
├── architecture-server.md
├── source-tree-analysis.md
├── component-inventory-client.md
├── development-guide-client.md
├── development-guide-server.md
├── api-contracts-server.md
├── data-models-server.md
├── integration-architecture.md
└── project-parts.json
```

## Example: Simple CLI Tool

**Input:**

```
hello-cli/
├── main.go
├── go.mod
└── README.md
```

**Detection Result:**

- Repository Type: Monolith
- Part 1: main (cli/Go)

**Output (4 files):**

```
docs/
├── index.md
├── project-overview.md
├── architecture.md
└── source-tree-analysis.md
```
|
||||
|
||||
## Deep-Dive Mode
|
||||
|
||||
### What is Deep-Dive Mode?
|
||||
|
||||
When you run the workflow on a project that already has documentation, you'll be offered a choice:
|
||||
|
||||
1. **Rescan entire project** - Update all documentation with latest changes
|
||||
2. **Deep-dive into specific area** - Generate EXHAUSTIVE documentation for a particular feature/module/folder
|
||||
3. **Cancel** - Keep existing documentation
|
||||
|
||||
Deep-dive mode performs **comprehensive, file-by-file analysis** of a specific area, reading EVERY file completely and documenting:
|
||||
|
||||
- All exports with complete signatures
|
||||
- All imports and dependencies
|
||||
- Dependency graphs and data flow
|
||||
- Code patterns and implementations
|
||||
- Testing coverage and strategies
|
||||
- Integration points
|
||||
- Reuse opportunities
|
||||
|
||||
### When to Use Deep-Dive Mode
|
||||
|
||||
- **Before implementing a feature** - Deep-dive the area you'll be modifying
|
||||
- **During architecture review** - Deep-dive complex modules
|
||||
- **For code understanding** - Deep-dive unfamiliar parts of codebase
|
||||
- **When creating PRDs** - Deep-dive areas affected by new features
|
||||
|
||||
### Deep-Dive Process
|
||||
|
||||
1. Workflow detects existing `index.md`
|
||||
2. Offers deep-dive option
|
||||
3. Suggests areas based on project structure:
|
||||
- API route groups
|
||||
- Feature modules
|
||||
- UI component areas
|
||||
- Services/business logic
|
||||
4. You select area or specify custom path
|
||||
5. Workflow reads EVERY file in that area
|
||||
6. Generates `deep-dive-{area-name}.md` with complete analysis
|
||||
7. Updates `index.md` with link to deep-dive doc
|
||||
8. Offers to deep-dive another area or finish
|
||||
|
||||
### Deep-Dive Output Example
|
||||
|
||||
**docs/deep-dive-dashboard-feature.md:**
|
||||
|
||||
- Complete file inventory (47 files analyzed)
|
||||
- Every export with signatures
|
||||
- Dependency graph
|
||||
- Data flow analysis
|
||||
- Integration points
|
||||
- Testing coverage
|
||||
- Related code references
|
||||
- Implementation guidance
|
||||
- ~3,000 LOC documented in detail
|
||||
|
||||
### Incremental Deep-Diving
|
||||
|
||||
You can deep-dive multiple areas over time:
|
||||
|
||||
- First run: Scan entire project → generates index.md
|
||||
- Second run: Deep-dive dashboard feature
|
||||
- Third run: Deep-dive API layer
|
||||
- Fourth run: Deep-dive authentication system
|
||||
|
||||
All deep-dive docs are linked from the master index.
|
||||
|
||||
## Validation
|
||||
|
||||
The workflow includes a comprehensive 160+ point checklist covering:
|
||||
|
||||
- Project detection accuracy
|
||||
- Technology stack completeness
|
||||
- Codebase scanning thoroughness
|
||||
- Architecture documentation quality
|
||||
- Multi-part handling (if applicable)
|
||||
- Brownfield PRD readiness
|
||||
- Deep-dive completeness (if applicable)
|
||||
|
||||
## Next Steps After Completion
|
||||
|
||||
1. **Review** `docs/index.md` - Your master documentation index
|
||||
2. **Validate** - Check generated docs for accuracy
|
||||
3. **Use for PRD** - Point brownfield PRD workflow to index.md
|
||||
4. **Maintain** - Re-run workflow when architecture changes significantly
|
||||
|
||||
## File Structure
|
||||
|
||||
```
|
||||
document-project/
|
||||
├── workflow.yaml # Workflow configuration
|
||||
├── instructions.md # Step-by-step workflow logic
|
||||
├── checklist.md # Validation criteria
|
||||
├── documentation-requirements.csv # Project type scanning patterns
|
||||
├── templates/ # Output templates
|
||||
│ ├── index-template.md
|
||||
│ ├── project-overview-template.md
|
||||
│ └── source-tree-template.md
|
||||
└── README.md # This file
|
||||
```
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
**Issue: Project type not detected correctly**
|
||||
|
||||
- Solution: Workflow will ask for confirmation; manually select correct type
|
||||
|
||||
**Issue: Missing critical information**
|
||||
|
||||
- Solution: Provide additional context when prompted; re-run specific analysis steps
|
||||
|
||||
**Issue: Multi-part detection missed a part**
|
||||
|
||||
- Solution: When asked to confirm parts, specify the missing part and its path
|
||||
|
||||
**Issue: Architecture template doesn't match well**
|
||||
|
||||
- Solution: Check registry.csv; may need to add new template or adjust matching criteria

## Architecture Improvements in v1.2.0

### Context-Safe Design

The workflow now uses a write-as-you-go architecture:

- Documents written immediately to disk (not accumulated in memory)
- Detailed findings purged after writing (only summaries kept)
- State tracking enables resumption from any step
- Batching strategy prevents context exhaustion on large projects

### Batching Strategy

For deep/exhaustive scans:

- Process ONE subfolder at a time
- Read files → Extract info → Write output → Validate → Purge context
- Primary concern is file SIZE (not count)
- Track batches in state file for resumability
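
As an illustration only, a minimal sketch of the size-based batching idea (the helper, its threshold, and its names are assumptions for this example; the real workflow performs these steps through the BMAD workflow runner, not through code like this):

```typescript
import { readdirSync, statSync } from 'node:fs';
import { join } from 'node:path';

// Yield batches of file paths grouped by cumulative SIZE, not by count.
function* batchesBySize(root: string, maxBatchBytes = 512 * 1024) {
  let batch: string[] = [];
  let bytes = 0;
  for (const entry of readdirSync(root, { recursive: true }) as string[]) {
    const path = join(root, entry);
    const stats = statSync(path);
    if (!stats.isFile()) continue;
    if (bytes + stats.size > maxBatchBytes && batch.length > 0) {
      yield batch; // read → extract → write → validate → purge happens per batch
      batch = [];
      bytes = 0;
    }
    batch.push(path);
    bytes += stats.size;
  }
  if (batch.length > 0) yield batch;
}
```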

### State File Format

Optimized JSON (no pretty-printing):

```json
{
  "workflow_version": "1.2.0",
  "timestamps": {...},
  "mode": "initial_scan",
  "scan_level": "deep",
  "completed_steps": [...],
  "current_step": "step_6",
  "findings": {"summary": "only"},
  "outputs_generated": [...],
  "resume_instructions": "..."
}
```
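
A hypothetical sketch of how a runner could consult this state file to resume (field names mirror the example above; the file path and messages are invented):

```typescript
import { readFileSync } from 'node:fs';

interface WorkflowState {
  workflow_version: string;
  mode: string;
  scan_level: string;
  completed_steps: string[];
  current_step: string;
}

// The path is an assumption for this example.
const state: WorkflowState = JSON.parse(readFileSync('.document-project-state.json', 'utf8'));

if (state.completed_steps.includes(state.current_step)) {
  console.log(`Step ${state.current_step} already complete; advance to the next step`);
} else {
  console.log(`Resuming ${state.mode} (${state.scan_level} scan) at ${state.current_step}`);
}
```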

@@ -1,38 +0,0 @@

# Document Project Workflow Templates

This directory contains template files for the `document-project` workflow.

## Template Files

- **index-template.md** - Master index template (adapts for single/multi-part projects)
- **project-overview-template.md** - Executive summary and high-level overview
- **source-tree-template.md** - Annotated directory structure

## Template Usage

The workflow dynamically selects and populates templates based on:

1. **Project structure** (single part vs multi-part)
2. **Project type** (web, backend, mobile, etc.)
3. **Documentation requirements** (from documentation-requirements.csv)

## Variable Naming Convention

Templates use Handlebars-style variables:

- `{{variable_name}}` - Simple substitution
- `{{#if condition}}...{{/if}}` - Conditional blocks
- `{{#each collection}}...{{/each}}` - Iteration
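
For illustration, here is how a template in this style could be rendered with the `handlebars` npm package (the template string and data below are invented; the workflow populates templates through the runner rather than through this code):

```typescript
import Handlebars from 'handlebars';

// Invented mini-template exercising all three variable forms listed above.
const source = `# {{project_name}}
{{#if multi_part}}This project has multiple parts:{{/if}}
{{#each parts}}- {{this}}
{{/each}}`;

const render = Handlebars.compile(source);
console.log(render({ project_name: 'Acme', multi_part: true, parts: ['web', 'api'] }));
```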

## Additional Templates

Architecture-specific templates are dynamically loaded from `/bmad/bmm/workflows/3-solutioning/templates/`, based on the matched architecture type from the registry.

## Notes

- Templates support both simple and complex project structures
- Multi-part projects get part-specific file naming (e.g., `architecture-{part_id}.md`)
- Single-part projects use simplified naming (e.g., `architecture.md`)

src/modules/bmm/workflows/techdoc/documentation-standards.md (new file, 239 lines)

@@ -0,0 +1,239 @@

# Technical Documentation Standards for BMAD

**For Agent: Paige (Documentation Guide)**
**Purpose: Concise reference for documentation creation and review**

---

## CRITICAL RULE: CommonMark Strict Compliance

ALL documentation MUST follow CommonMark specification exactly. No exceptions.

### CommonMark Essentials

**Headers:**

- Use ATX-style ONLY: `#` `##` `###` (NOT Setext underlines)
- Single space after `#`: `# Title` (NOT `#Title`)
- No trailing `#`: `# Title` (NOT `# Title #`)
- Hierarchical order: Don't skip levels (h1→h2→h3, not h1→h3)

**Code Blocks:**

- Use fenced blocks with language identifier:

````markdown
```javascript
const example = 'code';
```
````

- NOT indented code blocks (ambiguous)

**Lists:**

- Consistent markers within list: all `-` or all `*` or all `+` (don't mix)
- Proper indentation for nested items (2 or 4 spaces, stay consistent)
- Blank line before/after list for clarity

**Links:**

- Inline: `[text](url)`
- Reference: `[text][ref]` then `[ref]: url` at bottom
- NO bare URLs without `<>` brackets

**Emphasis:**

- Italic: `*text*` or `_text_`
- Bold: `**text**` or `__text__`
- Consistent style within document

**Line Breaks:**

- Two spaces at end of line + newline, OR
- Blank line between paragraphs
- NO single line breaks (they're ignored)

---

## Mermaid Diagrams: Valid Syntax Required

**Critical Rules:**

1. Always specify the diagram type on the first line
2. Use valid Mermaid v10+ syntax
3. Test syntax before outputting (mental validation)
4. Keep focused: 5-10 nodes ideal, max 15

**Diagram Type Selection:**

- **flowchart** - Process flows, decision trees, workflows
- **sequenceDiagram** - API interactions, message flows, time-based processes
- **classDiagram** - Object models, class relationships, system structure
- **erDiagram** - Database schemas, entity relationships
- **stateDiagram-v2** - State machines, lifecycle stages
- **gitGraph** - Branch strategies, version control flows

**Formatting:**

````markdown
```mermaid
flowchart TD
    Start[Clear Label] --> Decision{Question?}
    Decision -->|Yes| Action1[Do This]
    Decision -->|No| Action2[Do That]
```
````

---

## Style Guide Principles (Distilled)

Apply in this hierarchy:

1. **Project-specific guide** (if one exists) - always ask first
2. **BMAD conventions** (this document)
3. **Google Developer Docs style** (defaults below)
4. **CommonMark spec** (when in doubt)

### Core Writing Rules

**Task-Oriented Focus:**

- Write for user GOALS, not feature lists
- Start with WHY, then HOW
- Every doc answers: "What can I accomplish?"

**Clarity Principles:**

- Active voice: "Click the button" NOT "The button should be clicked"
- Present tense: "The function returns" NOT "The function will return"
- Direct language: "Use X for Y" NOT "X can be used for Y"
- Second person: "You configure" NOT "Users configure" or "One configures"

**Structure:**

- One idea per sentence
- One topic per paragraph
- Headings describe content accurately
- Examples follow explanations

**Accessibility:**

- Descriptive link text: "See the API reference" NOT "Click here"
- Alt text for diagrams: Describe what it shows
- Semantic heading hierarchy (don't skip levels)
- Tables have headers
- Emojis are acceptable if user preferences allow (modern accessibility tools support emojis well)

---

## OpenAPI/API Documentation

**Required Elements:**

- Endpoint path and method
- Authentication requirements
- Request parameters (path, query, body) with types
- Request example (realistic, working)
- Response schema with types
- Response examples (success + common errors)
- Error codes and meanings

**Quality Standards:**

- OpenAPI 3.0+ specification compliance
- Complete schemas (no missing fields)
- Examples that actually work
- Clear error messages
- Security schemes documented

---

## Documentation Types: Quick Reference

**README:**

- What (overview), Why (purpose), How (quick start)
- Installation, Usage, Contributing, License
- Under 500 lines (link to detailed docs)

**API Reference:**

- Complete endpoint coverage
- Request/response examples
- Authentication details
- Error handling
- Rate limits if applicable

**User Guide:**

- Task-based sections (How to...)
- Step-by-step instructions
- Screenshots/diagrams where helpful
- Troubleshooting section

**Architecture Docs:**

- System overview diagram (Mermaid)
- Component descriptions
- Data flow
- Technology decisions (ADRs)
- Deployment architecture

**Developer Guide:**

- Setup/environment requirements
- Code organization
- Development workflow
- Testing approach
- Contribution guidelines

---

## Quality Checklist

Before finalizing ANY documentation:

- [ ] CommonMark compliant (no violations)
- [ ] Headers in proper hierarchy
- [ ] All code blocks have language tags
- [ ] Links work and have descriptive text
- [ ] Mermaid diagrams render correctly
- [ ] Active voice, present tense
- [ ] Task-oriented (answers "how do I...")
- [ ] Examples are concrete and working
- [ ] Accessibility standards met
- [ ] Spelling/grammar checked
- [ ] Reads clearly at target skill level

---

## BMAD-Specific Conventions

**File Organization:**

- `README.md` at root of each major component
- `docs/` folder for extensive documentation
- Workflow-specific docs in workflow folder
- Cross-references use relative paths

**Frontmatter:**
Use YAML frontmatter when appropriate:

```yaml
---
title: Document Title
description: Brief description
author: Author name
date: YYYY-MM-DD
---
```

**Metadata:**

- Always include last-updated date
- Version info for versioned docs
- Author attribution for accountability

---

**Remember: This is your foundation. Follow these rules consistently, and all documentation will be clear, accessible, and maintainable.**

@@ -1,26 +0,0 @@

# Test Architect Workflows

This directory houses the per-command workflows used by the Test Architect agent (`tea`). Each workflow wraps the standalone instructions that used to live under `testarch/` so they can run through the standard BMAD workflow runner.

## Available workflows

- `framework` – scaffolds Playwright/Cypress harnesses.
- `atdd` – generates failing acceptance tests before coding.
- `automate` – expands regression coverage after implementation.
- `ci` – bootstraps CI/CD pipelines aligned with TEA practices.
- `test-design` – combines risk assessment and coverage planning.
- `trace` – maps requirements to tests (Phase 1) and makes quality gate decisions (Phase 2).
- `nfr-assess` – evaluates non-functional requirements.
- `test-review` – reviews test quality using knowledge base patterns and generates a quality score.

**Note**: The `gate` workflow has been merged into `trace` as Phase 2. The `*trace` command now performs both requirements-to-tests traceability mapping AND the quality gate decision (PASS/CONCERNS/FAIL/WAIVED) in a single atomic operation.

Each subdirectory contains:

- `README.md` – comprehensive workflow documentation with usage, inputs, outputs, and integration notes.
- `instructions.md` – detailed workflow steps in pure markdown v4.0 format.
- `workflow.yaml` – metadata, variables, and configuration for the BMAD workflow runner.
- `checklist.md` – validation checklist for quality assurance and completeness verification.
- `template.md` – output template for workflow deliverables (where applicable).

The TEA agent now invokes these workflows via `run-workflow` rather than executing instruction files directly.

@@ -1,672 +0,0 @@

# ATDD (Acceptance Test-Driven Development) Workflow

Generates failing acceptance tests BEFORE implementation following TDD's red-green-refactor cycle. Creates comprehensive test coverage at appropriate levels (E2E, API, Component) with supporting infrastructure (fixtures, factories, mocks) and provides an implementation checklist to guide development toward passing tests.

**Core Principle**: Tests fail first (red phase), guide development to green, then enable confident refactoring.

## Usage

```bash
bmad tea *atdd
```

The TEA agent runs this workflow when:

- User story is approved with clear acceptance criteria
- Development is about to begin (before any implementation code)
- Team is practicing Test-Driven Development (TDD)
- Need to establish test-first contract with DEV team

## Inputs

**Required Context Files:**

- **Story markdown** (`{story_file}`): User story with acceptance criteria, functional requirements, and technical constraints
- **Framework configuration**: Test framework config (playwright.config.ts or cypress.config.ts) from framework workflow

**Workflow Variables:**

- `story_file`: Path to story markdown with acceptance criteria (required)
- `test_dir`: Directory for test files (default: `{project-root}/tests`)
- `test_framework`: Detected from framework workflow (playwright or cypress)
- `test_levels`: Which test levels to generate (default: "e2e,api,component")
- `primary_level`: Primary test level for acceptance criteria (default: "e2e")
- `start_failing`: Tests must fail initially - red phase (default: true)
- `use_given_when_then`: BDD-style test structure (default: true)
- `network_first`: Route interception before navigation to prevent race conditions (default: true)
- `one_assertion_per_test`: Atomic test design (default: true)
- `generate_factories`: Create data factory stubs using faker (default: true)
- `generate_fixtures`: Create fixture architecture with auto-cleanup (default: true)
- `auto_cleanup`: Fixtures clean up their data automatically (default: true)
- `include_data_testids`: List required data-testid attributes for DEV (default: true)
- `include_mock_requirements`: Document mock/stub needs (default: true)
- `auto_load_knowledge`: Load fixture-architecture, data-factories, component-tdd fragments (default: true)
- `share_with_dev`: Provide implementation checklist to DEV agent (default: true)
- `output_checklist`: Path for implementation checklist (default: `{output_folder}/atdd-checklist-{story_id}.md`)

**Optional Context:**

- **Test design document**: For risk/priority context alignment (P0-P3 scenarios)
- **Existing fixtures/helpers**: For consistency with established patterns
- **Architecture documents**: For understanding system boundaries and integration points

## Outputs

**Primary Deliverable:**

- **ATDD Checklist** (`atdd-checklist-{story_id}.md`): Implementation guide containing:
  - Story summary and acceptance criteria breakdown
  - Test files created with paths and line counts
  - Data factories created with patterns
  - Fixtures created with auto-cleanup logic
  - Mock requirements for external services
  - Required data-testid attributes list
  - Implementation checklist mapping tests to code tasks
  - Red-green-refactor workflow guidance
  - Execution commands for running tests

**Test Files Created:**

- **E2E tests** (`tests/e2e/{feature-name}.spec.ts`): Full user journey tests for critical paths
- **API tests** (`tests/api/{feature-name}.api.spec.ts`): Business logic and service contract tests
- **Component tests** (`tests/component/{ComponentName}.test.tsx`): UI component behavior tests

**Supporting Infrastructure:**

- **Data factories** (`tests/support/factories/{entity}.factory.ts`): Factory functions using @faker-js/faker for generating test data with overrides support
- **Test fixtures** (`tests/support/fixtures/{feature}.fixture.ts`): Playwright fixtures with setup/teardown and auto-cleanup
- **Mock/stub documentation**: Requirements for external service mocking (payment gateways, email services, etc.)
- **data-testid requirements**: List of required test IDs for stable selectors in UI implementation

**Validation Safeguards:**

- All tests must fail initially (red phase verified by local test run)
- Failure messages are clear and actionable
- Tests use Given-When-Then format for readability
- Network-first pattern applied (route interception before navigation)
- One assertion per test (atomic test design)
- No hard waits or sleeps (explicit waits only)

## Key Features

### Red-Green-Refactor Cycle

**RED Phase** (TEA Agent responsibility):

- Write failing tests first, defining expected behavior
- Tests fail for the right reason (missing implementation, not test bugs)
- All supporting infrastructure (factories, fixtures, mocks) created

**GREEN Phase** (DEV Agent responsibility):

- Implement minimal code to pass one test at a time
- Use the implementation checklist as a guide
- Run tests frequently to verify progress

**REFACTOR Phase** (DEV Agent responsibility):

- Improve code quality with confidence (tests provide a safety net)
- Remove duplication, optimize performance
- Ensure tests still pass after changes

### Test Level Selection Framework

**E2E (End-to-End)**:

- Critical user journeys (login, checkout, core workflows)
- Multi-system integration
- User-facing acceptance criteria
- Characteristics: High confidence, slow execution, brittle

**API (Integration)**:

- Business logic validation
- Service contracts and data transformations
- Backend integration without UI
- Characteristics: Fast feedback, good balance, stable

**Component**:

- UI component behavior (buttons, forms, modals)
- Interaction testing (click, hover, keyboard navigation)
- Visual regression and state management
- Characteristics: Fast, isolated, granular

**Unit**:

- Pure business logic and algorithms
- Edge cases and error handling
- Minimal dependencies
- Characteristics: Fastest, most granular

**Selection Strategy**: Avoid duplicate coverage. Use E2E for the critical happy path, API for business logic variations, component for UI edge cases, unit for pure logic.

### Recording Mode (NEW - Phase 2.5)

**atdd** can record complex UI interactions instead of generating them with AI.

**Activation**: Automatic for complex UI when `config.tea_use_mcp_enhancements` is true and MCP is available

- Fallback: AI generation (silent, automatic)

**When to Use Recording Mode:**

- ✅ Complex UI interactions (drag-drop, multi-step forms, wizards)
- ✅ Visual workflows (modals, dialogs, animations)
- ✅ Unclear requirements (exploratory, discovering expected behavior)
- ✅ Multi-page flows (checkout, registration, onboarding)
- ❌ NOT for simple CRUD (AI generation is faster)
- ❌ NOT for API-only tests (no UI to record)

**When to Use AI Generation (Default):**

- ✅ Clear acceptance criteria available
- ✅ Standard patterns (login, CRUD, navigation)
- ✅ Need many tests quickly
- ✅ API/backend tests (no UI interaction)

**How Test Generation Works (Default - AI-Based):**

TEA generates tests using AI by:

1. **Analyzing acceptance criteria** from story markdown
2. **Inferring selectors** from requirement descriptions (e.g., "login button" → `[data-testid="login-button"]`)
3. **Synthesizing test code** based on knowledge base patterns
4. **Estimating interactions** using common UI patterns (click, type, verify)
5. **Applying best practices** from knowledge fragments (Given-When-Then, network-first, fixtures)

**This works well for:**

- ✅ Clear requirements with known UI patterns
- ✅ Standard workflows (login, CRUD, navigation)
- ✅ When selectors follow conventions (data-testid attributes)

**What MCP Adds (Interactive Verification & Enhancement):**

When Playwright MCP is available, TEA **additionally**:

1. **Verifies generated tests** by:
   - **Launching a real browser** with `generator_setup_page`
   - **Executing generated test steps** with `browser_*` tools (`navigate`, `click`, `type`)
   - **Seeing the actual UI** with `browser_snapshot` (visual verification)
   - **Discovering real selectors** with `browser_generate_locator` (auto-generated from the live DOM)

2. **Enhances AI-generated tests** by:
   - **Validating selectors exist** in the actual DOM (not just guesses)
   - **Verifying behavior** with `browser_verify_text`, `browser_verify_visible`, `browser_verify_url`
   - **Capturing the actual interaction log** with `generator_read_log`
   - **Refining test code** with real observed behavior

3. **Catches issues early** by:
   - **Finding missing selectors** before DEV implements (requirements clarification)
   - **Discovering edge cases** not in requirements (loading states, error messages)
   - **Validating assumptions** about UI structure and behavior

**Key Benefits of MCP Enhancement:**

- ✅ **AI generates tests** (fast, based on requirements) **+** **MCP verifies tests** (accurate, based on reality)
- ✅ **Accurate selectors**: Validated against the actual DOM, not just inferred
- ✅ **Visual validation**: TEA sees what the user sees (modals, animations, state changes)
- ✅ **Complex flows**: Records multi-step interactions precisely
- ✅ **Edge case discovery**: Observes actual app behavior beyond requirements
- ✅ **Selector resilience**: MCP generates robust locators from the live page (role-based, text-based, fallback chains)

**Example Enhancement Flow:**

```
1. AI generates test based on acceptance criteria
   → await page.click('[data-testid="submit-button"]')

2. MCP verifies selector exists (browser_generate_locator)
   → Found: button[type="submit"].btn-primary
   → No data-testid attribute exists!

3. TEA refines test with actual selector
   → await page.locator('button[type="submit"]').click()
   → Documents requirement: "Add data-testid='submit-button' to button"
```

**Recording Workflow (MCP-Based):**

```
1. Set generation_mode: "recording"
2. Use generator_setup_page to init recording session
3. For each acceptance criterion:
   a. Execute scenario with browser_* tools:
      - browser_navigate, browser_click, browser_type
      - browser_select, browser_check
   b. Add verifications with browser_verify_* tools:
      - browser_verify_text, browser_verify_visible
      - browser_verify_url
   c. Capture log with generator_read_log
   d. Generate test with generator_write_test
4. Enhance generated tests with knowledge base patterns:
   - Add Given-When-Then comments
   - Replace selectors with data-testid
   - Add network-first interception
   - Add fixtures/factories
5. Verify tests fail (RED phase)
```

**Example: Recording a Checkout Flow**

```markdown
Recording session for: "User completes checkout with credit card"

Actions recorded:

1. browser_navigate('/cart')
2. browser_click('[data-testid="checkout-button"]')
3. browser_type('[data-testid="card-number"]', '4242424242424242')
4. browser_type('[data-testid="expiry"]', '12/25')
5. browser_type('[data-testid="cvv"]', '123')
6. browser_click('[data-testid="place-order"]')
7. browser_verify_text('Order confirmed')
8. browser_verify_url('/confirmation')

Generated test (enhanced):

- Given-When-Then structure added
- data-testid selectors used
- Network-first payment API mock added
- Card factory created for test data
- Test verified to FAIL (checkout not implemented)
```

**Graceful Degradation:**

- Recording mode is OPTIONAL (default: AI generation)
- Requires Playwright MCP (falls back to AI if unavailable)
- Generated tests enhanced with knowledge base patterns
- Same quality output regardless of generation method

### Given-When-Then Structure

All tests follow BDD format for clarity:

```typescript
test('should display error for invalid credentials', async ({ page }) => {
  // GIVEN: User is on login page
  await page.goto('/login');

  // WHEN: User submits invalid credentials
  await page.fill('[data-testid="email-input"]', 'invalid@example.com');
  await page.fill('[data-testid="password-input"]', 'wrongpassword');
  await page.click('[data-testid="login-button"]');

  // THEN: Error message is displayed
  await expect(page.locator('[data-testid="error-message"]')).toHaveText('Invalid email or password');
});
```

### Network-First Testing Pattern

**Critical pattern to prevent race conditions**:

```typescript
// ✅ CORRECT: Intercept BEFORE navigation
await page.route('**/api/data', handler);
await page.goto('/page');

// ❌ WRONG: Navigate then intercept (race condition)
await page.goto('/page');
await page.route('**/api/data', handler); // Too late!
```

Always set up route interception before navigating to pages that make network requests.
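
As a concrete example, a handler might fulfill the intercepted request with deterministic fixture data (the endpoint and response shape below are invented for illustration):

```typescript
// Register the interception first, then navigate.
await page.route('**/api/data', async (route) => {
  await route.fulfill({
    status: 200,
    contentType: 'application/json',
    body: JSON.stringify({ items: [] }), // invented payload shape
  });
});
await page.goto('/page');
```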

### Data Factory Architecture

Use faker for all test data generation:

```typescript
// tests/support/factories/user.factory.ts
import { faker } from '@faker-js/faker';

export const createUser = (overrides = {}) => ({
  id: faker.number.int(),
  email: faker.internet.email(),
  name: faker.person.fullName(),
  createdAt: faker.date.recent().toISOString(),
  ...overrides,
});

export const createUsers = (count: number) => Array.from({ length: count }, () => createUser());
```

**Factory principles:**

- Use faker for random data (no hardcoded values to prevent collisions)
- Support overrides for specific test scenarios
- Generate complete valid objects matching API contracts
- Include helper functions for bulk creation
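
Usage in a test then pins only the fields a scenario cares about (the import path assumes a test under `tests/e2e/`):

```typescript
import { createUser, createUsers } from '../support/factories/user.factory';

const admin = createUser({ email: 'admin@example.com' }); // pinned field, rest randomized
const reviewers = createUsers(5); // five fully random users
```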

### Fixture Architecture with Auto-Cleanup

Playwright fixtures with automatic data cleanup:

```typescript
// tests/support/fixtures/auth.fixture.ts
import { test as base } from '@playwright/test';
import { createUser } from '../factories/user.factory';
import { deleteUser } from '../helpers/api'; // assumed API cleanup helper

export const test = base.extend({
  authenticatedUser: async ({ page }, use) => {
    // Setup: Create and authenticate user
    const user = await createUser();
    await page.goto('/login');
    await page.fill('[data-testid="email"]', user.email);
    await page.fill('[data-testid="password"]', 'password123');
    await page.click('[data-testid="login-button"]');
    await page.waitForURL('/dashboard');

    // Provide to test
    await use(user);

    // Cleanup: Delete user (automatic)
    await deleteUser(user.id);
  },
});
```

**Fixture principles:**

- Auto-cleanup (always delete created data in teardown)
- Composable (fixtures can use other fixtures via mergeTests)
- Isolated (each test gets fresh data)
- Type-safe with TypeScript
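
A sketch of composition with Playwright's `mergeTests` (the second fixture file is hypothetical; the auth fixture matches the example above):

```typescript
import { mergeTests } from '@playwright/test';
import { test as authTest } from './auth.fixture';
import { test as seedDataTest } from './seed-data.fixture'; // hypothetical second fixture

// Tests importing this `test` receive fixtures from both files.
export const test = mergeTests(authTest, seedDataTest);
```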

### One Assertion Per Test (Atomic Design)

Each test should verify exactly one behavior:

```typescript
// ✅ CORRECT: One assertion
test('should display user name', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
});

// ❌ WRONG: Multiple assertions (not atomic)
test('should display user info', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
  await expect(page.locator('[data-testid="user-email"]')).toHaveText('john@example.com');
});
```

**Why?** If the second assertion fails, you don't know whether the first is still valid. Split into separate tests for clear failure diagnosis.

### Implementation Checklist for DEV

Maps each failing test to concrete implementation tasks:

```markdown
## Implementation Checklist

### Test: User Login with Valid Credentials

- [ ] Create `/login` route
- [ ] Implement login form component
- [ ] Add email/password validation
- [ ] Integrate authentication API
- [ ] Add `data-testid` attributes: `email-input`, `password-input`, `login-button`
- [ ] Implement error handling
- [ ] Run test: `npm run test:e2e -- login.spec.ts`
- [ ] ✅ Test passes (green phase)
```

Provides a clear path from red to green for each test.

## Integration with Other Workflows

**Before this workflow:**

- **framework** workflow: Must run first to establish test framework architecture (Playwright or Cypress config, directory structure, base fixtures)
- **test-design** workflow: Optional but recommended for P0-P3 priority alignment and risk assessment context

**After this workflow:**

- **DEV agent** implements features guided by failing tests and implementation checklist
- **test-review** workflow: Review generated test quality before sharing with DEV team
- **automate** workflow: After story completion, expand regression suite with additional edge case coverage

**Coordinates with:**

- **Story approval process**: ATDD runs after story is approved but before DEV begins implementation
- **Quality gates**: Failing tests serve as acceptance criteria for story completion (all tests must pass)

## Important Notes

### ATDD is Test-First, Not Test-After

**Critical timing**: Tests must be written BEFORE any implementation code. This ensures:

- Tests define the contract (what needs to be built)
- Implementation is guided by tests (no over-engineering)
- Tests verify behavior, not implementation details
- Confidence in refactoring (tests catch regressions)

### All Tests Must Fail Initially

**Red phase verification is mandatory**:

- Run tests locally after creation to confirm RED phase
- Failure should be due to missing implementation, not test bugs
- Failure messages should be clear and actionable
- Document expected failure messages in ATDD checklist

If a test passes before implementation, it's not testing the right thing.

### Use data-testid for Stable Selectors

**Why data-testid?**

- CSS classes change frequently (styling refactors)
- IDs may not be unique or stable
- Text content changes with localization
- data-testid is an explicit contract between tests and UI

```typescript
// ✅ CORRECT: Stable selector
await page.click('[data-testid="login-button"]');

// ❌ FRAGILE: Class-based selector
await page.click('.btn.btn-primary.login-btn');
```

The ATDD checklist includes a complete list of required data-testid attributes for the DEV team.

### No Hard Waits or Sleeps

**Use explicit waits only**:

```typescript
// ✅ CORRECT: Explicit wait for condition
await page.waitForSelector('[data-testid="user-name"]');
await expect(page.locator('[data-testid="user-name"]')).toBeVisible();

// ❌ WRONG: Hard wait (flaky, slow)
await page.waitForTimeout(2000);
```

Playwright's auto-waiting is preferred (expect() automatically waits up to the timeout).

### Component Tests for Complex UI Only

**When to use component tests:**

- Complex UI interactions (drag-drop, keyboard navigation)
- Form validation logic
- State management within component
- Visual edge cases

**When NOT to use:**

- Simple rendering (snapshot tests are sufficient)
- Integration with backend (use E2E or API tests)
- Full user journeys (use E2E tests)

Component tests are valuable but should complement, not replace, E2E and API tests.
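
A minimal component test sketch with Playwright Component Testing (`@playwright/experimental-ct-react`, referenced under Knowledge Base References below; the `LoginForm` component and its import path are hypothetical):

```tsx
import { test, expect } from '@playwright/experimental-ct-react';
import { LoginForm } from '../../src/components/LoginForm'; // hypothetical component

test('shows a validation error when email is empty', async ({ mount }) => {
  const component = await mount(<LoginForm />);
  await component.getByTestId('login-button').click();
  await expect(component.getByTestId('email-error')).toBeVisible();
});
```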

### Auto-Cleanup is Non-Negotiable

**Every test must clean up its data**:

- Use fixtures with automatic teardown
- Never leave test data in database/storage
- Each test should be isolated (no shared state)

**Cleanup patterns:**

- Fixtures: Cleanup in teardown function
- Factories: Provide deletion helpers
- Tests: Use `test.afterEach()` for manual cleanup if needed

Without auto-cleanup, tests become flaky and depend on execution order.
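
Where a fixture is overkill, manual cleanup can look like this sketch (the `deleteUser` helper and the ID bookkeeping are assumptions, mirroring the fixture example above):

```typescript
import { test } from '@playwright/test';
import { deleteUser } from '../support/helpers/api'; // assumed cleanup helper

const createdUserIds: number[] = [];

test.afterEach(async () => {
  // Delete everything this test created, even if it failed midway.
  for (const id of createdUserIds.splice(0)) {
    await deleteUser(id);
  }
});
```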

## Knowledge Base References

This workflow automatically consults:

- **fixture-architecture.md** - Test fixture patterns with setup/teardown and auto-cleanup using Playwright's test.extend()
- **data-factories.md** - Factory patterns using @faker-js/faker for random test data generation with overrides support
- **component-tdd.md** - Component test strategies using Playwright Component Testing (@playwright/experimental-ct-react)
- **network-first.md** - Route interception patterns (intercept before navigation to prevent race conditions)
- **test-quality.md** - Test design principles (Given-When-Then, one assertion per test, determinism, isolation)
- **test-levels-framework.md** - Test level selection framework (E2E vs API vs Component vs Unit)

See `tea-index.csv` for complete knowledge fragment mapping and additional references.

## Example Output

After running this workflow, the ATDD checklist will contain:

````markdown
# ATDD Checklist - Epic 3, Story 5: User Authentication

## Story Summary

As a user, I want to log in with email and password so that I can access my personalized dashboard.

## Acceptance Criteria

1. User can log in with valid credentials
2. User sees error message with invalid credentials
3. User is redirected to dashboard after successful login

## Failing Tests Created (RED Phase)

### E2E Tests (3 tests)

- `tests/e2e/user-authentication.spec.ts` (87 lines)
  - ✅ should log in with valid credentials (RED - missing /login route)
  - ✅ should display error for invalid credentials (RED - error message not implemented)
  - ✅ should redirect to dashboard after login (RED - redirect logic missing)

### API Tests (2 tests)

- `tests/api/auth.api.spec.ts` (54 lines)
  - ✅ POST /api/auth/login - should return token for valid credentials (RED - endpoint not implemented)
  - ✅ POST /api/auth/login - should return 401 for invalid credentials (RED - validation missing)

## Data Factories Created

- `tests/support/factories/user.factory.ts` - createUser(), createUsers(count)

## Fixtures Created

- `tests/support/fixtures/auth.fixture.ts` - authenticatedUser fixture with auto-cleanup

## Required data-testid Attributes

### Login Page

- `email-input` - Email input field
- `password-input` - Password input field
- `login-button` - Submit button
- `error-message` - Error message container

### Dashboard Page

- `user-name` - User name display
- `logout-button` - Logout button

## Implementation Checklist

### Test: User Login with Valid Credentials

- [ ] Create `/login` route
- [ ] Implement login form component
- [ ] Add email/password validation
- [ ] Integrate authentication API
- [ ] Add data-testid attributes: `email-input`, `password-input`, `login-button`
- [ ] Run test: `npm run test:e2e -- user-authentication.spec.ts`
- [ ] ✅ Test passes (green phase)

### Test: Display Error for Invalid Credentials

- [ ] Add error state management
- [ ] Display error message UI
- [ ] Add `data-testid="error-message"`
- [ ] Run test: `npm run test:e2e -- user-authentication.spec.ts`
- [ ] ✅ Test passes (green phase)

### Test: Redirect to Dashboard After Login

- [ ] Implement redirect logic after successful auth
- [ ] Verify authentication token stored
- [ ] Add dashboard route protection
- [ ] Run test: `npm run test:e2e -- user-authentication.spec.ts`
- [ ] ✅ Test passes (green phase)

## Running Tests

```bash
# Run all failing tests
npm run test:e2e

# Run specific test file
npm run test:e2e -- user-authentication.spec.ts

# Run tests in headed mode (see browser)
npm run test:e2e -- --headed

# Debug specific test
npm run test:e2e -- user-authentication.spec.ts --debug
```
````

## Red-Green-Refactor Workflow

**RED Phase** (Complete):

- ✅ All tests written and failing
- ✅ Fixtures and factories created
- ✅ data-testid requirements documented

**GREEN Phase** (DEV Team - Next Steps):

1. Pick one failing test from the checklist
2. Implement minimal code to make it pass
3. Run the test to verify green
4. Check off the task in the checklist
5. Move to the next test
6. Repeat until all tests pass

**REFACTOR Phase** (DEV Team - After All Tests Pass):

1. All tests passing (green)
2. Improve code quality (extract functions, optimize)
3. Remove duplication
4. Ensure tests still pass after each refactor

## Next Steps

1. Review this checklist with the team
2. Run failing tests to confirm the RED phase: `npm run test:e2e`
3. Begin implementation using the checklist as a guide
4. Share progress in daily standup
5. When all tests pass, run `bmad sm story-done` to move the story to DONE

This comprehensive checklist guides the DEV team from red to green with clear tasks and validation steps.

@@ -1,869 +0,0 @@

# Automate Workflow

Expands test automation coverage by generating comprehensive test suites at appropriate levels (E2E, API, Component, Unit) with supporting infrastructure. This workflow operates in **dual mode** - works seamlessly WITH or WITHOUT BMad artifacts.

**Core Principle**: Generate prioritized, deterministic tests that avoid duplicate coverage and follow testing best practices.

## Usage

```bash
bmad tea *automate
```

The TEA agent runs this workflow when:

- **BMad-Integrated**: After story implementation to expand coverage beyond ATDD tests
- **Standalone**: Point at any codebase/feature and generate tests independently ("work out of thin air")
- **Auto-discover**: No targets specified - scans the codebase for features needing tests

## Inputs

**Execution Modes:**

1. **BMad-Integrated Mode** (story available) - OPTIONAL
2. **Standalone Mode** (no BMad artifacts) - Direct code analysis
3. **Auto-discover Mode** (no targets) - Scan for coverage gaps

**Required Context Files:**

- **Framework configuration**: Test framework config (playwright.config.ts or cypress.config.ts) - REQUIRED

**Optional Context (BMad-Integrated Mode):**

- **Story markdown** (`{story_file}`): User story with acceptance criteria (enhances coverage targeting but NOT required)
- **Tech spec**: Technical specification (provides architectural context)
- **Test design**: Risk/priority context (P0-P3 alignment)
- **PRD**: Product requirements (business context)

**Optional Context (Standalone Mode):**

- **Source code**: Feature implementation to analyze
- **Existing tests**: Current test suite for gap analysis

**Workflow Variables:**

- `standalone_mode`: Can work without BMad artifacts (default: true)
- `story_file`: Path to story markdown (optional)
- `target_feature`: Feature name or directory to analyze (e.g., "user-authentication" or "src/auth/")
- `target_files`: Specific files to analyze (comma-separated paths)
- `test_dir`: Directory for test files (default: `{project-root}/tests`)
- `source_dir`: Source code directory (default: `{project-root}/src`)
- `auto_discover_features`: Automatically find features needing tests (default: true)
- `analyze_coverage`: Check existing test coverage gaps (default: true)
- `coverage_target`: Coverage strategy - "critical-paths", "comprehensive", "selective" (default: "critical-paths")
- `test_levels`: Which levels to generate - "e2e,api,component,unit" (default: all)
- `avoid_duplicate_coverage`: Don't test same behavior at multiple levels (default: true)
- `include_p0`: Include P0 critical path tests (default: true)
- `include_p1`: Include P1 high priority tests (default: true)
- `include_p2`: Include P2 medium priority tests (default: true)
- `include_p3`: Include P3 low priority tests (default: false)
- `use_given_when_then`: BDD-style test structure (default: true)
- `one_assertion_per_test`: Atomic test design (default: true)
- `network_first`: Route interception before navigation (default: true)
- `deterministic_waits`: No hard waits or sleeps (default: true)
- `generate_fixtures`: Create/enhance fixture architecture (default: true)
- `generate_factories`: Create/enhance data factories (default: true)
- `update_helpers`: Add utility functions (default: true)
- `use_test_design`: Load test-design.md if exists (default: true)
- `use_tech_spec`: Load tech-spec.md if exists (default: true)
- `use_prd`: Load PRD.md if exists (default: true)
- `update_readme`: Update test README with new specs (default: true)
- `update_package_scripts`: Add test execution scripts (default: true)
- `output_summary`: Path for automation summary (default: `{output_folder}/automation-summary.md`)
- `max_test_duration`: Maximum seconds per test (default: 90)
- `max_file_lines`: Maximum lines per test file (default: 300)
- `require_self_cleaning`: All tests must clean up data (default: true)
- `auto_load_knowledge`: Load relevant knowledge fragments (default: true)
- `run_tests_after_generation`: Verify tests pass/fail as expected (default: true)
- `auto_validate`: Run generated tests after creation (default: true) **NEW**
- `auto_heal_failures`: Enable automatic healing (default: false, opt-in) **NEW**
- `max_healing_iterations`: Maximum healing attempts per test (default: 3) **NEW**
- `fail_on_unhealable`: Fail workflow if tests can't be healed (default: false) **NEW**
- `mark_unhealable_as_fixme`: Mark unfixable tests with test.fixme() (default: true) **NEW**
- `use_mcp_healing`: Use Playwright MCP if available (default: true) **NEW**
- `healing_knowledge_fragments`: Healing patterns to load (default: "test-healing-patterns,selector-resilience,timing-debugging") **NEW**

## Outputs

**Primary Deliverable:**

- **Automation Summary** (`automation-summary.md`): Comprehensive report containing:
  - Execution mode (BMad-Integrated, Standalone, Auto-discover)
  - Feature analysis (source files analyzed, coverage gaps)
  - Tests created (E2E, API, Component, Unit) with counts and paths
  - Infrastructure created (fixtures, factories, helpers)
  - Test execution instructions
  - Coverage analysis (P0-P3 breakdown, coverage percentage)
  - Definition of Done checklist
  - Next steps and recommendations

**Test Files Created:**

- **E2E tests** (`tests/e2e/{feature-name}.spec.ts`): Critical user journeys (P0-P1)
- **API tests** (`tests/api/{feature-name}.api.spec.ts`): Business logic and contracts (P1-P2)
- **Component tests** (`tests/component/{ComponentName}.test.tsx`): UI behavior (P1-P2)
- **Unit tests** (`tests/unit/{module-name}.test.ts`): Pure logic (P2-P3)

**Supporting Infrastructure:**

- **Fixtures** (`tests/support/fixtures/{feature}.fixture.ts`): Setup/teardown with auto-cleanup
- **Data factories** (`tests/support/factories/{entity}.factory.ts`): Random test data using faker
- **Helpers** (`tests/support/helpers/{utility}.ts`): Utility functions (waitFor, retry, etc.)

**Documentation Updates:**

- **Test README** (`tests/README.md`): Test suite overview, execution instructions, priority tagging, patterns
- **package.json scripts**: Test execution commands (test:e2e, test:e2e:p0, test:api, etc.)

**Validation Safeguards:**

- All tests follow Given-When-Then format
- All tests have priority tags ([P0], [P1], [P2], [P3])
- All tests use data-testid selectors (stable, not CSS classes)
- All tests are self-cleaning (fixtures with auto-cleanup)
- No hard waits or flaky patterns (deterministic)
- Test files under 300 lines (lean and focused)
- Tests run under 1.5 minutes each (fast feedback)
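
Priority tags live in the test title, which is what lets npm scripts filter by priority via Playwright's `--grep` (the tag convention follows the list above; the exact script wiring is an assumption):

```typescript
import { test, expect } from '@playwright/test';

// Selected by e.g. `playwright test --grep "\[P0\]"` (wired as a test:e2e:p0 script).
test('[P0] user can log in and reach the dashboard', async ({ page }) => {
  await page.goto('/login');
  // ... critical happy path steps
  await expect(page).toHaveURL('/dashboard');
});
```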

## Key Features

### Dual-Mode Operation

**BMad-Integrated Mode** (story available):

- Uses story acceptance criteria for coverage targeting
- Aligns with test-design risk/priority assessment
- Expands ATDD tests with edge cases and negative paths
- Optional - story enhances coverage but not required

**Standalone Mode** (no story):

- Analyzes source code independently
- Identifies coverage gaps automatically
- Generates tests based on code analysis
- Works with any project (BMad or non-BMad)

**Auto-discover Mode** (no targets):

- Scans codebase for features needing tests
- Prioritizes features with no coverage
- Generates comprehensive test plan

### Avoid Duplicate Coverage

**Critical principle**: Don't test the same behavior at multiple levels.

**Good coverage strategy:**

- **E2E**: User can login → Dashboard loads (critical happy path only)
- **API**: POST /auth/login returns correct status codes (variations: 200, 401, 400)
- **Component**: LoginForm validates input (UI edge cases: empty fields, invalid format)
- **Unit**: validateEmail() logic (pure function edge cases)

**Bad coverage (duplicate):**

- E2E: User can login → Dashboard loads
- E2E: User can login with different emails → Dashboard loads (unnecessary duplication)
- API: POST /auth/login returns 200 (already covered in E2E)

Use E2E sparingly for critical paths. Use API/Component/Unit for variations and edge cases.
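
For instance, the status-code variations belong at the API level, using Playwright's `request` fixture (the endpoint and payload are illustrative):

```typescript
import { test, expect } from '@playwright/test';

test('[P1] POST /auth/login rejects bad credentials with 401', async ({ request }) => {
  const response = await request.post('/api/auth/login', {
    data: { email: 'nobody@example.com', password: 'wrong' },
  });
  expect(response.status()).toBe(401);
});
```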

### Healing Capabilities (NEW - Phase 2.5)

**automate** automatically validates and heals test failures after generation.

**Configuration**: Controlled by `config.tea_use_mcp_enhancements` (default: true)

- If true + MCP available → MCP-assisted healing
- If true + MCP unavailable → Pattern-based healing
- If false → No healing, document failures for manual review

**Constants**: Max 3 healing attempts, unfixable tests marked as `test.fixme()`

**How Healing Works (Default - Pattern-Based):**

TEA heals tests using pattern-based analysis by:

1. **Parsing error messages** from test output logs
2. **Matching patterns** against known failure signatures
3. **Applying fixes** from healing knowledge fragments:
   - `test-healing-patterns.md` - Common failure patterns (selectors, timing, data, network)
   - `selector-resilience.md` - Selector refactoring (CSS → data-testid, nth() → filter())
   - `timing-debugging.md` - Race condition fixes (hard waits → event-based waits)
4. **Re-running tests** to verify the fix (max 3 iterations)
5. **Marking unfixable tests** as `test.fixme()` with detailed comments

**This works well for:**

- ✅ Common failure patterns (stale selectors, timing issues, dynamic data)
- ✅ Text-based errors with clear signatures
- ✅ Issues documented in knowledge base
- ✅ Automated CI environments without browser access

**What MCP Adds (Interactive Debugging Enhancement):**

When Playwright MCP is available, TEA **additionally**:

1. **Debugs failures interactively** before applying pattern-based fixes:
   - **Pause test execution** with `playwright_test_debug_test` (step through, inspect state)
   - **See visual failure context** with `browser_snapshot` (screenshot of failure state)
   - **Inspect live DOM** with browser tools (find why selector doesn't match)
   - **Analyze console logs** with `browser_console_messages` (JS errors, warnings, debug output)
   - **Inspect network activity** with `browser_network_requests` (failed API calls, CORS errors, timeouts)

2. **Enhances pattern-based fixes** with real-world data:
   - **Pattern match identifies issue** (e.g., "stale selector")
   - **MCP discovers actual selector** with `browser_generate_locator` from live page
   - **TEA applies refined fix** using real DOM structure (not just pattern guess)
   - **Verification happens in browser** (see if fix works visually)

3. **Catches root causes** pattern matching might miss:
   - **Network failures**: MCP shows 500 error on API call (not just timeout)
   - **JS errors**: MCP shows `TypeError: undefined` in console (not just "element not found")
   - **Timing issues**: MCP shows loading spinner still visible (not just "selector timeout")
   - **State problems**: MCP shows modal blocking button (not just "not clickable")

**Key Benefits of MCP Enhancement:**

- ✅ **Pattern-based fixes** (fast, automated) **+** **MCP verification** (accurate, context-aware)
- ✅ **Visual debugging**: See exactly what the user sees when a test fails
- ✅ **DOM inspection**: Discover why selectors don't match (element missing, wrong attributes, dynamic IDs)
- ✅ **Network visibility**: Identify API failures, slow requests, CORS issues
- ✅ **Console analysis**: Catch JS errors that break page functionality
- ✅ **Robust selectors**: Generate locators from actual DOM (role, text, testid hierarchy)
- ✅ **Faster iteration**: Debug and fix in the same browser session (no restart needed)
- ✅ **Higher success rate**: MCP helps diagnose failures pattern matching can't solve

**Example Enhancement Flow:**

```
1. Pattern-based healing identifies issue
   → Error: "Locator '.submit-btn' resolved to 0 elements"
   → Pattern match: Stale selector (CSS class)
   → Suggested fix: Replace with data-testid

2. MCP enhances diagnosis (if available)
   → browser_snapshot shows button exists but has class ".submit-button" (not ".submit-btn")
   → browser_generate_locator finds: button[type="submit"].submit-button
   → browser_console_messages shows no errors

3. TEA applies refined fix
   → await page.locator('button[type="submit"]').click()
   → (More accurate than pattern-based guess)
```

**Healing Modes:**

1. **MCP-Enhanced Healing** (when Playwright MCP available):
   - Pattern-based analysis **+** Interactive debugging
   - Visual context with `browser_snapshot`
   - Console log analysis with `browser_console_messages`
   - Network inspection with `browser_network_requests`
   - Live DOM inspection with `browser_generate_locator`
   - Step-by-step debugging with `playwright_test_debug_test`

2. **Pattern-Based Healing** (always available):
   - Error message parsing and pattern matching
   - Automated fixes from healing knowledge fragments
   - Text-based analysis (no visual/DOM inspection)
   - Works in CI without browser access

**Healing Workflow:**

```
1. Generate tests → Run tests
2. IF pass → Success ✅
3. IF fail AND auto_heal_failures=false → Report failures ⚠️
4. IF fail AND auto_heal_failures=true → Enter healing loop:
   a. Identify failure pattern (selector, timing, data, network)
   b. Apply automated fix from knowledge base
   c. Re-run test (max 3 iterations)
   d. IF healed → Success ✅
   e. IF unhealable → Mark test.fixme() with detailed comment
```

**Example Healing Outcomes:**

```typescript
// ❌ Original (failing): CSS class selector
await page.locator('.btn-primary').click();

// ✅ Healed: data-testid selector
await page.getByTestId('submit-button').click();

// ❌ Original (failing): Hard wait
await page.waitForTimeout(3000);

// ✅ Healed: Network-first pattern
await page.waitForResponse('**/api/data');

// ❌ Original (failing): Hardcoded ID
await expect(page.getByText('User 123')).toBeVisible();

// ✅ Healed: Regex pattern
await expect(page.getByText(/User \d+/)).toBeVisible();
```

**Unfixable Tests (Marked as test.fixme()):**

```typescript
test.fixme('[P1] should handle complex interaction', async ({ page }) => {
  // FIXME: Test healing failed after 3 attempts
  // Failure: "Locator 'button[data-action="submit"]' resolved to 0 elements"
  // Attempted fixes:
  // 1. Replaced with page.getByTestId('submit-button') - still failing
  // 2. Replaced with page.getByRole('button', { name: 'Submit' }) - still failing
  // 3. Added waitForLoadState('networkidle') - still failing
  // Manual investigation needed: Selector may require application code changes
  // TODO: Review with team, may need data-testid added to button component
  // Original test code...
});
```

**When to Enable Healing:**

- ✅ Enable for greenfield projects (catch generated test issues early)
- ✅ Enable for brownfield projects (auto-fix legacy selector patterns)
- ❌ Disable if the environment is not ready (application not deployed/seeded)
- ❌ Disable if you prefer manual review of all generated tests

**Healing Report Example:**

```markdown
## Test Healing Report

**Auto-Heal Enabled**: true
**Healing Mode**: Pattern-based
**Iterations Allowed**: 3

### Validation Results

- **Total tests**: 10
- **Passing**: 7
- **Failing**: 3

### Healing Outcomes

**Successfully Healed (2 tests):**

- `tests/e2e/login.spec.ts:15` - Stale selector (CSS class → data-testid)
- `tests/e2e/checkout.spec.ts:42` - Race condition (added network-first interception)

**Unable to Heal (1 test):**

- `tests/e2e/complex-flow.spec.ts:67` - Marked as test.fixme()
  - Requires application code changes (add data-testid to component)

### Healing Patterns Applied

- **Selector fixes**: 1
- **Timing fixes**: 1
```

**Graceful Degradation:**

- Healing is OPTIONAL (default: disabled)
- Works without Playwright MCP (pattern-based fallback)
- Unfixable tests marked clearly (not silently broken)
- Manual investigation path documented

### Recording Mode (NEW - Phase 2.5)

**automate** can record complex UI interactions instead of generating them with AI.

**Activation**: Automatic for complex UI scenarios when `tea_use_mcp_enhancements` is true in config and the Playwright MCP is available.

- Complex scenarios: drag-drop, wizards, multi-page flows
- Fallback: AI generation (silent, automatic)

**When to Use Recording Mode:**

- ✅ Complex UI interactions (drag-drop, multi-step forms, wizards)
- ✅ Visual workflows (modals, dialogs, animations, transitions)
- ✅ Unclear requirements (exploratory, discovering behavior)
- ✅ Multi-page flows (checkout, registration, onboarding)
- ❌ NOT for simple CRUD (AI generation faster)
- ❌ NOT for API-only tests (no UI to record)

**When to Use AI Generation (Default):**

- ✅ Clear requirements available
- ✅ Standard patterns (login, CRUD, navigation)
- ✅ Need many tests quickly
- ✅ API/backend tests (no UI interaction)

**Recording Workflow (Same as atdd):**

```
1. Set generation_mode: "recording"
2. Use generator_setup_page to init recording
3. For each test scenario:
   - Execute with browser_* tools (navigate, click, type, select)
   - Add verifications with browser_verify_* tools
   - Capture log and generate test file
4. Enhance with knowledge base patterns:
   - Given-When-Then structure
   - data-testid selectors
   - Network-first interception
   - Fixtures/factories
5. Validate (run tests if auto_validate enabled)
6. Heal if needed (if auto_heal_failures enabled)
```

**Combination: Recording + Healing:**

automate can use BOTH recording and healing together:

- Generate tests via recording (complex flows captured interactively)
- Run tests to validate (auto_validate)
- Heal failures automatically (auto_heal_failures)

This is particularly powerful for brownfield projects where:

- Requirements unclear → Use recording to capture existing behavior
- Application complex → Recording captures nuances AI might miss
- Tests may fail → Healing fixes common issues automatically

**Graceful Degradation:**

- Recording mode is OPTIONAL (default: AI generation)
- Requires Playwright MCP (falls back to AI if unavailable)
- Works with or without healing enabled
- Same quality output regardless of generation method

### Test Level Selection Framework

**E2E (End-to-End)**:

- Critical user journeys (login, checkout, core workflows)
- Multi-system integration
- User-facing acceptance criteria
- Characteristics: High confidence, slow execution, brittle

**API (Integration)**:

- Business logic validation
- Service contracts and data transformations
- Backend integration without UI
- Characteristics: Fast feedback, good balance, stable

**Component**:

- UI component behavior (buttons, forms, modals)
- Interaction testing (click, hover, keyboard navigation)
- State management within component
- Characteristics: Fast, isolated, granular

**Unit**:

- Pure business logic and algorithms
- Edge cases and error handling
- Minimal dependencies
- Characteristics: Fastest, most granular

### Priority Classification (P0-P3)

**P0 (Critical - Every commit)**:

- Critical user paths that must always work
- Security-critical functionality (auth, permissions)
- Data integrity scenarios
- Run in pre-commit hooks or PR checks

**P1 (High - PR to main)**:

- Important features with high user impact
- Integration points between systems
- Error handling for common failures
- Run before merging to main branch

**P2 (Medium - Nightly)**:

- Edge cases with moderate impact
- Less-critical feature variations
- Performance/load testing
- Run in nightly CI builds

**P3 (Low - On-demand)**:

- Nice-to-have validations
- Rarely-used features
- Exploratory testing scenarios
- Run manually or weekly

**Priority tagging enables selective execution:**

```bash
npm run test:e2e:p0 # Run only P0 tests (critical paths)
npm run test:e2e:p1 # Run P0 + P1 tests (pre-merge)
```
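
These scripts are referenced but not defined in this README; one plausible wiring in package.json, assuming the `[P0]`/`[P1]` tags appear in test titles and relying on Playwright's `--grep` regex filter:

```json
{
  "scripts": {
    "test:e2e": "playwright test",
    "test:e2e:p0": "playwright test --grep \"\\[P0\\]\"",
    "test:e2e:p1": "playwright test --grep \"\\[P0\\]|\\[P1\\]\""
  }
}
```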

### Given-When-Then Test Structure

All tests follow BDD format for clarity:

```typescript
test('[P0] should login with valid credentials and load dashboard', async ({ page }) => {
  // GIVEN: User is on login page
  await page.goto('/login');

  // WHEN: User submits valid credentials
  await page.fill('[data-testid="email-input"]', 'user@example.com');
  await page.fill('[data-testid="password-input"]', 'Password123!');
  await page.click('[data-testid="login-button"]');

  // THEN: User is redirected to dashboard
  await expect(page).toHaveURL('/dashboard');
  await expect(page.locator('[data-testid="user-name"]')).toBeVisible();
});
```

### One Assertion Per Test (Atomic Design)

Each test verifies exactly one behavior:

```typescript
// ✅ CORRECT: One assertion
test('[P0] should display user name', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
});

// ❌ WRONG: Multiple assertions (not atomic)
test('[P0] should display user info', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
  await expect(page.locator('[data-testid="user-email"]')).toHaveText('john@example.com');
});
```

**Why?** If the second assertion fails, you don't know whether the first still holds. Split into separate tests for clear failure diagnosis.
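
The atomic rewrite of the failing example above simply splits each field into its own test:

```typescript
// Each field gets its own test, so a failure names the exact behavior that broke.
test('[P0] should display user name', async ({ page }) => {
  await expect(page.locator('[data-testid="user-name"]')).toHaveText('John');
});

test('[P0] should display user email', async ({ page }) => {
  await expect(page.locator('[data-testid="user-email"]')).toHaveText('john@example.com');
});
```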

### Network-First Testing Pattern

**Critical pattern to prevent race conditions**:

```typescript
test('should load user dashboard after login', async ({ page }) => {
  // CRITICAL: Intercept routes BEFORE navigation
  await page.route('**/api/user', (route) =>
    route.fulfill({
      status: 200,
      body: JSON.stringify({ id: 1, name: 'Test User' }),
    }),
  );

  // NOW navigate
  await page.goto('/dashboard');

  await expect(page.locator('[data-testid="user-name"]')).toHaveText('Test User');
});
```

Always set up route interception before navigating to pages that make network requests.

### Fixture Architecture with Auto-Cleanup

Playwright fixtures with automatic data cleanup:

```typescript
// tests/support/fixtures/auth.fixture.ts
import { test as base } from '@playwright/test';
import { createUser, deleteUser } from '../factories/user.factory';

export const test = base.extend({
  authenticatedUser: async ({ page }, use) => {
    // Setup: Create and authenticate user
    const user = await createUser();
    await page.goto('/login');
    await page.fill('[data-testid="email"]', user.email);
    await page.fill('[data-testid="password"]', user.password);
    await page.click('[data-testid="login-button"]');
    await page.waitForURL('/dashboard');

    // Provide to test
    await use(user);

    // Cleanup: Delete user automatically
    await deleteUser(user.id);
  },
});
```

**Fixture principles:**

- Auto-cleanup (always delete created data in teardown)
- Composable (fixtures can use other fixtures; see the sketch below)
- Isolated (each test gets fresh data)
- Type-safe with TypeScript
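
Composition in practice: Playwright's `mergeTests` combines independently defined fixtures into one `test` object. A minimal sketch, assuming a second `api.fixture.ts` exists alongside the auth fixture above:

```typescript
// tests/support/fixtures/index.ts
import { mergeTests } from '@playwright/test';
import { test as authTest } from './auth.fixture';
import { test as apiTest } from './api.fixture'; // hypothetical sibling fixture file

// Tests that import from this index get every merged fixture (e.g., authenticatedUser).
export const test = mergeTests(authTest, apiTest);
export { expect } from '@playwright/test';
```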

### Data Factory Architecture

Use faker for all test data generation:

```typescript
// tests/support/factories/user.factory.ts
import { faker } from '@faker-js/faker';

export const createUser = (overrides = {}) => ({
  id: faker.number.int(),
  email: faker.internet.email(),
  password: faker.internet.password(),
  name: faker.person.fullName(),
  role: 'user',
  createdAt: faker.date.recent().toISOString(),
  ...overrides,
});

export const createUsers = (count: number) => Array.from({ length: count }, () => createUser());

// API helper for cleanup
export const deleteUser = async (userId: number) => {
  await fetch(`/api/users/${userId}`, { method: 'DELETE' });
};
```

**Factory principles:**

- Use faker for random data (no hardcoded values to prevent collisions)
- Support overrides for specific test scenarios (see the usage sketch below)
- Generate complete valid objects matching API contracts
- Include helper functions for bulk creation and cleanup
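
How overrides read at the call site, using the factory above:

```typescript
// Default: fully random, collision-free user
const user = createUser();

// Override only what the scenario cares about; everything else stays random
const admin = createUser({ role: 'admin' });
const bulkUsers = createUsers(25);
```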

### No Page Objects

**Do NOT create page object classes.** Keep tests simple and direct:

```typescript
// ✅ CORRECT: Direct test
test('should login', async ({ page }) => {
  await page.goto('/login');
  await page.fill('[data-testid="email"]', 'user@example.com');
  await page.click('[data-testid="login-button"]');
  await expect(page).toHaveURL('/dashboard');
});

// ❌ WRONG: Page object abstraction
class LoginPage {
  async login(email, password) { ... }
}
```

Use fixtures for setup/teardown, not page objects for actions.

### Deterministic Tests Only

**No flaky patterns allowed:**

```typescript
// ❌ WRONG: Hard wait
await page.waitForTimeout(2000);

// ✅ CORRECT: Explicit wait
await page.waitForSelector('[data-testid="user-name"]');
await expect(page.locator('[data-testid="user-name"]')).toBeVisible();

// ❌ WRONG: Conditional flow
if (await element.isVisible()) {
  await element.click();
}

// ✅ CORRECT: Deterministic assertion
await expect(element).toBeVisible();
await element.click();

// ❌ WRONG: Try-catch for test logic
try {
  await element.click();
} catch (e) {
  // Test shouldn't catch errors
}

// ✅ CORRECT: Let test fail if element not found
await element.click();
```

## Integration with Other Workflows

**Before this workflow:**

- **framework** workflow: Establish test framework architecture (Playwright/Cypress config, directory structure) - REQUIRED
- **test-design** workflow: Optional for P0-P3 priority alignment and risk assessment context (BMad-Integrated mode only)
- **atdd** workflow: Optional - automate expands beyond ATDD tests with edge cases (BMad-Integrated mode only)

**After this workflow:**

- **trace** workflow: Update traceability matrix with new test coverage (Phase 1) and make quality gate decision (Phase 2)
- **CI pipeline**: Run tests in burn-in loop to detect flaky patterns

**Coordinates with:**

- **DEV agent**: Tests validate implementation correctness
- **Story workflow**: Tests cover acceptance criteria (BMad-Integrated mode only)

## Important Notes

### Works Out of Thin Air

**automate does NOT require BMad artifacts:**

- Can analyze any codebase independently
- User can point TEA at a feature: "automate tests for src/auth/"
- Works on non-BMad projects
- BMad artifacts (story, tech-spec, PRD) are OPTIONAL enhancements, not requirements

**Similar to:**

- **framework**: Can scaffold tests on any project
- **ci**: Can generate CI config without BMad context

**Different from:**

- **atdd**: REQUIRES story with acceptance criteria (halt if missing)
- **test-design**: REQUIRES PRD/epic context (halt if missing)
- **trace (Phase 2)**: REQUIRES test results for gate decision (halt if missing)

### File Size Limits

**Keep test files lean (under 300 lines):**

- If a file exceeds the limit, split it into multiple files by feature area
- Group related tests in describe blocks (as in the sketch below)
- Extract common setup to fixtures
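
A minimal sketch of the describe-block grouping, with hypothetical feature areas:

```typescript
import { test, expect } from '@playwright/test';

// tests/e2e/checkout-payment.spec.ts - one feature area per file keeps each under the limit
test.describe('Checkout: payment', () => {
  test('[P0] should charge the saved card', async ({ page }) => {
    // ...
  });

  test('[P1] should surface declined-card errors', async ({ page }) => {
    // ...
  });
});
```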

### Quality Standards Enforced

**Every test must:**

- ✅ Use Given-When-Then format
- ✅ Have clear, descriptive name with priority tag
- ✅ One assertion per test (atomic)
- ✅ No hard waits or sleeps
- ✅ Use data-testid selectors (not CSS classes)
- ✅ Self-cleaning (fixtures with auto-cleanup)
- ✅ Deterministic (no flaky patterns)
- ✅ Fast (under 90 seconds)

**Forbidden patterns:**

- ❌ Hard waits: `await page.waitForTimeout(2000)`
- ❌ Conditional flow: `if (await element.isVisible()) { ... }`
- ❌ Try-catch for test logic
- ❌ Hardcoded test data (use factories with faker)
- ❌ Page objects
- ❌ Shared state between tests

## Knowledge Base References

This workflow automatically consults:

- **test-levels-framework.md** - Test level selection (E2E vs API vs Component vs Unit) with characteristics and use cases
- **test-priorities.md** - Priority classification (P0-P3) with execution timing and risk alignment
- **fixture-architecture.md** - Test fixture patterns with setup/teardown and auto-cleanup using Playwright's test.extend()
- **data-factories.md** - Factory patterns using @faker-js/faker for random test data generation with overrides
- **selective-testing.md** - Targeted test execution strategies for CI optimization
- **ci-burn-in.md** - Flaky test detection patterns (10 iterations to catch intermittent failures)
- **test-quality.md** - Test design principles (Given-When-Then, determinism, isolation, atomic assertions)

**Healing Knowledge (If `auto_heal_failures` enabled):**

- **test-healing-patterns.md** - Common failure patterns and automated fixes (selectors, timing, data, network, hard waits)
- **selector-resilience.md** - Robust selector strategies and debugging (data-testid hierarchy, filter vs nth, anti-patterns)
- **timing-debugging.md** - Race condition identification and deterministic wait fixes (network-first, event-based waits)

See `tea-index.csv` for complete knowledge fragment mapping (22 fragments total).

## Example Output

### BMad-Integrated Mode

````markdown
# Automation Summary - User Authentication

**Date:** 2025-10-14
**Story:** Epic 3, Story 5
**Coverage Target:** critical-paths

## Tests Created

### E2E Tests (2 tests, P0-P1)

- `tests/e2e/user-authentication.spec.ts` (87 lines)
  - [P0] Login with valid credentials → Dashboard loads
  - [P1] Display error for invalid credentials

### API Tests (3 tests, P1-P2)

- `tests/api/auth.api.spec.ts` (102 lines)
  - [P1] POST /auth/login - valid credentials → 200 + token
  - [P1] POST /auth/login - invalid credentials → 401 + error
  - [P2] POST /auth/login - missing fields → 400 + validation

### Component Tests (2 tests, P1)

- `tests/component/LoginForm.test.tsx` (45 lines)
  - [P1] Empty fields → submit button disabled
  - [P1] Valid input → submit button enabled

## Infrastructure Created

- Fixtures: `tests/support/fixtures/auth.fixture.ts`
- Factories: `tests/support/factories/user.factory.ts`

## Test Execution

```bash
npm run test:e2e # Run all tests
npm run test:e2e:p0 # Critical paths only
npm run test:e2e:p1 # P0 + P1 tests
```

## Coverage Analysis

**Total:** 7 tests (P0: 1, P1: 5, P2: 1)
**Levels:** E2E: 2, API: 3, Component: 2

✅ All acceptance criteria covered
✅ Happy path (E2E + API)
✅ Error cases (API)
✅ UI validation (Component)
````

### Standalone Mode

```markdown
# Automation Summary - src/auth/

**Date:** 2025-10-14
**Target:** src/auth/ (standalone analysis)
**Coverage Target:** critical-paths

## Feature Analysis

**Source Files Analyzed:**
- `src/auth/login.ts`
- `src/auth/session.ts`
- `src/auth/validation.ts`

**Existing Coverage:** 0 tests found

**Coverage Gaps:**
- ❌ No E2E tests for login flow
- ❌ No API tests for /auth/login endpoint
- ❌ No unit tests for validateEmail()

## Tests Created

{Same structure as BMad-Integrated mode}

## Recommendations

1. **High Priority (P0-P1):**
   - Add E2E test for password reset flow
   - Add API tests for token refresh endpoint

2. **Medium Priority (P2):**
   - Add unit tests for session timeout logic
```
@@ -1,493 +0,0 @@

# CI/CD Pipeline Setup Workflow

Scaffolds a production-ready CI/CD quality pipeline with test execution, burn-in loops for flaky test detection, parallel sharding, and artifact collection. This workflow creates platform-specific CI configuration optimized for fast feedback (< 45 min total) and reliable test execution with 20× speedup over sequential runs.

## Usage

```bash
bmad tea *ci
```

The TEA agent runs this workflow when:

- Test framework is configured and tests pass locally
- Team is ready to enable continuous integration
- Existing CI pipeline needs optimization or modernization
- Burn-in loop is needed for flaky test detection

## Inputs

**Required Context Files:**

- **Framework config** (playwright.config.ts, cypress.config.ts): Determines test commands and configuration
- **package.json**: Dependencies and scripts for caching strategy
- **.nvmrc**: Node version for CI (optional, defaults to Node 20 LTS)

**Optional Context Files:**

- **Existing CI config**: To update rather than create new
- **.git/config**: For CI platform auto-detection

**Workflow Variables:**

- `ci_platform`: Auto-detected (github-actions/gitlab-ci/circle-ci) or explicit
- `test_framework`: Detected from framework config (playwright/cypress)
- `parallel_jobs`: Number of parallel shards (default: 4)
- `burn_in_enabled`: Enable burn-in loop (default: true)
- `burn_in_iterations`: Burn-in iterations (default: 10)
- `selective_testing_enabled`: Run only changed tests (default: true)
- `artifact_retention_days`: Artifact storage duration (default: 30)
- `cache_enabled`: Enable dependency caching (default: true)

## Outputs

**Primary Deliverables:**

1. **CI Configuration File**
   - `.github/workflows/test.yml` (GitHub Actions)
   - `.gitlab-ci.yml` (GitLab CI)
   - Platform-specific optimizations and best practices

2. **Pipeline Stages**
   - **Lint**: Code quality checks (<2 min)
   - **Test**: Parallel execution with 4 shards (<10 min per shard)
   - **Burn-In**: Flaky test detection with 10 iterations (<30 min)
   - **Report**: Aggregate results and publish artifacts

3. **Helper Scripts**
   - `scripts/test-changed.sh`: Selective testing (run only affected tests)
   - `scripts/ci-local.sh`: Local CI mirror for debugging
   - `scripts/burn-in.sh`: Standalone burn-in execution

4. **Documentation**
   - `docs/ci.md`: Pipeline guide, debugging, secrets setup
   - `docs/ci-secrets-checklist.md`: Required secrets and configuration
   - Inline comments in CI configuration files

5. **Optimization Features**
   - Dependency caching (npm + browser binaries): 2-5 min savings
   - Parallel sharding: 75% time reduction
   - Retry logic: Handles transient failures (2 retries)
   - Failure-only artifacts: Cost-effective debugging

**Performance Targets:**

- Lint: <2 minutes
- Test (per shard): <10 minutes
- Burn-in: <30 minutes
- **Total: <45 minutes** (20× faster than sequential)

**Validation Safeguards:**

- ✅ Git repository initialized
- ✅ Local tests pass before CI setup
- ✅ Framework configuration exists
- ✅ CI platform accessible

## Key Features

### Burn-In Loop for Flaky Test Detection

**Critical production pattern:**

```yaml
burn-in:
  runs-on: ubuntu-latest
  steps:
    - run: |
        for i in {1..10}; do
          echo "🔥 Burn-in iteration $i/10"
          npm run test:e2e || exit 1
        done
```

**Purpose**: Runs tests 10 times to catch non-deterministic failures before they reach the main branch.

**When to run:**

- On PRs to main/develop
- Weekly on cron schedule
- After test infrastructure changes

**Failure threshold**: Even ONE failure → tests are flaky, must fix before merging.

### Parallel Sharding

**Splits tests across 4 jobs:**

```yaml
strategy:
  matrix:
    shard: [1, 2, 3, 4]
steps:
  - run: npm run test:e2e -- --shard=${{ matrix.shard }}/4
```

**Benefits:**

- 75% time reduction (40 min → 10 min per shard)
- Faster feedback on PRs
- Configurable shard count

### Smart Caching

**Node modules + browser binaries:**

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.npm
    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
```
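
The snippet above covers only the npm half; a hedged sketch of the browser-binary half, assuming Playwright's default Linux cache location:

```yaml
- uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright # Playwright's default browser cache on Linux runners
    key: ${{ runner.os }}-playwright-${{ hashFiles('**/package-lock.json') }}
```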

**Benefits:**

- 2-5 min savings per run
- Consistent across builds
- Automatic invalidation on dependency changes

### Selective Testing

**Run only tests affected by code changes:**

```bash
# scripts/test-changed.sh
# Run only spec files touched in the last commit; fall back to the full suite.
CHANGED_TESTS=$(git diff --name-only HEAD~1 | grep '\.spec\.ts$')
if [ -n "$CHANGED_TESTS" ]; then npm run test:e2e -- $CHANGED_TESTS; else npm run test:e2e; fi
```

**Benefits:**

- 50-80% time reduction for focused PRs
- Faster feedback cycle
- Full suite still runs on main branch

### Failure-Only Artifacts

**Upload debugging materials only on test failures:**

- Traces (Playwright): 5-10 MB per test
- Screenshots: 100-500 KB each
- Videos: 2-5 MB per test
- HTML reports: 1-2 MB

**Benefits:**

- Reduces storage costs by 90%
- Maintains full debugging capability
- 30-day retention default

### Local CI Mirror

**Debug CI failures locally:**

```bash
./scripts/ci-local.sh
# Runs: lint → test → burn-in (3 iterations)
```

**Mirrors CI environment:**

- Same Node version
- Same commands
- Reduced burn-in (3 vs 10 for faster feedback)

### Knowledge Base Integration

Automatically consults TEA knowledge base:

- `ci-burn-in.md` - Burn-in loop patterns and iterations
- `selective-testing.md` - Changed test detection strategies
- `visual-debugging.md` - Artifact collection best practices
- `test-quality.md` - CI-specific quality criteria

## Integration with Other Workflows

**Before ci:**

- **framework**: Sets up test infrastructure and configuration
- **test-design** (optional): Plans test coverage strategy

**After ci:**

- **atdd**: Generate failing tests that run in CI
- **automate**: Expand test coverage that CI executes
- **trace (Phase 2)**: Use CI results for quality gate decisions

**Coordinates with:**

- **dev-story**: Tests run in CI after story implementation
- **retrospective**: CI metrics inform process improvements

**Updates:**

- `bmm-workflow-status.md`: Adds CI setup to Quality & Testing Progress section

## Important Notes

### CI Platform Auto-Detection

**GitHub Actions** (default):

- Auto-selected if `github.com` in git remote
- Free 2000 min/month for private repos
- Unlimited for public repos
- `.github/workflows/test.yml`

**GitLab CI**:

- Auto-selected if `gitlab.com` in git remote
- Free 400 min/month
- `.gitlab-ci.yml`

**Circle CI** / **Jenkins**:

- User must specify explicitly
- Templates provided for both

### Burn-In Strategy

**Iterations:**

- **3**: Quick feedback (local development)
- **10**: Standard (PR checks) ← recommended
- **100**: High-confidence (release branches)

**When to run:**

- ✅ On PRs to main/develop
- ✅ Weekly scheduled (cron)
- ✅ After test infra changes
- ❌ Not on every commit (too slow)

**Cost-benefit:**

- 30 minutes of CI time → Prevents hours of debugging flaky tests
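
A minimal sketch of what `scripts/burn-in.sh` could look like, with the iteration count as an argument so one script serves the 3/10/100 tiers above (the exact generated script may differ):

```bash
#!/usr/bin/env bash
# Usage: ./scripts/burn-in.sh [iterations]   (defaults to 10)
set -euo pipefail
ITERATIONS="${1:-10}"

for i in $(seq 1 "$ITERATIONS"); do
  echo "🔥 Burn-in iteration $i/$ITERATIONS"
  npm run test:e2e # set -e aborts on the first failing run
done
echo "✅ $ITERATIONS consecutive green runs - suite looks stable"
```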

### Artifact Collection Strategy

**Failure-only collection:**

- Saves 90% storage costs
- Maintains debugging capability
- Automatic cleanup after retention period

**What to collect:**

- Traces: Full execution context (Playwright)
- Screenshots: Visual evidence
- Videos: Interaction playback
- HTML reports: Detailed results
- Console logs: Error messages

**What NOT to collect:**

- Passing test artifacts (waste of space)
- Large binaries
- Sensitive data (use secrets instead)

### Selective Testing Trade-offs

**Benefits:**

- 50-80% time reduction for focused changes
- Faster feedback loop
- Lower CI costs

**Risks:**

- May miss integration issues
- Relies on accurate change detection
- Missed failures if detection prunes too aggressively

**Mitigation:**

- Always run full suite on merge to main
- Use burn-in loop on main branch
- Monitor for missed issues

### Parallelism Configuration

**4 shards** (default):

- Optimal for 40-80 test files
- ~10 min per shard
- Balances speed vs resource usage

**Adjust if:**

- Tests complete in <5 min → reduce shards
- Tests take >15 min → increase shards
- CI limits concurrent jobs → reduce shards

**Formula:**

```
Total test time / Target shard time = Optimal shards
Example: 40 min / 10 min = 4 shards
```

### Retry Logic

**2 retries** (default):

- Handles transient network issues
- Mitigates race conditions
- Does NOT mask flaky tests (burn-in catches those)

**When retries trigger:**

- Network timeouts
- Service unavailability
- Resource constraints

**When retries DON'T help:**

- Assertion failures (logic errors)
- Flaky tests (non-deterministic)
- Configuration errors
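
Where the retry count lives for Playwright; a minimal sketch, assuming retries are wanted only in CI so local failures stay loud:

```typescript
// playwright.config.ts (excerpt)
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // 2 retries in CI absorb transient infrastructure noise;
  // 0 locally so genuine failures surface immediately.
  retries: process.env.CI ? 2 : 0,
});
```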

### Notification Setup (Optional)

**Supported channels:**

- Slack: Webhook integration
- Email: SMTP configuration
- Discord: Webhook integration

**Configuration:**

```yaml
notify_on_failure: true
notification_channels: 'slack'
# Requires SLACK_WEBHOOK secret in CI settings
```

**Best practice:** Enable for main/develop branches only, not PRs.

## Validation Checklist

After workflow completion, verify:

- [ ] CI configuration file created and syntactically valid
- [ ] Burn-in loop configured (10 iterations)
- [ ] Parallel sharding enabled (4 jobs)
- [ ] Caching configured (dependencies + browsers)
- [ ] Artifact collection on failure only
- [ ] Helper scripts created and executable
- [ ] Documentation complete (ci.md, secrets checklist)
- [ ] No errors or warnings during scaffold
- [ ] First CI run triggered and passes

Refer to `checklist.md` for comprehensive validation criteria.

## Example Execution

**Scenario 1: New GitHub Actions setup**

```bash
bmad tea *ci

# TEA detects:
# - GitHub repository (github.com in git remote)
# - Playwright framework
# - Node 20 from .nvmrc
# - 60 test files

# TEA scaffolds:
# - .github/workflows/test.yml
# - 4-shard parallel execution
# - Burn-in loop (10 iterations)
# - Dependency + browser caching
# - Failure artifacts (traces, screenshots)
# - Helper scripts
# - Documentation

# Result:
# Total CI time: 42 minutes (was 8 hours sequential)
# - Lint: 1.5 min
# - Test (4 shards): 9 min each
# - Burn-in: 28 min
```

**Scenario 2: Update existing GitLab CI**

```bash
bmad tea *ci

# TEA detects:
# - Existing .gitlab-ci.yml
# - Cypress framework
# - No caching configured

# TEA asks: "Update existing CI or create new?"
# User: "Update"

# TEA enhances:
# - Adds burn-in job
# - Configures caching (cache: paths)
# - Adds parallel: 4
# - Updates artifact collection
# - Documents secrets needed

# Result:
# CI time reduced from 45 min → 12 min
```

**Scenario 3: Standalone burn-in setup**

```bash
# User wants only burn-in, no full CI
bmad tea *ci
# Set burn_in_enabled: true, skip other stages

# TEA creates:
# - Minimal workflow with burn-in only
# - scripts/burn-in.sh for local testing
# - Documentation for running burn-in

# Use case:
# - Validate test stability before full CI setup
# - Debug intermittent failures
# - Confidence check before release
```

## Troubleshooting

**Issue: "Git repository not found"**

- **Cause**: No .git/ directory
- **Solution**: Run `git init` and `git remote add origin <url>`

**Issue: "Tests fail locally but should set up CI anyway"**

- **Cause**: Workflow halts if local tests fail
- **Solution**: Fix tests first, or temporarily skip preflight (not recommended)

**Issue: "CI takes longer than 10 min per shard"**

- **Cause**: Too many tests per shard
- **Solution**: Increase shard count (e.g., 4 → 8)

**Issue: "Burn-in passes locally but fails in CI"**

- **Cause**: Environment differences (timing, resources)
- **Solution**: Use `scripts/ci-local.sh` to mirror CI environment

**Issue: "Caching not working"**

- **Cause**: Cache key mismatch or cache limit exceeded
- **Solution**: Check cache key formula, verify platform limits

## Related Workflows

- **framework**: Set up test infrastructure → [framework/README.md](../framework/README.md)
- **atdd**: Generate acceptance tests → [atdd/README.md](../atdd/README.md)
- **automate**: Expand test coverage → [automate/README.md](../automate/README.md)
- **trace**: Traceability and quality gate decisions → [trace/README.md](../trace/README.md)

## Version History

- **v4.0 (BMad v6)**: Pure markdown instructions, enhanced workflow.yaml, burn-in loop integration
- **v3.x**: XML format instructions, basic CI setup
- **v2.x**: Legacy task-based approach
@@ -1,340 +0,0 @@

# Test Framework Setup Workflow

Initializes a production-ready test framework architecture (Playwright or Cypress) with fixtures, helpers, configuration, and industry best practices. This workflow scaffolds the complete testing infrastructure for modern web applications, providing a robust foundation for test automation.

## Usage

```bash
bmad tea *framework
```

The TEA agent runs this workflow when:

- Starting a new project that needs test infrastructure
- Migrating from an older testing approach
- Setting up testing from scratch
- Standardizing test architecture across teams

## Inputs

**Required Context Files:**

- **package.json**: Project dependencies and scripts to detect project type and bundler

**Optional Context Files:**

- **Architecture docs** (architecture.md, tech-spec.md): Informs framework configuration decisions
- **Existing tests**: Detects current framework to avoid conflicts

**Workflow Variables:**

- `test_framework`: Auto-detected (playwright/cypress) or manually specified
- `project_type`: Auto-detected from package.json (react/vue/angular/next/node)
- `bundler`: Auto-detected from package.json (vite/webpack/rollup/esbuild)
- `test_dir`: Root test directory (default: `{project-root}/tests`)
- `use_typescript`: Prefer TypeScript configuration (default: true)
- `framework_preference`: Auto-detection or force specific framework (default: "auto")

## Outputs

**Primary Deliverables:**

1. **Configuration File**
   - `playwright.config.ts` or `cypress.config.ts` with production-ready settings
   - Timeouts: action 15s, navigation 30s, test 60s
   - Reporters: HTML + JUnit XML
   - Failure-only artifacts (traces, screenshots, videos)

2. **Directory Structure**

   ```
   tests/
   ├── e2e/              # Test files (organize as needed)
   ├── support/          # Framework infrastructure (key pattern)
   │   ├── fixtures/     # Test fixtures with auto-cleanup
   │   │   ├── index.ts  # Fixture merging
   │   │   └── factories/ # Data factories (faker-based)
   │   ├── helpers/      # Utility functions
   │   └── page-objects/ # Page object models (optional)
   └── README.md         # Setup and usage guide
   ```

   **Note**: Test organization (e2e/, api/, integration/, etc.) is flexible. The **support/** folder contains reusable fixtures, helpers, and factories - the core framework pattern.

3. **Environment Configuration**
   - `.env.example` with `TEST_ENV`, `BASE_URL`, `API_URL`, auth credentials
   - `.nvmrc` with Node version (LTS)

4. **Test Infrastructure**
   - Fixture architecture using `mergeTests` pattern
   - Data factories with auto-cleanup (faker-based)
   - Sample tests demonstrating best practices
   - Helper utilities for common operations

5. **Documentation**
   - `tests/README.md` with comprehensive setup instructions
   - Inline comments explaining configuration choices
   - References to TEA knowledge base

**Secondary Deliverables:**

- Updated `package.json` with minimal test script (`test:e2e`)
- Sample test demonstrating fixture usage
- Network-first testing patterns
- Selector strategy guidance (data-testid)

**Validation Safeguards:**

- ✅ No existing framework detected (prevents conflicts)
- ✅ package.json exists and is valid
- ✅ Framework auto-detection successful or explicit choice provided
- ✅ Sample test runs successfully
- ✅ All generated files are syntactically correct

## Key Features

### Smart Framework Selection

- **Auto-detection logic** based on project characteristics:
  - **Playwright** recommended for: Large repos (100+ files), performance-critical apps, multi-browser support, complex debugging needs
  - **Cypress** recommended for: Small teams prioritizing DX, component testing focus, real-time test development
- Falls back to Playwright as default if uncertain

### Production-Ready Patterns

- **Fixture Architecture**: Pure function → fixture → `mergeTests` composition pattern (sketched below)
- **Auto-Cleanup**: Fixtures automatically clean up test data in teardown
- **Network-First**: Route interception before navigation to prevent race conditions
- **Failure-Only Artifacts**: Screenshots/videos/traces only captured on failure to reduce storage
- **Parallel Execution**: Configured for optimal CI performance
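
A compact sketch of the pure function → fixture → `mergeTests` chain (file and fixture names here are illustrative, not the exact scaffold output):

```typescript
import { test as base, mergeTests, expect, type Page } from '@playwright/test';
import { test as apiTest } from './api.fixture'; // hypothetical sibling fixture

type User = { email: string; password: string };

// 1. Pure function: plain logic, no test-runner coupling, easy to reuse
async function loginAs(page: Page, user: User): Promise<void> {
  await page.goto('/login');
  await page.fill('[data-testid="email"]', user.email);
  await page.fill('[data-testid="password"]', user.password);
  await page.click('[data-testid="login-button"]');
}

// 2. Fixture: wraps the pure function with setup (and teardown if needed)
const authTest = base.extend<{ authenticatedUser: User }>({
  authenticatedUser: async ({ page }, use) => {
    const user: User = { email: 'user@example.com', password: 'Password123!' };
    await loginAs(page, user);
    await use(user);
  },
});

// 3. mergeTests: composes independent fixtures into a single test object
export const test = mergeTests(authTest, apiTest);
export { expect };
```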

### Industry Best Practices

- **Selector Strategy**: Prescriptive guidance on `data-testid` attributes
- **Data Factories**: Faker-based factories for realistic test data
- **Contract Testing**: Recommends Pact for microservices architectures
- **Error Handling**: Comprehensive timeout and retry configuration
- **Reporting**: Multiple reporter formats (HTML, JUnit, console)

### Knowledge Base Integration

Automatically consults TEA knowledge base:

- `fixture-architecture.md` - Pure function → fixture → mergeTests pattern
- `data-factories.md` - Faker-based factories with auto-cleanup
- `network-first.md` - Network interception before navigation
- `playwright-config.md` - Playwright-specific best practices
- `test-config.md` - General configuration guidelines

## Integration with Other Workflows

**Before framework:**

- **prd** (Phase 2): Determines project scope and testing needs
- **workflow-status**: Verifies project readiness

**After framework:**

- **ci**: Scaffold CI/CD pipeline using framework configuration
- **test-design**: Plan test coverage strategy for the project
- **atdd**: Generate failing acceptance tests using the framework

**Coordinates with:**

- **architecture** (Phase 3): Aligns test structure with system architecture
- **tech-spec**: Uses technical specifications to inform test configuration

**Updates:**

- `bmm-workflow-status.md`: Adds framework initialization to Quality & Testing Progress section

## Important Notes

### Preflight Checks

**Critical requirements** verified before scaffolding:

- package.json exists in project root
- No modern E2E framework already configured
- Architecture/stack context available

If any check fails, the workflow **HALTS** and notifies the user.

### Framework-Specific Guidance

**Playwright Advantages:**

- Worker parallelism (significantly faster for large suites)
- Trace viewer (powerful debugging with screenshots, network, console logs)
- Multi-language support (TypeScript, JavaScript, Python, C#, Java)
- Built-in API testing capabilities
- Better handling of multiple browser contexts

**Cypress Advantages:**

- Superior developer experience (real-time reloading)
- Excellent for component testing
- Simpler setup for small teams
- Better suited for watch mode during development

**Avoid Cypress when:**

- API chains are heavy and complex
- Multi-tab/window scenarios are common
- Worker parallelism is critical for CI performance

### Selector Strategy

**Always recommend:**

- `data-testid` attributes for UI elements (framework-agnostic)
- `data-cy` attributes if Cypress is chosen (Cypress-specific)
- Avoid brittle CSS selectors or XPath

### Standalone Operation

This workflow operates independently:

- **No story required**: Can be run at project initialization
- **No epic context needed**: Works for greenfield and brownfield projects
- **Autonomous**: Auto-detects configuration and proceeds without user input

### Output Summary Format

After completion, provides structured summary:

```markdown
## Framework Scaffold Complete

**Framework Selected**: Playwright (or Cypress)

**Artifacts Created**:

- ✅ Configuration file: playwright.config.ts
- ✅ Directory structure: tests/e2e/, tests/support/
- ✅ Environment config: .env.example
- ✅ Node version: .nvmrc
- ✅ Fixture architecture: tests/support/fixtures/
- ✅ Data factories: tests/support/fixtures/factories/
- ✅ Sample tests: tests/e2e/example.spec.ts
- ✅ Documentation: tests/README.md

**Next Steps**:

1. Copy .env.example to .env and fill in environment variables
2. Run npm install to install test dependencies
3. Run npm run test:e2e to execute sample tests
4. Review tests/README.md for detailed setup instructions

**Knowledge Base References Applied**:

- Fixture architecture pattern (pure functions + mergeTests)
- Data factories with auto-cleanup (faker-based)
- Network-first testing safeguards
- Failure-only artifact capture
```

## Validation Checklist

After workflow completion, verify:

- [ ] Configuration file created and syntactically valid
- [ ] Directory structure exists with all folders
- [ ] Environment configuration generated (.env.example, .nvmrc)
- [ ] Sample tests run successfully (npm run test:e2e)
- [ ] Documentation complete and accurate (tests/README.md)
- [ ] No errors or warnings during scaffold
- [ ] package.json scripts updated correctly
- [ ] Fixtures and factories follow patterns from knowledge base

Refer to `checklist.md` for comprehensive validation criteria.

## Example Execution

**Scenario 1: New React + Vite project**

```bash
# User runs framework workflow
bmad tea *framework

# TEA detects:
# - React project (from package.json)
# - Vite bundler
# - No existing test framework
# - 150+ files (recommends Playwright)

# TEA scaffolds:
# - playwright.config.ts with Vite detection
# - Component testing configuration
# - React Testing Library helpers
# - Sample component + E2E tests
```

**Scenario 2: Existing Node.js API project**

```bash
# User runs framework workflow
bmad tea *framework

# TEA detects:
# - Node.js backend (no frontend framework)
# - Express framework
# - Small project (50 files)
# - API endpoints in routes/

# TEA scaffolds:
# - playwright.config.ts focused on API testing
# - tests/api/ directory structure
# - API helper utilities
# - Sample API tests with auth
```

**Scenario 3: Cypress preferred (explicit)**

```bash
# User sets framework preference
# (in workflow config: framework_preference: "cypress")

bmad tea *framework

# TEA scaffolds:
# - cypress.config.ts
# - tests/e2e/ with Cypress patterns
# - Cypress-specific commands
# - data-cy selector strategy
```

## Troubleshooting

**Issue: "Existing test framework detected"**

- **Cause**: playwright.config.* or cypress.config.* already exists
- **Solution**: Use `upgrade-framework` workflow (TBD) or manually remove existing config

**Issue: "Cannot detect project type"**

- **Cause**: package.json missing or malformed
- **Solution**: Ensure package.json exists and has valid dependencies

**Issue: "Sample test fails to run"**

- **Cause**: Missing dependencies or incorrect BASE_URL
- **Solution**: Run `npm install` and configure `.env` with correct URLs

**Issue: "TypeScript compilation errors"**

- **Cause**: Missing @types packages or tsconfig misconfiguration
- **Solution**: Ensure TypeScript and type definitions are installed

## Related Workflows

- **ci**: Scaffold CI/CD pipeline → [ci/README.md](../ci/README.md)
- **test-design**: Plan test coverage → [test-design/README.md](../test-design/README.md)
- **atdd**: Generate acceptance tests → [atdd/README.md](../atdd/README.md)
- **automate**: Expand regression suite → [automate/README.md](../automate/README.md)

## Version History

- **v4.0 (BMad v6)**: Pure markdown instructions, enhanced workflow.yaml, comprehensive README
- **v3.x**: XML format instructions
- **v2.x**: Legacy task-based approach

@@ -1,469 +0,0 @@

# Non-Functional Requirements Assessment Workflow

**Workflow ID:** `testarch-nfr`
**Agent:** Test Architect (TEA)
**Command:** `bmad tea *nfr-assess`

---

## Overview

The **nfr-assess** workflow performs a comprehensive assessment of non-functional requirements (NFRs) to validate that the implementation meets performance, security, reliability, and maintainability standards before release. It uses evidence-based validation with deterministic PASS/CONCERNS/FAIL rules and provides actionable recommendations for remediation.

**Key Features:**

- Assess multiple NFR categories (performance, security, reliability, maintainability, custom)
- Validate NFRs against defined thresholds from tech specs, PRD, or defaults
- Classify status deterministically (PASS/CONCERNS/FAIL) based on evidence
- Never guess thresholds - mark as CONCERNS if unknown
- Generate CI/CD-ready YAML snippets for quality gates
- Provide quick wins and recommended actions for remediation
- Create evidence checklists for gaps

---

## When to Use This Workflow

Use `*nfr-assess` when you need to:

- ✅ Validate non-functional requirements before release
- ✅ Assess performance against defined thresholds
- ✅ Verify security requirements are met
- ✅ Validate reliability and error handling
- ✅ Check maintainability standards (coverage, quality, documentation)
- ✅ Generate NFR assessment reports for stakeholders
- ✅ Create gate-ready metrics for CI/CD pipelines

**Typical Timing:**

- Before release (validate all NFRs)
- Before PR merge (validate critical NFRs)
- During sprint retrospectives (assess maintainability)
- After performance testing (validate performance NFRs)
- After security audit (validate security NFRs)

---

## Prerequisites

**Required:**

- Implementation deployed locally or accessible for evaluation
- Evidence sources available (test results, metrics, logs, CI results)

**Recommended:**

- NFR requirements defined in tech-spec.md, PRD.md, or story
- Test results from performance, security, reliability tests
- Application metrics (response times, error rates, throughput)
- CI/CD pipeline results for burn-in validation

**Halt Conditions:**

- NFR targets are undefined and cannot be obtained → Halt and request definition
- Implementation is not accessible for evaluation → Halt and request deployment

---

## Usage

### Basic Usage (BMad Mode)

```bash
bmad tea *nfr-assess
```

The workflow will:

1. Read tech-spec.md for NFR requirements
2. Gather evidence from test results, metrics, logs
3. Assess each NFR category against thresholds
4. Generate NFR assessment report
5. Save to `bmad/output/nfr-assessment.md`

### Standalone Mode (No Tech Spec)

```bash
bmad tea *nfr-assess --feature-name "User Authentication"
```

### Custom Configuration

```bash
bmad tea *nfr-assess \
  --assess-performance true \
  --assess-security true \
  --assess-reliability true \
  --assess-maintainability true \
  --performance-response-time-ms 500 \
  --security-score-min 85
```

---

## Workflow Steps

1. **Load Context** - Read tech spec, PRD, knowledge base fragments
2. **Identify NFRs** - Determine categories and thresholds
3. **Gather Evidence** - Read test results, metrics, logs, CI results
4. **Assess NFRs** - Apply deterministic PASS/CONCERNS/FAIL rules
5. **Identify Actions** - Quick wins, recommended actions, monitoring hooks
6. **Generate Deliverables** - NFR assessment report, gate YAML, evidence checklist

---

## Outputs

### NFR Assessment Report (`nfr-assessment.md`)

Comprehensive markdown file with:

- Executive summary (overall status, critical issues)
- Assessment by category (performance, security, reliability, maintainability)
- Evidence for each NFR (test results, metrics, thresholds)
- Status classification (PASS/CONCERNS/FAIL)
- Quick wins section
- Recommended actions section
- Evidence gaps checklist

### Gate YAML Snippet (Optional)

```yaml
nfr_assessment:
  date: '2025-10-14'
  categories:
    performance: 'PASS'
    security: 'CONCERNS'
    reliability: 'PASS'
    maintainability: 'PASS'
  overall_status: 'CONCERNS'
  critical_issues: 0
  high_priority_issues: 1
  concerns: 1
  blockers: false
```

### Evidence Checklist (Optional)

- List of NFRs with missing or incomplete evidence
- Owners for evidence collection
- Suggested evidence sources
- Deadlines for evidence collection

---

## NFR Categories

### Performance

**Criteria:** Response time, throughput, resource usage, scalability

**Thresholds (Default):**

- Response time p95: 500ms
- Throughput: 100 RPS
- CPU usage: < 70%
- Memory usage: < 80%

**Evidence Sources:** Load test results, APM data, Lighthouse reports, Playwright traces

---

### Security

**Criteria:** Authentication, authorization, data protection, vulnerability management

**Thresholds (Default):**

- Security score: >= 85/100
- Critical vulnerabilities: 0
- High vulnerabilities: < 3
- MFA enabled

**Evidence Sources:** SAST results, DAST results, dependency scanning, pentest reports

---

### Reliability

**Criteria:** Availability, error handling, fault tolerance, disaster recovery

**Thresholds (Default):**

- Uptime: >= 99.9%
- Error rate: < 0.1%
- MTTR: < 15 minutes
- CI burn-in: 100 consecutive runs

**Evidence Sources:** Uptime monitoring, error logs, CI burn-in results, chaos tests

---

### Maintainability

**Criteria:** Code quality, test coverage, documentation, technical debt

**Thresholds (Default):**

- Test coverage: >= 80%
- Code quality: >= 85/100
- Technical debt: < 5%
- Documentation: >= 90%

**Evidence Sources:** Coverage reports, static analysis, documentation audit, test review

---

## Assessment Rules

### PASS ✅

- Evidence exists AND meets or exceeds threshold
- No concerns flagged in evidence
- Quality is acceptable

### CONCERNS ⚠️

- Threshold is UNKNOWN (not defined)
- Evidence is MISSING or INCOMPLETE
- Evidence is close to threshold (within 10%)
- Evidence shows intermittent issues

### FAIL ❌

- Evidence exists BUT does not meet threshold
- Critical evidence is MISSING
- Evidence shows consistent failures
- Quality is unacceptable
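
These rules are deterministic enough to express directly in code. A minimal sketch - the 10% "close to threshold" margin and the never-guess behavior mirror the rules above, but the function itself is illustrative, not part of the workflow:

```typescript
type NfrStatus = 'PASS' | 'CONCERNS' | 'FAIL';

interface NfrEvidence {
  threshold?: number;      // undefined = threshold never defined
  measured?: number;       // undefined = no evidence collected
  higherIsBetter: boolean; // e.g. coverage: true; response time: false
}

// Deterministic classification: never guesses - unknowns become CONCERNS.
// (Critical missing evidence escalating to FAIL is left to the caller.)
function classifyNfr({ threshold, measured, higherIsBetter }: NfrEvidence): NfrStatus {
  if (threshold === undefined) return 'CONCERNS'; // unknown threshold
  if (measured === undefined) return 'CONCERNS';  // missing evidence
  const meets = higherIsBetter ? measured >= threshold : measured <= threshold;
  if (!meets) return 'FAIL';
  // Within 10% of the threshold counts as "close" and stays a concern.
  const margin = Math.abs(measured - threshold) / threshold;
  return margin < 0.1 ? 'CONCERNS' : 'PASS';
}
```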

---

## Configuration

### workflow.yaml Variables

```yaml
variables:
  # NFR categories to assess
  assess_performance: true
  assess_security: true
  assess_reliability: true
  assess_maintainability: true

  # Custom NFR categories
  custom_nfr_categories: '' # e.g., "accessibility,compliance"

  # Evidence sources
  test_results_dir: '{project-root}/test-results'
  metrics_dir: '{project-root}/metrics'
  logs_dir: '{project-root}/logs'
  include_ci_results: true

  # Thresholds
  performance_response_time_ms: 500
  performance_throughput_rps: 100
  security_score_min: 85
  reliability_uptime_pct: 99.9
  maintainability_coverage_pct: 80

  # Assessment configuration
  use_deterministic_rules: true
  never_guess_thresholds: true
  require_evidence: true
  suggest_monitoring: true

  # Output configuration
  output_file: '{output_folder}/nfr-assessment.md'
  generate_gate_yaml: true
  generate_evidence_checklist: true
```

---

## Knowledge Base Integration

This workflow automatically loads relevant knowledge fragments:

- `nfr-criteria.md` - Non-functional requirements criteria
- `ci-burn-in.md` - CI/CD burn-in patterns for reliability
- `test-quality.md` - Test quality expectations (maintainability)
- `playwright-config.md` - Performance configuration patterns

---

## Examples

### Example 1: Full NFR Assessment Before Release

```bash
bmad tea *nfr-assess
```

**Output:**

```markdown
# NFR Assessment - Story 1.3

**Overall Status:** PASS ✅ (No blockers)

## Performance Assessment

- Response Time p95: PASS ✅ (320ms < 500ms threshold)
- Throughput: PASS ✅ (250 RPS > 100 RPS threshold)

## Security Assessment

- Authentication: PASS ✅ (MFA enforced)
- Data Protection: PASS ✅ (AES-256 + TLS 1.3)

## Reliability Assessment

- Uptime: PASS ✅ (99.95% > 99.9% threshold)
- Error Rate: PASS ✅ (0.05% < 0.1% threshold)

## Maintainability Assessment

- Test Coverage: PASS ✅ (87% > 80% threshold)
- Code Quality: PASS ✅ (92/100 > 85/100 threshold)

Gate Status: PASS ✅ - Ready for release
```

### Example 2: NFR Assessment with Concerns

```bash
bmad tea *nfr-assess --feature-name "User Authentication"
```

**Output:**

```markdown
# NFR Assessment - User Authentication

**Overall Status:** CONCERNS ⚠️ (1 HIGH issue)

## Security Assessment

### Authentication Strength

- **Status:** CONCERNS ⚠️
- **Threshold:** MFA enabled for all users
- **Actual:** MFA optional (not enforced)
- **Evidence:** Security audit (security-audit-2025-10-14.md)
- **Recommendation:** HIGH - Enforce MFA for all new accounts

## Quick Wins

1. **Enforce MFA (Security)** - HIGH - 4 hours
   - Add configuration flag to enforce MFA
   - No code changes needed

Gate Status: CONCERNS ⚠️ - Address HIGH priority issues before release
```

### Example 3: Performance-Only Assessment

```bash
bmad tea *nfr-assess \
  --assess-performance true \
  --assess-security false \
  --assess-reliability false \
  --assess-maintainability false
```

---

## Troubleshooting

### "NFR thresholds not defined"

- Check tech-spec.md for NFR requirements
- Check PRD.md for product-level SLAs
- Check story file for feature-specific requirements
- If thresholds truly unknown, mark as CONCERNS and recommend defining them

### "No evidence found"

- Check evidence directories (test-results, metrics, logs)
- Check CI/CD pipeline for test results
- If evidence truly missing, mark NFR as "NO EVIDENCE" and recommend generating it

### "CONCERNS status but no threshold exceeded"

- CONCERNS is correct when threshold is UNKNOWN or evidence is MISSING/INCOMPLETE
- CONCERNS is also correct when evidence is close to threshold (within 10%)
- Document why CONCERNS was assigned in assessment report

### "FAIL status blocks release"

- This is intentional - FAIL means critical NFR not met
- Recommend remediation actions with specific steps
- Re-run assessment after remediation

---

## Integration with Other Workflows

- **testarch-test-design** → `*nfr-assess` - Define NFR requirements, then assess
- **testarch-framework** → `*nfr-assess` - Set up frameworks, then validate NFRs
- **testarch-ci** → `*nfr-assess` - Configure CI, then assess reliability with burn-in
- `*nfr-assess` → **testarch-trace (Phase 2)** - Assess NFRs, then apply quality gates
- `*nfr-assess` → **testarch-test-review** - Assess maintainability, then review tests

---

## Best Practices

1. **Never Guess Thresholds**
   - If threshold is unknown, mark as CONCERNS
   - Recommend defining threshold in tech-spec.md
   - Don't infer thresholds from similar features

2. **Evidence-Based Assessment**
   - Every assessment must be backed by evidence
   - Mark NFRs without evidence as "NO EVIDENCE"
   - Don't assume or infer - require explicit evidence

3. **Deterministic Rules**
   - Apply PASS/CONCERNS/FAIL consistently
   - Document reasoning for each classification
   - Use same rules across all NFR categories

4. **Actionable Recommendations**
   - Provide specific steps, not generic advice
   - Include priority, effort estimate, owner suggestion
   - Focus on quick wins first

5. **Gate Integration**
   - Enable `generate_gate_yaml` for CI/CD integration
   - Use YAML snippets in pipeline quality gates
   - Export metrics for dashboard visualization

---

## Quality Gates

| Status      | Criteria                     | Action                      |
| ----------- | ---------------------------- | --------------------------- |
| PASS ✅     | All NFRs have PASS status    | Ready for release           |
| CONCERNS ⚠️ | Any NFR has CONCERNS status  | Address before next release |
| FAIL ❌     | Critical NFR has FAIL status | Do not release - BLOCKER    |

---

## Related Commands

- `bmad tea *test-design` - Define NFR requirements and test plan
- `bmad tea *framework` - Set up performance/security testing frameworks
- `bmad tea *ci` - Configure CI/CD for NFR validation
- `bmad tea *trace` (Phase 2) - Apply quality gates using NFR assessment metrics
- `bmad tea *test-review` - Review test quality (maintainability NFR)

---

## Resources

- [Instructions](./instructions.md) - Detailed workflow steps
- [Checklist](./checklist.md) - Validation checklist
- [Template](./nfr-report-template.md) - NFR assessment report template
- [Knowledge Base](../../testarch/knowledge/) - NFR criteria and best practices

---

<!-- Powered by BMAD-CORE™ -->

@@ -1,493 +0,0 @@

# Test Design and Risk Assessment Workflow

Plans comprehensive test coverage strategy with risk assessment (probability × impact scoring), priority classification (P0-P3), and resource estimation. This workflow generates a test design document that identifies high-risk areas, maps requirements to appropriate test levels, and provides execution ordering for optimal feedback.

## Usage

```bash
bmad tea *test-design
```

The TEA agent runs this workflow when:

- Planning test coverage before development starts
- Assessing risks for an epic or story
- Prioritizing test scenarios by business impact
- Estimating testing effort and resources

## Inputs

**Required Context Files:**

- **Story markdown**: Acceptance criteria and requirements
- **PRD or epics.md**: High-level product context
- **Architecture docs** (optional): Technical constraints and integration points

**Workflow Variables:**

- `epic_num`: Epic number for scoped design
- `story_path`: Specific story for design (optional)
- `design_level`: full/targeted/minimal (default: full)
- `risk_threshold`: Score for high-priority flag (default: 6)
- `risk_categories`: TECH,SEC,PERF,DATA,BUS,OPS (all enabled)
- `priority_levels`: P0,P1,P2,P3 (all enabled)

## Outputs

**Primary Deliverable:**

**Test Design Document** (`test-design-epic-{N}.md`):

1. **Risk Assessment Matrix**
   - Risk ID, category, description
   - Probability (1-3) × Impact (1-3) = Score
   - Scores ≥6 flagged as high-priority
   - Mitigation plans with owners and timelines

2. **Coverage Matrix**
   - Requirement → Test Level (E2E/API/Component/Unit)
   - Priority assignment (P0-P3)
   - Risk linkage
   - Test count estimates

3. **Execution Order**
   - Smoke tests (P0 subset, <5 min)
   - P0 tests (critical paths, <10 min)
   - P1 tests (important features, <30 min)
   - P2/P3 tests (full regression, <60 min)

4. **Resource Estimates**
   - Hours per priority level
   - Total effort in days
   - Tooling and data prerequisites

5. **Quality Gate Criteria**
   - P0 pass rate: 100%
   - P1 pass rate: ≥95%
   - High-risk mitigations: 100%
   - Coverage target: ≥80%

## Key Features

### Risk Scoring Framework

**Probability × Impact = Risk Score**

**Probability** (1-3):

- 1 (Unlikely): <10% chance
- 2 (Possible): 10-50% chance
- 3 (Likely): >50% chance

**Impact** (1-3):

- 1 (Minor): Cosmetic, workaround exists
- 2 (Degraded): Feature impaired, difficult workaround
- 3 (Critical): System failure, no workaround

**Scores**:

- 1-2: Low risk (monitor)
- 3-4: Medium risk (plan mitigation)
- **6-9: High risk** (immediate mitigation required)
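
Because probability and impact are each 1-3, the only possible products are 1, 2, 3, 4, 6, and 9 - which is why the high band starts at 6. A minimal TypeScript sketch of the scoring (illustrative only; the names are not part of the workflow):

```typescript
type RiskBand = 'LOW' | 'MEDIUM' | 'HIGH';

function riskScore(probability: 1 | 2 | 3, impact: 1 | 2 | 3): number {
  return probability * impact; // possible values: 1, 2, 3, 4, 6, 9
}

function riskBand(score: number): RiskBand {
  if (score >= 6) return 'HIGH'; // immediate mitigation required
  if (score >= 3) return 'MEDIUM'; // plan mitigation
  return 'LOW'; // monitor
}

// A possible (P=2) payment bypass with critical impact (I=3) scores 6 - HIGH.
riskBand(riskScore(2, 3)); // 'HIGH'
```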

### Risk Categories (6 types)

**TECH** (Technical/Architecture):

- Architecture flaws, integration failures
- Scalability issues, technical debt

**SEC** (Security):

- Missing access controls, auth bypass
- Data exposure, injection vulnerabilities

**PERF** (Performance):

- SLA violations, response time degradation
- Resource exhaustion, scalability limits

**DATA** (Data Integrity):

- Data loss/corruption, inconsistent state
- Migration failures

**BUS** (Business Impact):

- UX degradation, business logic errors
- Revenue impact, compliance violations

**OPS** (Operations):

- Deployment failures, configuration errors
- Monitoring gaps, rollback issues

### Priority Classification (P0-P3)

**P0 (Critical)** - Run on every commit:

- Blocks core user journey
- High-risk (score ≥6)
- Revenue-impacting or security-critical

**P1 (High)** - Run on PR to main:

- Important user features
- Medium-risk (score 3-4)
- Common workflows

**P2 (Medium)** - Run nightly/weekly:

- Secondary features
- Low-risk (score 1-2)
- Edge cases

**P3 (Low)** - Run on-demand:

- Nice-to-have, exploratory
- Performance benchmarks

### Test Level Selection

**E2E (End-to-End)**:

- Critical user journeys
- Multi-system integration
- Highest confidence, slowest

**API (Integration)**:

- Service contracts
- Business logic validation
- Fast feedback, stable

**Component**:

- UI component behavior
- Visual regression
- Fast, isolated

**Unit**:

- Business logic, edge cases
- Error handling
- Fastest, most granular

**Key principle**: Avoid duplicate coverage - don't test same behavior at multiple levels.

### Exploratory Mode (NEW - Phase 2.5)

**test-design** supports UI exploration for brownfield applications with missing documentation.

**Activation**: Automatic when requirements missing/incomplete for brownfield apps

- If `config.tea_use_mcp_enhancements` is true + MCP available → MCP-assisted exploration
- Otherwise → Manual exploration with user documentation

**When to Use Exploratory Mode:**

- ✅ Brownfield projects with missing documentation
- ✅ Legacy systems lacking requirements
- ✅ Undocumented features needing test coverage
- ✅ Unknown user journeys requiring discovery
- ❌ NOT for greenfield projects with clear requirements

**Exploration Modes:**

1. **MCP-Assisted Exploration** (if Playwright MCP available):
   - Interactive browser exploration using MCP tools
   - `planner_setup_page` - Initialize browser
   - `browser_navigate` - Explore pages
   - `browser_click` - Interact with UI elements
   - `browser_hover` - Reveal hidden menus
   - `browser_snapshot` - Capture state at each step
   - `browser_screenshot` - Document visually
   - `browser_console_messages` - Find JavaScript errors
   - `browser_network_requests` - Identify API endpoints

2. **Manual Exploration** (fallback without MCP):
   - User explores application manually
   - Documents findings in markdown:
     - Pages/features discovered
     - User journeys identified
     - API endpoints observed (DevTools Network)
     - JavaScript errors noted (DevTools Console)
     - Critical workflows mapped
   - Provides exploration findings to workflow

**Exploration Workflow:**

```
1. Enable exploratory_mode and set exploration_url
2. IF MCP available:
   - Use planner_setup_page to init browser
   - Explore UI with browser_* tools
   - Capture snapshots and screenshots
   - Monitor console and network
   - Document discoveries
3. IF MCP unavailable:
   - Notify user to explore manually
   - Wait for exploration findings
4. Convert discoveries to testable requirements
5. Continue with standard risk assessment (Step 2)
```

**Example Output from Exploratory Mode:**

```markdown
## Exploration Findings - Legacy Admin Panel

**Exploration URL**: https://admin.example.com
**Mode**: MCP-Assisted

### Discovered Features:

1. User Management (/admin/users)
   - List users (table with 10 columns)
   - Edit user (modal form)
   - Delete user (confirmation dialog)
   - Export to CSV (download button)

2. Reporting Dashboard (/admin/reports)
   - Date range picker
   - Filter by department
   - Generate PDF report
   - Email report to stakeholders

3. API Endpoints Discovered:
   - GET /api/admin/users
   - PUT /api/admin/users/:id
   - DELETE /api/admin/users/:id
   - POST /api/reports/generate

### User Journeys Mapped:

1. Admin deletes inactive user
   - Navigate to /admin/users
   - Click delete icon
   - Confirm in modal
   - User removed from table

2. Admin generates monthly report
   - Navigate to /admin/reports
   - Select date range (last month)
   - Click generate
   - Download PDF

### Risks Identified (from exploration):

- R-001 (SEC): No RBAC check observed (any admin can delete any user)
- R-002 (DATA): No confirmation on bulk delete
- R-003 (PERF): User table loads slowly (5s for 1000 rows)

**Next**: Proceed to risk assessment with discovered requirements
```

**Graceful Degradation:**

- Exploratory mode is OPTIONAL (default: disabled)
- Works without Playwright MCP (manual fallback)
- If exploration fails, can disable mode and provide requirements documentation
- Seamlessly transitions to standard risk assessment workflow

### Knowledge Base Integration

Automatically consults TEA knowledge base:

- `risk-governance.md` - Risk classification framework
- `probability-impact.md` - Risk scoring methodology
- `test-levels-framework.md` - Test level selection
- `test-priorities-matrix.md` - P0-P3 prioritization

## Integration with Other Workflows

**Before test-design:**

- **prd** (Phase 2): Creates PRD and epics
- **architecture** (Phase 3): Defines technical approach
- **tech-spec** (Phase 3): Implementation details

**After test-design:**

- **atdd**: Generate failing tests for P0 scenarios
- **automate**: Expand coverage for P1/P2 scenarios
- **trace (Phase 2)**: Use quality gate criteria for release decisions

**Coordinates with:**

- **framework**: Test infrastructure must exist
- **ci**: Execution order maps to CI stages

**Updates:**

- `bmm-workflow-status.md`: Adds test design to Quality & Testing Progress

## Important Notes

### Evidence-Based Assessment

**Critical principle**: Base risk assessment on **evidence**, not speculation.

**Evidence sources:**

- PRD and user research
- Architecture documentation
- Historical bug data
- User feedback
- Security audit results

**When uncertain**: Document assumptions, request user clarification.

**Avoid**:

- Guessing business impact
- Assuming user behavior
- Inventing requirements

### Resource Estimation Formula

```
P0: 2 hours per test (setup + complex scenarios)
P1: 1 hour per test (standard coverage)
P2: 0.5 hours per test (simple scenarios)
P3: 0.25 hours per test (exploratory)

Total Days = Total Hours / 8
```

Example:

- 15 P0 × 2h = 30h
- 25 P1 × 1h = 25h
- 40 P2 × 0.5h = 20h
- **Total: 75 hours (~10 days)**
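
As a sketch, the same arithmetic in TypeScript (illustrative only; the function name is an assumption):

```typescript
type Priority = 'P0' | 'P1' | 'P2' | 'P3';

// Hours per test by priority, as defined in the formula above.
const HOURS_PER_TEST: Record<Priority, number> = { P0: 2, P1: 1, P2: 0.5, P3: 0.25 };

function estimateEffort(counts: Record<Priority, number>) {
  let hours = 0;
  for (const p of Object.keys(HOURS_PER_TEST) as Priority[]) {
    hours += counts[p] * HOURS_PER_TEST[p];
  }
  return { hours, days: hours / 8 };
}

// The worked example above: 15 P0 + 25 P1 + 40 P2 tests.
estimateEffort({ P0: 15, P1: 25, P2: 40, P3: 0 }); // { hours: 75, days: 9.375 }
```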

### Execution Order Strategy

**Smoke tests** (subset of P0, <5 min):

- Login successful
- Dashboard loads
- Core API responds

**Purpose**: Fast feedback, catch build-breaking issues immediately.

**P0 tests** (critical paths, <10 min):

- All scenarios blocking user journeys
- Security-critical flows

**P1 tests** (important features, <30 min):

- Common workflows
- Medium-risk areas

**P2/P3 tests** (full regression, <60 min):

- Edge cases
- Performance benchmarks

### Quality Gate Criteria

**Pass/Fail thresholds:**

- P0: 100% pass (no exceptions)
- P1: ≥95% pass (2-3 failures acceptable with waivers)
- P2/P3: ≥90% pass (informational)
- High-risk items: All mitigated or have approved waivers

**Coverage targets:**

- Critical paths: ≥80%
- Security scenarios: 100%
- Business logic: ≥70%

## Validation Checklist

After workflow completion:

- [ ] Risk assessment complete (all categories)
- [ ] Risks scored (probability × impact)
- [ ] High-priority risks (≥6) flagged
- [ ] Coverage matrix maps requirements to test levels
- [ ] Priorities assigned (P0-P3)
- [ ] Execution order defined
- [ ] Resource estimates provided
- [ ] Quality gate criteria defined
- [ ] Output file created

Refer to `checklist.md` for comprehensive validation.

## Example Execution

**Scenario: E-commerce checkout epic**

```bash
bmad tea *test-design
# Epic 3: Checkout flow redesign

# Risk Assessment identifies:
- R-001 (SEC): Payment bypass, P=2 × I=3 = 6 (HIGH)
- R-002 (PERF): Cart load time, P=3 × I=2 = 6 (HIGH)
- R-003 (BUS): Order confirmation email, P=2 × I=2 = 4 (MEDIUM)

# Coverage Plan:
P0 scenarios: 12 tests (payment security, order creation)
P1 scenarios: 18 tests (cart management, promo codes)
P2 scenarios: 25 tests (edge cases, error handling)

Total effort: 54.5 hours (~7 days)

# Test Levels:
- E2E: 8 tests (critical checkout path)
- API: 30 tests (business logic, payment processing)
- Unit: 17 tests (calculations, validations)

# Execution Order:
1. Smoke: Payment successful, order created (2 min)
2. P0: All payment & security flows (8 min)
3. P1: Cart & promo codes (20 min)
4. P2: Edge cases (40 min)

# Quality Gates:
- P0 pass rate: 100%
- P1 pass rate: ≥95%
- R-001 mitigated: Add payment validation layer
- R-002 mitigated: Implement cart caching
```

## Troubleshooting

**Issue: "Unable to score risks - missing context"**

- **Cause**: Insufficient documentation
- **Solution**: Request PRD, architecture docs, or user clarification

**Issue: "All tests marked as P0"**

- **Cause**: Over-prioritization
- **Solution**: Apply strict P0 criteria (blocks core journey + high risk + no workaround)

**Issue: "Duplicate coverage at multiple test levels"**

- **Cause**: Not following test pyramid
- **Solution**: Use E2E for critical paths only, API for logic, unit for edge cases

**Issue: "Resource estimates too high"**

- **Cause**: Complex test setup or insufficient automation
- **Solution**: Invest in fixtures/factories upfront, reduce per-test setup time

## Related Workflows

- **atdd**: Generate failing tests → [atdd/README.md](../atdd/README.md)
- **automate**: Expand regression coverage → [automate/README.md](../automate/README.md)
- **trace**: Traceability and quality gate decisions → [trace/README.md](../trace/README.md)
- **framework**: Test infrastructure → [framework/README.md](../framework/README.md)

## Version History

- **v4.0 (BMad v6)**: Pure markdown instructions, risk scoring framework, template-based output
- **v3.x**: XML format instructions
- **v2.x**: Legacy task-based approach

@@ -1,775 +0,0 @@

# Test Quality Review Workflow

The Test Quality Review workflow performs comprehensive quality validation of test code using TEA's knowledge base of best practices. It detects flaky patterns, validates structure, and provides actionable feedback to improve test maintainability and reliability.

## Overview

This workflow reviews test quality against proven patterns from TEA's knowledge base including fixture architecture, network-first safeguards, data factories, determinism, isolation, and flakiness prevention. It generates a quality score (0-100) with detailed feedback on violations and recommendations.

**Key Features:**

- **Knowledge-Based Review**: Applies patterns from 19+ knowledge fragments in tea-index.csv
- **Quality Scoring**: 0-100 score with letter grade (A+ to F) based on violations
- **Multi-Scope Review**: Single file, directory, or entire test suite
- **Pattern Detection**: Identifies hard waits, race conditions, shared state, conditionals
- **Best Practice Validation**: BDD format, test IDs, priorities, assertions, test length
- **Actionable Feedback**: Critical issues (must fix) vs recommendations (should fix)
- **Code Examples**: Every issue includes recommended fix with code snippets
- **Integration**: Works with story files, test-design, acceptance criteria context

---

## Usage

```bash
bmad tea *test-review
```

The TEA agent runs this workflow when:

- After `*atdd` workflow → validate generated acceptance tests
- After `*automate` workflow → ensure regression suite quality
- After developer writes tests → provide quality feedback
- Before `*gate` workflow → confirm test quality before release
- User explicitly requests review: `bmad tea *test-review`
- Periodic quality audits of existing test suite

**Typical workflow sequence:**

1. `*atdd` → Generate failing acceptance tests
2. **`*test-review`** → Validate test quality ⬅️ YOU ARE HERE (option 1)
3. `*dev story` → Implement feature with tests passing
4. **`*test-review`** → Review implementation tests ⬅️ YOU ARE HERE (option 2)
5. `*automate` → Expand regression suite
6. **`*test-review`** → Validate new regression tests ⬅️ YOU ARE HERE (option 3)
7. `*gate` → Final quality gate decision

---

## Inputs

### Required Context Files

- **Test File(s)**: One or more test files to review (auto-discovered or explicitly provided)
- **Test Framework Config**: playwright.config.ts, jest.config.js, etc. (for context)

### Recommended Context Files

- **Story File**: Acceptance criteria for context (e.g., `story-1.3.md`)
- **Test Design**: Priority context (P0/P1/P2/P3) from test-design.md
- **Knowledge Base**: tea-index.csv with best practice fragments (required for thorough review)

### Workflow Variables

Key variables that control review behavior (configured in `workflow.yaml`):

- **review_scope**: `single` | `directory` | `suite` (default: `single`)
  - `single`: Review one test file
  - `directory`: Review all tests in a directory
  - `suite`: Review entire test suite

- **quality_score_enabled**: Enable 0-100 quality scoring (default: `true`)
- **append_to_file**: Add inline comments to test files (default: `false`)
- **check_against_knowledge**: Use tea-index.csv fragments (default: `true`)
- **strict_mode**: Fail on any violation vs advisory only (default: `false`)

**Quality Criteria Flags** (all default to `true`):

- `check_given_when_then`: BDD format validation
- `check_test_ids`: Test ID conventions
- `check_priority_markers`: P0/P1/P2/P3 classification
- `check_hard_waits`: Detect sleep(), wait(X)
- `check_determinism`: No conditionals/try-catch abuse
- `check_isolation`: Tests clean up, no shared state
- `check_fixture_patterns`: Pure function → Fixture → mergeTests
- `check_data_factories`: Factory usage vs hardcoded data
- `check_network_first`: Route intercept before navigate
- `check_assertions`: Explicit assertions present
- `check_test_length`: Warn if >300 lines
- `check_test_duration`: Warn if >1.5 min
- `check_flakiness_patterns`: Common flaky patterns

---

## Outputs

### Primary Deliverable

**Test Quality Review Report** (`test-review-{filename}.md`):

- **Executive Summary**: Overall assessment, key strengths/weaknesses, recommendation
- **Quality Score**: 0-100 score with letter grade (A+ to F)
- **Quality Criteria Assessment**: Table with all criteria evaluated (PASS/WARN/FAIL)
- **Critical Issues**: P0/P1 violations that must be fixed
- **Recommendations**: P2/P3 violations that should be fixed
- **Best Practices Examples**: Good patterns found in tests
- **Knowledge Base References**: Links to detailed guidance

Each issue includes:

- Code location (file:line)
- Explanation of problem
- Recommended fix with code example
- Knowledge base fragment reference

### Secondary Outputs

- **Inline Comments**: TODO comments in test files at violation locations (if enabled)
- **Quality Badge**: Badge with score (e.g., "Test Quality: 87/100 (A)")
- **Story Update**: Test quality section appended to story file (if enabled)

### Validation Safeguards

- ✅ All knowledge base fragments loaded successfully
- ✅ Test files parsed and structure analyzed
- ✅ All enabled quality criteria evaluated
- ✅ Violations categorized by severity (P0/P1/P2/P3)
- ✅ Quality score calculated with breakdown
- ✅ Actionable feedback with code examples provided

---

## Quality Criteria Explained

### 1. BDD Format (Given-When-Then)

**PASS**: Tests use clear Given-When-Then structure

```typescript
// Given: User is logged in
const user = await createTestUser();
await loginPage.login(user.email, user.password);

// When: User navigates to dashboard
await page.goto('/dashboard');

// Then: User sees welcome message
await expect(page.locator('[data-testid="welcome"]')).toContainText(user.name);
```

**FAIL**: Tests lack structure, hard to understand intent

```typescript
await page.goto('/dashboard');
await page.click('.button');
await expect(page.locator('.text')).toBeVisible();
```

**Knowledge**: test-quality.md, tdd-cycles.md

---

### 2. Test IDs

**PASS**: All tests have IDs following convention

```typescript
test.describe('1.3-E2E-001: User Login Flow', () => {
  test('should log in successfully with valid credentials', async ({ page }) => {
    // Test implementation
  });
});
```

**FAIL**: No test IDs, can't trace to requirements

```typescript
test.describe('Login', () => {
  test('login works', async ({ page }) => {
    // Test implementation
  });
});
```

**Knowledge**: traceability.md, test-quality.md

---

### 3. Priority Markers

**PASS**: Tests classified as P0/P1/P2/P3

```typescript
test.describe('P0: Critical User Journey - Checkout', () => {
  // Critical tests
});

test.describe('P2: Edge Case - International Addresses', () => {
  // Nice-to-have tests
});
```

**Knowledge**: test-priorities.md, risk-governance.md

---

### 4. No Hard Waits

**PASS**: No sleep(), wait(), hardcoded delays

```typescript
// ✅ Good: Explicit wait for condition
await expect(page.locator('[data-testid="user-menu"]')).toBeVisible({ timeout: 10000 });
```

**FAIL**: Hard waits introduce flakiness

```typescript
// ❌ Bad: Hard wait
await page.waitForTimeout(2000);
await expect(page.locator('[data-testid="user-menu"]')).toBeVisible();
```

**Knowledge**: test-quality.md, network-first.md

---

### 5. Determinism

**PASS**: Tests work deterministically, no conditionals

```typescript
// ✅ Good: Deterministic test
await expect(page.locator('[data-testid="status"]')).toHaveText('Active');
```

**FAIL**: Conditionals make tests unpredictable

```typescript
// ❌ Bad: Conditional logic
const status = await page.locator('[data-testid="status"]').textContent();
if (status === 'Active') {
  await page.click('[data-testid="deactivate"]');
} else {
  await page.click('[data-testid="activate"]');
}
```

**Knowledge**: test-quality.md, data-factories.md

---

### 6. Isolation

**PASS**: Tests clean up, no shared state

```typescript
test.afterEach(async ({ page, testUser }) => {
  // Cleanup: Delete test user
  await api.deleteUser(testUser.id);
});
```

**FAIL**: Shared state, tests depend on order

```typescript
// ❌ Bad: Shared global variable
let userId: string;

test('create user', async () => {
  userId = await createUser(); // Sets global
});

test('update user', async () => {
  await updateUser(userId); // Depends on previous test
});
```

**Knowledge**: test-quality.md, data-factories.md

---

### 7. Fixture Patterns

**PASS**: Pure function → Fixture → mergeTests

```typescript
// ✅ Good: Pure function fixture
const createAuthenticatedPage = async (page: Page, user: User) => {
  await loginPage.login(user.email, user.password);
  return page;
};

const test = base.extend({
  authenticatedPage: async ({ page }, use) => {
    const user = createTestUser();
    const authedPage = await createAuthenticatedPage(page, user);
    await use(authedPage);
  },
});
```

**FAIL**: No fixtures, repeated setup

```typescript
// ❌ Bad: Repeated setup in every test
test('test 1', async ({ page }) => {
  await page.goto('/login');
  await page.fill('[name="email"]', 'test@example.com');
  await page.fill('[name="password"]', 'password123');
  await page.click('[type="submit"]');
  // Test logic
});
```

**Knowledge**: fixture-architecture.md

---

### 8. Data Factories

**PASS**: Factory functions with overrides

```typescript
// ✅ Good: Factory function
import { createTestUser } from './factories/user-factory';

test('user can update profile', async ({ page }) => {
  const user = createTestUser({ role: 'admin' });
  await api.createUser(user); // API-first setup
  // Test UI interaction
});
```

**FAIL**: Hardcoded test data

```typescript
// ❌ Bad: Magic strings
await page.fill('[name="email"]', 'test@example.com');
await page.fill('[name="phone"]', '555-1234');
```

**Knowledge**: data-factories.md

---

### 9. Network-First Pattern

**PASS**: Route intercept before navigate

```typescript
// ✅ Good: Intercept before navigation
await page.route('**/api/users', (route) => route.fulfill({ json: mockUsers }));
await page.goto('/users'); // Navigate after route setup
```

**FAIL**: Race condition risk

```typescript
// ❌ Bad: Navigate before intercept
await page.goto('/users');
await page.route('**/api/users', (route) => route.fulfill({ json: mockUsers })); // Too late!
```

**Knowledge**: network-first.md

---

### 10. Explicit Assertions

**PASS**: Clear, specific assertions

```typescript
await expect(page.locator('[data-testid="username"]')).toHaveText('John Doe');
await expect(page.locator('[data-testid="status"]')).toHaveClass(/active/);
```

**FAIL**: Missing or vague assertions

```typescript
await page.locator('[data-testid="username"]').isVisible(); // No assertion!
```

**Knowledge**: test-quality.md

---

### 11. Test Length

**PASS**: ≤300 lines per file (ideal: ≤200)
**WARN**: 301-500 lines (consider splitting)
**FAIL**: >500 lines (too large)

**Knowledge**: test-quality.md

---

### 12. Test Duration

**PASS**: ≤1.5 minutes per test (target: <30 seconds)
**WARN**: 1.5-3 minutes (consider optimization)
**FAIL**: >3 minutes (too slow)

**Knowledge**: test-quality.md, selective-testing.md

---

### 13. Flakiness Patterns

Common flaky patterns detected:

- Tight timeouts (e.g., `{ timeout: 1000 }`)
- Race conditions (navigation before route interception)
- Timing-dependent assertions
- Retry logic hiding flakiness
- Environment-dependent assumptions

**Knowledge**: test-quality.md, network-first.md, ci-burn-in.md

---

## Quality Scoring

### Score Calculation

```
Starting Score: 100

Deductions:
- Critical Violations (P0): -10 points each
- High Violations (P1): -5 points each
- Medium Violations (P2): -2 points each
- Low Violations (P3): -1 point each

Bonus Points (max +30):
+ Excellent BDD structure: +5
+ Comprehensive fixtures: +5
+ Comprehensive data factories: +5
+ Network-first pattern consistently used: +5
+ Perfect isolation (all tests clean up): +5
+ All test IDs present and correct: +5

Final Score: max(0, min(100, Starting Score - Violations + Bonus))
```

### Quality Grades

- **90-100** (A+): Excellent - Production-ready, best practices followed
- **80-89** (A): Good - Minor improvements recommended
- **70-79** (B): Acceptable - Some issues to address
- **60-69** (C): Needs Improvement - Several issues detected
- **<60** (F): Critical Issues - Significant problems, not production-ready
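
The scoring is simple enough to express directly. A minimal TypeScript sketch (illustrative; the names are assumptions, not part of the workflow):

```typescript
interface ViolationCounts { p0: number; p1: number; p2: number; p3: number; }

// Deductions per severity plus a capped bonus, per the scheme above.
function qualityScore(v: ViolationCounts, bonusPoints: number): number {
  const deductions = v.p0 * 10 + v.p1 * 5 + v.p2 * 2 + v.p3 * 1;
  const bonus = Math.min(bonusPoints, 30);
  return Math.max(0, Math.min(100, 100 - deductions + bonus));
}

function grade(score: number): 'A+' | 'A' | 'B' | 'C' | 'F' {
  if (score >= 90) return 'A+';
  if (score >= 80) return 'A';
  if (score >= 70) return 'B';
  if (score >= 60) return 'C';
  return 'F';
}

// Example: one P1 and three P2 violations with +10 bonus -> 100 - 11 + 10 = 99 (A+).
grade(qualityScore({ p0: 0, p1: 1, p2: 3, p3: 0 }, 10)); // 'A+'
```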

---

## Example Scenarios

### Scenario 1: Excellent Quality (Score: 95)

```markdown
# Test Quality Review: checkout-flow.spec.ts

**Quality Score**: 95/100 (A+ - Excellent)
**Recommendation**: Approve - Production Ready

## Executive Summary

Excellent test quality with comprehensive coverage and best practices throughout.
Tests demonstrate expert-level patterns including fixture architecture, data
factories, network-first approach, and perfect isolation.

**Strengths:**
✅ Clear Given-When-Then structure in all tests
✅ Comprehensive fixtures for authenticated states
✅ Data factories with faker.js for realistic test data
✅ Network-first pattern prevents race conditions
✅ Perfect test isolation with cleanup
✅ All test IDs present (1.2-E2E-001 through 1.2-E2E-005)

**Minor Recommendations:**
⚠️ One test slightly verbose (245 lines) - consider extracting helper function

**Recommendation**: Approve without changes. Use as reference for other tests.
```

---

### Scenario 2: Good Quality (Score: 82)

```markdown
# Test Quality Review: user-profile.spec.ts

**Quality Score**: 82/100 (A - Good)
**Recommendation**: Approve with Comments

## Executive Summary

Solid test quality with good structure and coverage. A few improvements would
enhance maintainability and reduce flakiness risk.

**Strengths:**
✅ Good BDD structure
✅ Test IDs present
✅ Explicit assertions

**Issues to Address:**
⚠️ 2 hard waits detected (lines 34, 67) - use explicit waits instead
⚠️ Hardcoded test data (line 23) - use factory functions
⚠️ Missing cleanup in one test (line 89) - add afterEach hook

**Recommendation**: Address hard waits before merging. Other improvements
can be addressed in follow-up PR.
```

---

### Scenario 3: Needs Improvement (Score: 68)

```markdown
# Test Quality Review: legacy-report.spec.ts

**Quality Score**: 68/100 (C - Needs Improvement)
**Recommendation**: Request Changes

## Executive Summary

Test has several quality issues that should be addressed before merging.
Primarily concerns around flakiness risk and maintainability.

**Critical Issues:**
❌ 5 hard waits detected (flakiness risk)
❌ Race condition: navigation before route interception (line 45)
❌ Shared global state between tests (line 12)
❌ Missing test IDs (can't trace to requirements)

**Recommendations:**
⚠️ Test file is 487 lines - consider splitting
⚠️ Hardcoded data throughout - use factories
⚠️ Missing cleanup in afterEach

**Recommendation**: Address all critical issues (❌) before re-review.
Significant refactoring needed.
```

---

### Scenario 4: Critical Issues (Score: 42)

```markdown
# Test Quality Review: data-export.spec.ts

**Quality Score**: 42/100 (F - Critical Issues)
**Recommendation**: Block - Not Production Ready

## Executive Summary

CRITICAL: Test has severe quality issues that make it unsuitable for
production. Significant refactoring required.

**Critical Issues:**
❌ 12 hard waits (page.waitForTimeout) throughout
❌ No test IDs or structure
❌ Try/catch blocks swallowing errors (lines 23, 45, 67, 89)
❌ No cleanup - tests leave data in database
❌ Conditional logic (if/else) throughout tests
❌ No assertions in 3 tests (tests do nothing!)
❌ 687 lines - far too large
❌ Multiple race conditions
❌ Hardcoded credentials in plain text (SECURITY ISSUE)

**Recommendation**: BLOCK MERGE. Complete rewrite recommended following
TEA knowledge base patterns. Suggest pairing session with QA engineer.
```

---

## Integration with Other Workflows

### Before Test Review

1. **atdd** - Generates acceptance tests → TEA reviews for quality
2. **dev story** - Developer implements tests → TEA provides feedback
3. **automate** - Expands regression suite → TEA validates new tests

### After Test Review

1. **Developer** - Addresses critical issues, improves based on recommendations
2. **gate** - Test quality feeds into release decision (high-quality tests increase confidence)

### Coordinates With

- **Story File**: Review links to acceptance criteria for context
- **Test Design**: Review validates tests align with P0/P1/P2/P3 prioritization
- **Knowledge Base**: All feedback references tea-index.csv fragments

---

## Review Scopes

### Single File Review

```bash
# Review specific test file
bmad tea *test-review
# Provide test_file_path when prompted: tests/auth/login.spec.ts
```

**Use When:**

- Reviewing tests just written
- PR review of specific test file
- Debugging flaky test
- Learning test quality patterns

---

### Directory Review

```bash
# Review all tests in directory
bmad tea *test-review
# Provide review_scope: directory
# Provide test_dir: tests/auth/
```

**Use When:**

- Feature branch has multiple test files
- Reviewing entire feature test suite
- Auditing test quality for module

---

### Suite Review

```bash
# Review entire test suite
bmad tea *test-review
# Provide review_scope: suite
```

**Use When:**

- Periodic quality audit (monthly/quarterly)
- Before major release
- Identifying patterns across codebase
- Establishing quality baseline

---

## Configuration Examples

### Strict Review (Fail on Violations)

```yaml
review_scope: 'single'
quality_score_enabled: true
strict_mode: true # Fail if score <70
check_against_knowledge: true
# All check_* flags: true
```

Use for: PR gates, production releases

---

### Balanced Review (Advisory)

```yaml
review_scope: 'single'
quality_score_enabled: true
strict_mode: false # Advisory only
check_against_knowledge: true
# All check_* flags: true
```

Use for: Most development workflows (default)

---

### Focused Review (Specific Criteria)

```yaml
review_scope: 'single'
check_hard_waits: true
check_flakiness_patterns: true
check_network_first: true
# Other checks: false
```

Use for: Debugging flaky tests, targeted improvements

---

## Important Notes

1. **Non-Prescriptive**: Review provides guidance, not rigid rules
2. **Context Matters**: Some violations may be justified (document with comments)
3. **Knowledge-Based**: All feedback grounded in proven patterns
4. **Actionable**: Every issue includes recommended fix with code example
5. **Quality Score**: Use as indicator, not absolute measure
6. **Continuous Improvement**: Review tests periodically as patterns evolve
7. **Learning Tool**: Use reviews to learn best practices, not just find bugs

---

## Knowledge Base References

This workflow automatically consults:

- **test-quality.md** - Definition of Done (no hard waits, <300 lines, <1.5 min, self-cleaning)
- **fixture-architecture.md** - Pure function → Fixture → mergeTests pattern
- **network-first.md** - Route intercept before navigate (race condition prevention)
- **data-factories.md** - Factory functions with overrides, API-first setup
- **test-levels-framework.md** - E2E vs API vs Component vs Unit appropriateness
- **playwright-config.md** - Environment-based configuration patterns
- **tdd-cycles.md** - Red-Green-Refactor patterns
- **selective-testing.md** - Duplicate coverage detection
- **ci-burn-in.md** - Flakiness detection patterns
- **test-priorities.md** - P0/P1/P2/P3 classification framework
- **traceability.md** - Requirements-to-tests mapping

See `tea-index.csv` for complete knowledge fragment mapping.

---

## Troubleshooting

### Problem: Quality score seems too low

**Solution:**

- Review violation breakdown - focus on critical issues first
- Consider project context - some patterns may be justified
- Check if criteria are appropriate for project type
- Score is indicator, not absolute - focus on actionable feedback

---

### Problem: No test files found

**Solution:**

- Verify test_dir path is correct
- Check test file extensions (`*.spec.ts`, `*.test.js`, etc.)
- Use glob pattern to discover: `tests/**/*.spec.ts`

---

### Problem: Knowledge fragments not loading

**Solution:**

- Verify tea-index.csv exists in testarch/ directory
- Check fragment file paths are correct in tea-index.csv
- Ensure `auto_load_knowledge: true` in workflow variables

---

### Problem: Too many false positives

**Solution:**

- Add justification comments in code for legitimate violations
- Adjust `check_*` flags to disable specific criteria
- Use `strict_mode: false` for advisory-only feedback
- Context matters - document why pattern is appropriate

---

## Related Commands

- `bmad tea *atdd` - Generate acceptance tests (review after generation)
- `bmad tea *automate` - Expand regression suite (review new tests)
- `bmad tea *gate` - Quality gate decision (test quality feeds into decision)
- `bmad dev story` - Implement story (review tests after implementation)

@@ -1,802 +0,0 @@

# Requirements Traceability & Quality Gate Workflow

**Workflow ID:** `testarch-trace`
**Agent:** Test Architect (TEA)
**Command:** `bmad tea *trace`

---

## Overview

The **trace** workflow operates in two sequential phases to validate test coverage and deployment readiness:

**PHASE 1 - REQUIREMENTS TRACEABILITY:** Generates comprehensive requirements-to-tests traceability matrix that maps acceptance criteria to implemented tests, identifies coverage gaps, and provides actionable recommendations.

**PHASE 2 - QUALITY GATE DECISION:** Makes deterministic release decisions (PASS/CONCERNS/FAIL/WAIVED) based on traceability results, test execution evidence, and non-functional requirements validation.

**Key Features:**

- Maps acceptance criteria to specific test cases across all levels (E2E, API, Component, Unit)
- Classifies coverage status (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- Prioritizes gaps by risk level (P0/P1/P2/P3)
- Applies deterministic decision rules for deployment readiness
- Generates gate decisions with evidence and rationale
- Supports waivers for business-approved exceptions
- Updates workflow status and notifies stakeholders
- Creates CI/CD-ready YAML snippets for quality gates
- Detects duplicate coverage across test levels
- Verifies test quality (assertions, structure, performance)

---

## When to Use This Workflow

Use `*trace` when you need to:

### Phase 1 - Traceability

- ✅ Validate that all acceptance criteria have test coverage
- ✅ Identify coverage gaps before release or PR merge
- ✅ Generate traceability documentation for compliance or audits
- ✅ Ensure critical paths (P0/P1) are fully tested
- ✅ Detect duplicate coverage across test levels
- ✅ Assess test quality across your suite

### Phase 2 - Gate Decision (Optional)

- ✅ Make final go/no-go deployment decision
- ✅ Validate test execution results against thresholds
- ✅ Evaluate non-functional requirements (security, performance)
- ✅ Generate audit trail for release approval
- ✅ Handle business waivers for critical deadlines
- ✅ Notify stakeholders of gate decision

**Typical Timing:**

- After tests are implemented (post-ATDD or post-development)
- Before merging a PR (validate P0/P1 coverage)
- Before release (validate full coverage and make gate decision)
- During sprint retrospectives (assess test quality)

---

## Prerequisites

### Phase 1 - Traceability (Required)

- Acceptance criteria (from story file OR inline)
- Implemented test suite (or acknowledged gaps)

### Phase 2 - Gate Decision (Required if `enable_gate_decision: true`)

- Test execution results (CI/CD test reports, pass/fail rates)
- Test design with risk priorities (P0/P1/P2/P3)

### Recommended

- `test-design.md` - Risk assessment and test priorities
- `nfr-assessment.md` - Non-functional requirements validation (for release gates)
- `tech-spec.md` - Technical implementation details
- Test framework configuration (playwright.config.ts, jest.config.js)

**Halt Conditions:**

- Story lacks any tests AND gaps are not acknowledged → Run `*atdd` first
- Acceptance criteria are completely missing → Provide criteria or story file
- Phase 2 enabled but test execution results missing → Warn and skip gate decision

---

## Usage

### Basic Usage (Both Phases)

```bash
bmad tea *trace
```

The workflow will:

1. **Phase 1**: Read story file, extract acceptance criteria, auto-discover tests, generate traceability matrix
2. **Phase 2**: Load test execution results, apply decision rules, generate gate decision document
3. Save traceability matrix to `bmad/output/traceability-matrix.md`
4. Save gate decision to `bmad/output/gate-decision-story-X.X.md`

### Phase 1 Only (Skip Gate Decision)

```bash
bmad tea *trace --enable-gate-decision false
```

### Custom Configuration

```bash
bmad tea *trace \
  --story-file "bmad/output/story-1.3.md" \
  --test-results "ci-artifacts/test-report.xml" \
  --min-p0-coverage 100 \
  --min-p1-coverage 90 \
  --min-p0-pass-rate 100 \
  --min-p1-pass-rate 95
```

### Standalone Mode (No Story File)

```bash
bmad tea *trace --acceptance-criteria "AC-1: User can login with email..."
```

---

## Workflow Steps

### PHASE 1: Requirements Traceability

1. **Load Context** - Read story, test design, tech spec, knowledge base
2. **Discover Tests** - Auto-find tests related to story (by ID, describe blocks, file paths)
3. **Map Criteria** - Link acceptance criteria to specific test cases
4. **Analyze Gaps** - Identify missing coverage and prioritize by risk
5. **Verify Quality** - Check test quality (assertions, structure, performance)
6. **Generate Deliverables** - Create traceability matrix, gate YAML, coverage badge
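
A minimal sketch of the discovery step (2), assuming tests embed story-scoped IDs such as `1.3-E2E-001` in their titles; the ID pattern and inputs here are illustrative, not a fixed part of the workflow:

```typescript
import { readFileSync } from 'node:fs';

// Scan spec files for test IDs carrying the story prefix, e.g. "1.3" ->
// "1.3-E2E-001". IDs that never appear in any file are coverage gaps.
function discoverStoryTests(storyId: string, specFiles: string[]) {
  const escaped = storyId.replace('.', '\\.');
  const idPattern = new RegExp(`\\b${escaped}-(?:E2E|API|COMP|UNIT)-\\d{3}\\b`, 'g');
  const found: Record<string, string[]> = {};
  for (const file of specFiles) {
    for (const id of readFileSync(file, 'utf8').match(idPattern) ?? []) {
      (found[id] ??= []).push(file);
    }
  }
  return found; // e.g. { '1.3-E2E-001': ['tests/auth/login.spec.ts'] }
}
```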
|
||||
|
||||
### PHASE 2: Quality Gate Decision (if `enable_gate_decision: true`)
|
||||
|
||||
7. **Gather Evidence** - Load traceability results, test execution reports, NFR assessments
|
||||
8. **Apply Decision Rules** - Evaluate against thresholds (PASS/CONCERNS/FAIL/WAIVED)
|
||||
9. **Document Decision** - Create gate decision document with evidence and rationale
|
||||
10. **Update Status & Notify** - Append to bmm-workflow-status.md, notify stakeholders
|
||||
|
||||
---
|
||||
|
||||
## Outputs
|
||||
|
||||
### Phase 1: Traceability Matrix (`traceability-matrix.md`)
|
||||
|
||||
Comprehensive markdown file with:
|
||||
|
||||
- Coverage summary table (by priority)
|
||||
- Detailed criterion-to-test mapping
|
||||
- Gap analysis with recommendations
|
||||
- Quality assessment for each test
|
||||
- Gate YAML snippet
|
||||
|
||||
**Example:**
|
||||
|
||||
```markdown
|
||||
# Traceability Matrix - Story 1.3
|
||||
|
||||
## Coverage Summary
|
||||
|
||||
| Priority | Total | FULL | Coverage % | Status |
|
||||
| -------- | ----- | ---- | ---------- | ------- |
|
||||
| P0 | 3 | 3 | 100% | ✅ PASS |
|
||||
| P1 | 5 | 4 | 80% | ⚠️ WARN |
|
||||
|
||||
Gate Status: CONCERNS ⚠️ (P1 coverage below 90%)
|
||||
```
|
||||
|
||||
### Phase 2: Gate Decision Document (`gate-decision-{type}-{id}.md`)
|
||||
|
||||
**Decision Document** with:
|
||||
|
||||
- **Decision**: PASS / CONCERNS / FAIL / WAIVED with clear rationale
|
||||
- **Evidence Summary**: Test results, coverage, NFRs, quality validation
|
||||
- **Decision Criteria Table**: Each criterion with threshold, actual, status
|
||||
- **Rationale**: Explanation of decision based on evidence
|
||||
- **Residual Risks**: Unresolved issues (for CONCERNS/WAIVED)
|
||||
- **Waiver Details**: Approver, justification, remediation plan (for WAIVED)
|
||||
- **Next Steps**: Action items for each decision type
|
||||
|
||||
**Example:**
|
||||
|
||||
```markdown
|
||||
# Quality Gate Decision: Story 1.3 - User Login
|
||||
|
||||
**Decision**: ⚠️ CONCERNS
|
||||
**Date**: 2025-10-15
|
||||
|
||||
## Decision Criteria
|
||||
|
||||
| Criterion | Threshold | Actual | Status |
|
||||
| ------------ | --------- | ------ | ------- |
|
||||
| P0 Coverage | ≥100% | 100% | ✅ PASS |
|
||||
| P1 Coverage | ≥90% | 88% | ⚠️ FAIL |
|
||||
| Overall Pass | ≥90% | 96% | ✅ PASS |
|
||||
|
||||
**Decision**: CONCERNS (P1 coverage 88% below 90% threshold)
|
||||
|
||||
## Next Steps
|
||||
|
||||
- Deploy with monitoring
|
||||
- Create follow-up story for AC-5 test
|
||||
```
|
||||
|
||||
### Secondary Outputs
|
||||
|
||||
- **Gate YAML**: Machine-readable snippet for CI/CD integration
|
||||
- **Status Update**: Appends decision to `bmm-workflow-status.md` history
|
||||
- **Stakeholder Notification**: Auto-generated summary message
|
||||
- **Updated Story File**: Traceability section added (optional)
|
||||
|
||||

---

## Decision Logic (Phase 2)

### PASS Decision ✅

**All criteria met:**

- ✅ P0 coverage ≥ 100%
- ✅ P1 coverage ≥ 90%
- ✅ Overall coverage ≥ 80%
- ✅ P0 test pass rate = 100%
- ✅ P1 test pass rate ≥ 95%
- ✅ Overall test pass rate ≥ 90%
- ✅ Security issues = 0
- ✅ Critical NFR failures = 0

**Action:** Deploy to production with standard monitoring

---

### CONCERNS Decision ⚠️

**P0 criteria met, but P1 criteria degraded:**

- ✅ P0 coverage = 100%
- ⚠️ P1 coverage 80-89% (below 90% threshold)
- ⚠️ P1 test pass rate 90-94% (below 95% threshold)
- ✅ No security issues
- ✅ No critical NFR failures

**Residual Risks:** Minor P1 issues, edge cases, non-critical gaps

**Action:** Deploy with enhanced monitoring, create backlog stories for fixes

**Note:** CONCERNS does NOT block deployment but requires acknowledgment

---

### FAIL Decision ❌

**Any P0 criterion failed:**

- ❌ P0 coverage <100% (missing critical tests)
- OR ❌ P0 test pass rate <100% (failing critical tests)
- OR ❌ P1 coverage <80% (significant gap)
- OR ❌ Security issues >0
- OR ❌ Critical NFR failures >0

**Critical Blockers:** P0 test failures, security vulnerabilities, critical NFR failures

**Action:** Block deployment, fix critical issues, re-run gate after fixes

---

### WAIVED Decision 🔓

**FAIL status + business-approved waiver:**

- ❌ Original decision: FAIL
- 🔓 Waiver approved by: {VP Engineering / CTO / Product Owner}
- 📋 Business justification: {regulatory deadline, contractual obligation}
- 📅 Waiver expiry: {date - does NOT apply to future releases}
- 🔧 Remediation plan: {fix in next release, due date}

**Action:** Deploy with business approval, aggressive monitoring, fix ASAP

**Important:** Waivers NEVER apply to P0 security issues or data corruption risks

---

## Coverage Classifications (Phase 1)

- **FULL** ✅ - All scenarios validated at appropriate level(s)
- **PARTIAL** ⚠️ - Some coverage but missing edge cases or levels
- **NONE** ❌ - No test coverage at any level
- **UNIT-ONLY** ⚠️ - Only unit tests (missing integration/E2E validation)
- **INTEGRATION-ONLY** ⚠️ - Only API/Component tests (missing unit confidence)
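
For example, a PARTIAL entry in the matrix might read as follows (criterion and wording are illustrative):

```markdown
**AC-4: Session timeout redirects to login** - PARTIAL ⚠️

- Covered: 1.3-UNIT-007 (timeout logic)
- Missing: E2E validation of the redirect journey
```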

---

## Quality Gates

| Priority | Coverage Requirement | Pass Rate Requirement | Severity | Action             |
| -------- | -------------------- | --------------------- | -------- | ------------------ |
| P0       | 100%                 | 100%                  | BLOCKER  | Do not release     |
| P1       | 90%                  | 95%                   | HIGH     | Block PR merge     |
| P2       | 80% (recommended)    | 85% (recommended)     | MEDIUM   | Address in nightly |
| P3       | No requirement       | No requirement        | LOW      | Optional           |

---

## Configuration

### workflow.yaml Variables

```yaml
variables:
  # Target specification
  story_file: '' # Path to story markdown
  acceptance_criteria: '' # Inline criteria if no story

  # Test discovery
  test_dir: '{project-root}/tests'
  auto_discover_tests: true

  # Traceability configuration
  coverage_levels: 'e2e,api,component,unit'
  map_by_test_id: true
  map_by_describe: true
  map_by_filename: true

  # Gap analysis
  prioritize_by_risk: true
  suggest_missing_tests: true
  check_duplicate_coverage: true

  # Output configuration
  output_file: '{output_folder}/traceability-matrix.md'
  generate_gate_yaml: true
  generate_coverage_badge: true
  update_story_file: true

  # Quality gates (Phase 1 recommendations)
  min_p0_coverage: 100
  min_p1_coverage: 90
  min_overall_coverage: 80

  # PHASE 2: Gate Decision Variables
  enable_gate_decision: true # Run gate decision after traceability

  # Gate target specification
  gate_type: 'story' # story | epic | release | hotfix

  # Gate decision configuration
  decision_mode: 'deterministic' # deterministic | manual
  allow_waivers: true
  require_evidence: true

  # Input sources for gate
  nfr_file: '' # Path to nfr-assessment.md (optional)
  test_results: '' # Path to test execution results (required for Phase 2)

  # Decision criteria thresholds
  min_p0_pass_rate: 100
  min_p1_pass_rate: 95
  min_overall_pass_rate: 90
  max_critical_nfrs_fail: 0
  max_security_issues: 0

  # Risk tolerance
  allow_p2_failures: true
  allow_p3_failures: true
  escalate_p1_failures: true

  # Gate output configuration
  gate_output_file: '{output_folder}/gate-decision-{gate_type}-{story_id}.md'
  append_to_history: true
  notify_stakeholders: true

  # Advanced gate options
  check_all_workflows_complete: true
  validate_evidence_freshness: true
  require_sign_off: false
```
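
When invoking the workflow ad hoc, the documented flags map directly to these variables; by analogy, other variables presumably map to flags the same way, though that mapping is an assumption worth verifying against your installed version:

```bash
# --story-file, --test-results, and --allow-waivers appear in the examples
# below; treat any other flag spelling as an unverified assumption.
bmad tea *trace \
  --story-file "bmad/output/story-1.3.md" \
  --test-results "ci-artifacts/test-report.xml" \
  --allow-waivers false
```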

---

## Knowledge Base Integration

This workflow automatically loads relevant knowledge fragments:

**Phase 1 (Traceability):**

- `traceability.md` - Requirements mapping patterns
- `test-priorities.md` - P0/P1/P2/P3 risk framework
- `risk-governance.md` - Risk-based testing approach
- `test-quality.md` - Definition of Done for tests
- `selective-testing.md` - Duplicate coverage patterns

**Phase 2 (Gate Decision):**

- `risk-governance.md` - Quality gate criteria and decision framework
- `probability-impact.md` - Risk scoring for residual risks
- `test-quality.md` - Quality standards validation
- `test-priorities.md` - Priority classification framework

---

## Example Scenarios

### Example 1: Full Coverage with Gate PASS

```bash
# Validate coverage and make gate decision
bmad tea *trace --story-file "bmad/output/story-1.3.md" \
  --test-results "ci-artifacts/test-report.xml"
```

**Phase 1 Output:**

```markdown
# Traceability Matrix - Story 1.3

## Coverage Summary

| Priority | Total | FULL | Coverage % | Status  |
| -------- | ----- | ---- | ---------- | ------- |
| P0       | 3     | 3    | 100%       | ✅ PASS |
| P1       | 5     | 5    | 100%       | ✅ PASS |

Gate Status: Ready for Phase 2 ✅
```

**Phase 2 Output:**

```markdown
# Quality Gate Decision: Story 1.3

**Decision**: ✅ PASS

Evidence:

- P0 Coverage: 100% ✅
- P1 Coverage: 100% ✅
- P0 Pass Rate: 100% (12/12 tests) ✅
- P1 Pass Rate: 98% (45/46 tests) ✅
- Overall Pass Rate: 96% ✅

Next Steps:

1. Deploy to staging
2. Monitor for 24 hours
3. Deploy to production
```

---

### Example 2: Gap Identification with CONCERNS Decision

```bash
# Find gaps and evaluate readiness
bmad tea *trace --story-file "bmad/output/story-2.1.md" \
  --test-results "ci-artifacts/test-report.xml"
```

**Phase 1 Output:**

```markdown
## Gap Analysis

### Critical Gaps (BLOCKER)

- None ✅

### High Priority Gaps (PR BLOCKER)

1. **AC-3: Password reset email edge cases**
   - Recommend: Add 2.1-API-001 (email service integration)
   - Impact: Users may not recover accounts in error scenarios
```

**Phase 2 Output:**

```markdown
# Quality Gate Decision: Story 2.1

**Decision**: ⚠️ CONCERNS

Evidence:

- P0 Coverage: 100% ✅
- P1 Coverage: 88% ⚠️ (below 90%)
- Test Pass Rate: 96% ✅

Residual Risks:

- AC-3 missing API test for email error handling

Next Steps:

- Deploy with monitoring
- Create follow-up story for AC-3 test
- Monitor production for edge cases
```

---

### Example 3: Critical Blocker with FAIL Decision

```bash
# Critical issues detected
bmad tea *trace --story-file "bmad/output/story-3.2.md" \
  --test-results "ci-artifacts/test-report.xml"
```

**Phase 1 Output:**

```markdown
## Gap Analysis

### Critical Gaps (BLOCKER)

1. **AC-2: Invalid login security validation**
   - Priority: P0
   - Status: NONE (no tests)
   - Impact: Security vulnerability - users can bypass login
```

**Phase 2 Output:**

```markdown
# Quality Gate Decision: Story 3.2

**Decision**: ❌ FAIL

Critical Blockers:

- P0 Coverage: 80% ❌ (AC-2 missing)
- Security Risk: Login bypass vulnerability

Next Steps:

1. BLOCK DEPLOYMENT IMMEDIATELY
2. Add P0 test for AC-2: 3.2-E2E-004
3. Re-run full test suite
4. Re-run gate after fixes verified
```

---

### Example 4: Business Override with WAIVED Decision

```bash
# FAIL with business waiver
bmad tea *trace --story-file "bmad/output/release-2.4.0.md" \
  --test-results "ci-artifacts/test-report.xml" \
  --allow-waivers true
```

**Phase 2 Output:**

```markdown
# Quality Gate Decision: Release 2.4.0

**Original Decision**: ❌ FAIL
**Final Decision**: 🔓 WAIVED

Waiver Details:

- Approver: Jane Doe, VP Engineering
- Reason: GDPR compliance deadline (regulatory, Oct 15)
- Expiry: 2025-10-15 (does NOT apply to v2.5.0)
- Monitoring: Enhanced error tracking
- Remediation: Fix in v2.4.1 hotfix (due Oct 20)

Business Justification:
Release contains critical GDPR features required by law. The failed
test affects a legacy feature used by <1% of users. A workaround is available.

Next Steps:

1. Deploy v2.4.0 with waiver approval
2. Monitor error rates aggressively
3. Fix issue in v2.4.1 (Oct 20)
```

---

## Troubleshooting

### Phase 1 Issues

#### "No tests found for this story"

- Run `*atdd` workflow first to generate failing acceptance tests
- Check test file naming conventions (may not match story ID pattern)
- Verify test directory path is correct (`test_dir` variable)

#### "Cannot determine coverage status"

- Tests may lack explicit mapping (no test IDs, unclear describe blocks)
- Add test IDs: `{STORY_ID}-{LEVEL}-{SEQ}` (e.g., `1.3-E2E-001`)
- Use Given-When-Then narrative in test descriptions
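
IDs following the `{STORY_ID}-{LEVEL}-{SEQ}` pattern decompose like this:

```
1.3-E2E-001   → story 1.3, E2E level, test 001
1.3-API-002   → story 1.3, API level, test 002
1.3-UNIT-014  → story 1.3, unit level, test 014
```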
#### "P0 coverage below 100%"
|
||||
|
||||
- This is a **BLOCKER** - do not release
|
||||
- Identify missing P0 tests in gap analysis
|
||||
- Run `*atdd` workflow to generate missing tests
|
||||
- Verify P0 classification is correct with stakeholders
|
||||
|
||||
#### "Duplicate coverage detected"
|
||||
|
||||
- Review `selective-testing.md` knowledge fragment
|
||||
- Determine if overlap is acceptable (defense in depth) or wasteful
|
||||
- Consolidate tests at appropriate level (logic → unit, journey → E2E)
|
||||
|
||||

### Phase 2 Issues

#### "Test execution results missing"

- Phase 2 gate decision requires `test_results` (CI/CD test reports)
- If missing, Phase 2 is skipped with a warning
- Provide a JUnit XML, TAP, or JSON test report path via the `test_results` variable

#### "Gate decision is FAIL but deployment is needed urgently"

- Request a business waiver (if `allow_waivers: true`)
- Document approver, justification, mitigation plan
- Create follow-up stories to address gaps
- Use WAIVED decision only for non-P0 gaps
- **Never waive**: Security issues, data corruption risks

#### "Assessments are stale (>7 days old)"

- Re-run `*test-design` workflow
- Re-run traceability (Phase 1)
- Re-run `*nfr-assess` workflow
- Update evidence files before gate decision

#### "Unclear decision (edge case)"

- Switch to manual mode: `decision_mode: manual`
- Document assumptions and rationale clearly
- Escalate to tech lead or architect for guidance
- Consider waiver if business-critical

---

## Integration with Other Workflows

### Before Trace

1. **testarch-test-design** - Define test priorities (P0/P1/P2/P3)
2. **testarch-atdd** - Generate failing acceptance tests
3. **testarch-automate** - Expand regression suite

### After Trace (Phase 2 Decision)

- **PASS**: Proceed to deployment workflow
- **CONCERNS**: Deploy with monitoring, create remediation backlog stories
- **FAIL**: Block deployment, fix issues, re-run trace workflow
- **WAIVED**: Deploy with business approval, escalate monitoring

### Complements

- `*trace` → **testarch-nfr-assess** - Use NFR validation in gate decision
- `*trace` → **testarch-test-review** - Flag quality issues for review
- **CI/CD Pipeline** - Use gate YAML for automated quality gates

---

## Best Practices

### Phase 1 - Traceability

1. **Run Trace After Test Implementation**
   - Don't run `*trace` before tests exist (run `*atdd` first)
   - Trace is most valuable after the initial test suite is written

2. **Prioritize by Risk**
   - P0 gaps are BLOCKERS (must fix before release)
   - P1 gaps are HIGH priority (block PR merge)
   - P2 gaps are MEDIUM priority (address in nightly runs)
   - P3 gaps are acceptable (fix if time permits)

3. **Explicit Mapping**
   - Use test IDs (`1.3-E2E-001`) for clear traceability
   - Reference criteria in describe blocks
   - Use Given-When-Then narrative

4. **Avoid Duplicate Coverage**
   - Test each behavior at the appropriate level only
   - Unit tests for logic, E2E for journeys
   - Only overlap for defense in depth on critical paths

### Phase 2 - Gate Decision

5. **Evidence is King**
   - Never make gate decisions without fresh test results
   - Validate evidence freshness (<7 days old)
   - Link to all evidence sources (reports, logs, artifacts)

6. **P0 is Sacred**
   - P0 failures ALWAYS result in FAIL (no exceptions except waivers)
   - P0 = critical user journeys, security, data integrity
   - Waivers require VP/CTO approval + business justification

7. **Waivers are Temporary**
   - A waiver applies ONLY to the specific release
   - The issue must be fixed in the next release
   - Never waive: security, data corruption, compliance violations

8. **CONCERNS is Not PASS**
   - CONCERNS means "deploy with monitoring"
   - Create follow-up stories for issues
   - Do not ignore CONCERNS repeatedly

9. **Automate Gate Integration**
   - Enable `generate_gate_yaml` for CI/CD integration
   - Use YAML snippets in pipeline quality gates (see the sketch below)
   - Export metrics for dashboard visualization
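
As a sketch of point 9, a CI step could parse the generated snippet and fail the pipeline on a FAIL decision. Everything here is illustrative: the snippet path, the `gate.decision` field (borrowed from the illustrative snippet under Secondary Outputs), and the choice of `yq`:

```yaml
# Hypothetical GitHub Actions step - adapt paths and field names to your setup
- name: Enforce BMad quality gate
  run: |
    decision=$(yq '.gate.decision' bmad/output/gate-snippet.yaml)
    echo "Gate decision: $decision"
    if [ "$decision" = "FAIL" ]; then
      echo "Quality gate FAIL - blocking merge"
      exit 1
    fi
```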

---

## Configuration Examples

### Strict Gate (Zero Tolerance)

```yaml
min_p0_coverage: 100
min_p1_coverage: 100
min_overall_coverage: 90
min_p0_pass_rate: 100
min_p1_pass_rate: 100
min_overall_pass_rate: 95
allow_waivers: false
max_security_issues: 0
max_critical_nfrs_fail: 0
```

Use for: Financial systems, healthcare, security-critical features

---

### Balanced Gate (Production Standard - Default)

```yaml
min_p0_coverage: 100
min_p1_coverage: 90
min_overall_coverage: 80
min_p0_pass_rate: 100
min_p1_pass_rate: 95
min_overall_pass_rate: 90
allow_waivers: true
max_security_issues: 0
max_critical_nfrs_fail: 0
```

Use for: Most production releases

---

### Relaxed Gate (Early Development)

```yaml
min_p0_coverage: 100
min_p1_coverage: 80
min_overall_coverage: 70
min_p0_pass_rate: 100
min_p1_pass_rate: 85
min_overall_pass_rate: 80
allow_waivers: true
allow_p2_failures: true
allow_p3_failures: true
```

Use for: Alpha/beta releases, internal tools, proof-of-concept

---

## Related Commands

- `bmad tea *test-design` - Define test priorities and risk assessment
- `bmad tea *atdd` - Generate failing acceptance tests for gaps
- `bmad tea *automate` - Expand regression suite based on gaps
- `bmad tea *nfr-assess` - Validate non-functional requirements (for gate)
- `bmad tea *test-review` - Review test quality issues flagged by trace
- `bmad sm story-done` - Mark story as complete (triggers gate)

---

## Resources

- [Instructions](./instructions.md) - Detailed workflow steps (both phases)
- [Checklist](./checklist.md) - Validation checklist
- [Template](./trace-template.md) - Traceability matrix template
- [Knowledge Base](../../testarch/knowledge/) - Testing best practices

---

<!-- Powered by BMAD-CORE™ -->

@@ -1,260 +0,0 @@

# Workflow Status System

The universal entry point for BMM workflows - answers "what should I do now?" for any agent.

## Overview

The workflow status system provides:

- **Smart project initialization** - Detects existing work and infers project details
- **Simple status tracking** - Key-value pairs for instant parsing
- **Intelligent routing** - Suggests next actions based on current state
- **Modular workflow paths** - Each project type/level has its own clean definition

## Architecture

### Core Components

```
workflow-status/
├── workflow.yaml                  # Main configuration
├── instructions.md                # Status checker (99 lines)
├── workflow-status-template.yaml  # Clean YAML status template
├── project-levels.yaml            # Source of truth for scale definitions
└── paths/                         # Modular workflow definitions
    ├── greenfield-level-0.yaml through level-4.yaml
    ├── brownfield-level-0.yaml through level-4.yaml
    └── game-design.yaml
```

### Related Workflow

```
workflow-init/
├── workflow.yaml      # Initialization configuration
└── instructions.md    # Smart setup (182 lines)
```

## How It Works

### For New Projects

1. User runs `workflow-status`
2. System finds no status file
3. Directs to `workflow-init`
4. Init workflow:
   - Scans for existing work (PRDs, code, etc.)
   - Infers project details from what it finds
   - Asks minimal questions (name + description)
   - Confirms understanding in one step
   - Creates status file with workflow path

### For Existing Projects

1. User runs `workflow-status`
2. System reads status file
3. Shows current state and options:
   - Continue in-progress work
   - Next required step
   - Available optional workflows
4. User picks action

## Status File Format

Clean YAML format with all workflows listed up front:

```yaml
# generated: 2025-10-29
# project: MyProject
# project_type: software
# project_level: 2
# field_type: greenfield
# workflow_path: greenfield-level-2.yaml

workflow_status:
  # Phase 1: Analysis
  brainstorm-project: optional
  research: optional
  product-brief: recommended

  # Phase 2: Planning
  prd: docs/prd.md
  validate-prd: optional
  create-design: conditional

  # Phase 3: Solutioning
  create-architecture: required
  validate-architecture: optional
  solutioning-gate-check: required
```

**Status Values:**

- `required` / `optional` / `recommended` / `conditional` - Not yet started
- `{file-path}` - Completed (e.g., `docs/prd.md`)
- `skipped` - Optional workflow that was skipped

Any agent can instantly parse what they need:

- Read YAML to see all workflows and their status
- Check which workflows are completed vs pending
- Auto-detect existing work by scanning for output files
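
A minimal sketch of such a check, assuming the status file sits at the project root (the path is an assumption):

```bash
# Completed workflows hold an output file path; pending ones hold a keyword.
grep 'prd:' workflow-status.yaml
#   prd: docs/prd.md   -> completed
#   prd: required      -> still pending
```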

## Project Levels

Source of truth: `/src/modules/bmm/README.md` lines 77-85

- **Level 0**: Single atomic change (1 story)
- **Level 1**: Small feature (1-10 stories)
- **Level 2**: Medium project (5-15 stories)
- **Level 3**: Complex system (12-40 stories)
- **Level 4**: Enterprise scale (40+ stories)

## Workflow Paths

Each combination has its own file:

- `greenfield-level-X.yaml` - New projects at each level
- `brownfield-level-X.yaml` - Existing codebases at each level
- `game-design.yaml` - Game projects (all levels)

Benefits:

- Load only what's needed (60 lines vs 750+)
- Easy to maintain individual paths
- Clear separation of concerns
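
A path file might be shaped roughly like the sketch below. The structure is hypothetical, inferred from the status template above; the real files define their own schema:

```yaml
# greenfield-level-2.yaml - illustrative shape only
phases:
  analysis:
    brainstorm-project: optional
    research: optional
    product-brief: recommended
  planning:
    prd: required
    validate-prd: optional
    create-design: conditional
  solutioning:
    create-architecture: required
    solutioning-gate-check: required
```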

## Smart Detection

The init workflow intelligently detects:

**Project Type:**

- Finds GDD → game
- Otherwise → software

**Project Level:**

- Reads PRD epic/story counts
- Analyzes scope descriptions
- Makes educated guess

**Field Type:**

- Finds source code → brownfield
- Only planning docs → greenfield
- Checks git history age

**Documentation Status:**

- Finds index.md → was undocumented
- Good README → documented
- Missing docs → needs documentation

## Usage Examples

### Any Agent Checking Status

```
Agent: workflow-status
Result: "Current: Phase 2 - Planning, Next: prd (pm agent)"
```

### New Project Setup

```
Agent: workflow-status
System: "No status found. Run workflow-init"
Agent: workflow-init
System: "Tell me about your project"
User: "Building a dashboard with user management"
System: "Level 2 greenfield software project. Correct?"
User: "Yes"
System: "Status created! Next: pm agent, run prd"
```

### Smart Inference

```
System finds: prd-dashboard.md with 3 epics
System finds: package.json, src/ directory
System infers: Level 2 brownfield software
User confirms or corrects
```

## Philosophy

**Less Structure, More Intelligence**

Instead of complex if/else logic:

- Trust the LLM to analyze and infer
- Use natural language for corrections
- Keep menus simple and contextual
- Let intelligence emerge from the model

**Result:** A workflow system that feels like talking to a smart assistant, not filling out a form.

## Implementation Details

### workflow-init (6 Steps)

1. **Scan for existing work** - Check for docs, code, git history
2. **Confirm findings** - Show what was detected (if anything)
3. **Gather info** - Name, description, confirm type/level/field
4. **Load path file** - Select appropriate workflow definition
5. **Generate workflow** - Build from path file
6. **Create status file** - Save and show next step

### workflow-status (4 Steps)

1. **Check for status file** - Direct to init if missing
2. **Parse status** - Extract key-value pairs
3. **Display options** - Show current, required, optional
4. **Handle selection** - Execute user's choice

## Best Practices

1. **Let the AI guess** - It's usually right, user corrects if needed
2. **One conversation** - Get all info in Step 3 of init
3. **Simple parsing** - Key-value pairs, not complex structures
4. **Modular paths** - Each scenario in its own file
5. **Trust intelligence** - LLM understands context better than rules

## Integration

Other workflows read the status to coordinate:

- Any workflow can check CURRENT_PHASE
- Workflows can verify prerequisites are complete
- All agents can ask "what should I do?"

**Phase 4 (Implementation):**

- workflow-status only tracks sprint-planning completion
- After sprint-planning, all story/epic tracking happens in sprint-status.yaml
- Phase 4 workflows do NOT read/write workflow-status (except sprint-planning, for prerequisite verification)

The workflow-status.yaml file is the single source of truth for Phases 1-3, and sprint-status.yaml takes over for Phase 4 implementation tracking.

## Benefits

✅ **Smart Detection** - Infers from existing work instead of asking everything
✅ **Minimal Questions** - Just name and description in most cases
✅ **Clean Status** - Simple key-value pairs for instant parsing
✅ **Modular Paths** - 60-line files instead of 750+ line monolith
✅ **Natural Language** - "Tell me about your project" not "Pick 1-12"
✅ **Intelligent Menus** - Shows only relevant options
✅ **Fast Parsing** - Grep instead of complex logic
✅ **Easy Maintenance** - Change one level without affecting others

## Future Enhancements

- Visual progress indicators
- Time tracking and estimates
- Multi-project support
- Team synchronization

---

**This workflow is the front door to BMad Method. Start here to know what to do next.**