docs: massive documentation overhaul + introduce Paige (Documentation Guide agent)

## 📚 Complete Documentation Restructure

**BMM Documentation Hub Created:**
- New centralized documentation system at `src/modules/bmm/docs/`
- 18 comprehensive guides organized by topic (7000+ lines total)
- Clear learning paths for greenfield, brownfield, and quick spec flows
- Professional technical writing standards throughout

**New Documentation:**
- `README.md` - Complete documentation hub with navigation
- `quick-start.md` - 15-minute getting started guide
- `agents-guide.md` - Comprehensive 12-agent reference (45 min read)
- `party-mode.md` - Multi-agent collaboration guide (20 min read)
- `scale-adaptive-system.md` - Deep dive on Levels 0-4 (42 min read)
- `brownfield-guide.md` - Existing codebase development (53 min read)
- `quick-spec-flow.md` - Rapid Level 0-1 development (26 min read)
- `workflows-analysis.md` - Phase 1 workflows (12 min read)
- `workflows-planning.md` - Phase 2 workflows (19 min read)
- `workflows-solutioning.md` - Phase 3 workflows (13 min read)
- `workflows-implementation.md` - Phase 4 workflows (33 min read)
- `workflows-testing.md` - Testing & QA workflows (29 min read)
- `workflow-architecture-reference.md` - Architecture workflow deep-dive
- `workflow-document-project-reference.md` - Document-project workflow reference
- `enterprise-agentic-development.md` - Team collaboration patterns
- `faq.md` - Comprehensive Q&A covering all topics
- `glossary.md` - Complete terminology reference
- `troubleshooting.md` - Common issues and solutions

**Documentation Improvements:**
- Removed all version/date footers (git handles versioning)
- Agent customization docs now include full rebuild process
- Cross-referenced links between all guides
- Reading time estimates for all major docs
- Consistent professional formatting and structure

**Consolidated & Streamlined:**
- Module README (`src/modules/bmm/README.md`) streamlined into a lean signpost
- Root README polished with better hierarchy and clear CTAs
- Moved docs from root `docs/` to module-specific locations
- Better separation of user docs vs. developer reference

## 🤖 New Agent: Paige (Documentation Guide)

**Role:** Technical documentation specialist and information architect

**Expertise:**
- Professional technical writing standards
- Documentation structure and organization
- Information architecture and navigation
- User-focused content design
- Style guide enforcement

**Status:** Work in progress - Paige will evolve as documentation needs grow

**Integration:**
- Listed in agents-guide.md, glossary.md, FAQ
- Available for all phases (documentation is continuous)
- Can be customized like all BMM agents

## 🔧 Additional Changes

- Updated agent manifest with Paige
- Updated workflow manifest with new documentation workflows
- Fixed workflow-to-agent mappings across all guides
- Improved root README with clearer Quick Start section
- Better module structure explanations
- Enhanced community links with Discord channel names

**Total Impact:**
- 18 new/restructured documentation files
- 7000+ lines of professional technical documentation
- Complete navigation system with cross-references
- Clear learning paths for all user types
- Foundation for knowledge base (coming in beta)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: Brian Madison
Date: 2025-11-02 21:18:33 -06:00
Parent: 8a00f8ad70
Commit: cfedecbd53
359 changed files with 72374 additions and 809 deletions

# Requirements Traceability & Quality Gate Workflow
**Workflow ID:** `testarch-trace`
**Agent:** Test Architect (TEA)
**Command:** `bmad tea *trace`
---
## Overview
The **trace** workflow operates in two sequential phases to validate test coverage and deployment readiness:
**PHASE 1 - REQUIREMENTS TRACEABILITY:** Generates a comprehensive requirements-to-tests traceability matrix that maps acceptance criteria to implemented tests, identifies coverage gaps, and provides actionable recommendations.
**PHASE 2 - QUALITY GATE DECISION:** Makes deterministic release decisions (PASS/CONCERNS/FAIL/WAIVED) based on traceability results, test execution evidence, and non-functional requirements validation.
**Key Features:**
- Maps acceptance criteria to specific test cases across all levels (E2E, API, Component, Unit)
- Classifies coverage status (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- Prioritizes gaps by risk level (P0/P1/P2/P3)
- Applies deterministic decision rules for deployment readiness
- Generates gate decisions with evidence and rationale
- Supports waivers for business-approved exceptions
- Updates workflow status and notifies stakeholders
- Creates CI/CD-ready YAML snippets for quality gates
- Detects duplicate coverage across test levels
- Verifies test quality (assertions, structure, performance)
---
## When to Use This Workflow
Use `*trace` when you need to:
### Phase 1 - Traceability
- ✅ Validate that all acceptance criteria have test coverage
- ✅ Identify coverage gaps before release or PR merge
- ✅ Generate traceability documentation for compliance or audits
- ✅ Ensure critical paths (P0/P1) are fully tested
- ✅ Detect duplicate coverage across test levels
- ✅ Assess test quality across your suite
### Phase 2 - Gate Decision (Optional)
- ✅ Make final go/no-go deployment decision
- ✅ Validate test execution results against thresholds
- ✅ Evaluate non-functional requirements (security, performance)
- ✅ Generate audit trail for release approval
- ✅ Handle business waivers for critical deadlines
- ✅ Notify stakeholders of gate decision
**Typical Timing:**
- After tests are implemented (post-ATDD or post-development)
- Before merging a PR (validate P0/P1 coverage)
- Before release (validate full coverage and make gate decision)
- During sprint retrospectives (assess test quality)
---
## Prerequisites
### Phase 1 - Traceability (Required)
- Acceptance criteria (from story file OR inline)
- Implemented test suite (or acknowledged gaps)
### Phase 2 - Gate Decision (Required if `enable_gate_decision: true`)
- Test execution results (CI/CD test reports, pass/fail rates)
- Test design with risk priorities (P0/P1/P2/P3)
### Recommended
- `test-design.md` - Risk assessment and test priorities
- `nfr-assessment.md` - Non-functional requirements validation (for release gates)
- `tech-spec.md` - Technical implementation details
- Test framework configuration (playwright.config.ts, jest.config.js)
**Halt Conditions:**
- Story lacks any tests AND gaps are not acknowledged → Run `*atdd` first
- Acceptance criteria are completely missing → Provide criteria or story file
- Phase 2 enabled but test execution results missing → Warn and skip gate decision
---
## Usage
### Basic Usage (Both Phases)
```bash
bmad tea *trace
```
The workflow will:
1. **Phase 1**: Read story file, extract acceptance criteria, auto-discover tests, generate traceability matrix
2. **Phase 2**: Load test execution results, apply decision rules, generate gate decision document
3. Save traceability matrix to `bmad/output/traceability-matrix.md`
4. Save gate decision to `bmad/output/gate-decision-story-X.X.md`
### Phase 1 Only (Skip Gate Decision)
```bash
bmad tea *trace --enable-gate-decision false
```
### Custom Configuration
```bash
bmad tea *trace \
--story-file "bmad/output/story-1.3.md" \
--test-results "ci-artifacts/test-report.xml" \
--min-p0-coverage 100 \
--min-p1-coverage 90 \
--min-p0-pass-rate 100 \
--min-p1-pass-rate 95
```
### Standalone Mode (No Story File)
```bash
bmad tea *trace --acceptance-criteria "AC-1: User can login with email..."
```
---
## Workflow Steps
### PHASE 1: Requirements Traceability
1. **Load Context** - Read story, test design, tech spec, knowledge base
2. **Discover Tests** - Auto-find tests related to story (by ID, describe blocks, file paths)
3. **Map Criteria** - Link acceptance criteria to specific test cases
4. **Analyze Gaps** - Identify missing coverage and prioritize by risk
5. **Verify Quality** - Check test quality (assertions, structure, performance)
6. **Generate Deliverables** - Create traceability matrix, gate YAML, coverage badge
### PHASE 2: Quality Gate Decision (if `enable_gate_decision: true`)
7. **Gather Evidence** - Load traceability results, test execution reports, NFR assessments
8. **Apply Decision Rules** - Evaluate against thresholds (PASS/CONCERNS/FAIL/WAIVED)
9. **Document Decision** - Create gate decision document with evidence and rationale
10. **Update Status & Notify** - Append to bmm-workflow-status.md, notify stakeholders
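Step 2's multi-strategy discovery can be sketched as a simple scan for test IDs. This is a minimal illustration only; the file glob, ID pattern, and return shape here are assumptions, not the workflow's actual implementation:

```python
import re
from pathlib import Path

# Hypothetical sketch: find tests for a story by scanning test files for
# IDs of the form {STORY_ID}-{LEVEL}-{SEQ}, e.g. "1.3-E2E-001".
TEST_ID = re.compile(r"\b(\d+\.\d+)-(E2E|API|COMPONENT|UNIT)-(\d{3})\b", re.IGNORECASE)

def discover_tests(test_dir: str, story_id: str) -> dict:
    """Map each discovered test ID to the set of files that mention it."""
    found: dict = {}
    for path in Path(test_dir).rglob("*.spec.*"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        for m in TEST_ID.finditer(text):
            if m.group(1) == story_id:
                found.setdefault(m.group(0), set()).add(str(path))
    return found
```

Describe-block and filename matching (the `map_by_describe` / `map_by_filename` options) would layer additional heuristics on top of the same scan.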
---
## Outputs
### Phase 1: Traceability Matrix (`traceability-matrix.md`)
Comprehensive markdown file with:
- Coverage summary table (by priority)
- Detailed criterion-to-test mapping
- Gap analysis with recommendations
- Quality assessment for each test
- Gate YAML snippet
**Example:**
```markdown
# Traceability Matrix - Story 1.3
## Coverage Summary
| Priority | Total | FULL | Coverage % | Status |
| -------- | ----- | ---- | ---------- | ------- |
| P0 | 3 | 3 | 100% | ✅ PASS |
| P1 | 5 | 4 | 80% | ⚠️ WARN |
Gate Status: CONCERNS ⚠️ (P1 coverage below 90%)
```
### Phase 2: Gate Decision Document (`gate-decision-{type}-{id}.md`)
**Decision Document** with:
- **Decision**: PASS / CONCERNS / FAIL / WAIVED with clear rationale
- **Evidence Summary**: Test results, coverage, NFRs, quality validation
- **Decision Criteria Table**: Each criterion with threshold, actual, status
- **Rationale**: Explanation of decision based on evidence
- **Residual Risks**: Unresolved issues (for CONCERNS/WAIVED)
- **Waiver Details**: Approver, justification, remediation plan (for WAIVED)
- **Next Steps**: Action items for each decision type
**Example:**
```markdown
# Quality Gate Decision: Story 1.3 - User Login
**Decision**: ⚠️ CONCERNS
**Date**: 2025-10-15
## Decision Criteria
| Criterion | Threshold | Actual | Status |
| ------------ | --------- | ------ | ------- |
| P0 Coverage | ≥100% | 100% | ✅ PASS |
| P1 Coverage | ≥90% | 88% | ⚠️ FAIL |
| Overall Pass | ≥90% | 96% | ✅ PASS |
**Decision**: CONCERNS (P1 coverage 88% below 90% threshold)
## Next Steps
- Deploy with monitoring
- Create follow-up story for AC-5 test
```
### Secondary Outputs
- **Gate YAML**: Machine-readable snippet for CI/CD integration
- **Status Update**: Appends decision to `bmm-workflow-status.md` history
- **Stakeholder Notification**: Auto-generated summary message
- **Updated Story File**: Traceability section added (optional)
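For reference, a gate YAML snippet might look like the following (the field names and structure are illustrative assumptions, not a documented schema):

```yaml
gate:
  type: story
  target: "1.3"
  decision: CONCERNS
  evidence:
    p0_coverage: 100
    p1_coverage: 88
    overall_pass_rate: 96
  thresholds:
    min_p0_coverage: 100
    min_p1_coverage: 90
```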
---
## Decision Logic (Phase 2)
### PASS Decision ✅
**All criteria met:**
- ✅ P0 coverage ≥ 100%
- ✅ P1 coverage ≥ 90%
- ✅ Overall coverage ≥ 80%
- ✅ P0 test pass rate = 100%
- ✅ P1 test pass rate ≥ 95%
- ✅ Overall test pass rate ≥ 90%
- ✅ Security issues = 0
- ✅ Critical NFR failures = 0
**Action:** Deploy to production with standard monitoring
---
### CONCERNS Decision ⚠️
**P0 criteria met, but P1 criteria degraded:**
- ✅ P0 coverage = 100%
- ⚠️ P1 coverage 80-89% (below 90% threshold)
- ⚠️ P1 test pass rate 90-94% (below 95% threshold)
- ✅ No security issues
- ✅ No critical NFR failures
**Residual Risks:** Minor P1 issues, edge cases, non-critical gaps
**Action:** Deploy with enhanced monitoring, create backlog stories for fixes
**Note:** CONCERNS does NOT block deployment but requires acknowledgment
---
### FAIL Decision ❌
**Any blocking criterion failed:**
- ❌ P0 coverage <100% (missing critical tests)
- OR ❌ P0 test pass rate <100% (failing critical tests)
- OR ❌ P1 coverage <80% (significant gap)
- OR ❌ Security issues >0
- OR ❌ Critical NFR failures >0
**Critical Blockers:** P0 test failures, security vulnerabilities, critical NFRs
**Action:** Block deployment, fix critical issues, re-run gate after fixes
---
### WAIVED Decision 🔓
**FAIL status + business-approved waiver:**
- ❌ Original decision: FAIL
- 🔓 Waiver approved by: {VP Engineering / CTO / Product Owner}
- 📋 Business justification: {regulatory deadline, contractual obligation}
- 📅 Waiver expiry: {date - does NOT apply to future releases}
- 🔧 Remediation plan: {fix in next release, due date}
**Action:** Deploy with business approval, aggressive monitoring, fix ASAP
**Important:** Waivers NEVER apply to P0 security issues or data corruption risks
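Under `decision_mode: deterministic`, the four outcomes above reduce to a small rule function. A minimal sketch, assuming an evidence dict shaped like the thresholds in workflow.yaml (waiver handling is simplified to the security guard):

```python
# Hypothetical sketch of the deterministic gate rules described above.
# Threshold values mirror the workflow.yaml defaults.
def gate_decision(ev: dict, waiver_approved: bool = False) -> str:
    failed = (
        ev["p0_coverage"] < 100 or ev["p0_pass_rate"] < 100
        or ev["p1_coverage"] < 80
        or ev["security_issues"] > 0 or ev["critical_nfr_failures"] > 0
    )
    if failed:
        # Waivers never cover security issues (or other P0-class risks).
        if waiver_approved and ev["security_issues"] == 0:
            return "WAIVED"
        return "FAIL"
    if (ev["p1_coverage"] < 90 or ev["p1_pass_rate"] < 95
            or ev["overall_pass_rate"] < 90 or ev["overall_coverage"] < 80):
        return "CONCERNS"
    return "PASS"
```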
---
## Coverage Classifications (Phase 1)
- **FULL** ✅ - All scenarios validated at appropriate level(s)
- **PARTIAL** ⚠️ - Some coverage but missing edge cases or levels
- **NONE** ❌ - No test coverage at any level
- **UNIT-ONLY** ⚠️ - Only unit tests (missing integration/E2E validation)
- **INTEGRATION-ONLY** ⚠️ - Only API/Component tests (missing unit confidence)
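One way to derive these labels is a sketch like the following; the inputs are assumptions, since the real workflow infers levels and edge-case coverage from the mapped tests:

```python
# Hypothetical classifier for a single criterion's coverage status.
def classify_coverage(levels: set, edge_cases_covered: bool = True) -> str:
    """levels: test levels that cover the criterion, e.g. {"e2e", "unit"}."""
    if not levels:
        return "NONE"
    if levels == {"unit"}:
        return "UNIT-ONLY"
    if levels <= {"api", "component"}:
        return "INTEGRATION-ONLY"
    return "FULL" if edge_cases_covered else "PARTIAL"
```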
---
## Quality Gates
| Priority | Coverage Requirement | Pass Rate Requirement | Severity | Action |
| -------- | -------------------- | --------------------- | -------- | ------------------ |
| P0 | 100% | 100% | BLOCKER | Do not release |
| P1 | 90% | 95% | HIGH | Block PR merge |
| P2 | 80% (recommended) | 85% (recommended) | MEDIUM | Address in nightly |
| P3 | No requirement | No requirement | LOW | Optional |
---
## Configuration
### workflow.yaml Variables
```yaml
variables:
# Target specification
story_file: '' # Path to story markdown
acceptance_criteria: '' # Inline criteria if no story
# Test discovery
test_dir: '{project-root}/tests'
auto_discover_tests: true
# Traceability configuration
coverage_levels: 'e2e,api,component,unit'
map_by_test_id: true
map_by_describe: true
map_by_filename: true
# Gap analysis
prioritize_by_risk: true
suggest_missing_tests: true
check_duplicate_coverage: true
# Output configuration
output_file: '{output_folder}/traceability-matrix.md'
generate_gate_yaml: true
generate_coverage_badge: true
update_story_file: true
# Quality gates (Phase 1 recommendations)
min_p0_coverage: 100
min_p1_coverage: 90
min_overall_coverage: 80
# PHASE 2: Gate Decision Variables
enable_gate_decision: true # Run gate decision after traceability
# Gate target specification
gate_type: 'story' # story | epic | release | hotfix
# Gate decision configuration
decision_mode: 'deterministic' # deterministic | manual
allow_waivers: true
require_evidence: true
# Input sources for gate
nfr_file: '' # Path to nfr-assessment.md (optional)
test_results: '' # Path to test execution results (required for Phase 2)
# Decision criteria thresholds
min_p0_pass_rate: 100
min_p1_pass_rate: 95
min_overall_pass_rate: 90
max_critical_nfrs_fail: 0
max_security_issues: 0
# Risk tolerance
allow_p2_failures: true
allow_p3_failures: true
escalate_p1_failures: true
# Gate output configuration
gate_output_file: '{output_folder}/gate-decision-{gate_type}-{story_id}.md'
append_to_history: true
notify_stakeholders: true
# Advanced gate options
check_all_workflows_complete: true
validate_evidence_freshness: true
require_sign_off: false
```
---
## Knowledge Base Integration
This workflow automatically loads relevant knowledge fragments:
**Phase 1 (Traceability):**
- `traceability.md` - Requirements mapping patterns
- `test-priorities.md` - P0/P1/P2/P3 risk framework
- `risk-governance.md` - Risk-based testing approach
- `test-quality.md` - Definition of Done for tests
- `selective-testing.md` - Duplicate coverage patterns
**Phase 2 (Gate Decision):**
- `risk-governance.md` - Quality gate criteria and decision framework
- `probability-impact.md` - Risk scoring for residual risks
- `test-quality.md` - Quality standards validation
- `test-priorities.md` - Priority classification framework
---
## Example Scenarios
### Example 1: Full Coverage with Gate PASS
```bash
# Validate coverage and make gate decision
bmad tea *trace --story-file "bmad/output/story-1.3.md" \
--test-results "ci-artifacts/test-report.xml"
```
**Phase 1 Output:**
```markdown
# Traceability Matrix - Story 1.3
## Coverage Summary
| Priority | Total | FULL | Coverage % | Status |
| -------- | ----- | ---- | ---------- | ------- |
| P0 | 3 | 3 | 100% | ✅ PASS |
| P1 | 5 | 5 | 100% | ✅ PASS |
Gate Status: Ready for Phase 2 ✅
```
**Phase 2 Output:**
```markdown
# Quality Gate Decision: Story 1.3
**Decision**: ✅ PASS
Evidence:
- P0 Coverage: 100% ✅
- P1 Coverage: 100% ✅
- P0 Pass Rate: 100% (12/12 tests) ✅
- P1 Pass Rate: 98% (45/46 tests) ✅
- Overall Pass Rate: 96% ✅
Next Steps:
1. Deploy to staging
2. Monitor for 24 hours
3. Deploy to production
```
---
### Example 2: Gap Identification with CONCERNS Decision
```bash
# Find gaps and evaluate readiness
bmad tea *trace --story-file "bmad/output/story-2.1.md" \
--test-results "ci-artifacts/test-report.xml"
```
**Phase 1 Output:**
```markdown
## Gap Analysis
### Critical Gaps (BLOCKER)
- None ✅
### High Priority Gaps (PR BLOCKER)
1. **AC-3: Password reset email edge cases**
- Recommend: Add 1.3-API-001 (email service integration)
- Impact: Users may not recover accounts in error scenarios
```
**Phase 2 Output:**
```markdown
# Quality Gate Decision: Story 2.1
**Decision**: ⚠️ CONCERNS
Evidence:
- P0 Coverage: 100% ✅
- P1 Coverage: 88% ⚠️ (below 90%)
- Test Pass Rate: 96% ✅
Residual Risks:
- AC-3 missing E2E test for email error handling
Next Steps:
- Deploy with monitoring
- Create follow-up story for AC-3 test
- Monitor production for edge cases
```
---
### Example 3: Critical Blocker with FAIL Decision
```bash
# Critical issues detected
bmad tea *trace --story-file "bmad/output/story-3.2.md" \
--test-results "ci-artifacts/test-report.xml"
```
**Phase 1 Output:**
```markdown
## Gap Analysis
### Critical Gaps (BLOCKER)
1. **AC-2: Invalid login security validation**
- Priority: P0
- Status: NONE (no tests)
- Impact: Security vulnerability - users can bypass login
```
**Phase 2 Output:**
```markdown
# Quality Gate Decision: Story 3.2
**Decision**: ❌ FAIL
Critical Blockers:
- P0 Coverage: 80% ❌ (AC-2 missing)
- Security Risk: Login bypass vulnerability
Next Steps:
1. BLOCK DEPLOYMENT IMMEDIATELY
2. Add P0 test for AC-2: 1.3-E2E-004
3. Re-run full test suite
4. Re-run gate after fixes verified
```
---
### Example 4: Business Override with WAIVED Decision
```bash
# FAIL with business waiver
bmad tea *trace --story-file "bmad/output/release-2.4.0.md" \
--test-results "ci-artifacts/test-report.xml" \
--allow-waivers true
```
**Phase 2 Output:**
```markdown
# Quality Gate Decision: Release 2.4.0
**Original Decision**: ❌ FAIL
**Final Decision**: 🔓 WAIVED
Waiver Details:
- Approver: Jane Doe, VP Engineering
- Reason: GDPR compliance deadline (regulatory, Oct 15)
- Expiry: 2025-10-15 (does NOT apply to v2.5.0)
- Monitoring: Enhanced error tracking
- Remediation: Fix in v2.4.1 hotfix (due Oct 20)
Business Justification:
Release contains critical GDPR features required by law. Failed
test affects legacy feature used by <1% of users. Workaround available.
Next Steps:
1. Deploy v2.4.0 with waiver approval
2. Monitor error rates aggressively
3. Fix issue in v2.4.1 (Oct 20)
```
---
## Troubleshooting
### Phase 1 Issues
#### "No tests found for this story"
- Run `*atdd` workflow first to generate failing acceptance tests
- Check test file naming conventions (may not match story ID pattern)
- Verify test directory path is correct (`test_dir` variable)
#### "Cannot determine coverage status"
- Tests may lack explicit mapping (no test IDs, unclear describe blocks)
- Add test IDs: `{STORY_ID}-{LEVEL}-{SEQ}` (e.g., `1.3-E2E-001`)
- Use Given-When-Then narrative in test descriptions
#### "P0 coverage below 100%"
- This is a **BLOCKER** - do not release
- Identify missing P0 tests in gap analysis
- Run `*atdd` workflow to generate missing tests
- Verify P0 classification is correct with stakeholders
#### "Duplicate coverage detected"
- Review `selective-testing.md` knowledge fragment
- Determine if overlap is acceptable (defense in depth) or wasteful
- Consolidate tests at appropriate level (logic → unit, journey → E2E)
### Phase 2 Issues
#### "Test execution results missing"
- Phase 2 gate decision requires `test_results` (CI/CD test reports)
- If missing, Phase 2 will be skipped with warning
- Provide JUnit XML, TAP, or JSON test report path via `test_results` variable
#### "Gate decision is FAIL but deployment needed urgently"
- Request business waiver (if `allow_waivers: true`)
- Document approver, justification, mitigation plan
- Create follow-up stories to address gaps
- Use WAIVED decision only for non-P0 gaps
- **Never waive**: Security issues, data corruption risks
#### "Assessments are stale (>7 days old)"
- Re-run `*test-design` workflow
- Re-run traceability (Phase 1)
- Re-run `*nfr-assess` workflow
- Update evidence files before gate decision
#### "Unclear decision (edge case)"
- Switch to manual mode: `decision_mode: manual`
- Document assumptions and rationale clearly
- Escalate to tech lead or architect for guidance
- Consider waiver if business-critical
---
## Integration with Other Workflows
### Before Trace
1. **testarch-test-design** - Define test priorities (P0/P1/P2/P3)
2. **testarch-atdd** - Generate failing acceptance tests
3. **testarch-automate** - Expand regression suite
### After Trace (Phase 2 Decision)
- **PASS**: Proceed to deployment workflow
- **CONCERNS**: Deploy with monitoring, create remediation backlog stories
- **FAIL**: Block deployment, fix issues, re-run trace workflow
- **WAIVED**: Deploy with business approval, escalate monitoring
### Complements
- `*trace` + **testarch-nfr-assess** - Use NFR validation in the gate decision
- `*trace` + **testarch-test-review** - Flag quality issues for review
- **CI/CD Pipeline** - Use gate YAML for automated quality gates
---
## Best Practices
### Phase 1 - Traceability
1. **Run Trace After Test Implementation**
- Don't run `*trace` before tests exist (run `*atdd` first)
- Trace is most valuable after initial test suite is written
2. **Prioritize by Risk**
   - P0 gaps are BLOCKERS (must fix before release)
   - P1 gaps are HIGH priority (block PR merge)
   - P2 gaps are MEDIUM priority (address in nightly runs)
   - P3 gaps are acceptable (fix if time permits)
3. **Explicit Mapping**
- Use test IDs (`1.3-E2E-001`) for clear traceability
- Reference criteria in describe blocks
- Use Given-When-Then narrative
4. **Avoid Duplicate Coverage**
- Test each behavior at appropriate level only
- Unit tests for logic, E2E for journeys
- Only overlap for defense in depth on critical paths
### Phase 2 - Gate Decision
5. **Evidence is King**
- Never make gate decisions without fresh test results
- Validate evidence freshness (<7 days old)
- Link to all evidence sources (reports, logs, artifacts)
6. **P0 is Sacred**
   - P0 failures ALWAYS result in FAIL (the only exception is an approved waiver)
- P0 = Critical user journeys, security, data integrity
- Waivers require VP/CTO approval + business justification
7. **Waivers are Temporary**
- Waiver applies ONLY to specific release
- Issue must be fixed in next release
- Never waive: security, data corruption, compliance violations
8. **CONCERNS is Not PASS**
- CONCERNS means "deploy with monitoring"
- Create follow-up stories for issues
- Do not ignore CONCERNS repeatedly
9. **Automate Gate Integration**
- Enable `generate_gate_yaml` for CI/CD integration
- Use YAML snippets in pipeline quality gates
- Export metrics for dashboard visualization
---
## Configuration Examples
### Strict Gate (Zero Tolerance)
```yaml
min_p0_coverage: 100
min_p1_coverage: 100
min_overall_coverage: 90
min_p0_pass_rate: 100
min_p1_pass_rate: 100
min_overall_pass_rate: 95
allow_waivers: false
max_security_issues: 0
max_critical_nfrs_fail: 0
```
Use for: Financial systems, healthcare, security-critical features
---
### Balanced Gate (Production Standard - Default)
```yaml
min_p0_coverage: 100
min_p1_coverage: 90
min_overall_coverage: 80
min_p0_pass_rate: 100
min_p1_pass_rate: 95
min_overall_pass_rate: 90
allow_waivers: true
max_security_issues: 0
max_critical_nfrs_fail: 0
```
Use for: Most production releases
---
### Relaxed Gate (Early Development)
```yaml
min_p0_coverage: 100
min_p1_coverage: 80
min_overall_coverage: 70
min_p0_pass_rate: 100
min_p1_pass_rate: 85
min_overall_pass_rate: 80
allow_waivers: true
allow_p2_failures: true
allow_p3_failures: true
```
Use for: Alpha/beta releases, internal tools, proof-of-concept
---
## Related Commands
- `bmad tea *test-design` - Define test priorities and risk assessment
- `bmad tea *atdd` - Generate failing acceptance tests for gaps
- `bmad tea *automate` - Expand regression suite based on gaps
- `bmad tea *nfr-assess` - Validate non-functional requirements (for gate)
- `bmad tea *test-review` - Review test quality issues flagged by trace
- `bmad sm story-done` - Mark story as complete (triggers gate)
---
## Resources
- [Instructions](./instructions.md) - Detailed workflow steps (both phases)
- [Checklist](./checklist.md) - Validation checklist
- [Template](./trace-template.md) - Traceability matrix template
- [Knowledge Base](../../testarch/knowledge/) - Testing best practices
---
<!-- Powered by BMAD-CORE™ -->

# Requirements Traceability & Gate Decision - Validation Checklist
**Workflow:** `testarch-trace`
**Purpose:** Ensure complete traceability matrix with actionable gap analysis AND make deployment readiness decision (PASS/CONCERNS/FAIL/WAIVED)
This checklist covers **two sequential phases**:
- **PHASE 1**: Requirements Traceability (always executed)
- **PHASE 2**: Quality Gate Decision (executed if `enable_gate_decision: true`)
---
# PHASE 1: REQUIREMENTS TRACEABILITY
## Prerequisites Validation
- [ ] Acceptance criteria are available (from story file OR inline)
- [ ] Test suite exists (or gaps are acknowledged and documented)
- [ ] Test directory path is correct (`test_dir` variable)
- [ ] Story file is accessible (if using BMad mode)
- [ ] Knowledge base is loaded (test-priorities, traceability, risk-governance)
---
## Context Loading
- [ ] Story file read successfully (if applicable)
- [ ] Acceptance criteria extracted correctly
- [ ] Story ID identified (e.g., 1.3)
- [ ] `test-design.md` loaded (if available)
- [ ] `tech-spec.md` loaded (if available)
- [ ] `PRD.md` loaded (if available)
- [ ] Relevant knowledge fragments loaded from `tea-index.csv`
---
## Test Discovery and Cataloging
- [ ] Tests auto-discovered using multiple strategies (test IDs, describe blocks, file paths)
- [ ] Tests categorized by level (E2E, API, Component, Unit)
- [ ] Test metadata extracted:
- [ ] Test IDs (e.g., 1.3-E2E-001)
- [ ] Describe/context blocks
- [ ] It blocks (individual test cases)
- [ ] Given-When-Then structure (if BDD)
- [ ] Priority markers (P0/P1/P2/P3)
- [ ] All relevant test files found (no tests missed due to naming conventions)
---
## Criteria-to-Test Mapping
- [ ] Each acceptance criterion mapped to tests (or marked as NONE)
- [ ] Explicit references found (test IDs, describe blocks mentioning criterion)
- [ ] Test level documented (E2E, API, Component, Unit)
- [ ] Given-When-Then narrative verified for alignment
- [ ] Traceability matrix table generated:
- [ ] Criterion ID
- [ ] Description
- [ ] Test ID
- [ ] Test File
- [ ] Test Level
- [ ] Coverage Status
---
## Coverage Classification
- [ ] Coverage status classified for each criterion:
- [ ] **FULL** - All scenarios validated at appropriate level(s)
- [ ] **PARTIAL** - Some coverage but missing edge cases or levels
- [ ] **NONE** - No test coverage at any level
- [ ] **UNIT-ONLY** - Only unit tests (missing integration/E2E validation)
- [ ] **INTEGRATION-ONLY** - Only API/Component tests (missing unit confidence)
- [ ] Classification justifications provided
- [ ] Edge cases considered in FULL vs PARTIAL determination
---
## Duplicate Coverage Detection
- [ ] Duplicate coverage checked across test levels
- [ ] Acceptable overlap identified (defense in depth for critical paths)
- [ ] Unacceptable duplication flagged (same validation at multiple levels)
- [ ] Recommendations provided for consolidation
- [ ] Selective testing principles applied
---
## Gap Analysis
- [ ] Coverage gaps identified:
- [ ] Criteria with NONE status
- [ ] Criteria with PARTIAL status
- [ ] Criteria with UNIT-ONLY status
- [ ] Criteria with INTEGRATION-ONLY status
- [ ] Gaps prioritized by risk level using test-priorities framework:
- [ ] **CRITICAL** - P0 criteria without FULL coverage (BLOCKER)
- [ ] **HIGH** - P1 criteria without FULL coverage (PR blocker)
- [ ] **MEDIUM** - P2 criteria without FULL coverage (nightly gap)
- [ ] **LOW** - P3 criteria without FULL coverage (acceptable)
- [ ] Specific test recommendations provided for each gap:
- [ ] Suggested test level (E2E, API, Component, Unit)
- [ ] Test description (Given-When-Then)
- [ ] Recommended test ID (e.g., 1.3-E2E-004)
- [ ] Explanation of why test is needed
---
## Coverage Metrics
- [ ] Overall coverage percentage calculated (FULL coverage / total criteria)
- [ ] P0 coverage percentage calculated
- [ ] P1 coverage percentage calculated
- [ ] P2 coverage percentage calculated (if applicable)
- [ ] Coverage by level calculated:
- [ ] E2E coverage %
- [ ] API coverage %
- [ ] Component coverage %
- [ ] Unit coverage %
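The percentages above follow from counting FULL-status criteria. A sketch of the arithmetic (the `(priority, status)` tuple shape is an assumption for illustration):

```python
# Hypothetical coverage-metric calculation: share of criteria with FULL
# status, overall and broken down per priority.
def coverage_metrics(criteria: list) -> dict:
    """criteria: list of (priority, status) tuples, e.g. ("P0", "FULL")."""
    def pct(items):
        if not items:
            return None  # no criteria at this priority
        return round(100 * sum(1 for _, s in items if s == "FULL") / len(items), 1)
    metrics = {"overall": pct(criteria)}
    for prio in ("P0", "P1", "P2", "P3"):
        metrics[prio] = pct([c for c in criteria if c[0] == prio])
    return metrics
```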
---
## Test Quality Verification
For each mapped test, verify:
- [ ] Explicit assertions are present (not hidden in helpers)
- [ ] Test follows Given-When-Then structure
- [ ] No hard waits or sleeps (deterministic waiting only)
- [ ] Self-cleaning (test cleans up its data)
- [ ] File size < 300 lines
- [ ] Test duration < 90 seconds
Quality issues flagged:
- [ ] **BLOCKER** issues identified (missing assertions, hard waits, flaky patterns)
- [ ] **WARNING** issues identified (large files, slow tests, unclear structure)
- [ ] **INFO** issues identified (style inconsistencies, missing documentation)
Knowledge fragments referenced:
- [ ] `test-quality.md` for Definition of Done
- [ ] `fixture-architecture.md` for self-cleaning patterns
- [ ] `network-first.md` for Playwright best practices
- [ ] `data-factories.md` for test data patterns
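Two of the mechanical checks above (hard waits, file size) lend themselves to a quick scan. A sketch, with detection patterns that are assumptions rather than the workflow's actual logic:

```python
import re
from pathlib import Path

# Hypothetical quality scan for a single test file.
def quality_issues(test_file: str, max_lines: int = 300) -> list:
    """Return (severity, message) tuples for detected issues."""
    lines = Path(test_file).read_text(encoding="utf-8", errors="ignore").splitlines()
    issues = []
    if len(lines) > max_lines:
        issues.append(("WARNING", f"file exceeds {max_lines} lines"))
    for n, line in enumerate(lines, 1):
        if re.search(r"waitForTimeout|\bsleep\s*\(", line):
            issues.append(("BLOCKER", f"hard wait on line {n}"))
    return issues
```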
---
## Phase 1 Deliverables Generated
### Traceability Matrix Markdown
- [ ] File created at `{output_folder}/traceability-matrix.md`
- [ ] Template from `trace-template.md` used
- [ ] Full mapping table included
- [ ] Coverage status section included
- [ ] Gap analysis section included
- [ ] Quality assessment section included
- [ ] Recommendations section included
### Coverage Badge/Metric (if enabled)
- [ ] Badge markdown generated
- [ ] Metrics exported to JSON for CI/CD integration
### Updated Story File (if enabled)
- [ ] "Traceability" section added to story markdown
- [ ] Link to traceability matrix included
- [ ] Coverage summary included
---
## Phase 1 Quality Assurance
### Accuracy Checks
- [ ] All acceptance criteria accounted for (none skipped)
- [ ] Test IDs correctly formatted (e.g., 1.3-E2E-001)
- [ ] File paths are correct and accessible
- [ ] Coverage percentages calculated correctly
- [ ] No false positives (tests incorrectly mapped to criteria)
- [ ] No false negatives (existing tests missed in mapping)
### Completeness Checks
- [ ] All test levels considered (E2E, API, Component, Unit)
- [ ] All priorities considered (P0, P1, P2, P3)
- [ ] All coverage statuses used appropriately (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- [ ] All gaps have recommendations
- [ ] All quality issues have severity and remediation guidance
### Actionability Checks
- [ ] Recommendations are specific (not generic)
- [ ] Test IDs suggested for new tests
- [ ] Given-When-Then provided for recommended tests
- [ ] Impact explained for each gap
- [ ] Priorities clear (CRITICAL, HIGH, MEDIUM, LOW)
---
## Phase 1 Documentation
- [ ] Traceability matrix is readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
---
# PHASE 2: QUALITY GATE DECISION
**Note**: Phase 2 executes only if `enable_gate_decision: true` in workflow.yaml
---
## Prerequisites
### Evidence Gathering
- [ ] Test execution results obtained (CI/CD pipeline, test framework reports)
- [ ] Story/epic/release file identified and read
- [ ] Test design document discovered or explicitly provided (if available)
- [ ] Traceability matrix discovered or explicitly provided (available from Phase 1)
- [ ] NFR assessment discovered or explicitly provided (if available)
- [ ] Code coverage report discovered or explicitly provided (if available)
- [ ] Burn-in results discovered or explicitly provided (if available)
### Evidence Validation
- [ ] Evidence freshness validated (warn if >7 days old, recommend re-running workflows)
- [ ] All required assessments available or user acknowledged gaps
- [ ] Test results are complete (not partial or interrupted runs)
- [ ] Test results match current codebase (not from outdated branch)
### Knowledge Base Loading
- [ ] `risk-governance.md` loaded successfully
- [ ] `probability-impact.md` loaded successfully
- [ ] `test-quality.md` loaded successfully
- [ ] `test-priorities.md` loaded successfully
- [ ] `ci-burn-in.md` loaded (if burn-in results available)
---
## Process Steps
### Step 1: Context Loading
- [ ] Gate type identified (story/epic/release/hotfix)
- [ ] Target ID extracted (story_id, epic_num, or release_version)
- [ ] Decision thresholds loaded from workflow variables
- [ ] Risk tolerance configuration loaded
- [ ] Waiver policy loaded
### Step 2: Evidence Parsing
**Test Results:**
- [ ] Total test count extracted
- [ ] Passed test count extracted
- [ ] Failed test count extracted
- [ ] Skipped test count extracted
- [ ] Test duration extracted
- [ ] P0 test pass rate calculated
- [ ] P1 test pass rate calculated
- [ ] Overall test pass rate calculated
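The pass-rate calculations above reduce to simple ratios. A sketch, where treating an empty priority bucket as vacuously passing is an assumption of this example, not a rule from the workflow:

```python
def pass_rate(passed: int, total: int) -> float:
    """Pass rate as a percentage, rounded to one decimal place.
    An empty bucket returns 100.0 so an absent priority does not fail the gate."""
    if total == 0:
        return 100.0
    return round(passed / total * 100, 1)
```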
**Quality Assessments:**
- [ ] P0/P1/P2/P3 scenarios extracted from test-design.md (if available)
- [ ] Risk scores extracted from test-design.md (if available)
- [ ] Coverage percentages extracted from traceability-matrix.md (available from Phase 1)
- [ ] Coverage gaps extracted from traceability-matrix.md (available from Phase 1)
- [ ] NFR status extracted from nfr-assessment.md (if available)
- [ ] Security issues count extracted from nfr-assessment.md (if available)
**Code Coverage:**
- [ ] Line coverage percentage extracted (if available)
- [ ] Branch coverage percentage extracted (if available)
- [ ] Function coverage percentage extracted (if available)
- [ ] Critical path coverage validated (if available)
**Burn-in Results:**
- [ ] Burn-in iterations count extracted (if available)
- [ ] Flaky tests count extracted (if available)
- [ ] Stability score calculated (if available)
### Step 3: Decision Rules Application
**P0 Criteria Evaluation:**
- [ ] P0 test pass rate evaluated (must be 100%)
- [ ] P0 acceptance criteria coverage evaluated (must be 100%)
- [ ] Security issues count evaluated (must be 0)
- [ ] Critical NFR failures evaluated (must be 0)
- [ ] Flaky tests evaluated (must be 0 if burn-in enabled)
- [ ] P0 decision recorded: PASS or FAIL
**P1 Criteria Evaluation:**
- [ ] P1 test pass rate evaluated (threshold: min_p1_pass_rate)
- [ ] P1 acceptance criteria coverage evaluated (threshold: 95%)
- [ ] Overall test pass rate evaluated (threshold: min_overall_pass_rate)
- [ ] Code coverage evaluated (threshold: min_coverage)
- [ ] P1 decision recorded: PASS or CONCERNS
**P2/P3 Criteria Evaluation:**
- [ ] P2 failures tracked (informational, don't block if allow_p2_failures: true)
- [ ] P3 failures tracked (informational, don't block if allow_p3_failures: true)
- [ ] Residual risks documented
**Final Decision:**
- [ ] Decision determined: PASS / CONCERNS / FAIL / WAIVED
- [ ] Decision rationale documented
- [ ] Decision is deterministic (follows rules, not arbitrary)
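The decision rules above can be sketched as one deterministic function. Field names and default thresholds here are illustrative assumptions; WAIVED is granted afterwards by a human approver, never computed:

```python
def gate_decision(ev: dict,
                  min_p1_pass_rate: float = 95.0,
                  min_overall_pass_rate: float = 90.0,
                  min_coverage: float = 80.0) -> str:
    # P0 criteria: any miss forces FAIL (security issues are never waivable).
    if (ev["p0_pass_rate"] < 100.0
            or ev["p0_coverage"] < 100.0
            or ev["security_issues"] > 0
            or ev["critical_nfr_failures"] > 0
            or ev.get("flaky_tests", 0) > 0):
        return "FAIL"
    # P1 criteria: shortfalls downgrade to CONCERNS instead of blocking.
    if (ev["p1_pass_rate"] < min_p1_pass_rate
            or ev["overall_pass_rate"] < min_overall_pass_rate
            or ev["coverage"] < min_coverage):
        return "CONCERNS"
    return "PASS"
```

Because the same evidence always yields the same decision, the rationale can cite the rule that fired rather than a judgment call.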
### Step 4: Documentation
**Gate Decision Document Created:**
- [ ] Story/epic/release info section complete (ID, title, description, links)
- [ ] Decision clearly stated (PASS / CONCERNS / FAIL / WAIVED)
- [ ] Decision date recorded
- [ ] Evaluator recorded (user or agent name)
**Evidence Summary Documented:**
- [ ] Test results summary complete (total, passed, failed, pass rates)
- [ ] Coverage summary complete (P0/P1 criteria, code coverage)
- [ ] NFR validation summary complete (security, performance, reliability, maintainability)
- [ ] Flakiness summary complete (burn-in iterations, flaky test count)
**Rationale Documented:**
- [ ] Decision rationale clearly explained
- [ ] Key evidence highlighted
- [ ] Assumptions and caveats noted (if any)
**Residual Risks Documented (if CONCERNS or WAIVED):**
- [ ] Unresolved P1/P2 issues listed
- [ ] Probability × impact estimated for each risk
- [ ] Mitigations or workarounds described
**Waivers Documented (if WAIVED):**
- [ ] Waiver reason documented (business justification)
- [ ] Waiver approver documented (name, role)
- [ ] Waiver expiry date documented
- [ ] Remediation plan documented (fix in next release, due date)
- [ ] Monitoring plan documented
**Critical Issues Documented (if FAIL or CONCERNS):**
- [ ] Top 5-10 critical issues listed
- [ ] Priority assigned to each issue (P0/P1/P2)
- [ ] Owner assigned to each issue
- [ ] Due date assigned to each issue
**Recommendations Documented:**
- [ ] Next steps clearly stated for decision type
- [ ] Deployment recommendation provided
- [ ] Monitoring recommendations provided (if applicable)
- [ ] Remediation recommendations provided (if applicable)
### Step 5: Status Updates and Notifications
**Status File Updated:**
- [ ] Gate decision appended to bmm-workflow-status.md (if append_to_history: true)
- [ ] Format correct: `[DATE] Gate Decision: DECISION - Target {ID} - {rationale}`
- [ ] Status file committed or staged for commit
**Gate YAML Created:**
- [ ] Gate YAML snippet generated with decision and criteria
- [ ] Evidence references included in YAML
- [ ] Next steps included in YAML
- [ ] YAML file saved to output folder
**Stakeholder Notification Generated:**
- [ ] Notification subject line created
- [ ] Notification body created with summary
- [ ] Recipients identified (PM, SM, DEV lead, stakeholders)
- [ ] Notification ready for delivery (if notify_stakeholders: true)
**Outputs Saved:**
- [ ] Gate decision document saved to `{output_file}`
- [ ] Gate YAML saved to `{output_folder}/gate-decision-{target}.yaml`
- [ ] All outputs are valid and readable
---
## Phase 2 Output Validation
### Gate Decision Document
**Completeness:**
- [ ] All required sections present (info, decision, evidence, rationale, next steps)
- [ ] No placeholder text or TODOs left in document
- [ ] All evidence references are accurate and complete
- [ ] All links to artifacts are valid
**Accuracy:**
- [ ] Decision matches applied criteria rules
- [ ] Test results match CI/CD pipeline output
- [ ] Coverage percentages match reports
- [ ] NFR status matches assessment document
- [ ] No contradictions or inconsistencies
**Clarity:**
- [ ] Decision rationale is clear and unambiguous
- [ ] Technical jargon is explained or avoided
- [ ] Stakeholders can understand next steps
- [ ] Recommendations are actionable
### Gate YAML
**Format:**
- [ ] YAML is valid (no syntax errors)
- [ ] All required fields present (target, decision, date, evaluator, criteria, evidence)
- [ ] Field values are correct data types (numbers, strings, dates)
**Content:**
- [ ] Criteria values match decision document
- [ ] Evidence references are accurate
- [ ] Next steps align with decision type
---
## Phase 2 Quality Checks
### Decision Integrity
- [ ] Decision is deterministic (follows rules, not arbitrary)
- [ ] P0 failures result in FAIL decision (unless waived)
- [ ] Security issues result in FAIL decision (waivers are not permitted for security issues)
- [ ] Waivers have business justification and approver (if WAIVED)
- [ ] Residual risks are documented (if CONCERNS or WAIVED)
### Evidence-Based
- [ ] Decision is based on actual test results (not guesses)
- [ ] All claims are supported by evidence
- [ ] No assumptions without documentation
- [ ] Evidence sources are cited (CI run IDs, report URLs)
### Transparency
- [ ] Decision rationale is transparent and auditable
- [ ] Criteria evaluation is documented step-by-step
- [ ] Any deviations from standard process are explained
- [ ] Waiver justifications are clear (if applicable)
### Consistency
- [ ] Decision aligns with risk-governance knowledge fragment
- [ ] Priority framework (P0/P1/P2/P3) applied consistently
- [ ] Terminology consistent with test-quality knowledge fragment
- [ ] Decision matrix followed correctly
---
## Phase 2 Integration Points
### BMad Workflow Status
- [ ] Gate decision added to `bmm-workflow-status.md`
- [ ] Format matches existing gate history entries
- [ ] Timestamp is accurate
- [ ] Decision summary is concise (<80 chars)
### CI/CD Pipeline
- [ ] Gate YAML is CI/CD-compatible
- [ ] YAML can be parsed by pipeline automation
- [ ] Decision can be used to block/allow deployments
- [ ] Evidence references are accessible to pipeline
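A pipeline step might consume the gate YAML like this. The mapping layout mirrors the template's snippet, but treat the key names as assumptions; the file itself would be parsed first (e.g., with `yaml.safe_load`):

```python
ALLOWED = {"PASS", "CONCERNS", "WAIVED"}  # CONCERNS/WAIVED deploy with monitoring

def gate_exit_code(gate: dict) -> int:
    """CI exit code derived from a parsed gate-decision mapping.
    Missing or malformed decisions block deployment by default."""
    decision = gate.get("gate_decision", {}).get("decision")
    return 0 if decision in ALLOWED else 1
```

Wiring the return value into the pipeline's step status lets the gate decision block or allow deployment automatically.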
### Stakeholders
- [ ] Notification message is clear and actionable
- [ ] Decision is explained in non-technical terms
- [ ] Next steps are specific and time-bound
- [ ] Recipients are appropriate for decision type
---
## Phase 2 Compliance and Audit
### Audit Trail
- [ ] Decision date and time recorded
- [ ] Evaluator identified (user or agent)
- [ ] All evidence sources cited
- [ ] Decision criteria documented
- [ ] Rationale clearly explained
### Traceability
- [ ] Gate decision traceable to story/epic/release
- [ ] Evidence traceable to specific test runs
- [ ] Assessments traceable to workflows that created them
- [ ] Waiver traceable to approver (if applicable)
### Compliance
- [ ] Security requirements validated (no unresolved vulnerabilities)
- [ ] Quality standards met or waived with justification
- [ ] Regulatory requirements addressed (if applicable)
- [ ] Documentation sufficient for external audit
---
## Phase 2 Edge Cases and Exceptions
### Missing Evidence
- [ ] If test-design.md missing, decision still possible with test results + trace
- [ ] If traceability-matrix.md missing, decision still possible with test results (but Phase 1 should provide it)
- [ ] If nfr-assessment.md missing, NFR validation marked as NOT ASSESSED
- [ ] If code coverage missing, coverage criterion marked as NOT ASSESSED
- [ ] User acknowledged gaps in evidence or provided alternative proof
### Stale Evidence
- [ ] Evidence freshness checked (if validate_evidence_freshness: true)
- [ ] Warnings issued for assessments >7 days old
- [ ] User acknowledged stale evidence or re-ran workflows
- [ ] Decision document notes any stale evidence used
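The >7-days freshness rule above is straightforward to automate. A sketch, assuming each evidence artifact carries a generation date:

```python
from datetime import date, timedelta

MAX_AGE_DAYS = 7  # matches the staleness warning threshold above

def is_stale(evidence_date, today=None):
    """True when evidence is older than the freshness window."""
    today = today or date.today()
    return (today - evidence_date) > timedelta(days=MAX_AGE_DAYS)
```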
### Conflicting Evidence
- [ ] Conflicts between test results and assessments resolved
- [ ] Most recent/authoritative source identified
- [ ] Conflict resolution documented in decision rationale
- [ ] User consulted if conflict cannot be resolved
### Waiver Scenarios
- [ ] Waiver only used for FAIL decision (not PASS or CONCERNS)
- [ ] Waiver has business justification (not technical convenience)
- [ ] Waiver has named approver with authority (VP/CTO/PO)
- [ ] Waiver has expiry date (does NOT apply to future releases)
- [ ] Waiver has remediation plan with concrete due date
- [ ] Security vulnerabilities are NOT waived (enforced)
---
# FINAL VALIDATION (Both Phases)
## Non-Prescriptive Validation
- [ ] Traceability format adapted to team needs (not rigid template)
- [ ] Examples are minimal and focused on patterns
- [ ] Teams can extend with custom classifications
- [ ] Integration with external systems supported (JIRA, Azure DevOps)
- [ ] Compliance requirements considered (if applicable)
---
## Documentation and Communication
- [ ] All documents are readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
- [ ] Gate decision is prominent and unambiguous (Phase 2)
---
## Final Validation
**Phase 1 (Traceability):**
- [ ] All prerequisites met
- [ ] All acceptance criteria mapped or gaps documented
- [ ] P0 coverage is 100% OR documented as BLOCKER
- [ ] Gap analysis is complete and prioritized
- [ ] Test quality issues identified and flagged
- [ ] Deliverables generated and saved
**Phase 2 (Gate Decision):**
- [ ] All quality evidence gathered
- [ ] Decision criteria applied correctly
- [ ] Decision rationale documented
- [ ] Gate YAML ready for CI/CD integration
- [ ] Status file updated (if enabled)
- [ ] Stakeholders notified (if enabled)
**Workflow Complete:**
- [ ] Phase 1 completed successfully
- [ ] Phase 2 completed successfully (if enabled)
- [ ] All outputs validated and saved
- [ ] Ready to proceed based on gate decision
---
## Sign-Off
**Phase 1 - Traceability Status:**
- [ ] ✅ PASS - All quality gates met, no critical gaps
- [ ] ⚠️ WARN - P1 gaps exist, address before PR merge
- [ ] ❌ FAIL - P0 gaps exist, BLOCKER for release
**Phase 2 - Gate Decision Status (if enabled):**
- [ ] ✅ PASS - Deploy to production
- [ ] ⚠️ CONCERNS - Deploy with monitoring
- [ ] ❌ FAIL - Block deployment, fix issues
- [ ] 🔓 WAIVED - Deploy with business approval and remediation plan
**Next Actions:**
- If PASS (both phases): Proceed to deployment
- If WARN/CONCERNS: Address gaps/issues, proceed with monitoring
- If FAIL (either phase): Run `*atdd` for missing tests, fix issues, re-run `*trace`
- If WAIVED: Deploy with approved waiver, schedule remediation
---
## Notes
Record any issues, deviations, or important observations during workflow execution:
- **Phase 1 Issues**: [Note any traceability mapping challenges, missing tests, quality concerns]
- **Phase 2 Issues**: [Note any missing, stale, or conflicting evidence]
- **Decision Rationale**: [Document any nuanced reasoning or edge cases]
- **Waiver Details**: [Document waiver negotiations or approvals]
- **Follow-up Actions**: [List any actions required after gate decision]
---
<!-- Powered by BMAD-CORE™ -->

# Traceability Matrix & Gate Decision - Story {STORY_ID}
**Story:** {STORY_TITLE}
**Date:** {DATE}
**Evaluator:** {user_name or TEA Agent}
---
## PHASE 1: REQUIREMENTS TRACEABILITY
### Coverage Summary
| Priority | Total Criteria | FULL Coverage | Coverage % | Status |
| --------- | -------------- | ------------- | ---------- | ------------ |
| P0 | {P0_TOTAL} | {P0_FULL} | {P0_PCT}% | {P0_STATUS} |
| P1 | {P1_TOTAL} | {P1_FULL} | {P1_PCT}% | {P1_STATUS} |
| P2 | {P2_TOTAL} | {P2_FULL} | {P2_PCT}% | {P2_STATUS} |
| P3 | {P3_TOTAL} | {P3_FULL} | {P3_PCT}% | {P3_STATUS} |
| **Total** | **{TOTAL}** | **{FULL}** | **{PCT}%** | **{STATUS}** |
**Legend:**
- ✅ PASS - Coverage meets quality gate threshold
- ⚠️ WARN - Coverage below threshold but not critical
- ❌ FAIL - Coverage below minimum threshold (blocker)
---
### Detailed Mapping
#### {CRITERION_ID}: {CRITERION_DESCRIPTION} ({PRIORITY})
- **Coverage:** {COVERAGE_STATUS} {STATUS_ICON}
- **Tests:**
- `{TEST_ID}` - {TEST_FILE}:{LINE}
- **Given:** {GIVEN}
- **When:** {WHEN}
- **Then:** {THEN}
- `{TEST_ID_2}` - {TEST_FILE_2}:{LINE}
- **Given:** {GIVEN_2}
- **When:** {WHEN_2}
- **Then:** {THEN_2}
- **Gaps:** (if PARTIAL or UNIT-ONLY or INTEGRATION-ONLY)
- Missing: {MISSING_SCENARIO_1}
- Missing: {MISSING_SCENARIO_2}
- **Recommendation:** {RECOMMENDATION_TEXT}
---
#### Example: AC-1: User can login with email and password (P0)
- **Coverage:** FULL ✅
- **Tests:**
- `1.3-E2E-001` - tests/e2e/auth.spec.ts:12
- **Given:** User has valid credentials
- **When:** User submits login form
- **Then:** User is redirected to dashboard
- `1.3-UNIT-001` - tests/unit/auth-service.spec.ts:8
- **Given:** Valid email and password hash
- **When:** validateCredentials is called
- **Then:** Returns user object
---
#### Example: AC-3: User can reset password via email (P1)
- **Coverage:** PARTIAL ⚠️
- **Tests:**
- `1.3-E2E-003` - tests/e2e/auth.spec.ts:44
- **Given:** User requests password reset
- **When:** User clicks reset link in email
- **Then:** User can set new password
- **Gaps:**
- Missing: Email delivery validation
- Missing: Expired token handling (error path)
- Missing: Invalid token handling (security test)
- Missing: Unit test for token generation logic
- **Recommendation:** Add `1.3-API-001` for email service integration testing and `1.3-UNIT-003` for token generation logic. Add `1.3-E2E-004` for error path validation (expired/invalid tokens).
---
### Gap Analysis
#### Critical Gaps (BLOCKER) ❌
{CRITICAL_GAP_COUNT} gaps found. **Do not release until resolved.**
1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P0)
- Current Coverage: {COVERAGE_STATUS}
- Missing Tests: {MISSING_TEST_DESCRIPTION}
- Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})
- Impact: {IMPACT_DESCRIPTION}
---
#### High Priority Gaps (PR BLOCKER) ⚠️
{HIGH_GAP_COUNT} gaps found. **Address before PR merge.**
1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P1)
- Current Coverage: {COVERAGE_STATUS}
- Missing Tests: {MISSING_TEST_DESCRIPTION}
- Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})
- Impact: {IMPACT_DESCRIPTION}
---
#### Medium Priority Gaps (Nightly) ⚠️
{MEDIUM_GAP_COUNT} gaps found. **Address in nightly test improvements.**
1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P2)
- Current Coverage: {COVERAGE_STATUS}
- Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})
---
#### Low Priority Gaps (Optional)
{LOW_GAP_COUNT} gaps found. **Optional - add if time permits.**
1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P3)
- Current Coverage: {COVERAGE_STATUS}
---
### Quality Assessment
#### Tests with Issues
**BLOCKER Issues**
- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}
**WARNING Issues** ⚠️
- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}
**INFO Issues**
- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}
---
#### Example Quality Issues
**WARNING Issues** ⚠️
- `1.3-E2E-001` - 145 seconds (exceeds 90s target) - Optimize fixture setup to reduce test duration
- `1.3-UNIT-005` - 320 lines (exceeds 300 line limit) - Split into multiple focused test files
**INFO Issues**
- `1.3-E2E-002` - Missing Given-When-Then structure - Refactor describe block to use BDD format
---
#### Tests Passing Quality Gates
**{PASSING_TEST_COUNT}/{TOTAL_TEST_COUNT} tests ({PASSING_PCT}%) meet all quality criteria** ✅
---
### Duplicate Coverage Analysis
#### Acceptable Overlap (Defense in Depth)
- {CRITERION_ID}: Tested at unit (business logic) and E2E (user journey) ✅
#### Unacceptable Duplication ⚠️
- {CRITERION_ID}: Same validation at E2E and Component level
- Recommendation: Remove {TEST_ID} or consolidate with {OTHER_TEST_ID}
---
### Coverage by Test Level
| Test Level | Tests | Criteria Covered | Coverage % |
| ---------- | ----------------- | -------------------- | ---------------- |
| E2E | {E2E_COUNT} | {E2E_CRITERIA} | {E2E_PCT}% |
| API | {API_COUNT} | {API_CRITERIA} | {API_PCT}% |
| Component | {COMP_COUNT} | {COMP_CRITERIA} | {COMP_PCT}% |
| Unit | {UNIT_COUNT} | {UNIT_CRITERIA} | {UNIT_PCT}% |
| **Total** | **{TOTAL_TESTS}** | **{TOTAL_CRITERIA}** | **{TOTAL_PCT}%** |
---
### Traceability Recommendations
#### Immediate Actions (Before PR Merge)
1. **{ACTION_1}** - {DESCRIPTION}
2. **{ACTION_2}** - {DESCRIPTION}
#### Short-term Actions (This Sprint)
1. **{ACTION_1}** - {DESCRIPTION}
2. **{ACTION_2}** - {DESCRIPTION}
#### Long-term Actions (Backlog)
1. **{ACTION_1}** - {DESCRIPTION}
---
#### Example Recommendations
**Immediate Actions (Before PR Merge)**
1. **Add P1 Password Reset Tests** - Implement `1.3-API-001` for email service integration and `1.3-E2E-004` for error path validation. P1 coverage currently at 80%, target is 90%.
2. **Optimize Slow E2E Test** - Refactor `1.3-E2E-001` to use faster fixture setup. Currently 145s, target is <90s.
**Short-term Actions (This Sprint)**
1. **Enhance P2 Coverage** - Add E2E validation for session timeout (`1.3-E2E-005`). Currently UNIT-ONLY coverage.
2. **Split Large Test File** - Break `1.3-UNIT-005` (320 lines) into multiple focused test files (<300 lines each).
**Long-term Actions (Backlog)**
1. **Enrich P3 Coverage** - Add tests for edge cases in P3 criteria if time permits.
---
## PHASE 2: QUALITY GATE DECISION
**Gate Type:** {story | epic | release | hotfix}
**Decision Mode:** {deterministic | manual}
---
### Evidence Summary
#### Test Execution Results
- **Total Tests**: {total_count}
- **Passed**: {passed_count} ({pass_percentage}%)
- **Failed**: {failed_count} ({fail_percentage}%)
- **Skipped**: {skipped_count} ({skip_percentage}%)
- **Duration**: {total_duration}
**Priority Breakdown:**
- **P0 Tests**: {p0_passed}/{p0_total} passed ({p0_pass_rate}%) {✅ | ❌}
- **P1 Tests**: {p1_passed}/{p1_total} passed ({p1_pass_rate}%) {✅ | ⚠️ | ❌}
- **P2 Tests**: {p2_passed}/{p2_total} passed ({p2_pass_rate}%) {informational}
- **P3 Tests**: {p3_passed}/{p3_total} passed ({p3_pass_rate}%) {informational}
**Overall Pass Rate**: {overall_pass_rate}% {✅ | ⚠️ | ❌}
**Test Results Source**: {CI_run_id | test_report_url | local_run}
---
#### Coverage Summary (from Phase 1)
**Requirements Coverage:**
- **P0 Acceptance Criteria**: {p0_covered}/{p0_total} covered ({p0_coverage}%) {✅ | ❌}
- **P1 Acceptance Criteria**: {p1_covered}/{p1_total} covered ({p1_coverage}%) {✅ | ⚠️ | ❌}
- **P2 Acceptance Criteria**: {p2_covered}/{p2_total} covered ({p2_coverage}%) {informational}
- **Overall Coverage**: {overall_coverage}%
**Code Coverage** (if available):
- **Line Coverage**: {line_coverage}% {✅ | ⚠️ | ❌}
- **Branch Coverage**: {branch_coverage}% {✅ | ⚠️ | ❌}
- **Function Coverage**: {function_coverage}% {✅ | ⚠️ | ❌}
**Coverage Source**: {coverage_report_url | coverage_file_path}
---
#### Non-Functional Requirements (NFRs)
**Security**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌}
- Security Issues: {security_issue_count}
- {details_if_issues}
**Performance**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌}
- {performance_metrics_summary}
**Reliability**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌}
- {reliability_metrics_summary}
**Maintainability**: {PASS | CONCERNS | FAIL | NOT_ASSESSED} {✅ | ⚠️ | ❌}
- {maintainability_metrics_summary}
**NFR Source**: {nfr_assessment_file_path | not_assessed}
---
#### Flakiness Validation
**Burn-in Results** (if available):
- **Burn-in Iterations**: {iteration_count} (e.g., 10)
- **Flaky Tests Detected**: {flaky_test_count} {✅ if 0 | ❌ if >0}
- **Stability Score**: {stability_percentage}%
**Flaky Tests List** (if any):
- {flaky_test_1_name} - {failure_rate}
- {flaky_test_2_name} - {failure_rate}
**Burn-in Source**: {CI_burn_in_run_id | not_available}
---
### Decision Criteria Evaluation
#### P0 Criteria (Must ALL Pass)
| Criterion             | Threshold | Actual                    | Status               |
| --------------------- | --------- | ------------------------- | -------------------- |
| P0 Coverage           | 100%      | {p0_coverage}%            | {✅ PASS \| ❌ FAIL} |
| P0 Test Pass Rate     | 100%      | {p0_pass_rate}%           | {✅ PASS \| ❌ FAIL} |
| Security Issues       | 0         | {security_issue_count}    | {✅ PASS \| ❌ FAIL} |
| Critical NFR Failures | 0         | {critical_nfr_fail_count} | {✅ PASS \| ❌ FAIL} |
| Flaky Tests           | 0         | {flaky_test_count}        | {✅ PASS \| ❌ FAIL} |
**P0 Evaluation**: {✅ ALL PASS | ❌ ONE OR MORE FAILED}
---
#### P1 Criteria (Required for PASS, May Accept for CONCERNS)
| Criterion              | Threshold                 | Actual               | Status                              |
| ---------------------- | ------------------------- | -------------------- | ----------------------------------- |
| P1 Coverage            | ≥{min_p1_coverage}%       | {p1_coverage}%       | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} |
| P1 Test Pass Rate      | ≥{min_p1_pass_rate}%      | {p1_pass_rate}%      | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} |
| Overall Test Pass Rate | ≥{min_overall_pass_rate}% | {overall_pass_rate}% | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} |
| Overall Coverage       | ≥{min_coverage}%          | {overall_coverage}%  | {✅ PASS \| ⚠️ CONCERNS \| ❌ FAIL} |
**P1 Evaluation**: {✅ ALL PASS | ⚠️ SOME CONCERNS | ❌ FAILED}
---
#### P2/P3 Criteria (Informational, Don't Block)
| Criterion | Actual | Notes |
| ----------------- | --------------- | ------------------------------------------------------------ |
| P2 Test Pass Rate | {p2_pass_rate}% | {allow_p2_failures ? "Tracked, doesn't block" : "Evaluated"} |
| P3 Test Pass Rate | {p3_pass_rate}% | {allow_p3_failures ? "Tracked, doesn't block" : "Evaluated"} |
---
### GATE DECISION: {PASS | CONCERNS | FAIL | WAIVED}
---
### Rationale
{Explain decision based on criteria evaluation}
{Highlight key evidence that drove decision}
{Note any assumptions or caveats}
**Example (PASS):**
> All P0 criteria met with 100% coverage and pass rates across critical tests. All P1 criteria exceeded thresholds with 98% overall pass rate and 92% coverage. No security issues detected. No flaky tests in validation. Feature is ready for production deployment with standard monitoring.
**Example (CONCERNS):**
> All P0 criteria met, ensuring critical user journeys are protected. However, P1 coverage (88%) falls below threshold (90%) due to missing E2E test for AC-5 edge case. Overall pass rate (96%) is excellent. Issues are non-critical and have acceptable workarounds. Risk is low enough to deploy with enhanced monitoring.
**Example (FAIL):**
> CRITICAL BLOCKERS DETECTED:
>
> 1. P0 coverage incomplete (80%) - AC-2 security validation missing
> 2. P0 test failures (75% pass rate) in core search functionality
> 3. Unresolved SQL injection vulnerability in search filter (CRITICAL)
>
> Release MUST BE BLOCKED until P0 issues are resolved. Security vulnerability cannot be waived.
**Example (WAIVED):**
> Original decision was FAIL due to P0 test failure in legacy Excel 2007 export module (affects <1% of users). However, release contains critical GDPR compliance features required by regulatory deadline (Oct 15). Business has approved waiver given:
>
> - Regulatory priority overrides legacy module risk
> - Workaround available (use Excel 2010+)
> - Issue will be fixed in v2.4.1 hotfix (due Oct 20)
> - Enhanced monitoring in place
---
### {Section: Delete if not applicable}
#### Residual Risks (For CONCERNS or WAIVED)
List unresolved P1/P2 issues that don't block release but should be tracked:
1. **{Risk Description}**
- **Priority**: P1 | P2
- **Probability**: Low | Medium | High
- **Impact**: Low | Medium | High
- **Risk Score**: {probability × impact}
- **Mitigation**: {workaround or monitoring plan}
- **Remediation**: {fix in next sprint/release}
**Overall Residual Risk**: {LOW | MEDIUM | HIGH}
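A simple ordinal mapping makes the probability × impact score concrete. The 1–3 scale and the banding below are assumptions of this example, not mandated by the template:

```python
LEVELS = {"Low": 1, "Medium": 2, "High": 3}  # assumed ordinal scale

def risk_score(probability: str, impact: str) -> int:
    """Probability x impact on the assumed 1-3 scale (range 1-9)."""
    return LEVELS[probability] * LEVELS[impact]

def overall_residual_risk(score: int) -> str:
    # Assumed banding: 1-2 LOW, 3-4 MEDIUM, 6-9 HIGH.
    if score <= 2:
        return "LOW"
    if score <= 4:
        return "MEDIUM"
    return "HIGH"
```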
---
#### Waiver Details (For WAIVED only)
**Original Decision**: ❌ FAIL
**Reason for Failure**:
- {list_of_blocking_issues}
**Waiver Information**:
- **Waiver Reason**: {business_justification}
- **Waiver Approver**: {name}, {role} (e.g., Jane Doe, VP Engineering)
- **Approval Date**: {YYYY-MM-DD}
- **Waiver Expiry**: {YYYY-MM-DD} (**NOTE**: Does NOT apply to next release)
**Monitoring Plan**:
- {enhanced_monitoring_1}
- {enhanced_monitoring_2}
- {escalation_criteria}
**Remediation Plan**:
- **Fix Target**: {next_release_version} (e.g., v2.4.1 hotfix)
- **Due Date**: {YYYY-MM-DD}
- **Owner**: {team_or_person}
- **Verification**: {how_fix_will_be_verified}
**Business Justification**:
{detailed_explanation_of_why_waiver_is_acceptable}
---
#### Critical Issues (For FAIL or CONCERNS)
Top blockers requiring immediate attention:
| Priority | Issue | Description | Owner | Due Date | Status |
| -------- | ------------- | ------------------- | ------------ | ------------ | ------------------ |
| P0 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} |
| P0 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} |
| P1 | {issue_title} | {brief_description} | {owner_name} | {YYYY-MM-DD} | {OPEN/IN_PROGRESS} |
**Blocking Issues Count**: {p0_blocker_count} P0 blockers, {p1_blocker_count} P1 issues
---
### Gate Recommendations
#### For PASS Decision ✅
1. **Proceed to deployment**
- Deploy to staging environment
- Validate with smoke tests
- Monitor key metrics for 24-48 hours
- Deploy to production with standard monitoring
2. **Post-Deployment Monitoring**
- {metric_1_to_monitor}
- {metric_2_to_monitor}
- {alert_thresholds}
3. **Success Criteria**
- {success_criterion_1}
- {success_criterion_2}
---
#### For CONCERNS Decision ⚠️
1. **Deploy with Enhanced Monitoring**
- Deploy to staging with extended validation period
- Enable enhanced logging/monitoring for known risk areas:
- {risk_area_1}
- {risk_area_2}
- Set aggressive alerts for potential issues
- Deploy to production with caution
2. **Create Remediation Backlog**
- Create story: "{fix_title_1}" (Priority: {priority})
- Create story: "{fix_title_2}" (Priority: {priority})
- Target sprint: {next_sprint}
3. **Post-Deployment Actions**
- Monitor {specific_areas} closely for {time_period}
- Weekly status updates on remediation progress
- Re-assess after fixes deployed
---
#### For FAIL Decision ❌
1. **Block Deployment Immediately**
- Do NOT deploy to any environment
- Notify stakeholders of blocking issues
- Escalate to tech lead and PM
2. **Fix Critical Issues**
- Address P0 blockers listed in Critical Issues section
- Owner assignments confirmed
- Due dates agreed upon
- Daily standup on blocker resolution
3. **Re-Run Gate After Fixes**
- Re-run full test suite after fixes
- Re-run `bmad tea *trace` workflow
- Verify decision is PASS before deploying
---
#### For WAIVED Decision 🔓
1. **Deploy with Business Approval**
- Confirm waiver approver has signed off
- Document waiver in release notes
- Notify all stakeholders of waived risks
2. **Aggressive Monitoring**
- {enhanced_monitoring_plan}
- {escalation_procedures}
- Daily checks on waived risk areas
3. **Mandatory Remediation**
- Fix MUST be completed by {due_date}
- Issue CANNOT be waived in next release
- Track remediation progress weekly
- Verify fix in next gate
---
### Next Steps
**Immediate Actions** (next 24-48 hours):
1. {action_1}
2. {action_2}
3. {action_3}
**Follow-up Actions** (next sprint/release):
1. {action_1}
2. {action_2}
3. {action_3}
**Stakeholder Communication**:
- Notify PM: {decision_summary}
- Notify SM: {decision_summary}
- Notify DEV lead: {decision_summary}
---
## Integrated YAML Snippet (CI/CD)
```yaml
traceability_and_gate:
  # Phase 1: Traceability
  traceability:
    story_id: "{STORY_ID}"
    date: "{DATE}"
    coverage:
      overall: {OVERALL_PCT}%
      p0: {P0_PCT}%
      p1: {P1_PCT}%
      p2: {P2_PCT}%
      p3: {P3_PCT}%
    gaps:
      critical: {CRITICAL_COUNT}
      high: {HIGH_COUNT}
      medium: {MEDIUM_COUNT}
      low: {LOW_COUNT}
    quality:
      passing_tests: {PASSING_COUNT}
      total_tests: {TOTAL_TESTS}
      blocker_issues: {BLOCKER_COUNT}
      warning_issues: {WARNING_COUNT}
    recommendations:
      - "{RECOMMENDATION_1}"
      - "{RECOMMENDATION_2}"
  # Phase 2: Gate Decision
  gate_decision:
    decision: "{PASS | CONCERNS | FAIL | WAIVED}"
    gate_type: "{story | epic | release | hotfix}"
    decision_mode: "{deterministic | manual}"
    criteria:
      p0_coverage: {p0_coverage}%
      p0_pass_rate: {p0_pass_rate}%
      p1_coverage: {p1_coverage}%
      p1_pass_rate: {p1_pass_rate}%
      overall_pass_rate: {overall_pass_rate}%
      overall_coverage: {overall_coverage}%
      security_issues: {security_issue_count}
      critical_nfrs_fail: {critical_nfr_fail_count}
      flaky_tests: {flaky_test_count}
    thresholds:
      min_p0_coverage: 100
      min_p0_pass_rate: 100
      min_p1_coverage: {min_p1_coverage}
      min_p1_pass_rate: {min_p1_pass_rate}
      min_overall_pass_rate: {min_overall_pass_rate}
      min_coverage: {min_coverage}
    evidence:
      test_results: "{CI_run_id | test_report_url}"
      traceability: "{trace_file_path}"
      nfr_assessment: "{nfr_file_path}"
      code_coverage: "{coverage_report_url}"
    next_steps: "{brief_summary_of_recommendations}"
    waiver: # Only if WAIVED
      reason: "{business_justification}"
      approver: "{name}, {role}"
      expiry: "{YYYY-MM-DD}"
      remediation_due: "{YYYY-MM-DD}"
```
---
## Related Artifacts
- **Story File:** {STORY_FILE_PATH}
- **Test Design:** {TEST_DESIGN_PATH} (if available)
- **Tech Spec:** {TECH_SPEC_PATH} (if available)
- **Test Results:** {TEST_RESULTS_PATH}
- **NFR Assessment:** {NFR_FILE_PATH} (if available)
- **Test Files:** {TEST_DIR_PATH}
---
## Sign-Off
**Phase 1 - Traceability Assessment:**
- Overall Coverage: {OVERALL_PCT}%
- P0 Coverage: {P0_PCT}% {P0_STATUS}
- P1 Coverage: {P1_PCT}% {P1_STATUS}
- Critical Gaps: {CRITICAL_COUNT}
- High Priority Gaps: {HIGH_COUNT}
**Phase 2 - Gate Decision:**
- **Decision**: {PASS | CONCERNS | FAIL | WAIVED} {STATUS_ICON}
- **P0 Evaluation**: {✅ ALL PASS | ❌ ONE OR MORE FAILED}
- **P1 Evaluation**: {✅ ALL PASS | ⚠️ SOME CONCERNS | ❌ FAILED}
**Overall Status:** {STATUS} {STATUS_ICON}
**Next Steps:**
- If PASS ✅: Proceed to deployment
- If CONCERNS ⚠️: Deploy with monitoring, create remediation backlog
- If FAIL ❌: Block deployment, fix critical issues, re-run workflow
- If WAIVED 🔓: Deploy with business approval and aggressive monitoring
**Generated:** {DATE}
**Workflow:** testarch-trace v4.0 (Enhanced with Gate Decision)
---
<!-- Powered by BMAD-CORE™ -->

**New file** (66 lines): workflow configuration for `testarch-trace`
```yaml
# Test Architect workflow: trace (enhanced with gate decision)
name: testarch-trace
description: "Generate requirements-to-tests traceability matrix, analyze coverage, and make quality gate decision (PASS/CONCERNS/FAIL/WAIVED)"
author: "BMad"

# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
date: system-generated

# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/testarch/trace"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
template: "{installed_path}/trace-template.md"

# Variables and inputs
variables:
  # Directory paths
  test_dir: "{project-root}/tests" # Root test directory
  source_dir: "{project-root}/src" # Source code directory

  # Workflow behavior
  coverage_levels: "e2e,api,component,unit" # Which test levels to trace
  gate_type: "story" # story | epic | release | hotfix - determines gate scope
  decision_mode: "deterministic" # deterministic (rule-based) | manual (team decision)

# Output configuration
default_output_file: "{output_folder}/traceability-matrix.md"

# Required tools
required_tools:
  - read_file # Read story, test files, BMad artifacts
  - write_file # Create traceability matrix, gate YAML
  - list_files # Discover test files
  - search_repo # Find tests by test ID, describe blocks
  - glob # Find test files matching patterns

# Recommended inputs
recommended_inputs:
  - story: "Story markdown with acceptance criteria (required for BMad mode)"
  - test_files: "Test suite for the feature (auto-discovered if not provided)"
  - test_design: "Test design with risk/priority assessment (required for Phase 2 gate)"
  - tech_spec: "Technical specification (optional)"
  - existing_tests: "Current test suite for analysis"
  - test_results: "CI/CD test execution results (required for Phase 2 gate)"
  - nfr_assess: "Non-functional requirements validation (recommended for release gates)"
  - code_coverage: "Code coverage report (optional)"

tags:
  - qa
  - traceability
  - test-architect
  - coverage
  - requirements
  - gate
  - decision
  - release

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true
```
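Values like `"{config_source}:output_folder"` follow a two-step convention: expand `{project-root}`-style path placeholders, then pull the named key from the referenced config file. The resolver below is a hypothetical sketch of that convention; the function name and the loading logic are illustrative assumptions, not the actual BMad loader.

```python
# Hypothetical resolver for "{config_source}:key"-style workflow values.
# Illustrates the convention only; not the actual BMad implementation.
import re

def resolve(value: str, config: dict, project_root: str) -> str:
    # Step 1: expand path placeholders.
    value = value.replace("{project-root}", project_root)
    # Step 2: "{config_source}:key" means look up `key` in config.yaml.
    m = re.fullmatch(r"\{config_source\}:(\w+)", value)
    if m:
        return str(config[m.group(1)])
    return value

# Example with an assumed config.yaml content:
config = {"output_folder": "/work/demo/docs"}
print(resolve("{config_source}:output_folder", config, "/work/demo"))
# → /work/demo/docs
print(resolve("{project-root}/tests", config, "/work/demo"))
# → /work/demo/tests
```

This indirection keeps machine-specific paths out of the workflow file itself: only `config.yaml` needs to change per project.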