Mirror of https://github.com/bmadcode/BMAD-METHOD.git, synced 2025-12-29 16:14:59 +00:00
feat: migrate test architect entirely to v6
375
src/modules/bmm/workflows/testarch/trace/README.md
Normal file
@@ -0,0 +1,375 @@
# Requirements Traceability Workflow

**Workflow ID:** `testarch-trace`
**Agent:** Test Architect (TEA)
**Command:** `bmad tea *trace`

---

## Overview

The **trace** workflow generates a comprehensive requirements-to-tests traceability matrix that maps acceptance criteria to implemented tests, identifies coverage gaps, and provides actionable recommendations for improving test coverage.

**Key Features:**

- Maps acceptance criteria to specific test cases across all levels (E2E, API, Component, Unit)
- Classifies coverage status (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- Prioritizes gaps by risk level (P0/P1/P2/P3)
- Generates CI/CD-ready YAML snippets for quality gates
- Detects duplicate coverage across test levels
- Verifies test quality (assertions, structure, performance)

---
## When to Use This Workflow

Use `*trace` when you need to:

- ✅ Validate that all acceptance criteria have test coverage
- ✅ Identify coverage gaps before release or PR merge
- ✅ Generate traceability documentation for compliance or audits
- ✅ Ensure critical paths (P0/P1) are fully tested
- ✅ Detect duplicate coverage across test levels
- ✅ Assess test quality across your suite
- ✅ Create gate-ready metrics for CI/CD pipelines

**Typical Timing:**

- After tests are implemented (post-ATDD or post-development)
- Before merging a PR (validate P0/P1 coverage)
- Before release (validate full coverage)
- During sprint retrospectives (assess test quality)

---
## Prerequisites

**Required:**

- Acceptance criteria (from story file OR inline)
- Implemented test suite (or acknowledged gaps)

**Recommended:**

- `test-design.md` - Risk assessment and test priorities
- `tech-spec.md` - Technical implementation details
- Test framework configuration (playwright.config.ts, jest.config.js)

**Halt Conditions:**

- Story lacks any tests AND gaps are not acknowledged → Run `*atdd` first
- Acceptance criteria are completely missing → Provide criteria or story file

---
## Usage

### Basic Usage (BMad Mode)

```bash
bmad tea *trace
```

The workflow will:

1. Read story file from `bmad/output/story-X.X.md`
2. Extract acceptance criteria
3. Auto-discover tests for this story
4. Generate traceability matrix
5. Save to `bmad/output/traceability-matrix.md`

### Standalone Mode (No Story File)

```bash
bmad tea *trace --acceptance-criteria "AC-1: User can login with email..."
```

### Custom Configuration

```bash
bmad tea *trace \
  --story-file "bmad/output/story-1.3.md" \
  --output-file "docs/qa/trace-1.3.md" \
  --min-p0-coverage 100 \
  --min-p1-coverage 90
```

---
## Workflow Steps

1. **Load Context** - Read story, test design, tech spec, knowledge base
2. **Discover Tests** - Auto-find tests related to story (by ID, describe blocks, file paths)
3. **Map Criteria** - Link acceptance criteria to specific test cases
4. **Analyze Gaps** - Identify missing coverage and prioritize by risk
5. **Verify Quality** - Check test quality (assertions, structure, performance)
6. **Generate Deliverables** - Create traceability matrix, gate YAML, coverage badge

---
## Outputs

### Traceability Matrix (`traceability-matrix.md`)

Comprehensive markdown file with:

- Coverage summary table (by priority)
- Detailed criterion-to-test mapping
- Gap analysis with recommendations
- Quality assessment for each test
- Gate YAML snippet

### Gate YAML Snippet

```yaml
traceability:
  story_id: '1.3'
  coverage:
    overall: 85%
    p0: 100%
    p1: 90%
  gaps:
    critical: 0
    high: 1
  status: 'PASS'
```
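
A CI pipeline can consume this snippet to enforce the quality gates. A minimal sketch in Python (field names mirror the snippet above; parsing the YAML itself is left to a YAML library, so this hypothetical helper takes an already-parsed dict — an illustration, not shipped tooling):

```python
def evaluate_gate(traceability: dict) -> str:
    """Decide gate status from a parsed `traceability` block."""
    def pct(value) -> float:
        # Coverage values are strings like '85%' in the snippet
        return float(str(value).rstrip("%"))

    cov = traceability["coverage"]
    if pct(cov["p0"]) < 100:
        return "FAIL"  # P0 gaps block release
    if pct(cov["p1"]) < 90 or pct(cov["overall"]) < 80:
        return "WARN"  # address gaps before PR merge
    return "PASS"
```

Applied to the snippet above (p0 = 100%, p1 = 90%, overall = 85%), this returns `PASS`.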
### Updated Story File (Optional)

Adds "Traceability" section to story markdown with:

- Link to traceability matrix
- Coverage summary
- Gate status

---

## Coverage Classifications

- **FULL** ✅ - All scenarios validated at appropriate level(s)
- **PARTIAL** ⚠️ - Some coverage but missing edge cases or levels
- **NONE** ❌ - No test coverage at any level
- **UNIT-ONLY** ⚠️ - Only unit tests (missing integration/E2E validation)
- **INTEGRATION-ONLY** ⚠️ - Only API/Component tests (missing unit confidence)

---
## Quality Gates

| Priority | Coverage Requirement | Severity | Action             |
| -------- | -------------------- | -------- | ------------------ |
| P0       | 100%                 | BLOCKER  | Do not release     |
| P1       | 90%                  | HIGH     | Block PR merge     |
| P2       | 80% (recommended)    | MEDIUM   | Address in nightly |
| P3       | No requirement       | LOW      | Optional           |

---
## Configuration

### workflow.yaml Variables

```yaml
variables:
  # Target specification
  story_file: '' # Path to story markdown
  acceptance_criteria: '' # Inline criteria if no story

  # Test discovery
  test_dir: '{project-root}/tests'
  auto_discover_tests: true

  # Traceability configuration
  coverage_levels: 'e2e,api,component,unit'
  map_by_test_id: true
  map_by_describe: true
  map_by_filename: true

  # Gap analysis
  prioritize_by_risk: true
  suggest_missing_tests: true
  check_duplicate_coverage: true

  # Output configuration
  output_file: '{output_folder}/traceability-matrix.md'
  generate_gate_yaml: true
  generate_coverage_badge: true
  update_story_file: true

  # Quality gates
  min_p0_coverage: 100
  min_p1_coverage: 90
  min_overall_coverage: 80
```

---
## Knowledge Base Integration

This workflow automatically loads relevant knowledge fragments:

- `traceability.md` - Requirements mapping patterns
- `test-priorities.md` - P0/P1/P2/P3 risk framework
- `risk-governance.md` - Risk-based testing approach
- `test-quality.md` - Definition of Done for tests
- `selective-testing.md` - Duplicate coverage patterns

---
## Examples

### Example 1: Full Coverage Validation

```bash
# Validate P0/P1 coverage before PR merge
bmad tea *trace --story-file "bmad/output/story-1.3.md"
```

**Output:**

```markdown
# Traceability Matrix - Story 1.3

## Coverage Summary

| Priority | Total | FULL | Coverage % | Status  |
| -------- | ----- | ---- | ---------- | ------- |
| P0       | 3     | 3    | 100%       | ✅ PASS |
| P1       | 5     | 5    | 100%       | ✅ PASS |

Gate Status: PASS ✅
```

### Example 2: Gap Identification

```bash
# Find coverage gaps for existing feature
bmad tea *trace --target-feature "user-authentication"
```

**Output:**

```markdown
## Gap Analysis

### Critical Gaps (BLOCKER)

- None ✅

### High Priority Gaps (PR BLOCKER)

1. **AC-3: Password reset email edge cases**
   - Recommend: Add 1.3-API-001 (email service integration)
   - Impact: Users may not recover accounts in error scenarios
```

### Example 3: Duplicate Coverage Detection

```bash
# Check for redundant tests
bmad tea *trace --check-duplicate-coverage true
```

**Output:**

```markdown
## Duplicate Coverage Detected

⚠️ AC-1 (login validation) is tested at multiple levels:

- 1.3-E2E-001 (full user journey) ✅ Appropriate
- 1.3-UNIT-001 (business logic) ✅ Appropriate
- 1.3-COMPONENT-001 (form validation) ⚠️ Redundant with UNIT-001

Recommendation: Remove 1.3-COMPONENT-001 or consolidate with UNIT-001
```

---
## Troubleshooting

### "No tests found for this story"

- Run `*atdd` workflow first to generate failing acceptance tests
- Check test file naming conventions (may not match story ID pattern)
- Verify test directory path is correct (`test_dir` variable)

### "Cannot determine coverage status"

- Tests may lack explicit mapping (no test IDs, unclear describe blocks)
- Add test IDs: `{STORY_ID}-{LEVEL}-{SEQ}` (e.g., `1.3-E2E-001`)
- Use Given-When-Then narrative in test descriptions
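
The ID convention can be checked mechanically. A small sketch (the level names are taken from the examples in this document; treat it as an illustration, not shipped tooling):

```python
import re

# {STORY_ID}-{LEVEL}-{SEQ}, e.g. 1.3-E2E-001
TEST_ID_PATTERN = re.compile(r"^\d+\.\d+-(E2E|API|COMPONENT|UNIT)-\d{3}$")

def is_valid_test_id(test_id: str) -> bool:
    """True when `test_id` follows the {STORY_ID}-{LEVEL}-{SEQ} convention."""
    return TEST_ID_PATTERN.fullmatch(test_id) is not None
```

A lint step over test titles can use this to reject IDs like `1.3-e2e-1` before they reach the matrix.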

### "P0 coverage below 100%"

- This is a **BLOCKER** - do not release
- Identify missing P0 tests in gap analysis
- Run `*atdd` workflow to generate missing tests
- Verify P0 classification is correct with stakeholders

### "Duplicate coverage detected"

- Review `selective-testing.md` knowledge fragment
- Determine if overlap is acceptable (defense in depth) or wasteful
- Consolidate tests at appropriate level (logic → unit, journey → E2E)

---
## Integration with Other Workflows

- **testarch-test-design** → `*trace` - Define priorities, then trace coverage
- **testarch-atdd** → `*trace` - Generate tests, then validate coverage
- `*trace` → **testarch-automate** - Identify gaps, then automate regression
- `*trace` → **testarch-gate** - Generate metrics, then apply quality gates
- `*trace` → **testarch-test-review** - Flag quality issues, then review tests

---

## Best Practices

1. **Run Trace After Test Implementation**
   - Don't run `*trace` before tests exist (run `*atdd` first)
   - Trace is most valuable after the initial test suite is written

2. **Prioritize by Risk**
   - P0 gaps are BLOCKERS (must fix before release)
   - P1 gaps are HIGH priority (block PR merge)
   - P2 gaps are MEDIUM priority (address in nightly runs)
   - P3 gaps are acceptable (fix if time permits)

3. **Explicit Mapping**
   - Use test IDs (`1.3-E2E-001`) for clear traceability
   - Reference criteria in describe blocks
   - Use Given-When-Then narrative

4. **Avoid Duplicate Coverage**
   - Test each behavior at the appropriate level only
   - Unit tests for logic, E2E for journeys
   - Only overlap for defense in depth on critical paths

5. **Generate Gate-Ready Artifacts**
   - Enable `generate_gate_yaml` for CI/CD integration
   - Use YAML snippets in pipeline quality gates
   - Export metrics for dashboard visualization

---

## Related Commands

- `bmad tea *test-design` - Define test priorities and risk assessment
- `bmad tea *atdd` - Generate failing acceptance tests for gaps
- `bmad tea *automate` - Expand regression suite based on gaps
- `bmad tea *gate` - Apply quality gates using traceability metrics
- `bmad tea *test-review` - Review test quality issues flagged by trace

---
## Resources

- [Instructions](./instructions.md) - Detailed workflow steps
- [Checklist](./checklist.md) - Validation checklist
- [Template](./trace-template.md) - Traceability matrix template
- [Knowledge Base](../../testarch/knowledge/) - Testing best practices

---

<!-- Powered by BMAD-CORE™ -->
267
src/modules/bmm/workflows/testarch/trace/checklist.md
Normal file
@@ -0,0 +1,267 @@
# Requirements Traceability - Validation Checklist

**Workflow:** `testarch-trace`
**Purpose:** Ensure complete and accurate traceability matrix with actionable gap analysis

---

## Prerequisites Validation

- [ ] Acceptance criteria are available (from story file OR inline)
- [ ] Test suite exists (or gaps are acknowledged and documented)
- [ ] Test directory path is correct (`test_dir` variable)
- [ ] Story file is accessible (if using BMad mode)
- [ ] Knowledge base is loaded (test-priorities, traceability, risk-governance)

---
## Context Loading

- [ ] Story file read successfully (if applicable)
- [ ] Acceptance criteria extracted correctly
- [ ] Story ID identified (e.g., 1.3)
- [ ] `test-design.md` loaded (if available)
- [ ] `tech-spec.md` loaded (if available)
- [ ] `PRD.md` loaded (if available)
- [ ] Relevant knowledge fragments loaded from `tea-index.csv`

---
## Test Discovery and Cataloging

- [ ] Tests auto-discovered using multiple strategies (test IDs, describe blocks, file paths)
- [ ] Tests categorized by level (E2E, API, Component, Unit)
- [ ] Test metadata extracted:
  - [ ] Test IDs (e.g., 1.3-E2E-001)
  - [ ] Describe/context blocks
  - [ ] It blocks (individual test cases)
  - [ ] Given-When-Then structure (if BDD)
  - [ ] Priority markers (P0/P1/P2/P3)
- [ ] All relevant test files found (no tests missed due to naming conventions)

---
## Criteria-to-Test Mapping

- [ ] Each acceptance criterion mapped to tests (or marked as NONE)
- [ ] Explicit references found (test IDs, describe blocks mentioning criterion)
- [ ] Test level documented (E2E, API, Component, Unit)
- [ ] Given-When-Then narrative verified for alignment
- [ ] Traceability matrix table generated:
  - [ ] Criterion ID
  - [ ] Description
  - [ ] Test ID
  - [ ] Test File
  - [ ] Test Level
  - [ ] Coverage Status

---
## Coverage Classification

- [ ] Coverage status classified for each criterion:
  - [ ] **FULL** - All scenarios validated at appropriate level(s)
  - [ ] **PARTIAL** - Some coverage but missing edge cases or levels
  - [ ] **NONE** - No test coverage at any level
  - [ ] **UNIT-ONLY** - Only unit tests (missing integration/E2E validation)
  - [ ] **INTEGRATION-ONLY** - Only API/Component tests (missing unit confidence)
- [ ] Classification justifications provided
- [ ] Edge cases considered in FULL vs PARTIAL determination

---
## Duplicate Coverage Detection

- [ ] Duplicate coverage checked across test levels
- [ ] Acceptable overlap identified (defense in depth for critical paths)
- [ ] Unacceptable duplication flagged (same validation at multiple levels)
- [ ] Recommendations provided for consolidation
- [ ] Selective testing principles applied

---
## Gap Analysis

- [ ] Coverage gaps identified:
  - [ ] Criteria with NONE status
  - [ ] Criteria with PARTIAL status
  - [ ] Criteria with UNIT-ONLY status
  - [ ] Criteria with INTEGRATION-ONLY status
- [ ] Gaps prioritized by risk level using test-priorities framework:
  - [ ] **CRITICAL** - P0 criteria without FULL coverage (BLOCKER)
  - [ ] **HIGH** - P1 criteria without FULL coverage (PR blocker)
  - [ ] **MEDIUM** - P2 criteria without FULL coverage (nightly gap)
  - [ ] **LOW** - P3 criteria without FULL coverage (acceptable)
- [ ] Specific test recommendations provided for each gap:
  - [ ] Suggested test level (E2E, API, Component, Unit)
  - [ ] Test description (Given-When-Then)
  - [ ] Recommended test ID (e.g., 1.3-E2E-004)
  - [ ] Explanation of why test is needed

---
## Coverage Metrics

- [ ] Overall coverage percentage calculated (FULL coverage / total criteria)
- [ ] P0 coverage percentage calculated
- [ ] P1 coverage percentage calculated
- [ ] P2 coverage percentage calculated (if applicable)
- [ ] Coverage by level calculated:
  - [ ] E2E coverage %
  - [ ] API coverage %
  - [ ] Component coverage %
  - [ ] Unit coverage %

---
## Quality Gate Validation

- [ ] P0 coverage >= 100% (required) ✅ or BLOCKER documented ❌
- [ ] P1 coverage >= 90% (recommended) ✅ or HIGH priority gap documented ⚠️
- [ ] Overall coverage >= 80% (recommended) ✅ or MEDIUM priority gap documented ⚠️
- [ ] Gate status determined: PASS / WARN / FAIL

---
## Test Quality Verification

For each mapped test, verify:

- [ ] Explicit assertions are present (not hidden in helpers)
- [ ] Test follows Given-When-Then structure
- [ ] No hard waits or sleeps (deterministic waiting only)
- [ ] Self-cleaning (test cleans up its data)
- [ ] File size < 300 lines
- [ ] Test duration < 90 seconds

Quality issues flagged:

- [ ] **BLOCKER** issues identified (missing assertions, hard waits, flaky patterns)
- [ ] **WARNING** issues identified (large files, slow tests, unclear structure)
- [ ] **INFO** issues identified (style inconsistencies, missing documentation)

Knowledge fragments referenced:

- [ ] `test-quality.md` for Definition of Done
- [ ] `fixture-architecture.md` for self-cleaning patterns
- [ ] `network-first.md` for Playwright best practices
- [ ] `data-factories.md` for test data patterns

---
## Deliverables Generated

### Traceability Matrix Markdown

- [ ] File created at `{output_folder}/traceability-matrix.md`
- [ ] Template from `trace-template.md` used
- [ ] Full mapping table included
- [ ] Coverage status section included
- [ ] Gap analysis section included
- [ ] Quality assessment section included
- [ ] Recommendations section included

### Gate YAML Snippet (if enabled)

- [ ] YAML snippet generated
- [ ] Story ID included
- [ ] Coverage metrics included (overall, p0, p1, p2)
- [ ] Gap counts included (critical, high, medium, low)
- [ ] Status included (PASS / WARN / FAIL)
- [ ] Recommendations included

### Coverage Badge/Metric (if enabled)

- [ ] Badge markdown generated
- [ ] Metrics exported to JSON for CI/CD integration

### Updated Story File (if enabled)

- [ ] "Traceability" section added to story markdown
- [ ] Link to traceability matrix included
- [ ] Coverage summary included
- [ ] Gate status included

---
## Quality Assurance

### Accuracy Checks

- [ ] All acceptance criteria accounted for (none skipped)
- [ ] Test IDs correctly formatted (e.g., 1.3-E2E-001)
- [ ] File paths are correct and accessible
- [ ] Coverage percentages calculated correctly
- [ ] No false positives (tests incorrectly mapped to criteria)
- [ ] No false negatives (existing tests missed in mapping)

### Completeness Checks

- [ ] All test levels considered (E2E, API, Component, Unit)
- [ ] All priorities considered (P0, P1, P2, P3)
- [ ] All coverage statuses used appropriately (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- [ ] All gaps have recommendations
- [ ] All quality issues have severity and remediation guidance

### Actionability Checks

- [ ] Recommendations are specific (not generic)
- [ ] Test IDs suggested for new tests
- [ ] Given-When-Then provided for recommended tests
- [ ] Impact explained for each gap
- [ ] Priorities clear (CRITICAL, HIGH, MEDIUM, LOW)

---
## Non-Prescriptive Validation

- [ ] Traceability format adapted to team needs (not a rigid template)
- [ ] Examples are minimal and focused on patterns
- [ ] Teams can extend with custom classifications
- [ ] Integration with external systems supported (JIRA, Azure DevOps)
- [ ] Compliance requirements considered (if applicable)

---

## Documentation and Communication

- [ ] Traceability matrix is readable and well-formatted
- [ ] Tables render correctly in markdown
- [ ] Code blocks have proper syntax highlighting
- [ ] Links are valid and accessible
- [ ] Recommendations are clear and prioritized
- [ ] Gate status is prominent and unambiguous

---
## Final Validation

- [ ] All prerequisites met
- [ ] All acceptance criteria mapped or gaps documented
- [ ] P0 coverage is 100% OR documented as BLOCKER
- [ ] Gap analysis is complete and prioritized
- [ ] Test quality issues identified and flagged
- [ ] Deliverables generated and saved
- [ ] Gate YAML ready for CI/CD integration (if enabled)
- [ ] Story file updated (if enabled)
- [ ] Workflow completed successfully

---

## Sign-Off

**Traceability Status:**

- [ ] ✅ PASS - All quality gates met, no critical gaps
- [ ] ⚠️ WARN - P1 gaps exist, address before PR merge
- [ ] ❌ FAIL - P0 gaps exist, BLOCKER for release

**Next Actions:**

- If PASS: Proceed to `*gate` workflow or PR merge
- If WARN: Address HIGH priority gaps, re-run `*trace`
- If FAIL: Run `*atdd` to generate missing P0 tests, re-run `*trace`

---

<!-- Powered by BMAD-CORE™ -->
@@ -1,39 +1,558 @@
<!-- Powered by BMAD-CORE™ -->
# Requirements Traceability - Instructions v4.0

# Requirements Traceability v3.0
**Workflow:** `testarch-trace`
**Purpose:** Generate requirements-to-tests traceability matrix with coverage analysis and gap identification
**Agent:** Test Architect (TEA)
**Format:** Pure Markdown v4.0 (no XML blocks)

```xml
<task id="bmad/bmm/testarch/trace" name="Requirements Traceability">
  <llm critical="true">
    <i>Preflight requirements:</i>
    <i>- Story has implemented tests (or acknowledge gaps).</i>
    <i>- Access to source code and specifications is available.</i>
  </llm>
  <flow>
    <step n="1" title="Preflight">
      <action>Confirm prerequisites; halt if tests or specs are unavailable.</action>
    </step>
    <step n="2" title="Trace Coverage">
      <action>Gather acceptance criteria and implemented tests.</action>
      <action>Map each criterion to concrete tests (file + describe/it) using Given-When-Then narrative.</action>
      <action>Classify coverage status as FULL, PARTIAL, NONE, UNIT-ONLY, or INTEGRATION-ONLY.</action>
      <action>Flag severity based on priority (P0 gaps are critical) and recommend additional tests or refactors.</action>
      <action>Build gate YAML coverage summary reflecting totals and gaps.</action>
    </step>
    <step n="3" title="Deliverables">
      <action>Generate traceability report under `docs/qa/assessments`, a coverage matrix per criterion, and gate YAML snippet capturing totals/gaps.</action>
    </step>
  </flow>
  <halt>
    <i>If story lacks implemented tests, pause and advise running `*atdd` or writing tests before tracing.</i>
  </halt>
  <notes>
    <i>Use `{project-root}/bmad/bmm/testarch/tea-index.csv` to load traceability-relevant fragments (risk-governance, selective-testing, test-quality) as needed.</i>
    <i>Coverage definitions: FULL=all scenarios validated, PARTIAL=some coverage, NONE=no validation, UNIT-ONLY=missing higher-level validation, INTEGRATION-ONLY=lacks lower-level confidence.</i>
    <i>Ensure assertions stay explicit and avoid duplicate coverage.</i>
  </notes>
  <output>
    <i>Traceability matrix and gate snippet ready for review.</i>
  </output>
</task>
```

---
## Overview

This workflow creates a comprehensive traceability matrix that maps acceptance criteria to implemented tests, identifies coverage gaps, and provides actionable recommendations for improving test coverage. It supports both BMad-integrated mode (with story files and test design) and standalone mode (with inline acceptance criteria).

**Key Capabilities:**

- Map acceptance criteria to specific test cases across all levels (E2E, API, Component, Unit)
- Classify coverage status (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- Prioritize gaps by risk level (P0/P1/P2/P3) using test-priorities framework
- Generate gate-ready YAML snippets for CI/CD integration
- Detect duplicate coverage across test levels
- Verify explicit assertions in test cases

---
## Prerequisites

**Required:**

- Acceptance criteria (from story file OR provided inline)
- Implemented test suite (or acknowledge gaps to be addressed)

**Recommended:**

- `test-design.md` (for risk assessment and priority context)
- `tech-spec.md` (for technical implementation context)
- Test framework configuration (playwright.config.ts, jest.config.js, etc.)

**Halt Conditions:**

- If story lacks any implemented tests AND no gaps are acknowledged, recommend running `*atdd` workflow first
- If acceptance criteria are completely missing, halt and request them

---
## Workflow Steps

### Step 1: Load Context and Knowledge Base

**Actions:**

1. Load relevant knowledge fragments from `{project-root}/bmad/bmm/testarch/tea-index.csv`:
   - `traceability.md` - Requirements mapping patterns
   - `test-priorities.md` - P0/P1/P2/P3 risk framework
   - `risk-governance.md` - Risk-based testing approach
   - `test-quality.md` - Definition of Done for tests
   - `selective-testing.md` - Duplicate coverage patterns

2. Read story file (if provided):
   - Extract acceptance criteria
   - Identify story ID (e.g., 1.3)
   - Note any existing test design or priority information

3. Read related BMad artifacts (if available):
   - `test-design.md` - Risk assessment and test priorities
   - `tech-spec.md` - Technical implementation details
   - `PRD.md` - Product requirements context

**Output:** Complete understanding of requirements, priorities, and existing context

---
### Step 2: Discover and Catalog Tests

**Actions:**

1. Auto-discover test files related to the story:
   - Search for test IDs (e.g., `1.3-E2E-001`, `1.3-UNIT-005`)
   - Search for describe blocks mentioning feature name
   - Search for file paths matching feature directory
   - Use `glob` to find test files in `{test_dir}`

2. Categorize tests by level:
   - **E2E Tests**: Full user journeys through UI
   - **API Tests**: HTTP contract and integration tests
   - **Component Tests**: UI component behavior in isolation
   - **Unit Tests**: Business logic and pure functions

3. Extract test metadata:
   - Test ID (if present)
   - Describe/context blocks
   - It blocks (individual test cases)
   - Given-When-Then structure (if BDD)
   - Assertions used
   - Priority markers (P0/P1/P2/P3)

**Output:** Complete catalog of all tests for this feature
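
The discovery pass above can be sketched as a scan for test IDs (the `*.spec.ts` naming and the ID format are assumptions based on the examples in this document):

```python
import re
from pathlib import Path

TEST_ID_RE = re.compile(r"\b(\d+\.\d+)-(E2E|API|COMPONENT|UNIT)-\d{3}\b")

def discover_test_ids(text: str, story_id: str) -> list[str]:
    """Return test IDs found in `text` that belong to `story_id`."""
    return [m.group(0) for m in TEST_ID_RE.finditer(text) if m.group(1) == story_id]

def catalog_tests(test_dir: str, story_id: str) -> dict[str, list[str]]:
    """Group discovered test IDs by level (E2E/API/COMPONENT/UNIT)."""
    catalog: dict[str, list[str]] = {}
    for path in Path(test_dir).rglob("*.spec.ts"):
        for test_id in discover_test_ids(path.read_text(encoding="utf-8"), story_id):
            catalog.setdefault(test_id.split("-")[1], []).append(test_id)
    return catalog
```

Describe-block and file-path matching would layer on top of this; ID scanning alone misses tests that lack explicit IDs.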

---
### Step 3: Map Criteria to Tests

**Actions:**

1. For each acceptance criterion:
   - Search for explicit references (test IDs, describe blocks mentioning criterion)
   - Map to specific test files and it blocks
   - Use Given-When-Then narrative to verify alignment
   - Document test level (E2E, API, Component, Unit)

2. Build traceability matrix:

   ```
   | Criterion ID | Description | Test ID     | Test File        | Test Level | Coverage Status |
   | ------------ | ----------- | ----------- | ---------------- | ---------- | --------------- |
   | AC-1         | User can... | 1.3-E2E-001 | e2e/auth.spec.ts | E2E        | FULL            |
   ```

3. Classify coverage status for each criterion:
   - **FULL**: All scenarios validated at appropriate level(s)
   - **PARTIAL**: Some coverage but missing edge cases or levels
   - **NONE**: No test coverage at any level
   - **UNIT-ONLY**: Only unit tests (missing integration/E2E validation)
   - **INTEGRATION-ONLY**: Only API/Component tests (missing unit confidence)

4. Check for duplicate coverage:
   - Same behavior tested at multiple levels unnecessarily
   - Flag violations of selective testing principles
   - Recommend consolidation where appropriate

**Output:** Complete traceability matrix with coverage classifications
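
The classification rule can be expressed as a small function; a sketch under the assumption that scenario-level completeness (FULL vs PARTIAL) is judged by the reviewer and passed in as a flag:

```python
def classify_coverage(levels: set[str], missing_scenarios: bool = False) -> str:
    """Classify one criterion from the set of test levels that cover it."""
    if not levels:
        return "NONE"
    if missing_scenarios:
        return "PARTIAL"  # covered, but known edge cases are untested
    if levels == {"UNIT"}:
        return "UNIT-ONLY"
    if levels <= {"API", "COMPONENT"}:
        return "INTEGRATION-ONLY"
    return "FULL"
```

For example, a criterion covered only by `{"API"}` classifies as INTEGRATION-ONLY, while `{"E2E", "UNIT"}` classifies as FULL.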

---
### Step 4: Analyze Gaps and Prioritize

**Actions:**

1. Identify coverage gaps:
   - List criteria with NONE, PARTIAL, UNIT-ONLY, or INTEGRATION-ONLY status
   - Assign severity based on test-priorities framework:
     - **CRITICAL**: P0 criteria without FULL coverage (blocks release)
     - **HIGH**: P1 criteria without FULL coverage (PR blocker)
     - **MEDIUM**: P2 criteria without FULL coverage (nightly test gap)
     - **LOW**: P3 criteria without FULL coverage (acceptable gap)

2. Recommend specific tests to add:
   - Suggest test level (E2E, API, Component, Unit)
   - Provide test description (Given-When-Then)
   - Recommend test ID (e.g., `1.3-E2E-004`)
   - Explain why this test is needed

3. Calculate coverage metrics:
   - Overall coverage percentage (criteria with FULL coverage / total criteria)
   - P0 coverage percentage (critical paths)
   - P1 coverage percentage (high priority)
   - Coverage by level (E2E%, API%, Component%, Unit%)

4. Check against quality gates:
   - P0 coverage >= 100% (required)
   - P1 coverage >= 90% (recommended)
   - Overall coverage >= 80% (recommended)

**Output:** Prioritized gap analysis with actionable recommendations
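
The metric and gate calculations can be sketched as follows (criterion records are hypothetical dicts with `priority` and `status` keys; thresholds come from sub-step 4):

```python
def coverage_pct(criteria: list[dict]) -> float:
    """Percent of criteria with FULL coverage (0.0 for an empty list)."""
    if not criteria:
        return 0.0
    full = sum(1 for c in criteria if c["status"] == "FULL")
    return round(100 * full / len(criteria), 1)

def gate_status(criteria: list[dict]) -> str:
    """Apply the P0/P1/overall thresholds listed in this step."""
    p0 = [c for c in criteria if c["priority"] == "P0"]
    p1 = [c for c in criteria if c["priority"] == "P1"]
    if p0 and coverage_pct(p0) < 100:
        return "FAIL"  # P0 gap blocks release
    if (p1 and coverage_pct(p1) < 90) or coverage_pct(criteria) < 80:
        return "WARN"  # address before PR merge
    return "PASS"
```

A single PARTIAL P0 criterion is enough to flip the gate to FAIL, which matches the BLOCKER semantics above.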

---
|
||||
|
||||
### Step 5: Verify Test Quality
|
||||
|
||||
**Actions:**
|
||||
|
||||
1. For each mapped test, verify:
|
||||
- Explicit assertions are present (not hidden in helpers)
|
||||
- Test follows Given-When-Then structure
|
||||
- No hard waits or sleeps
|
||||
- Self-cleaning (test cleans up its data)
|
||||
- File size < 300 lines
|
||||
- Test duration < 90 seconds
|
||||
|
||||
2. Flag quality issues:
|
||||
- **BLOCKER**: Missing assertions, hard waits, flaky patterns
|
||||
- **WARNING**: Large files, slow tests, unclear structure
|
||||
- **INFO**: Style inconsistencies, missing documentation
|
||||
|
||||
3. Reference knowledge fragments:
|
||||
- `test-quality.md` for Definition of Done
|
||||
- `fixture-architecture.md` for self-cleaning patterns
|
||||
- `network-first.md` for Playwright best practices
|
||||
- `data-factories.md` for test data patterns
|
||||
|
||||
**Output:** Quality assessment for each test with improvement recommendations
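Some of these checks lend themselves to lint-style automation. This sketch greps spec files for common hard-wait patterns and counts lines; the regexes are illustrative rather than exhaustive, and assertions or Given-When-Then structure still need human review:

```python
import re
from pathlib import Path

# Common hard-wait patterns across Playwright, Python, and Cypress suites
HARD_WAIT = re.compile(r"page\.waitForTimeout|time\.sleep|cy\.wait\(\s*\d")

def quality_issues(spec_path: Path, max_lines: int = 300) -> list[str]:
    """Heuristic pre-checks for the quality criteria listed above."""
    text = spec_path.read_text(encoding="utf-8")
    issues = []
    if HARD_WAIT.search(text):
        issues.append("BLOCKER: hard wait/sleep detected")
    if "expect(" not in text and "assert" not in text:
        issues.append("BLOCKER: no explicit assertions found")
    if text.count("\n") + 1 > max_lines:
        issues.append(f"WARNING: file exceeds {max_lines} lines")
    return issues
```

Running this over every mapped spec file gives a first-pass BLOCKER/WARNING list before the manual review.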

---

### Step 6: Generate Deliverables

**Actions:**

1. Create the traceability matrix markdown file:
   - Use the template from `trace-template.md`
   - Include the full mapping table
   - Add a coverage status section
   - Add a gap analysis section
   - Add a quality assessment section
   - Add a recommendations section
   - Save to `{output_folder}/traceability-matrix.md`

2. Generate a gate YAML snippet (if enabled):

```yaml
traceability:
  story_id: '1.3'
  coverage:
    overall: 85%
    p0: 100%
    p1: 90%
    p2: 75%
  gaps:
    critical: 0
    high: 1
    medium: 2
  status: 'PASS' # or 'FAIL' if P0 < 100%
```

3. Create a coverage badge/metric (if enabled):
   - Generate coverage badge markdown
   - Export metrics to JSON for CI/CD integration
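A pipeline can consume the exported JSON with a tiny check script; the field names here mirror the gate YAML snippet but are an assumption, so adapt them to your actual export:

```python
def gate_ok(metrics: dict) -> bool:
    """Hard gate only: P0 fully covered and no critical gaps.
    Softer thresholds (P1, overall) are reported but do not block."""
    return metrics["coverage"]["p0"] >= 100 and metrics["gaps"]["critical"] == 0

# In CI (sketch):
#   import json, sys
#   with open("traceability-metrics.json") as f:
#       sys.exit(0 if gate_ok(json.load(f)) else 1)
```

A non-zero exit fails the job, which is how the "do not release" rule for P0 gaps gets enforced automatically.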

4. Update the story file (if enabled):
   - Add a "Traceability" section to the story markdown
   - Link to the traceability matrix
   - Include a coverage summary
   - Add the gate status

**Output:** Complete traceability documentation ready for review and CI/CD integration

---

## Non-Prescriptive Approach

**Minimal Examples:** This workflow provides principles and patterns, not rigid templates. Teams should adapt the traceability format to their needs.

**Key Patterns to Follow:**

- Map criteria to tests explicitly (don't rely on inference alone)
- Prioritize by risk (P0 gaps are critical, P3 gaps are acceptable)
- Check coverage at the appropriate levels (E2E for journeys, Unit for logic)
- Verify test quality (explicit assertions, no flakiness)
- Generate gate-ready artifacts (YAML snippets for CI/CD)
**Extend as Needed:**

- Add custom coverage classifications
- Integrate with code coverage tools (Istanbul, NYC)
- Link to external traceability systems (JIRA, Azure DevOps)
- Add compliance or regulatory requirements

---

## Coverage Classification Details
### FULL Coverage

- All scenarios validated at the appropriate test level(s)
- Edge cases considered
- Both happy path and error paths tested
- Assertions are explicit and complete

### PARTIAL Coverage

- Some scenarios validated, but edge cases are missing
- Only the happy path tested (error paths missing)
- Assertions present but incomplete
- Coverage exists but needs enhancement

### NONE Coverage

- No tests found for this criterion
- A complete gap requiring new tests
- Critical if P0/P1, acceptable if P3

### UNIT-ONLY Coverage

- Only unit tests exist (business logic validated)
- Missing integration or E2E validation
- Risk: the implementation may not work end-to-end
- Recommendation: add integration or E2E tests for critical paths

### INTEGRATION-ONLY Coverage

- Only API or Component tests exist
- Missing unit-test confidence for business logic
- Risk: logic errors may not be caught quickly
- Recommendation: add unit tests for complex algorithms or state machines
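Taken together, the five statuses follow mechanically from which levels exercise a criterion and whether the discovered tests cover all of its scenarios. A minimal classifier sketch, where both inputs come from the mapping step:

```python
def classify(levels: set[str], complete: bool = True) -> str:
    """Derive the coverage status for one criterion.
    `levels` is the set of test levels that exercise it;
    `complete` is whether those tests cover all scenarios
    (edge cases and error paths, not just the happy path)."""
    if not levels:
        return "NONE"
    if not complete:
        return "PARTIAL"
    if levels == {"UNIT"}:
        return "UNIT-ONLY"
    if levels <= {"API", "COMPONENT"}:
        return "INTEGRATION-ONLY"
    return "FULL"
```

The `complete` judgment is the part that still needs an analyst; the level bookkeeping can be automated.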

---

## Duplicate Coverage Detection

Use selective testing principles from `selective-testing.md`:

**Acceptable Overlap:**

- Unit tests for business logic + E2E tests for the user journey (different aspects)
- API tests for the contract + E2E tests for the full workflow (defense in depth for critical paths)

**Unacceptable Duplication:**

- The same validation at multiple levels (e.g., E2E testing math logic better suited to unit tests)
- Multiple E2E tests covering an identical user path
- Component tests duplicating unit-test logic

**Recommendation Pattern:**

- Test logic at the unit level
- Test integration at the API/Component level
- Test user experience at the E2E level
- Avoid testing framework behavior at any level
---

## Integration with BMad Artifacts

### With test-design.md

- Use the risk assessment to prioritize gap remediation
- Reference test priorities (P0/P1/P2/P3) for severity classification
- Align traceability with the originally planned test coverage

### With tech-spec.md

- Understand technical implementation details
- Map criteria to specific code modules
- Verify tests cover technical edge cases

### With PRD.md

- Understand the full product context
- Verify acceptance criteria align with product goals
- Check for unstated requirements that need coverage

---

## Quality Gates
### P0 Coverage (Critical Paths)

- **Requirement:** 100% FULL coverage
- **Severity:** BLOCKER if not met
- **Action:** Do not release until P0 coverage is complete

### P1 Coverage (High Priority)

- **Requirement:** 90% FULL coverage
- **Severity:** HIGH if not met
- **Action:** Block PR merge until addressed

### P2 Coverage (Medium Priority)

- **Requirement:** No strict requirement (80% recommended)
- **Severity:** MEDIUM if gaps exist
- **Action:** Address in nightly test improvements

### P3 Coverage (Low Priority)

- **Requirement:** None
- **Severity:** LOW if gaps exist
- **Action:** Optional - add if time permits

---

## Example Traceability Matrix

````markdown
# Traceability Matrix - Story 1.3

**Story:** User Authentication
**Date:** 2025-10-14
**Status:** 79% Coverage (1 HIGH gap)

## Coverage Summary

| Priority  | Total Criteria | FULL Coverage | Coverage % | Status  |
| --------- | -------------- | ------------- | ---------- | ------- |
| P0        | 3              | 3             | 100%       | ✅ PASS |
| P1        | 5              | 4             | 80%        | ⚠️ WARN |
| P2        | 4              | 3             | 75%        | ✅ PASS |
| P3        | 2              | 1             | 50%        | ✅ PASS |
| **Total** | **14**         | **11**        | **79%**    | ⚠️ WARN |

## Detailed Mapping

### AC-1: User can login with email and password (P0)

- **Coverage:** FULL ✅
- **Tests:**
  - `1.3-E2E-001` - tests/e2e/auth.spec.ts:12
    - Given: User has valid credentials
    - When: User submits login form
    - Then: User is redirected to dashboard
  - `1.3-UNIT-001` - tests/unit/auth-service.spec.ts:8
    - Given: Valid email and password hash
    - When: validateCredentials is called
    - Then: Returns user object

### AC-2: User sees error for invalid credentials (P0)

- **Coverage:** FULL ✅
- **Tests:**
  - `1.3-E2E-002` - tests/e2e/auth.spec.ts:28
    - Given: User has invalid password
    - When: User submits login form
    - Then: Error message is displayed
  - `1.3-UNIT-002` - tests/unit/auth-service.spec.ts:18
    - Given: Invalid password hash
    - When: validateCredentials is called
    - Then: Throws AuthenticationError

### AC-3: User can reset password via email (P1)

- **Coverage:** PARTIAL ⚠️
- **Tests:**
  - `1.3-E2E-003` - tests/e2e/auth.spec.ts:44
    - Given: User requests password reset
    - When: User clicks reset link
    - Then: User can set new password
- **Gaps:**
  - Missing: Email delivery validation
  - Missing: Expired token handling
  - Missing: Unit test for token generation
- **Recommendation:** Add `1.3-API-001` for email service integration and `1.3-UNIT-003` for token logic

## Gap Analysis

### Critical Gaps (BLOCKER)

- None ✅

### High Priority Gaps (PR BLOCKER)

1. **AC-3: Password reset email edge cases**
   - Missing tests for expired tokens, invalid tokens, email failures
   - Recommend: `1.3-API-001` (email service integration) and `1.3-E2E-004` (error paths)
   - Impact: Users may not be able to recover accounts in error scenarios

### Medium Priority Gaps (Nightly)

1. **AC-7: Session timeout handling** - UNIT-ONLY coverage (missing E2E validation)

## Quality Assessment

### Tests with Issues

- `1.3-E2E-001` ⚠️ - 145 seconds (exceeds 90s target) - Optimize fixture setup
- `1.3-UNIT-005` ⚠️ - 320 lines (exceeds 300-line limit) - Split into multiple test files

### Tests Passing Quality Gates

- 11/13 tests (85%) meet all quality criteria ✅

## Gate YAML Snippet

```yaml
traceability:
  story_id: '1.3'
  coverage:
    overall: 79%
    p0: 100%
    p1: 80%
    p2: 75%
    p3: 50%
  gaps:
    critical: 0
    high: 1
    medium: 1
    low: 1
  status: 'WARN' # P1 coverage below 90% threshold
  recommendations:
    - 'Add 1.3-API-001 for email service integration'
    - 'Add 1.3-E2E-004 for password reset error paths'
    - 'Optimize 1.3-E2E-001 performance (145s → <90s)'
```

## Recommendations

1. **Address High Priority Gap:** Add password reset edge case tests before PR merge
2. **Optimize Slow Test:** Refactor `1.3-E2E-001` to use faster fixture setup
3. **Split Large Test:** Break `1.3-UNIT-005` into focused test files
4. **Enhance P2 Coverage:** Add E2E validation for session timeout (currently UNIT-ONLY)
````

---

## Validation Checklist

Before completing this workflow, verify:

- ✅ All acceptance criteria are mapped to tests (or gaps are documented)
- ✅ Coverage status is classified (FULL, PARTIAL, NONE, UNIT-ONLY, INTEGRATION-ONLY)
- ✅ Gaps are prioritized by risk level (P0/P1/P2/P3)
- ✅ P0 coverage is 100% or blockers are documented
- ✅ Duplicate coverage is identified and flagged
- ✅ Test quality is assessed (assertions, structure, performance)
- ✅ The traceability matrix is generated and saved
- ✅ The gate YAML snippet is generated (if enabled)
- ✅ The story file is updated with a traceability section (if enabled)
- ✅ Recommendations are actionable and specific

---

## Notes

- **Explicit Mapping:** Require tests to reference criteria explicitly (test IDs, describe blocks) for maintainability
- **Risk-Based Prioritization:** Use the test-priorities framework (P0/P1/P2/P3) to determine gap severity
- **Quality Over Quantity:** Fewer high-quality tests with FULL coverage beat many low-quality tests with PARTIAL coverage
- **Selective Testing:** Avoid duplicate coverage - test each behavior at the appropriate level only
- **Gate Integration:** Generate YAML snippets that CI/CD pipelines can consume for automated quality gates

---

## Troubleshooting

### "No tests found for this story"

- Run the `*atdd` workflow first to generate failing acceptance tests
- Check test file naming conventions (they may not match the story ID pattern)
- Verify the test directory path is correct

### "Cannot determine coverage status"

- Tests may lack explicit mapping to criteria (no test IDs, unclear describe blocks)
- Review the test structure and add a Given-When-Then narrative
- Add test IDs in the format `{STORY_ID}-{LEVEL}-{SEQ}` (e.g., `1.3-E2E-001`)
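Once specs carry IDs in that format, a pattern like the following can recover them from source text; the regex assumes `MAJOR.MINOR` story IDs and three-digit sequence numbers, so widen it if your IDs differ:

```python
import re

# {STORY_ID}-{LEVEL}-{SEQ}, e.g. 1.3-E2E-001
TEST_ID = re.compile(
    r"\b(?P<story>\d+\.\d+)-(?P<level>E2E|API|COMPONENT|UNIT)-(?P<seq>\d{3})\b"
)

def find_test_ids(source: str, story_id: str) -> list[str]:
    """Return all test IDs in a spec file's contents that belong to one story."""
    return [m.group(0) for m in TEST_ID.finditer(source) if m.group("story") == story_id]
```

Scanning every spec file with this gives the explicit criterion-to-test mapping the classifier needs.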

### "P0 coverage below 100%"

- This is a **BLOCKER** - do not release
- Identify the missing P0 tests in the gap analysis
- Run the `*atdd` workflow to generate the missing tests
- Verify with stakeholders that the P0 classification is correct

### "Duplicate coverage detected"

- Review the selective testing principles in `selective-testing.md`
- Determine whether the overlap is acceptable (defense in depth) or wasteful (same validation at multiple levels)
- Consolidate tests at the appropriate level (logic → unit, integration → API, journey → E2E)

---

## Related Workflows

- **testarch-test-design** - Define test priorities (P0/P1/P2/P3) before tracing
- **testarch-atdd** - Generate failing acceptance tests for the gaps identified
- **testarch-automate** - Expand the regression suite based on traceability findings
- **testarch-gate** - Use the traceability matrix as input for quality gate decisions
- **testarch-test-review** - Review test quality issues flagged in traceability

---

<!-- Powered by BMAD-CORE™ -->
307
src/modules/bmm/workflows/testarch/trace/trace-template.md
Normal file
@@ -0,0 +1,307 @@
# Traceability Matrix - Story {STORY_ID}

**Story:** {STORY_TITLE}
**Date:** {DATE}
**Status:** {OVERALL_COVERAGE}% Coverage ({GAP_COUNT} {GAP_SEVERITY} gap{s})

---

## Coverage Summary

| Priority  | Total Criteria | FULL Coverage | Coverage % | Status       |
| --------- | -------------- | ------------- | ---------- | ------------ |
| P0        | {P0_TOTAL}     | {P0_FULL}     | {P0_PCT}%  | {P0_STATUS}  |
| P1        | {P1_TOTAL}     | {P1_FULL}     | {P1_PCT}%  | {P1_STATUS}  |
| P2        | {P2_TOTAL}     | {P2_FULL}     | {P2_PCT}%  | {P2_STATUS}  |
| P3        | {P3_TOTAL}     | {P3_FULL}     | {P3_PCT}%  | {P3_STATUS}  |
| **Total** | **{TOTAL}**    | **{FULL}**    | **{PCT}%** | **{STATUS}** |

**Legend:**

- ✅ PASS - Coverage meets the quality gate threshold
- ⚠️ WARN - Coverage below threshold but not critical
- ❌ FAIL - Coverage below the minimum threshold (blocker)

---

## Detailed Mapping

### {CRITERION_ID}: {CRITERION_DESCRIPTION} ({PRIORITY})

- **Coverage:** {COVERAGE_STATUS} {STATUS_ICON}
- **Tests:**
  - `{TEST_ID}` - {TEST_FILE}:{LINE}
    - **Given:** {GIVEN}
    - **When:** {WHEN}
    - **Then:** {THEN}
  - `{TEST_ID_2}` - {TEST_FILE_2}:{LINE}
    - **Given:** {GIVEN_2}
    - **When:** {WHEN_2}
    - **Then:** {THEN_2}
- **Gaps:** (if PARTIAL, UNIT-ONLY, or INTEGRATION-ONLY)
  - Missing: {MISSING_SCENARIO_1}
  - Missing: {MISSING_SCENARIO_2}
- **Recommendation:** {RECOMMENDATION_TEXT}

---

### Example: AC-1: User can login with email and password (P0)

- **Coverage:** FULL ✅
- **Tests:**
  - `1.3-E2E-001` - tests/e2e/auth.spec.ts:12
    - **Given:** User has valid credentials
    - **When:** User submits login form
    - **Then:** User is redirected to dashboard
  - `1.3-UNIT-001` - tests/unit/auth-service.spec.ts:8
    - **Given:** Valid email and password hash
    - **When:** validateCredentials is called
    - **Then:** Returns user object

---

### Example: AC-3: User can reset password via email (P1)

- **Coverage:** PARTIAL ⚠️
- **Tests:**
  - `1.3-E2E-003` - tests/e2e/auth.spec.ts:44
    - **Given:** User requests password reset
    - **When:** User clicks reset link in email
    - **Then:** User can set new password
- **Gaps:**
  - Missing: Email delivery validation
  - Missing: Expired token handling (error path)
  - Missing: Invalid token handling (security test)
  - Missing: Unit test for token generation logic
- **Recommendation:** Add `1.3-API-001` for email service integration testing and `1.3-UNIT-003` for token generation logic. Add `1.3-E2E-004` for error path validation (expired/invalid tokens).

---

### Example: AC-7: Session timeout handling (P2)

- **Coverage:** UNIT-ONLY ⚠️
- **Tests:**
  - `1.3-UNIT-006` - tests/unit/session-manager.spec.ts:42
    - **Given:** Session has expired timestamp
    - **When:** isSessionValid is called
    - **Then:** Returns false
- **Gaps:**
  - Missing: E2E validation of timeout behavior in the UI
  - Missing: API test for the session refresh flow
- **Recommendation:** Add `1.3-E2E-005` to validate that the user sees a timeout message and is redirected to login. Add `1.3-API-002` to validate session refresh endpoint behavior.

---

## Gap Analysis

### Critical Gaps (BLOCKER) ❌

{CRITICAL_GAP_COUNT} gaps found. **Do not release until resolved.**

1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P0)
   - Current Coverage: {COVERAGE_STATUS}
   - Missing Tests: {MISSING_TEST_DESCRIPTION}
   - Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})
   - Impact: {IMPACT_DESCRIPTION}

---

### High Priority Gaps (PR BLOCKER) ⚠️

{HIGH_GAP_COUNT} gaps found. **Address before PR merge.**

1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P1)
   - Current Coverage: {COVERAGE_STATUS}
   - Missing Tests: {MISSING_TEST_DESCRIPTION}
   - Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})
   - Impact: {IMPACT_DESCRIPTION}

---

### Medium Priority Gaps (Nightly) ⚠️

{MEDIUM_GAP_COUNT} gaps found. **Address in nightly test improvements.**

1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P2)
   - Current Coverage: {COVERAGE_STATUS}
   - Recommend: {RECOMMENDED_TEST_ID} ({RECOMMENDED_TEST_LEVEL})

---

### Low Priority Gaps (Optional) ℹ️

{LOW_GAP_COUNT} gaps found. **Optional - add if time permits.**

1. **{CRITERION_ID}: {CRITERION_DESCRIPTION}** (P3)
   - Current Coverage: {COVERAGE_STATUS}

---

## Quality Assessment

### Tests with Issues

**BLOCKER Issues** ❌

- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}

**WARNING Issues** ⚠️

- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}

**INFO Issues** ℹ️

- `{TEST_ID}` - {ISSUE_DESCRIPTION} - {REMEDIATION}

---

### Example Quality Issues

**WARNING Issues** ⚠️

- `1.3-E2E-001` - 145 seconds (exceeds 90s target) - Optimize fixture setup to reduce test duration
- `1.3-UNIT-005` - 320 lines (exceeds 300-line limit) - Split into multiple focused test files

**INFO Issues** ℹ️

- `1.3-E2E-002` - Missing Given-When-Then structure - Refactor describe block to use BDD format

---

### Tests Passing Quality Gates

**{PASSING_TEST_COUNT}/{TOTAL_TEST_COUNT} tests ({PASSING_PCT}%) meet all quality criteria** ✅

---

## Duplicate Coverage Analysis

### Acceptable Overlap (Defense in Depth)

- {CRITERION_ID}: Tested at unit (business logic) and E2E (user journey) ✅

### Unacceptable Duplication ⚠️

- {CRITERION_ID}: Same validation at the E2E and Component levels
  - Recommendation: Remove {TEST_ID} or consolidate with {OTHER_TEST_ID}

---

## Coverage by Test Level

| Test Level | Tests             | Criteria Covered     | Coverage %       |
| ---------- | ----------------- | -------------------- | ---------------- |
| E2E        | {E2E_COUNT}       | {E2E_CRITERIA}       | {E2E_PCT}%       |
| API        | {API_COUNT}       | {API_CRITERIA}       | {API_PCT}%       |
| Component  | {COMP_COUNT}      | {COMP_CRITERIA}      | {COMP_PCT}%      |
| Unit       | {UNIT_COUNT}      | {UNIT_CRITERIA}      | {UNIT_PCT}%      |
| **Total**  | **{TOTAL_TESTS}** | **{TOTAL_CRITERIA}** | **{TOTAL_PCT}%** |

---

## Gate YAML Snippet

```yaml
traceability:
  story_id: "{STORY_ID}"
  date: "{DATE}"
  coverage:
    overall: {OVERALL_PCT}%
    p0: {P0_PCT}%
    p1: {P1_PCT}%
    p2: {P2_PCT}%
    p3: {P3_PCT}%
  gaps:
    critical: {CRITICAL_COUNT}
    high: {HIGH_COUNT}
    medium: {MEDIUM_COUNT}
    low: {LOW_COUNT}
  quality:
    passing_tests: {PASSING_COUNT}
    total_tests: {TOTAL_TESTS}
    blocker_issues: {BLOCKER_COUNT}
    warning_issues: {WARNING_COUNT}
  status: "{STATUS}" # PASS / WARN / FAIL
  recommendations:
    - "{RECOMMENDATION_1}"
    - "{RECOMMENDATION_2}"
    - "{RECOMMENDATION_3}"
```

---

## Recommendations

### Immediate Actions (Before PR Merge)

1. **{ACTION_1}** - {DESCRIPTION}
2. **{ACTION_2}** - {DESCRIPTION}

### Short-term Actions (This Sprint)

1. **{ACTION_1}** - {DESCRIPTION}
2. **{ACTION_2}** - {DESCRIPTION}

### Long-term Actions (Backlog)

1. **{ACTION_1}** - {DESCRIPTION}

---

### Example Recommendations

#### Immediate Actions (Before PR Merge)

1. **Add P1 Password Reset Tests** - Implement `1.3-API-001` for email service integration and `1.3-E2E-004` for error path validation. P1 coverage is currently 80%; the target is 90%.
2. **Optimize Slow E2E Test** - Refactor `1.3-E2E-001` to use faster fixture setup. Currently 145s; the target is <90s.

#### Short-term Actions (This Sprint)

1. **Enhance P2 Coverage** - Add E2E validation for session timeout (`1.3-E2E-005`). Currently UNIT-ONLY coverage.
2. **Split Large Test File** - Break `1.3-UNIT-005` (320 lines) into multiple focused test files (<300 lines each).

#### Long-term Actions (Backlog)

1. **Enrich P3 Coverage** - Add tests for edge cases in P3 criteria if time permits.

---

## Related Artifacts

- **Story File:** {STORY_FILE_PATH}
- **Test Design:** {TEST_DESIGN_PATH} (if available)
- **Tech Spec:** {TECH_SPEC_PATH} (if available)
- **Test Files:** {TEST_DIR_PATH}

---

## Sign-Off

**Traceability Assessment:**

- Overall Coverage: {OVERALL_PCT}%
- P0 Coverage: {P0_PCT}% {P0_STATUS}
- P1 Coverage: {P1_PCT}% {P1_STATUS}
- Critical Gaps: {CRITICAL_COUNT}
- High Priority Gaps: {HIGH_COUNT}

**Gate Status:** {STATUS} {STATUS_ICON}

**Next Steps:**

- If PASS ✅: Proceed to the `*gate` workflow or PR merge
- If WARN ⚠️: Address the HIGH priority gaps, then re-run `*trace`
- If FAIL ❌: Run `*atdd` to generate the missing P0 tests, then re-run `*trace`

**Generated:** {DATE}
**Workflow:** testarch-trace v4.0

---

<!-- Powered by BMAD-CORE™ -->

@@ -1,25 +1,99 @@
# Test Architect workflow: trace
name: testarch-trace
description: "Generate requirements-to-tests traceability matrix with coverage analysis and gap identification"
author: "BMad"

# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
date: system-generated

# Workflow components
installed_path: "{project-root}/bmad/bmm/workflows/testarch/trace"
instructions: "{installed_path}/instructions.md"
validation: "{installed_path}/checklist.md"
template: "{installed_path}/trace-template.md"
# Variables and inputs
variables:
  # Target specification
  story_file: "" # Path to story markdown (e.g., bmad/output/story-1.3.md)
  acceptance_criteria: "" # Optional - inline criteria if no story file

  # Test discovery
  test_dir: "{project-root}/tests"
  source_dir: "{project-root}/src"
  auto_discover_tests: true # Automatically find tests related to the story

  # Traceability configuration
  coverage_levels: "e2e,api,component,unit" # Which levels to trace (comma-separated)
  map_by_test_id: true # Use test IDs (e.g., 1.3-E2E-001) for mapping
  map_by_describe: true # Use describe blocks for mapping
  map_by_filename: true # Use file paths for mapping

  # Coverage classification
  require_explicit_mapping: true # Require tests to explicitly reference criteria
  flag_unit_only: true # Flag criteria covered only by unit tests
  flag_integration_only: true # Flag criteria covered only by integration tests
  flag_partial_coverage: true # Flag criteria with incomplete coverage

  # Gap analysis
  prioritize_by_risk: true # Use test-priorities (P0/P1/P2/P3) for gap severity
  suggest_missing_tests: true # Recommend specific tests to add
  check_duplicate_coverage: true # Warn about the same behavior tested at multiple levels

  # Integration with BMad artifacts
  use_test_design: true # Load test-design.md if it exists (risk assessment)
  use_tech_spec: true # Load tech-spec.md if it exists (technical context)
  use_prd: true # Load PRD.md if it exists (requirements context)

  # Output configuration
  output_file: "{output_folder}/traceability-matrix.md"
  default_output_file: "{output_folder}/traceability-matrix.md"
  generate_gate_yaml: true # Create a gate YAML snippet with the coverage summary
  generate_coverage_badge: true # Create a coverage badge/metric
  update_story_file: true # Add a traceability section to the story file

  # Quality gates
  min_p0_coverage: 100 # Percentage (P0 must be 100% covered)
  min_p1_coverage: 90 # Percentage
  min_overall_coverage: 80 # Percentage

  # Advanced options
  auto_load_knowledge: true # Load traceability, risk-governance, test-quality fragments
  include_code_coverage: false # Integrate with code coverage reports (Istanbul, NYC)
  check_assertions: true # Verify explicit assertions in tests
# Required tools
required_tools:
  - read_file # Read story, test files, BMad artifacts
  - write_file # Create traceability matrix, gate YAML
  - list_files # Discover test files
  - search_repo # Find tests by test ID, describe blocks
  - glob # Find test files matching patterns

# Recommended inputs
recommended_inputs:
  - story: "Story markdown with acceptance criteria (required for BMad mode)"
  - test_files: "Test suite for the feature (auto-discovered if not provided)"
  - test_design: "Test design with risk/priority assessment (optional)"
  - tech_spec: "Technical specification (optional)"
  - existing_tests: "Current test suite for analysis"

tags:
  - qa
  - traceability
  - test-architect
  - coverage
  - requirements

execution_hints:
  interactive: false # Minimize prompts
  autonomous: true # Proceed without user input unless blocked
  iterative: true

web_bundle: false