mirror of
https://github.com/bmadcode/BMAD-METHOD.git
synced 2025-12-29 16:14:59 +00:00
docs: massive documentation overhaul + introduce Paige (Documentation Guide agent)
## 📚 Complete Documentation Restructure

**BMM Documentation Hub Created:**

- New centralized documentation system at `src/modules/bmm/docs/`
- 18 comprehensive guides organized by topic (7000+ lines total)
- Clear learning paths for greenfield, brownfield, and quick spec flows
- Professional technical writing standards throughout

**New Documentation:**

- `README.md` - Complete documentation hub with navigation
- `quick-start.md` - 15-minute getting started guide
- `agents-guide.md` - Comprehensive 12-agent reference (45 min read)
- `party-mode.md` - Multi-agent collaboration guide (20 min read)
- `scale-adaptive-system.md` - Deep dive on Levels 0-4 (42 min read)
- `brownfield-guide.md` - Existing codebase development (53 min read)
- `quick-spec-flow.md` - Rapid Level 0-1 development (26 min read)
- `workflows-analysis.md` - Phase 1 workflows (12 min read)
- `workflows-planning.md` - Phase 2 workflows (19 min read)
- `workflows-solutioning.md` - Phase 3 workflows (13 min read)
- `workflows-implementation.md` - Phase 4 workflows (33 min read)
- `workflows-testing.md` - Testing & QA workflows (29 min read)
- `workflow-architecture-reference.md` - Architecture workflow deep-dive
- `workflow-document-project-reference.md` - Document-project workflow reference
- `enterprise-agentic-development.md` - Team collaboration patterns
- `faq.md` - Comprehensive Q&A covering all topics
- `glossary.md` - Complete terminology reference
- `troubleshooting.md` - Common issues and solutions

**Documentation Improvements:**

- Removed all version/date footers (git handles versioning)
- Agent customization docs now include full rebuild process
- Cross-referenced links between all guides
- Reading time estimates for all major docs
- Consistent professional formatting and structure

**Consolidated & Streamlined:**

- Module README (`src/modules/bmm/README.md`) streamlined to lean signpost
- Root README polished with better hierarchy and clear CTAs
- Moved docs from root `docs/` to module-specific locations
- Better separation of user docs vs. developer reference

## 🤖 New Agent: Paige (Documentation Guide)

**Role:** Technical documentation specialist and information architect

**Expertise:**

- Professional technical writing standards
- Documentation structure and organization
- Information architecture and navigation
- User-focused content design
- Style guide enforcement

**Status:** Work in progress - Paige will evolve as documentation needs grow

**Integration:**

- Listed in agents-guide.md, glossary.md, FAQ
- Available for all phases (documentation is continuous)
- Can be customized like all BMM agents

## 🔧 Additional Changes

- Updated agent manifest with Paige
- Updated workflow manifest with new documentation workflows
- Fixed workflow-to-agent mappings across all guides
- Improved root README with clearer Quick Start section
- Better module structure explanations
- Enhanced community links with Discord channel names

**Total Impact:**

- 18 new/restructured documentation files
- 7000+ lines of professional technical documentation
- Complete navigation system with cross-references
- Clear learning paths for all user types
- Foundation for knowledge base (coming in beta)

Co-Authored-By: Claude <noreply@anthropic.com>
454 bmad/bmm/workflows/1-analysis/research/README.md (new file)

@@ -0,0 +1,454 @@

# Research Workflow - Multi-Type Research System

## Overview

The Research Workflow is a comprehensive, adaptive research system that supports multiple research types through an intelligent router pattern. This workflow consolidates various research methodologies into a single, powerful tool that adapts to your specific research needs - from market analysis to technical evaluation to AI prompt generation.

**Version 2.0.0** - Multi-type research system with router-based architecture

## Key Features

### 🔀 Intelligent Research Router

- **6 Research Types**: Market, Deep Prompt, Technical, Competitive, User, Domain
- **Dynamic Instructions**: Loads appropriate instruction set based on research type
- **Adaptive Templates**: Selects optimal output format for research goal
- **Context-Aware**: Adjusts frameworks and methods per research type
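
The router behavior above can be sketched as a small dispatch table. The file names follow the `instructions-{type}.md` / `template-{type}.md` convention used by this workflow, and the fallback reflects that competitive/user/domain reuse the market workflow with a focus variant; the function itself is a hypothetical illustration, not the actual implementation.

```python
# Hypothetical sketch of the router's type-to-instructions mapping.
RESEARCH_TYPES = ["market", "deep_prompt", "technical", "competitive", "user", "domain"]

def route(research_type: str) -> dict:
    """Resolve the instruction and template files for a research type."""
    if research_type not in RESEARCH_TYPES:
        raise ValueError(f"Unknown research type: {research_type}")
    # competitive/user/domain reuse the market workflow with a focus variant
    base = research_type if research_type in ("market", "deep_prompt", "technical") else "market"
    return {
        "instructions": f"instructions-{base.replace('_', '-')}.md",
        "template": f"template-{base.replace('_', '-')}.md",
        "focus": research_type,
    }
```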

### 🔍 Market Research (Type: `market`)

- Real-time web research for current market data
- TAM/SAM/SOM calculations with multiple methodologies
- Competitive landscape analysis and positioning
- Customer persona development and Jobs-to-be-Done
- Porter's Five Forces and strategic frameworks
- Go-to-market strategy recommendations
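
As a toy illustration of the top-down TAM/SAM/SOM sizing this workflow performs (all figures below are invented placeholders, not output of the workflow):

```python
# Top-down market sizing sketch with invented placeholder numbers.
tam = 5_000_000_000            # total addressable market: all spend in the category
sam = tam * 0.20               # serviceable addressable: the segment you can reach
som = sam * 0.05               # serviceable obtainable: a defensible near-term share
print(f"TAM=${tam:,.0f} SAM=${sam:,.0f} SOM=${som:,.0f}")
```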

### 🤖 Deep Research Prompt Generation (Type: `deep_prompt`)

- **Optimized for AI Research Platforms**: ChatGPT Deep Research, Gemini, Grok DeepSearch, Claude Projects
- **Prompt Engineering Best Practices**: Multi-stage research workflows, iterative refinement
- **Platform-Specific Optimization**: Tailored prompts for each AI research tool
- **Context Packaging**: Structures background information for optimal AI understanding
- **Research Question Refinement**: Transforms vague questions into precise research prompts

### 🏗️ Technical/Architecture Research (Type: `technical`)

- Technology evaluation and comparison matrices
- Architecture pattern research and trade-off analysis
- Framework/library assessment with pros/cons
- Technical feasibility studies
- Cost-benefit analysis for technology decisions
- Architecture Decision Records (ADR) generation

### 🎯 Competitive Intelligence (Type: `competitive`)

- Deep competitor analysis and profiling
- Competitive positioning and gap analysis
- Strategic group mapping
- Feature comparison matrices
- Pricing strategy analysis
- Market share and growth tracking

### 👥 User Research (Type: `user`)

- Customer insights and behavioral analysis
- Persona development with demographics and psychographics
- Jobs-to-be-Done framework application
- Customer journey mapping
- Pain point identification
- Willingness-to-pay analysis

### 🌐 Domain/Industry Research (Type: `domain`)

- Industry deep dives and trend analysis
- Regulatory landscape assessment
- Domain expertise synthesis
- Best practices identification
- Standards and compliance requirements
- Emerging patterns and disruptions

## Usage

### Basic Invocation

```bash
workflow research
```

The workflow will prompt you to select a research type.

### Direct Research Type Selection

```bash
# Market research
workflow research --type market

# Deep research prompt generation
workflow research --type deep_prompt

# Technical evaluation
workflow research --type technical

# Competitive intelligence
workflow research --type competitive

# User research
workflow research --type user

# Domain analysis
workflow research --type domain
```

### With Input Documents

```bash
workflow research --type market --input product-brief.md --input competitor-list.md
workflow research --type technical --input requirements.md --input architecture.md
workflow research --type deep_prompt --input research-question.md
```

### Configuration Options

Can be customized through `workflow.yaml`:

- **research_depth**: `quick`, `standard`, or `comprehensive`
- **enable_web_research**: `true`/`false` for real-time data gathering
- **enable_competitor_analysis**: `true`/`false` (market/competitive types)
- **enable_financial_modeling**: `true`/`false` (market type)
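
A plausible shape for these options in `workflow.yaml` (the option names come from the list above; the surrounding layout is an assumption, so check the shipped file for the exact structure):

```yaml
# Assumed layout - verify against the shipped workflow.yaml
research_depth: standard          # quick | standard | comprehensive
enable_web_research: true         # real-time data gathering
enable_competitor_analysis: true  # market/competitive types only
enable_financial_modeling: false  # market type only
```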

## Workflow Structure

### Files Included

```
research/
├── workflow.yaml                 # Multi-type configuration
├── instructions-router.md        # Router logic (loads correct instructions)
├── instructions-market.md        # Market research workflow
├── instructions-deep-prompt.md   # Deep prompt generation workflow
├── instructions-technical.md     # Technical evaluation workflow
├── template-market.md            # Market research report template
├── template-deep-prompt.md       # Research prompt template
├── template-technical.md         # Technical evaluation template
├── checklist.md                  # Universal validation criteria
├── README.md                     # This file
└── claude-code/                  # Claude Code enhancements (optional)
    ├── injections.yaml           # Integration configuration
    └── sub-agents/               # Specialized research agents
        ├── bmm-market-researcher.md
        ├── bmm-trend-spotter.md
        ├── bmm-data-analyst.md
        ├── bmm-competitor-analyzer.md
        ├── bmm-user-researcher.md
        └── bmm-technical-evaluator.md
```

## Workflow Process

### Phase 1: Research Type Selection and Setup

1. Router presents research type menu
2. User selects research type (market, deep_prompt, technical, competitive, user, domain)
3. Router loads appropriate instructions and template
4. Gather research parameters and inputs

### Phase 2: Research Type-Specific Execution

**For Market Research:**

1. Define research objectives and market boundaries
2. Conduct web research across multiple sources
3. Calculate TAM/SAM/SOM with triangulation
4. Develop customer segments and personas
5. Analyze competitive landscape
6. Apply industry frameworks (Porter's Five Forces, etc.)
7. Identify trends and opportunities
8. Develop strategic recommendations
9. Create financial projections (optional)
10. Compile comprehensive report

**For Deep Prompt Generation:**

1. Analyze research question or topic
2. Identify optimal AI research platform (ChatGPT, Gemini, Grok, Claude)
3. Structure research context and background
4. Generate platform-optimized prompt
5. Create multi-stage research workflow
6. Define iteration and refinement strategy
7. Package with context documents
8. Provide execution guidance
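
The steps above typically yield a prompt with a structure along these lines (an illustrative skeleton with `{placeholder}` fields, not a template shipped with the workflow):

```markdown
# Deep Research Prompt: {topic}

## Role and Context
You are researching {topic} for {audience}. Background: {context summary}.

## Research Questions
1. {primary question}
2. {secondary questions}

## Source and Citation Requirements
- Cite source name, date, and URL for every factual claim
- Cross-reference critical claims with at least 2 independent sources
- Mark confidence levels: [Verified], [Single source], [Uncertain]

## Output Format
Executive summary, findings by question, and a References section listing all sources.
```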

**For Technical Research:**

1. Define technical requirements and constraints
2. Identify technologies/frameworks to evaluate
3. Research each option (documentation, community, maturity)
4. Create comparison matrix with criteria
5. Perform trade-off analysis
6. Calculate cost-benefit for each option
7. Generate Architecture Decision Record (ADR)
8. Provide recommendation with rationale
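
Steps 4-5 amount to a weighted scoring exercise; a minimal sketch (the criteria, weights, option names, and 1-5 scores are invented placeholders):

```python
# Weighted comparison sketch; criteria, weights, and 1-5 scores are placeholders.
weights = {"performance": 0.4, "community": 0.3, "learning_curve": 0.3}
scores = {
    "framework_a": {"performance": 5, "community": 3, "learning_curve": 4},
    "framework_b": {"performance": 3, "community": 5, "learning_curve": 5},
}

def weighted_score(option: str) -> float:
    """Sum of criterion scores weighted by business importance."""
    return sum(weights[c] * scores[option][c] for c in weights)

best = max(scores, key=weighted_score)
```

Weighting the criteria up front, as the Best Practices section below advises, keeps the comparison from drifting toward subjective preference.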

**For Competitive/User/Domain:**

- Uses market research workflow with specific focus
- Adapts questions and frameworks to research type
- Customizes output format for target audience

### Phase 3: Validation and Delivery

1. Review outputs against checklist
2. Validate completeness and quality
3. Generate final report/document
4. Provide next steps and recommendations

## Output

### Generated Files by Research Type

**Market Research:**

- `market-research-{product_name}-{date}.md`
- Comprehensive market analysis report (10+ sections)

**Deep Research Prompt:**

- `deep-research-prompt-{date}.md`
- Optimized AI research prompt with context and instructions

**Technical Research:**

- `technical-research-{date}.md`
- Technology evaluation with comparison matrix and ADR

**Competitive Intelligence:**

- `competitive-intelligence-{date}.md`
- Detailed competitor analysis and positioning

**User Research:**

- `user-research-{date}.md`
- Customer insights and persona documentation

**Domain Research:**

- `domain-research-{date}.md`
- Industry deep dive with trends and best practices

## Requirements

### All Research Types

- BMAD Core v6 project structure
- Web search capability (for real-time research)
- Access to research data sources

### Market Research

- Product or business description
- Target customer hypotheses (optional)
- Known competitors list (optional)

### Deep Prompt Research

- Research question or topic
- Background context documents (optional)
- Target AI platform preference (optional)

### Technical Research

- Technical requirements document
- Current architecture (if brownfield)
- Technical constraints list

## Best Practices

### Before Starting

1. **Know Your Research Goal**: Select the most appropriate research type
2. **Gather Context**: Collect relevant documents before starting
3. **Set Depth Level**: Choose appropriate research_depth (quick/standard/comprehensive)
4. **Define Success Criteria**: What decisions will this research inform?

### During Execution

**Market Research:**

- Provide specific product/service details
- Validate market boundaries carefully
- Review TAM/SAM/SOM assumptions
- Challenge competitive positioning

**Deep Prompt Generation:**

- Be specific about research platform target
- Provide rich context documents
- Clarify expected research outcome
- Define iteration strategy

**Technical Research:**

- List all evaluation criteria upfront
- Weight criteria by importance
- Consider long-term implications
- Include cost analysis

### After Completion

1. Review using the validation checklist
2. Update with any missing information
3. Share with stakeholders for feedback
4. Schedule follow-up research if needed
5. Document decisions made based on research

## Research Frameworks Available

### Market Research Frameworks

- TAM/SAM/SOM Analysis
- Porter's Five Forces
- Jobs-to-be-Done (JTBD)
- Technology Adoption Lifecycle
- SWOT Analysis
- Value Chain Analysis

### Technical Research Frameworks

- Trade-off Analysis Matrix
- Architecture Decision Records (ADR)
- Technology Radar
- Comparison Matrix
- Cost-Benefit Analysis
- Technical Risk Assessment

### Deep Prompt Frameworks

- ChatGPT Deep Research Best Practices
- Gemini Deep Research Framework
- Grok DeepSearch Optimization
- Claude Projects Methodology
- Iterative Prompt Refinement
## Data Sources

The workflow leverages multiple data sources:

- Industry reports and publications
- Government statistics and databases
- Financial reports and SEC filings
- News articles and press releases
- Academic research papers
- Technical documentation and RFCs
- GitHub repositories and discussions
- Stack Overflow and developer forums
- Market research firm reports
- Social media and communities
- Patent databases
- Benchmarking studies

## Claude Code Enhancements

### Available Subagents

1. **bmm-market-researcher** - Market intelligence gathering
2. **bmm-trend-spotter** - Emerging trends and weak signals
3. **bmm-data-analyst** - Quantitative analysis and modeling
4. **bmm-competitor-analyzer** - Competitive intelligence
5. **bmm-user-researcher** - Customer insights and personas
6. **bmm-technical-evaluator** - Technology assessment

These are automatically invoked during workflow execution if Claude Code integration is configured.

## Troubleshooting

### Issue: Don't know which research type to choose

- **Solution**: Start with your research question - "What do I need to know?"
  - Market viability? → `market`
  - Best technology? → `technical`
  - Need AI to research deeper? → `deep_prompt`
  - Who are competitors? → `competitive`
  - Who are users? → `user`
  - Industry understanding? → `domain`

### Issue: Market research results seem incomplete

- **Solution**: Increase research_depth to `comprehensive`
- **Check**: Set `enable_web_research: true` in workflow.yaml
- **Try**: Run competitive and user research separately for more depth

### Issue: Deep prompt doesn't work with target platform

- **Solution**: Review the platform-specific best practices in the generated prompt
- **Check**: Ensure context documents are included
- **Try**: Regenerate with a different platform selection

### Issue: Technical comparison is subjective

- **Solution**: Add more objective criteria (performance metrics, cost, community size)
- **Check**: Weight criteria by business importance
- **Try**: Run pilot implementations for the top 2 options

## Customization

### Adding New Research Types

1. Create a new instructions file: `instructions-{type}.md`
2. Create a new template file: `template-{type}.md`
3. Add the research type to the `research_types` section of `workflow.yaml`
4. Update the router logic in `instructions-router.md`
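
The registration in step 3 might look roughly like this (the `research_types` key is named in the text above; the entry fields shown are assumptions about its schema, and `regulatory` is a hypothetical new type):

```yaml
# Assumed schema for registering a new type - verify against workflow.yaml
research_types:
  regulatory:                     # hypothetical new research type
    instructions: instructions-regulatory.md
    template: template-regulatory.md
    description: Regulatory landscape and compliance research
```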

### Modifying Existing Research Types

1. Edit the appropriate `instructions-{type}.md` file
2. Update the corresponding `template-{type}.md` if needed
3. Adjust validation criteria in `checklist.md`

### Creating Custom Frameworks

Add to the `frameworks` section of `workflow.yaml` under the appropriate research type.

## Version History

- **v2.0.0** - Multi-type research system with router architecture
  - Added deep_prompt research type for AI research platform optimization
  - Added technical research type for technology evaluation
  - Consolidated competitive, user, domain under market with focus variants
  - Router-based instruction loading
  - Template selection by research type
  - Enhanced Claude Code subagent support
- **v1.0.0** - Initial market-research-only implementation
  - Single-purpose market research workflow
  - Now deprecated in favor of the v2.0.0 multi-type system

## Support

For issues or questions:

- Review the workflow creation guide at `/bmad/bmb/workflows/create-workflow/workflow-creation-guide.md`
- Check validation against `checklist.md`
- Examine router logic in `instructions-router.md`
- Review research type-specific instructions
- Consult the BMAD Method v6 documentation

## Migration from v1.0 market-research

If you're used to the standalone `market-research` workflow:

```bash
# Old way
workflow market-research

# New way
workflow research --type market
# Or just: workflow research (then select market)
```

All market research functionality is preserved and enhanced in v2.0.0.

---

_Part of the BMad Method v6 - BMM (BMad Method) Module - Empowering systematic research and analysis_

144 bmad/bmm/workflows/1-analysis/research/checklist-deep-prompt.md (new file)

@@ -0,0 +1,144 @@

# Deep Research Prompt Validation Checklist

## 🚨 CRITICAL: Anti-Hallucination Instructions (PRIORITY)

### Citation Requirements Built Into Prompt

- [ ] Prompt EXPLICITLY instructs: "Cite sources with URLs for ALL factual claims"
- [ ] Prompt requires: "Include source name, date, and URL for every statistic"
- [ ] Prompt mandates: "If you cannot find reliable data, state 'No verified data found for [X]'"
- [ ] Prompt specifies inline citation format (e.g., "[Source: Company, Year, URL]")
- [ ] Prompt requires References section at end with all sources listed

### Multi-Source Verification Requirements

- [ ] Prompt instructs: "Cross-reference critical claims with at least 2 independent sources"
- [ ] Prompt requires: "Note when sources conflict and present all viewpoints"
- [ ] Prompt specifies: "Verify version numbers and dates from official sources"
- [ ] Prompt mandates: "Mark confidence levels: [Verified], [Single source], [Uncertain]"

### Fact vs Analysis Distinction

- [ ] Prompt requires clear labeling: "Distinguish FACTS (sourced), ANALYSIS (your interpretation), SPECULATION (projections)"
- [ ] Prompt instructs: "Do not present assumptions or analysis as verified facts"
- [ ] Prompt requires: "Label projections and forecasts clearly as such"
- [ ] Prompt warns: "Avoid vague attributions like 'experts say' - name the expert/source"

### Source Quality Guidance

- [ ] Prompt specifies preferred sources (e.g., "Official docs > analyst reports > blog posts")
- [ ] Prompt prioritizes recency: "Prioritize {{current_year}} sources for time-sensitive data"
- [ ] Prompt requires credibility assessment: "Note source credibility for each citation"
- [ ] Prompt warns against: "Do not rely on single blog posts for critical claims"

### Anti-Hallucination Safeguards

- [ ] Prompt warns: "If data seems convenient or too round, verify with additional sources"
- [ ] Prompt instructs: "Flag suspicious claims that need third-party verification"
- [ ] Prompt requires: "Provide date accessed for all web sources"
- [ ] Prompt mandates: "Do NOT invent statistics - only use verified data"

## Prompt Foundation

### Topic and Scope

- [ ] Research topic is specific and focused (not too broad)
- [ ] Target platform is specified (ChatGPT, Gemini, Grok, Claude)
- [ ] Temporal scope defined and includes "current {{current_year}}" requirement
- [ ] Source recency requirement specified (e.g., "prioritize 2024-2025 sources")

## Content Requirements

### Information Specifications

- [ ] Types of information needed are listed (quantitative, qualitative, trends, case studies, etc.)
- [ ] Preferred sources are specified (academic, industry reports, news, etc.)
- [ ] Recency requirements are stated (e.g., "prioritize {{current_year}} sources")
- [ ] Keywords and technical terms are included for search optimization
- [ ] Validation criteria are defined (how to verify findings)
### Output Structure

### Output Structure

- [ ] Desired format is clear (executive summary, comparison table, timeline, SWOT, etc.)
- [ ] Key sections or questions are outlined
- [ ] Depth level is specified (overview, standard, comprehensive, exhaustive)
- [ ] Citation requirements are stated
- [ ] Any special formatting needs are mentioned

## Platform Optimization

### Platform-Specific Elements

- [ ] Prompt is optimized for chosen platform's capabilities
- [ ] Platform-specific tips are included
- [ ] Query limit considerations are noted (if applicable)
- [ ] Platform strengths are leveraged (e.g., ChatGPT's multi-step search, Gemini's plan modification)

### Execution Guidance

- [ ] Research persona/perspective is specified (if applicable)
- [ ] Special requirements are stated (bias considerations, recency, etc.)
- [ ] Follow-up strategy is outlined
- [ ] Validation approach is defined

## Quality and Usability

### Clarity and Completeness

- [ ] Prompt language is clear and unambiguous
- [ ] All placeholders and variables are replaced with actual values
- [ ] Prompt can be copy-pasted directly into platform
- [ ] No contradictory instructions exist
- [ ] Prompt is self-contained (doesn't assume unstated context)

### Practical Utility

- [ ] Execution checklist is provided (before, during, after research)
- [ ] Platform usage tips are included
- [ ] Follow-up questions are anticipated
- [ ] Success criteria are defined
- [ ] Output file format is specified

## Research Depth

### Scope Appropriateness

- [ ] Scope matches user's available time and resources
- [ ] Depth is appropriate for decision at hand
- [ ] Key questions that MUST be answered are identified
- [ ] Nice-to-have vs. critical information is distinguished

## Validation Criteria

### Quality Standards

- [ ] Method for cross-referencing sources is specified
- [ ] Approach to handling conflicting information is defined
- [ ] Confidence level indicators are requested
- [ ] Gap identification is included
- [ ] Fact vs. opinion distinction is required

---

## Issues Found

### Critical Issues

_List any critical gaps or errors that must be addressed:_

- [ ] Issue 1: [Description]
- [ ] Issue 2: [Description]

### Minor Improvements

_List minor improvements that would enhance the prompt:_

- [ ] Issue 1: [Description]
- [ ] Issue 2: [Description]

---

**Validation Complete:** ☐ Yes ☐ No

**Ready to Execute:** ☐ Yes ☐ No

**Reviewer:** ______________

**Date:** ______________
249 bmad/bmm/workflows/1-analysis/research/checklist-technical.md (new file)

@@ -0,0 +1,249 @@

# Technical/Architecture Research Validation Checklist

## 🚨 CRITICAL: Source Verification and Fact-Checking (PRIORITY)

### Version Number Verification (MANDATORY)

- [ ] **EVERY** technology version number has a cited source with URL
- [ ] Version numbers verified via WebSearch from {{current_year}} (NOT from training data!)
- [ ] Official documentation/release pages cited for each version
- [ ] Release dates included with version numbers
- [ ] LTS status verified from official sources (with URL)
- [ ] No "assumed" or "remembered" version numbers - ALL must be verified

### Technical Claim Source Verification

- [ ] **EVERY** feature claim has a source (official docs, release notes, website)
- [ ] Performance benchmarks cite a source (official benchmarks, third-party tests with URLs)
- [ ] Compatibility claims verified (official compatibility matrix, documentation)
- [ ] Community size/popularity backed by sources (GitHub stars, npm downloads, official stats)
- [ ] "Supports X" claims verified via official documentation with URL
- [ ] No invented capabilities or features

### Source Quality for Technical Data

- [ ] Official documentation prioritized (docs.technology.com > blog posts)
- [ ] Version info from official release pages (highest credibility)
- [ ] Benchmarks from official sources or reputable third parties (not random blogs)
- [ ] Community data from verified sources (GitHub, npm, official registries)
- [ ] Pricing from official pricing pages (with URL and date verified)

### Multi-Source Verification (Critical Technical Claims)

- [ ] Major technical claims (performance, scalability) verified by 2+ sources
- [ ] Technology comparisons cite multiple independent sources
- [ ] "Best for X" claims backed by comparative analysis with sources
- [ ] Production experience claims cite real case studies or articles with URLs
- [ ] No single-source critical decisions without flagging need for verification

### Anti-Hallucination for Technical Data

- [ ] No invented version numbers or release dates
- [ ] No assumed feature availability without verification
- [ ] If current data not found, explicitly states "Could not verify {{current_year}} information"
- [ ] Speculation clearly labeled (e.g., "Based on trends, technology may...")
- [ ] No "probably supports" or "likely compatible" without verification

## Technology Evaluation

### Comprehensive Profiling

For each evaluated technology:

- [ ] Core capabilities and features are documented
- [ ] Architecture and design philosophy are explained
- [ ] Maturity level is assessed (experimental, stable, mature, legacy)
- [ ] Community size and activity are measured
- [ ] Maintenance status is verified (active, maintenance mode, abandoned)

### Practical Considerations

- [ ] Learning curve is evaluated
- [ ] Documentation quality is assessed
- [ ] Developer experience is considered
- [ ] Tooling ecosystem is reviewed
- [ ] Testing and debugging capabilities are examined

### Operational Assessment

- [ ] Deployment complexity is understood
- [ ] Monitoring and observability options are evaluated
- [ ] Operational overhead is estimated
- [ ] Cloud provider support is verified
- [ ] Container/Kubernetes compatibility is checked (if relevant)
## Comparative Analysis
|
||||
|
||||
### Multi-Dimensional Comparison
|
||||
|
||||
- [ ] Technologies are compared across relevant dimensions
|
||||
- [ ] Performance benchmarks are included (if available)
|
||||
- [ ] Scalability characteristics are compared
|
||||
- [ ] Complexity trade-offs are analyzed
|
||||
- [ ] Total cost of ownership is estimated for each option
|
||||
|
||||
### Trade-off Analysis
|
||||
|
||||
- [ ] Key trade-offs between options are identified
|
||||
- [ ] Decision factors are prioritized based on user needs
|
||||
- [ ] Conditions favoring each option are specified
|
||||
- [ ] Weighted analysis reflects user's priorities
|
||||
|
||||
## Real-World Evidence

### Production Experience

- [ ] Real-world production experiences are researched
- [ ] Known issues and gotchas are documented
- [ ] Performance data from actual deployments is included
- [ ] Migration experiences are considered (if replacing existing tech)
- [ ] Community discussions and war stories are referenced

### Source Quality

- [ ] Multiple independent sources validate key claims
- [ ] Recent sources from {{current_year}} are prioritized
- [ ] Practitioner experiences are included (blog posts, conference talks, forums)
- [ ] Both proponent and critic perspectives are considered

## Decision Support

### Recommendations

- [ ] Primary recommendation is clearly stated with rationale
- [ ] Alternative options are explained with use cases
- [ ] Fit for user's specific context is explained
- [ ] Decision is justified by requirements and constraints

### Implementation Guidance

- [ ] Proof-of-concept approach is outlined
- [ ] Key implementation decisions are identified
- [ ] Migration path is described (if applicable)
- [ ] Success criteria are defined
- [ ] Validation approach is recommended

### Risk Management

- [ ] Technical risks are identified
- [ ] Mitigation strategies are provided
- [ ] Contingency options are outlined (if primary choice doesn't work)
- [ ] Exit strategy considerations are discussed

## Architecture Decision Record

### ADR Completeness

- [ ] Status is specified (Proposed, Accepted, Superseded)
- [ ] Context and problem statement are clear
- [ ] Decision drivers are documented
- [ ] All considered options are listed
- [ ] Chosen option and rationale are explained
- [ ] Consequences (positive, negative, neutral) are identified
- [ ] Implementation notes are included
- [ ] References to research sources are provided

## References and Source Documentation (CRITICAL)

### References Section Completeness

- [ ] Report includes comprehensive "References and Sources" section
- [ ] Sources organized by category (official docs, benchmarks, community, architecture)
- [ ] Every source includes: Title, Publisher/Site, Date Accessed, Full URL
- [ ] URLs are clickable and functional (documentation links, release pages, GitHub)
- [ ] Version verification sources clearly listed
- [ ] Inline citations throughout report reference the sources section

### Technology Source Documentation

- [ ] For each technology evaluated, sources documented:
  - Official documentation URL
  - Release notes/changelog URL for version
  - Pricing page URL (if applicable)
  - Community/GitHub URL
  - Benchmark source URLs
- [ ] Comparison data cites source for each claim
- [ ] Architecture pattern sources cited (articles, books, official guides)

### Source Quality Metrics

- [ ] Report documents total sources cited
- [ ] Official sources count (highest credibility)
- [ ] Third-party sources count (benchmarks, articles)
- [ ] Version verification count (all technologies verified {{current_year}})
- [ ] Outdated sources flagged (if any used)

### Citation Format Standards

- [ ] Inline citations format: [Source: Docs URL] or [Version: 1.2.3, Source: Release Page URL]
- [ ] Consistent citation style throughout
- [ ] No vague citations like "according to the community" without specifics
- [ ] GitHub links include star count and last update date
- [ ] Documentation links point to current stable version docs

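A mechanical pass can back up the manual spot-checks. The sketch below is an assumption-laden illustration: the regex loosely matches the two citation shapes named in this checklist, and the sample strings are invented. It flags lines that mention a version number but carry no inline citation:

```python
import re

# Matches the two inline citation shapes the checklist calls for:
#   [Source: <url>]  and  [Version: <x.y.z>, Source: <url>]
# A loose sketch, not a full URL validator.
CITATION = re.compile(
    r"\[(?:Version:\s*\d+\.\d+(?:\.\d+)?,\s*)?Source:\s*https?://\S+\]"
)

def uncited_lines(report: str) -> list[int]:
    """Return 1-based line numbers that state a version but cite no source."""
    flagged = []
    for n, line in enumerate(report.splitlines(), start=1):
        if re.search(r"\bv?\d+\.\d+\.\d+\b", line) and not CITATION.search(line):
            flagged.append(n)
    return flagged

sample = (
    "ExampleLib 1.2.3 [Version: 1.2.3, Source: https://example.com/releases]\n"
    "OtherLib 4.5.6 is current"
)
print(uncited_lines(sample))  # → [2]
```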
## Document Quality

### Anti-Hallucination Final Check

- [ ] Spot-check 5 random version numbers - can you find the cited source?
- [ ] Verify feature claims against official documentation
- [ ] Check any performance numbers have benchmark sources
- [ ] Ensure no "cutting edge" or "latest" without specific version number
- [ ] Cross-check technology comparisons with cited sources

### Structure and Completeness

- [ ] Executive summary captures key findings
- [ ] No placeholder text remains (all {{variables}} are replaced)
- [ ] References section is complete and properly formatted
- [ ] Version verification audit trail included
- [ ] Document ready for technical fact-checking by third party

## Research Completeness

### Coverage

- [ ] All user requirements were addressed
- [ ] All constraints were considered
- [ ] Sufficient depth for the decision at hand
- [ ] Optional analyses were considered and included/excluded appropriately
- [ ] Web research was conducted for current market data

### Data Freshness

- [ ] Current {{current_year}} data was used throughout
- [ ] Version information is up-to-date
- [ ] Recent developments and trends are included
- [ ] Outdated or deprecated information is flagged or excluded

---

## Issues Found

### Critical Issues

_List any critical gaps or errors that must be addressed:_

- [ ] Issue 1: [Description]
- [ ] Issue 2: [Description]

### Minor Improvements

_List minor improvements that would enhance the report:_

- [ ] Issue 1: [Description]
- [ ] Issue 2: [Description]

### Additional Research Needed

_List areas requiring further investigation:_

- [ ] Topic 1: [Description]
- [ ] Topic 2: [Description]

---

**Validation Complete:** ☐ Yes ☐ No

**Ready for Decision:** ☐ Yes ☐ No

**Reviewer:** _______________

**Date:** _______________

`bmad/bmm/workflows/1-analysis/research/checklist.md` (new file, 299 lines)

# Market Research Report Validation Checklist

## 🚨 CRITICAL: Source Verification and Fact-Checking (PRIORITY)

### Source Citation Completeness

- [ ] **EVERY** market size claim has at least 2 cited sources with URLs
- [ ] **EVERY** growth rate/CAGR has cited sources with URLs
- [ ] **EVERY** competitive data point (pricing, features, funding) has sources with URLs
- [ ] **EVERY** customer statistic or insight has cited sources
- [ ] **EVERY** industry trend claim has sources from {{current_year}} or recent years
- [ ] All sources include: Name, Date, URL (clickable links)
- [ ] No claims exist without verifiable sources

### Source Quality and Credibility

- [ ] Market size sources are HIGH credibility (Gartner, Forrester, IDC, government data, industry associations)
- [ ] NOT relying on single blog posts or unverified sources for critical data
- [ ] Sources are recent ({{current_year}} or within 1-2 years for time-sensitive data)
- [ ] Primary sources prioritized over secondary/tertiary sources
- [ ] Paywalled reports are cited with proper attribution (e.g., "Gartner Market Report 2025")

### Multi-Source Verification (Critical Claims)

- [ ] TAM calculation verified by at least 2 independent sources
- [ ] SAM calculation methodology is transparent and sourced
- [ ] SOM estimates are conservative and based on comparable benchmarks
- [ ] Market growth rates corroborated by multiple analyst reports
- [ ] Competitive market share data verified across sources

### Conflicting Data Resolution

- [ ] Where sources conflict, ALL conflicting estimates are presented
- [ ] Variance between sources is explained (methodology, scope differences)
- [ ] No arbitrary selection of "convenient" numbers without noting alternatives
- [ ] Conflicting data is flagged with confidence levels
- [ ] User is made aware of uncertainty in conflicting claims

### Confidence Level Marking

- [ ] Every major claim is marked with confidence level:
  - **[Verified - 2+ sources]** = High confidence, multiple independent sources agree
  - **[Single source - verify]** = Medium confidence, only one source found
  - **[Estimated - low confidence]** = Low confidence, calculated/projected without strong sources
- [ ] Low confidence claims are clearly flagged for user to verify independently
- [ ] Speculative/projected data is labeled as PROJECTION or FORECAST, not presented as fact

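Because the three markers are fixed literal strings, tallying them is trivial to automate. A minimal sketch, assuming the markers appear verbatim as defined above; the sample report text is invented for illustration:

```python
from collections import Counter

# The three confidence markers this checklist defines, matched verbatim.
MARKERS = [
    "[Verified - 2+ sources]",
    "[Single source - verify]",
    "[Estimated - low confidence]",
]

def confidence_tally(report: str) -> Counter:
    """Count each marker so low-confidence claims can be surfaced for review."""
    return Counter({m: report.count(m) for m in MARKERS})

sample = (
    "TAM is $5.2B [Verified - 2+ sources]\n"
    "CAGR is 14% [Single source - verify]\n"
    "SOM could reach $80M [Estimated - low confidence]\n"
)
tally = confidence_tally(sample)
print(tally["[Estimated - low confidence]"])  # → 1
```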
### Fact vs Analysis vs Speculation

- [ ] Clear distinction between:
  - **FACT:** Sourced data with citations (e.g., "Market is $5.2B [Source: Gartner 2025]")
  - **ANALYSIS:** Interpretation of facts (e.g., "This suggests strong growth momentum")
  - **SPECULATION:** Educated guesses (e.g., "This trend may continue if...")
- [ ] Analysis and speculation are NOT presented as verified facts
- [ ] Recommendations are based on sourced facts, not unsupported assumptions

### Anti-Hallucination Verification

- [ ] No invented statistics or "made up" market sizes
- [ ] All percentages, dollar amounts, and growth rates are traceable to sources
- [ ] If data couldn't be found, report explicitly states "No verified data available for [X]"
- [ ] No use of vague sources like "industry experts say" without naming the expert/source
- [ ] Version numbers, dates, and specific figures match source material exactly

## Market Sizing Analysis (Source-Verified)

### TAM Calculation Sources

- [ ] TAM figure has at least 2 independent source citations
- [ ] Calculation methodology is sourced (not invented)
- [ ] Industry benchmarks used for sanity-check are cited
- [ ] Growth rate assumptions are backed by sourced projections
- [ ] Any adjustments or filters applied are justified and documented

### SAM and SOM Source Verification

- [ ] SAM constraints are based on sourced data (addressable market scope)
- [ ] SOM competitive assumptions cite actual competitor data
- [ ] Market share benchmarks reference comparable companies with sources
- [ ] Scenarios (conservative/realistic/optimistic) are justified with sourced reasoning

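The top-down funnel these items validate can be written out explicitly. Every number below is a hypothetical placeholder; in a real report each figure and each filter must come from the cited sources, with the adjustment justified:

```python
# Top-down market sizing funnel: TAM -> SAM -> SOM.
# All figures are illustrative placeholders, not sourced data.
tam = 5_200_000_000          # total market, e.g. "[Source: analyst report URL]"
addressable_share = 0.30     # SAM filter: segment/geography in scope (sourced)
capture_scenarios = {        # SOM: share benchmarks from comparable companies
    "conservative": 0.01,
    "realistic": 0.03,
    "optimistic": 0.08,
}

sam = tam * addressable_share
som = {name: sam * share for name, share in capture_scenarios.items()}

print(f"SAM: ${sam:,.0f}")
for name, value in som.items():
    print(f"SOM ({name}): ${value:,.0f}")
```

Laying the arithmetic out this way makes the audit trail checkable: each of the three SOM scenarios is a single multiplier on SAM, so every number in the funnel traces back to exactly two sourced inputs.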
## Competitive Analysis (Source-Verified)

### Competitor Data Source Verification

- [ ] **EVERY** competitor mentioned has source for basic company info
- [ ] Competitor pricing data has sources (website URLs, pricing pages, reviews)
- [ ] Funding amounts cite sources (Crunchbase, press releases, SEC filings)
- [ ] Product features verified through sources (official website, documentation, reviews)
- [ ] Market positioning claims are backed by sources (analyst reports, company statements)
- [ ] Customer count/user numbers cite sources (company announcements, verified reports)
- [ ] Recent news and developments cite article URLs with dates from {{current_year}}

### Competitive Data Credibility

- [ ] Company websites/official sources used for product info (highest credibility)
- [ ] Financial data from Crunchbase, PitchBook, or SEC filings (not rumors)
- [ ] Review sites cited for customer sentiment (G2, Capterra, TrustPilot with URLs)
- [ ] Pricing verified from official pricing pages (with URL and date checked)
- [ ] No assumptions about competitors without sourced evidence

### Competitive Claims Verification

- [ ] Market share claims cite analyst reports or verified data
- [ ] "Leading" or "dominant" claims backed by sourced market data
- [ ] Competitor weaknesses cited from reviews, articles, or public statements (not speculation)
- [ ] Product comparison claims verified (feature lists from official sources)

## Customer Intelligence (Source-Verified)

### Customer Data Sources

- [ ] Customer segment data cites research sources (reports, surveys, studies)
- [ ] Demographics/firmographics backed by census data, industry reports, or studies
- [ ] Pain points sourced from customer research, reviews, surveys (not assumed)
- [ ] Willingness to pay backed by pricing studies, surveys, or comparable market data
- [ ] Buying behavior sourced from research studies or industry data
- [ ] Jobs-to-be-Done insights cite customer research or validated frameworks

### Customer Insight Credibility

- [ ] Primary research (if conducted) documents sample size and methodology
- [ ] Secondary research cites the original study/report with full attribution
- [ ] Customer quotes or testimonials cite the source (interview, review site, case study)
- [ ] Persona data based on real research findings (not fictional archetypes)
- [ ] No invented customer statistics or behaviors without source backing

### Positioning Analysis

- [ ] Market positioning map uses relevant dimensions for the industry
- [ ] White space opportunities are clearly identified
- [ ] Differentiation strategy is supported by competitive gaps
- [ ] Switching costs and barriers are quantified
- [ ] Network effects and moats are assessed

## Industry Analysis

### Porter's Five Forces

- [ ] Each force has a clear rating (Low/Medium/High) with justification
- [ ] Specific examples and evidence support each assessment
- [ ] Industry-specific factors are considered (not generic template)
- [ ] Implications for strategy are drawn from each force
- [ ] Overall industry attractiveness conclusion is provided

### Trends and Dynamics

- [ ] At least 5 major trends are identified with evidence
- [ ] Technology disruptions are assessed for probability and timeline
- [ ] Regulatory changes and their impacts are documented
- [ ] Social/cultural shifts relevant to adoption are included
- [ ] Market maturity stage is identified with supporting indicators

## Strategic Recommendations

### Go-to-Market Strategy

- [ ] Target segment prioritization has clear rationale
- [ ] Positioning statement is specific and differentiated
- [ ] Channel strategy aligns with customer buying behavior
- [ ] Partnership opportunities are identified with specific targets
- [ ] Pricing strategy is justified by willingness-to-pay analysis

### Opportunity Assessment

- [ ] Each opportunity is sized quantitatively
- [ ] Resource requirements are estimated (time, money, people)
- [ ] Success criteria are measurable and time-bound
- [ ] Dependencies and prerequisites are identified
- [ ] Quick wins vs. long-term plays are distinguished

### Risk Analysis

- [ ] All major risk categories are covered (market, competitive, execution, regulatory)
- [ ] Each risk has probability and impact assessment
- [ ] Mitigation strategies are specific and actionable
- [ ] Early warning indicators are defined
- [ ] Contingency plans are outlined for high-impact risks

## References and Source Documentation (CRITICAL)

### References Section Completeness

- [ ] Report includes comprehensive "References and Sources" section
- [ ] Sources organized by category (market size, competitive, customer, trends)
- [ ] Every source includes: Title/Name, Publisher, Date, Full URL
- [ ] URLs are clickable and functional (not broken links)
- [ ] Sources are numbered or organized for easy reference
- [ ] Inline citations throughout report reference the sources section

### Source Quality Metrics

- [ ] Report documents total sources cited count
- [ ] High confidence claims (2+ sources) count is reported
- [ ] Single source claims are identified and counted
- [ ] Low confidence/speculative claims are flagged
- [ ] Web searches conducted count is included (for transparency)

### Source Audit Trail

- [ ] For each major section, sources are listed
- [ ] TAM/SAM/SOM calculations show source for each number
- [ ] Competitive data shows source for each competitor profile
- [ ] Customer insights show research sources
- [ ] Industry trends show article/report sources with dates

### Citation Format Standards

- [ ] Inline citations format: [Source: Company/Publication, Year, URL] or similar
- [ ] Consistent citation style throughout document
- [ ] No vague citations like "according to sources" without specifics
- [ ] URLs are complete (not truncated)
- [ ] Accessed/verified dates included for web sources

## Document Quality

### Anti-Hallucination Final Check

- [ ] Read through entire report - does anything "feel" invented or too convenient?
- [ ] Spot-check 5-10 random claims - can you find the cited source?
- [ ] Check suspicious round numbers - are they actually from sources?
- [ ] Verify any "shocking" statistics have strong sources
- [ ] Cross-check key market size claims against multiple cited sources

### Structure and Completeness

- [ ] Executive summary captures all key insights
- [ ] No placeholder text remains (all {{variables}} are replaced)
- [ ] References section is complete and properly formatted
- [ ] Source quality assessment included
- [ ] Document ready for fact-checking by third party

## Research Completeness

### Coverage Check

- [ ] All workflow steps were completed (none skipped without justification)
- [ ] Optional analyses were considered and included where valuable
- [ ] Web research was conducted for current market intelligence
- [ ] Financial projections align with market size analysis
- [ ] Implementation roadmap provides clear next steps

### Validation

- [ ] Key findings are triangulated across multiple sources
- [ ] Surprising insights are double-checked for accuracy
- [ ] Calculations are verified for mathematical accuracy
- [ ] Conclusions logically follow from the analysis
- [ ] Recommendations are actionable and specific

## Final Quality Assurance

### Ready for Decision-Making

- [ ] Research answers all initial objectives
- [ ] Sufficient detail for investment decisions
- [ ] Clear go/no-go recommendation provided
- [ ] Success metrics are defined
- [ ] Follow-up research needs are identified

### Document Meta

- [ ] Research date is current
- [ ] Confidence levels are indicated for key assertions
- [ ] Next review date is set
- [ ] Distribution list is appropriate
- [ ] Confidentiality classification is marked

---

## Issues Found

### Critical Issues

_List any critical gaps or errors that must be addressed:_

- [ ] Issue 1: [Description]
- [ ] Issue 2: [Description]

### Minor Issues

_List minor improvements that would enhance the report:_

- [ ] Issue 1: [Description]
- [ ] Issue 2: [Description]

### Additional Research Needed

_List areas requiring further investigation:_

- [ ] Topic 1: [Description]
- [ ] Topic 2: [Description]

---

**Validation Complete:** ☐ Yes ☐ No

**Ready for Distribution:** ☐ Yes ☐ No

**Reviewer:** _______________

**Date:** _______________

(new file, 114 lines)

# Market Research Workflow - Claude Code Integration Configuration
# This file configures how subagents are installed and integrated

subagents:
  # List of subagent files to be installed
  files:
    - bmm-market-researcher.md
    - bmm-trend-spotter.md
    - bmm-data-analyst.md
    - bmm-competitor-analyzer.md
    - bmm-user-researcher.md

# Installation configuration
installation:
  prompt: "The Market Research workflow includes specialized AI subagents for enhanced research capabilities. Would you like to install them?"
  location_options:
    - project # Install to .claude/agents/ in project
    - user # Install to ~/.claude/agents/ for all projects
  default_location: project

# Content injections for the workflow
injections:
  - injection_point: "market-research-subagents"
    description: "Injects subagent activation instructions into the workflow"
    content: |
      <critical>
      Claude Code Enhanced Mode: The following specialized subagents are available to enhance your market research:

      - **bmm-market-researcher**: Comprehensive market intelligence gathering and analysis
      - **bmm-trend-spotter**: Identifies emerging trends and weak signals
      - **bmm-data-analyst**: Quantitative analysis and market sizing calculations
      - **bmm-competitor-analyzer**: Deep competitive intelligence and positioning
      - **bmm-user-researcher**: User research, personas, and journey mapping

      These subagents will be automatically invoked when their expertise is relevant to the current research task.
      Use them PROACTIVELY throughout the workflow for enhanced insights.
      </critical>

  - injection_point: "market-tam-calculations"
    description: "Enhanced TAM calculation with data analyst"
    content: |
      <invoke-subagent name="bmm-data-analyst">
      Calculate TAM using multiple methodologies and provide confidence intervals.
      Use all available market data from previous research steps.
      Show detailed calculations and assumptions.
      </invoke-subagent>

  - injection_point: "market-trends-analysis"
    description: "Enhanced trend analysis with trend spotter"
    content: |
      <invoke-subagent name="bmm-trend-spotter">
      Identify emerging trends, weak signals, and future disruptions.
      Look for cross-industry patterns and second-order effects.
      Provide timeline estimates for mainstream adoption.
      </invoke-subagent>

  - injection_point: "market-customer-segments"
    description: "Enhanced customer research"
    content: |
      <invoke-subagent name="bmm-user-researcher">
      Develop detailed user personas with jobs-to-be-done analysis.
      Map the complete customer journey with pain points and opportunities.
      Provide behavioral and psychographic insights.
      </invoke-subagent>

  - injection_point: "market-executive-summary"
    description: "Enhanced executive summary synthesis"
    content: |
      <invoke-subagent name="bmm-market-researcher">
      Synthesize all research findings into a compelling executive summary.
      Highlight the most critical insights and strategic implications.
      Ensure all key metrics and recommendations are captured.
      </invoke-subagent>

# Configuration for subagent behavior
configuration:
  auto_invoke: true # Automatically invoke subagents when relevant
  parallel_execution: true # Allow parallel subagent execution
  cache_results: true # Cache subagent outputs for reuse

# Subagent-specific configurations
subagent_config:
  bmm-market-researcher:
    priority: high
    max_execution_time: 300 # seconds
    retry_on_failure: true

  bmm-trend-spotter:
    priority: medium
    max_execution_time: 180
    retry_on_failure: false

  bmm-data-analyst:
    priority: high
    max_execution_time: 240
    retry_on_failure: true

  bmm-competitor-analyzer:
    priority: high
    max_execution_time: 300
    retry_on_failure: true

  bmm-user-researcher:
    priority: medium
    max_execution_time: 240
    retry_on_failure: false

# Metadata
metadata:
  compatible_with: "claude-code-1.0+"
  workflow: "market-research"
  module: "bmm"
  author: "BMad Builder"
  description: "Claude Code enhancements for comprehensive market research"

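A consumer of a config like the one above might sanity-check it along these lines. This is a sketch under stated assumptions: `validate_config` is a hypothetical helper (not part of the workflow), parsing from disk would typically use PyYAML's `yaml.safe_load` (not shown), and the required keys simply mirror the `subagent_config` structure above:

```python
# Validates an already-parsed copy of the integration config. Parsing the
# YAML itself would typically be done with yaml.safe_load (PyYAML, assumed).
REQUIRED_SUBAGENT_KEYS = {"priority", "max_execution_time", "retry_on_failure"}

def validate_config(config: dict) -> list[str]:
    """Return a list of problems found in the parsed config (empty if clean)."""
    problems = []
    # Every file listed under subagents.files should have a matching
    # subagent_config entry, and vice versa.
    declared = {n.removesuffix(".md") for n in config["subagents"]["files"]}
    configured = set(config.get("subagent_config", {}))
    for name in sorted(declared ^ configured):
        problems.append(f"listed in only one place: {name}")
    # Each configured subagent should carry the full set of settings.
    for name, settings in sorted(config.get("subagent_config", {}).items()):
        for key in sorted(REQUIRED_SUBAGENT_KEYS - settings.keys()):
            problems.append(f"{name} is missing {key}")
    return problems

sample = {
    "subagents": {"files": ["bmm-data-analyst.md", "bmm-trend-spotter.md"]},
    "subagent_config": {
        "bmm-data-analyst": {"priority": "high", "max_execution_time": 240,
                             "retry_on_failure": True},
        "bmm-trend-spotter": {"priority": "medium", "max_execution_time": 180},
    },
}
print(validate_config(sample))
```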
(new file, 439 lines)

# Deep Research Prompt Generator Instructions

<critical>The workflow execution engine is governed by: {project_root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>This workflow uses ADAPTIVE FACILITATION - adjust your communication style based on {user_skill_level}</critical>
<critical>This workflow generates structured research prompts optimized for AI platforms</critical>
<critical>Based on {{current_year}} best practices from ChatGPT, Gemini, Grok, and Claude</critical>
<critical>Communicate all responses in {communication_language} and tailor to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>

<critical>🚨 BUILD ANTI-HALLUCINATION INTO PROMPTS 🚨</critical>
<critical>Generated prompts MUST instruct AI to cite sources with URLs for all factual claims</critical>
<critical>Include validation requirements: "Cross-reference claims with at least 2 independent sources"</critical>
<critical>Add explicit instructions: "If you cannot find reliable data, state 'No verified data found for [X]'"</critical>
<critical>Require confidence indicators in prompts: "Mark each claim with confidence level and source quality"</critical>
<critical>Include fact-checking instructions: "Distinguish between verified facts, analysis, and speculation"</critical>

<workflow>

<step n="1" goal="Discover what research prompt they need">

<action>Engage conversationally to understand their needs:

<check if="{user_skill_level} == 'expert'">
"Let's craft a research prompt optimized for AI deep research tools.

What topic or question do you want to investigate, and which platform are you planning to use? (ChatGPT Deep Research, Gemini, Grok, Claude Projects)"
</check>

<check if="{user_skill_level} == 'intermediate'">
"I'll help you create a structured research prompt for AI platforms like ChatGPT Deep Research, Gemini, or Grok.

These tools work best with well-structured prompts that define scope, sources, and output format.

What do you want to research?"
</check>

<check if="{user_skill_level} == 'beginner'">
"Think of this as creating a detailed brief for an AI research assistant.

Tools like ChatGPT Deep Research can spend hours searching the web and synthesizing information - but they work best when you give them clear instructions about what to look for and how to present it.

What topic are you curious about?"
</check>
</action>

<action>Through conversation, discover:

- **The research topic** - What they want to explore
- **Their purpose** - Why they need this (decision-making, learning, writing, etc.)
- **Target platform** - Which AI tool they'll use (affects prompt structure)
- **Existing knowledge** - What they already know vs. what's uncertain

Adapt your questions based on their clarity:

- If they're vague → Help them sharpen the focus
- If they're specific → Capture the details
- If they're unsure about platform → Guide them to the best fit

Don't make them fill out a form - have a real conversation.
</action>

<template-output>research_topic</template-output>
<template-output>research_goal</template-output>
<template-output>target_platform</template-output>

</step>

<step n="2" goal="Define Research Scope and Boundaries">
<action>Help user define clear boundaries for focused research</action>

**Let's define the scope to ensure focused, actionable results:**

<ask>**Temporal Scope** - What time period should the research cover?

- Current state only (last 6-12 months)
- Recent trends (last 2-3 years)
- Historical context (5-10 years)
- Future outlook (projections 3-5 years)
- Custom date range (specify)</ask>

<template-output>temporal_scope</template-output>

<ask>**Geographic Scope** - What geographic focus?

- Global
- Regional (North America, Europe, Asia-Pacific, etc.)
- Specific countries
- US-focused
- Other (specify)</ask>

<template-output>geographic_scope</template-output>

<ask>**Thematic Boundaries** - Are there specific aspects to focus on or exclude?

Examples:

- Focus: technological innovation, regulatory changes, market dynamics
- Exclude: historical background, unrelated adjacent markets</ask>

<template-output>thematic_boundaries</template-output>

</step>

<step n="3" goal="Specify Information Types and Sources">
<action>Determine what types of information and sources are needed</action>

**What types of information do you need?**

<ask>Select all that apply:

- [ ] Quantitative data and statistics
- [ ] Qualitative insights and expert opinions
- [ ] Trends and patterns
- [ ] Case studies and examples
- [ ] Comparative analysis
- [ ] Technical specifications
- [ ] Regulatory and compliance information
- [ ] Financial data
- [ ] Academic research
- [ ] Industry reports
- [ ] News and current events</ask>

<template-output>information_types</template-output>

<ask>**Preferred Sources** - Any specific source types or credibility requirements?

Examples:

- Peer-reviewed academic journals
- Industry analyst reports (Gartner, Forrester, IDC)
- Government/regulatory sources
- Financial reports and SEC filings
- Technical documentation
- News from major publications
- Expert blogs and thought leadership
- Social media and forums (with caveats)</ask>

<template-output>preferred_sources</template-output>

</step>

<step n="4" goal="Define Output Structure and Format">
|
||||
<action>Specify desired output format for the research</action>
|
||||
|
||||
<ask>**Output Format** - How should the research be structured?
|
||||
|
||||
1. Executive Summary + Detailed Sections
|
||||
2. Comparative Analysis Table
|
||||
3. Chronological Timeline
|
||||
4. SWOT Analysis Framework
|
||||
5. Problem-Solution-Impact Format
|
||||
6. Question-Answer Format
|
||||
7. Custom structure (describe)</ask>
|
||||
|
||||
<template-output>output_format</template-output>
|
||||
|
||||
<ask>**Key Sections** - What specific sections or questions should the research address?
|
||||
|
||||
Examples for market research:
|
||||
|
||||
- Market size and growth
|
||||
- Key players and competitive landscape
|
||||
- Trends and drivers
|
||||
- Challenges and barriers
|
||||
- Future outlook
|
||||
|
||||
Examples for technical research:
|
||||
|
||||
- Current state of technology
|
||||
- Alternative approaches and trade-offs
|
||||
- Best practices and patterns
|
||||
- Implementation considerations
|
||||
- Tool/framework comparison</ask>
|
||||
|
||||
<template-output>key_sections</template-output>
|
||||
|
||||
<ask>**Depth Level** - How detailed should each section be?
|
||||
|
||||
- High-level overview (2-3 paragraphs per section)
|
||||
- Standard depth (1-2 pages per section)
|
||||
- Comprehensive (3-5 pages per section with examples)
|
||||
- Exhaustive (deep dive with all available data)</ask>
|
||||
|
||||
<template-output>depth_level</template-output>
|
||||
|
||||
</step>

<step n="5" goal="Add Context and Constraints">
<action>Gather additional context to make the prompt more effective</action>

<ask>**Persona/Perspective** - Should the research take a specific viewpoint?

Examples:

- "Act as a venture capital analyst evaluating investment opportunities"
- "Act as a CTO evaluating technology choices for a fintech startup"
- "Act as an academic researcher reviewing literature"
- "Act as a product manager assessing market opportunities"
- No specific persona needed</ask>

<template-output>research_persona</template-output>

<ask>**Special Requirements or Constraints:**

- Citation requirements (e.g., "Include source URLs for all claims")
- Bias considerations (e.g., "Consider perspectives from both proponents and critics")
- Recency requirements (e.g., "Prioritize sources from 2024-2025")
- Specific keywords or technical terms to focus on
- Any topics or angles to avoid</ask>

<template-output>special_requirements</template-output>

<invoke-task halt="true">{project-root}/bmad/core/tasks/adv-elicit.xml</invoke-task>

</step>

<step n="6" goal="Define Validation and Follow-up Strategy">
<action>Establish how to validate findings and what follow-ups might be needed</action>

<ask>**Validation Criteria** - How should the research be validated?

- Cross-reference multiple sources for key claims
- Identify conflicting viewpoints and resolve them
- Distinguish between facts, expert opinions, and speculation
- Note confidence levels for different findings
- Highlight gaps or areas needing more research</ask>

<template-output>validation_criteria</template-output>

<ask>**Follow-up Questions** - What potential follow-up questions should be anticipated?

Examples:

- "If cost data is unclear, drill deeper into pricing models"
- "If the regulatory landscape is complex, create a separate analysis"
- "If multiple technical approaches exist, create a comparison matrix"</ask>

<template-output>follow_up_strategy</template-output>

</step>

<step n="7" goal="Generate Optimized Research Prompt">
<action>Synthesize all inputs into a platform-optimized research prompt</action>

<critical>Generate the deep research prompt using best practices for the target platform</critical>

**Prompt Structure Best Practices:**

1. **Clear Title/Question** (specific, focused)
2. **Context and Goal** (why this research matters)
3. **Scope Definition** (boundaries and constraints)
4. **Information Requirements** (what types of data/insights)
5. **Output Structure** (format and sections)
6. **Source Guidance** (preferred sources and credibility)
7. **Validation Requirements** (how to verify findings)
8. **Keywords** (precise technical terms, brand names)

<action>Generate the prompt following this structure</action>

<template-output file="deep-research-prompt.md">deep_research_prompt</template-output>

<ask>Review the generated prompt:

- [a] Accept and save
- [e] Edit sections
- [r] Refine with additional context
- [o] Optimize for different platform</ask>

<check if="edit or refine">
<ask>What would you like to adjust?</ask>
<goto step="7">Regenerate with modifications</goto>
</check>

</step>

<step n="8" goal="Generate Platform-Specific Tips">
<action>Provide platform-specific usage tips based on target platform</action>

<check if="target_platform includes ChatGPT">
**ChatGPT Deep Research Tips:**

- Use clear verbs: "compare," "analyze," "synthesize," "recommend"
- Specify keywords explicitly to guide search
- Answer clarifying questions thoroughly (each deep research request is costly)
- You have 25-250 queries/month depending on tier
- Review the research plan before it starts searching
</check>

<check if="target_platform includes Gemini">
**Gemini Deep Research Tips:**

- Keep the initial prompt simple - you can adjust the research plan
- Be specific and clear - vagueness is the enemy
- Review and modify the multi-point research plan before it runs
- Use follow-up questions to drill deeper or add sections
- Available in 45+ languages globally
</check>

<check if="target_platform includes Grok">
**Grok DeepSearch Tips:**

- Include date windows: "from Jan-Jun 2025"
- Specify output format: "bullet list + citations"
- Pair with Think Mode for reasoning
- Use follow-up commands: "Expand on [topic]" to deepen sections
- Verify facts when obscure sources are cited
- Free tier: 5 queries/24hrs; Premium: 30/2hrs
</check>

<check if="target_platform includes Claude">
**Claude Projects Tips:**

- Use Chain of Thought prompting for complex reasoning
- Break into sub-prompts for multi-step research (prompt chaining)
- Add relevant documents to the Project for context
- Provide explicit instructions and examples
- Test iteratively and refine prompts
</check>

<template-output>platform_tips</template-output>

</step>

<step n="9" goal="Generate Research Execution Checklist">
<action>Create a checklist for executing and evaluating the research</action>

Generate execution checklist with:

**Before Running Research:**

- [ ] Prompt clearly states the research question
- [ ] Scope and boundaries are well-defined
- [ ] Output format and structure specified
- [ ] Keywords and technical terms included
- [ ] Source guidance provided
- [ ] Validation criteria clear

**During Research:**

- [ ] Review research plan before execution (if platform provides)
- [ ] Answer any clarifying questions thoroughly
- [ ] Monitor progress if platform shows reasoning process
- [ ] Take notes on unexpected findings or gaps

**After Research Completion:**

- [ ] Verify key facts from multiple sources
- [ ] Check citation credibility
- [ ] Identify conflicting information and resolve
- [ ] Note confidence levels for findings
- [ ] Identify gaps requiring follow-up
- [ ] Ask clarifying follow-up questions
- [ ] Export/save research before query limit resets

<template-output>execution_checklist</template-output>

</step>

<step n="10" goal="Finalize and Export">
<action>Save complete research prompt package</action>

**Your Deep Research Prompt Package is ready!**

The output includes:

1. **Optimized Research Prompt** - Ready to paste into AI platform
2. **Platform-Specific Tips** - How to get the best results
3. **Execution Checklist** - Ensure thorough research process
4. **Follow-up Strategy** - Questions to deepen findings

<action>Save all outputs to {default_output_file}</action>

<ask>Would you like to:

1. Generate a variation for a different platform
2. Create a follow-up prompt based on hypothetical findings
3. Generate a related research prompt
4. Exit workflow

Select option (1-4):</ask>

<check if="option 1">
<goto step="1">Start with different platform selection</goto>
</check>

<check if="option 2 or 3">
<goto step="1">Start new prompt with context from previous</goto>
</check>

</step>

<step n="FINAL" goal="Update status file on completion" tag="workflow-status">
<check if="standalone_mode != true">
<action>Load the FULL file: {output_folder}/bmm-workflow-status.yaml</action>
<action>Find workflow_status key "research"</action>
<critical>ONLY write the file path as the status value - no other text, notes, or metadata</critical>
<action>Update workflow_status["research"] = "{output_folder}/bmm-research-deep-prompt-{{date}}.md"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>
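
The updated entry should look roughly like this (a hypothetical sketch - the sibling keys and paths are illustrative assumptions, not part of this workflow):

```yaml
# bmm-workflow-status.yaml (sketch) - the status value is ONLY the file path
workflow_status:
  brainstorm: "docs/bmm-brainstorm-2025-01-15.md" # hypothetical earlier entry
  research: "{output_folder}/bmm-research-deep-prompt-2025-01-15.md" # just updated
  product-brief: null # hypothetical next non-completed workflow
```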

<action>Find the first non-completed workflow in workflow_status (next workflow to do)</action>
<action>Determine the next agent from the path file based on the next workflow</action>
</check>

<output>**✅ Deep Research Prompt Generated**

**Research Prompt:**

- Structured research prompt generated and saved to {output_folder}/bmm-research-deep-prompt-{{date}}.md
- Ready to execute with ChatGPT, Claude, Gemini, or Grok

{{#if standalone_mode != true}}
**Status Updated:**

- Progress tracking updated: research marked complete
- Next workflow: {{next_workflow}}
{{else}}
**Note:** Running in standalone mode (no progress tracking)
{{/if}}

**Next Steps:**

{{#if standalone_mode != true}}

- **Next workflow:** {{next_workflow}} ({{next_agent}} agent)
- **Optional:** Execute the research prompt with an AI platform, gather findings, or run additional research workflows

Check status anytime with: `workflow-status`
{{else}}
Since no workflow is in progress:

- Execute the research prompt with an AI platform and gather findings
- Refer to the BMM workflow guide if unsure what to do next
- Or run `workflow-init` to create a workflow path and get guided next steps
{{/if}}
</output>
</step>

</workflow>

679 bmad/bmm/workflows/1-analysis/research/instructions-market.md Normal file
@@ -0,0 +1,679 @@

# Market Research Workflow Instructions

<critical>The workflow execution engine is governed by: {project-root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>This workflow uses ADAPTIVE FACILITATION - adjust your communication style based on {user_skill_level}</critical>
<critical>This is a HIGHLY INTERACTIVE workflow - collaborate with the user throughout; don't just gather info and disappear</critical>
<critical>Web research is MANDATORY - use the WebSearch tool with {{current_year}} for all market intelligence gathering</critical>
<critical>Communicate all responses in {communication_language} and tailor them to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>

<critical>🚨 ANTI-HALLUCINATION PROTOCOL - MANDATORY 🚨</critical>
<critical>NEVER invent market data - if you cannot find reliable data, explicitly state: "I could not find verified data for [X]"</critical>
<critical>EVERY statistic, market size, growth rate, or competitive claim MUST have a cited source with URL</critical>
<critical>For CRITICAL claims (TAM/SAM/SOM, market size, growth rates), require 2+ independent sources that agree</critical>
<critical>When data sources conflict (e.g., different market size estimates), present ALL estimates with sources and explain the variance</critical>
<critical>Mark data confidence: [Verified - 2+ sources], [Single source - verify], [Estimated - low confidence]</critical>
<critical>Clearly label: FACT (sourced data), ANALYSIS (your interpretation), PROJECTION (forecast/speculation)</critical>
<critical>After each WebSearch, extract and store source URLs - include them in the report</critical>
<critical>If a claim seems suspicious or too convenient, STOP and cross-verify with additional searches</critical>

<!-- IDE-INJECT-POINT: market-research-subagents -->

<workflow>

<step n="1" goal="Discover research needs and scope collaboratively">

<action>Welcome {user_name} warmly. Position yourself as their collaborative research partner who will:

- Gather live {{current_year}} market data
- Share findings progressively throughout
- Help make sense of what we discover together

Ask what they're building and what market questions they need answered.
</action>

<action>Through natural conversation, discover:

- The product/service and current stage
- Their burning questions (what they REALLY need to know)
- Context and urgency (fundraising? launch decision? pivot?)
- Existing knowledge vs. uncertainties
- Desired depth (gauge from their needs, don't ask them to choose)

Adapt your approach: If uncertain → help them think it through. If detailed → dig deeper.

Collaboratively define scope:

- Markets/segments to focus on
- Geographic boundaries
- Critical questions vs. nice-to-have
</action>

<action>Reflect understanding back to confirm you're aligned on what matters.</action>

<template-output>product_name</template-output>
<template-output>product_description</template-output>
<template-output>research_objectives</template-output>
<template-output>research_scope</template-output>
</step>

<step n="2" goal="Market Definition and Boundaries">
<action>Help the user precisely define the market scope</action>

Work with the user to establish:

1. **Market Category Definition**
   - Primary category/industry
   - Adjacent or overlapping markets
   - Where this fits in the value chain

2. **Geographic Scope**
   - Global, regional, or country-specific?
   - Primary markets vs. expansion markets
   - Regulatory considerations by region

3. **Customer Segment Boundaries**
   - B2B, B2C, or B2B2C?
   - Primary vs. secondary segments
   - Segment size estimates

<ask>Should we include adjacent markets in the TAM calculation? This could significantly increase market size but may be less immediately addressable.</ask>

<template-output>market_definition</template-output>
<template-output>geographic_scope</template-output>
<template-output>segment_boundaries</template-output>
</step>

<step n="3" goal="Gather live market intelligence collaboratively">

<critical>This step REQUIRES WebSearch tool usage - gather CURRENT data from {{current_year}}</critical>
<critical>Share findings as you go - make this collaborative, not a black box</critical>

<action>Let {user_name} know you're searching for current {{market_category}} market data: size, growth, analyst reports, recent trends. Tell them you'll share what you find in a few minutes and review it together.</action>

<step n="3a" title="Search for market size and industry data">
<action>Conduct systematic web searches using the WebSearch tool:

<WebSearch>{{market_category}} market size {{geographic_scope}} {{current_year}}</WebSearch>
<WebSearch>{{market_category}} industry report Gartner Forrester IDC {{current_year}}</WebSearch>
<WebSearch>{{market_category}} market growth rate CAGR forecast {{current_year}}</WebSearch>
<WebSearch>{{market_category}} market trends {{current_year}}</WebSearch>
<WebSearch>{{market_category}} TAM SAM market opportunity {{current_year}}</WebSearch>
</action>

<action>Share findings WITH SOURCES, including URLs and dates. Ask if it aligns with their expectations.</action>

<action>CRITICAL - Validate data before proceeding:

- Multiple sources with similar figures?
- Recent sources ({{current_year}} or within 1-2 years)?
- Credible sources (Gartner, Forrester, govt data, reputable pubs)?
- Conflicts? Note explicitly, search for more sources, mark [Low Confidence]
</action>

<action if="user_has_questions">Explore surprising data points together</action>

<invoke-task halt="true">{project-root}/bmad/core/tasks/adv-elicit.xml</invoke-task>

<template-output>sources_market_size</template-output>
</step>

<step n="3b" title="Search for recent news and developments" optional="true">
<action>Search for recent market developments:

<WebSearch>{{market_category}} news {{current_year}} funding acquisitions</WebSearch>
<WebSearch>{{market_category}} recent developments {{current_year}}</WebSearch>
<WebSearch>{{market_category}} regulatory changes {{current_year}}</WebSearch>
</action>

<action>Share noteworthy findings:

"I found some interesting recent developments:

{{key_news_highlights}}

Anything here surprise you or confirm what you suspected?"
</action>
</step>

<step n="3c" title="Optional: Government and academic sources" optional="true">
<action if="research needs high credibility">Search for authoritative sources:

<WebSearch>{{market_category}} government statistics census data {{current_year}}</WebSearch>
<WebSearch>{{market_category}} academic research white papers {{current_year}}</WebSearch>
</action>
</step>

<template-output>market_intelligence_raw</template-output>
<template-output>key_data_points</template-output>
<template-output>source_credibility_notes</template-output>
</step>

<step n="4" goal="TAM, SAM, SOM Calculations">
<action>Calculate market sizes using multiple methodologies for triangulation</action>

<critical>Use actual data gathered in previous steps, not hypothetical numbers</critical>

<step n="4a" title="TAM Calculation">
**Method 1: Top-Down Approach**

- Start with total industry size from research
- Apply relevant filters and segments
- Show calculation: Industry Size × Relevant Percentage

**Method 2: Bottom-Up Approach**

- Number of potential customers × Average revenue per customer
- Build from unit economics

**Method 3: Value Theory Approach**

- Value created × Capturable percentage
- Based on problem severity and alternative costs

<ask>Which TAM calculation method seems most credible given our data? Should we use multiple methods and triangulate?</ask>

<template-output>tam_calculation</template-output>
<template-output>tam_methodology</template-output>
</step>

<step n="4b" title="SAM Calculation">
<action>Calculate Serviceable Addressable Market</action>

Apply constraints to TAM:

- Geographic limitations (markets you can serve)
- Regulatory restrictions
- Technical requirements (e.g., internet penetration)
- Language/cultural barriers
- Current business model limitations

SAM = TAM × Serviceable Percentage
Show the calculation with clear assumptions.
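
The sizing mechanics in this step can be sketched as a quick sanity check. All numbers below are placeholder assumptions for illustration only - real figures must come from the sourced research gathered in step 3:

```python
# Illustrative TAM/SAM/SOM sizing sketch. Every number here is a
# placeholder assumption, NOT real market data.

def top_down_tam(industry_size, relevant_share):
    """TAM = total industry size x share relevant to this product."""
    return industry_size * relevant_share

def bottom_up_tam(num_customers, revenue_per_customer):
    """TAM built up from unit economics."""
    return num_customers * revenue_per_customer

industry_size = 50e9                         # $50B industry (placeholder)
tam = top_down_tam(industry_size, 0.20)      # 20% relevant slice -> $10B
tam_check = bottom_up_tam(2_000_000, 5_000)  # 2M buyers x $5k -> $10B (triangulates)

sam = tam * 0.30  # 30% serviceable after geography/regulatory constraints
som = {name: sam * share for name, share in
       [("conservative", 0.015), ("realistic", 0.04), ("optimistic", 0.075)]}

print(f"TAM ${tam / 1e9:.1f}B (bottom-up check ${tam_check / 1e9:.1f}B)")
print(f"SAM ${sam / 1e9:.1f}B")
for name, value in som.items():
    print(f"SOM {name}: ${value / 1e6:.0f}M")
```

Running both TAM methods and comparing them is the triangulation the step asks for: when top-down and bottom-up disagree badly, the assumptions need revisiting before SAM and SOM are derived.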

<template-output>sam_calculation</template-output>
</step>

<step n="4c" title="SOM Calculation">
<action>Calculate realistic market capture</action>

Consider competitive dynamics:

- Current market share of competitors
- Your competitive advantages
- Resource constraints
- Time to market considerations
- Customer acquisition capabilities

Create 3 scenarios:

1. Conservative (1-2% market share)
2. Realistic (3-5% market share)
3. Optimistic (5-10% market share)

<template-output>som_scenarios</template-output>
</step>
</step>

<step n="5" goal="Customer Segment Deep Dive">
<action>Develop detailed understanding of target customers</action>

<step n="5a" title="Segment Identification" repeat="for-each-segment">
For each major segment, research and define:

**Demographics/Firmographics:**

- Size and scale characteristics
- Geographic distribution
- Industry/vertical (for B2B)

**Psychographics:**

- Values and priorities
- Decision-making process
- Technology adoption patterns

**Behavioral Patterns:**

- Current solutions used
- Purchasing frequency
- Budget allocation

<invoke-task halt="true">{project-root}/bmad/core/tasks/adv-elicit.xml</invoke-task>
<template-output>segment_profile_{{segment_number}}</template-output>
</step>

<step n="5b" title="Jobs-to-be-Done Framework">
<action>Apply JTBD framework to understand customer needs</action>

For the primary segment, identify:

**Functional Jobs:**

- Main tasks to accomplish
- Problems to solve
- Goals to achieve

**Emotional Jobs:**

- Feelings sought
- Anxieties to avoid
- Status desires

**Social Jobs:**

- How they want to be perceived
- Group dynamics
- Peer influences

<ask>Would you like to conduct actual customer interviews or surveys to validate these jobs? (We can create an interview guide)</ask>

<template-output>jobs_to_be_done</template-output>
</step>

<step n="5c" title="Willingness to Pay Analysis">
<action>Research and estimate pricing sensitivity</action>

Analyze:

- Current spending on alternatives
- Budget allocation for this category
- Value perception indicators
- Price points of substitutes

<template-output>pricing_analysis</template-output>
</step>
</step>

<step n="6" goal="Understand the competitive landscape">
<action>Ask if they know their main competitors or if you should search for them.</action>

<step n="6a" title="Discover competitors together">
<action if="user doesn't know competitors">Search for competitors:

<WebSearch>{{product_category}} competitors {{geographic_scope}} {{current_year}}</WebSearch>
<WebSearch>{{product_category}} alternatives comparison {{current_year}}</WebSearch>
<WebSearch>top {{product_category}} companies {{current_year}}</WebSearch>
</action>

<action>Present findings. Ask them to pick the 3-5 that matter most (most concerned about or curious to understand).</action>
</step>

<step n="6b" title="Research each competitor together" repeat="for-each-selected-competitor">
<action>For each competitor, search for:

- Company overview, product features
- Pricing model
- Funding and recent news
- Customer reviews and ratings

Use {{current_year}} in all searches.
</action>

<action>Share findings with sources. Ask what jumps out and if it matches expectations.</action>

<action if="user has follow-up questions">Dig deeper based on their interests</action>

<invoke-task halt="true">{project-root}/bmad/core/tasks/adv-elicit.xml</invoke-task>
<template-output>competitor_analysis_{{competitor_name}}</template-output>
</step>

<step n="6c" title="Competitive Positioning Map">
<action>Create positioning analysis</action>

Map competitors on key dimensions:

- Price vs. Value
- Feature completeness vs. Ease of use
- Market segment focus
- Technology approach
- Business model

Identify:

- Gaps in the market
- Over-served areas
- Differentiation opportunities

<template-output>competitive_positioning</template-output>
</step>
</step>

<step n="7" goal="Industry Forces Analysis">
<action>Apply Porter's Five Forces framework</action>

<critical>Use specific evidence from research, not generic assessments</critical>

Analyze each force with concrete examples:

<step n="7a" title="Supplier Power">
Rate: [Low/Medium/High]

- Key suppliers and dependencies
- Switching costs
- Concentration of suppliers
- Forward integration threat
</step>

<step n="7b" title="Buyer Power">
Rate: [Low/Medium/High]

- Customer concentration
- Price sensitivity
- Switching costs for customers
- Backward integration threat
</step>

<step n="7c" title="Competitive Rivalry">
Rate: [Low/Medium/High]

- Number and strength of competitors
- Industry growth rate
- Exit barriers
- Differentiation levels
</step>

<step n="7d" title="Threat of New Entry">
Rate: [Low/Medium/High]

- Capital requirements
- Regulatory barriers
- Network effects
- Brand loyalty
</step>

<step n="7e" title="Threat of Substitutes">
Rate: [Low/Medium/High]

- Alternative solutions
- Switching costs to substitutes
- Price-performance trade-offs
</step>

<template-output>porters_five_forces</template-output>
</step>

<step n="8" goal="Market Trends and Future Outlook">
<action>Identify trends and future market dynamics</action>

Research and analyze:

**Technology Trends:**

- Emerging technologies impacting the market
- Digital transformation effects
- Automation possibilities

**Social/Cultural Trends:**

- Changing customer behaviors
- Generational shifts
- Social movements impact

**Economic Trends:**

- Macroeconomic factors
- Industry-specific economics
- Investment trends

**Regulatory Trends:**

- Upcoming regulations
- Compliance requirements
- Policy direction

<ask>Should we explore any specific emerging technologies or disruptions that could reshape this market?</ask>

<template-output>market_trends</template-output>
<template-output>future_outlook</template-output>
</step>

<step n="9" goal="Opportunity Assessment and Strategy">
<action>Synthesize research into strategic opportunities</action>

<step n="9a" title="Opportunity Identification">
Based on all research, identify top 3-5 opportunities:

For each opportunity:

- Description and rationale
- Size estimate (from SOM)
- Resource requirements
- Time to market
- Risk assessment
- Success criteria

<invoke-task halt="true">{project-root}/bmad/core/tasks/adv-elicit.xml</invoke-task>
<template-output>market_opportunities</template-output>
</step>

<step n="9b" title="Go-to-Market Recommendations">
Develop GTM strategy based on research:

**Positioning Strategy:**

- Value proposition refinement
- Differentiation approach
- Messaging framework

**Target Segment Sequencing:**

- Beachhead market selection
- Expansion sequence
- Segment-specific approaches

**Channel Strategy:**

- Distribution channels
- Partnership opportunities
- Marketing channels

**Pricing Strategy:**

- Model recommendation
- Price points
- Value metrics

<template-output>gtm_strategy</template-output>
</step>

<step n="9c" title="Risk Analysis">
Identify and assess key risks:

**Market Risks:**

- Demand uncertainty
- Market timing
- Economic sensitivity

**Competitive Risks:**

- Competitor responses
- New entrants
- Technology disruption

**Execution Risks:**

- Resource requirements
- Capability gaps
- Scaling challenges

For each risk: Impact (H/M/L) × Probability (H/M/L) = Risk Score
Provide mitigation strategies.
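
One simple way to turn the H/M/L ratings into comparable scores is a numeric mapping. The mapping values and the sample risks below are illustrative assumptions, not part of the workflow:

```python
# Hypothetical H/M/L -> numeric mapping for risk scoring.
# The values 3/2/1 are an illustrative convention.
LEVELS = {"H": 3, "M": 2, "L": 1}

def risk_score(impact, probability):
    """Risk Score = Impact x Probability, each rated H/M/L."""
    return LEVELS[impact] * LEVELS[probability]

# Placeholder risks - substitute the risks identified above.
risks = [
    ("Demand uncertainty", "H", "M"),
    ("Competitor response", "M", "H"),
    ("Scaling challenges", "M", "L"),
]

# List highest-scoring risks first so mitigation effort is prioritized.
for name, impact, prob in sorted(risks, key=lambda r: -risk_score(r[1], r[2])):
    print(f"{name}: score {risk_score(impact, prob)}")
```

Sorting by score surfaces which risks deserve the most detailed mitigation strategies.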

<template-output>risk_assessment</template-output>
</step>
</step>

<step n="10" goal="Financial Projections" optional="true" if="enable_financial_modeling == true">
<action>Create financial model based on market research</action>

<ask>Would you like to create a financial model with revenue projections based on the market analysis?</ask>

<check if="yes">
Build 3-year projections:

- Revenue model based on SOM scenarios
- Customer acquisition projections
- Unit economics
- Break-even analysis
- Funding requirements
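
The projection mechanics can be sketched as follows. Every input is a placeholder assumption to show how the SOM scenario feeds customer counts and acquisition spend, not a real forecast:

```python
# Illustrative 3-year revenue projection from a SOM scenario.
# All inputs are placeholder assumptions.
som_realistic = 120e6                   # $120M SOM (placeholder from step 4c)
capture_by_year = [0.05, 0.15, 0.30]    # share of SOM captured each year
avg_revenue_per_customer = 5_000        # annual revenue per customer (assumption)
cac = 1_200                             # customer acquisition cost (assumption)

for year, capture in enumerate(capture_by_year, start=1):
    revenue = som_realistic * capture
    customers = revenue / avg_revenue_per_customer
    acquisition_spend = customers * cac
    print(f"Year {year}: revenue ${revenue / 1e6:.1f}M, "
          f"{customers:,.0f} customers, CAC spend ${acquisition_spend / 1e6:.2f}M")
```

The same structure extends naturally to break-even analysis (add fixed costs and gross margin) and funding requirements (cumulative cash shortfall).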

<template-output>financial_projections</template-output>
</check>

</step>

<step n="11" goal="Synthesize findings together into executive summary">

<critical>This is the last major content section - make it collaborative</critical>

<action>Review the research journey together. Share high-level summaries of market size, competitive dynamics, customer insights. Ask what stands out most - what surprised them or confirmed their thinking.</action>

<action>Collaboratively craft the narrative:

- What's the headline? (The ONE thing someone should know)
- What are the 3-5 critical insights?
- Recommended path forward?
- Key risks?

This should read like a strategic brief, not a data dump.
</action>

<action>Draft the executive summary and share it. Ask if it captures the essence and if anything is missing or overemphasized.</action>

<template-output>executive_summary</template-output>
</step>

<step n="12" goal="Validate sources and compile report">

<critical>MANDATORY SOURCE VALIDATION - Do NOT skip this step!</critical>

<action>Before finalizing, conduct a source audit:

Review every major claim in the report and verify:

**For Market Size Claims:**

- [ ] At least 2 independent sources cited with URLs
- [ ] Sources are from {{current_year}} or within 2 years
- [ ] Sources are credible (Gartner, Forrester, govt data, reputable pubs)
- [ ] Conflicting estimates are noted with all sources

**For Competitive Data:**

- [ ] Competitor information has source URLs
- [ ] Pricing data is current and sourced
- [ ] Funding data is verified with dates
- [ ] Customer reviews/ratings have source links

**For Growth Rates and Projections:**

- [ ] CAGR and forecast data are sourced
- [ ] Methodology is explained or linked
- [ ] Multiple analyst estimates are compared if available

**For Customer Insights:**

- [ ] Persona data is based on real research (cited)
- [ ] Survey/interview data has sample size and source
- [ ] Behavioral claims are backed by studies/data
</action>

<action>Count and document source quality:

- Total sources cited: {{count_all_sources}}
- High confidence (2+ sources): {{high_confidence_claims}}
- Single source (needs verification): {{single_source_claims}}
- Uncertain/speculative: {{low_confidence_claims}}

If {{single_source_claims}} or {{low_confidence_claims}} is high, consider additional research.
|
||||
</action>
|
||||
|
||||
<action>Compile full report with ALL sources properly referenced:
|
||||
|
||||
Generate the complete market research report using the template:
|
||||
|
||||
- Ensure every statistic has inline citation: [Source: Company, Year, URL]
|
||||
- Populate all {{sources_*}} template variables
|
||||
- Include confidence levels for major claims
|
||||
- Add References section with full source list
|
||||
</action>
|
||||
|
||||
<action>Present source quality summary to user:
|
||||
|
||||
"I've completed the research with {{count_all_sources}} total sources:
|
||||
|
||||
- {{high_confidence_claims}} claims verified with multiple sources
|
||||
- {{single_source_claims}} claims from single sources (marked for verification)
|
||||
- {{low_confidence_claims}} claims with low confidence or speculation
|
||||
|
||||
Would you like me to strengthen any areas with additional research?"
|
||||
</action>
|
||||
|
||||
<ask>Would you like to review any specific sections before finalizing? Are there any additional analyses you'd like to include?</ask>
|
||||
|
||||
<goto step="9a" if="user requests changes">Return to refine opportunities</goto>
|
||||
|
||||
<template-output>final_report_ready</template-output>
|
||||
<template-output>source_audit_complete</template-output>
|
||||
</step>
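The source-quality tally in the step above is a simple count over claims bucketed by how many independent sources back them. A minimal sketch, with entirely hypothetical claims:

```python
from collections import Counter

# Hypothetical audit records - each claim tracks its independent source count.
claims = [
    {"claim": "TAM is $4.2B", "sources": 3},
    {"claim": "Competitor X raised $40M", "sources": 2},
    {"claim": "Churn averages 5%/month in this segment", "sources": 1},
    {"claim": "Buyers prefer annual billing", "sources": 0},
]

def confidence(n_sources: int) -> str:
    """Bucket a claim by the rule in the audit: 2+ sources = high confidence."""
    if n_sources >= 2:
        return "high"
    if n_sources == 1:
        return "single-source"
    return "low"

tally = Counter(confidence(c["sources"]) for c in claims)
count_all_sources = sum(c["sources"] for c in claims)
```

`tally` then feeds the `{{high_confidence_claims}}`, `{{single_source_claims}}`, and `{{low_confidence_claims}}` template variables.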

<step n="13" goal="Appendices and Supporting Materials" optional="true">
<ask>Would you like to include detailed appendices with calculations, full competitor profiles, or raw research data?</ask>

<check if="yes">
Create appendices with:

- Detailed TAM/SAM/SOM calculations
- Full competitor profiles
- Customer interview notes
- Data sources and methodology
- Financial model details
- Glossary of terms

<template-output>appendices</template-output>
</check>

</step>
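The TAM/SAM/SOM calculation appendix follows the standard funnel: each layer narrows the one above it. A worked sketch with made-up inputs (customer count, spend, and share percentages are all assumptions for illustration):

```python
# Top-down market sizing - every input is a hypothetical placeholder.
potential_customers = 120_000     # assumed addressable customers worldwide
annual_spend = 35_000             # assumed average annual spend per customer ($)
sam_share = 0.25                  # assumed slice reachable with current product/geography
som_share = {"conservative": 0.01, "base": 0.03, "aggressive": 0.05}

tam = potential_customers * annual_spend          # total addressable market
sam = tam * sam_share                             # serviceable addressable market
som = {case: round(sam * share) for case, share in som_share.items()}

print(f"TAM: ${tam:,}  SAM: ${sam:,.0f}")
for case, value in som.items():
    print(f"SOM ({case}): ${value:,}")
```

Showing the inputs alongside the arithmetic like this is what makes the appendix auditable: a reader can challenge any single assumption without redoing the whole model.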

<step n="14" goal="Update status file on completion" tag="workflow-status">
<check if="standalone_mode != true">
<action>Load the FULL file: {output_folder}/bmm-workflow-status.yaml</action>
<action>Find workflow_status key "research"</action>
<critical>ONLY write the file path as the status value - no other text, notes, or metadata</critical>
<action>Update workflow_status["research"] = "{output_folder}/bmm-research-{{research_mode}}-{{date}}.md"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>

<action>Find first non-completed workflow in workflow_status (next workflow to do)</action>
<action>Determine next agent from path file based on next workflow</action>
</check>

<output>**✅ Research Complete ({{research_mode}} mode)**

**Research Report:**

- Research report generated and saved to {output_folder}/bmm-research-{{research_mode}}-{{date}}.md

{{#if standalone_mode != true}}
**Status Updated:**

- Progress tracking updated: research marked complete
- Next workflow: {{next_workflow}}
{{else}}
**Note:** Running in standalone mode (no progress tracking)
{{/if}}

**Next Steps:**

{{#if standalone_mode != true}}

- **Next workflow:** {{next_workflow}} ({{next_agent}} agent)
- **Optional:** Review findings with stakeholders, or run additional analysis workflows (product-brief, game-brief, etc.)

Check status anytime with: `workflow-status`
{{else}}
Since no workflow is in progress:

- Review research findings
- Refer to the BMM workflow guide if unsure what to do next
- Or run `workflow-init` to create a workflow path and get guided next steps
{{/if}}
</output>
</step>

</workflow>
133
bmad/bmm/workflows/1-analysis/research/instructions-router.md
Normal file
@@ -0,0 +1,133 @@

# Research Workflow Router Instructions

<critical>The workflow execution engine is governed by: {project_root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>Communicate in {communication_language}, generate documents in {document_output_language}</critical>
<critical>Web research is ENABLED - always use current {{current_year}} data</critical>

<critical>🚨 ANTI-HALLUCINATION PROTOCOL - MANDATORY 🚨</critical>
<critical>NEVER present information without a verified source - if you cannot find a source, say "I could not find reliable data on this"</critical>
<critical>ALWAYS cite sources with URLs when presenting data, statistics, or factual claims</critical>
<critical>REQUIRE at least 2 independent sources for critical claims (market size, growth rates, competitive data)</critical>
<critical>When sources conflict, PRESENT BOTH views and note the discrepancy - do NOT pick one arbitrarily</critical>
<critical>Flag any data you are uncertain about with confidence levels: [High Confidence], [Medium Confidence], [Low Confidence - verify]</critical>
<critical>Distinguish clearly between: FACTS (from sources), ANALYSIS (your interpretation), and SPECULATION (educated guesses)</critical>
<critical>When using WebSearch results, ALWAYS extract and include the source URL for every claim</critical>

<!-- IDE-INJECT-POINT: research-subagents -->

<workflow>

<critical>This is a ROUTER that directs to specialized research instruction sets</critical>

<step n="1" goal="Validate workflow readiness" tag="workflow-status">
<action>Check if {output_folder}/bmm-workflow-status.yaml exists</action>

<check if="status file not found">
<output>No workflow status file found. Research is optional - you can continue without status tracking.</output>
<action>Set standalone_mode = true</action>
</check>

<check if="status file found">
<action>Load the FULL file: {output_folder}/bmm-workflow-status.yaml</action>
<action>Parse workflow_status section</action>
<action>Check status of "research" workflow</action>
<action>Get project_level from YAML metadata</action>
<action>Find first non-completed workflow (next expected workflow)</action>
<action>Pass status context to loaded instruction set for final update</action>

<check if="research status is file path (already completed)">
<output>⚠️ Research already completed: {{research_status}}</output>
<ask>Re-running will create a new research report. Continue? (y/n)</ask>
<check if="n">
<output>Exiting. Use workflow-status to see your next step.</output>
<action>Exit workflow</action>
</check>
</check>

<check if="research is not the next expected workflow (later items in the list are already completed)">
<output>⚠️ Next expected workflow: {{next_workflow}}. Research is out of sequence.</output>
<output>Note: Research can provide valuable insights at any project stage.</output>
<ask>Continue with Research anyway? (y/n)</ask>
<check if="n">
<output>Exiting. Run {{next_workflow}} instead.</output>
<action>Exit workflow</action>
</check>
</check>

<action>Set standalone_mode = false</action>
</check>
</step>

<step n="2" goal="Discover research needs through conversation">

<action>Welcome {user_name} warmly. Position yourself as their research partner who uses live {{current_year}} web data. Ask what they're looking to understand or research.</action>

<action>Listen and collaboratively identify the research type based on what they describe:

- Market/Business questions → Market Research
- Competitor questions → Competitive Intelligence
- Customer questions → User Research
- Technology questions → Technical Research
- Industry questions → Domain Research
- Creating research prompts for AI platforms → Deep Research Prompt Generator

Confirm your understanding of what type would be most helpful and what it will produce.
</action>

<action>Capture {{research_type}} and {{research_mode}}</action>

<template-output>research_type_discovery</template-output>
</step>

<step n="3" goal="Route to Appropriate Research Instructions">

<critical>Based on user selection, load the appropriate instruction set</critical>

<check if="research_type == 1 OR fuzzy match market research">
<action>Set research_mode = "market"</action>
<action>LOAD: {installed_path}/instructions-market.md</action>
<action>Continue with market research workflow</action>
</check>

<check if="research_type == 2 OR fuzzy match prompt or deep research prompt">
<action>Set research_mode = "deep-prompt"</action>
<action>LOAD: {installed_path}/instructions-deep-prompt.md</action>
<action>Continue with deep research prompt generation</action>
</check>

<check if="research_type == 3 OR fuzzy match technical or architecture research">
<action>Set research_mode = "technical"</action>
<action>LOAD: {installed_path}/instructions-technical.md</action>
<action>Continue with technical research workflow</action>
</check>

<check if="research_type == 4 OR fuzzy match competitive">
<action>Set research_mode = "competitive"</action>
<action>This will use market research workflow with competitive focus</action>
<action>LOAD: {installed_path}/instructions-market.md</action>
<action>Pass mode="competitive" to focus on competitive intelligence</action>
</check>

<check if="research_type == 5 OR fuzzy match user research">
<action>Set research_mode = "user"</action>
<action>This will use market research workflow with user research focus</action>
<action>LOAD: {installed_path}/instructions-market.md</action>
<action>Pass mode="user" to focus on customer insights</action>
</check>

<check if="research_type == 6 OR fuzzy match domain or industry or category">
<action>Set research_mode = "domain"</action>
<action>This will use market research workflow with domain focus</action>
<action>LOAD: {installed_path}/instructions-market.md</action>
<action>Pass mode="domain" to focus on industry/domain analysis</action>
</check>

<critical>The loaded instruction set will continue from here with full context of the {research_type}</critical>

</step>
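The routing logic in the step above amounts to a small dispatch table: a numeric choice or a fuzzy keyword resolves to a mode, an instruction file, and an optional focus. A minimal sketch (keyword list and return shape are illustrative assumptions, not part of the workflow spec):

```python
# (research_mode, instruction file to LOAD, optional focus passed through)
ROUTES = {
    1: ("market", "instructions-market.md", None),
    2: ("deep-prompt", "instructions-deep-prompt.md", None),
    3: ("technical", "instructions-technical.md", None),
    4: ("competitive", "instructions-market.md", "competitive"),
    5: ("user", "instructions-market.md", "user"),
    6: ("domain", "instructions-market.md", "domain"),
}

# Hypothetical keyword table backing the "fuzzy match" conditions.
KEYWORDS = {
    "market": 1, "prompt": 2, "technical": 3, "architecture": 3,
    "competitive": 4, "competitor": 4, "user": 5, "customer": 5,
    "domain": 6, "industry": 6, "category": 6,
}

def route(user_input):
    """Resolve a numeric choice or free-text description to a route, or None."""
    text = str(user_input).strip().lower()
    if text.isdigit() and int(text) in ROUTES:
        return ROUTES[int(text)]
    for word, number in KEYWORDS.items():
        if word in text:
            return ROUTES[number]
    return None  # no match - fall back to asking the user again
```

Note how types 4-6 reuse `instructions-market.md` with a focus parameter rather than separate files, exactly as the checks above describe.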

</workflow>
538
bmad/bmm/workflows/1-analysis/research/instructions-technical.md
Normal file
@@ -0,0 +1,538 @@

# Technical/Architecture Research Instructions

<critical>The workflow execution engine is governed by: {project_root}/bmad/core/tasks/workflow.xml</critical>
<critical>You MUST have already loaded and processed: {installed_path}/workflow.yaml</critical>
<critical>This workflow uses ADAPTIVE FACILITATION - adjust your communication style based on {user_skill_level}</critical>
<critical>This is a HIGHLY INTERACTIVE workflow - make technical decisions WITH the user, not FOR them</critical>
<critical>Web research is MANDATORY - use WebSearch tool with {{current_year}} for current version info and trends</critical>
<critical>ALWAYS verify current versions - NEVER use hardcoded or outdated version numbers</critical>
<critical>Communicate all responses in {communication_language} and tailor to {user_skill_level}</critical>
<critical>Generate all documents in {document_output_language}</critical>

<critical>🚨 ANTI-HALLUCINATION PROTOCOL - MANDATORY 🚨</critical>
<critical>NEVER invent version numbers, features, or technical details - ALWAYS verify with current {{current_year}} sources</critical>
<critical>Every technical claim (version, feature, performance, compatibility) MUST have a cited source with URL</critical>
<critical>Version numbers MUST be verified via WebSearch - do NOT rely on training data (it's outdated!)</critical>
<critical>When comparing technologies, cite sources for each claim (performance benchmarks, community size, etc.)</critical>
<critical>Mark confidence levels: [Verified {{current_year}} source], [Older source - verify], [Uncertain - needs verification]</critical>
<critical>Distinguish: FACT (from official docs/sources), OPINION (from community/reviews), SPECULATION (your analysis)</critical>
<critical>If you cannot find current information about a technology, state: "I could not find recent {{current_year}} data on [X]"</critical>
<critical>Extract and include source URLs in all technology profiles and comparisons</critical>

<workflow>

<step n="1" goal="Discover technical research needs through conversation">

<action>Engage conversationally based on skill level:

<check if="{user_skill_level} == 'expert'">
"Let's research the technical options for your decision.

I'll gather current data from {{current_year}}, compare approaches, and help you think through trade-offs.

What technical question are you wrestling with?"
</check>

<check if="{user_skill_level} == 'intermediate'">
"I'll help you research and evaluate your technical options.

We'll look at current technologies (using {{current_year}} data), understand the trade-offs, and figure out what fits your needs best.

What technical decision are you trying to make?"
</check>

<check if="{user_skill_level} == 'beginner'">
"Think of this as having a technical advisor help you research your options.

I'll explain what different technologies do, why you might choose one over another, and help you make an informed decision.

What technical challenge brought you here?"
</check>
</action>

<action>Through conversation, understand:

- **The technical question** - What they need to decide or understand
- **The context** - Greenfield? Brownfield? Learning? Production?
- **Current constraints** - Languages, platforms, team skills, budget
- **What they already know** - Do they have candidates in mind?

Don't interrogate - explore together. If they're unsure, help them articulate the problem.
</action>

<template-output>technical_question</template-output>
<template-output>project_context</template-output>

</step>

<step n="2" goal="Define Technical Requirements and Constraints">
<action>Gather requirements and constraints that will guide the research</action>

**Let's define your technical requirements:**

<ask>**Functional Requirements** - What must the technology do?

Examples:

- Handle 1M requests per day
- Support real-time data processing
- Provide full-text search capabilities
- Enable offline-first mobile app
- Support multi-tenancy</ask>

<template-output>functional_requirements</template-output>

<ask>**Non-Functional Requirements** - Performance, scalability, security needs?

Consider:

- Performance targets (latency, throughput)
- Scalability requirements (users, data volume)
- Reliability and availability needs
- Security and compliance requirements
- Maintainability and developer experience</ask>

<template-output>non_functional_requirements</template-output>

<ask>**Constraints** - What limitations or requirements exist?

- Programming language preferences or requirements
- Cloud platform (AWS, Azure, GCP, on-prem)
- Budget constraints
- Team expertise and skills
- Timeline and urgency
- Existing technology stack (if brownfield)
- Open source vs commercial requirements
- Licensing considerations</ask>

<template-output>technical_constraints</template-output>

</step>

<step n="3" goal="Discover and evaluate technology options together">

<critical>MUST use WebSearch to find current options from {{current_year}}</critical>

<action>Ask if they have candidates in mind:

"Do you already have specific technologies you want to compare, or should I search for the current options?"
</action>

<action if="user has candidates">Great! Let's research: {{user_candidates}}</action>

<action if="discovering options">Search for current leading technologies:

<WebSearch>{{technical_category}} best tools {{current_year}}</WebSearch>
<WebSearch>{{technical_category}} comparison {{use_case}} {{current_year}}</WebSearch>
<WebSearch>{{technical_category}} popular frameworks {{current_year}}</WebSearch>
<WebSearch>state of {{technical_category}} {{current_year}}</WebSearch>
</action>

<action>Share findings conversationally:

"Based on current {{current_year}} data, here are the main options:

{{discovered_options}}

<check if="{user_skill_level} == 'expert'">
These are the leaders right now. Which ones make sense to evaluate for your use case?"
</check>

<check if="{user_skill_level} == 'beginner'">
Each of these is popular for different reasons. Let me know if you want me to explain what makes each one different."
</check>
</action>

<invoke-task halt="true">{project-root}/bmad/core/tasks/adv-elicit.xml</invoke-task>

<template-output>technology_options</template-output>

</step>

<step n="4" goal="Research each technology together in depth">

<critical>For each option, use WebSearch to gather CURRENT {{current_year}} information</critical>

<step n="4a" title="Deep dive on each technology" repeat="for-each-option">

<action>For {{technology_name}}, conduct comprehensive research:

<WebSearch>{{technology_name}} overview what is {{current_year}}</WebSearch>
<WebSearch>{{technology_name}} latest version release notes {{current_year}}</WebSearch>
<WebSearch>{{technology_name}} pros cons trade-offs {{current_year}}</WebSearch>
<WebSearch>{{technology_name}} production experience real world {{current_year}}</WebSearch>
<WebSearch>{{technology_name}} vs alternatives comparison {{current_year}}</WebSearch>
</action>

<action>Share findings conversationally and collaboratively:

"Here's what I found about {{technology_name}}:

**Overview:**
{{what_it_is_and_solves}}

**Current Status ({{current_year}}):**
{{maturity_community_release_cadence}}

**Technical Characteristics:**

- Architecture and design philosophy
- Core features and capabilities
- Performance characteristics
- Scalability approach
- Integration capabilities

**Developer Experience:**

- Learning curve
- Documentation quality
- Tooling ecosystem
- Testing support
- Debugging capabilities

**Operations:**

- Deployment complexity
- Monitoring and observability
- Operational overhead
- Cloud provider support
- Container/K8s compatibility

**Ecosystem:**

- Available libraries and plugins
- Third-party integrations
- Commercial support options
- Training and educational resources

**Community and Adoption:**

- GitHub stars/contributors (if applicable)
- Production usage examples
- Case studies from similar use cases
- Community support channels
- Job market demand

**Costs:**

- Licensing model
- Hosting/infrastructure costs
- Support costs
- Training costs
- Total cost of ownership estimate"
</action>

<invoke-task halt="true">{project-root}/bmad/core/tasks/adv-elicit.xml</invoke-task>
<template-output>tech_profile_{{option_number}}</template-output>

</step>

</step>

<step n="5" goal="Comparative Analysis">
<action>Create structured comparison across all options</action>

**Create comparison matrices:**

<action>Generate comparison table with key dimensions:</action>

**Comparison Dimensions:**

1. **Meets Requirements** - How well does each meet functional requirements?
2. **Performance** - Speed, latency, throughput benchmarks
3. **Scalability** - Horizontal/vertical scaling capabilities
4. **Complexity** - Learning curve and operational complexity
5. **Ecosystem** - Maturity, community, libraries, tools
6. **Cost** - Total cost of ownership
7. **Risk** - Maturity, vendor lock-in, abandonment risk
8. **Developer Experience** - Productivity, debugging, testing
9. **Operations** - Deployment, monitoring, maintenance
10. **Future-Proofing** - Roadmap, innovation, sustainability

<action>Rate each option on relevant dimensions (High/Medium/Low or 1-5 scale)</action>

<template-output>comparative_analysis</template-output>

</step>

<step n="6" goal="Trade-offs and Decision Factors">
<action>Analyze trade-offs between options</action>

**Identify key trade-offs:**

For each pair of leading options, identify trade-offs:

- What do you gain by choosing Option A over Option B?
- What do you sacrifice?
- Under what conditions would you choose one vs the other?

**Decision factors by priority:**

<ask>What are your top 3 decision factors?

Examples:

- Time to market
- Performance
- Developer productivity
- Operational simplicity
- Cost efficiency
- Future flexibility
- Team expertise match
- Community and support</ask>

<template-output>decision_priorities</template-output>

<action>Weight the comparison analysis by decision priorities</action>

<template-output>weighted_analysis</template-output>

</step>
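Weighting the comparison by decision priorities, as the step above asks, is a weighted-sum over the ratings from step 5. A minimal sketch - the option names, ratings, and weights below are placeholders, not real benchmark data for any technology:

```python
# Priorities from the user, normalized so weights sum to 1.0 (assumed values).
weights = {"time_to_market": 0.5, "performance": 0.3, "operational_simplicity": 0.2}

# 1-5 ratings from the comparative analysis (hypothetical).
ratings = {
    "Option A": {"time_to_market": 5, "performance": 3, "operational_simplicity": 4},
    "Option B": {"time_to_market": 3, "performance": 5, "operational_simplicity": 3},
}

# Weighted score per option: sum of (weight x rating) over the priority dimensions.
scores = {
    name: round(sum(weights[d] * r[d] for d in weights), 2)
    for name, r in ratings.items()
}
best = max(scores, key=scores.get)
```

With these weights, the faster-to-ship option wins even though it rates lower on raw performance - which is exactly the insight this step is meant to surface: the "best" technology depends on the stated priorities, not an absolute ranking.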

<step n="7" goal="Use Case Fit Analysis">
<action>Evaluate fit for specific use case</action>

**Match technologies to your specific use case:**

Based on:

- Your functional and non-functional requirements
- Your constraints (team, budget, timeline)
- Your context (greenfield vs brownfield)
- Your decision priorities

Analyze which option(s) best fit your specific scenario.

<ask>Are there any specific concerns or "must-haves" that would immediately eliminate any options?</ask>

<template-output>use_case_fit</template-output>

</step>

<step n="8" goal="Real-World Evidence">
<action>Gather production experience evidence</action>

**Search for real-world experiences:**

For top 2-3 candidates:

- Production war stories and lessons learned
- Known issues and gotchas
- Migration experiences (if replacing existing tech)
- Performance benchmarks from real deployments
- Team scaling experiences
- Reddit/HackerNews discussions
- Conference talks and blog posts from practitioners

<template-output>real_world_evidence</template-output>

</step>

<step n="9" goal="Architecture Pattern Research" optional="true">
<action>If researching architecture patterns, provide pattern analysis</action>

<ask>Are you researching architecture patterns (microservices, event-driven, etc.)?</ask>

<check if="yes">

Research and document:

**Pattern Overview:**

- Core principles and concepts
- When to use vs when not to use
- Prerequisites and foundations

**Implementation Considerations:**

- Technology choices for the pattern
- Reference architectures
- Common pitfalls and anti-patterns
- Migration path from current state

**Trade-offs:**

- Benefits and drawbacks
- Complexity vs benefits analysis
- Team skill requirements
- Operational overhead

<template-output>architecture_pattern_analysis</template-output>
</check>

</step>

<step n="10" goal="Recommendations and Decision Framework">
<action>Synthesize research into clear recommendations</action>

**Generate recommendations:**

**Top Recommendation:**

- Primary technology choice with rationale
- Why it best fits your requirements and constraints
- Key benefits for your use case
- Risks and mitigation strategies

**Alternative Options:**

- Second and third choices
- When you might choose them instead
- Scenarios where they would be better

**Implementation Roadmap:**

- Proof of concept approach
- Key decisions to make during implementation
- Migration path (if applicable)
- Success criteria and validation approach

**Risk Mitigation:**

- Identified risks and mitigation plans
- Contingency options if primary choice doesn't work
- Exit strategy considerations

<invoke-task halt="true">{project-root}/bmad/core/tasks/adv-elicit.xml</invoke-task>

<template-output>recommendations</template-output>

</step>

<step n="11" goal="Decision Documentation">
<action>Create architecture decision record (ADR) template</action>

**Generate Architecture Decision Record:**

Create ADR format documentation:

```markdown
# ADR-XXX: [Decision Title]

## Status

[Proposed | Accepted | Superseded]

## Context

[Technical context and problem statement]

## Decision Drivers

[Key factors influencing the decision]

## Considered Options

[Technologies/approaches evaluated]

## Decision

[Chosen option and rationale]

## Consequences

**Positive:**

- [Benefits of this choice]

**Negative:**

- [Drawbacks and risks]

**Neutral:**

- [Other impacts]

## Implementation Notes

[Key considerations for implementation]

## References

[Links to research, benchmarks, case studies]
```

<template-output>architecture_decision_record</template-output>

</step>

<step n="12" goal="Finalize Technical Research Report">
<action>Compile complete technical research report</action>

**Your Technical Research Report includes:**

1. **Executive Summary** - Key findings and recommendation
2. **Requirements and Constraints** - What guided the research
3. **Technology Options** - All candidates evaluated
4. **Detailed Profiles** - Deep dive on each option
5. **Comparative Analysis** - Side-by-side comparison
6. **Trade-off Analysis** - Key decision factors
7. **Real-World Evidence** - Production experiences
8. **Recommendations** - Detailed recommendation with rationale
9. **Architecture Decision Record** - Formal decision documentation
10. **Next Steps** - Implementation roadmap

<action>Save complete report to {default_output_file}</action>

<ask>Would you like to:

1. Deep dive into specific technology
2. Research implementation patterns for chosen technology
3. Generate proof-of-concept plan
4. Create deep research prompt for ongoing investigation
5. Exit workflow

Select option (1-5):</ask>

<check if="option 4">
<action>LOAD: {installed_path}/instructions-deep-prompt.md</action>
<action>Pre-populate with technical research context</action>
</check>

</step>

<step n="FINAL" goal="Update status file on completion" tag="workflow-status">
<check if="standalone_mode != true">
<action>Load the FULL file: {output_folder}/bmm-workflow-status.yaml</action>
<action>Find workflow_status key "research"</action>
<critical>ONLY write the file path as the status value - no other text, notes, or metadata</critical>
<action>Update workflow_status["research"] = "{output_folder}/bmm-research-technical-{{date}}.md"</action>
<action>Save file, preserving ALL comments and structure including STATUS DEFINITIONS</action>

<action>Find first non-completed workflow in workflow_status (next workflow to do)</action>
<action>Determine next agent from path file based on next workflow</action>
</check>

<output>**✅ Technical Research Complete**

**Research Report:**

- Technical research report generated and saved to {output_folder}/bmm-research-technical-{{date}}.md

{{#if standalone_mode != true}}
**Status Updated:**

- Progress tracking updated: research marked complete
- Next workflow: {{next_workflow}}
{{else}}
**Note:** Running in standalone mode (no progress tracking)
{{/if}}

**Next Steps:**

{{#if standalone_mode != true}}

- **Next workflow:** {{next_workflow}} ({{next_agent}} agent)
- **Optional:** Review findings with architecture team, or run additional analysis workflows

Check status anytime with: `workflow-status`
{{else}}
Since no workflow is in progress:

- Review technical research findings
- Refer to the BMM workflow guide if unsure what to do next
- Or run `workflow-init` to create a workflow path and get guided next steps
{{/if}}
</output>
</step>

</workflow>
|
||||
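To make the FINAL step concrete, here is a minimal sketch of what `bmm-workflow-status.yaml` might look like after the update. Only the `workflow_status` key and the bare file-path value come from the instructions above; the comments, sibling entries, and path values are illustrative assumptions:

```yaml
# STATUS DEFINITIONS: a bare file path means the workflow is complete;
# "required" / "optional" mean not yet run. Comments must survive the save.
workflow_status:
  research: "/docs/bmm-research-technical-2025-06-01.md" # value is ONLY the file path
  product-brief: required # first non-completed entry -> next workflow / next agent
  prd: required
```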
@@ -0,0 +1,94 @@
# Deep Research Prompt

**Generated:** {{date}}
**Created by:** {{user_name}}
**Target Platform:** {{target_platform}}

---

## Research Prompt (Ready to Use)

### Research Question

{{research_topic}}

### Research Goal and Context

**Objective:** {{research_goal}}

**Context:**
{{research_persona}}

### Scope and Boundaries

**Temporal Scope:** {{temporal_scope}}

**Geographic Scope:** {{geographic_scope}}

**Thematic Focus:**
{{thematic_boundaries}}

### Information Requirements

**Types of Information Needed:**
{{information_types}}

**Preferred Sources:**
{{preferred_sources}}

### Output Structure

**Format:** {{output_format}}

**Required Sections:**
{{key_sections}}

**Depth Level:** {{depth_level}}

### Research Methodology

**Keywords and Technical Terms:**
{{research_keywords}}

**Special Requirements:**
{{special_requirements}}

**Validation Criteria:**
{{validation_criteria}}

### Follow-up Strategy

{{follow_up_strategy}}

---

## Complete Research Prompt (Copy and Paste)

```
{{deep_research_prompt}}
```

---

## Platform-Specific Usage Tips

{{platform_tips}}

---

## Research Execution Checklist

{{execution_checklist}}

---

## Metadata

**Workflow:** BMad Research Workflow - Deep Research Prompt Generator v2.0
**Generated:** {{date}}
**Research Type:** Deep Research Prompt
**Platform:** {{target_platform}}

---

_This research prompt was generated using the BMad Method Research Workflow, incorporating best practices from ChatGPT Deep Research, Gemini Deep Research, Grok DeepSearch, and Claude Projects (2025)._
347
bmad/bmm/workflows/1-analysis/research/template-market.md
Normal file
@@ -0,0 +1,347 @@
# Market Research Report: {{product_name}}

**Date:** {{date}}
**Prepared by:** {{user_name}}
**Research Depth:** {{research_depth}}

---

## Executive Summary

{{executive_summary}}

### Key Market Metrics

- **Total Addressable Market (TAM):** {{tam_calculation}}
- **Serviceable Addressable Market (SAM):** {{sam_calculation}}
- **Serviceable Obtainable Market (SOM):** {{som_scenarios}}

### Critical Success Factors

{{key_success_factors}}

---

## 1. Research Objectives and Methodology

### Research Objectives

{{research_objectives}}

### Scope and Boundaries

- **Product/Service:** {{product_description}}
- **Market Definition:** {{market_definition}}
- **Geographic Scope:** {{geographic_scope}}
- **Customer Segments:** {{segment_boundaries}}

### Research Methodology

{{research_methodology}}

### Data Sources

{{source_credibility_notes}}

---

## 2. Market Overview

### Market Definition

{{market_definition}}

### Market Size and Growth

#### Total Addressable Market (TAM)

**Methodology:** {{tam_methodology}}

{{tam_calculation}}

#### Serviceable Addressable Market (SAM)

{{sam_calculation}}

#### Serviceable Obtainable Market (SOM)

{{som_scenarios}}

### Market Intelligence Summary

{{market_intelligence_raw}}

### Key Data Points

{{key_data_points}}

---

## 3. Market Trends and Drivers

### Key Market Trends

{{market_trends}}

### Growth Drivers

{{growth_drivers}}

### Market Inhibitors

{{market_inhibitors}}

### Future Outlook

{{future_outlook}}

---

## 4. Customer Analysis

### Target Customer Segments

{{#segment_profile_1}}

#### Segment 1

{{segment_profile_1}}
{{/segment_profile_1}}

{{#segment_profile_2}}

#### Segment 2

{{segment_profile_2}}
{{/segment_profile_2}}

{{#segment_profile_3}}

#### Segment 3

{{segment_profile_3}}
{{/segment_profile_3}}

{{#segment_profile_4}}

#### Segment 4

{{segment_profile_4}}
{{/segment_profile_4}}

{{#segment_profile_5}}

#### Segment 5

{{segment_profile_5}}
{{/segment_profile_5}}
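The segment blocks above use mustache-style conditional sections: a `{{#var}}…{{/var}}` block renders its body only when `var` is set, so unused segments (and their headings) drop out of the final report. A minimal Python sketch of that behavior — the tiny renderer and the sample data are illustrative, not the engine BMad actually uses:

```python
import re

def render_sections(template: str, data: dict) -> str:
    """Minimal mustache-style rendering: a {{#key}}...{{/key}} block is kept
    only when data[key] is truthy; plain {{key}} is then substituted."""
    def section(match):
        key, body = match.group(1), match.group(2)
        return body if data.get(key) else ""
    # Resolve conditional sections first (DOTALL so bodies may span lines).
    out = re.sub(r"\{\{#(\w+)\}\}(.*?)\{\{/\1\}\}", section, template, flags=re.S)
    # Then substitute remaining plain variables; unknown keys render empty.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(data.get(m.group(1), "")), out)

template = (
    "{{#segment_profile_1}}\n#### Segment 1\n\n{{segment_profile_1}}\n{{/segment_profile_1}}\n"
    "{{#segment_profile_2}}\n#### Segment 2\n\n{{segment_profile_2}}\n{{/segment_profile_2}}\n"
)
# Only segment 1 has data, so only its heading and body survive.
print(render_sections(template, {"segment_profile_1": "Early-adopter SMBs"}))
```

With only `segment_profile_1` populated, the output contains the Segment 1 heading and body while the Segment 2 block disappears entirely.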

### Jobs-to-be-Done Analysis

{{jobs_to_be_done}}

### Pricing Analysis and Willingness to Pay

{{pricing_analysis}}

---

## 5. Competitive Landscape

### Market Structure

{{market_structure}}

### Competitor Analysis

{{#competitor_analysis_1}}

#### Competitor 1

{{competitor_analysis_1}}
{{/competitor_analysis_1}}

{{#competitor_analysis_2}}

#### Competitor 2

{{competitor_analysis_2}}
{{/competitor_analysis_2}}

{{#competitor_analysis_3}}

#### Competitor 3

{{competitor_analysis_3}}
{{/competitor_analysis_3}}

{{#competitor_analysis_4}}

#### Competitor 4

{{competitor_analysis_4}}
{{/competitor_analysis_4}}

{{#competitor_analysis_5}}

#### Competitor 5

{{competitor_analysis_5}}
{{/competitor_analysis_5}}

### Competitive Positioning

{{competitive_positioning}}

---

## 6. Industry Analysis

### Porter's Five Forces Assessment

{{porters_five_forces}}

### Technology Adoption Lifecycle

{{adoption_lifecycle}}

### Value Chain Analysis

{{value_chain_analysis}}

---

## 7. Market Opportunities

### Identified Opportunities

{{market_opportunities}}

### Opportunity Prioritization Matrix

{{opportunity_prioritization}}

---

## 8. Strategic Recommendations

### Go-to-Market Strategy

{{gtm_strategy}}

#### Positioning Strategy

{{positioning_strategy}}

#### Target Segment Sequencing

{{segment_sequencing}}

#### Channel Strategy

{{channel_strategy}}

#### Pricing Strategy

{{pricing_recommendations}}

### Implementation Roadmap

{{implementation_roadmap}}

---

## 9. Risk Assessment

### Risk Analysis

{{risk_assessment}}

### Mitigation Strategies

{{mitigation_strategies}}

---

## 10. Financial Projections

{{#financial_projections}}
{{financial_projections}}
{{/financial_projections}}

---

## Appendices

### Appendix A: Data Sources and References

{{data_sources}}

### Appendix B: Detailed Calculations

{{detailed_calculations}}

### Appendix C: Additional Analysis

{{#appendices}}
{{appendices}}
{{/appendices}}

### Appendix D: Glossary of Terms

{{glossary}}

---

## References and Sources

**CRITICAL: All data in this report must be verifiable through the sources listed below.**

### Market Size and Growth Data Sources

{{sources_market_size}}

### Competitive Intelligence Sources

{{sources_competitive}}

### Customer Research Sources

{{sources_customer}}

### Industry Trends and Analysis Sources

{{sources_trends}}

### Additional References

{{sources_additional}}

### Source Quality Assessment

- **High Credibility (2+ corroborating sources):** {{high_confidence_count}} claims
- **Medium Credibility (single source):** {{medium_confidence_count}} claims
- **Low Credibility (needs verification):** {{low_confidence_count}} claims

**Note:** Any claim marked [Low Confidence] or [Single source] should be independently verified before making critical business decisions.

---

## Document Information

**Workflow:** BMad Market Research Workflow v1.0
**Generated:** {{date}}
**Next Review:** {{next_review_date}}
**Classification:** {{classification}}

### Research Quality Metrics

- **Data Freshness:** Current as of {{date}}
- **Source Reliability:** {{source_reliability_score}}
- **Confidence Level:** {{confidence_level}}
- **Total Sources Cited:** {{total_sources}}
- **Web Searches Conducted:** {{search_count}}

---

_This market research report was generated using the BMad Method Market Research Workflow, combining systematic analysis frameworks with real-time market intelligence gathering. All factual claims are backed by cited sources with verification dates._
245
bmad/bmm/workflows/1-analysis/research/template-technical.md
Normal file
@@ -0,0 +1,245 @@
# Technical Research Report: {{technical_question}}

**Date:** {{date}}
**Prepared by:** {{user_name}}
**Project Context:** {{project_context}}

---

## Executive Summary

{{recommendations}}

### Key Recommendation

**Primary Choice:** [Technology/Pattern Name]

**Rationale:** [2-3 sentence summary]

**Key Benefits:**

- [Benefit 1]
- [Benefit 2]
- [Benefit 3]

---

## 1. Research Objectives

### Technical Question

{{technical_question}}

### Project Context

{{project_context}}

### Requirements and Constraints

#### Functional Requirements

{{functional_requirements}}

#### Non-Functional Requirements

{{non_functional_requirements}}

#### Technical Constraints

{{technical_constraints}}

---

## 2. Technology Options Evaluated

{{technology_options}}

---

## 3. Detailed Technology Profiles

{{#tech_profile_1}}

### Option 1: [Technology Name]

{{tech_profile_1}}
{{/tech_profile_1}}

{{#tech_profile_2}}

### Option 2: [Technology Name]

{{tech_profile_2}}
{{/tech_profile_2}}

{{#tech_profile_3}}

### Option 3: [Technology Name]

{{tech_profile_3}}
{{/tech_profile_3}}

{{#tech_profile_4}}

### Option 4: [Technology Name]

{{tech_profile_4}}
{{/tech_profile_4}}

{{#tech_profile_5}}

### Option 5: [Technology Name]

{{tech_profile_5}}
{{/tech_profile_5}}

---

## 4. Comparative Analysis

{{comparative_analysis}}

### Weighted Analysis

**Decision Priorities:**
{{decision_priorities}}

{{weighted_analysis}}

---

## 5. Trade-offs and Decision Factors

{{use_case_fit}}

### Key Trade-offs

[Comparison of major trade-offs between top options]

---

## 6. Real-World Evidence

{{real_world_evidence}}

---

## 7. Architecture Pattern Analysis

{{#architecture_pattern_analysis}}
{{architecture_pattern_analysis}}
{{/architecture_pattern_analysis}}

---

## 8. Recommendations

{{recommendations}}

### Implementation Roadmap

1. **Proof of Concept Phase**
   - [POC objectives and timeline]

2. **Key Implementation Decisions**
   - [Critical decisions to make during implementation]

3. **Migration Path** (if applicable)
   - [Migration approach from current state]

4. **Success Criteria**
   - [How to validate the decision]

### Risk Mitigation

{{risk_mitigation}}

---

## 9. Architecture Decision Record (ADR)

{{architecture_decision_record}}

---

## 10. References and Resources

### Documentation

- [Links to official documentation]

### Benchmarks and Case Studies

- [Links to benchmarks and real-world case studies]

### Community Resources

- [Links to communities, forums, discussions]

### Additional Reading

- [Links to relevant articles, papers, talks]

---

## Appendices

### Appendix A: Detailed Comparison Matrix

[Full comparison table with all evaluated dimensions]

### Appendix B: Proof of Concept Plan

[Detailed POC plan if needed]

### Appendix C: Cost Analysis

[TCO analysis if performed]

---

## References and Sources

**CRITICAL: All technical claims, versions, and benchmarks must be verifiable through the sources below.**

### Official Documentation and Release Notes

{{sources_official_docs}}

### Performance Benchmarks and Comparisons

{{sources_benchmarks}}

### Community Experience and Reviews

{{sources_community}}

### Architecture Patterns and Best Practices

{{sources_architecture}}

### Additional Technical References

{{sources_additional}}

### Version Verification

- **Technologies Researched:** {{technology_count}}
- **Versions Verified ({{current_year}}):** {{verified_versions_count}}
- **Sources Requiring Update:** {{outdated_sources_count}}

**Note:** All version numbers were verified against current {{current_year}} sources. Versions change - always verify the latest stable release before implementation.

---

## Document Information

**Workflow:** BMad Research Workflow - Technical Research v2.0
**Generated:** {{date}}
**Research Type:** Technical/Architecture Research
**Next Review:** [Date for review/update]
**Total Sources Cited:** {{total_sources}}

---

_This technical research report was generated using the BMad Method Research Workflow, combining systematic technology evaluation frameworks with real-time research and analysis. All version numbers and technical claims are backed by current {{current_year}} sources._
44
bmad/bmm/workflows/1-analysis/research/workflow.yaml
Normal file
@@ -0,0 +1,44 @@
# Research Workflow - Multi-Type Research System
name: research
description: "Adaptive research workflow supporting multiple research types: market research, deep research prompt generation, technical/architecture evaluation, competitive intelligence, user research, and domain analysis"
author: "BMad"

# Critical variables from config
config_source: "{project-root}/bmad/bmm/config.yaml"
output_folder: "{config_source}:output_folder"
user_name: "{config_source}:user_name"
communication_language: "{config_source}:communication_language"
document_output_language: "{config_source}:document_output_language"
user_skill_level: "{config_source}:user_skill_level"
date: system-generated
current_year: system-generated
current_month: system-generated

# Research behavior - WEB RESEARCH IS DEFAULT
enable_web_research: true

# Source tracking and verification - CRITICAL FOR ACCURACY
require_citations: true
require_source_urls: true
minimum_sources_per_claim: 2
fact_check_critical_data: true

# Workflow components - ROUTER PATTERN
installed_path: "{project-root}/bmad/bmm/workflows/1-analysis/research"
instructions: "{installed_path}/instructions-router.md" # Router loads specific instruction sets
validation: "{installed_path}/checklist.md"

# Research type specific instructions (loaded by router)
instructions_market: "{installed_path}/instructions-market.md"
instructions_deep_prompt: "{installed_path}/instructions-deep-prompt.md"
instructions_technical: "{installed_path}/instructions-technical.md"

# Templates (loaded based on research type)
template_market: "{installed_path}/template-market.md"
template_deep_prompt: "{installed_path}/template-deep-prompt.md"
template_technical: "{installed_path}/template-technical.md"

# Output configuration (dynamic based on research type selected in router)
default_output_file: "{output_folder}/research-{{research_type}}-{{date}}.md"

standalone: true
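The `"{config_source}:key"` values in the workflow above are indirections: each is resolved by looking up `key` in the shared module config that `config_source` points to. A minimal sketch of a `bmad/bmm/config.yaml` that would satisfy those lookups — all values are illustrative placeholders, not defaults shipped with BMad:

```yaml
# bmad/bmm/config.yaml (illustrative values only)
output_folder: "{project-root}/docs"
user_name: "Jane Developer"
communication_language: "English"
document_output_language: "English"
user_skill_level: "intermediate"
```

With this config, `default_output_file` would resolve to something like `{project-root}/docs/research-technical-2025-06-01.md` once the router fills in `research_type` and `date`.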