feat: Add Deep Research System v4.2.0 - Autonomous web research capabilities (#380)

## Overview
Comprehensive implementation of the Deep Research framework, aligned with the DR Agent architecture, enabling autonomous, adaptive, and intelligent web research.

## Key Features

### 🔬 Deep Research Agent
- 15th specialized agent for comprehensive research orchestration
- Adaptive planning strategies: Planning-Only, Intent-Planning, Unified Intent-Planning
- Multi-hop reasoning with genealogy tracking (up to 5 hops)
- Self-reflective mechanisms with confidence scoring (0.0-1.0)
- Case-based learning for cross-session intelligence

### 🎯 New /sc:research Command
- Intelligent web research with depth control (quick/standard/deep/exhaustive)
- Parallel-first execution for optimal performance
- Domain filtering and time-based search options
- Automatic report generation in claudedocs/

### 🔍 Tavily MCP Integration
- 7th MCP server for real-time web search
- News search with time filtering
- Content extraction from search results
- Multi-round searching with iterative refinement
- Free tier available (API key required)

### 🎨 MODE_DeepResearch
- 7th behavioral mode for systematic investigation
- 6-phase workflow: Understand → Plan → TodoWrite → Execute → Track → Validate
- Evidence-based reasoning with citation management
- Parallel operation defaults for efficiency

## Technical Changes

### Framework Updates
- Updated agent count: 14 → 15 agents
- Updated mode count: 6 → 7 modes
- Updated MCP server count: 6 → 7 servers
- Updated command count: 24 → 25 commands

### Configuration
- Added RESEARCH_CONFIG.md for research settings
- Added deep_research_workflows.md with examples
- Standardized file naming conventions (UPPERCASE for Core)
- Removed multi-source investigation features for simplification

### Integration Points
- Enhanced MCP component with remote server support
- Added check_research_prerequisites() in environment.py
- Created verify_research_integration.sh script
- Updated all documentation guides
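A minimal sketch of what `check_research_prerequisites()` might verify, based only on the requirements listed in this commit (the actual implementation in `setup/utils/environment.py` may differ):

```python
import os
import shutil

def check_research_prerequisites() -> list[str]:
    """Illustrative sketch: report missing requirements for /sc:research.

    Checks mirror the Requirements section of this commit only:
    TAVILY_API_KEY plus Node.js and npm for the Tavily MCP server.
    """
    missing = []
    if not os.environ.get("TAVILY_API_KEY"):
        missing.append("TAVILY_API_KEY environment variable (free tier available)")
    if shutil.which("node") is None:
        missing.append("Node.js runtime for Tavily MCP execution")
    if shutil.which("npm") is None:
        missing.append("npm for launching the Tavily MCP server")
    return missing

# An empty list means research prerequisites are satisfied.
problems = check_research_prerequisites()
```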

## Requirements
- TAVILY_API_KEY environment variable (free tier available)
- Node.js and npm for Tavily MCP execution

## Documentation
- Complete user guide integration
- Workflow examples and best practices
- API configuration instructions
- Depth level explanations

🤖 Generated with Claude Code

Co-authored-by: moshe_anconina <moshe_a@ituran.com>
Co-authored-by: Claude <noreply@anthropic.com>
Moshe Anconina 2025-09-21 04:54:42 +03:00, committed by GitHub
parent e4f2f82aa9
commit f7cb0f7eb7
22 changed files with 2169 additions and 39 deletions


@@ -7,6 +7,37 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
## [4.2.0] - 2025-09-18
### Added
- **Deep Research System** - Complete implementation of autonomous web research capabilities
- New `/sc:research` command for intelligent web research with DR Agent architecture
- `deep-research-agent` - 15th specialized agent for research orchestration
- `MODE_DeepResearch` - 7th behavioral mode for research workflows
- Tavily MCP integration (7th MCP server) for real-time web search
- Research configuration system (`RESEARCH_CONFIG.md`)
- Comprehensive workflow examples (`deep_research_workflows.md`)
- Three planning strategies: Planning-Only, Intent-to-Planning, Unified Intent-Planning
- Multi-hop reasoning with genealogy tracking for complex queries
- Case-based reasoning for learning from past research patterns
### Changed
- Updated agent count from 14 to 15 (added deep-research-agent)
- Updated mode count from 6 to 7 (added MODE_DeepResearch)
- Updated MCP server count from 6 to 7 (added Tavily)
- Updated command count from 24 to 25 (added /sc:research)
- Enhanced MCP component with remote server support for Tavily
- Added `_install_remote_mcp_server` method to handle remote MCP configurations
### Technical
- Added Tavily to `server_docs_map` in `setup/components/mcp_docs.py`
- Implemented remote MCP server handler in `setup/components/mcp.py`
- Added `check_research_prerequisites()` function in `setup/utils/environment.py`
- Created verification script `scripts/verify_research_integration.sh`
### Requirements
- `TAVILY_API_KEY` environment variable for web search functionality
- Node.js and npm for Tavily MCP execution
## [4.1.4] - 2025-09-20
### Added
- Comprehensive flag documentation integrated into `/sc:help` command
@@ -19,7 +50,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Smart server merging (existing + selected + previously installed)
- Documentation cleanup: removed non-existent commands (sc:fix, sc:simple-pix, sc:update, sc:develop, sc:modernize, sc:simple-fix)
- CLI logic to allow mcp_docs installation without server selection
### Changed
- MCP component now supports true incremental installation
- mcp_docs component auto-detects and installs documentation for all detected servers


@@ -34,14 +34,19 @@ This guide documents how SuperClaude's Context-Oriented Configuration Framework
├── MCP_Playwright.md # Playwright MCP integration
├── MCP_Sequential.md # Sequential MCP integration
├── MCP_Serena.md # Serena MCP integration
├── MCP_Tavily.md # Tavily MCP integration
├── MCP_Zig.md # Zig MCP integration
├── MODE_Brainstorming.md # Collaborative discovery mode
├── MODE_Business_Panel.md # Business expert panel mode
├── MODE_DeepResearch.md # Deep research mode
├── MODE_Introspection.md # Transparent reasoning mode
├── MODE_Orchestration.md # Tool coordination mode
├── MODE_Task_Management.md # Task orchestration mode
├── MODE_Token_Efficiency.md # Compressed communication mode
├── agents/ # Domain specialist contexts (14 total)
├── agents/ # Domain specialist contexts (19 total)
│ ├── backend-architect.md # Backend expertise
│ ├── business-panel-experts.md # Business strategy panel
│ ├── deep-research-agent.md # Deep research expertise
│ ├── devops-architect.md # DevOps expertise
│ ├── frontend-architect.md # Frontend expertise
│ ├── learning-guide.md # Educational expertise
@@ -53,27 +58,34 @@ This guide documents how SuperClaude's Context-Oriented Configuration Framework
│ ├── root-cause-analyst.md # Problem diagnosis expertise
│ ├── security-engineer.md # Security expertise
│ ├── socratic-mentor.md # Educational expertise
│ ├── spec-panel-experts.md # Specification review panel
│ ├── system-architect.md # System design expertise
│ └── technical-writer.md # Documentation expertise
│ ├── technical-writer.md # Documentation expertise
│ ├── test-runner.md # Test execution expertise
│ └── wave-orchestrator.md # Wave orchestration patterns
└── commands/ # Workflow pattern contexts
└── sc/ # SuperClaude command namespace (21 total)
└── sc/ # SuperClaude command namespace (25 total)
├── analyze.md # Analysis patterns
├── brainstorm.md # Discovery patterns
├── build.md # Build patterns
├── business-panel.md # Business expert panel patterns
├── cleanup.md # Cleanup patterns
├── design.md # Design patterns
├── document.md # Documentation patterns
├── estimate.md # Estimation patterns
├── explain.md # Explanation patterns
├── git.md # Git workflow patterns
├── help.md # Help and command listing
├── implement.md # Implementation patterns
├── improve.md # Improvement patterns
├── index.md # Index patterns
├── load.md # Context loading patterns
├── reflect.md # Reflection patterns
├── research.md # Deep research patterns
├── save.md # Session persistence patterns
├── select-tool.md # Tool selection patterns
├── spawn.md # Multi-agent patterns
├── spec-panel.md # Specification review panel
├── task.md # Task management patterns
├── test.md # Testing patterns
├── troubleshoot.md # Troubleshooting patterns
@@ -112,9 +124,12 @@ The main `CLAUDE.md` file uses an import system to load multiple context files:
@MCP_Playwright.md # Playwright MCP integration
@MCP_Sequential.md # Sequential MCP integration
@MCP_Serena.md # Serena MCP integration
@MCP_Tavily.md # Tavily MCP integration
@MCP_Zig.md # Zig MCP integration
*CRITICAL*
@MODE_Brainstorming.md # Collaborative discovery mode
@MODE_Business_Panel.md # Business expert panel mode
@MODE_DeepResearch.md # Deep research mode
@MODE_Introspection.md # Transparent reasoning mode
@MODE_Task_Management.md # Task orchestration mode
@MODE_Orchestration.md # Tool coordination mode


@@ -1,6 +1,6 @@
# SuperClaude Agents Guide 🤖
SuperClaude provides 14 domain specialist agents that Claude Code can invoke for specialized expertise.
SuperClaude provides 15 domain specialist agents that Claude Code can invoke for specialized expertise.
## 🧪 Testing Agent Activation
@@ -243,6 +243,48 @@ Task Analysis →
**Works Best With**: system-architect (infrastructure planning), security-engineer (compliance), performance-engineer (monitoring)
---
### deep-research-agent 🔬
**Expertise**: Comprehensive research with adaptive strategies and multi-hop reasoning
**Auto-Activation**:
- Keywords: "research", "investigate", "discover", "explore", "find out", "search for", "latest", "current"
- Commands: `/sc:research` automatically activates this agent
- Context: Complex queries requiring thorough research, current information needs, fact-checking
- Complexity: Questions spanning multiple domains or requiring iterative exploration
**Capabilities**:
- **Adaptive Planning Strategies**: Planning (direct), Intent (clarify first), Unified (collaborative)
- **Multi-Hop Reasoning**: Up to 5 levels - entity expansion, temporal progression, conceptual deepening, causal chains
- **Self-Reflective Mechanisms**: Progress assessment after each major step with replanning triggers
- **Evidence Management**: Clear citations, relevance scoring, uncertainty acknowledgment
- **Tool Orchestration**: Parallel-first execution with Tavily (search), Playwright (JavaScript content), Sequential (reasoning)
- **Learning Integration**: Pattern recognition and strategy reuse via Serena memory
**Research Depth Levels**:
- **Quick**: Basic search, 1 hop, summary output
- **Standard**: Extended search, 2-3 hops, structured report (default)
- **Deep**: Comprehensive search, 3-4 hops, detailed analysis
- **Exhaustive**: Maximum depth, 5 hops, complete investigation
**Examples**:
1. **Technical Research**: `/sc:research "latest React Server Components patterns"` → Comprehensive technical research with implementation examples
2. **Market Analysis**: `/sc:research "AI coding assistants landscape 2024" --strategy unified` → Collaborative analysis with user input
3. **Academic Investigation**: `/sc:research "quantum computing breakthroughs" --depth exhaustive` → Comprehensive literature review with evidence chains
**Workflow Pattern** (6-Phase):
1. **Understand** (5-10%): Assess query complexity
2. **Plan** (10-15%): Select strategy and identify parallel opportunities
3. **TodoWrite** (5%): Create adaptive task hierarchy (3-15 tasks)
4. **Execute** (50-60%): Parallel searches and extractions
5. **Track** (Continuous): Monitor progress and confidence
6. **Validate** (10-15%): Verify evidence chains
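The phase budget above can be sketched as a simple allocation table (illustrative only; values are midpoints of the documented ranges, with the continuous Track phase absorbing the remainder):

```python
# Illustrative effort budget for the 6-phase research workflow.
# Percentages are midpoints of the ranges documented above;
# "Track" runs continuously and carries no fixed share.
PHASE_BUDGET = {
    "understand": 0.075,  # 5-10%: assess query complexity
    "plan":       0.125,  # 10-15%: select strategy, find parallelism
    "todowrite":  0.05,   # 5%: build the adaptive task hierarchy
    "execute":    0.55,   # 50-60%: parallel searches and extraction
    "validate":   0.125,  # 10-15%: verify evidence chains
}

# Remainder (~7.5%) is absorbed by continuous tracking.
assert abs(sum(PHASE_BUDGET.values()) - 0.925) < 1e-9
```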
**Output**: Reports saved to `claudedocs/research_[topic]_[timestamp].md`
**Works Best With**: system-architect (technical research), learning-guide (educational research), requirements-analyst (market research)
### Quality & Analysis Agents 🔍
### security-engineer 🔒
@@ -618,6 +660,7 @@ After applying agent fixes, test with:
| **Documentation** | "documentation", "readme", "API docs" | technical-writer |
| **Learning** | "explain", "tutorial", "beginner", "teaching" | learning-guide |
| **Requirements** | "requirements", "PRD", "specification" | requirements-analyst |
| **Research** | "research", "investigate", "latest", "current" | deep-research-agent |
### Command-Agent Mapping
@@ -631,6 +674,7 @@ After applying agent fixes, test with:
| `/sc:design` | system-architect | Domain architects, requirements-analyst |
| `/sc:test` | quality-engineer | security-engineer, performance-engineer |
| `/sc:explain` | learning-guide | technical-writer, domain specialists |
| `/sc:research` | deep-research-agent | Technical specialists, learning-guide |
### Effective Agent Combinations


@@ -1,6 +1,6 @@
# SuperClaude Commands Guide
SuperClaude provides 24 commands for Claude Code: `/sc:*` commands for workflows and `@agent-*` for specialists.
SuperClaude provides 25 commands for Claude Code: `/sc:*` commands for workflows and `@agent-*` for specialists.
## Command Types
@@ -106,7 +106,7 @@ python3 -m SuperClaude install --list-components | grep mcp
- [Essential Commands](#essential-commands) - Start here (8 core commands)
- [Common Workflows](#common-workflows) - Command combinations that work
- [Full Command Reference](#full-command-reference) - All 23 commands organized by category
- [Full Command Reference](#full-command-reference) - All 25 commands organized by category
- [Troubleshooting](#troubleshooting) - Common issues and solutions
- [Command Index](#command-index) - Find commands by category
@@ -133,6 +133,24 @@ python3 -m SuperClaude install --list-components | grep mcp
- Discovering available commands: `/sc:help`
- Getting a quick reminder of command names: `/sc:help`
### `/sc:research` - Deep Research Command
**Purpose**: Comprehensive web research with adaptive planning and intelligent search
**Syntax**: `/sc:research "[query]"` `[--depth quick|standard|deep|exhaustive] [--strategy planning|intent|unified]`
**Use Cases**:
- Technical research: `/sc:research "latest React 19 features" --depth deep`
- Market analysis: `/sc:research "AI coding assistant landscape 2024" --strategy unified`
- Academic investigation: `/sc:research "quantum computing breakthroughs" --depth exhaustive`
- Current events: `/sc:research "latest AI developments 2024"`
**Key Capabilities**:
- **6-Phase Workflow**: Understand → Plan → TodoWrite → Execute → Track → Validate
- **Adaptive Depth**: Quick (basic search), Standard (extended), Deep (comprehensive), Exhaustive (maximum depth)
- **Planning Strategies**: Planning (direct), Intent (clarify first), Unified (collaborative)
- **Parallel Execution**: Default parallel searches and extractions
- **Evidence Management**: Clear citations with relevance scoring
- **Output Standards**: Reports saved to `claudedocs/research_[topic]_[timestamp].md`
### `/sc:implement` - Feature Development
**Purpose**: Full-stack feature implementation with intelligent specialist routing
**Syntax**: `/sc:implement "feature description"` `[--type frontend|backend|fullstack] [--focus security|performance]`
@@ -277,6 +295,7 @@ python3 -m SuperClaude install --list-components | grep mcp
| Command | Purpose | Best For |
|---------|---------|----------|
| **analyze** | Code assessment | Quality audits, security reviews |
| **research** | Web research with intelligent search | Technical research, current events, market analysis |
| **business-panel** | Strategic analysis | Business decisions, competitive assessment |
| **spec-panel** | Specification review | Requirements validation, architecture analysis |
| **troubleshoot** | Problem diagnosis | Bug investigation, performance issues |


@@ -2,7 +2,7 @@
## Overview
MCP (Model Context Protocol) servers extend Claude Code's capabilities through specialized tools. SuperClaude integrates 6 MCP servers and provides Claude with instructions on when to activate them based on your tasks.
MCP (Model Context Protocol) servers extend Claude Code's capabilities through specialized tools. SuperClaude integrates 7 MCP servers and provides Claude with instructions on when to activate them based on your tasks.
### 🔍 Reality Check
- **What MCP servers are**: External Node.js processes that provide additional tools
@@ -17,6 +17,7 @@ MCP (Model Context Protocol) servers extend Claude Code's capabilities through s
- **playwright**: Browser automation and E2E testing
- **morphllm-fast-apply**: Pattern-based code transformations
- **serena**: Semantic code understanding and project memory
- **tavily**: Web search and real-time information retrieval
## Quick Start
@@ -32,6 +33,7 @@ MCP (Model Context Protocol) servers extend Claude Code's capabilities through s
| `test`, `e2e`, `browser` | **playwright** |
| Multi-file edits, refactoring | **morphllm-fast-apply** |
| Large projects, sessions | **serena** |
| `/sc:research`, `latest`, `current` | **tavily** |
## Server Details
@@ -119,6 +121,36 @@ export MORPH_API_KEY="your_key_here"
/sc:refactor "extract UserService" --serena
```
### tavily 🔍
**Purpose**: Web search and real-time information retrieval for research
**Triggers**: `/sc:research` commands, "latest" information requests, current events, fact-checking
**Requirements**: Node.js 16+, TAVILY_API_KEY (free tier available at https://app.tavily.com)
```bash
# Automatic activation
/sc:research "latest AI developments 2024"
# → Performs intelligent web research
# Manual activation
/sc:analyze "market trends" --tavily
# API key setup (get free key at https://app.tavily.com)
export TAVILY_API_KEY="tvly-your_api_key_here"
```
**Capabilities:**
- **Web Search**: Comprehensive searches with ranking and filtering
- **News Search**: Time-filtered current events and updates
- **Content Extraction**: Full-text extraction from search results
- **Domain Filtering**: Include/exclude specific domains
- **Multi-Hop Research**: Iterative searches based on findings (up to 5 hops)
**Research Depth Control:**
- `--depth quick`: 5-10 sources, basic synthesis
- `--depth standard`: 10-20 sources, structured report (default)
- `--depth deep`: 20-40 sources, comprehensive analysis
- `--depth exhaustive`: 40+ sources, academic-level research
## Configuration
**MCP Configuration File (`~/.claude.json`):**
@@ -150,6 +182,11 @@ export MORPH_API_KEY="your_key_here"
"serena": {
"command": "uvx",
"args": ["--from", "git+https://github.com/oraios/serena", "serena", "start-mcp-server", "--context", "ide-assistant"]
},
"tavily": {
"command": "npx",
"args": ["-y", "@tavily/mcp@latest"],
"env": {"TAVILY_API_KEY": "${TAVILY_API_KEY}"}
}
}
}
@@ -211,16 +248,21 @@ export TWENTYFIRST_API_KEY="your_key_here"
# For Morphllm server (required for bulk transformations)
export MORPH_API_KEY="your_key_here"
# For Tavily server (required for web search - free tier available)
export TAVILY_API_KEY="tvly-your_key_here"
# Add to shell profile for persistence
echo 'export TWENTYFIRST_API_KEY="your_key"' >> ~/.bashrc
echo 'export MORPH_API_KEY="your_key"' >> ~/.bashrc
echo 'export TAVILY_API_KEY="your_key"' >> ~/.bashrc
```
**Environment Variable Usage:**
- ✅ `TWENTYFIRST_API_KEY` - Required for Magic MCP server functionality
- ✅ `MORPH_API_KEY` - Required for Morphllm MCP server functionality
- ✅ `TAVILY_API_KEY` - Required for Tavily MCP server functionality (free tier available)
- ❌ Other env vars in docs - Examples only, not used by framework
- 📝 Both are paid service API keys, framework works without them
- 📝 Magic and Morphllm are paid services, Tavily has free tier, framework works without them
## Server Combinations
@@ -238,6 +280,8 @@ echo 'export MORPH_API_KEY="your_key"' >> ~/.bashrc
- **Web Development**: magic + context7 + playwright
- **Enterprise Refactoring**: serena + morphllm + sequential-thinking
- **Complex Analysis**: sequential-thinking + context7 + serena
- **Deep Research**: tavily + sequential-thinking + serena + playwright
- **Current Events**: tavily + context7 + sequential-thinking
## Integration
@@ -245,11 +289,13 @@ echo 'export MORPH_API_KEY="your_key"' >> ~/.bashrc
- Analysis commands automatically use Sequential + Serena
- Implementation commands use Magic + Context7
- Testing commands use Playwright + Sequential
- Research commands use Tavily + Sequential + Playwright
**With Behavioral Modes:**
- Brainstorming Mode: Sequential for discovery
- Task Management: Serena for persistence
- Orchestration Mode: Optimal server selection
- Deep Research Mode: Tavily + Sequential + Playwright coordination
**Performance Control:**
- Automatic resource management based on system load


@@ -9,6 +9,7 @@ Test modes by using `/sc:` commands - they activate automatically based on task
|------|---------|---------------|---------------|---------------|
| **🧠 Brainstorming** | Interactive discovery | "brainstorm", "maybe", vague requests | Socratic questions, requirement elicitation | New project planning, unclear requirements |
| **🔍 Introspection** | Meta-cognitive analysis | Error recovery, "analyze reasoning" | Transparent thinking markers (🤔, 🎯, 💡) | Debugging, learning, optimization |
| **🔬 Deep Research** | Systematic investigation mindset | `/sc:research`, investigation keywords | 6-phase workflow, evidence-based reasoning | Technical research, current events, market analysis |
| **📋 Task Management** | Complex coordination | >3 steps, >2 directories | Phase breakdown, memory persistence | Multi-step operations, project management |
| **🎯 Orchestration** | Intelligent tool selection | Multi-tool ops, high resource usage | Optimal tool routing, parallel execution | Complex analysis, performance optimization |
| **⚡ Token Efficiency** | Compressed communication | High context usage, `--uc` flag | Symbol systems, estimated 30-50% token reduction | Resource constraints, large operations |
@@ -120,6 +121,60 @@ Introspective Approach:
---
### 🔬 Deep Research Mode - Systematic Investigation Mindset
**Purpose**: Research mindset for systematic investigation and evidence-based reasoning.
**Auto-Activation Triggers:**
- `/sc:research` command invocation
- Research-related keywords: investigate, explore, discover, analyze
- Questions requiring current information beyond knowledge cutoff
- Complex research requirements
- Manual flag: `--research`
**Behavioral Modifications:**
- **Thinking Style**: Systematic over casual, evidence over assumption, progressive depth exploration
- **Communication**: Lead with confidence levels, provide inline citations, acknowledge uncertainties
- **Priority Shifts**: Completeness over speed, accuracy over speculation, verification over assumption
- **Process Adaptations**: Always create investigation plans, default to parallel operations, maintain evidence chains
**6-Phase Research Workflow:**
- 📋 **Understand** (5-10%): Assess query complexity and requirements
- 📝 **Plan** (10-15%): Select strategy (planning/intent/unified) and identify parallelization
- ✅ **TodoWrite** (5%): Create adaptive task hierarchy (3-15 tasks based on complexity)
- 🔄 **Execute** (50-60%): Parallel-first searches and smart extraction routing
- 📊 **Track** (Continuous): Monitor progress and update confidence scores
- ✓ **Validate** (10-15%): Verify evidence chains and ensure completeness
**Example Experience:**
```
Standard Mode: "Here are some search results about quantum computing..."
Deep Research Mode:
"📊 Research Plan: Quantum computing breakthroughs
✓ TodoWrite: Created 8 research tasks
🔄 Executing parallel searches across domains
📈 Confidence: 0.82 across 15 verified sources
📝 Report saved: claudedocs/research_quantum_[timestamp].md"
```
#### Quality Standards
- [ ] Minimum 2 sources per claim with inline citations
- [ ] Confidence scoring (0.0-1.0) for all findings
- [ ] Parallel execution by default for independent operations
- [ ] Reports saved to claudedocs/ with proper structure
- [ ] Clear methodology and evidence presentation
**Verify:** `/sc:research "test topic"` should create TodoWrite and execute systematically
**Test:** All research should include confidence scores and citations
**Check:** Reports should be saved to claudedocs/ automatically
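One plausible way to turn per-source credibility into the required 0.0-1.0 claim confidence (the framework's actual scoring rule is not specified here; this sketch only enforces the minimum-two-sources standard and rewards independent corroboration):

```python
def claim_confidence(source_scores: list[float]) -> float:
    """Hypothetical aggregation of per-source credibility (0.0-1.0)
    into a claim-level confidence score.

    Rule (an assumption, not the framework's documented formula):
    a claim needs at least two sources, and each independent source
    multiplicatively reduces residual doubt.
    """
    if len(source_scores) < 2:          # minimum 2 sources per claim
        return 0.0
    doubt = 1.0
    for s in source_scores:
        doubt *= (1.0 - max(0.0, min(1.0, s)))  # clip to [0, 1]
    return round(1.0 - doubt, 2)

# Two moderately credible sources beat one strong source:
claim_confidence([0.7, 0.6])   # → 0.88
claim_confidence([0.9])        # → 0.0 (fails the 2-source minimum)
```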
**Works Best With:**
- **→ Task Management**: Research planning with TodoWrite integration
- **→ Orchestration**: Parallel Tavily/Playwright coordination
- **Manual Override**: Use `--depth` and `--strategy` for fine control
---
### 📋 Task Management Mode - Complex Coordination
**Purpose**: Hierarchical task organization with session persistence for multi-step operations.

README.md

@@ -14,7 +14,7 @@
<a href="https://github.com/SuperClaude-Org/SuperQwen_Framework" target="_blank">
<img src="https://img.shields.io/badge/Try-SuperQwen_Framework-orange" alt="Try SuperQwen Framework"/>
</a>
<img src="https://img.shields.io/badge/version-4.1.4-blue" alt="Version">
<img src="https://img.shields.io/badge/version-4.2.0-blue" alt="Version">
<img src="https://img.shields.io/badge/License-MIT-yellow.svg" alt="License">
<img src="https://img.shields.io/badge/PRs-welcome-brightgreen.svg" alt="PRs Welcome">
</p>
@@ -61,7 +61,7 @@
| **Commands** | **Agents** | **Modes** | **MCP Servers** |
|:------------:|:----------:|:---------:|:---------------:|
| **24** | **14** | **6** | **6** |
| **25** | **15** | **7** | **7** |
| Slash Commands | Specialized AI | Behavioral | Integrations |
Use the new `/sc:help` command to see a full list of all available commands.
@@ -204,7 +204,8 @@ pip install --break-system-packages SuperClaude
<td width="50%">
### 🤖 **Smarter Agent System**
**14 specialized agents** with domain expertise:
**15 specialized agents** with domain expertise:
- Deep Research agent for autonomous web research
- Security engineer catches real vulnerabilities
- Frontend architect understands UI patterns
- Automatic coordination based on context
@@ -216,7 +217,7 @@ pip install --break-system-packages SuperClaude
### 📝 **Improved Namespace**
**`/sc:` prefix** for all commands:
- No conflicts with custom commands
- 23 commands covering full lifecycle
- 25 commands covering full lifecycle
- From brainstorming to deployment
- Clean, organized command structure
@@ -226,21 +227,23 @@ pip install --break-system-packages SuperClaude
<td width="50%">
### 🔧 **MCP Server Integration**
**6 powerful servers** working together:
**7 powerful servers** working together:
- **Context7** → Up-to-date documentation
- **Sequential** → Complex analysis
- **Magic** → UI component generation
- **Playwright** → Browser testing
- **Morphllm** → Bulk transformations
- **Serena** → Session persistence
- **Tavily** → Web search for deep research
</td>
<td width="50%">
### 🎯 **Behavioral Modes**
**6 adaptive modes** for different contexts:
**7 adaptive modes** for different contexts:
- **Brainstorming** → Asks right questions
- **Business Panel** → Multi-expert strategic analysis
- **Deep Research** → Autonomous web research
- **Orchestration** → Efficient tool coordination
- **Token-Efficiency** → 30-50% context savings
- **Task Management** → Systematic organization
@@ -278,6 +281,98 @@ pip install --break-system-packages SuperClaude
<div align="center">
## 🔬 **Deep Research Capabilities**
### **Autonomous Web Research Aligned with DR Agent Architecture**
SuperClaude v4.2 introduces comprehensive Deep Research capabilities, enabling autonomous, adaptive, and intelligent web research.
<table>
<tr>
<td width="50%">
### 🎯 **Adaptive Planning**
**Three intelligent strategies:**
- **Planning-Only**: Direct execution for clear queries
- **Intent-Planning**: Clarification for ambiguous requests
- **Unified**: Collaborative plan refinement (default)
</td>
<td width="50%">
### 🔄 **Multi-Hop Reasoning**
**Up to 5 iterative searches:**
- Entity expansion (Paper → Authors → Works)
- Concept deepening (Topic → Details → Examples)
- Temporal progression (Current → Historical)
- Causal chains (Effect → Cause → Prevention)
</td>
</tr>
<tr>
<td width="50%">
### 📊 **Quality Scoring**
**Confidence-based validation:**
- Source credibility assessment (0.0-1.0)
- Coverage completeness tracking
- Synthesis coherence evaluation
- Minimum threshold: 0.6, Target: 0.8
</td>
<td width="50%">
### 🧠 **Case-Based Learning**
**Cross-session intelligence:**
- Pattern recognition and reuse
- Strategy optimization over time
- Successful query formulations saved
- Performance improvement tracking
</td>
</tr>
</table>
### **Research Command Usage**
```bash
# Basic research with automatic depth
/sc:research "latest AI developments 2024"
# Controlled research depth
/sc:research "quantum computing breakthroughs" --depth exhaustive
# Specific strategy selection
/sc:research "market analysis" --strategy planning-only
# Domain-filtered research
/sc:research "React patterns" --domains "reactjs.org,github.com"
```
### **Research Depth Levels**
| Depth | Sources | Hops | Time | Best For |
|:-----:|:-------:|:----:|:----:|----------|
| **Quick** | 5-10 | 1 | ~2min | Quick facts, simple queries |
| **Standard** | 10-20 | 3 | ~5min | General research (default) |
| **Deep** | 20-40 | 4 | ~8min | Comprehensive analysis |
| **Exhaustive** | 40+ | 5 | ~10min | Academic-level research |
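If the presets in the table above were held as data, they might look like this (hypothetical internal representation; field names are assumptions):

```python
# Depth presets mirroring the table above: (min, max) source counts
# and the hop ceiling per depth level. None = no upper bound.
DEPTH_PRESETS = {
    "quick":      {"sources": (5, 10),    "hops": 1},
    "standard":   {"sources": (10, 20),   "hops": 3},
    "deep":       {"sources": (20, 40),   "hops": 4},
    "exhaustive": {"sources": (40, None), "hops": 5},
}

def hop_limit(depth: str) -> int:
    """Return the multi-hop ceiling for a given --depth value."""
    return DEPTH_PRESETS[depth]["hops"]
```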
### **Integrated Tool Orchestration**
The Deep Research system intelligently coordinates multiple tools:
- **Tavily MCP**: Primary web search and discovery
- **Playwright MCP**: Complex content extraction
- **Sequential MCP**: Multi-step reasoning and synthesis
- **Serena MCP**: Memory and learning persistence
- **Context7 MCP**: Technical documentation lookup
</div>
---
<div align="center">
## 📚 **Documentation**
### **Complete Guide to SuperClaude**
@@ -302,19 +397,19 @@ pip install --break-system-packages SuperClaude
<td valign="top">
- 🎯 [**Commands Reference**](Docs/User-Guide/commands.md)
*All 23 slash commands*
*All 25 slash commands*
- 🤖 [**Agents Guide**](Docs/User-Guide/agents.md)
*14 specialized agents*
*15 specialized agents*
- 🎨 [**Behavioral Modes**](Docs/User-Guide/modes.md)
*5 adaptive modes*
*7 adaptive modes*
- 🚩 [**Flags Guide**](Docs/User-Guide/flags.md)
*Control behaviors*
- 🔧 [**MCP Servers**](Docs/User-Guide/mcp-servers.md)
*6 server integrations*
*7 server integrations*
- 💼 [**Session Management**](Docs/User-Guide/session-management.md)
*Save & restore state*


@@ -0,0 +1,185 @@
---
name: deep-research-agent
description: Specialist for comprehensive research with adaptive strategies and intelligent exploration
category: analysis
---
# Deep Research Agent
## Triggers
- /sc:research command activation
- Complex investigation requirements
- Complex information synthesis needs
- Academic research contexts
- Real-time information requests
## Behavioral Mindset
Think like a research scientist crossed with an investigative journalist. Apply systematic methodology, follow evidence chains, question sources critically, and synthesize findings coherently. Adapt your approach based on query complexity and information availability.
## Core Capabilities
### Adaptive Planning Strategies
**Planning-Only** (Simple/Clear Queries)
- Direct execution without clarification
- Single-pass investigation
- Straightforward synthesis
**Intent-Planning** (Ambiguous Queries)
- Generate clarifying questions first
- Refine scope through interaction
- Iterative query development
**Unified Planning** (Complex/Collaborative)
- Present investigation plan
- Seek user confirmation
- Adjust based on feedback
### Multi-Hop Reasoning Patterns
**Entity Expansion**
- Person → Affiliations → Related work
- Company → Products → Competitors
- Concept → Applications → Implications
**Temporal Progression**
- Current state → Recent changes → Historical context
- Event → Causes → Consequences → Future implications
**Conceptual Deepening**
- Overview → Details → Examples → Edge cases
- Theory → Practice → Results → Limitations
**Causal Chains**
- Observation → Immediate cause → Root cause
- Problem → Contributing factors → Solutions
Maximum hop depth: 5 levels
Track hop genealogy for coherence
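Genealogy tracking can be sketched as a small parent-linked tree that enforces the 5-hop ceiling (illustrative data structure, not the agent's actual internals):

```python
from dataclasses import dataclass, field

MAX_HOPS = 5  # maximum hop depth documented above

@dataclass
class Hop:
    """Illustrative genealogy node for multi-hop reasoning: each
    follow-up query records its parent so any finding can be traced
    back to the original question."""
    query: str
    parent: "Hop | None" = None
    children: list["Hop"] = field(default_factory=list)

    @property
    def depth(self) -> int:
        return 0 if self.parent is None else self.parent.depth + 1

    def expand(self, query: str) -> "Hop | None":
        if self.depth + 1 > MAX_HOPS:   # enforce the 5-hop ceiling
            return None
        child = Hop(query, parent=self)
        self.children.append(child)
        return child

    def lineage(self) -> list[str]:
        """Query chain from the root question down to this hop."""
        node, chain = self, []
        while node is not None:
            chain.append(node.query)
            node = node.parent
        return chain[::-1]
```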
### Self-Reflective Mechanisms
**Progress Assessment**
After each major step:
- Have I addressed the core question?
- What gaps remain?
- Is my confidence improving?
- Should I adjust strategy?
**Quality Monitoring**
- Source credibility check
- Information consistency verification
- Bias detection and balance
- Completeness evaluation
**Replanning Triggers**
- Confidence below 60%
- Contradictory information >30%
- Dead ends encountered
- Time/resource constraints
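The first three triggers reduce to a one-line check (sketch; time/resource constraints would need their own accounting):

```python
def should_replan(confidence: float, contradiction_ratio: float,
                  dead_end: bool = False) -> bool:
    """Sketch of the documented replanning triggers: confidence below
    60%, more than 30% contradictory information, or a dead end."""
    return dead_end or confidence < 0.60 or contradiction_ratio > 0.30
```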
### Evidence Management
**Result Evaluation**
- Assess information relevance
- Check for completeness
- Identify gaps in knowledge
- Note limitations clearly
**Citation Requirements**
- Provide sources when available
- Use inline citations for clarity
- Note when information is uncertain
### Tool Orchestration
**Search Strategy**
1. Broad initial searches (Tavily)
2. Identify key sources
3. Deep extraction as needed
4. Follow interesting leads
**Extraction Routing**
- Static HTML → Tavily extraction
- JavaScript content → Playwright
- Technical docs → Context7
- Local context → Native tools
**Parallel Optimization**
- Batch similar searches
- Concurrent extractions
- Distributed analysis
- Never sequential without reason
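The parallel-first rule might be sketched as batched concurrent execution. This is illustrative, not framework code; `run_search` is a placeholder for any awaitable search call:

```python
import asyncio

async def batch_searches(queries, run_search, batch_size=5):
    """Run independent searches concurrently in batches of `batch_size`,
    mirroring the parallel-first rule: sequential only for dependencies."""
    results = []
    for start in range(0, len(queries), batch_size):
        batch = queries[start:start + batch_size]
        # gather() preserves input order, so results stay aligned with queries
        results.extend(await asyncio.gather(*(run_search(q) for q in batch)))
    return results
```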
### Learning Integration
**Pattern Recognition**
- Track successful query formulations
- Note effective extraction methods
- Identify reliable source types
- Learn domain-specific patterns
**Memory Usage**
- Check for similar past research
- Apply successful strategies
- Store valuable findings
- Build knowledge over time
## Research Workflow
### Discovery Phase
- Map information landscape
- Identify authoritative sources
- Detect patterns and themes
- Find knowledge boundaries
### Investigation Phase
- Deep dive into specifics
- Cross-reference information
- Resolve contradictions
- Extract insights
### Synthesis Phase
- Build coherent narrative
- Create evidence chains
- Identify remaining gaps
- Generate recommendations
### Reporting Phase
- Structure for audience
- Add proper citations
- Include confidence levels
- Provide clear conclusions
## Quality Standards
### Information Quality
- Verify key claims when possible
- Recency preference for current topics
- Assess information reliability
- Bias detection and mitigation
### Synthesis Requirements
- Clear separation of fact and interpretation
- Transparent contradiction handling
- Explicit confidence statements
- Traceable reasoning chains
### Report Structure
- Executive summary
- Methodology description
- Key findings with evidence
- Synthesis and analysis
- Conclusions and recommendations
- Complete source list
## Performance Optimization
- Cache search results
- Reuse successful patterns
- Prioritize high-value sources
- Balance depth with time
## Boundaries
**Excel at**: Current events, technical research, intelligent search, evidence-based analysis
**Limitations**: No paywall bypass, no private data access, no speculation without evidence


@@ -0,0 +1,103 @@
---
name: research
description: Deep web research with adaptive planning and intelligent search
category: command
complexity: advanced
mcp-servers: [tavily, sequential, playwright, serena]
personas: [deep-research-agent]
---
# /sc:research - Deep Research Command
> **Context Framework Note**: This command activates comprehensive research capabilities with adaptive planning, multi-hop reasoning, and evidence-based synthesis.
## Triggers
- Research questions beyond knowledge cutoff
- Complex research questions
- Current events and real-time information
- Academic or technical research requirements
- Market analysis and competitive intelligence
## Context Trigger Pattern
```
/sc:research "[query]" [--depth quick|standard|deep|exhaustive] [--strategy planning|intent|unified]
```
## Behavioral Flow
### 1. Understand (5-10% effort)
- Assess query complexity and ambiguity
- Identify required information types
- Determine resource requirements
- Define success criteria
### 2. Plan (10-15% effort)
- Select planning strategy based on complexity
- Identify parallelization opportunities
- Generate research question decomposition
- Create investigation milestones
### 3. TodoWrite (5% effort)
- Create adaptive task hierarchy
- Scale tasks to query complexity (3-15 tasks)
- Establish task dependencies
- Set progress tracking
### 4. Execute (50-60% effort)
- **Parallel-first searches**: Always batch similar queries
- **Smart extraction**: Route by content complexity
- **Multi-hop exploration**: Follow entity and concept chains
- **Evidence collection**: Track sources and confidence
### 5. Track (Continuous)
- Monitor TodoWrite progress
- Update confidence scores
- Log successful patterns
- Identify information gaps
### 6. Validate (10-15% effort)
- Verify evidence chains
- Check source credibility
- Resolve contradictions
- Ensure completeness
## Key Patterns
### Parallel Execution
- Batch all independent searches
- Run concurrent extractions
- Only sequential for dependencies
### Evidence Management
- Track search results
- Provide clear citations when available
- Note uncertainties explicitly
### Adaptive Depth
- **Quick**: Basic search, 1 hop, summary output
- **Standard**: Extended search, 2-3 hops, structured report
- **Deep**: Comprehensive search, 3-4 hops, detailed analysis
- **Exhaustive**: Maximum depth, 5 hops, complete investigation
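One way to model these profiles is a lookup table. A sketch only; the dictionary mirrors the bullets above and is not an actual framework data structure:

```python
DEPTH_PROFILES = {
    # --depth flag -> hop budget, search iterations, output style
    "quick":      {"max_hops": 1, "iterations": 1, "output": "summary"},
    "standard":   {"max_hops": 3, "iterations": 2, "output": "structured report"},
    "deep":       {"max_hops": 4, "iterations": 3, "output": "detailed analysis"},
    "exhaustive": {"max_hops": 5, "iterations": 5, "output": "complete investigation"},
}

def resolve_depth(flag="standard"):
    """Map a --depth flag to its profile, falling back to standard."""
    return DEPTH_PROFILES.get(flag, DEPTH_PROFILES["standard"])
```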
## MCP Integration
- **Tavily**: Primary search and extraction engine
- **Sequential**: Complex reasoning and synthesis
- **Playwright**: JavaScript-heavy content extraction
- **Serena**: Research session persistence
## Output Standards
- Save reports to `claudedocs/research_[topic]_[timestamp].md`
- Include executive summary
- Provide confidence levels
- List all sources with citations
## Examples
```
/sc:research "latest developments in quantum computing 2024"
/sc:research "competitive analysis of AI coding assistants" --depth deep
/sc:research "best practices for distributed systems" --strategy unified
```
## Boundaries
**Will**: Current information, intelligent search, evidence-based analysis
**Won't**: Make claims without sources, skip validation, access restricted content


@@ -0,0 +1,446 @@
# Deep Research Configuration
## Default Settings
```yaml
research_defaults:
  planning_strategy: unified
  max_hops: 5
  confidence_threshold: 0.7
  memory_enabled: true
  parallelization: true
  parallel_first: true  # MANDATORY DEFAULT
  sequential_override_requires_justification: true  # NEW

parallel_execution_rules:
  DEFAULT_MODE: PARALLEL  # EMPHASIZED
  mandatory_parallel:
    - "Multiple search queries"
    - "Batch URL extractions"
    - "Independent analyses"
    - "Non-dependent hops"
    - "Result processing"
    - "Information extraction"
  sequential_only_with_justification:
    - reason: "Explicit dependency"
      example: "Hop N requires Hop N-1 results"
    - reason: "Resource constraint"
      example: "API rate limit reached"
    - reason: "User requirement"
      example: "User requests sequential for debugging"

parallel_optimization:
  batch_sizes:
    searches: 5
    extractions: 3
    analyses: 2
  intelligent_grouping:
    by_domain: true
    by_complexity: true
    by_resource: true

planning_strategies:
  planning_only:
    clarification: false
    user_confirmation: false
    execution: immediate
  intent_planning:
    clarification: true
    max_questions: 3
    execution: after_clarification
  unified:
    clarification: optional
    plan_presentation: true
    user_feedback: true
    execution: after_confirmation

hop_configuration:
  max_depth: 5
  timeout_per_hop: 60s
  parallel_hops: true
  loop_detection: true
  genealogy_tracking: true

confidence_scoring:
  relevance_weight: 0.5
  completeness_weight: 0.5
  minimum_threshold: 0.6
  target_threshold: 0.8

self_reflection:
  frequency: after_each_hop
  triggers:
    - confidence_below_threshold
    - contradictions_detected
    - time_elapsed_percentage: 80
    - user_intervention
  actions:
    - assess_quality
    - identify_gaps
    - consider_replanning
    - adjust_strategy

memory_management:
  case_based_reasoning: true
  pattern_learning: true
  session_persistence: true
  cross_session_learning: true
  retention_days: 30

tool_coordination:
  discovery_primary: tavily
  extraction_smart_routing: true
  reasoning_engine: sequential
  memory_backend: serena
  parallel_tool_calls: true

quality_gates:
  planning_gate:
    required_elements: [objectives, strategy, success_criteria]
  execution_gate:
    min_confidence: 0.6
  synthesis_gate:
    coherence_required: true
    clarity_required: true

extraction_settings:
  scraping_strategy: selective
  screenshot_capture: contextual
  authentication_handling: ethical
  javascript_rendering: auto_detect
  timeout_per_page: 15s
```
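Since these settings live inside fenced blocks in RESEARCH_CONFIG.md, a consumer would first pull the YAML out of the markdown. A minimal sketch (feed each extracted block to PyYAML's `yaml.safe_load`, if available, to get a settings dict):

```python
import re

def extract_yaml_blocks(markdown_text):
    """Return the raw contents of each ```yaml fence in a config markdown
    file, in document order. Parse a block with yaml.safe_load (PyYAML)
    to turn it into a dict of settings."""
    return re.findall(r"```yaml\n(.*?)```", markdown_text, flags=re.DOTALL)
```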
## Performance Optimizations
```yaml
optimization_strategies:
  caching:
    - Cache Tavily search results: 1 hour
    - Cache Playwright extractions: 24 hours
    - Cache Sequential analysis: 1 hour
    - Reuse case patterns: always
  parallelization:
    - Parallel searches: max 5
    - Parallel extractions: max 3
    - Parallel analysis: max 2
    - Tool call batching: true
  resource_limits:
    - Max time per research: 10 minutes
    - Max search iterations: 10
    - Max hops: 5
    - Max memory per session: 100MB
```
## Strategy Selection Rules
```yaml
strategy_selection:
  planning_only:
    indicators:
      - Clear, specific query
      - Technical documentation request
      - Well-defined scope
      - No ambiguity detected
  intent_planning:
    indicators:
      - Ambiguous terms present
      - Broad topic area
      - Multiple possible interpretations
      - User expertise unknown
  unified:
    indicators:
      - Complex multi-faceted query
      - User collaboration beneficial
      - Iterative refinement expected
      - High-stakes research
```
## Source Credibility Matrix
```yaml
source_credibility:
  tier_1_sources:
    score: 0.9-1.0
    types:
      - Academic journals
      - Government publications
      - Official documentation
      - Peer-reviewed papers
  tier_2_sources:
    score: 0.7-0.9
    types:
      - Established media
      - Industry reports
      - Expert blogs
      - Technical forums
  tier_3_sources:
    score: 0.5-0.7
    types:
      - Community resources
      - User documentation
      - Social media (verified)
      - Wikipedia
  tier_4_sources:
    score: 0.3-0.5
    types:
      - User forums
      - Social media (unverified)
      - Personal blogs
      - Comments sections
```
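A minimal scoring helper based on the matrix. The tier midpoints and the source-type keys are illustrative assumptions; the matrix above defines ranges, not exact values:

```python
CREDIBILITY_TIERS = {
    # midpoint of each tier's score range from the matrix above
    "academic_journal": 0.95, "government": 0.95, "official_docs": 0.95,
    "established_media": 0.8, "industry_report": 0.8, "expert_blog": 0.8,
    "community": 0.6, "wikipedia": 0.6, "social_verified": 0.6,
    "forum": 0.4, "social_unverified": 0.4, "personal_blog": 0.4,
}

def credibility_score(source_type, default=0.4):
    """Look up a tier midpoint; unknown source types fall to tier 4."""
    return CREDIBILITY_TIERS.get(source_type, default)
```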
## Depth Configurations
```yaml
research_depth_profiles:
  quick:
    max_sources: 10
    max_hops: 1
    iterations: 1
    time_limit: 2 minutes
    confidence_target: 0.6
    extraction: tavily_only
  standard:
    max_sources: 20
    max_hops: 3
    iterations: 2
    time_limit: 5 minutes
    confidence_target: 0.7
    extraction: selective
  deep:
    max_sources: 40
    max_hops: 4
    iterations: 3
    time_limit: 8 minutes
    confidence_target: 0.8
    extraction: comprehensive
  exhaustive:
    max_sources: 50+
    max_hops: 5
    iterations: 5
    time_limit: 10 minutes
    confidence_target: 0.9
    extraction: all_sources
```
## Multi-Hop Patterns
```yaml
hop_patterns:
  entity_expansion:
    description: "Explore entities found in previous hop"
    example: "Paper → Authors → Other works → Collaborators"
    max_branches: 3
  concept_deepening:
    description: "Drill down into concepts"
    example: "Topic → Subtopics → Details → Examples"
    max_depth: 4
  temporal_progression:
    description: "Follow chronological development"
    example: "Current → Recent → Historical → Origins"
    direction: backward
  causal_chain:
    description: "Trace cause and effect"
    example: "Effect → Immediate cause → Root cause → Prevention"
    validation: required
```
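Genealogy tracking and loop detection from `hop_configuration` could look like the following sketch; the class and method names are hypothetical, not framework APIs:

```python
class HopTracker:
    """Track hop genealogy and detect loops (max_depth mirrors max_hops: 5)."""

    def __init__(self, max_depth=5):
        self.max_depth = max_depth
        self.parents = {}    # hop topic -> parent topic (None for roots)
        self.visited = set()

    def depth(self, topic):
        """Length of the genealogy chain ending at `topic` (0 for None)."""
        d = 0
        while topic is not None:
            d += 1
            topic = self.parents.get(topic)
        return d

    def try_hop(self, topic, parent=None):
        """Record a hop; refuse revisits (loops) and over-deep chains."""
        if topic in self.visited or self.depth(parent) + 1 > self.max_depth:
            return False
        self.visited.add(topic)
        self.parents[topic] = parent
        return True
```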
## Extraction Routing Rules
```yaml
extraction_routing:
  use_tavily:
    conditions:
      - Static HTML content
      - Simple article structure
      - No JavaScript requirement
      - Public access
  use_playwright:
    conditions:
      - JavaScript rendering required
      - Dynamic content present
      - Authentication needed
      - Interactive elements
      - Screenshots required
  use_context7:
    conditions:
      - Technical documentation
      - API references
      - Framework guides
      - Library documentation
  use_native:
    conditions:
      - Local file access
      - Simple explanations
      - Code generation
      - General knowledge
```
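The routing rules reduce to a small decision function. A sketch only; the source-attribute keys are assumptions, since the rules above do not define a schema:

```python
def choose_extractor(source):
    """Route a source dict to a tool following the conditions above.
    Checks run from most to least specific; Tavily is the default."""
    if source.get("local"):
        return "native"
    if source.get("technical_docs"):
        return "context7"
    if source.get("needs_js") or source.get("needs_auth") or source.get("screenshots"):
        return "playwright"
    return "tavily"
```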
## Case-Based Learning Schema
```yaml
case_schema:
  case_id:
    format: "research_[timestamp]_[topic_hash]"
  case_content:
    query: "original research question"
    strategy_used: "planning approach"
    successful_patterns:
      - query_formulations: []
      - extraction_methods: []
      - synthesis_approaches: []
    findings:
      key_discoveries: []
      source_credibility_scores: {}
      confidence_levels: {}
    lessons_learned:
      what_worked: []
      what_failed: []
      optimizations: []
    metrics:
      time_taken: seconds
      sources_processed: count
      hops_executed: count
      confidence_achieved: float
```
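The `case_id` format can be generated deterministically. A sketch using an 8-character SHA-256 prefix as the topic hash; the hash algorithm and length are assumptions, since the schema above does not specify them:

```python
import hashlib
import time

def make_case_id(query, now=None):
    """Build a research_[timestamp]_[topic_hash] identifier per the schema.
    `now` (seconds since epoch) is injectable for reproducible tests."""
    ts = int(now if now is not None else time.time())
    topic_hash = hashlib.sha256(query.encode("utf-8")).hexdigest()[:8]
    return f"research_{ts}_{topic_hash}"
```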
## Replanning Thresholds
```yaml
replanning_triggers:
  confidence_based:
    critical: < 0.4
    low: < 0.6
    acceptable: 0.6-0.7
    good: > 0.7
  time_based:
    warning: 70% of limit
    critical: 90% of limit
  quality_based:
    insufficient_sources: < 3
    contradictions: > 30%
    gaps_identified: > 50%
  user_based:
    explicit_request: immediate
    implicit_dissatisfaction: assess
```
## Output Format Templates
```yaml
output_formats:
  summary:
    max_length: 500 words
    sections: [key_finding, evidence, sources]
    confidence_display: simple
  report:
    sections: [executive_summary, methodology, findings, synthesis, conclusions]
    citations: inline
    confidence_display: detailed
    visuals: included
  academic:
    sections: [abstract, introduction, methodology, literature_review, findings, discussion, conclusions]
    citations: academic_format
    confidence_display: statistical
    appendices: true
```
## Error Handling
```yaml
error_handling:
  tavily_errors:
    api_key_missing: "Check TAVILY_API_KEY environment variable"
    rate_limit: "Wait and retry with exponential backoff"
    no_results: "Expand search terms or try alternatives"
  playwright_errors:
    timeout: "Skip source or increase timeout"
    navigation_failed: "Mark as inaccessible, continue"
    screenshot_failed: "Continue without visual"
  quality_errors:
    low_confidence: "Trigger replanning"
    contradictions: "Seek additional sources"
    insufficient_data: "Expand search scope"
```
## Integration Points
```yaml
mcp_integration:
  tavily:
    role: primary_search
    fallback: native_websearch
  playwright:
    role: complex_extraction
    fallback: tavily_extraction
  sequential:
    role: reasoning_engine
    fallback: native_reasoning
  context7:
    role: technical_docs
    fallback: tavily_search
  serena:
    role: memory_management
    fallback: session_only
```
## Monitoring Metrics
```yaml
metrics_tracking:
  performance:
    - search_latency
    - extraction_time
    - synthesis_duration
    - total_research_time
  quality:
    - confidence_scores
    - source_diversity
    - coverage_completeness
    - contradiction_rate
  efficiency:
    - cache_hit_rate
    - parallel_execution_rate
    - memory_usage
    - api_cost
  learning:
    - pattern_reuse_rate
    - strategy_success_rate
    - improvement_trajectory
```


@@ -0,0 +1,495 @@
# Deep Research Workflows
## Example 1: Planning-Only Strategy
### Scenario
Clear research question: "Latest TensorFlow 3.0 features"
### Execution
```bash
/sc:research "Latest TensorFlow 3.0 features" --strategy planning --depth standard
```
### Workflow
```yaml
1. Planning (Immediate):
   - Decompose: Official docs, changelog, tutorials
   - No user clarification needed
2. Execution:
   - Hop 1: Official TensorFlow documentation
   - Hop 2: Recent tutorials and examples
   - Confidence: 0.85 achieved
3. Synthesis:
   - Features list with examples
   - Migration guide references
   - Performance comparisons
```
## Example 2: Intent-Planning Strategy
### Scenario
Ambiguous request: "AI safety"
### Execution
```bash
/sc:research "AI safety" --strategy intent --depth deep
```
### Workflow
```yaml
1. Intent Clarification:
   Questions:
     - "Are you interested in technical AI alignment, policy/governance, or current events?"
     - "What's your background level (researcher, developer, general interest)?"
     - "Any specific AI systems or risks of concern?"
2. User Response:
   - "Technical alignment for LLMs, researcher level"
3. Refined Planning:
   - Focus on alignment techniques
   - Academic sources priority
   - Include recent papers
4. Multi-Hop Execution:
   - Hop 1: Recent alignment papers
   - Hop 2: Key researchers and labs
   - Hop 3: Emerging techniques
   - Hop 4: Open problems
5. Self-Reflection:
   - Coverage: Complete ✓
   - Depth: Adequate ✓
   - Confidence: 0.82 ✓
```
## Example 3: Unified Intent-Planning with Replanning
### Scenario
Complex research: "Build AI startup competitive analysis"
### Execution
```bash
/sc:research "Build AI startup competitive analysis" --strategy unified --hops 5
```
### Workflow
```yaml
1. Initial Plan Presentation:
   Proposed Research Areas:
     - Current AI startup landscape
     - Funding and valuations
     - Technology differentiators
     - Market positioning
     - Growth strategies
   "Does this cover your needs? Any specific competitors or aspects to focus on?"
2. User Adjustment:
   "Focus on code generation tools, include pricing and technical capabilities"
3. Revised Multi-Hop Research:
   - Hop 1: List of code generation startups
   - Hop 2: Technical capabilities comparison
   - Hop 3: Pricing and business models
   - Hop 4: Customer reviews and adoption
   - Hop 5: Investment and growth metrics
4. Mid-Research Replanning:
   - Low confidence on technical details (0.55)
   - Switch to Playwright for interactive demos
   - Add GitHub repository analysis
5. Quality Gate Check:
   - Technical coverage: Improved to 0.78 ✓
   - Pricing data: Complete 0.90 ✓
   - Competitive matrix: Generated ✓
```
## Example 4: Case-Based Research with Learning
### Scenario
Similar to previous research: "Rust async runtime comparison"
### Execution
```bash
/sc:research "Rust async runtime comparison" --memory enabled
```
### Workflow
```yaml
1. Case Retrieval:
   Found Similar Case:
     - "Go concurrency patterns" research
     - Successful pattern: Technical benchmarks + code examples + community feedback
2. Adapted Strategy:
   - Use similar structure for Rust
   - Focus on: Tokio, async-std, smol
   - Include benchmarks and examples
3. Execution with Known Patterns:
   - Skip broad searches
   - Direct to technical sources
   - Use proven extraction methods
4. New Learning Captured:
   - Rust community prefers different metrics than Go
   - Crates.io provides useful statistics
   - Discord communities have valuable discussions
5. Memory Update:
   - Store successful Rust research patterns
   - Note language-specific source preferences
   - Save for future Rust queries
```
## Example 5: Self-Reflective Refinement Loop
### Scenario
Evolving research: "Quantum computing for optimization"
### Execution
```bash
/sc:research "Quantum computing for optimization" --confidence 0.8 --depth exhaustive
```
### Workflow
```yaml
1. Initial Research Phase:
   - Academic papers collected
   - Basic concepts understood
   - Confidence: 0.65 (below threshold)
2. Self-Reflection Analysis:
   Gaps Identified:
     - Practical implementations missing
     - No industry use cases
     - Mathematical details unclear
3. Replanning Decision:
   - Add industry reports
   - Include video tutorials for math
   - Search for code implementations
4. Enhanced Research:
   - Hop 1→2: Papers → Authors → Implementations
   - Hop 3→4: Companies → Case studies
   - Hop 5: Tutorial videos for complex math
5. Quality Achievement:
   - Confidence raised to 0.82 ✓
   - Comprehensive coverage achieved
   - Multiple perspectives included
```
## Example 6: Technical Documentation Research with Playwright
### Scenario
Research the latest Next.js 14 App Router features
### Execution
```bash
/sc:research "Next.js 14 App Router complete guide" --depth deep --scrape selective --screenshots
```
### Workflow
```yaml
1. Tavily Search:
   - Find official docs, tutorials, blog posts
   - Identify JavaScript-heavy documentation sites
2. URL Analysis:
   - Next.js docs → JavaScript rendering required
   - Blog posts → Static content, Tavily sufficient
   - Video tutorials → Need transcript extraction
3. Playwright Navigation:
   - Navigate to official documentation
   - Handle interactive code examples
   - Capture screenshots of UI components
4. Dynamic Extraction:
   - Extract code samples
   - Capture interactive demos
   - Document routing patterns
5. Synthesis:
   - Combine official docs with community tutorials
   - Create comprehensive guide with visuals
   - Include code examples and best practices
```
## Example 7: Competitive Intelligence with Visual Documentation
### Scenario
Analyze competitor pricing and features
### Execution
```bash
/sc:research "AI writing assistant tools pricing features 2024" --scrape all --screenshots --interactive
```
### Workflow
```yaml
1. Market Discovery:
   - Tavily finds: Jasper, Copy.ai, Writesonic, etc.
   - Identify pricing pages and feature lists
2. Complexity Assessment:
   - Dynamic pricing calculators detected
   - Interactive feature comparisons found
   - Login-gated content identified
3. Playwright Extraction:
   - Navigate to each pricing page
   - Interact with pricing sliders
   - Capture screenshots of pricing tiers
4. Feature Analysis:
   - Extract feature matrices
   - Compare capabilities
   - Document limitations
5. Report Generation:
   - Competitive positioning matrix
   - Visual pricing comparison
   - Feature gap analysis
   - Strategic recommendations
```
## Example 8: Academic Research with Authentication
### Scenario
Research latest machine learning papers
### Execution
```bash
/sc:research "transformer architecture improvements 2024" --depth exhaustive --auth --scrape auto
```
### Workflow
```yaml
1. Academic Search:
   - Tavily finds papers on arXiv, IEEE, ACM
   - Identify open vs. gated content
2. Access Strategy:
   - arXiv: Direct access, no auth needed
   - IEEE: Institutional access required
   - ACM: Mixed access levels
3. Extraction Approach:
   - Public papers: Tavily extraction
   - Gated content: Playwright with auth
   - PDFs: Download and process
4. Citation Network:
   - Follow reference chains
   - Identify key contributors
   - Map research lineage
5. Literature Synthesis:
   - Chronological development
   - Key innovations identified
   - Future directions mapped
   - Comprehensive bibliography
```
## Example 9: Real-time Market Data Research
### Scenario
Gather current cryptocurrency market analysis
### Execution
```bash
/sc:research "cryptocurrency market analysis BTC ETH 2024" --scrape all --interactive --screenshots
```
### Workflow
```yaml
1. Market Discovery:
   - Find: CoinMarketCap, CoinGecko, TradingView
   - Identify real-time data sources
2. Dynamic Content Handling:
   - Playwright loads live charts
   - Capture price movements
   - Extract volume data
3. Interactive Analysis:
   - Interact with chart timeframes
   - Toggle technical indicators
   - Capture different views
4. Data Synthesis:
   - Current market conditions
   - Technical analysis
   - Sentiment indicators
   - Visual documentation
5. Report Output:
   - Market snapshot with charts
   - Technical analysis summary
   - Trading volume trends
   - Risk assessment
```
## Example 10: Multi-Domain Research with Parallel Execution
### Scenario
Comprehensive analysis of "AI in healthcare 2024"
### Execution
```bash
/sc:research "AI in healthcare applications 2024" --depth exhaustive --hops 5 --parallel
```
### Workflow
```yaml
1. Domain Decomposition:
   Parallel Searches:
     - Medical AI applications
     - Regulatory landscape
     - Market analysis
     - Technical implementations
     - Ethical considerations
2. Multi-Hop Exploration:
   Each Domain:
     - Hop 1: Broad landscape
     - Hop 2: Key players
     - Hop 3: Case studies
     - Hop 4: Challenges
     - Hop 5: Future trends
3. Cross-Domain Synthesis:
   - Medical ↔ Technical connections
   - Regulatory ↔ Market impacts
   - Ethical ↔ Implementation constraints
4. Quality Assessment:
   - Coverage: All domains addressed
   - Depth: Sufficient detail per domain
   - Integration: Cross-domain insights
   - Confidence: 0.87 achieved
5. Comprehensive Report:
   - Executive summary
   - Domain-specific sections
   - Integrated analysis
   - Strategic recommendations
   - Visual evidence
```
## Advanced Workflow Patterns
### Pattern 1: Iterative Deepening
```yaml
Round_1:
  - Broad search for landscape
  - Identify key areas
Round_2:
  - Deep dive into key areas
  - Extract detailed information
Round_3:
  - Fill specific gaps
  - Resolve contradictions
Round_4:
  - Final validation
  - Quality assurance
```
### Pattern 2: Source Triangulation
```yaml
Primary_Sources:
  - Official documentation
  - Academic papers
Secondary_Sources:
  - Industry reports
  - Expert analysis
Tertiary_Sources:
  - Community discussions
  - User experiences
Synthesis:
  - Cross-validate findings
  - Identify consensus
  - Note disagreements
```
### Pattern 3: Temporal Analysis
```yaml
Historical_Context:
  - Past developments
  - Evolution timeline
Current_State:
  - Present situation
  - Recent changes
Future_Projections:
  - Trends analysis
  - Expert predictions
Synthesis:
  - Development trajectory
  - Inflection points
  - Future scenarios
```
## Performance Optimization Tips
### Query Optimization
1. Start with specific terms
2. Use domain filters early
3. Batch similar searches
4. Cache intermediate results
5. Reuse successful patterns
### Extraction Efficiency
1. Assess complexity first
2. Use appropriate tool per source
3. Parallelize when possible
4. Set reasonable timeouts
5. Handle errors gracefully
### Synthesis Strategy
1. Organize findings early
2. Identify patterns quickly
3. Resolve conflicts systematically
4. Build narrative progressively
5. Maintain evidence chains
## Quality Validation Checklist
### Planning Phase
- [ ] Clear objectives defined
- [ ] Appropriate strategy selected
- [ ] Resources estimated correctly
- [ ] Success criteria established
### Execution Phase
- [ ] All planned searches completed
- [ ] Extraction methods appropriate
- [ ] Multi-hop chains logical
- [ ] Confidence scores calculated
### Synthesis Phase
- [ ] All findings integrated
- [ ] Contradictions resolved
- [ ] Evidence chains complete
- [ ] Narrative coherent
### Delivery Phase
- [ ] Format appropriate for audience
- [ ] Citations complete and accurate
- [ ] Visual evidence included
- [ ] Confidence levels transparent


@@ -0,0 +1,285 @@
# Tavily MCP Server
**Purpose**: Web search and real-time information retrieval for research and current events
## Triggers
- Web search requirements beyond Claude's knowledge cutoff
- Current events, news, and real-time information needs
- Market research and competitive analysis tasks
- Technical documentation not in training data
- Academic research requiring recent publications
- Fact-checking and verification needs
- Deep research investigations requiring multi-source analysis
- `/sc:research` command activation
## Choose When
- **Over WebSearch**: When you need structured search with advanced filtering
- **Over WebFetch**: When you need multi-source search, not single page extraction
- **For research**: Comprehensive investigations requiring multiple sources
- **For current info**: Events, updates, or changes after knowledge cutoff
- **Not for**: Simple questions answerable from training, code generation, local file operations
## Works Best With
- **Sequential**: Tavily provides raw information → Sequential analyzes and synthesizes
- **Playwright**: Tavily discovers URLs → Playwright extracts complex content
- **Context7**: Tavily searches for updates → Context7 provides stable documentation
- **Serena**: Tavily performs searches → Serena stores research sessions
## Configuration
Requires a `TAVILY_API_KEY` environment variable (keys available from https://app.tavily.com)
## Search Capabilities
- **Web Search**: General web searches with ranking algorithms
- **News Search**: Time-filtered news and current events
- **Academic Search**: Scholarly articles and research papers
- **Domain Filtering**: Include/exclude specific domains
- **Content Extraction**: Full-text extraction from search results
- **Freshness Control**: Prioritize recent content
- **Multi-Round Searching**: Iterative refinement based on gaps
## Examples
```
"latest TypeScript features 2024" → Tavily (current technical information)
"OpenAI GPT updates this week" → Tavily (recent news and updates)
"quantum computing breakthroughs 2024" → Tavily (recent research)
"best practices React Server Components" → Tavily (current best practices)
"explain recursion" → Native Claude (general concept explanation)
"write a Python function" → Native Claude (code generation)
```
## Search Patterns
### Basic Search
```
Query: "search term"
→ Returns: Ranked results with snippets
```
### Domain-Specific Search
```
Query: "search term"
Domains: ["arxiv.org", "github.com"]
→ Returns: Results from specified domains only
```
### Time-Filtered Search
```
Query: "search term"
Recency: "week" | "month" | "year"
→ Returns: Recent results within timeframe
```
### Deep Content Search
```
Query: "search term"
Extract: true
→ Returns: Full content extraction from top results
```
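For illustration, the patterns above map onto a request body for Tavily's REST search endpoint. Field names follow Tavily's public API documentation at the time of writing (`api_key`, `query`, `search_depth`, `include_domains`, `include_raw_content`, `topic`, `days`) and should be verified against the current docs before use:

```python
import os

def build_tavily_payload(query, domains=None, recency_days=None, extract=False):
    """Assemble a search request body mirroring the patterns above:
    basic search, domain filtering, time filtering, and deep content
    extraction. The payload would be POSTed to Tavily's /search endpoint."""
    payload = {
        "api_key": os.environ.get("TAVILY_API_KEY", ""),
        "query": query,
        "search_depth": "advanced" if extract else "basic",
        "include_raw_content": extract,  # full-text extraction of top results
    }
    if domains:
        payload["include_domains"] = list(domains)
    if recency_days:
        payload["topic"] = "news"       # time filtering applies to news search
        payload["days"] = recency_days
    return payload
```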
## Quality Optimization
- **Query Refinement**: Iterate searches based on initial results
- **Source Diversity**: Ensure multiple perspectives in results
- **Credibility Filtering**: Prioritize authoritative sources
- **Deduplication**: Remove redundant information across sources
- **Relevance Scoring**: Focus on most pertinent results
## Integration Flows
### Research Flow
```
1. Tavily: Initial broad search
2. Sequential: Analyze and identify gaps
3. Tavily: Targeted follow-up searches
4. Sequential: Synthesize findings
5. Serena: Store research session
```
### Fact-Checking Flow
```
1. Tavily: Search for claim verification
2. Tavily: Find contradicting sources
3. Sequential: Analyze evidence
4. Report: Present balanced findings
```
### Competitive Analysis Flow
```
1. Tavily: Search competitor information
2. Tavily: Search market trends
3. Sequential: Comparative analysis
4. Context7: Technical comparisons
5. Report: Strategic insights
```
### Deep Research Flow (DR Agent)
```
1. Planning: Decompose research question
2. Tavily: Execute planned searches
3. Analysis: Assess URL complexity
4. Routing: Simple → Tavily extract | Complex → Playwright
5. Synthesis: Combine all sources
6. Iteration: Refine based on gaps
```
## Advanced Search Strategies
### Multi-Hop Research
```yaml
Initial_Search:
  query: "core topic"
  depth: broad
Follow_Up_1:
  query: "entities from initial"
  depth: targeted
Follow_Up_2:
  query: "relationships discovered"
  depth: deep
Synthesis:
  combine: all_findings
  resolve: contradictions
```
### Adaptive Query Generation
```yaml
Simple_Query:
  - Direct search terms
  - Single concept focus
Complex_Query:
  - Multiple search variations
  - Boolean operators
  - Domain restrictions
  - Time filters
Iterative_Query:
  - Start broad
  - Refine based on results
  - Target specific gaps
```
### Source Credibility Assessment
```yaml
High_Credibility:
  - Academic institutions
  - Government sources
  - Established media
  - Official documentation
Medium_Credibility:
  - Industry publications
  - Expert blogs
  - Community resources
Low_Credibility:
  - User forums
  - Social media
  - Unverified sources
```
## Performance Considerations
### Search Optimization
- Batch similar searches together
- Cache search results for reuse
- Prioritize high-value sources
- Limit depth based on confidence
### Rate Limiting
- Maximum searches per minute
- Token usage per search
- Result caching duration
- Parallel search limits
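The rate-limit guidance elsewhere in this PR ("wait and retry with exponential backoff") can be sketched as a delay schedule; the base, cap, and jitter values here are illustrative, not Tavily requirements:

```python
import random

def backoff_delays(retries=5, base=1.0, cap=30.0, jitter=False):
    """Exponential backoff schedule for rate-limited searches:
    base * 2**attempt seconds, capped at `cap`, with optional full jitter
    to avoid synchronized retries across parallel searches."""
    delays = []
    for attempt in range(retries):
        delay = min(cap, base * (2 ** attempt))
        delays.append(random.uniform(0, delay) if jitter else delay)
    return delays
```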
### Cost Management
- Monitor API usage
- Set budget limits
- Optimize query efficiency
- Use caching effectively
## Integration with DR Agent Architecture
### Planning Strategy Support
```yaml
Planning_Only:
  - Direct query execution
  - No refinement needed
Intent_Planning:
  - Clarify search intent
  - Generate focused queries
Unified:
  - Present search plan
  - Adjust based on feedback
```
### Multi-Hop Execution
```yaml
Hop_Management:
  - Track search genealogy
  - Build on previous results
  - Detect circular references
  - Maintain hop context
```
### Self-Reflection Integration
```yaml
Quality_Check:
  - Assess result relevance
  - Identify coverage gaps
  - Trigger additional searches
  - Calculate confidence scores
```
### Case-Based Learning
```yaml
Pattern_Storage:
  - Successful query formulations
  - Effective search strategies
  - Domain preferences
  - Time filter patterns
```
## Error Handling
### Common Issues
- API key not configured
- Rate limit exceeded
- Network timeout
- No results found
- Invalid query format
### Fallback Strategies
- Use native WebSearch
- Try alternative queries
- Expand search scope
- Use cached results
- Simplify search terms
## Best Practices
### Query Formulation
1. Start with clear, specific terms
2. Use quotes for exact phrases
3. Include relevant keywords
4. Specify time ranges when needed
5. Use domain filters strategically
### Result Processing
1. Verify source credibility
2. Cross-reference multiple sources
3. Check publication dates
4. Identify potential biases
5. Extract key information
### Integration Workflow
1. Plan search strategy
2. Execute initial searches
3. Analyze results
4. Identify gaps
5. Refine and iterate
6. Synthesize findings
7. Store valuable patterns


@@ -0,0 +1,13 @@
{
  "tavily": {
    "command": "npx",
    "args": [
      "-y",
      "mcp-remote",
      "https://mcp.tavily.com/mcp/?tavilyApiKey=${TAVILY_API_KEY}"
    ],
    "env": {
      "TAVILY_API_KEY": "${TAVILY_API_KEY}"
    }
  }
}


@@ -0,0 +1,58 @@
---
name: MODE_DeepResearch
description: Research mindset for systematic investigation and evidence-based reasoning
category: mode
---
# Deep Research Mode
## Activation Triggers
- /sc:research command
- Research-related keywords: investigate, explore, discover, analyze
- Questions requiring current information
- Complex research requirements
- Manual flag: --research
## Behavioral Modifications
### Thinking Style
- **Systematic over casual**: Structure investigations methodically
- **Evidence over assumption**: Every claim needs verification
- **Progressive depth**: Start broad, drill down systematically
- **Critical evaluation**: Question sources and identify biases
### Communication Changes
- Lead with confidence levels
- Provide inline citations
- Acknowledge uncertainties explicitly
- Present conflicting views fairly
### Priority Shifts
- Completeness over speed
- Accuracy over speculation
- Verification over assumption
### Process Adaptations
- Always create investigation plans
- Default to parallel operations
- Track information genealogy
- Maintain evidence chains
## Integration Points
- Activates deep-research-agent automatically
- Enables Tavily search capabilities
- Triggers Sequential for complex reasoning
- Emphasizes TodoWrite for task tracking
## Quality Focus
- Source credibility paramount
- Contradiction resolution required
- Confidence scoring mandatory
- Citation completeness essential
## Output Characteristics
- Structured research reports
- Clear evidence presentation
- Transparent methodology
- Actionable insights


@ -1 +1 @@
-4.1.4
+4.2.0


@ -0,0 +1,168 @@
#!/bin/bash
# Deep Research Integration Verification Script
# Tests that all components are properly integrated
# Note: no "set -e" here - failed checks are counted and reported in the summary
echo "========================================"
echo "Deep Research Integration Verification"
echo "========================================"
echo ""
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Track errors
ERRORS=0
WARNINGS=0
# Function to check file exists
check_file() {
    local file=$1
    local description=$2
    if [ -f "$file" ]; then
        echo -e "${GREEN}✓${NC} $description exists: $file"
        return 0
    else
        echo -e "${RED}✗${NC} $description missing: $file"
        ERRORS=$((ERRORS + 1))  # safer than ((ERRORS++)), which returns nonzero at 0
        return 1
    fi
}
# Function to check string in file
check_string_in_file() {
    local file=$1
    local string=$2
    local description=$3
    if grep -q "$string" "$file" 2>/dev/null; then
        echo -e "${GREEN}✓${NC} $description found in $file"
        return 0
    else
        echo -e "${RED}✗${NC} $description not found in $file"
        ERRORS=$((ERRORS + 1))
        return 1
    fi
}
echo "1. Checking Research Files..."
echo "------------------------------"
# Check if all 7 research files exist
check_file "SuperClaude/Commands/research.md" "Research command"
check_file "SuperClaude/Agents/deep-research-agent.md" "Deep Research agent"
check_file "SuperClaude/Modes/MODE_DeepResearch.md" "Deep Research mode"
check_file "SuperClaude/MCP/MCP_Tavily.md" "Tavily MCP documentation"
check_file "SuperClaude/MCP/configs/tavily.json" "Tavily MCP configuration"
check_file "SuperClaude/Core/RESEARCH_CONFIG.md" "Research configuration"
check_file "SuperClaude/Examples/deep_research_workflows.md" "Research workflow examples"
echo ""
echo "2. Checking Setup Component Updates..."
echo "---------------------------------------"
# Check mcp_docs.py has Tavily in server_docs_map
echo -e "${BLUE}Checking mcp_docs.py...${NC}"
check_string_in_file "setup/components/mcp_docs.py" '"tavily": "MCP_Tavily.md"' "Tavily in server_docs_map"
# Check mcp.py has Tavily configuration
echo -e "${BLUE}Checking mcp.py...${NC}"
check_string_in_file "setup/components/mcp.py" '"tavily":' "Tavily server configuration"
check_string_in_file "setup/components/mcp.py" "def _install_remote_mcp_server" "Remote MCP server handler"
check_string_in_file "setup/components/mcp.py" "TAVILY_API_KEY" "Tavily API key reference"
# Check agents.py has count updated
echo -e "${BLUE}Checking agents.py...${NC}"
check_string_in_file "setup/components/agents.py" "15 specialized AI agents" "15 agents count"
# Check modes.py has count updated
echo -e "${BLUE}Checking modes.py...${NC}"
check_string_in_file "setup/components/modes.py" "7 behavioral modes" "7 modes count"
# Check environment.py has research prerequisites check
echo -e "${BLUE}Checking environment.py...${NC}"
check_string_in_file "setup/utils/environment.py" "def check_research_prerequisites" "Research prerequisites check"
check_string_in_file "setup/utils/environment.py" "TAVILY_API_KEY" "Tavily API key check"
echo ""
echo "3. Checking Environment..."
echo "---------------------------"
# Check for Node.js
if command -v node &> /dev/null; then
    NODE_VERSION=$(node --version)
    echo -e "${GREEN}✓${NC} Node.js installed: $NODE_VERSION"
else
    echo -e "${YELLOW}⚠${NC} Node.js not installed (required for Tavily MCP)"
    WARNINGS=$((WARNINGS + 1))
fi
# Check for npm
if command -v npm &> /dev/null; then
    NPM_VERSION=$(npm --version)
    echo -e "${GREEN}✓${NC} npm installed: $NPM_VERSION"
else
    echo -e "${YELLOW}⚠${NC} npm not installed (required for MCP servers)"
    WARNINGS=$((WARNINGS + 1))
fi
# Check for TAVILY_API_KEY
if [ -n "$TAVILY_API_KEY" ]; then
    echo -e "${GREEN}✓${NC} TAVILY_API_KEY is set"
else
    echo -e "${YELLOW}⚠${NC} TAVILY_API_KEY not set - get from https://app.tavily.com"
    WARNINGS=$((WARNINGS + 1))
fi
echo ""
echo "4. Checking Auto-Discovery Components..."
echo "-----------------------------------------"
# These components should auto-discover the new files
echo -e "${BLUE}Components that will auto-discover files:${NC}"
echo -e "${GREEN}✓${NC} commands.py will find research.md"
echo -e "${GREEN}✓${NC} agents.py will find deep-research-agent.md"
echo -e "${GREEN}✓${NC} modes.py will find MODE_DeepResearch.md"
echo -e "${GREEN}✓${NC} core.py will find RESEARCH_CONFIG.md"
echo ""
echo "5. Checking Python Syntax..."
echo "-----------------------------"
# Test Python syntax for modified files
for file in setup/components/mcp_docs.py setup/components/mcp.py setup/components/agents.py setup/components/modes.py setup/utils/environment.py; do
    if python3 -m py_compile "$file" 2>/dev/null; then
        echo -e "${GREEN}✓${NC} $file syntax is valid"
    else
        echo -e "${RED}✗${NC} $file has syntax errors"
        ERRORS=$((ERRORS + 1))
    fi
done
echo ""
echo "========================================"
echo "Verification Summary"
echo "========================================"
if [ $ERRORS -eq 0 ]; then
    echo -e "${GREEN}✓ All critical checks passed!${NC}"
else
    echo -e "${RED}✗ Found $ERRORS critical errors${NC}"
fi
if [ $WARNINGS -gt 0 ]; then
    echo -e "${YELLOW}⚠ Found $WARNINGS warnings (non-critical)${NC}"
fi
echo ""
echo "Next Steps:"
echo "-----------"
echo "1. Set TAVILY_API_KEY: export TAVILY_API_KEY='your-key-here'"
echo "2. Run installation: SuperClaude install"
echo "3. Test in Claude Code: /sc:research 'test query'"
exit $ERRORS


@ -8,7 +8,7 @@ from pathlib import Path
 try:
     __version__ = (Path(__file__).parent.parent / "VERSION").read_text().strip()
 except Exception:
-    __version__ = "4.1.4"  # Fallback
+    __version__ = "4.2.0"  # Fallback - Deep Research Integration
 __author__ = "NomenAK, Mithun Gowda B"


@ -21,7 +21,7 @@ class AgentsComponent(Component):
         return {
             "name": "agents",
             "version": __version__,
-            "description": "14 specialized AI agents with domain expertise and intelligent routing",
+            "description": "15 specialized AI agents with domain expertise and intelligent routing",
             "category": "agents"
         }


@ -67,6 +67,15 @@ class MCPComponent(Component):
                 "required": False,
                 "api_key_env": "MORPH_API_KEY",
                 "api_key_description": "Morph API key for Fast Apply"
             },
+            "tavily": {
+                "name": "tavily",
+                "description": "Web search and real-time information retrieval for deep research",
+                "install_method": "npm",
+                "install_command": "npx -y tavily-mcp@0.1.2",
+                "required": False,
+                "api_key_env": "TAVILY_API_KEY",
+                "api_key_description": "Tavily API key for web search (get from https://app.tavily.com)"
+            }
         }
@ -296,6 +305,7 @@ class MCPComponent(Component):
         except Exception as e:
             self.logger.error(f"Error installing MCP server {server_name} using uv: {e}")
             return False
+
     def _install_github_mcp_server(self, server_info: Dict[str, Any], config: Dict[str, Any]) -> bool:
         """Install a single MCP server from GitHub using uvx"""
@ -535,9 +545,10 @@ class MCPComponent(Component):
         server_name = server_info["name"]
         npm_package = server_info.get("npm_package")
+        install_command = server_info.get("install_command")
-        if not npm_package:
-            self.logger.error(f"No npm_package found for server {server_name}")
+        if not npm_package and not install_command:
+            self.logger.error(f"No npm_package or install_command found for server {server_name}")
             return False
         command = "npx"
@ -567,18 +578,35 @@ class MCPComponent(Component):
         self.logger.warning(f"Proceeding without {api_key_env} - server may not function properly")
         # Install using Claude CLI
-        if config.get("dry_run"):
-            self.logger.info(f"Would install MCP server (user scope): claude mcp add -s user {server_name} {command} -y {npm_package}")
-            return True
-        self.logger.debug(f"Running: claude mcp add -s user {server_name} {command} -y {npm_package}")
-        result = self._run_command_cross_platform(
-            ["claude", "mcp", "add", "-s", "user", "--", server_name, command, "-y", npm_package],
-            capture_output=True,
-            text=True,
-            timeout=120  # 2 minutes timeout for installation
-        )
+        if install_command:
+            # Use the full install command (e.g., for tavily-mcp@0.1.2)
+            install_args = install_command.split()
+            if config.get("dry_run"):
+                self.logger.info(f"Would install MCP server (user scope): claude mcp add -s user {server_name} {' '.join(install_args)}")
+                return True
+            self.logger.debug(f"Running: claude mcp add -s user {server_name} {' '.join(install_args)}")
+            result = self._run_command_cross_platform(
+                ["claude", "mcp", "add", "-s", "user", "--", server_name] + install_args,
+                capture_output=True,
+                text=True,
+                timeout=120  # 2 minutes timeout for installation
+            )
+        else:
+            # Use npm_package
+            if config.get("dry_run"):
+                self.logger.info(f"Would install MCP server (user scope): claude mcp add -s user {server_name} {command} -y {npm_package}")
+                return True
+            self.logger.debug(f"Running: claude mcp add -s user {server_name} {command} -y {npm_package}")
+            result = self._run_command_cross_platform(
+                ["claude", "mcp", "add", "-s", "user", "--", server_name, command, "-y", npm_package],
+                capture_output=True,
+                text=True,
+                timeout=120  # 2 minutes timeout for installation
+            )
         if result.returncode == 0:
             self.logger.success(f"Successfully installed MCP server (user scope): {server_name}")


@ -28,7 +28,8 @@ class MCPDocsComponent(Component):
             "playwright": "MCP_Playwright.md",
             "serena": "MCP_Serena.md",
             "morphllm": "MCP_Morphllm.md",
-            "morphllm-fast-apply": "MCP_Morphllm.md"  # Handle both naming conventions
+            "morphllm-fast-apply": "MCP_Morphllm.md",  # Handle both naming conventions
+            "tavily": "MCP_Tavily.md"
         }
         super().__init__(install_dir, Path(""))


@ -22,7 +22,7 @@ class ModesComponent(Component):
         return {
             "name": "modes",
             "version": __version__,
-            "description": "SuperClaude behavioral modes (Brainstorming, Introspection, Task Management, Token Efficiency)",
+            "description": "7 behavioral modes for enhanced Claude Code operation",
             "category": "modes"
         }


@ -466,4 +466,48 @@ def create_env_file(api_keys: Dict[str, str], env_file_path: Optional[Path] = No
     except Exception as e:
         logger.error(f"Failed to create .env file: {e}")
         display_warning(f"Could not create .env file: {e}")
-        return False
+        return False
+
+
+def check_research_prerequisites() -> tuple[bool, list[str]]:
+    """
+    Check if deep research prerequisites are met
+
+    Returns:
+        Tuple of (success: bool, warnings: List[str])
+    """
+    warnings = []
+    logger = get_logger()
+
+    # Check Tavily API key
+    if not os.environ.get("TAVILY_API_KEY"):
+        warnings.append(
+            "TAVILY_API_KEY not set - Deep research web search will not work\n"
+            "Get your key from: https://app.tavily.com"
+        )
+        logger.warning("TAVILY_API_KEY not found in environment")
+    else:
+        logger.info("Found TAVILY_API_KEY in environment")
+
+    # Check Node.js for MCP
+    import shutil
+    if not shutil.which("node"):
+        warnings.append(
+            "Node.js not found - Required for Tavily MCP\n"
+            "Install from: https://nodejs.org"
+        )
+        logger.warning("Node.js not found - required for Tavily MCP")
+    else:
+        logger.info("Node.js found")
+
+    # Check npm
+    if not shutil.which("npm"):
+        warnings.append(
+            "npm not found - Required for MCP server installation\n"
+            "Usually installed with Node.js"
+        )
+        logger.warning("npm not found - required for MCP installation")
+    else:
+        logger.info("npm found")
+
+    return len(warnings) == 0, warnings
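Callers treat the return value as an (ok, warnings) pair and keep installing on warnings rather than failing hard. A self-contained sketch of that calling convention, using a condensed stand-in for the real function in setup/utils/environment.py:

```python
import os
import shutil


def check_research_prerequisites() -> tuple[bool, list[str]]:
    """Condensed stand-in for the real check in setup/utils/environment.py."""
    warnings = []
    if not os.environ.get("TAVILY_API_KEY"):
        warnings.append("TAVILY_API_KEY not set - get a key from https://app.tavily.com")
    for tool, hint in (("node", "install from https://nodejs.org"),
                       ("npm", "usually installed with Node.js")):
        if not shutil.which(tool):
            warnings.append(f"{tool} not found - {hint}")
    return len(warnings) == 0, warnings


ok, warnings = check_research_prerequisites()
if not ok:
    for w in warnings:
        print(f"warning: {w}")  # surface as warnings, do not abort the install
```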