---
name: research
description: Deep web research with adaptive planning and intelligent search
category: command
complexity: advanced
mcp-servers: [tavily, sequential, playwright, serena]
personas: [deep-research-agent]
---
# /sc:research - Deep Research Command
> **Context Framework Note**: This command activates comprehensive research capabilities with adaptive planning, multi-hop reasoning, and evidence-based synthesis.
## Triggers
- Research questions that go beyond the model's knowledge cutoff
- Complex questions requiring multi-hop investigation and synthesis
- Current events and real-time information
- Academic or technical research requirements
- Market analysis and competitive intelligence
## Context Trigger Pattern
```
/sc:research "[query]" [--depth quick|standard|deep|exhaustive] [--strategy planning|intent|unified]
```
## Behavioral Flow
### 1. Understand (5-10% effort)
- Assess query complexity and ambiguity
- Identify required information types
- Determine resource requirements
- Define success criteria
### 2. Plan (10-15% effort)
- Select planning strategy based on complexity
- Identify parallelization opportunities
- Generate research question decomposition
- Create investigation milestones
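As an illustration of how the `--strategy` flag could drive this phase, here is a minimal Python sketch; the `select_strategy` helper and its thresholds are hypothetical assumptions, not part of the framework:
```
# Hypothetical sketch only: the strategy names come from the command flags,
# but the complexity/ambiguity thresholds are illustrative assumptions.
def select_strategy(complexity: float, ambiguity: float, requested: str | None = None) -> str:
    """Pick a planning strategy for the Plan phase."""
    if requested:                # honor an explicit --strategy flag
        return requested
    if ambiguity > 0.6:
        return "intent"          # clarify user intent before planning
    if complexity > 0.7:
        return "unified"         # combine intent clarification with planning
    return "planning"            # straightforward planning-only run

print(select_strategy(complexity=0.4, ambiguity=0.2))  # -> "planning"
```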
### 3. TodoWrite (5% effort)
- Create adaptive task hierarchy
- Scale tasks to query complexity (3-15 tasks; see the sketch below)
- Establish task dependencies
- Set progress tracking
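A minimal sketch of the 3-15 task scaling mentioned above; the linear mapping and the `task_count` helper are assumptions for illustration only:
```
# Hypothetical scaling of TodoWrite task count with query complexity (0.0-1.0).
# The 3-15 range comes from the step above; the linear curve is an assumption.
def task_count(complexity: float) -> int:
    return max(3, min(15, round(3 + complexity * 12)))

for c in (0.0, 0.5, 1.0):
    print(c, task_count(c))  # -> 3, 9, 15
```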
### 4. Execute (50-60% effort)
- **Parallel-first searches**: Batch independent queries instead of running them one at a time
- **Smart extraction**: Route extraction by content complexity (e.g. Playwright for JavaScript-heavy pages)
- **Multi-hop exploration**: Follow entity and concept chains (see the sketch after this list)
- **Evidence collection**: Track sources and confidence
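The multi-hop item above can be pictured as a query tree whose lineage is recorded per hop. The sketch below assumes a generic `search` callable standing in for the actual search tool; it is illustrative, not a real API:
```
# Illustrative multi-hop exploration with genealogy tracking (up to 5 hops).
# `search` is a placeholder callable, not a real Tavily binding.
from dataclasses import dataclass, field

@dataclass
class Hop:
    query: str
    parent: "Hop | None" = None                      # genealogy: which hop spawned this one
    results: list[str] = field(default_factory=list)

def explore(seed: str, search, max_hops: int = 5) -> list[Hop]:
    hops, frontier = [], [Hop(seed)]
    for _ in range(max_hops):
        next_frontier = []
        for hop in frontier:
            hop.results = search(hop.query)
            hops.append(hop)
            # derive at most two follow-up queries from what each round surfaced
            next_frontier += [Hop(f"{hop.query} {r}", parent=hop) for r in hop.results[:2]]
        if not next_frontier:
            break
        frontier = next_frontier
    return hops
```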
### 5. Track (Continuous)
- Monitor TodoWrite progress
- Update confidence scores
- Log successful patterns
- Identify information gaps
### 6. Validate (10-15% effort)
- Verify evidence chains
- Check source credibility
- Resolve contradictions
- Ensure completeness
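A small sketch of what this final validation gate could look like; the field names and the 0.7 threshold are assumptions, not framework values:
```
# Hypothetical validation gate run before report generation.
def validation_issues(findings: list[dict], min_confidence: float = 0.7) -> list[str]:
    issues = []
    for f in findings:
        if not f.get("sources"):
            issues.append(f"no source for: {f.get('claim', '?')}")
        if f.get("confidence", 0.0) < min_confidence:
            issues.append(f"low confidence for: {f.get('claim', '?')}")
    return issues

print(validation_issues([{"claim": "X", "sources": [], "confidence": 0.4}]))
```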
## Key Patterns
### Parallel Execution
- Batch all independent searches
- Run concurrent extractions
- Use sequential execution only when one query depends on another's result
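A minimal Python sketch of the parallel-first pattern using `asyncio.gather` to batch independent queries; `async_search` is a placeholder, not a real tool binding:
```
# Batch all independent searches concurrently; only dependent queries run sequentially.
import asyncio

async def batch_search(queries, async_search):
    return await asyncio.gather(*(async_search(q) for q in queries))

async def _demo():
    async def async_search(q):          # stand-in for the real search tool
        await asyncio.sleep(0.1)        # simulated network latency
        return f"results for {q!r}"
    return await batch_search(["query A", "query B", "query C"], async_search)

print(asyncio.run(_demo()))
```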
### Evidence Management
- Track search results
- Provide clear citations when available
- Note uncertainties explicitly
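One way to picture this bookkeeping is a small record per claim; the exact schema below is an assumption:
```
# Hypothetical evidence record mirroring the bullets above:
# source tracking, citation when available, explicit uncertainty, confidence 0.0-1.0.
from dataclasses import dataclass

@dataclass
class Evidence:
    claim: str
    source_url: str | None      # None when no citable source was found
    confidence: float           # 0.0-1.0
    note: str = ""              # record uncertainties explicitly

e = Evidence("X shipped feature Y in 2024", "https://example.com/post", 0.8,
             note="single primary source; not independently confirmed")
print(e)
```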
### Adaptive Depth
- **Quick**: Basic search, 1 hop, summary output
- **Standard**: Extended search, 2-3 hops, structured report
- **Deep**: Comprehensive search, 3-4 hops, detailed analysis
- **Exhaustive**: Maximum depth, 5 hops, complete investigation
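The depth levels above can be read as a small configuration table. In the sketch below the hop counts follow the list, while the per-round query counts are illustrative assumptions:
```
# Depth flag -> research scope. Hop counts match the list above;
# queries_per_round values are assumptions, not framework constants.
DEPTH_PROFILES = {
    "quick":      {"max_hops": 1, "queries_per_round": 2, "output": "summary"},
    "standard":   {"max_hops": 3, "queries_per_round": 4, "output": "structured report"},
    "deep":       {"max_hops": 4, "queries_per_round": 6, "output": "detailed analysis"},
    "exhaustive": {"max_hops": 5, "queries_per_round": 8, "output": "complete investigation"},
}

print(DEPTH_PROFILES["deep"])
```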
## MCP Integration
- **Tavily**: Primary search and extraction engine
- **Sequential**: Complex reasoning and synthesis
- **Playwright**: JavaScript-heavy content extraction
- **Serena**: Research session persistence
## Output Standards
- Save reports to `claudedocs/research_[topic]_[timestamp].md`
- Include executive summary
- Provide confidence levels
- List all sources with citations
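A sketch of the report path convention listed above; the slugification and timestamp format are assumptions about how `[topic]` and `[timestamp]` get filled in:
```
# Illustrative only: builds claudedocs/research_[topic]_[timestamp].md
import re
from datetime import datetime, timezone

def report_path(topic: str) -> str:
    slug = re.sub(r"[^a-z0-9]+", "_", topic.lower()).strip("_")
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
    return f"claudedocs/research_{slug}_{stamp}.md"

print(report_path("quantum computing 2024"))
# e.g. claudedocs/research_quantum_computing_2024_20250921_015442.md
```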
## Examples
```
/sc:research "latest developments in quantum computing 2024"
/sc:research "competitive analysis of AI coding assistants" --depth deep
/sc:research "best practices for distributed systems" --strategy unified
```
## Boundaries
**Will**: Retrieve current information, run intelligent searches, and produce evidence-based analysis
**Won't**: Make claims without sources, skip validation, or access restricted content