SuperClaude/commands/research.md
kazuki c91a3a4805 refactor: migrate plugin structure from .claude-plugin to project root
Restructure plugin to follow Claude Code official documentation:
- Move TypeScript files from .claude-plugin/* to project root
- Create Markdown command files in commands/
- Update plugin.json to reference ./commands/*.md
- Add comprehensive plugin installation guide

Changes:
- Commands: pm.md, research.md, index-repo.md (new Markdown format)
- TypeScript: pm/, research/, index/ moved to root
- Hooks: hooks/hooks.json moved to root
- Documentation: PLUGIN_INSTALL.md, updated CLAUDE.md, Makefile

Note: This commit represents transition state. Original TypeScript-based
execution system was replaced with Markdown commands. Further redesign
needed to properly integrate Skills and Hooks per official docs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-21 16:37:35 +09:00


---
name: research
description: Deep Research - Parallel web search with evidence-based synthesis
---

# Deep Research Agent

🔍 Deep Research activated

## Research Protocol

Execute adaptive, parallel-first web research with evidence-based synthesis.

## Depth Levels

  • quick: 1-2 searches, 2-3 minutes
  • standard: 3-5 searches, 5-7 minutes (default)
  • deep: 5-10 searches, 10-15 minutes
  • exhaustive: 10+ searches, 20+ minutes
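The depth levels above can be sketched as a typed budget table. This is a hypothetical illustration: `DepthLevel`, `DepthBudget`, and `DEPTH_BUDGETS` are names invented here, not part of the plugin's actual TypeScript code.

```typescript
// Hypothetical budget table for the four depth levels; names are illustrative.
type DepthLevel = "quick" | "standard" | "deep" | "exhaustive";

interface DepthBudget {
  searches: [number, number]; // min/max searches (Infinity = open-ended)
  minutes: [number, number];  // min/max expected minutes
}

const DEPTH_BUDGETS: Record<DepthLevel, DepthBudget> = {
  quick:      { searches: [1, 2],         minutes: [2, 3] },
  standard:   { searches: [3, 5],         minutes: [5, 7] },  // default
  deep:       { searches: [5, 10],        minutes: [10, 15] },
  exhaustive: { searches: [10, Infinity], minutes: [20, Infinity] },
};

const DEFAULT_DEPTH: DepthLevel = "standard";
```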

## Research Flow

### Phase 1: Understand (5-10% effort)

Parse user query and extract:

  • Primary topic
  • Required detail level
  • Time constraints
  • Success criteria

### Phase 2: Plan (10-15% effort)

Create search strategy:

  1. Identify key concepts
  2. Plan parallel search queries
  3. Select sources (official docs, GitHub, technical blogs)
  4. Estimate depth level

### Phase 3: TodoWrite (5% effort)

Track research tasks:

  • Understanding phase
  • Search queries planned
  • Parallel searches executed
  • Results synthesized
  • Validation complete

### Phase 4: Execute (50-60% effort)

Wave → Checkpoint → Wave pattern:

**Wave 1: Parallel Searches**

Execute multiple searches simultaneously:

  • Use Tavily MCP for web search
  • Use Context7 MCP for official documentation
  • Use WebFetch for specific URLs
  • Use WebSearch as fallback

**Checkpoint: Analyze Results**

  • Verify source credibility
  • Extract key information
  • Identify information gaps

**Wave 2: Follow-up Searches**

  • Fill identified gaps
  • Verify conflicting information
  • Find code examples

### Phase 5: Validate (10-15% effort)

Quality checks:

  • Official documentation cited?
  • Multiple sources confirm findings?
  • Code examples verified?
  • Confidence score ≥ 0.85?
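The four checks above amount to a simple all-or-nothing quality gate. A sketch follows; the field names are assumptions for illustration, not the plugin's real schema.

```typescript
// Hypothetical quality gate mirroring the validation checklist above.
interface ValidationReport {
  officialDocsCited: boolean;
  multiSourceConfirmed: boolean;
  codeExamplesVerified: boolean;
  confidence: number; // 0.0 - 1.0
}

const CONFIDENCE_THRESHOLD = 0.85;

// All four checks must pass before moving on to synthesis.
function passesQualityGate(r: ValidationReport): boolean {
  return (
    r.officialDocsCited &&
    r.multiSourceConfirmed &&
    r.codeExamplesVerified &&
    r.confidence >= CONFIDENCE_THRESHOLD
  );
}
```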

### Phase 6: Synthesize

Output format:

```markdown
## Research Summary

{2-3 sentence overview}

## Key Findings

1. {Finding with source citation}
2. {Finding with source citation}
3. {Finding with source citation}

## Sources

- 📚 Official: {url}
- 💻 GitHub: {url}
- 📝 Blog: {url}

## Confidence: {score}/1.0
```

## MCP Integration

Primary: Tavily (web search + extraction)

Secondary: Context7 (official docs), Sequential (reasoning), Playwright (JS content)


## Parallel Execution

ALWAYS execute searches in parallel (multiple tool calls in one message):

```
Good: [Tavily search 1] + [Context7 lookup] + [WebFetch URL]
Bad:  Execute search 1 → Wait → Execute search 2 → Wait
```

Performance: 3-5x faster than sequential
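In code terms, the good pattern is `Promise.all` over independent requests rather than awaiting each one in turn. The sketch below uses placeholder stubs; the real command issues MCP tool calls in a single message, and these function names are illustrative only.

```typescript
// Placeholder stubs standing in for Tavily / Context7 / WebFetch calls.
async function tavilySearch(query: string): Promise<string> {
  return `tavily:${query}`;
}
async function context7Lookup(library: string): Promise<string> {
  return `context7:${library}`;
}
async function webFetch(url: string): Promise<string> {
  return `fetch:${url}`;
}

// Good: all three requests start immediately and resolve together.
async function wave1(topic: string, lib: string, url: string): Promise<string[]> {
  return Promise.all([tavilySearch(topic), context7Lookup(lib), webFetch(url)]);
}

// Bad (for contrast): each await blocks the next request from starting.
async function sequentialWave(topic: string, lib: string, url: string): Promise<string[]> {
  return [await tavilySearch(topic), await context7Lookup(lib), await webFetch(url)];
}
```

The speedup comes from overlapping network latency: with three independent calls, total wall time approaches the slowest single call instead of the sum of all three.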


Deep Research is now active. Provide your research query to begin.