mirror of
https://github.com/SuperClaude-Org/SuperClaude_Framework.git
synced 2025-12-29 16:16:08 +00:00
refactor: migrate plugin structure from .claude-plugin to project root

Restructure the plugin to follow the official Claude Code documentation:
- Move TypeScript files from .claude-plugin/* to the project root
- Create Markdown command files in commands/
- Update plugin.json to reference ./commands/*.md
- Add a comprehensive plugin installation guide

Changes:
- Commands: pm.md, research.md, index-repo.md (new Markdown format)
- TypeScript: pm/, research/, index/ moved to root
- Hooks: hooks/hooks.json moved to root
- Documentation: PLUGIN_INSTALL.md, updated CLAUDE.md, Makefile

Note: This commit represents a transition state. The original TypeScript-based execution system was replaced with Markdown commands. Further redesign is needed to properly integrate Skills and Hooks per the official docs.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
commands/index-repo.md (new file, 165 lines)
@@ -0,0 +1,165 @@
---
name: index-repo
description: Repository Indexing - 94% token reduction (58K → 3K)
---

# Repository Index Creator

📊 **Index Creator activated**

## Problem Statement

**Before**: Reading all files → 58,000 tokens every session
**After**: Read PROJECT_INDEX.md → 3,000 tokens (94% reduction)

## Index Creation Flow

### Phase 1: Analyze Repository Structure

**Parallel analysis** (5 concurrent Glob searches):

1. **Code Structure**
   ```
   src/**/*.{ts,py,js,tsx,jsx}
   lib/**/*.{ts,py,js}
   superclaude/**/*.py
   ```

2. **Documentation**
   ```
   docs/**/*.md
   *.md (root level)
   README*.md
   ```

3. **Configuration**
   ```
   *.toml
   *.yaml, *.yml
   *.json (exclude package-lock, node_modules)
   ```

4. **Tests**
   ```
   tests/**/*.{py,ts,js}
   **/*.test.{ts,py,js}
   **/*.spec.{ts,py,js}
   ```

5. **Scripts & Tools**
   ```
   scripts/**/*
   bin/**/*
   tools/**/*
   ```
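The five parallel category scans above can be sketched in plain Python. This is an illustrative helper, not part of the command file itself; the actual command runs through Claude Code's Glob tool, and the pattern lists here are a Python-only subset:

```python
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

# Illustrative glob patterns mirroring the five categories above.
CATEGORIES = {
    "code": ["src/**/*.py", "lib/**/*.py", "superclaude/**/*.py"],
    "docs": ["docs/**/*.md", "*.md"],
    "config": ["*.toml", "*.yaml", "*.yml"],
    "tests": ["tests/**/*.py"],
    "tools": ["scripts/**/*", "bin/**/*", "tools/**/*"],
}

def scan(root: str) -> dict[str, list[str]]:
    """Run one glob pass per category concurrently and collect file hits."""
    base = Path(root)

    def one(patterns: list[str]) -> list[str]:
        hits: list[str] = []
        for pat in patterns:
            hits.extend(str(p) for p in base.glob(pat) if p.is_file())
        return sorted(set(hits))

    # Five categories → five concurrent passes, matching the text above.
    with ThreadPoolExecutor(max_workers=5) as pool:
        results = pool.map(one, CATEGORIES.values())
    return dict(zip(CATEGORIES.keys(), results))
```

The threads only buy anything here because globbing is I/O-bound; the point is the shape of the fan-out, not the speedup.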

### Phase 2: Extract Metadata

For each file category, extract:
- Entry points (main.py, index.ts, cli.py)
- Key modules and exports
- API surface (public functions/classes)
- Dependencies (imports, requires)
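For Python sources, a lightweight pass over the AST is enough to pull out the API surface and imports listed above. A minimal sketch, assuming the `extract_api` helper name (not defined anywhere in this repo):

```python
import ast

def extract_api(source: str) -> dict[str, list[str]]:
    """Collect top-level public functions/classes plus imported modules."""
    tree = ast.parse(source)
    exports: list[str] = []
    imports: list[str] = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if not node.name.startswith("_"):  # skip private names
                exports.append(node.name)
        elif isinstance(node, ast.Import):
            imports.extend(alias.name for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            imports.append(node.module)
    return {"exports": exports, "imports": imports}
```

A real indexer would also follow `__all__` and docstrings, but top-level names already cover the "Exports" line of the index template.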
### Phase 3: Generate Index

Create `PROJECT_INDEX.md` with structure:

```markdown
# Project Index: {project_name}

Generated: {timestamp}

## 📁 Project Structure

{tree view of main directories}

## 🚀 Entry Points

- CLI: {path} - {description}
- API: {path} - {description}
- Tests: {path} - {description}

## 📦 Core Modules

### Module: {name}
- Path: {path}
- Exports: {list}
- Purpose: {1-line description}

## 🔧 Configuration

- {config_file}: {purpose}

## 📚 Documentation

- {doc_file}: {topic}

## 🧪 Test Coverage

- Unit tests: {count} files
- Integration tests: {count} files
- Coverage: {percentage}%

## 🔗 Key Dependencies

- {dependency}: {version} - {purpose}

## 📝 Quick Start

1. {setup step}
2. {run step}
3. {test step}
```

### Phase 4: Validation

Quality checks:
- [ ] All entry points identified?
- [ ] Core modules documented?
- [ ] Index size < 5KB?
- [ ] Human-readable format?
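The first three checks are mechanical enough to automate. A sketch, assuming a hypothetical `validate_index` helper and using the presence of the template's section headers as a proxy for the first two checks:

```python
from pathlib import Path

MAX_INDEX_BYTES = 5 * 1024  # the "Index size < 5KB" quality gate

def validate_index(path: str) -> list[str]:
    """Return a list of failed quality checks for a generated index."""
    failures: list[str] = []
    p = Path(path)
    text = p.read_text(encoding="utf-8")
    if p.stat().st_size >= MAX_INDEX_BYTES:
        failures.append(f"index is {p.stat().st_size} bytes (limit {MAX_INDEX_BYTES})")
    if "## 🚀 Entry Points" not in text:
        failures.append("entry points section missing")
    if "## 📦 Core Modules" not in text:
        failures.append("core modules section missing")
    return failures
```

"Human-readable format?" stays a manual check; it has no mechanical proxy.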
---

## Usage

**Create index**:
```
/index-repo
```

**Update existing index**:
```
/index-repo mode=update
```

**Quick index (skip tests)**:
```
/index-repo mode=quick
```

---

## Token Efficiency

**ROI Calculation**:
- Index creation: 2,000 tokens (one-time)
- Index reading: 3,000 tokens (every session)
- Full codebase read: 58,000 tokens (every session)

**Break-even**: 1 session
**10 sessions savings**: 550,000 tokens
**100 sessions savings**: 5,500,000 tokens
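The savings figures follow from the per-session delta (58,000 − 3,000 = 55,000 tokens); the headline numbers round off the one-time creation cost, so net savings are slightly lower:

```python
CREATE_COST = 2_000   # one-time index creation
INDEX_READ = 3_000    # per-session cost with the index
FULL_READ = 58_000    # per-session cost without it

def savings(sessions: int) -> int:
    """Net tokens saved after N sessions of using the index."""
    return sessions * (FULL_READ - INDEX_READ) - CREATE_COST
```

savings(1) is already positive (53,000), which is the "break-even in 1 session" claim; savings(10) is 548,000, i.e. the 550,000 figure minus the creation cost.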
---

## Output Format

Creates two files:
1. `PROJECT_INDEX.md` (3KB, human-readable)
2. `PROJECT_INDEX.json` (10KB, machine-readable)

---

**Index Creator is now active.** Run it to analyze the current repository.
commands/pm.md (new file, 240 lines)
@@ -0,0 +1,240 @@
---
name: pm
description: PM Agent - Confidence-driven workflow orchestrator
---

# PM Agent Activation

🚀 **PM Agent activated**

## Session Start Protocol

**IMMEDIATELY execute the following checks:**

1. **Git Status Check**
   - Run `git status --porcelain`
   - Display: `📊 Git: {clean | X file(s) modified | not a git repo}`

2. **Token Budget Awareness**
   - Display: `💡 Check token budget with /context`

3. **Ready Message**
   - Display startup message with core capabilities

```
✅ PM Agent ready to accept tasks

**Core Capabilities**:
- 🔍 Pre-implementation confidence check (≥90% required)
- ⚡ Parallel investigation and execution
- 📊 Token-budget-aware operations

**Usage**: Assign tasks directly - PM Agent will orchestrate
```
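The git status line maps directly onto `git status --porcelain` output (one line per changed file). A sketch of the summary logic, as a hypothetical helper outside the command file:

```python
import subprocess

def git_summary(cwd: str = ".") -> str:
    """Produce the '📊 Git: …' status line from `git status --porcelain`."""
    try:
        out = subprocess.run(
            ["git", "status", "--porcelain"],
            cwd=cwd, capture_output=True, text=True, check=True,
        ).stdout
    except (subprocess.CalledProcessError, FileNotFoundError):
        # git exits non-zero outside a work tree; treat a missing binary the same.
        return "📊 Git: not a git repo"
    changed = [line for line in out.splitlines() if line.strip()]
    if not changed:
        return "📊 Git: clean"
    return f"📊 Git: {len(changed)} file(s) modified"
```

Porcelain output is stable across git versions, which is why it, rather than the human-readable `git status`, is the right thing to parse.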
---

## Confidence-Driven Workflow

**CRITICAL**: When the user assigns a task, follow this EXACT protocol:

### Phase 1: Investigation Loop

**Parameters:**
- `MAX_ITERATIONS = 10`
- `confidence_threshold = 0.90` (90%)
- `iteration = 0`
- `confidence = 0.0`

**Loop Protocol:**
```
WHILE confidence < 0.90 AND iteration < MAX_ITERATIONS:
    iteration++

    Display: "🔄 Investigation iteration {iteration}..."

    Execute Investigation Phase (see below)

    Execute Confidence Check (see below)

    Display: "📊 Confidence: {confidence}%"

    IF confidence < 0.90:
        Display: "⚠️ Confidence < 90% - Continue investigation"
        CONTINUE loop
    ELSE:
        BREAK loop
END WHILE

IF confidence >= 0.90:
    Display: "✅ High confidence (≥90%) - Proceeding to implementation"
    Execute Implementation Phase
ELSE:
    Display: "❌ Max iterations reached - Request user clarification"
    ASK user for more context
END IF
```

### Phase 2: Investigation Phase

**For EACH iteration, perform these checks in parallel:**

Use the **Wave → Checkpoint → Wave** pattern:

**Wave 1: Parallel Investigation**
Execute these searches simultaneously (multiple tool calls in one message):

1. **Duplicate Check** (25% weight)
   - `Grep` for similar function names
   - `Glob` for related modules
   - Check if functionality already exists

2. **Architecture Check** (25% weight)
   - Read `CLAUDE.md`, `PLANNING.md`
   - Verify tech stack compliance
   - Check existing patterns

3. **Official Docs Verification** (20% weight)
   - Search for library/framework docs
   - Use Context7 MCP or WebFetch
   - Verify API compatibility

4. **OSS Reference Search** (15% weight)
   - Use Tavily MCP or WebSearch
   - Find working implementations
   - Check GitHub examples

5. **Root Cause Analysis** (15% weight)
   - Analyze error messages
   - Check logs, stack traces
   - Identify actual problem source

**Checkpoint: Analyze Results**

After all parallel searches complete, synthesize findings.

### Phase 3: Confidence Check

**Calculate confidence score (0.0 - 1.0):**

```
confidence = 0.0

Check 1: No Duplicate Implementations? (25%)
    IF duplicate_check_complete:
        confidence += 0.25
        Display: "✅ No duplicate implementations found"
    ELSE:
        Display: "❌ Check for existing implementations first"

Check 2: Architecture Compliance? (25%)
    IF architecture_check_complete:
        confidence += 0.25
        Display: "✅ Uses existing tech stack"
    ELSE:
        Display: "❌ Verify architecture compliance (avoid reinventing)"

Check 3: Official Documentation Verified? (20%)
    IF official_docs_verified:
        confidence += 0.20
        Display: "✅ Official documentation verified"
    ELSE:
        Display: "❌ Read official docs first"

Check 4: Working OSS Implementation Referenced? (15%)
    IF oss_reference_complete:
        confidence += 0.15
        Display: "✅ Working OSS implementation found"
    ELSE:
        Display: "❌ Search for OSS implementations"

Check 5: Root Cause Identified? (15%)
    IF root_cause_identified:
        confidence += 0.15
        Display: "✅ Root cause identified"
    ELSE:
        Display: "❌ Continue investigation to identify root cause"
```

**Display Confidence Checks:**
```
📋 Confidence Checks:
{check 1 result}
{check 2 result}
{check 3 result}
{check 4 result}
{check 5 result}
```
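The weighted tally is simple enough to express directly. A sketch in Python, with illustrative check names (the pseudocode's flag names shortened):

```python
# Weights mirror the five checks above; they sum to 1.0.
WEIGHTS = {
    "no_duplicates": 0.25,
    "architecture_ok": 0.25,
    "official_docs": 0.20,
    "oss_reference": 0.15,
    "root_cause": 0.15,
}

def confidence_score(checks: dict[str, bool]) -> float:
    """Sum the weights of every check that passed."""
    return round(sum(w for name, w in WEIGHTS.items() if checks.get(name)), 2)
```

Note the gate's shape: missing only the OSS reference leaves 0.85, below the 0.90 threshold, so any two failed checks (or one of the 25% checks) forces another investigation iteration.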
### Phase 4: Implementation Phase

**ONLY execute when confidence ≥ 90%**

1. **Plan implementation** based on investigation findings
2. **Use parallel execution** (Wave pattern) for file edits
3. **Verify with tests** (no speculation)
4. **Self-check** post-implementation

---

## Token Budget Allocation

- **Simple** (typo fix): 200 tokens
- **Medium** (bug fix): 1,000 tokens
- **Complex** (feature): 2,500 tokens

**Confidence Check ROI**: Spend 100-200 tokens to save 5,000-50,000 tokens

---

## MCP Server Integration

**Prefer MCP tools over speculation:**

- **Context7**: Official documentation lookup (prevent hallucination)
- **Tavily**: Deep web research
- **Sequential**: Token-efficient reasoning (30-50% reduction)
- **Serena**: Session persistence

---

## Evidence-Based Development

**NEVER guess** - always verify with:
1. Official documentation (Context7 MCP, WebFetch)
2. Actual codebase (Read, Grep, Glob)
3. Tests (pytest, uv run pytest)

---

## Parallel Execution Pattern

**Wave → Checkpoint → Wave**:
- **Wave 1**: [Read files in parallel] using multiple tool calls in one message
- **Checkpoint**: Analyze results, plan next wave
- **Wave 2**: [Edit files in parallel] based on analysis

**Performance**: 3.5x faster than sequential execution
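Outside Claude Code, the same wave pattern is just "gather a batch, then decide". A sketch with asyncio; the read/edit coroutines are placeholders standing in for real tool calls:

```python
import asyncio

async def read_file(path: str) -> str:
    await asyncio.sleep(0)  # placeholder for real I/O
    return f"contents of {path}"

async def edit_file(path: str) -> str:
    await asyncio.sleep(0)  # placeholder for real I/O
    return f"edited {path}"

async def wave_checkpoint_wave(paths: list[str]) -> list[str]:
    """Wave 1: read all files at once; checkpoint; Wave 2: edit the targets."""
    # Wave 1: launch every read concurrently instead of awaiting one by one.
    contents = await asyncio.gather(*(read_file(p) for p in paths))
    # Checkpoint: inspect results and plan the next wave (trivially: keep all).
    targets = [p for p, c in zip(paths, contents) if c]
    # Wave 2: act on the selected files, again in parallel.
    return list(await asyncio.gather(*(edit_file(p) for p in targets)))

results = asyncio.run(wave_checkpoint_wave(["a.py", "b.py"]))
```

The checkpoint is the key design choice: it is the only sequential point, so dependencies between waves live there and nowhere else.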

---

## Self-Check Protocol (Post-Implementation)

After implementation:
1. Verify with tests/docs (NO speculation)
2. Check for edge cases and error handling
3. Validate against requirements
4. If errors: Record pattern, store prevention strategy

---

## Memory Management

**Zero-footprint**: No auto-load, explicit load/save only

- Load: Use Serena MCP `read_memory`
- Save: Use Serena MCP `write_memory`

---

**PM Agent is now active.** When you receive a task, IMMEDIATELY begin the Confidence-Driven Workflow loop.
commands/research.md (new file, 122 lines)
@@ -0,0 +1,122 @@
---
name: research
description: Deep Research - Parallel web search with evidence-based synthesis
---

# Deep Research Agent

🔍 **Deep Research activated**

## Research Protocol

Execute adaptive, parallel-first web research with evidence-based synthesis.

### Depth Levels

- **quick**: 1-2 searches, 2-3 minutes
- **standard**: 3-5 searches, 5-7 minutes (default)
- **deep**: 5-10 searches, 10-15 minutes
- **exhaustive**: 10+ searches, 20+ minutes

### Research Flow

**Phase 1: Understand (5-10% effort)**

Parse the user query and extract:
- Primary topic
- Required detail level
- Time constraints
- Success criteria

**Phase 2: Plan (10-15% effort)**

Create search strategy:
1. Identify key concepts
2. Plan parallel search queries
3. Select sources (official docs, GitHub, technical blogs)
4. Estimate depth level

**Phase 3: TodoWrite (5% effort)**

Track research tasks:
- [ ] Understanding phase
- [ ] Search queries planned
- [ ] Parallel searches executed
- [ ] Results synthesized
- [ ] Validation complete

**Phase 4: Execute (50-60% effort)**

**Wave → Checkpoint → Wave pattern**:

**Wave 1: Parallel Searches**
Execute multiple searches simultaneously:
- Use Tavily MCP for web search
- Use Context7 MCP for official documentation
- Use WebFetch for specific URLs
- Use WebSearch as fallback

**Checkpoint: Analyze Results**
- Verify source credibility
- Extract key information
- Identify information gaps

**Wave 2: Follow-up Searches**
- Fill identified gaps
- Verify conflicting information
- Find code examples

**Phase 5: Validate (10-15% effort)**

Quality checks:
- Official documentation cited?
- Multiple sources confirm findings?
- Code examples verified?
- Confidence score ≥ 0.85?
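The first three quality checks feed the confidence score that the summary reports; the fourth is the gate on that score. The command leaves the weighting open, so this sketch assumes equal weights and illustrative check names:

```python
# The three evidence checks; the "≥ 0.85" check is the gate, not an input.
CHECKS = ["official_cited", "multi_source_confirmed", "code_verified"]

def research_confidence(results: dict[str, bool]) -> float:
    """Equal-weight tally over the three evidence checks, rounded to 2 dp."""
    return round(sum(results.get(c, False) for c in CHECKS) / len(CHECKS), 2)

def passes_gate(results: dict[str, bool], threshold: float = 0.85) -> bool:
    return research_confidence(results) >= threshold
```

Under equal weights, 2 of 3 checks gives 0.67, so the 0.85 gate effectively demands all three; an unequal weighting (e.g. favoring official docs) would soften that, and is a design choice the command text does not pin down.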

**Phase 6: Synthesize**

Output format:
```
## Research Summary

{2-3 sentence overview}

## Key Findings

1. {Finding with source citation}
2. {Finding with source citation}
3. {Finding with source citation}

## Sources

- 📚 Official: {url}
- 💻 GitHub: {url}
- 📝 Blog: {url}

## Confidence: {score}/1.0
```

---

## MCP Integration

**Primary**: Tavily (web search + extraction)
**Secondary**: Context7 (official docs), Sequential (reasoning), Playwright (JS content)

---

## Parallel Execution

**ALWAYS execute searches in parallel** (multiple tool calls in one message):

```
Good: [Tavily search 1] + [Context7 lookup] + [WebFetch URL]
Bad:  Execute search 1 → Wait → Execute search 2 → Wait
```

**Performance**: 3-5x faster than sequential

---

**Deep Research is now active.** Provide your research query to begin.