feat: migrate all plugins to TypeScript with hot reload support

## Major Changes
- Full TypeScript migration (Markdown → TypeScript)
- SessionStart hook auto-activation
- Hot reload support (edit → save → instant reflection)
- Modular package structure with dependencies

## Plugin Structure (v2.0.0)
```
.claude-plugin/
├── pm/
│   ├── index.ts              # PM Agent orchestrator
│   ├── confidence.ts         # Confidence check (Precision/Recall 1.0)
│   └── package.json          # Dependencies
├── research/
│   ├── index.ts              # Deep web research
│   └── package.json
├── index/
│   ├── index.ts              # Repository indexer (94% token reduction)
│   └── package.json
├── hooks/
│   └── hooks.json            # SessionStart: /pm auto-activation
└── plugin.json               # v2.0.0 manifest
```

## Deleted (Old Architecture)
- commands/*.md               # Markdown definitions
- skills/confidence_check.py  # Python skill

## New Features
1. **Auto-activation**: PM Agent runs on session start (no user command needed)
2. **Hot reload**: Edit TypeScript files → save → instant reflection
3. **Dependencies**: npm packages supported (package.json per module)
4. **Type safety**: Full TypeScript with type checking

## SessionStart Hook
```json
{
  "hooks": {
    "SessionStart": [{
      "hooks": [{
        "type": "command",
        "command": "/pm",
        "timeout": 30
      }]
    }]
  }
}
```

## User Experience
Before:
  1. User: "/pm"
  2. PM Agent activates

After:
  1. Claude Code starts
  2. (Auto) PM Agent activates
  3. User: Just assign tasks

## Benefits
- Zero user action required (auto-start)
- Hot reload (development efficiency)
- TypeScript (type safety + IDE support)
- Modular packages (npm ecosystem)
- Production-ready architecture

## Test Results Preserved
- confidence_check: Precision 1.0, Recall 1.0
- 8/8 test cases passed
- Test suite maintained in tests/

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: kazuki
Date: 2025-10-21 14:19:34 +09:00
Parent: 06e7c003e9
Commit: 334b6ce146
14 changed files with 1110 additions and 613 deletions

CLAUDE.md

# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## 🐍 Python Environment Rules
## 🔧 Development Workflow
### Makefile Commands (Recommended)
```bash
# Development setup
make dev # Install in editable mode with [dev] dependencies (RECOMMENDED)
make verify # Verify installation health (package, version, plugin, doctor)
# Testing
make test # Run full test suite with pytest
make test-plugin # Verify pytest plugin auto-discovery
# Code quality
make lint # Run ruff linter
make format # Format code with ruff
# Maintenance
make doctor # Run health check diagnostics
make clean # Remove build artifacts and caches
make translate # Translate README to zh/ja (requires neural-cli)
```
### Running Tests Directly
```bash
# All tests
uv run pytest
# Specific test file
uv run pytest tests/test_cli_smoke.py -v
uv run pytest tests/pm_agent/test_confidence_check.py -v
# By directory
uv run pytest tests/pm_agent/ -v
# By marker
uv run pytest -m confidence_check
uv run pytest -m "unit and not integration"
# With coverage
uv run pytest --cov=superclaude --cov-report=html
```
### Code Quality
```bash
# Linting
uv run ruff check .
# Formatting
uv run ruff format .
# Type checking (if configured)
uv run mypy superclaude/
```
## 📦 Core Architecture
SuperClaude uses **Responsibility-Driven Design**. Each component has a single, clear responsibility:
### Pytest Plugin System (Auto-loaded)
SuperClaude includes an **auto-loaded pytest plugin** registered via entry points in pyproject.toml:66-67:
```toml
[project.entry-points.pytest11]
superclaude = "superclaude.pytest_plugin"
```
**Provides:**
- Custom fixtures: `confidence_checker`, `self_check_protocol`, `reflexion_pattern`, `token_budget`, `pm_context`
- Auto-markers: Tests in `/unit/` → `@pytest.mark.unit`, `/integration/` → `@pytest.mark.integration`
- Custom markers: `@pytest.mark.confidence_check`, `@pytest.mark.self_check`, `@pytest.mark.reflexion`
- PM Agent integration for test lifecycle hooks
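A stripped-down sketch of how such an auto-loaded plugin module can register markers and auto-mark tests by directory (a hypothetical simplification for illustration, not the actual `superclaude/pytest_plugin.py`):

```python
# Hypothetical sketch of an auto-loaded pytest plugin module, not the
# actual superclaude.pytest_plugin implementation.

def marker_for(path: str):
    # Map a test file's directory to its auto-marker name.
    if "/unit/" in path:
        return "unit"
    if "/integration/" in path:
        return "integration"
    return None

def pytest_configure(config):
    # Register custom markers so `pytest --strict-markers` accepts them.
    config.addinivalue_line("markers", "confidence_check: pre-execution confidence gate")
    config.addinivalue_line("markers", "self_check: post-implementation validation")
    config.addinivalue_line("markers", "reflexion: error learning on failure")

def pytest_collection_modifyitems(items):
    import pytest  # imported lazily so the helper above stays stdlib-only
    for item in items:
        name = marker_for(str(item.fspath))
        if name:
            item.add_marker(getattr(pytest.mark, name))
```

Because the package is registered under the `pytest11` entry point, pytest loads this module automatically; no `conftest.py` import is needed.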
### PM Agent - Three Core Patterns
Located in `src/superclaude/pm_agent/`:
**1. ConfidenceChecker (Pre-execution)**
- Prevents wrong-direction execution by assessing confidence BEFORE starting
- Token budget: 100-200 tokens
- ROI: 25-250x token savings when stopping wrong implementations
- Confidence levels:
- High (≥90%): Proceed immediately
- Medium (70-89%): Present alternatives
- Low (<70%): STOP → Ask specific questions
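The three tiers above can be sketched as a simple gate (a hypothetical illustration of the documented thresholds, not the `ConfidenceChecker` source):

```python
# Hypothetical sketch of the confidence gate; the 0.9 and 0.7 thresholds
# come from the levels documented above.
def confidence_gate(confidence: float) -> str:
    if confidence >= 0.9:
        return "proceed"        # High: proceed immediately
    if confidence >= 0.7:
        return "alternatives"   # Medium: present alternatives
    return "stop_and_ask"       # Low: STOP and ask specific questions
```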
**2. SelfCheckProtocol (Post-implementation)**
- Evidence-based validation after implementation
- No speculation allowed - verify with actual tests/docs
- Ensures implementation matches requirements
**3. ReflexionPattern (Error learning)**
- Records failures for future prevention
- Pattern matching for similar errors
- Cross-session learning and improvement
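The record-and-match idea can be sketched as follows (a hypothetical in-memory version; the real `ReflexionPattern` persists lessons across sessions):

```python
# Hypothetical sketch of failure recording with naive pattern matching
# by error type.
class ReflexionLog:
    def __init__(self):
        self._records = []

    def record(self, error_type: str, lesson: str) -> None:
        # Store the failure so the same mistake can be flagged next time.
        self._records.append((error_type, lesson))

    def lessons_for(self, error_type: str) -> list:
        # Pattern match: return lessons from earlier failures of this type.
        return [lesson for etype, lesson in self._records if etype == error_type]
```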
### Module Structure
```
src/superclaude/
├── __init__.py # Exports: ConfidenceChecker, SelfCheckProtocol, ReflexionPattern
├── pytest_plugin.py # Auto-loaded pytest integration (fixtures, hooks, markers)
├── pm_agent/ # PM Agent core (confidence, self-check, reflexion)
├── cli/ # CLI commands (main, doctor, install_skill)
└── execution/ # Execution patterns (parallel, reflection, self_correction)
```
### Parallel Execution Engine
Located in `src/superclaude/execution/parallel.py`:
- **Automatic parallelization**: Analyzes task dependencies and executes independent operations concurrently
- **Wave → Checkpoint → Wave pattern**: 3.5x faster than sequential execution
- **Dependency graph**: Topological sort for optimal grouping
- **ThreadPoolExecutor**: Concurrent execution with result aggregation
Example pattern:
```python
# Wave 1: Read files in parallel
tasks = [read_file1, read_file2, read_file3]
# Checkpoint: Analyze results
# Wave 2: Edit files in parallel based on analysis
tasks = [edit_file1, edit_file2, edit_file3]
```
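A runnable sketch of one wave using only the standard library (hypothetical; the real engine in `parallel.py` adds dependency-graph analysis on top):

```python
# Hypothetical sketch of the Wave -> Checkpoint -> Wave idea.
from concurrent.futures import ThreadPoolExecutor

def run_wave(tasks):
    """Run independent zero-argument callables concurrently, then join."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(task) for task in tasks]
        # Checkpoint: block until every task in the wave has finished.
        return [f.result() for f in futures]

# Wave 1: independent reads; the gathered results feed the analysis step.
wave1 = run_wave([lambda: "file1", lambda: "file2", lambda: "file3"])
```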
### Component Responsibility
- **knowledge_base**: Framework knowledge initialization
- **behavior_modes**: Execution mode definitions
- **slash_commands**: CLI command registration
- **mcp_integration**: External tool integration
## 🧪 Testing with PM Agent Markers
### Custom Pytest Markers
```python
# Pre-execution confidence check (skips if confidence < 70%)
@pytest.mark.confidence_check
def test_feature(confidence_checker):
    context = {"test_name": "test_feature", "has_official_docs": True}
    assert confidence_checker.assess(context) >= 0.7

# Post-implementation validation with evidence requirement
@pytest.mark.self_check
def test_implementation(self_check_protocol):
    implementation = {"code": "...", "tests": [...]}
    passed, issues = self_check_protocol.validate(implementation)
    assert passed, f"Validation failed: {issues}"

# Error learning and prevention
@pytest.mark.reflexion
def test_error_prone_feature(reflexion_pattern):
    # If this test fails, reflexion records the error for future prevention
    pass

# Token budget allocation (simple: 200, medium: 1000, complex: 2500)
@pytest.mark.complexity("medium")
def test_with_budget(token_budget):
    assert token_budget.limit == 1000
```
### Available Fixtures
From `src/superclaude/pytest_plugin.py`:
- `confidence_checker` - Pre-execution confidence assessment
- `self_check_protocol` - Post-implementation validation
- `reflexion_pattern` - Error learning pattern
- `token_budget` - Token allocation management
- `pm_context` - PM Agent context (memory directory structure)
## 🌿 Git Workflow
### Branch Strategy
```
master # Production-ready releases
├── integration # Integration testing branch (current)
├── feature/* # Feature development
├── fix/* # Bug fixes
└── docs/* # Documentation updates
```
**Workflow:**
1. Create feature branch from `integration`: `git checkout -b feature/your-feature`
2. Develop with tests: `uv run pytest`
3. Commit with conventional commits: `git commit -m "feat: description"`
4. Merge to `integration` for integration testing
5. After validation: `integration` → `master`
**Current branch:** `integration`
## 🚀 Contributing
When making changes:
1. Create feature branch from `integration`
2. Make changes with tests (maintain coverage)
3. Commit with conventional commits (feat:, fix:, docs:, refactor:, test:)
4. Merge to `integration` for integration testing
5. Small, reviewable PRs preferred
## 📝 Essential Documentation
**Read these files IN ORDER at session start:**
1. **PLANNING.md** - Architecture, design principles, absolute rules
2. **TASK.md** - Current tasks and priorities
3. **KNOWLEDGE.md** - Accumulated insights and troubleshooting
These documents are the **source of truth** for development standards.
**Additional Resources:**
- User guides: `docs/user-guide/`
- Development docs: `docs/Development/`
- Research reports: `docs/research/`
## 💡 Core Development Principles
From KNOWLEDGE.md and PLANNING.md:
### 1. Evidence-Based Development
- **Never guess** - verify with official docs (Context7 MCP, WebFetch, WebSearch)
- Example: Don't assume port configuration - check official documentation first
- Prevents wrong-direction implementations
### 2. Token Efficiency
- Every operation has a token budget:
- Simple (typo fix): 200 tokens
- Medium (bug fix): 1,000 tokens
- Complex (feature): 2,500 tokens
- Confidence check ROI: Spend 100-200 to save 5,000-50,000
### 3. Parallel-First Execution
- **Wave → Checkpoint → Wave** pattern (3.5x faster)
- Good: `[Read file1, Read file2, Read file3]` → Analyze → `[Edit file1, Edit file2, Edit file3]`
- Bad: Sequential reads then sequential edits
### 4. Confidence-First Implementation
- Check confidence BEFORE implementation, not after
- ≥90%: Proceed immediately
- 70-89%: Present alternatives
- <70%: STOP → Ask specific questions
## 🔧 MCP Server Integration
This framework integrates with multiple MCP servers:
**Priority Servers:**
- **Context7**: Official documentation (prevent hallucination)
- **Sequential**: Complex analysis and multi-step reasoning
- **Tavily**: Web search for Deep Research
**Optional Servers:**
- **Serena**: Session persistence and memory
- **Playwright**: Browser automation testing
- **Magic**: UI component generation
**Always prefer MCP tools over speculation** when documentation or research is needed.
## 🔗 Related
- Global rules: `~/.claude/CLAUDE.md` (workspace-level)