docs: Add comprehensive Framework-Hooks documentation

Complete technical documentation for the SuperClaude Framework-Hooks system:

• Overview documentation explaining pattern-driven intelligence architecture
• Individual hook documentation for all 7 lifecycle hooks with performance targets
• Complete configuration documentation for all YAML/JSON config files
• Pattern system documentation covering minimal/dynamic/learned patterns
• Shared modules documentation for all core intelligence components
• Integration guide showing SuperClaude framework coordination
• Performance guide with optimization strategies and benchmarks

Key technical features documented:
- 90% context reduction through pattern-driven approach (50KB+ → 5KB)
- 10x faster bootstrap performance (500ms+ → <50ms)
- 7 lifecycle hooks with specific performance targets (50-200ms)
- 5-level compression system with quality preservation ≥95%
- Just-in-time capability loading with intelligent caching
- Cross-hook learning system for continuous improvement
- MCP server coordination for all 6 servers
- Integration with 4 behavioral modes and 8-step quality gates

Documentation provides complete technical reference for developers,
system administrators, and users working with the Framework-Hooks
system architecture and implementation.

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
This commit is contained in:
NomenAK
2025-08-05 16:50:10 +02:00
parent 3e40322d0a
commit cee59e343c
32 changed files with 19206 additions and 0 deletions

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,492 @@
# Post-Tool-Use Hook Documentation
## Purpose
The **post_tool_use hook** implements comprehensive validation and learning after every tool execution in Claude Code. It serves as the primary quality assurance and continuous improvement mechanism for the SuperClaude framework, ensuring operations comply with RULES.md and PRINCIPLES.md while learning from each execution to enhance future performance.
**Core Functions:**
- **Quality Validation**: Verifies tool execution against SuperClaude framework standards
- **Rules Compliance**: Enforces RULES.md operational requirements and safety protocols
- **Principles Alignment**: Validates adherence to PRINCIPLES.md development philosophy
- **Effectiveness Measurement**: Quantifies operation success and learning value
- **Error Pattern Detection**: Identifies and learns from recurring issues and failures
- **Learning Integration**: Records insights for continuous framework improvement
## Execution Context
The post_tool_use hook **runs after every tool use** in Claude Code, providing universal validation coverage across all operations.
**Execution Trigger Points:**
- **Universal Coverage**: Activated after every tool execution (Read, Write, Edit, Bash, etc.)
- **Automatic Activation**: No manual intervention required - built into Claude Code's execution pipeline
- **Real-Time Processing**: Immediate validation and feedback on tool results
- **Session Integration**: Maintains context across multiple tool executions within a session
**Input Processing:**
- Receives complete tool execution result via stdin as JSON
- Extracts execution context including parameters, results, errors, and performance data
- Analyzes operation characteristics and quality indicators
- Enriches context with framework-specific metadata
**Output Generation:**
- Comprehensive validation report with quality scores and compliance status
- Actionable recommendations for improvement and optimization
- Learning insights and pattern detection results
- Performance metrics and effectiveness measurements
## Performance Target
**Primary Target: <100ms execution time**
The hook is designed to provide comprehensive validation while maintaining minimal impact on overall system performance.
**Performance Breakdown:**
- **Initialization**: <20ms (component loading and configuration)
- **Context Extraction**: <15ms (analyzing tool results and parameters)
- **Validation Processing**: <35ms (RULES.md and PRINCIPLES.md compliance checking)
- **Learning Analysis**: <20ms (pattern detection and effectiveness measurement)
- **Report Generation**: <10ms (creating comprehensive validation report)
**Performance Monitoring:**
- Real-time execution time tracking with target enforcement
- Automatic performance degradation detection and alerts
- Resource usage monitoring (memory, CPU utilization)
- Fallback mechanisms for performance constraint scenarios
**Optimization Strategies:**
- Parallel validation processing for independent checks
- Cached validation results for repeated patterns
- Incremental validation for large operations
- Smart rule selection based on operation context
## Validation Levels
The hook implements four distinct validation levels, each providing increasing depth of analysis:
### Basic Level
**Focus**: Syntax and fundamental correctness
- **Syntax Validation**: Ensures generated code is syntactically correct
- **Basic Security Scan**: Detects obvious security vulnerabilities
- **Rule Compliance Check**: Validates core RULES.md requirements
- **Performance Target**: <50ms execution time
- **Use Cases**: Simple operations, low-risk contexts, performance-critical scenarios
### Standard Level (Default)
**Focus**: Comprehensive quality and type safety
- **All Basic Level checks**
- **Type Analysis**: Deep type compatibility checking and inference
- **Code Quality Assessment**: Maintainability, readability, and best practices
- **Principle Alignment**: Verification against PRINCIPLES.md guidelines
- **Performance Target**: <100ms execution time
- **Use Cases**: Regular development operations, standard complexity tasks
### Comprehensive Level
**Focus**: Security and performance optimization
- **All Standard Level checks**
- **Security Assessment**: Vulnerability analysis and threat modeling
- **Performance Analysis**: Bottleneck identification and optimization recommendations
- **Error Pattern Detection**: Advanced pattern recognition for failure modes
- **Learning Integration**: Enhanced effectiveness measurement and adaptation
- **Performance Target**: <150ms execution time
- **Use Cases**: High-risk operations, production deployments, security-sensitive contexts
### Production Level
**Focus**: Integration and deployment readiness
- **All Comprehensive Level checks**
- **Integration Testing**: Cross-component compatibility verification
- **Deployment Validation**: Production readiness assessment
- **Quality Gate Enforcement**: Complete 8-step validation cycle
- **Comprehensive Reporting**: Detailed compliance and quality documentation
- **Performance Target**: <200ms execution time
- **Use Cases**: Production deployments, critical system changes, release preparation
## RULES.md Compliance
The hook implements comprehensive enforcement of SuperClaude's core operational rules:
### File Operation Rules
**Read Before Write/Edit Enforcement:**
- Validates that Read operations precede Write/Edit operations
- Checks recent tool history (last 3 operations) for compliance
- Issues errors for violations with clear remediation guidance
- Provides exceptions for new file creation scenarios
**Absolute Path Validation:**
- Scans all path parameters (file_path, path, directory, output_path)
- Blocks relative path usage with specific violation reporting
- Allows approved prefixes (http://, https://, absolute paths)
- Prevents path traversal attacks and ensures operation security
**High-Risk Operation Validation:**
- Identifies high-risk operations (delete, refactor, deploy, migrate)
- Recommends validation for complex operations (complexity > 0.7)
- Provides warnings for operations lacking pre-validation
- Tracks validation compliance across operation types
### Security Requirements
**Input Validation Enforcement:**
- Detects user input handling patterns without validation
- Scans for external data processing vulnerabilities
- Validates API input sanitization and error handling
- Reports security violations with severity classification
**Secret Management Validation:**
- Scans for hardcoded sensitive information (passwords, API keys, tokens)
- Issues critical alerts for secret exposure risks
- Validates secure credential handling patterns
- Provides guidance for proper secret management
**Production Safety Checks:**
- Identifies production context indicators
- Validates safety measures for production operations
- Blocks unsafe operations in production environments
- Ensures proper rollback and recovery mechanisms
### Systematic Code Changes
**Project-Wide Discovery Validation:**
- Ensures comprehensive discovery before systematic changes
- Validates search completeness across all file types
- Confirms impact assessment documentation
- Verifies coordinated change execution planning
## PRINCIPLES.md Alignment
The hook validates adherence to SuperClaude's core development principles:
### Evidence-Based Decision Making
**Evidence Over Assumptions:**
- Detects assumption-based reasoning without supporting evidence
- Requires measurable data for significant decisions
- Validates hypothesis testing and empirical verification
- Promotes evidence-based development practices
**Decision Documentation:**
- Ensures decision rationale is recorded and accessible
- Validates trade-off analysis and alternative consideration
- Requires evidence for architectural and design choices
- Supports future decision review and learning
### Development Priority Validation
**Code Over Documentation:**
- Validates that documentation follows working code implementation
- Prevents documentation-first development anti-patterns
- Ensures documentation accuracy reflects actual implementation
- Promotes iterative development with validated outcomes
**Working Software Priority:**
- Verifies working implementations before extensive documentation
- Validates incremental development with functional milestones
- Ensures user value delivery through functional software
- Supports rapid prototyping and validation cycles
### Efficiency and Quality Balance
**Efficiency Over Verbosity:**
- Analyzes output size and complexity for unnecessary verbosity
- Recommends token efficiency techniques for large outputs
- Validates communication clarity without redundancy
- Promotes concise, actionable guidance and documentation
**Quality Without Compromise:**
- Ensures efficiency improvements don't sacrifice quality
- Validates testing and validation coverage during optimization
- Maintains code clarity and maintainability standards
- Balances development speed with long-term sustainability
## Learning Integration
The hook implements sophisticated learning mechanisms to continuously improve framework effectiveness:
### Effectiveness Measurement
**Multi-Dimensional Scoring:**
- **Overall Effectiveness**: Weighted combination of quality, performance, and satisfaction
- **Quality Score**: Code quality, security compliance, and principle alignment
- **Performance Score**: Execution time efficiency and resource utilization
- **User Satisfaction Estimate**: Success rate and error impact assessment
- **Learning Value**: Complexity, novelty, and insight generation potential
**Effectiveness Calculation:**
```yaml
effectiveness_weights:
quality_score: 30% # Code quality and compliance
performance_score: 25% # Execution efficiency
user_satisfaction: 35% # Perceived value and success
learning_value: 10% # Knowledge generation potential
```
### Pattern Recognition and Adaptation
**Success Pattern Detection:**
- Identifies effective tool usage patterns and MCP server coordination
- Recognizes high-quality output characteristics and optimal performance
- Records successful validation patterns and compliance strategies
- Builds pattern library for future operation optimization
**Failure Pattern Analysis:**
- Detects recurring error patterns and failure modes
- Analyzes root causes and contributing factors
- Identifies improvement opportunities and prevention strategies
- Generates targeted recommendations for specific failure types
**Adaptation Mechanisms:**
- **Real-Time Adjustment**: Dynamic threshold modification based on effectiveness
- **Rule Refinement**: Continuous improvement of validation rules and criteria
- **Principle Enhancement**: Evolution of principle interpretation and application
- **Validation Optimization**: Performance tuning based on usage patterns
### Learning Event Recording
**Operation Pattern Learning:**
- Records tool usage effectiveness with context and outcomes
- Tracks MCP server coordination patterns and success rates
- Documents user preference patterns and adaptation opportunities
- Builds comprehensive operation effectiveness database
**Error Recovery Learning:**
- Captures error context, recovery actions, and success rates
- Identifies effective error handling patterns and prevention strategies
- Records recovery time and resource requirements
- Builds error pattern knowledge base for future prevention
## Error Pattern Detection
The hook implements advanced error pattern detection to identify and prevent recurring issues:
### Error Classification System
**Severity-Based Classification:**
- **Critical Errors**: Security vulnerabilities, data corruption risks, system instability
- **Standard Errors**: Rule violations, quality failures, incomplete implementations
- **Warnings**: Principle deviations, optimization opportunities, best practice suggestions
- **Suggestions**: Code improvements, efficiency enhancements, learning recommendations
**Pattern Recognition Engine:**
- **Temporal Pattern Detection**: Identifies error trends over time and contexts
- **Contextual Pattern Analysis**: Recognizes error patterns specific to operation types
- **Cross-Operation Correlation**: Detects error patterns spanning multiple tool executions
- **User-Specific Pattern Learning**: Identifies individual user error tendencies
### Error Prevention Strategies
**Proactive Prevention:**
- **Pre-Validation Recommendations**: Suggests validation for similar high-risk operations
- **Security Check Integration**: Implements automated security validation checks
- **Performance Optimization**: Recommends parallel execution for large operations
- **Pattern-Based Warnings**: Provides early warnings for known problematic patterns
**Reactive Learning:**
- **Error Recovery Documentation**: Records successful recovery strategies
- **Pattern Knowledge Base**: Builds comprehensive error pattern database
- **Adaptation Recommendations**: Generates specific guidance for error prevention
- **User Education**: Provides learning opportunities from error analysis
## Configuration
The hook's behavior is controlled through multiple configuration layers providing flexibility and customization:
### Primary Configuration Source
**superclaude-config.json - post_tool_use section:**
```json
{
"post_tool_use": {
"enabled": true,
"performance_target_ms": 100,
"features": [
"quality_validation",
"rules_compliance_checking",
"principles_alignment_verification",
"effectiveness_measurement",
"error_pattern_detection",
"learning_opportunity_identification"
],
"configuration": {
"rules_validation": true,
"principles_validation": true,
"quality_standards_enforcement": true,
"effectiveness_tracking": true,
"learning_integration": true
},
"validation_levels": {
"basic": ["syntax_validation"],
"standard": ["syntax_validation", "type_analysis", "code_quality"],
"comprehensive": ["syntax_validation", "type_analysis", "code_quality", "security_assessment", "performance_analysis"],
"production": ["syntax_validation", "type_analysis", "code_quality", "security_assessment", "performance_analysis", "integration_testing", "deployment_validation"]
}
}
}
```
### Detailed Validation Configuration
**config/validation.yaml** provides comprehensive validation rule definitions:
**Rules Validation Configuration:**
- File operation rules (read_before_write, absolute_paths_only, validate_before_execution)
- Security requirements (input_validation, no_hardcoded_secrets, production_safety)
- Error severity levels and blocking behavior
- Context-aware validation adjustments
**Principles Validation Configuration:**
- Evidence-based decision making requirements
- Code-over-documentation enforcement
- Efficiency-over-verbosity thresholds
- Test-driven development validation
**Quality Standards:**
- Minimum quality scores for different assessment areas
- Performance thresholds and optimization indicators
- Security compliance requirements and checks
- Maintainability factors and measurement criteria
### Performance and Resource Configuration
**Performance Targets:**
```yaml
performance_configuration:
validation_targets:
processing_time_ms: 100 # Primary performance target
memory_usage_mb: 50 # Memory utilization limit
cpu_utilization_percent: 30 # CPU usage threshold
optimization_strategies:
parallel_validation: true # Enable parallel processing
cached_results: true # Cache validation results
incremental_validation: true # Optimize for repeated operations
smart_rule_selection: true # Context-aware rule application
```
**Resource Management:**
- Maximum validation time limits with fallback mechanisms
- Memory and CPU usage constraints with monitoring
- Automatic resource optimization and constraint handling
- Performance degradation detection and response
## Quality Gates Integration
The post_tool_use hook is integral to SuperClaude's 8-step validation cycle, contributing to multiple quality gates:
### Step 3: Code Quality Assessment
**Comprehensive Quality Analysis:**
- **Code Structure**: Evaluates organization, modularity, and architectural patterns
- **Maintainability**: Assesses readability, documentation, and modification ease
- **Best Practices**: Validates adherence to language and framework conventions
- **Technical Debt**: Identifies accumulation and provides reduction recommendations
**Quality Metrics:**
- Code quality score calculation (target: >0.7)
- Maintainability index with trend analysis
- Technical debt assessment and prioritization
- Best practice compliance percentage
### Step 4: Security Assessment
**Multi-Layer Security Validation:**
- **Vulnerability Analysis**: Scans for common security vulnerabilities (OWASP Top 10)
- **Input Validation**: Ensures proper sanitization and validation of external inputs
- **Authentication/Authorization**: Validates proper access control implementation
- **Data Protection**: Verifies secure data handling and storage practices
**Security Compliance:**
- Security score calculation (target: >0.8)
- Vulnerability severity assessment and prioritization
- Compliance reporting for security standards
- Threat modeling and risk assessment integration
### Step 5: Testing Validation
**Test Coverage and Quality:**
- **Test Presence**: Validates existence of appropriate tests for code changes
- **Coverage Analysis**: Measures test coverage depth and breadth
- **Test Quality**: Assesses test effectiveness and maintainability
- **Integration Testing**: Validates cross-component test coverage
**Testing Metrics:**
- Unit test coverage percentage (target: ≥80%)
- Integration test coverage (target: ≥70%)
- Test quality score and effectiveness measurement
- Testing best practice compliance validation
### Integration with Other Quality Gates
**Coordination with Pre-Tool-Use (Steps 1-2):**
- Receives syntax and type validation results for enhanced analysis
- Builds upon initial validation with deeper quality assessment
- Provides feedback for future pre-validation optimization
**Coordination with Session End (Steps 6-8):**
- Contributes validation results to performance analysis
- Provides quality metrics for documentation verification
- Supports integration testing with operation effectiveness data
### Quality Gate Reporting
**Comprehensive Quality Reports:**
- Step-by-step validation results with detailed findings
- Quality score breakdowns by category and importance
- Trend analysis and improvement recommendations
- Compliance status with actionable remediation steps
**Integration Metrics:**
- Overall quality gate passage rate
- Step-specific success rates and failure analysis
- Quality improvement trends over time
- Framework effectiveness measurement and optimization
## Advanced Features
### Context-Aware Validation
**Project Type Adaptations:**
- **Frontend Projects**: Additional accessibility, responsive design, and browser compatibility checks
- **Backend Projects**: Enhanced API security, data validation, and performance optimization focus
- **Full-Stack Projects**: Integration testing, end-to-end validation, and deployment safety verification
**User Expertise Adjustments:**
- **Beginner Users**: High validation verbosity, educational suggestions, step-by-step guidance
- **Intermediate Users**: Medium verbosity, best practice suggestions, optimization recommendations
- **Expert Users**: Low verbosity, advanced optimization suggestions, architectural guidance
### Learning System Integration
**Cross-Hook Learning:**
- Shares effectiveness data with pre_tool_use hook for optimization
- Coordinates with session_start hook for user preference learning
- Integrates with stop hook for comprehensive session analysis
**Adaptive Behavior:**
- Adjusts validation thresholds based on user expertise and project context
- Learns from validation effectiveness and user feedback
- Optimizes rule selection and severity based on operation patterns
### Error Recovery and Resilience
**Graceful Degradation:**
- Maintains essential validation even during system constraints
- Provides fallback validation reports on processing errors
- Preserves user context and operation continuity during failures
**Learning from Failures:**
- Records validation hook errors for system improvement
- Analyzes failure patterns to prevent future issues
- Generates insights from error recovery experiences
## Integration Examples
### MCP Server Coordination
**Serena Integration:**
- Receives semantic validation support for code structure analysis
- Coordinates edit validation for complex refactoring operations
- Leverages project context for enhanced validation accuracy
**Morphllm Integration:**
- Validates intelligent editing operations and pattern applications
- Coordinates edit effectiveness measurement and optimization
- Provides feedback for fast-apply optimization
**Sequential Integration:**
- Leverages complex validation analysis for multi-step operations
- Coordinates systematic validation for architectural changes
- Integrates reasoning validation with decision documentation
### Hook Ecosystem Integration
**Pre-Tool-Use Coordination:**
- Receives validation preparation data for enhanced analysis
- Provides effectiveness feedback for future operation optimization
- Coordinates rule enforcement across the complete execution cycle
**Session Management Integration:**
- Contributes validation metrics to session analytics
- Provides quality insights for session summary generation
- Supports cross-session learning and pattern recognition
## Conclusion
The post_tool_use hook serves as the cornerstone of SuperClaude's quality assurance and continuous improvement system. By providing comprehensive validation, learning integration, and adaptive behavior, it ensures that every tool execution contributes to the framework's overall effectiveness while maintaining the highest standards of quality, security, and compliance.
Through its sophisticated validation levels, error pattern detection, and learning mechanisms, the hook enables SuperClaude to continuously evolve and improve, providing users with increasingly effective and reliable development assistance while maintaining strict adherence to the framework's core principles and operational rules.

View File

@@ -0,0 +1,676 @@
# pre_compact Hook Technical Documentation
## Overview
The `pre_compact` hook implements SuperClaude's intelligent token optimization system, executing before context compaction in Claude Code to achieve 30-50% token reduction while maintaining ≥95% information preservation. This hook serves as the core implementation of `MODE_Token_Efficiency.md` compression algorithms.
## Purpose
**Token efficiency and compression before context compaction** - The pre_compact hook provides intelligent context optimization through adaptive compression strategies, symbol systems, and evidence-based validation. It operates as a preprocessing layer that optimizes content for efficient token usage while preserving semantic accuracy and technical correctness.
### Core Objectives
- **Resource Management**: Optimize token usage during large-scale operations and high resource utilization
- **Quality Preservation**: Maintain ≥95% information retention through selective compression strategies
- **Framework Protection**: Complete exclusion of SuperClaude framework content from compression
- **Adaptive Intelligence**: Context-aware compression based on content type, user expertise, and resource constraints
- **Performance Optimization**: Sub-150ms execution time for real-time compression decisions
## Execution Context
The pre_compact hook executes **before context compaction** in the Claude Code session lifecycle, triggered by:
### Automatic Activation Triggers
- **Resource Constraints**: Context usage >75%, memory pressure, conversation length thresholds
- **Performance Optimization**: Multi-MCP server coordination, extended sessions, complex analysis workflows
- **Content Characteristics**: Large content blocks, repetitive patterns, technical documentation
- **Framework Integration**: Wave coordination, task management operations, quality gate validation
### Execution Sequence
```
Claude Code Session → Context Analysis → pre_compact Hook → Compression Applied → Context Compaction → Response Generation
```
### Integration Points
- **Before**: Context analysis and resource state evaluation
- **During**: Selective compression with real-time quality validation
- **After**: Optimized content delivery to Claude Code context system
## Performance Target
**Performance Target: <150ms execution time**
The hook operates within strict performance constraints to ensure real-time compression decisions:
### Performance Benchmarks
- **Target Execution Time**: 150ms maximum
- **Typical Performance**: 50-100ms for standard content
- **Efficiency Metric**: 100 characters per millisecond processing rate
- **Resource Overhead**: <5% additional memory usage during compression
### Performance Monitoring
```python
performance_metrics = {
'compression_time_ms': execution_time,
'target_met': execution_time < 150,
'efficiency_score': chars_per_ms / 100,
'processing_rate': content_length / execution_time
}
```
### Optimization Strategies
- **Parallel Content Analysis**: Concurrent processing of content sections
- **Intelligent Caching**: Reuse compression results for similar content patterns
- **Early Exit Strategies**: Skip compression for framework content immediately
- **Selective Processing**: Apply compression only where beneficial
## Compression Levels
**5-Level Compression Strategy** providing adaptive optimization based on resource constraints and content characteristics:
### Level 1: Minimal (0-40% compression)
```yaml
compression_level: minimal
symbol_systems: false
abbreviation_systems: false
structural_optimization: false
quality_threshold: 0.98
use_cases:
- user_content
- low_resource_usage
- high_quality_required
```
**Application**: User project files, documentation, source code requiring high fidelity preservation.
### Level 2: Efficient (40-70% compression)
```yaml
compression_level: efficient
symbol_systems: true
abbreviation_systems: false
structural_optimization: true
quality_threshold: 0.95
use_cases:
- moderate_resource_usage
- balanced_efficiency
```
**Application**: Session metadata, checkpoint data, working artifacts with acceptable optimization trade-offs.
### Level 3: Compressed (70-85% compression)
```yaml
compression_level: compressed
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
quality_threshold: 0.90
use_cases:
- high_resource_usage
- user_requests_brevity
```
**Application**: Analysis results, cached data, temporary working content with aggressive optimization.
### Level 4: Critical (85-95% compression)
```yaml
compression_level: critical
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
advanced_techniques: true
quality_threshold: 0.85
use_cases:
- resource_constraints
- emergency_compression
```
**Application**: Emergency resource situations, historical session data, highly repetitive content.
### Level 5: Emergency (95%+ compression)
```yaml
compression_level: emergency
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
advanced_techniques: true
aggressive_optimization: true
quality_threshold: 0.80
use_cases:
- critical_resource_constraints
- emergency_situations
```
**Application**: Critical resource exhaustion scenarios with maximum token conservation priority.
## Selective Compression
**Framework exclusion and content classification** ensuring optimal compression strategies based on content type and preservation requirements:
### Content Classification System
#### Framework Content (0% compression)
```yaml
framework_exclusions:
patterns:
- "/SuperClaude/SuperClaude/"
- "~/.claude/"
- ".claude/"
- "SuperClaude/*"
- "CLAUDE.md"
- "FLAGS.md"
- "PRINCIPLES.md"
- "ORCHESTRATOR.md"
- "MCP_*.md"
- "MODE_*.md"
- "SESSION_LIFECYCLE.md"
compression_level: "preserve"
reasoning: "Framework content must be preserved for proper operation"
```
**Protection Strategy**: Complete exclusion from all compression algorithms with immediate early exit upon framework content detection.
#### User Content Preservation (Minimal compression)
```yaml
user_content_preservation:
patterns:
- "project_files"
- "user_documentation"
- "source_code"
- "configuration_files"
- "custom_content"
compression_level: "minimal"
reasoning: "User content requires high fidelity preservation"
```
**Protection Strategy**: Light compression with whitespace optimization only, preserving semantic accuracy and technical correctness.
#### Session Data Optimization (Efficient compression)
```yaml
session_data_optimization:
patterns:
- "session_metadata"
- "checkpoint_data"
- "cache_content"
- "working_artifacts"
- "analysis_results"
compression_level: "efficient"
reasoning: "Session data can be compressed while maintaining utility"
```
**Optimization Strategy**: Symbol systems and structural optimization applied with 95% quality preservation target.
### Content Detection Algorithm
```python
def _analyze_content_sources(self, content: str, metadata: dict) -> Tuple[float, float]:
"""Analyze ratio of framework vs user content."""
framework_indicators = [
'SuperClaude', 'CLAUDE.md', 'FLAGS.md', 'PRINCIPLES.md',
'ORCHESTRATOR.md', 'MCP_', 'MODE_', 'SESSION_LIFECYCLE'
]
user_indicators = [
'project_files', 'user_documentation', 'source_code',
'configuration_files', 'custom_content'
]
```
## Symbol Systems
**Symbol systems replace verbose text** with standardized symbols for efficient communication while preserving semantic meaning:
### Core Logic & Flow Symbols
| Symbol | Meaning | Example Usage |
|--------|---------|---------------|
| → | leads to, implies | `auth.js:45 → security risk` |
| ⇒ | transforms to | `input ⇒ validated_output` |
| ← | rollback, reverse | `migration ← rollback` |
| ⇄ | bidirectional | `sync ⇄ remote` |
| & | and, combine | `security & performance` |
| \| | separator, or | `react\|vue\|angular` |
| : | define, specify | `scope: file\|module` |
| » | sequence, then | `build » test » deploy` |
| ∴ | therefore | `tests fail ∴ code broken` |
| ∵ | because | `slow ∵ O(n²) algorithm` |
| ≡ | equivalent | `method1 ≡ method2` |
| ≈ | approximately | `≈2.5K tokens` |
| ≠ | not equal | `actual ≠ expected` |
### Status & Progress Symbols
| Symbol | Meaning | Context |
|--------|---------|---------|
| ✅ | completed, passed | Task completion, validation success |
| ❌ | failed, error | Operation failure, validation error |
| ⚠️ | warning | Non-critical issues, attention required |
| | information | Informational messages, context |
| 🔄 | in progress | Active operations, processing |
| ⏳ | waiting, pending | Queued operations, dependencies |
| 🚨 | critical, urgent | High-priority issues, immediate action |
| 🎯 | target, goal | Objectives, milestones |
| 📊 | metrics, data | Performance data, analytics |
| 💡 | insight, learning | Discoveries, optimizations |
### Technical Domain Symbols
| Symbol | Domain | Usage Context |
|--------|---------|---------------|
| ⚡ | Performance | Speed optimization, efficiency |
| 🔍 | Analysis | Investigation, examination |
| 🔧 | Configuration | Setup, tool configuration |
| 🛡️ | Security | Protection, vulnerability analysis |
| 📦 | Deployment | Packaging, distribution |
| 🎨 | Design | UI/UX, frontend development |
| 🌐 | Network | Web services, connectivity |
| 📱 | Mobile | Responsive design, mobile apps |
| 🏗️ | Architecture | System structure, design patterns |
| 🧩 | Components | Modular design, composability |
### Symbol System Implementation
```python
symbol_systems = {
'core_logic_flow': {
'enabled': True,
'mappings': {
'leads to': '',
'transforms to': '',
'therefore': '',
'because': ''
}
},
'status_progress': {
'enabled': True,
'mappings': {
'completed': '',
'failed': '',
'warning': '⚠️',
'in progress': '🔄'
}
}
}
```
## Abbreviation Systems
**Technical abbreviations for efficiency** providing domain-specific shorthand while maintaining clarity and context:
### System & Architecture Abbreviations
| Full Term | Abbreviation | Context |
|-----------|--------------|---------|
| configuration | cfg | System settings, setup files |
| settings | cfg | Configuration parameters |
| implementation | impl | Code structure, algorithms |
| code structure | impl | Software architecture |
| architecture | arch | System design, patterns |
| system design | arch | Architectural decisions |
| performance | perf | Optimization, benchmarks |
| optimization | perf | Efficiency improvements |
| operations | ops | Deployment, DevOps |
| deployment | ops | Release processes |
| environment | env | Runtime context, settings |
| runtime context | env | Execution environment |
### Development Process Abbreviations
| Full Term | Abbreviation | Context |
|-----------|--------------|---------|
| requirements | req | Project specifications |
| dependencies | deps | Package management |
| packages | deps | Library dependencies |
| validation | val | Testing, verification |
| verification | val | Quality assurance |
| testing | test | Quality validation |
| quality assurance | test | Testing processes |
| documentation | docs | Technical writing |
| guides | docs | User documentation |
| standards | std | Coding conventions |
| conventions | std | Style guidelines |
### Quality & Analysis Abbreviations
| Full Term | Abbreviation | Context |
|-----------|--------------|---------|
| quality | qual | Code quality, maintainability |
| maintainability | qual | Long-term code health |
| security | sec | Safety measures, vulnerabilities |
| safety measures | sec | Security protocols |
| error | err | Exception handling |
| exception handling | err | Error management |
| recovery | rec | Resilience, fault tolerance |
| resilience | rec | System robustness |
| severity | sev | Priority levels, criticality |
| priority level | sev | Issue classification |
| optimization | opt | Performance improvements |
| improvement | opt | Enhancement strategies |
### Abbreviation System Implementation
```python
abbreviation_systems = {
'system_architecture': {
'enabled': True,
'mappings': {
'configuration': 'cfg',
'implementation': 'impl',
'architecture': 'arch',
'performance': 'perf'
}
},
'development_process': {
'enabled': True,
'mappings': {
'requirements': 'req',
'dependencies': 'deps',
'validation': 'val',
'testing': 'test'
}
}
}
```
## Quality Preservation
**95% information retention target** through comprehensive quality validation and evidence-based compression effectiveness monitoring:
### Quality Preservation Standards
```yaml
quality_preservation:
minimum_thresholds:
information_preservation: 0.95
semantic_accuracy: 0.95
technical_correctness: 0.98
user_content_fidelity: 0.99
validation_criteria:
key_concept_retention: true
technical_term_preservation: true
code_example_accuracy: true
reference_link_preservation: true
```
### Quality Validation Framework
```python
def _validate_compression_quality(self, compression_results, strategy) -> dict:
"""Validate compression quality against standards."""
validation = {
'overall_quality_met': True,
'preservation_score': 0.0,
'compression_efficiency': 0.0,
'quality_issues': [],
'quality_warnings': []
}
# Calculate preservation score
total_preservation = sum(result.preservation_score for result in compression_results.values())
validation['preservation_score'] = total_preservation / len(compression_results)
# Quality threshold validation
if validation['preservation_score'] < strategy.quality_threshold:
validation['overall_quality_met'] = False
validation['quality_issues'].append(
f"Preservation score {validation['preservation_score']:.2f} below threshold {strategy.quality_threshold}"
)
```
### Quality Monitoring Metrics
- **Information Preservation**: Semantic content retention measurement
- **Technical Correctness**: Code accuracy and reference preservation
- **Compression Efficiency**: Token reduction vs. quality trade-off analysis
- **User Content Fidelity**: Project-specific content preservation verification
### Quality Gate Integration
```python
quality_validation = self._validate_compression_quality(
compression_results, compression_strategy
)
if not quality_validation['overall_quality_met']:
log_decision(
"pre_compact",
"quality_validation",
"failed",
f"Preservation score: {quality_validation['preservation_score']:.2f}"
)
```
## Configuration
**Settings from compression.yaml** providing comprehensive configuration management for adaptive compression strategies:
### Core Configuration Structure
```yaml
# Performance Targets
performance_targets:
processing_time_ms: 150
compression_ratio_target: 0.50
quality_preservation_target: 0.95
token_efficiency_gain: 0.40
# Adaptive Compression Strategy
adaptive_compression:
context_awareness:
user_expertise_factor: true
project_complexity_factor: true
domain_specific_optimization: true
learning_integration:
effectiveness_feedback: true
user_preference_learning: true
pattern_optimization: true
```
### Compression Level Configuration
```python
def __init__(self):
# Load compression configuration
try:
self.compression_config = config_loader.load_config('compression')
except FileNotFoundError:
self.compression_config = self.hook_config.get('configuration', {})
# Performance tracking
self.performance_target_ms = config_loader.get_hook_config(
'pre_compact', 'performance_target_ms', 150
)
```
### Dynamic Configuration Management
- **Context-Aware Settings**: Automatic adjustment based on content type and resource state
- **Learning Integration**: User preference adaptation and pattern optimization
- **Performance Monitoring**: Real-time configuration tuning based on effectiveness metrics
- **Fallback Strategies**: Graceful degradation when configuration loading fails
### Integration with SuperClaude Framework
```yaml
integration:
mcp_servers:
morphllm: "coordinate_compression_with_editing"
serena: "memory_compression_strategies"
modes:
token_efficiency: "primary_compression_mode"
task_management: "session_data_compression"
learning_engine:
effectiveness_tracking: true
pattern_learning: true
adaptation_feedback: true
```
## MODE_Token_Efficiency Integration
**Implementation of MODE_Token_Efficiency compression algorithms** providing seamless integration with SuperClaude's token optimization behavioral mode:
### Mode Integration Architecture
```python
# MODE_Token_Efficiency.md → pre_compact.py implementation
class PreCompactHook:
"""
Pre-compact hook implementing SuperClaude token efficiency intelligence.
Implements MODE_Token_Efficiency.md algorithms:
- 5-level compression strategy
- Selective content classification
- Symbol systems optimization
- Quality preservation validation
"""
```
### Behavioral Mode Coordination
- **Auto-Activation**: Resource usage >75%, large-scale operations, user brevity requests
- **Compression Strategy Selection**: Adaptive algorithm based on MODE configuration
- **Quality Gate Integration**: Validation against MODE preservation targets
- **Performance Compliance**: Sub-150ms execution aligned with MODE efficiency requirements
### MODE Configuration Inheritance
```yaml
# MODE_Token_Efficiency.md settings → compression.yaml
compression_levels:
minimal: # MODE: 0-40% compression
quality_threshold: 0.98
symbol_systems: false
efficient: # MODE: 40-70% compression
quality_threshold: 0.95
symbol_systems: true
compressed: # MODE: 70-85% compression
quality_threshold: 0.90
abbreviation_systems: true
```
### Real-Time Mode Synchronization
```python
def _determine_compression_strategy(self, context: dict, content_analysis: dict) -> CompressionStrategy:
"""Determine optimal compression strategy aligned with MODE_Token_Efficiency."""
# MODE-compliant compression level determination
compression_level = self.compression_engine.determine_compression_level({
'resource_usage_percent': context.get('token_usage_percent', 0),
'conversation_length': context.get('conversation_length', 0),
'user_requests_brevity': context.get('user_requests_compression', False),
'complexity_score': context.get('content_complexity', 0.0)
})
```
### Learning Integration with MODE
```python
def _record_compression_learning(self, context, compression_results, quality_validation):
"""Record compression learning aligned with MODE adaptation."""
self.learning_engine.record_learning_event(
LearningType.PERFORMANCE_OPTIMIZATION,
AdaptationScope.USER,
context,
{
'compression_level': compression_level.value,
'preservation_score': quality_validation['preservation_score'],
'compression_efficiency': quality_validation['compression_efficiency']
},
overall_effectiveness,
0.9 # High confidence in MODE-aligned compression metrics
)
```
### Framework Compliance Validation
- **Symbol Systems**: Direct implementation of MODE symbol mappings
- **Abbreviation Systems**: MODE-compliant technical abbreviation patterns
- **Quality Preservation**: MODE 95% information retention standards
- **Selective Compression**: MODE content classification and protection strategies
## Key Features
### Intelligent Compression Strategy Selection
```python
def _determine_compression_strategy(self, context: dict, content_analysis: dict) -> CompressionStrategy:
"""
Adaptive compression strategy based on:
- Resource constraints and token usage
- Content type classification
- User preferences and expertise level
- Quality preservation requirements
"""
```
### Selective Content Preservation
- **Framework Exclusion**: Zero compression for SuperClaude components
- **User Content Protection**: High-fidelity preservation for project files
- **Session Data Optimization**: Efficient compression for operational data
- **Quality-Gated Processing**: Real-time validation against preservation targets
### Symbol Systems Optimization
- **Logic Flow Enhancement**: Mathematical and directional symbols
- **Status Communication**: Visual progress and state indicators
- **Domain-Specific Symbols**: Technical context-aware representations
- **Persona-Aware Selection**: Symbol choice based on active domain expertise
### Abbreviation Systems
- **Technical Efficiency**: Domain-specific shorthand for common terms
- **Context-Sensitive Application**: Intelligent abbreviation based on user familiarity
- **Quality Preservation**: Abbreviations that maintain semantic clarity
- **Learning Integration**: Pattern optimization based on effectiveness feedback
### Quality-Gated Compression
- **Real-Time Validation**: Continuous quality monitoring during compression
- **Preservation Score Tracking**: Quantitative information retention measurement
- **Adaptive Threshold Management**: Dynamic quality targets based on content type
- **Fallback Strategies**: Graceful degradation when quality targets not met
## Implementation Details
### Compression Engine Architecture
```python
from compression_engine import (
CompressionEngine, CompressionLevel, ContentType,
CompressionResult, CompressionStrategy
)
class PreCompactHook:
def __init__(self):
self.compression_engine = CompressionEngine()
self.performance_target_ms = 150
```
### Content Analysis Pipeline
1. **Content Characteristics Analysis**: Complexity, repetition, technical density
2. **Source Classification**: Framework vs. user vs. session content identification
3. **Compressibility Assessment**: Potential optimization opportunity evaluation
4. **Strategy Selection**: Optimal compression level and technique determination
5. **Quality Validation**: Real-time preservation score monitoring
### Performance Optimization Techniques
- **Early Exit Strategy**: Framework content bypass for immediate exclusion
- **Parallel Processing**: Concurrent analysis of content sections
- **Intelligent Caching**: Compression result reuse for similar patterns
- **Selective Application**: Compression only where beneficial and safe
### Error Handling and Fallback
```python
def _create_fallback_compression_config(self, compact_request: dict, error: str) -> dict:
"""Create fallback compression configuration on error."""
return {
'compression_enabled': False,
'fallback_mode': True,
'error': error,
'quality': {
'preservation_score': 1.0, # No compression = perfect preservation
'quality_met': False, # But failed to optimize
'issues': [f"Compression hook error: {error}"]
}
}
```
## Results and Benefits
### Typical Performance Metrics
- **Token Reduction**: 30-50% typical savings with quality preservation
- **Processing Speed**: 50-100ms typical execution time (well under 150ms target)
- **Quality Preservation**: ≥95% information retention consistently achieved
- **Framework Protection**: 100% exclusion success rate for SuperClaude components
### Integration Benefits
- **Seamless MODE Integration**: Direct implementation of MODE_Token_Efficiency algorithms
- **Real-Time Optimization**: Sub-150ms compression decisions during active sessions
- **Quality-First Approach**: Preservation targets never compromised for efficiency gains
- **Adaptive Intelligence**: Learning-based optimization for improved effectiveness over time
### User Experience Improvements
- **Transparent Operation**: Compression applied without user intervention or awareness
- **Quality Assurance**: Technical correctness and semantic accuracy maintained
- **Performance Enhancement**: Faster response times through optimized token usage
- **Contextual Adaptation**: Compression strategies tailored to specific use cases and domains
---
*This hook serves as the core implementation of SuperClaude's intelligent token optimization system, providing evidence-based compression with adaptive strategies and quality-first preservation standards.*

View File

@@ -0,0 +1,805 @@
# Pre-Tool-Use Hook Technical Documentation
**Intelligent Tool Routing and MCP Server Selection Hook**
---
## Purpose
The `pre_tool_use` hook implements intelligent tool routing and MCP server selection for the SuperClaude framework. It runs before every tool execution in Claude Code, providing optimal tool configuration, MCP server coordination, and performance optimization within a strict 200ms execution target.
**Core Value Proposition**:
- **Intelligent Routing**: Matches tool requests to optimal execution strategies using pattern detection
- **MCP Server Orchestration**: Coordinates multiple specialized servers (Context7, Sequential, Magic, Playwright, Morphllm, Serena)
- **Performance Optimization**: Parallel execution planning, caching strategies, and resource management
- **Adaptive Intelligence**: Learning-based routing improvements over time
- **Fallback Resilience**: Graceful degradation when preferred tools are unavailable
---
## Execution Context
### Trigger Event
The hook executes **before every tool use** in Claude Code, intercepting tool requests to enhance them with SuperClaude intelligence.
### Execution Flow
```
Tool Request → pre_tool_use Hook → Enhanced Tool Configuration → Tool Execution
```
### Input Context
```json
{
"tool_name": "Read|Write|Edit|Analyze|Build|Test|...",
"parameters": {...},
"user_intent": "natural language description",
"session_context": {...},
"previous_tools": [...],
"operation_sequence": [...],
"resource_state": {...}
}
```
### Output Enhancement
```json
{
"tool_name": "original_tool",
"enhanced_mode": true,
"mcp_integration": {
"enabled": true,
"servers": ["serena", "sequential"],
"coordination_strategy": "collaborative"
},
"performance_optimization": {
"parallel_execution": true,
"caching_enabled": true,
"optimizations": ["parallel_file_processing"]
},
"execution_metadata": {
"estimated_time_ms": 1200,
"complexity_score": 0.65,
"intelligence_level": "medium"
}
}
```
---
## Performance Target
### Primary Target: <200ms Execution Time
- **Requirement**: Complete routing analysis and configuration within 200ms
- **Measurement**: End-to-end hook execution time from input to enhanced configuration
- **Validation**: Real-time performance tracking with target compliance reporting
- **Optimization**: Cached pattern recognition, pre-computed routing tables, intelligent fallbacks
### Performance Architecture
```yaml
Performance Zones:
green_zone: 0-150ms # Optimal performance with full intelligence
yellow_zone: 150-200ms # Target compliance with efficiency mode
red_zone: 200ms+ # Performance fallback with reduced intelligence
```
### Efficiency Calculation
```python
efficiency_score = (
time_efficiency * 0.4 + # Execution speed relative to target
complexity_efficiency * 0.3 + # Handling complexity appropriately
resource_efficiency * 0.3 # Resource utilization optimization
)
```
---
## Core Features
### 1. Intelligent Tool Routing
**Pattern-Based Tool Analysis**:
- Analyzes tool name, parameters, and context to determine optimal execution strategy
- Detects operation complexity (0.0-1.0 scale) based on file count, operation type, and requirements
- Identifies parallelization opportunities for multi-file operations
- Determines intelligence requirements for analysis and generation tasks
**Operation Categorization**:
```python
Operation Types:
- READ: File reading, search, navigation
- WRITE: File creation, editing, updates
- BUILD: Implementation, generation, creation
- TEST: Validation, testing, verification
- ANALYZE: Analysis, debugging, investigation
```
**Complexity Scoring Algorithm**:
```python
base_complexity = {
'READ': 0.0,
'WRITE': 0.2,
'BUILD': 0.4,
'TEST': 0.1,
'ANALYZE': 0.3
}
file_multiplier = (file_count - 1) * 0.1
directory_multiplier = (directory_count - 1) * 0.05
intelligence_bonus = 0.2 if requires_intelligence else 0.0
complexity_score = base_complexity + file_multiplier + directory_multiplier + intelligence_bonus
```
### 2. Context-Aware Configuration
**Session Context Integration**:
- Tracks tool usage patterns across session for optimization opportunities
- Analyzes tool chain patterns (Read→Edit, Multi-file operations, Analysis chains)
- Applies session-specific optimizations based on detected patterns
- Maintains resource state awareness for performance tuning
**Operation Chain Analysis**:
```python
Pattern Detection:
- read_edit_pattern: Read followed by Edit operations
- multi_file_pattern: Multiple file operations in sequence
- analysis_chain: Sequential analysis operations with caching opportunities
```
### 3. Real-Time Adaptation
**Learning Engine Integration**:
- Records tool usage effectiveness for routing optimization
- Adapts routing decisions based on historical performance
- Applies user-specific and project-specific routing preferences
- Continuous improvement through effectiveness measurement
**Adaptation Scopes**:
- **User Level**: Personal routing preferences and patterns
- **Project Level**: Project-specific tool effectiveness patterns
- **Session Level**: Real-time adaptation within current session
---
## MCP Server Routing Logic
### Server Capability Matching
The hook implements sophisticated capability matching to select optimal MCP servers:
```python
Server Capabilities Map:
context7: [documentation_access, framework_patterns, best_practices]
sequential: [complex_reasoning, systematic_analysis, hypothesis_testing]
magic: [ui_generation, design_systems, component_patterns]
playwright: [browser_automation, testing_frameworks, performance_testing]
morphllm: [pattern_application, fast_apply, intelligent_editing]
serena: [semantic_understanding, project_context, memory_management]
```
### Routing Decision Matrix
#### Single Server Selection
```yaml
Context7 Triggers:
- Library/framework keywords in user intent
- Documentation-related operations
- API reference needs
- Best practices queries
Sequential Triggers:
- Complexity score > 0.6
- Multi-step analysis required
- Debugging complex issues
- System architecture analysis
Magic Triggers:
- UI/component keywords
- Frontend development operations
- Design system integration
- Component generation requests
Playwright Triggers:
- Testing operations
- Browser automation needs
- Performance testing requirements
- E2E validation requests
Morphllm Triggers:
- Pattern-based editing
- Fast apply suitable operations
- Token optimization critical
- Simple to moderate complexity
Serena Triggers:
- File count > 5
- Symbol-level operations
- Project-wide analysis
- Memory operations
```
#### Multi-Server Coordination
```python
Coordination Strategies:
- single_server: One MCP server handles the operation
- collaborative: Multiple servers work together
- sequential_handoff: Primary server Secondary server
- parallel_coordination: Servers work on different aspects simultaneously
```
### Server Selection Algorithm
```python
def select_mcp_servers(context, requirements):
servers = []
# Primary capability matching
for server, capabilities in server_capabilities.items():
if any(cap in requirements['capabilities_needed'] for cap in capabilities):
servers.append(server)
# Context-specific routing
if context['complexity_score'] > 0.6:
servers.append('sequential')
if context['file_count'] > 5:
servers.append('serena')
# User intent analysis
intent_lower = context.get('user_intent', '').lower()
if any(word in intent_lower for word in ['component', 'ui', 'frontend']):
servers.append('magic')
# Deduplication and prioritization
return list(dict.fromkeys(servers)) # Preserve order, remove duplicates
```
---
## Fallback Strategies
### Hierarchy of Fallback Options
#### Level 1: Preferred MCP Server Unavailable
```python
Strategy: Alternative Server Selection
- Sequential unavailable Use Morphllm for analysis
- Serena unavailable Use native tools with manual coordination
- Magic unavailable Generate basic components with Context7 patterns
```
#### Level 2: Multiple MCP Servers Unavailable
```python
Strategy: Capability Degradation
- Disable enhanced intelligence features
- Fall back to native Claude Code tools
- Maintain basic functionality with warnings
- Preserve user context and intent
```
#### Level 3: All MCP Servers Unavailable
```python
Strategy: Native Tool Execution
- Execute original tool request without enhancement
- Log degradation for performance analysis
- Provide clear feedback about reduced capabilities
- Maintain operational continuity
```
### Fallback Configuration Generation
```python
def create_fallback_tool_config(tool_request, error):
return {
'tool_name': tool_request.get('tool_name'),
'enhanced_mode': False,
'fallback_mode': True,
'error': error,
'mcp_integration': {
'enabled': False,
'servers': [],
'coordination_strategy': 'none'
},
'performance_optimization': {
'parallel_execution': False,
'caching_enabled': False,
'optimizations': []
},
'performance_metrics': {
'routing_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
```
### Error Recovery Mechanisms
- **Graceful Degradation**: Reduce capability rather than failing completely
- **Context Preservation**: Maintain user intent and session context during fallback
- **Performance Continuity**: Ensure operations continue with acceptable performance
- **Learning Integration**: Record fallback events for routing improvement
---
## Configuration
### Hook-Specific Configuration (superclaude-config.json)
```json
{
"pre_tool_use": {
"enabled": true,
"description": "ORCHESTRATOR + MCP routing intelligence for optimal tool selection",
"performance_target_ms": 200,
"features": [
"intelligent_tool_routing",
"mcp_server_selection",
"performance_optimization",
"context_aware_configuration",
"fallback_strategy_implementation",
"real_time_adaptation"
],
"configuration": {
"mcp_intelligence": true,
"pattern_detection": true,
"learning_adaptations": true,
"performance_optimization": true,
"fallback_strategies": true
},
"integration": {
"mcp_servers": ["context7", "sequential", "magic", "playwright", "morphllm", "serena"],
"quality_gates": true,
"learning_engine": true
}
}
}
```
### MCP Server Integration Configuration
```json
{
"mcp_server_integration": {
"enabled": true,
"servers": {
"context7": {
"description": "Library documentation and framework patterns",
"capabilities": ["documentation_access", "framework_patterns", "best_practices"],
"performance_profile": "standard"
},
"sequential": {
"description": "Multi-step reasoning and complex analysis",
"capabilities": ["complex_reasoning", "systematic_analysis", "hypothesis_testing"],
"performance_profile": "intensive"
},
"serena": {
"description": "Semantic analysis and memory management",
"capabilities": ["semantic_understanding", "project_context", "memory_management"],
"performance_profile": "standard"
}
},
"coordination": {
"intelligent_routing": true,
"fallback_strategies": true,
"performance_optimization": true,
"learning_adaptation": true
}
}
}
```
### Runtime Configuration Loading
```python
class PreToolUseHook:
def __init__(self):
# Load hook-specific configuration
self.hook_config = config_loader.get_hook_config('pre_tool_use')
# Load orchestrator configuration (YAML or fallback)
try:
self.orchestrator_config = config_loader.load_config('orchestrator')
except FileNotFoundError:
self.orchestrator_config = self.hook_config.get('configuration', {})
# Performance targets from configuration
self.performance_target_ms = config_loader.get_hook_config(
'pre_tool_use', 'performance_target_ms', 200
)
```
---
## Learning Integration
### Learning Data Collection
**Operation Pattern Recording**:
```python
def record_tool_learning(context, tool_config):
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
{
'tool_name': context['tool_name'],
'mcp_servers_used': tool_config.get('mcp_integration', {}).get('servers', []),
'execution_strategy': tool_config.get('execution_metadata', {}).get('intelligence_level'),
'optimizations_applied': tool_config.get('performance_optimization', {}).get('optimizations', [])
},
effectiveness_score=0.8, # Updated after execution
confidence_score=0.7,
metadata={'hook': 'pre_tool_use', 'version': '1.0'}
)
```
### Adaptive Routing Enhancement
**Learning-Based Routing Improvements**:
- **User Preferences**: Learn individual user's tool and server preferences
- **Project Patterns**: Adapt to project-specific optimal routing strategies
- **Performance Optimization**: Route based on historical performance data
- **Error Pattern Recognition**: Avoid routing strategies that historically failed
### Learning Scope Hierarchy
```python
Learning Scopes:
1. Session Level: Real-time adaptation within current session
2. User Level: Personal routing preferences across sessions
3. Project Level: Project-specific optimization patterns
4. Global Level: Framework-wide routing intelligence
```
### Effectiveness Measurement
```python
Effectiveness Metrics:
- execution_time: Actual vs estimated execution time
- success_rate: Successful operation completion rate
- quality_score: Output quality assessment
- user_satisfaction: Implicit feedback from continued usage
- resource_efficiency: Resource utilization optimization
```
---
## Performance Optimization
### Caching Strategies
#### Pattern Recognition Cache
```python
Cache Structure:
- Key: hash(user_intent + tool_name + context_hash)
- Value: routing_decision + confidence_score
- TTL: 60 minutes for pattern stability
- Size: 1000 entries with LRU eviction
```
#### MCP Server Response Cache
```python
Cache Strategy:
- Documentation lookups: 30 minutes TTL
- Analysis results: Session-scoped cache
- Pattern templates: 1 hour TTL
- Server availability: 5 minutes TTL
```
#### Performance Optimizations
```python
Optimization Techniques:
1. Pre-computed Routing Tables: Common patterns pre-calculated
2. Lazy Loading: Load components only when needed
3. Parallel Analysis: Run pattern detection and MCP planning concurrently
4. Result Reuse: Cache and reuse analysis results within session
5. Intelligent Fallbacks: Fast fallback paths for common failure modes
```
### Resource Management
```python
Resource Optimization:
- Memory: Bounded caches with intelligent eviction
- CPU: Parallel processing for independent operations
- I/O: Batch operations where possible
- Network: Connection pooling for MCP servers
```
### Execution Time Optimization
```python
Time Budget Allocation:
- Pattern Detection: 50ms (25%)
- MCP Server Selection: 30ms (15%)
- Configuration Generation: 40ms (20%)
- Learning Integration: 20ms (10%)
- Buffer/Safety Margin: 60ms (30%)
Total Target: 200ms
```
---
## Integration with ORCHESTRATOR.md
### Pattern Matching Implementation
The hook implements the ORCHESTRATOR.md pattern matching system:
```python
# Quick Pattern Matching from ORCHESTRATOR.md
pattern_mappings = {
'ui_component': {
'keywords': ['component', 'design', 'frontend', 'UI'],
'mcp_server': 'magic',
'persona': 'frontend'
},
'deep_analysis': {
'keywords': ['architecture', 'complex', 'system-wide'],
'mcp_server': 'sequential',
'thinking_mode': 'think_hard'
},
'large_scope': {
'keywords': ['many files', 'entire codebase'],
'mcp_server': 'serena',
'delegation': True
},
'symbol_operations': {
'keywords': ['rename', 'refactor', 'extract', 'move'],
'mcp_server': 'serena',
'precision': 'lsp'
},
'pattern_edits': {
'keywords': ['framework', 'style', 'cleanup'],
'mcp_server': 'morphllm',
'optimization': 'token'
}
}
```
### Resource Zone Implementation
```python
def get_resource_zone(resource_usage):
if resource_usage <= 0.75:
return 'green_zone' # Full capabilities
elif resource_usage <= 0.85:
return 'yellow_zone' # Efficiency mode
else:
return 'red_zone' # Essential operations only
```
### Tool Selection Guide Integration
The hook implements the ORCHESTRATOR.md tool selection guide:
```python
def apply_orchestrator_routing(context, user_intent):
"""Apply ORCHESTRATOR.md routing patterns"""
# MCP Server selection based on ORCHESTRATOR.md
if any(word in user_intent.lower() for word in ['library', 'docs', 'framework']):
return ['context7']
if any(word in user_intent.lower() for word in ['complex', 'analysis', 'debug']):
return ['sequential']
if any(word in user_intent.lower() for word in ['component', 'ui', 'design']):
return ['magic']
if context.get('file_count', 1) > 5:
return ['serena']
# Default to native tools for simple operations
return []
```
### Quality Gate Integration
```python
Quality Gates Applied:
- Step 1 (Syntax Validation): Tool parameter validation
- Step 2 (Type Analysis): Context type checking and compatibility
- Performance Monitoring: Real-time execution time tracking
- Fallback Validation: Ensure fallback strategies maintain functionality
```
### Auto-Activation Rules Implementation
The hook implements ORCHESTRATOR.md auto-activation rules:
```python
def apply_auto_activation_rules(context):
"""Apply ORCHESTRATOR.md auto-activation patterns"""
activations = []
# Enable Sequential for complex operations
if (context.get('complexity_score', 0) > 0.6 or
context.get('requires_intelligence')):
activations.append('sequential')
# Enable Serena for multi-file operations
if (context.get('file_count', 1) > 5 or
any(op in context.get('operation_sequence', []) for op in ['rename', 'extract'])):
activations.append('serena')
# Enable delegation for large operations
if (context.get('file_count', 1) > 3 or
context.get('directory_count', 1) > 2):
activations.append('delegation')
return activations
```
---
## Technical Implementation Details
### Core Architecture Components
#### 1. Framework Logic Integration
```python
from framework_logic import FrameworkLogic, OperationContext, OperationType, RiskLevel
# Provides SuperClaude framework intelligence
self.framework_logic = FrameworkLogic()
```
#### 2. Pattern Detection Engine
```python
from pattern_detection import PatternDetector, PatternMatch
# Analyzes patterns for routing decisions
detection_result = self.pattern_detector.detect_patterns(
user_intent, context, operation_data
)
```
#### 3. MCP Intelligence Coordination
```python
from mcp_intelligence import MCPIntelligence, MCPActivationPlan
# Creates optimal MCP server activation plans
mcp_plan = self.mcp_intelligence.create_activation_plan(
user_intent, context, operation_data
)
```
#### 4. Learning Engine Integration
```python
from learning_engine import LearningEngine
# Applies learned adaptations and records new patterns
enhanced_routing = self.learning_engine.apply_adaptations(context, base_routing)
```
### Error Handling Architecture
```python
Exception Handling Strategy:
1. Catch all exceptions during routing analysis
2. Log error with context for debugging
3. Generate fallback configuration
4. Preserve user intent and operation continuity
5. Record error for learning and improvement
```
### Performance Monitoring
```python
Performance Tracking:
- Initialization time measurement
- Per-operation execution time tracking
- Target compliance validation (<200ms)
- Efficiency score calculation
- Resource utilization monitoring
```
---
## Usage Examples
### Example 1: Simple File Read
```json
Input Request:
{
"tool_name": "Read",
"parameters": {"file_path": "/src/components/Button.tsx"},
"user_intent": "read button component"
}
Hook Enhancement:
{
"tool_name": "Read",
"enhanced_mode": false,
"mcp_integration": {"enabled": false},
"execution_metadata": {
"complexity_score": 0.0,
"intelligence_level": "low"
}
}
```
### Example 2: Complex Multi-File Analysis
```json
Input Request:
{
"tool_name": "Analyze",
"parameters": {"directory": "/src/**/*.ts"},
"user_intent": "analyze typescript architecture patterns"
}
Hook Enhancement:
{
"tool_name": "Analyze",
"enhanced_mode": true,
"mcp_integration": {
"enabled": true,
"servers": ["sequential", "serena"],
"coordination_strategy": "collaborative"
},
"performance_optimization": {
"parallel_execution": true,
"optimizations": ["parallel_file_processing"]
},
"execution_metadata": {
"complexity_score": 0.75,
"intelligence_level": "high"
}
}
```
### Example 3: UI Component Generation
```json
Input Request:
{
"tool_name": "Generate",
"parameters": {"component_type": "form"},
"user_intent": "create login form component"
}
Hook Enhancement:
{
"tool_name": "Generate",
"enhanced_mode": true,
"mcp_integration": {
"enabled": true,
"servers": ["magic", "context7"],
"coordination_strategy": "sequential_handoff"
},
"build_operations": {
"framework_integration": true,
"component_generation": true,
"quality_validation": true
}
}
```
---
## Monitoring and Debugging
### Performance Metrics
```python
Tracked Metrics:
- routing_time_ms: Time spent in routing analysis
- target_met: Boolean indicating <200ms compliance
- efficiency_score: Overall routing effectiveness (0.0-1.0)
- mcp_servers_activated: Count of MCP servers coordinated
- optimizations_applied: List of performance optimizations
- fallback_triggered: Boolean indicating fallback usage
```
### Logging Integration
```python
Log Events:
- Hook start: Tool name and parameters
- Routing decisions: MCP server selection rationale
- Execution strategy: Chosen execution approach
- Performance metrics: Timing and efficiency data
- Error events: Failures and fallback triggers
- Hook completion: Success/failure status
```
### Debug Information
```python
Debug Output:
- Pattern detection results with confidence scores
- MCP server capability matching analysis
- Optimization opportunity identification
- Learning adaptation application
- Configuration generation process
- Performance target validation
```
---
## Related Documentation
- **ORCHESTRATOR.md**: Core routing patterns and coordination strategies
- **Framework Integration**: Quality gates and mode coordination
- **MCP Server Documentation**: Individual server capabilities and integration
- **Learning Engine**: Adaptive intelligence and pattern recognition
- **Performance Monitoring**: System-wide performance tracking and optimization
---
*The pre_tool_use hook serves as the intelligent routing engine for the SuperClaude framework, ensuring optimal tool selection, MCP server coordination, and performance optimization for every operation within Claude Code.*

View File

@@ -0,0 +1,540 @@
# Session Start Hook Technical Documentation
## Purpose
The session_start hook is the foundational intelligence layer of the SuperClaude-Lite framework that initializes every Claude Code session with intelligent, context-aware configuration. This hook transforms basic Claude Code sessions into SuperClaude-powered experiences by implementing comprehensive project analysis, intelligent mode detection, and optimized MCP server routing.
The hook serves as the entry point for SuperClaude's session lifecycle pattern, establishing the groundwork for all subsequent intelligent behaviors including adaptive learning, performance optimization, and context preservation across sessions.
## Execution Context
The session_start hook executes at the very beginning of every Claude Code session, before any user interactions or tool executions occur. It sits at the critical initialization phase where session context is established and intelligence systems are activated.
**Execution Flow Position:**
```
Claude Code Session Start → session_start Hook → Enhanced Session Configuration → User Interaction
```
**Lifecycle Integration:**
- **Trigger**: Every new Claude Code session initialization
- **Duration**: Target <50ms execution time
- **Dependencies**: Session context data from Claude Code
- **Output**: Enhanced session configuration with SuperClaude intelligence
- **Next Phase**: Active session with intelligent routing and optimization
## Performance Target
**Target: <50ms execution time**
This aggressive performance target is critical for maintaining seamless user experience during session initialization. The hook must complete its comprehensive analysis and configuration within this window to avoid perceptible delays.
**Why 50ms Matters:**
- **User Experience**: Sub-perceptible delay maintains natural interaction flow
- **Session Efficiency**: Fast bootstrap enables immediate intelligent behavior
- **Resource Optimization**: Efficient initialization preserves compute budget for actual work
- **Learning System**: Quick analysis allows for real-time adaptation without latency
**Performance Monitoring:**
- Real-time execution time tracking with detailed metrics
- Efficiency score calculation based on target achievement
- Performance degradation alerts and optimization recommendations
- Historical performance analysis for continuous improvement
## Core Features
### 1. Smart Project Context Loading with Framework Exclusion
**Implementation**: The hook performs intelligent project structure analysis while implementing selective content loading to optimize performance and focus.
**Technical Details:**
- **Rapid Project Scanning**: Limited file enumeration (max 100 files) for performance
- **Technology Stack Detection**: Identifies Node.js, Python, Rust, Go projects via manifest files
- **Framework Recognition**: Detects React, Vue, Angular, Express through dependency analysis
- **Production Environment Detection**: Identifies deployment configurations and CI/CD setup
- **Test Infrastructure Analysis**: Locates test directories and testing frameworks
- **Framework Exclusion Strategy**: Completely excludes SuperClaude framework directories from analysis to prevent recursive processing
**Code Implementation:**
```python
def _analyze_project_structure(self, project_path: Path) -> dict:
# Quick enumeration with performance limit
files = list(project_path.rglob('*'))[:100]
# Technology detection via manifest files
if (project_path / 'package.json').exists():
analysis['project_type'] = 'nodejs'
# Framework detection through dependency analysis
with open(package_json) as f:
deps = {**pkg_data.get('dependencies', {}), **pkg_data.get('devDependencies', {})}
if 'react' in deps: analysis['framework_detected'] = 'react'
```
### 2. Automatic Mode Detection and Activation
**Implementation**: Uses pattern recognition algorithms to detect user intent and automatically activate appropriate SuperClaude behavioral modes.
**Detection Algorithms:**
- **Intent Analysis**: Natural language processing of user input for operation type detection
- **Complexity Scoring**: Multi-factor analysis including file count, operation type, and complexity indicators
- **Brainstorming Detection**: Identifies uncertainty indicators ("not sure", "maybe", "thinking about")
- **Task Management Triggers**: Detects multi-step operations and delegation opportunities
- **Token Efficiency Needs**: Identifies resource constraints and optimization requirements
**Mode Activation Logic:**
```python
def _activate_intelligent_modes(self, context: dict, recommendations: dict) -> list:
activated_modes = []
# Brainstorming mode activation
if context.get('brainstorming_likely', False):
activated_modes.append({'name': 'brainstorming', 'trigger': 'user input'})
# Task management mode activation
if 'task_management' in recommendations.get('recommended_modes', []):
activated_modes.append({'name': 'task_management', 'trigger': 'pattern detection'})
```
### 3. MCP Server Intelligence Routing
**Implementation**: Intelligent analysis of project context and user intent to determine optimal MCP server activation strategy.
**Routing Intelligence:**
- **Context-Aware Selection**: Matches MCP server capabilities to detected project needs
- **Performance Optimization**: Considers server resource profiles and coordination costs
- **Fallback Strategy Planning**: Establishes backup activation patterns for server failures
- **Coordination Strategy**: Determines optimal server interaction patterns (parallel vs sequential)
**Server Selection Matrix:**
- **Context7**: Activated for external library dependencies and framework integration needs
- **Sequential**: Enabled for complex analysis requirements and multi-step reasoning
- **Magic**: Triggered by UI component requests and design system needs
- **Playwright**: Activated for testing requirements and browser automation
- **Morphllm**: Enabled for pattern-based editing and token optimization scenarios
- **Serena**: Activated for semantic analysis and project memory management
### 4. User Preference Adaptation
**Implementation**: Applies machine learning-based adaptations from previous sessions to personalize the session configuration.
**Learning Integration:**
- **Historical Pattern Analysis**: Analyzes successful configurations from previous sessions
- **User Expertise Detection**: Infers user skill level from interaction patterns and terminology
- **Preference Extraction**: Identifies consistent user choices and optimization preferences
- **Adaptive Configuration**: Applies learned preferences to current session setup
**Learning Engine Integration:**
```python
def _apply_learning_adaptations(self, context: dict, detection_result: dict) -> dict:
enhanced_recommendations = self.learning_engine.apply_adaptations(
context, base_recommendations
)
return enhanced_recommendations
```
### 5. Performance-Optimized Initialization
**Implementation**: Comprehensive performance optimization strategy that balances intelligence with speed.
**Optimization Techniques:**
- **Lazy Loading**: Defers non-critical analysis until actual usage
- **Intelligent Caching**: Reuses previous analysis results when project context unchanged
- **Parallel Processing**: Concurrent execution of independent analysis components
- **Resource-Aware Configuration**: Adapts initialization depth based on available resources
- **Progressive Enhancement**: Enables additional features as resource budget allows
## Implementation Details
### Architecture Pattern
The session_start hook implements a layered architecture with clear separation of concerns:
**Layer 1: Context Extraction**
```python
def _extract_session_context(self, session_data: dict) -> dict:
# Enriches basic session data with project analysis and user intent detection
context = {
'session_id': session_data.get('session_id', 'unknown'),
'project_path': session_data.get('project_path', ''),
'user_input': session_data.get('user_input', ''),
# ... additional context enrichment
}
```
**Layer 2: Intelligence Analysis**
```python
def _detect_session_patterns(self, context: dict) -> dict:
# Pattern detection using SuperClaude's pattern recognition algorithms
detection_result = self.pattern_detector.detect_patterns(
context.get('user_input', ''),
context,
operation_data
)
```
**Layer 3: Configuration Generation**
```python
def _generate_session_config(self, context: dict, recommendations: dict,
mcp_plan: dict, compression_config: dict) -> dict:
# Comprehensive session configuration assembly
return comprehensive_session_configuration
```
### Error Handling Strategy
**Graceful Degradation**: The hook implements comprehensive error handling that ensures session functionality even when intelligence systems fail.
```python
def initialize_session(self, session_context: dict) -> dict:
try:
# Full intelligence initialization
return enhanced_session_config
except Exception as e:
# Graceful fallback
return self._create_fallback_session_config(session_context, str(e))
```
**Fallback Configuration:**
- Disables SuperClaude intelligence features
- Maintains basic Claude Code functionality
- Provides error context for debugging
- Enables recovery for subsequent sessions
### Performance Measurement
**Real-Time Metrics:**
```python
# Performance tracking integration
execution_time = (time.time() - start_time) * 1000
session_config['performance_metrics'] = {
'initialization_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'efficiency_score': self._calculate_initialization_efficiency(execution_time)
}
```
## Configuration
### Hook-Specific Configuration (superclaude-config.json)
```json
{
"hook_configurations": {
"session_start": {
"enabled": true,
"description": "SESSION_LIFECYCLE + FLAGS logic with intelligent bootstrap",
"performance_target_ms": 50,
"features": [
"smart_project_context_loading",
"automatic_mode_detection",
"mcp_server_intelligence_routing",
"user_preference_adaptation",
"performance_optimized_initialization"
],
"configuration": {
"auto_project_detection": true,
"framework_exclusion_enabled": true,
"intelligence_activation": true,
"learning_integration": true,
"performance_monitoring": true
},
"error_handling": {
"graceful_fallback": true,
"preserve_user_context": true,
"error_learning": true
}
}
}
}
```
### Configuration Loading Strategy
**Primary Configuration Source**: superclaude-config.json hook_configurations.session_start
**Fallback Strategy**: YAML configuration files in config/ directory
**Runtime Adaptation**: Learning engine modifications applied during execution
```python
# Configuration loading with fallback
self.hook_config = config_loader.get_hook_config('session_start')
try:
self.session_config = config_loader.load_config('session')
except FileNotFoundError:
self.session_config = self.hook_config.get('configuration', {})
```
## Pattern Loading Strategy
### Minimal Pattern Bootstrap
The hook implements a strategic pattern loading approach that loads only essential patterns during initialization to meet the 50ms performance target.
**Pattern Loading Phases:**
**Phase 1: Critical Patterns (Target: 3-5KB)**
- Core operation type detection patterns
- Basic project structure recognition
- Essential mode activation triggers
- Primary MCP server routing logic
**Phase 2: Context-Specific Patterns (Lazy Loaded)**
- Framework-specific intelligence patterns
- Advanced optimization strategies
- Historical learning adaptations
- Complex coordination algorithms
**Implementation Strategy:**
```python
def _detect_session_patterns(self, context: dict) -> dict:
# Load minimal patterns for fast detection
detection_result = self.pattern_detector.detect_patterns(
context.get('user_input', ''),
context,
operation_data # Contains only essential pattern data
)
```
**Pattern Optimization Techniques:**
- **Compressed Pattern Storage**: Use efficient data structures for pattern representation
- **Selective Pattern Loading**: Load only patterns relevant to detected project type
- **Cached Pattern Results**: Reuse pattern analysis for similar contexts
- **Progressive Pattern Enhancement**: Enable additional patterns as session progresses
## Shared Modules Used
### framework_logic.py
**Purpose**: Implements core SuperClaude decision-making algorithms from RULES.md, PRINCIPLES.md, and ORCHESTRATOR.md.
**Key Components Used:**
- `OperationType` enum for operation classification
- `OperationContext` dataclass for structured context management
- `RiskLevel` assessment for quality gate determination
- Quality gate configuration based on operation context
**Usage in session_start:**
```python
from framework_logic import FrameworkLogic, OperationContext, OperationType, RiskLevel
# Quality gate configuration
operation_context = OperationContext(
operation_type=context.get('operation_type', OperationType.READ),
file_count=context.get('file_count_estimate', 1),
complexity_score=context.get('complexity_score', 0.0),
risk_level=RiskLevel.LOW
)
return self.framework_logic.get_quality_gates(operation_context)
```
### pattern_detection.py
**Purpose**: Provides intelligent pattern recognition for session configuration.
**Key Components Used:**
- Pattern matching algorithms for user intent detection
- Mode recommendation logic based on detected patterns
- MCP server selection recommendations
- Confidence scoring for pattern matches
### mcp_intelligence.py
**Purpose**: Implements intelligent MCP server selection and coordination.
**Key Components Used:**
- MCP activation plan generation
- Server coordination strategy determination
- Performance cost estimation
- Fallback strategy planning
### compression_engine.py
**Purpose**: Provides intelligent compression strategy selection for token efficiency.
**Key Components Used:**
- Compression level determination based on context
- Quality impact estimation
- Compression savings calculation
- Selective compression configuration
### learning_engine.py
**Purpose**: Enables adaptive learning and preference application.
**Key Components Used:**
- Learning event recording for session patterns
- Adaptation application from previous sessions
- Effectiveness measurement and feedback loops
- Pattern recognition and improvement suggestions
### yaml_loader.py
**Purpose**: Provides configuration loading and management capabilities.
**Key Components Used:**
- Hook-specific configuration loading
- YAML configuration file management
- Fallback configuration strategies
- Hot-reload configuration support
### logger.py
**Purpose**: Provides comprehensive logging and metrics collection.
**Key Components Used:**
- Hook execution logging with timing
- Decision logging for audit trails
- Error logging with context preservation
- Performance metrics collection
## Error Handling
### Comprehensive Error Recovery Strategy
**Error Categories and Responses:**
**1. Project Analysis Failures**
```python
def _analyze_project_structure(self, project_path: Path) -> dict:
try:
# Full project analysis
return comprehensive_analysis
except Exception:
# Return partial analysis with safe defaults
return basic_analysis_with_defaults
```
**2. Pattern Detection Failures**
- Fallback to basic mode configuration
- Use cached patterns from previous sessions
- Apply conservative intelligence settings
- Maintain core functionality without advanced features
**3. MCP Server Planning Failures**
- Disable problematic servers
- Use fallback server combinations
- Apply conservative coordination strategies
- Maintain basic tool functionality
**4. Learning System Failures**
- Disable adaptive features temporarily
- Use static configuration defaults
- Log errors for future analysis
- Preserve session functionality
### Error Learning Integration
**Error Pattern Recognition:**
```python
def _record_session_learning(self, context: dict, session_config: dict):
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
session_config,
success_score,
confidence_score,
metadata
)
```
**Recovery Optimization:**
- Errors are analyzed for pattern recognition
- Successful recovery strategies are learned and applied
- Error frequency analysis drives system improvements
- Proactive error prevention based on historical patterns
## Session Context Enhancement
### Context Enrichment Process
The session_start hook transforms basic Claude Code session data into rich, intelligent context that enables advanced SuperClaude behaviors throughout the session.
**Input Context (Basic):**
- session_id: Basic session identifier
- project_path: File system path
- user_input: Initial user request
- conversation_length: Basic metrics
**Enhanced Context (SuperClaude):**
- Project analysis with technology stack detection
- User intent analysis with complexity scoring
- Mode activation recommendations
- MCP server routing plans
- Performance optimization settings
- Learning adaptations from previous sessions
### Context Preservation Strategy
**Session Configuration Generation:**
```python
def _generate_session_config(self, context: dict, recommendations: dict,
mcp_plan: dict, compression_config: dict) -> dict:
return {
'session_id': context['session_id'],
'superclaude_enabled': True,
'active_modes': recommendations.get('recommended_modes', []),
'mcp_servers': mcp_plan,
'compression': compression_config,
'performance': performance_config,
'learning': learning_config,
'context': context_preservation,
'quality_gates': quality_gate_config
}
```
**Context Utilization Throughout Session:**
- **MCP Server Routing**: Uses project analysis for intelligent server selection
- **Mode Activation**: Applies detected patterns for behavioral mode triggers
- **Performance Optimization**: Uses complexity analysis for resource allocation
- **Quality Gates**: Applies context-appropriate validation levels
- **Learning Integration**: Captures session patterns for future improvement
### Long-Term Context Evolution
**Cross-Session Learning:**
- Session patterns are analyzed and stored for future sessions
- User preferences are extracted and applied automatically
- Project-specific optimizations are learned and reused
- Error patterns are identified and proactively avoided
**Context Continuity:**
- Enhanced context from session_start provides foundation for entire session
- Context elements influence all subsequent hook behaviors
- Learning from current session feeds into future session_start executions
- Continuous improvement cycle maintains and enhances context quality over time
## Integration Points
### SuperClaude Framework Integration
**SESSION_LIFECYCLE.md Compliance:**
- Implements initialization phase of session lifecycle pattern
- Provides foundation for checkpoint and persistence systems
- Enables context continuity across session boundaries
**FLAGS.md Logic Implementation:**
- Automatically detects and applies appropriate flag combinations
- Implements flag precedence and conflict resolution
- Provides intelligent default flag selection based on context
**ORCHESTRATOR.md Pattern Integration:**
- Implements intelligent routing patterns for MCP server selection
- Applies resource management strategies during initialization
- Establishes foundation for quality gate enforcement
### Hook Ecosystem Coordination
**Downstream Hook Preparation:**
- pre_tool_use: Receives enhanced context for intelligent tool routing
- post_tool_use: Gets quality gate configuration for validation
- pre_compact: Receives compression configuration for optimization
- stop: Gets learning configuration for session analytics
**Cross-Hook Data Flow:**
```
session_start → Enhanced Context → All Subsequent Hooks
Learning Engine ← Session Analytics ← stop Hook
```
This comprehensive technical documentation provides a complete understanding of how the session_start hook operates as the foundational intelligence layer of the SuperClaude-Lite framework, transforming basic Claude Code sessions into intelligent, adaptive, and optimized experiences.

View File

@@ -0,0 +1,383 @@
# Stop Hook Documentation
## Overview
The Stop Hook is a comprehensive session analytics and persistence engine that runs at the end of each Claude Code session. It implements the `/sc:save` logic with advanced performance tracking, providing detailed analytics about session effectiveness, learning consolidation, and intelligent session data storage.
## Purpose
The Stop Hook serves as the primary session analytics and persistence system for SuperClaude Framework, delivering:
- **Session Analytics**: Comprehensive performance and effectiveness metrics
- **Learning Consolidation**: Consolidation of learning events from the entire session
- **Session Persistence**: Intelligent session data storage with compression
- **Performance Optimization**: Recommendations for future sessions based on analytics
- **Quality Assessment**: Session success evaluation and improvement suggestions
- **Framework Effectiveness**: Measurement of SuperClaude framework impact
## Execution Context
### When This Hook Runs
- **Trigger**: Session termination in Claude Code
- **Context**: End of user session, before final cleanup
- **Data Available**: Complete session history, operations log, error records
- **Timing**: After all user operations completed, before session cleanup
### Hook Integration Points
- **Session Lifecycle**: Final stage of session processing
- **MCP Intelligence**: Coordinates with MCP servers for enhanced analytics
- **Learning Engine**: Consolidates learning events and adaptations
- **Framework Logic**: Applies SuperClaude framework patterns for analysis
## Performance Target
**Primary Target**: <200ms execution time for complete session analytics
### Performance Benchmarks
- **Initialization**: <50ms for component loading
- **Analytics Generation**: <100ms for comprehensive analysis
- **Session Persistence**: <30ms for data storage
- **Learning Consolidation**: <20ms for learning events processing
- **Total Processing**: <200ms end-to-end execution
### Performance Monitoring
```python
execution_time = (time.time() - start_time) * 1000
target_met = execution_time < self.performance_target_ms
```
## Session Analytics
### Comprehensive Performance Metrics
#### Overall Score Calculation
```python
overall_score = (
productivity * 0.4 +
effectiveness * 0.4 +
(1.0 - error_rate) * 0.2
)
```
#### Performance Categories
- **Productivity Score**: Operations per minute, completion rates
- **Quality Score**: Error rates, operation success rates
- **Intelligence Utilization**: MCP server usage, SuperClaude effectiveness
- **Resource Efficiency**: Memory, CPU, token usage optimization
- **User Satisfaction Estimate**: Derived from session patterns and outcomes
#### Analytics Components
```yaml
performance_metrics:
overall_score: 0.85 # Combined performance indicator
productivity_score: 0.78 # Operations efficiency
quality_score: 0.92 # Error-free execution rate
efficiency_score: 0.84 # Resource utilization
satisfaction_estimate: 0.87 # Estimated user satisfaction
```
### Bottleneck Identification
- **High Error Rate**: >20% operation failure rate
- **Low Productivity**: <50% productivity score
- **Underutilized Intelligence**: <30% MCP usage with SuperClaude enabled
- **Resource Constraints**: Memory/CPU/token usage optimization opportunities
### Optimization Opportunities Detection
- **Tool Usage Optimization**: >10 unique tools suggest coordination improvement
- **MCP Server Coordination**: <2 servers with >5 operations suggest better orchestration
- **Workflow Enhancement**: Pattern analysis for efficiency improvements
## Learning Consolidation
### Learning Events Processing
The hook consolidates all learning events generated during the session:
```python
def _consolidate_learning_events(self, context: dict) -> dict:
# Generate learning insights from session
insights = self.learning_engine.generate_learning_insights()
# Session-specific learning metrics
session_learning = {
'session_effectiveness': context.get('superclaude_effectiveness', 0),
'performance_score': context.get('session_productivity', 0),
'mcp_coordination_effectiveness': min(context.get('mcp_usage_ratio', 0) * 2, 1.0),
'error_recovery_success': 1.0 - context.get('error_rate', 0)
}
```
### Learning Categories
- **Effectiveness Feedback**: Session performance patterns
- **User Preferences**: Interaction and usage patterns
- **Technical Patterns**: Tool usage and coordination effectiveness
- **Error Recovery**: Success patterns for error handling
### Adaptation Creation
- **Session-Level Adaptations**: Immediate session pattern learning
- **User-Level Adaptations**: Long-term preference learning
- **Technical Adaptations**: Tool and workflow optimization patterns
## Session Persistence
### Intelligent Storage Strategy
#### Data Classification
- **Session Analytics**: Complete performance and effectiveness data
- **Learning Events**: Consolidated learning insights and adaptations
- **Context Data**: Session operational context and metadata
- **Recommendations**: Generated suggestions for future sessions
#### Compression Logic
```python
# Apply compression for large session data
if len(analytics_data) > 10000: # 10KB threshold
compression_result = self.compression_engine.compress_content(
analytics_data,
context,
{'content_type': 'session_data'}
)
```
#### Storage Optimization
- **Session Cleanup**: Maintains 50 most recent sessions
- **Automatic Pruning**: Removes sessions older than retention policy
- **Compression**: Applied to sessions >10KB for storage efficiency
### Persistence Results
```yaml
persistence_result:
persistence_enabled: true
session_data_saved: true
analytics_saved: true
learning_data_saved: true
compression_applied: true
compression_ratio: 0.65
storage_optimized: true
```
## Recommendations Generation
### Performance Improvements
Generated when overall score <70%:
- Focus on reducing error rate through validation
- Enable more SuperClaude intelligence features
- Optimize tool selection and usage patterns
### SuperClaude Optimizations
Based on framework effectiveness analysis:
- **Low Effectiveness (<60%)**: Enable more MCP servers, use delegation features
- **Disabled Framework**: Recommend SuperClaude enablement for productivity
- **Underutilization**: Activate compression and intelligence features
### Learning Suggestions
- **Low Learning Events (<3)**: Engage with more complex operations
- **Pattern Recognition**: Suggestions based on successful session patterns
- **Workflow Enhancement**: Recommendations for process improvements
### Workflow Enhancements
Based on error patterns and efficiency analysis:
- **High Error Rate (>10%)**: Use validation hooks, enable pre-tool intelligence
- **Resource Optimization**: Memory, CPU, token usage improvements
- **Coordination Enhancement**: Better MCP server and tool coordination
## Configuration
### Hook Configuration
Loaded from `superclaude-config.json` hook configuration:
```yaml
stop_hook:
performance_target_ms: 200
analytics:
comprehensive_metrics: true
learning_consolidation: true
recommendation_generation: true
persistence:
enabled: true
compression_threshold_bytes: 10000
session_retention_count: 50
learning:
session_adaptations: true
user_preference_tracking: true
technical_pattern_learning: true
```
### Session Configuration
Falls back to session.yaml configuration when available:
```yaml
session:
analytics_enabled: true
learning_consolidation: true
performance_tracking: true
recommendation_generation: true
persistence_optimization: true
```
## Integration with /sc:save
### Command Implementation
The Stop Hook directly implements the `/sc:save` command logic:
#### Core /sc:save Features
- **Session Analytics**: Complete session performance analysis
- **Learning Consolidation**: All learning events processed and stored
- **Intelligent Persistence**: Session data saved with optimization
- **Recommendation Generation**: Actionable suggestions for improvement
- **Performance Tracking**: <200ms execution time monitoring
#### /sc:save Workflow Integration
```python
def process_session_stop(self, session_data: dict) -> dict:
# 1. Extract session context
context = self._extract_session_context(session_data)
# 2. Analyze session performance (/sc:save analytics)
performance_analysis = self._analyze_session_performance(context)
# 3. Consolidate learning events (/sc:save learning)
learning_consolidation = self._consolidate_learning_events(context)
# 4. Generate session analytics (/sc:save metrics)
session_analytics = self._generate_session_analytics(...)
# 5. Perform session persistence (/sc:save storage)
persistence_result = self._perform_session_persistence(...)
# 6. Generate recommendations (/sc:save recommendations)
recommendations = self._generate_recommendations(...)
```
### /sc:save Output Format
```yaml
session_report:
session_id: "session_2025-01-31_14-30-00"
session_completed: true
completion_timestamp: 1704110400
analytics:
session_summary: {...}
performance_metrics: {...}
superclaude_effectiveness: {...}
quality_analysis: {...}
learning_summary: {...}
persistence:
persistence_enabled: true
analytics_saved: true
compression_applied: true
recommendations:
performance_improvements: [...]
superclaude_optimizations: [...]
learning_suggestions: [...]
workflow_enhancements: [...]
```
## Quality Assessment
### Session Success Criteria
A session is considered successful when:
- **Overall Score**: >60% performance score
- **SuperClaude Effectiveness**: >60% when framework enabled
- **Learning Achievement**: >0 insights generated
- **Recommendations**: Actionable suggestions provided
### Quality Metrics
```yaml
quality_analysis:
error_rate: 0.05 # 5% error rate
operation_success_rate: 0.95 # 95% success rate
bottlenecks: ["low_productivity"] # Identified issues
optimization_opportunities: [...] # Improvement areas
```
### Success Indicators
- **Session Success**: `overall_score > 0.6`
- **SuperClaude Effective**: `effectiveness_score > 0.6`
- **Learning Achieved**: `insights_generated > 0`
- **Recommendations Generated**: `total_recommendations > 0`
### User Satisfaction Estimation
```python
def _estimate_user_satisfaction(self, context: dict) -> float:
satisfaction_factors = []
# Error rate impact
satisfaction_factors.append(1.0 - error_rate)
# Productivity impact
satisfaction_factors.append(productivity)
# SuperClaude effectiveness impact
if superclaude_enabled:
satisfaction_factors.append(effectiveness)
# Session duration optimization (15-60 minutes optimal)
satisfaction_factors.append(duration_satisfaction)
return statistics.mean(satisfaction_factors)
```
## Error Handling
### Graceful Degradation
When errors occur during hook execution:
```python
except Exception as e:
log_error("stop", str(e), {"session_data": session_data})
return self._create_fallback_report(session_data, str(e))
```
### Fallback Reporting
```yaml
fallback_report:
session_completed: false
error: "Analysis engine failure"
fallback_mode: true
analytics:
performance_metrics:
overall_score: 0.0
persistence:
persistence_enabled: false
```
### Recovery Strategies
- **Analytics Failure**: Provide basic session summary
- **Persistence Failure**: Continue with recommendations generation
- **Learning Engine Error**: Skip learning consolidation, continue with core analytics
- **Complete Failure**: Return minimal session completion report
## Performance Optimization
### Efficiency Strategies
- **Lazy Loading**: Components initialized only when needed
- **Batch Processing**: Multiple analytics operations combined
- **Compression**: Large session data automatically compressed
- **Caching**: Learning insights cached for reuse
### Resource Management
- **Memory Optimization**: Session cleanup after processing
- **Storage Efficiency**: Old sessions automatically pruned
- **Processing Time**: <200ms target with continuous monitoring
- **Token Efficiency**: Compressed analytics data when appropriate
## Future Enhancements
### Planned Features
- **Cross-Session Analytics**: Performance trends across multiple sessions
- **Predictive Recommendations**: ML-based optimization suggestions
- **Real-Time Monitoring**: Live session analytics during execution
- **Collaborative Learning**: Shared learning patterns across users
- **Advanced Compression**: Context-aware compression algorithms
### Integration Opportunities
- **Dashboard Integration**: Real-time analytics visualization
- **Notification System**: Alerts for performance degradation
- **API Endpoints**: Session analytics via REST API
- **Export Capabilities**: Analytics data export for external analysis
---
*The Stop Hook represents the culmination of session management in SuperClaude Framework, providing comprehensive analytics, learning consolidation, and intelligent persistence to enable continuous improvement and optimization of user productivity.*

View File

@@ -0,0 +1,462 @@
# Subagent Stop Hook Documentation
## Purpose
The `subagent_stop` hook implements **MODE_Task_Management delegation coordination and analytics** by analyzing subagent task completion performance and providing comprehensive delegation effectiveness measurement. This hook specializes in **task delegation analytics and coordination**, measuring multi-agent collaboration effectiveness and optimizing wave orchestration strategies.
**Core Responsibilities:**
- Analyze subagent task completion and performance metrics
- Measure delegation effectiveness and coordination success
- Learn from parallel execution patterns and cross-agent coordination
- Optimize wave orchestration strategies for multi-agent operations
- Coordinate cross-agent knowledge sharing and learning
- Track task management framework effectiveness across delegated operations
## Execution Context
The `subagent_stop` hook executes **after subagent operations complete** in Claude Code, specifically when:
- **Subagent Task Completion**: When individual subagents finish their delegated tasks
- **Multi-Agent Coordination End**: After parallel task execution completes
- **Wave Orchestration Completion**: When wave-based task coordination finishes
- **Delegation Strategy Assessment**: For analyzing effectiveness of different delegation approaches
- **Cross-Agent Learning**: When coordination patterns need to be captured for future optimization
**Integration Points:**
- Integrates with Claude Code's subagent delegation system
- Coordinates with MODE_Task_Management for delegation analytics
- Synchronizes with wave orchestration for multi-agent coordination
- Links with learning engine for continuous delegation improvement
## Performance Target
**Target Execution Time: <150ms**
The hook maintains strict performance requirements to ensure minimal overhead during delegation analytics:
```python
# Performance configuration
self.performance_target_ms = config_loader.get_hook_config('subagent_stop', 'performance_target_ms', 150)
# Performance tracking
execution_time = (time.time() - start_time) * 1000
coordination_report['performance_metrics'] = {
'coordination_analysis_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'coordination_efficiency': self._calculate_coordination_efficiency(context, execution_time)
}
```
**Performance Optimization Features:**
- **Fast Context Extraction**: Efficient subagent data parsing and context enrichment
- **Streamlined Analytics**: Optimized delegation effectiveness calculations
- **Batched Operations**: Grouped analysis operations for efficiency
- **Cached Learning**: Reuse of previous coordination patterns for faster analysis
## Delegation Analytics
The hook provides comprehensive **delegation effectiveness measurement** through multiple analytical dimensions:
### Task Completion Analysis
```python
def _analyze_task_completion(self, context: dict) -> dict:
"""Analyze task completion performance."""
task_analysis = {
'completion_success': context.get('task_success', False),
'completion_quality': context.get('output_quality', 0.0),
'completion_efficiency': context.get('resource_efficiency', 0.0),
'completion_time_performance': 0.0,
'success_factors': [],
'improvement_areas': []
}
```
**Key Metrics:**
- **Completion Success Rate**: Binary success/failure tracking for delegated tasks
- **Output Quality Assessment**: Quality scoring (0.0-1.0) based on validation results and error indicators
- **Resource Efficiency**: Memory, CPU, and time utilization effectiveness measurement
- **Time Performance**: Actual vs. expected execution time analysis
- **Success Factor Identification**: Patterns that lead to successful delegation outcomes
- **Improvement Area Detection**: Areas requiring optimization in future delegations
### Delegation Effectiveness Measurement
```python
def _analyze_delegation_effectiveness(self, context: dict, task_analysis: dict) -> dict:
"""Analyze effectiveness of task delegation."""
delegation_analysis = {
'delegation_strategy': context.get('delegation_strategy', 'unknown'),
'delegation_success': context.get('task_success', False),
'delegation_efficiency': 0.0,
'coordination_overhead': 0.0,
'parallel_benefit': 0.0,
'delegation_value': 0.0
}
```
**Delegation Strategies Analyzed:**
- **Files Strategy**: Individual file-based delegation effectiveness
- **Folders Strategy**: Directory-level delegation performance
- **Auto Strategy**: Intelligent delegation strategy effectiveness
- **Custom Strategies**: User-defined delegation pattern analysis
**Effectiveness Dimensions:**
- **Delegation Efficiency**: Ratio of productive work to coordination overhead
- **Coordination Overhead**: Time and resource cost of agent coordination
- **Parallel Benefit**: Actual speedup achieved through parallel execution
- **Overall Delegation Value**: Composite score weighing quality, efficiency, and parallel benefits
## Wave Orchestration
The hook provides advanced **multi-agent coordination analysis** for wave-based task orchestration:
### Wave Coordination Success
```python
def _analyze_coordination_patterns(self, context: dict, delegation_analysis: dict) -> dict:
"""Analyze coordination patterns and effectiveness."""
coordination_analysis = {
'coordination_strategy': 'unknown',
'synchronization_effectiveness': 0.0,
'data_flow_efficiency': 0.0,
'wave_coordination_success': 0.0,
'cross_agent_learning': 0.0,
'coordination_patterns_detected': []
}
```
**Wave Orchestration Features:**
- **Progressive Enhancement**: Iterative improvement through multiple coordination waves
- **Systematic Analysis**: Comprehensive methodical analysis across wave cycles
- **Adaptive Coordination**: Dynamic strategy adjustment based on wave performance
- **Enterprise Orchestration**: Large-scale coordination for complex multi-agent operations
### Wave Performance Metrics
```python
def _update_wave_orchestration_metrics(self, context: dict, coordination_analysis: dict) -> dict:
"""Update wave orchestration performance metrics."""
wave_metrics = {
'wave_performance': 0.0,
'orchestration_efficiency': 0.0,
'wave_learning_value': 0.0,
'next_wave_recommendations': []
}
```
**Wave Strategy Analysis:**
- **Wave Position Tracking**: Current position within multi-wave coordination
- **Inter-Wave Communication**: Data flow and synchronization between waves
- **Wave Success Metrics**: Performance measurement across wave cycles
- **Orchestration Efficiency**: Resource utilization effectiveness in wave coordination
## Cross-Agent Learning
The hook implements **sophisticated learning mechanisms** for continuous delegation improvement:
### Learning Event Recording
```python
def _record_coordination_learning(self, context: dict, delegation_analysis: dict,
optimization_insights: dict):
"""Record coordination learning for future optimization."""
# Record delegation effectiveness
self.learning_engine.record_learning_event(
LearningType.PERFORMANCE_OPTIMIZATION,
AdaptationScope.PROJECT,
context,
{
'delegation_strategy': context.get('delegation_strategy'),
'task_type': context.get('task_type'),
'delegation_value': delegation_analysis['delegation_value'],
'coordination_overhead': delegation_analysis['coordination_overhead'],
'parallel_benefit': delegation_analysis['parallel_benefit']
},
delegation_analysis['delegation_value'],
0.8,
{'hook': 'subagent_stop', 'coordination_learning': True}
)
```
**Learning Categories:**
- **Performance Optimization**: Delegation strategy effectiveness patterns
- **Operation Patterns**: Successful task completion patterns
- **Coordination Patterns**: Effective multi-agent coordination strategies
- **Error Recovery**: Learning from delegation failures and recovery strategies
**Learning Scopes:**
- **Project-Level Learning**: Delegation patterns specific to current project
- **User-Level Learning**: Cross-project delegation preferences and patterns
- **System-Level Learning**: Framework-wide coordination optimization patterns
## Parallel Execution Tracking
The hook provides comprehensive **parallel operation performance analysis**:
### Parallel Benefit Calculation
```python
# Calculate parallel benefit
parallel_tasks = context.get('parallel_tasks', [])
if len(parallel_tasks) > 1:
# Estimate parallel benefit based on task coordination
parallel_efficiency = context.get('parallel_efficiency', 1.0)
theoretical_speedup = len(parallel_tasks)
actual_speedup = theoretical_speedup * parallel_efficiency
delegation_analysis['parallel_benefit'] = actual_speedup / theoretical_speedup
```
**Parallel Performance Metrics:**
- **Theoretical vs. Actual Speedup**: Comparison of expected and achieved parallel performance
- **Parallel Efficiency**: Effectiveness of parallel task coordination
- **Synchronization Overhead**: Cost of coordinating parallel operations
- **Resource Contention Analysis**: Impact of resource sharing on parallel performance
### Coordination Pattern Detection
```python
# Detect coordination patterns
if delegation_analysis['delegation_value'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('effective_delegation')
if coordination_analysis['synchronization_effectiveness'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('efficient_synchronization')
if coordination_analysis['wave_coordination_success'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('successful_wave_orchestration')
```
**Pattern Categories:**
- **Effective Delegation**: High-value delegation strategies
- **Efficient Synchronization**: Optimal coordination mechanisms
- **Successful Wave Orchestration**: High-performing wave coordination patterns
- **Resource Optimization**: Efficient resource utilization patterns
## Configuration
The hook is configured through `superclaude-config.json` with comprehensive settings for delegation analytics:
### Core Configuration
```json
{
"hooks": {
"subagent_stop": {
"enabled": true,
"priority": 7,
"performance_target_ms": 150,
"delegation_analytics": {
"enabled": true,
"strategy_analysis": ["files", "folders", "auto"],
"effectiveness_threshold": 0.6,
"coordination_overhead_threshold": 0.3
},
"wave_orchestration": {
"enabled": true,
"wave_strategies": ["progressive", "systematic", "adaptive", "enterprise"],
"success_threshold": 0.7,
"learning_enabled": true
},
"parallel_tracking": {
"efficiency_threshold": 0.7,
"synchronization_tracking": true,
"resource_contention_analysis": true
},
"learning_configuration": {
"coordination_learning": true,
"pattern_detection": true,
"cross_agent_learning": true,
"performance_learning": true
}
}
}
}
```
### Task Management Configuration
```json
{
"session": {
"task_management": {
"delegation_strategies": ["files", "folders", "auto"],
"wave_orchestration": {
"enabled": true,
"strategies": ["progressive", "systematic", "adaptive", "enterprise"],
"complexity_threshold": 0.4,
"min_wave_tasks": 3
},
"parallel_coordination": {
"max_parallel_agents": 7,
"synchronization_timeout_ms": 5000,
"resource_sharing_enabled": true
},
"learning_integration": {
"delegation_learning": true,
"wave_learning": true,
"cross_session_learning": true
}
}
}
}
```
## MODE_Task_Management Integration
The hook implements **MODE_Task_Management** through comprehensive integration with the task management framework:
### Task Management Layer Integration
```python
# Load task management configuration
self.task_config = config_loader.get_section('session', 'task_management', {})
# Integration with task management layers
# Layer 1: TodoRead/TodoWrite (Session Tasks) - Real-time state management
# Layer 2: /task Command (Project Management) - Cross-session persistence
# Layer 3: /spawn Command (Meta-Orchestration) - Complex multi-domain operations
# Layer 4: /loop Command (Iterative Enhancement) - Progressive refinement workflows
```
**Framework Integration Points:**
- **Session Task Tracking**: Integration with TodoWrite for task completion analytics
- **Project Task Coordination**: Cross-session task management integration
- **Meta-Orchestration**: Complex multi-domain operation coordination
- **Iterative Enhancement**: Progressive refinement and quality improvement cycles
### Auto-Activation Patterns
The hook supports MODE_Task_Management auto-activation patterns:
```python
# Auto-activation triggers from MODE_Task_Management:
# - Sub-Agent Delegation: >2 directories OR >3 files OR complexity >0.4
# - Wave Mode: complexity ≥0.4 AND files >3 AND operation_types >2
# - Loop Mode: polish, refine, enhance, improve keywords detected
```
**Detection Patterns:**
- **Multi-Step Operations**: 3+ step sequences with dependency analysis
- **Complexity Thresholds**: Operations exceeding 0.4 complexity score
- **File Count Triggers**: 3+ files for delegation, 2+ directories for coordination
- **Performance Opportunities**: Auto-detect parallelizable operations with time estimates
## Coordination Effectiveness
The hook provides comprehensive **success metrics for delegation** through multiple measurement dimensions:
### Overall Effectiveness Calculation
```python
'performance_summary': {
'overall_effectiveness': (
task_analysis['completion_quality'] * 0.4 +
delegation_analysis['delegation_value'] * 0.3 +
coordination_analysis['synchronization_effectiveness'] * 0.3
),
'delegation_success': delegation_analysis['delegation_value'] > 0.6,
'coordination_success': coordination_analysis['synchronization_effectiveness'] > 0.7,
'learning_value': wave_metrics.get('wave_learning_value', 0.5)
}
```
**Effectiveness Dimensions:**
- **Task Quality (40%)**: Output quality and completion success
- **Delegation Value (30%)**: Effectiveness of delegation strategy and execution
- **Coordination Success (30%)**: Synchronization and coordination effectiveness
### Success Thresholds
```python
# Success criteria
delegation_success = delegation_analysis['delegation_value'] > 0.6
coordination_success = coordination_analysis['synchronization_effectiveness'] > 0.7
wave_success = wave_metrics['wave_performance'] > 0.8
```
**Performance Benchmarks:**
- **Delegation Success**: >60% delegation value threshold
- **Coordination Success**: >70% synchronization effectiveness threshold
- **Wave Success**: >80% wave performance threshold
- **Overall Effectiveness**: Composite score incorporating all dimensions
### Optimization Recommendations
```python
def _generate_optimization_insights(self, context: dict, task_analysis: dict,
delegation_analysis: dict, coordination_analysis: dict) -> dict:
"""Generate optimization insights for future delegations."""
insights = {
'delegation_optimizations': [],
'coordination_improvements': [],
'wave_strategy_recommendations': [],
'performance_enhancements': [],
'learning_opportunities': []
}
```
**Recommendation Categories:**
- **Delegation Optimizations**: Alternative strategies, overhead reduction, task partitioning improvements
- **Coordination Improvements**: Synchronization mechanism optimization, data exchange pattern improvements
- **Wave Strategy Recommendations**: Orchestration strategy adjustments, task distribution optimization
- **Performance Enhancements**: Execution speed optimization, resource utilization improvements
- **Learning Opportunities**: Pattern recognition, cross-agent learning, continuous improvement areas
## Error Handling and Resilience
The hook implements robust error handling with graceful degradation:
```python
def _create_fallback_report(self, subagent_data: dict, error: str) -> dict:
"""Create fallback coordination report on error."""
return {
'subagent_id': subagent_data.get('subagent_id', 'unknown'),
'task_id': subagent_data.get('task_id', 'unknown'),
'completion_timestamp': time.time(),
'error': error,
'fallback_mode': True,
'task_completion': {
'success': False,
'quality_score': 0.0,
'efficiency_score': 0.0,
'error_occurred': True
}
}
```
**Error Recovery Strategies:**
- **Graceful Degradation**: Fallback coordination reports when analysis fails
- **Context Preservation**: Maintain essential coordination data even during errors
- **Error Logging**: Comprehensive error tracking for debugging and improvement
- **Performance Monitoring**: Continue performance tracking even in error conditions
## Integration with SuperClaude Framework
The hook integrates seamlessly with the broader SuperClaude framework:
### Framework Components
- **Learning Engine Integration**: Records coordination patterns for continuous improvement
- **Pattern Detection**: Identifies successful delegation and coordination patterns
- **MCP Intelligence**: Coordinates with MCP servers for enhanced analysis
- **Compression Engine**: Optimizes data storage and transfer for coordination analytics
- **Framework Logic**: Implements SuperClaude operational principles and patterns
### Quality Gates Integration
The hook contributes to SuperClaude's 8-step quality validation cycle:
- **Step 2.5**: Task management validation during orchestration operations
- **Step 7.5**: Session completion verification and summary documentation
- **Continuous**: Real-time metrics collection and performance monitoring
- **Post-Session**: Comprehensive session analytics and completion reporting
### Future Enhancements
Planned improvements for enhanced delegation coordination:
- **Predictive Delegation**: ML-based delegation strategy recommendation
- **Cross-Project Learning**: Delegation pattern sharing across projects
- **Real-Time Optimization**: Dynamic delegation adjustment during execution
- **Advanced Wave Strategies**: More sophisticated wave orchestration patterns
- **Resource Prediction**: Predictive resource allocation for delegated tasks