Delete Framework-Hooks directory

Signed-off-by: NomenAK <39598727+NomenAK@users.noreply.github.com>
Author: NomenAK
Date: 2025-08-13 17:05:41 +02:00
Committed by: GitHub
Parent: b2fc8965ce
Commit: c86e797f1b
89 changed files with 0 additions and 44053 deletions


@@ -1,99 +0,0 @@
# Hook Documentation Update Summary
## Overview
Updated hook documentation files to accurately reflect the actual Python implementations, removing marketing language and aspirational features in favor of technical accuracy.
## Key Changes Made
### Common Updates Across All Hooks
1. **Replaced aspirational descriptions** with accurate technical implementation details
2. **Added actual execution context** including timeout values from `settings.json`
3. **Updated execution flows** to match stdin/stdout JSON processing pattern
4. **Documented actual shared module dependencies** and their usage
5. **Simplified language** to focus on what the code actually does
6. **Added implementation line counts** for context
7. **Corrected performance targets** to match configuration values
### Specific Hook Updates
#### session_start.md
- **Implementation**: 704 lines of Python
- **Timeout**: 10 seconds (from settings.json)
- **Key Features**: Lazy loading architecture, project structure analysis, user intent analysis, MCP server configuration
- **Shared Modules**: framework_logic, pattern_detection, mcp_intelligence, compression_engine, learning_engine, yaml_loader, logger
- **Performance**: <50ms target
#### pre_tool_use.md
- **Implementation**: 648 lines of Python
- **Timeout**: 15 seconds (from settings.json)
- **Key Features**: Operation characteristics analysis, tool chain context analysis, MCP server routing, performance optimization
- **Performance**: <200ms target
#### post_tool_use.md
- **Implementation**: 794 lines of Python
- **Timeout**: 10 seconds (from settings.json)
- **Key Features**: Validation against RULES.md and PRINCIPLES.md, effectiveness measurement, error pattern detection, learning integration
- **Performance**: <100ms target
#### pre_compact.md
- **Timeout**: 15 seconds (from settings.json)
- **Key Features**: MODE_Token_Efficiency implementation, selective compression, symbol systems
- **Performance**: <150ms target
#### notification.md
- **Timeout**: 10 seconds (from settings.json)
- **Key Features**: Just-in-time capability loading, notification type handling
- **Processing**: High/medium/low priority notification handling
#### stop.md
- **Timeout**: 15 seconds (from settings.json)
- **Key Features**: Session analytics, learning consolidation, data persistence
- **Performance**: <200ms target
#### subagent_stop.md
- **Timeout**: 15 seconds (from settings.json)
- **Key Features**: Delegation effectiveness measurement, multi-agent coordination analytics
- **Performance**: <150ms target
## Technical Accuracy Improvements
1. **Execution Pattern**: All hooks follow stdin JSON → process → stdout JSON pattern
2. **Error Handling**: All hooks implement graceful fallback with basic functionality preservation
3. **Shared Modules**: Documented actual module imports and specific method usage
4. **Configuration**: Referenced actual configuration files and fallback strategies
5. **Performance**: Corrected timeout values and performance targets based on actual settings
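The stdin JSON → process → stdout JSON pattern from point 1, combined with the graceful fallback from point 2, can be sketched as a minimal Python hook. This is an illustration only; the `handle` function and the event fields are hypothetical, not taken from the actual implementations:

```python
import json
import sys

def handle(event: dict) -> dict:
    """Process one hook event; always return a JSON-serializable dict."""
    try:
        # Hypothetical processing: echo the tool name back with a status
        return {"status": "ok", "tool": event.get("tool")}
    except Exception:
        # Graceful fallback: preserve basic functionality instead of crashing
        return {"status": "fallback"}

def main() -> None:
    # Invoked by the host as: python hook.py < event.json
    # stdin JSON -> process -> stdout JSON
    json.dump(handle(json.load(sys.stdin)), sys.stdout)
```

Every hook in the table above follows this shape; only the body of `handle` differs.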
## Language Changes
- **Before**: "comprehensive intelligence layer", "transformative capabilities", "revolutionary approach"
- **After**: "analyzes project context", "implements pattern detection", "provides MCP server coordination"
- **Before**: Complex architectural descriptions without implementation details
- **After**: Actual method names, class structures, and execution flows
- **Before**: Aspirational features not yet implemented
- **After**: Features that actually exist in the Python code
## Documentation Quality
- Focused on practical implementation details developers need
- Removed marketing language in favor of technical precision
- Added concrete examples from actual code
- Clarified what each hook actually does vs. what it might do
- Made timeouts and performance targets realistic and accurate
## Files Updated
- `/docs/Hooks/session_start.md` - Major revision focusing on actual implementation
- `/docs/Hooks/pre_tool_use.md` - Streamlined to match 648-line implementation
- `/docs/Hooks/post_tool_use.md` - Focused on validation and learning implementation
- `/docs/Hooks/pre_compact.md` - Simplified compression implementation description
- `/docs/Hooks/notification.md` - Concise notification handling description
- `/docs/Hooks/stop.md` - Session analytics and persistence focus
- `/docs/Hooks/subagent_stop.md` - Delegation analytics focus
## Result
Documentation now accurately represents what the Python implementations actually do, with humble technical language focused on practical functionality rather than aspirational capabilities.


@@ -1,88 +0,0 @@
{
"complexity:0.5_files:3_mcp_server:sequential_op:test_operation_type:mcp_server_preference": {
"adaptation_id": "adapt_1754411689_0",
"pattern_signature": "complexity:0.5_files:3_mcp_server:sequential_op:test_operation_type:mcp_server_preference",
"trigger_conditions": {
"operation_type": "test_operation",
"complexity_score": 0.5,
"file_count": 3
},
"modifications": {
"preferred_mcp_server": "sequential"
},
"effectiveness_history": [
0.8,
0.8,
0.8
],
"usage_count": 40,
"last_used": 1754476722.0475128,
"confidence_score": 0.9
},
"op:recovery_test_type:recovery_pattern": {
"adaptation_id": "adapt_1754411724_1",
"pattern_signature": "op:recovery_test_type:recovery_pattern",
"trigger_conditions": {
"operation_type": "recovery_test"
},
"modifications": {},
"effectiveness_history": [
0.9,
0.9
],
"usage_count": 39,
"last_used": 1754476722.0475132,
"confidence_score": 0.8
},
"unknown_pattern": {
"adaptation_id": "adapt_1754413397_2",
"pattern_signature": "unknown_pattern",
"trigger_conditions": {
"resource_usage_percent": 0,
"conversation_length": 0
},
"modifications": {},
"effectiveness_history": [
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0,
1.0
],
"usage_count": 73,
"last_used": 1754476722.062738,
"confidence_score": 0.8
}
}
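Records like the ones above might be consulted by filtering on `confidence_score` before an adaptation is applied. A minimal sketch, assuming a filter function of this shape (the name is hypothetical; the 0.6 default mirrors the `confidence_threshold` in the intelligence patterns config):

```python
def applicable_adaptations(adaptations: dict, min_confidence: float = 0.6) -> list:
    """Return adaptation records whose confidence clears the threshold.

    `adaptations` maps pattern signatures to records like the JSON above,
    each carrying a confidence_score in [0.0, 1.0].
    """
    return [
        record
        for record in adaptations.values()
        if record.get("confidence_score", 0.0) >= min_confidence
    ]
```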

File diff suppressed because it is too large


@@ -1 +0,0 @@
{}


@@ -1 +0,0 @@
55ca6726


@@ -1 +0,0 @@
{}


@@ -1,313 +0,0 @@
# SuperClaude-Lite Compression Configuration
# Token efficiency strategies and selective compression patterns
# Compression Levels and Strategies
compression_levels:
minimal: # 0-40% compression
symbol_systems: false
abbreviation_systems: false
structural_optimization: false
quality_threshold: 0.98
use_cases: ["user_content", "low_resource_usage", "high_quality_required"]
efficient: # 40-70% compression
symbol_systems: true
abbreviation_systems: false
structural_optimization: true
quality_threshold: 0.95
use_cases: ["moderate_resource_usage", "balanced_efficiency"]
compressed: # 70-85% compression
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
quality_threshold: 0.90
use_cases: ["high_resource_usage", "user_requests_brevity"]
critical: # 85-95% compression
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
advanced_techniques: true
quality_threshold: 0.85
use_cases: ["resource_constraints", "emergency_compression"]
emergency: # 95%+ compression
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
advanced_techniques: true
aggressive_optimization: true
quality_threshold: 0.80
use_cases: ["critical_resource_constraints", "emergency_situations"]
# Selective Compression Patterns
selective_compression:
content_classification:
framework_exclusions:
patterns:
- "~/.claude/"
- ".claude/"
- "SuperClaude/*"
- "CLAUDE.md"
- "FLAGS.md"
- "PRINCIPLES.md"
- "ORCHESTRATOR.md"
- "MCP_*.md"
- "MODE_*.md"
- "SESSION_LIFECYCLE.md"
compression_level: "preserve" # 0% compression
reasoning: "Framework content must be preserved for proper operation"
user_content_preservation:
patterns:
- "project_files"
- "user_documentation"
- "source_code"
- "configuration_files"
- "custom_content"
compression_level: "minimal" # Light compression only
reasoning: "User content requires high fidelity preservation"
session_data_optimization:
patterns:
- "session_metadata"
- "checkpoint_data"
- "cache_content"
- "working_artifacts"
- "analysis_results"
compression_level: "efficient" # 40-70% compression
reasoning: "Session data can be compressed while maintaining utility"
compressible_content:
patterns:
- "framework_repetition"
- "historical_session_data"
- "cached_analysis_results"
- "temporary_working_data"
compression_level: "compressed" # 70-85% compression
reasoning: "Highly compressible content with acceptable quality trade-offs"
# Symbol Systems Configuration
symbol_systems:
core_logic_flow:
enabled: true
mappings:
"leads to": "→"
"implies": "→"
"transforms to": "⇒"
"converts to": "⇒"
"rollback": "←"
"reverse": "←"
"bidirectional": "⇄"
"sync": "⇄"
"and": "&"
"combine": "&"
"separator": "|"
"or": "|"
"define": ":"
"specify": ":"
"sequence": "»"
"then": "»"
"therefore": "∴"
"because": "∵"
"equivalent": "≡"
"approximately": "≈"
"not equal": "≠"
status_progress:
enabled: true
mappings:
"completed": "✅"
"passed": "✅"
"failed": "❌"
"error": "❌"
"warning": "⚠️"
"information": "ℹ️"
"in progress": "🔄"
"processing": "🔄"
"waiting": "⏳"
"pending": "⏳"
"critical": "🚨"
"urgent": "🚨"
"target": "🎯"
"goal": "🎯"
"metrics": "📊"
"data": "📊"
"insight": "💡"
"learning": "💡"
technical_domains:
enabled: true
mappings:
"performance": "⚡"
"optimization": "⚡"
"analysis": "🔍"
"investigation": "🔍"
"configuration": "🔧"
"setup": "🔧"
"security": "🛡️"
"protection": "🛡️"
"deployment": "📦"
"package": "📦"
"design": "🎨"
"frontend": "🎨"
"network": "🌐"
"connectivity": "🌐"
"mobile": "📱"
"responsive": "📱"
"architecture": "🏗️"
"system structure": "🏗️"
"components": "🧩"
"modular": "🧩"
# Abbreviation Systems Configuration
abbreviation_systems:
system_architecture:
enabled: true
mappings:
"configuration": "cfg"
"settings": "cfg"
"implementation": "impl"
"code structure": "impl"
"architecture": "arch"
"system design": "arch"
"performance": "perf"
"optimization": "perf"
"operations": "ops"
"deployment": "ops"
"environment": "env"
"runtime context": "env"
development_process:
enabled: true
mappings:
"requirements": "req"
"dependencies": "deps"
"packages": "deps"
"validation": "val"
"verification": "val"
"testing": "test"
"quality assurance": "test"
"documentation": "docs"
"guides": "docs"
"standards": "std"
"conventions": "std"
quality_analysis:
enabled: true
mappings:
"quality": "qual"
"maintainability": "qual"
"security": "sec"
"safety measures": "sec"
"error": "err"
"exception handling": "err"
"recovery": "rec"
"resilience": "rec"
"severity": "sev"
"priority level": "sev"
"optimization": "opt"
"improvement": "opt"
# Structural Optimization Techniques
structural_optimization:
whitespace_optimization:
enabled: true
remove_redundant_spaces: true
normalize_line_breaks: true
preserve_code_formatting: true
phrase_simplification:
enabled: true
common_phrase_replacements:
"in order to": "to"
"it is important to note that": "note:"
"please be aware that": "note:"
"it should be noted that": "note:"
"for the purpose of": "for"
"with regard to": "regarding"
"in relation to": "regarding"
redundancy_removal:
enabled: true
remove_articles: ["the", "a", "an"] # Only in high compression levels
remove_filler_words: ["very", "really", "quite", "rather"]
combine_repeated_concepts: true
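The `phrase_simplification` replacements above can be sketched as a small substitution pass. The function name and the longest-phrase-first ordering are assumptions; ordering matters so that overlapping patterns such as "it is important to note that" are not partially clobbered:

```python
import re

# A subset of common_phrase_replacements from the config above
PHRASE_REPLACEMENTS = {
    "in order to": "to",
    "it is important to note that": "note:",
    "for the purpose of": "for",
    "with regard to": "regarding",
}

def simplify(text: str) -> str:
    """Apply phrase replacements, longest phrases first, case-insensitively."""
    for phrase in sorted(PHRASE_REPLACEMENTS, key=len, reverse=True):
        text = re.sub(re.escape(phrase), PHRASE_REPLACEMENTS[phrase],
                      text, flags=re.IGNORECASE)
    return text
```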
# Quality Preservation Standards
quality_preservation:
minimum_thresholds:
information_preservation: 0.95
semantic_accuracy: 0.95
technical_correctness: 0.98
user_content_fidelity: 0.99
validation_criteria:
key_concept_retention: true
technical_term_preservation: true
code_example_accuracy: true
reference_link_preservation: true
quality_monitoring:
real_time_validation: true
effectiveness_tracking: true
user_feedback_integration: true
adaptive_threshold_adjustment: true
# Adaptive Compression Strategy
adaptive_compression:
context_awareness:
user_expertise_factor: true
project_complexity_factor: true
domain_specific_optimization: true
learning_integration:
effectiveness_feedback: true
user_preference_learning: true
pattern_optimization: true
dynamic_adjustment:
resource_pressure_response: true
quality_threshold_adaptation: true
performance_optimization: true
# Performance Targets
performance_targets:
processing_time_ms: 150
compression_ratio_target: 0.50 # 50% compression
quality_preservation_target: 0.95
token_efficiency_gain: 0.40 # 40% token reduction
# Cache Configuration
caching:
compression_results:
enabled: true
cache_duration_minutes: 30
max_cache_size_mb: 50
invalidation_strategy: "content_change_detection"
symbol_mappings:
enabled: true
preload_common_patterns: true
learning_based_optimization: true
pattern_recognition:
enabled: true
adaptive_pattern_learning: true
user_specific_patterns: true
# Integration with Other Systems
integration:
mcp_servers:
morphllm: "coordinate_compression_with_editing"
serena: "memory_compression_strategies"
modes:
token_efficiency: "primary_compression_mode"
task_management: "session_data_compression"
learning_engine:
effectiveness_tracking: true
pattern_learning: true
adaptation_feedback: true
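Selecting one of the `compression_levels` defined at the top of this file might look like the sketch below. The numeric cut-offs are illustrative assumptions; the config itself names only use cases ("moderate_resource_usage", "high_resource_usage", ...) without exact boundaries:

```python
# Quality thresholds taken from the compression_levels config above
QUALITY_THRESHOLDS = {
    "minimal": 0.98,
    "efficient": 0.95,
    "compressed": 0.90,
    "critical": 0.85,
    "emergency": 0.80,
}

def select_level(resource_usage: float) -> str:
    """Map resource pressure (0.0-1.0) to a compression level name."""
    if resource_usage < 0.5:
        return "minimal"
    if resource_usage < 0.75:
        return "efficient"
    if resource_usage < 0.85:
        return "compressed"
    if resource_usage < 0.95:
        return "critical"
    return "emergency"
```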


@@ -1,322 +0,0 @@
# Hook Coordination Configuration
# Intelligent hook execution patterns, dependency resolution, and optimization
# Enables smart coordination of all Framework-Hooks lifecycle events
# Metadata
version: "1.0.0"
last_updated: "2025-01-06"
description: "Hook coordination and execution intelligence patterns"
# Hook Execution Patterns
execution_patterns:
parallel_execution:
# Independent hooks that can run simultaneously
groups:
- name: "independent_analysis"
hooks: ["compression_engine", "pattern_detection"]
description: "Compression and pattern analysis can run independently"
max_parallel: 2
timeout: 5000 # ms
- name: "intelligence_gathering"
hooks: ["mcp_intelligence", "learning_engine"]
description: "MCP and learning intelligence can run in parallel"
max_parallel: 2
timeout: 3000 # ms
- name: "background_optimization"
hooks: ["performance_monitor", "cache_management"]
description: "Performance monitoring and cache operations"
max_parallel: 2
timeout: 2000 # ms
sequential_execution:
# Hooks that must run in specific order
chains:
- name: "session_lifecycle"
sequence: ["session_start", "pre_tool_use", "post_tool_use", "stop"]
description: "Core session lifecycle must be sequential"
mandatory: true
break_on_error: true
- name: "context_preparation"
sequence: ["session_start", "context_loading", "pattern_detection"]
description: "Context must be prepared before pattern analysis"
conditional: {session_type: "complex"}
- name: "optimization_chain"
sequence: ["compression_engine", "performance_monitor", "learning_engine"]
description: "Optimization workflow sequence"
trigger: {optimization_mode: true}
conditional_execution:
# Hooks that execute based on context conditions
rules:
- hook: "compression_engine"
conditions:
- resource_usage: ">0.75"
- conversation_length: ">50"
- enable_compression: true
priority: "high"
- hook: "pattern_detection"
conditions:
- complexity_score: ">0.5"
- enable_pattern_analysis: true
OR:
- operation_type: ["analyze", "review", "debug"]
priority: "medium"
- hook: "mcp_intelligence"
conditions:
- mcp_servers_available: true
- operation_requires_mcp: true
priority: "high"
- hook: "learning_engine"
conditions:
- learning_enabled: true
- session_type: ["interactive", "complex"]
priority: "medium"
- hook: "performance_monitor"
conditions:
- performance_monitoring: true
OR:
- complexity_score: ">0.7"
- resource_usage: ">0.8"
priority: "low"
# Dependency Resolution
dependency_resolution:
hook_dependencies:
# Define dependencies between hooks
session_start:
requires: []
provides: ["session_context", "initial_state"]
pre_tool_use:
requires: ["session_context"]
provides: ["tool_context", "pre_analysis"]
depends_on: ["session_start"]
compression_engine:
requires: ["session_context"]
provides: ["compression_config", "optimized_context"]
optional_depends: ["session_start"]
pattern_detection:
requires: ["session_context"]
provides: ["detected_patterns", "pattern_insights"]
optional_depends: ["session_start", "compression_engine"]
mcp_intelligence:
requires: ["tool_context", "detected_patterns"]
provides: ["mcp_recommendations", "server_selection"]
depends_on: ["pre_tool_use"]
optional_depends: ["pattern_detection"]
post_tool_use:
requires: ["tool_context", "tool_results"]
provides: ["post_analysis", "performance_metrics"]
depends_on: ["pre_tool_use"]
learning_engine:
requires: ["post_analysis", "performance_metrics"]
provides: ["learning_insights", "adaptations"]
depends_on: ["post_tool_use"]
optional_depends: ["mcp_intelligence", "pattern_detection"]
stop:
requires: ["session_context"]
provides: ["session_summary", "cleanup_status"]
depends_on: ["session_start"]
optional_depends: ["learning_engine", "post_tool_use"]
resolution_strategies:
# How to resolve dependency conflicts
missing_dependency:
strategy: "graceful_degradation"
fallback: "skip_optional"
circular_dependency:
strategy: "break_weakest_link"
priority_order: ["session_start", "pre_tool_use", "post_tool_use", "stop"]
timeout_handling:
strategy: "continue_without_dependency"
timeout_threshold: 10000 # ms
# Performance Optimization
performance_optimization:
execution_optimization:
# Optimize hook execution based on context
fast_path:
conditions:
- complexity_score: "<0.3"
- operation_type: ["simple", "basic"]
- resource_usage: "<0.5"
optimizations:
- skip_non_essential_hooks: true
- reduce_analysis_depth: true
- enable_aggressive_caching: true
- parallel_where_possible: true
comprehensive_path:
conditions:
- complexity_score: ">0.7"
- operation_type: ["complex", "analysis"]
- accuracy_priority: "high"
optimizations:
- enable_all_hooks: true
- deep_analysis_mode: true
- cross_hook_coordination: true
- detailed_logging: true
resource_management:
# Manage resource usage across hooks
resource_budgets:
cpu_budget: 80 # percent
memory_budget: 70 # percent
time_budget: 15000 # ms total
resource_allocation:
session_lifecycle: 30 # percent of budget
intelligence_hooks: 40 # percent
optimization_hooks: 30 # percent
caching_strategies:
# Hook result caching
cacheable_hooks:
- hook: "pattern_detection"
cache_key: ["session_context", "operation_type"]
cache_duration: 300 # seconds
- hook: "mcp_intelligence"
cache_key: ["operation_context", "available_servers"]
cache_duration: 600 # seconds
- hook: "compression_engine"
cache_key: ["context_size", "compression_level"]
cache_duration: 1800 # seconds
# Context-Aware Execution
context_awareness:
operation_context:
# Adapt execution based on operation context
context_patterns:
- context_type: "ui_development"
hook_priorities: ["mcp_intelligence", "pattern_detection", "compression_engine"]
preferred_execution: "fast_parallel"
- context_type: "code_analysis"
hook_priorities: ["pattern_detection", "mcp_intelligence", "learning_engine"]
preferred_execution: "comprehensive_sequential"
- context_type: "performance_optimization"
hook_priorities: ["performance_monitor", "compression_engine", "pattern_detection"]
preferred_execution: "resource_optimized"
user_preferences:
# Adapt to user preferences and patterns
preference_patterns:
- user_type: "performance_focused"
optimizations: ["aggressive_caching", "parallel_execution", "skip_optional"]
- user_type: "quality_focused"
optimizations: ["comprehensive_analysis", "detailed_validation", "full_coordination"]
- user_type: "speed_focused"
optimizations: ["fast_path", "minimal_hooks", "cached_results"]
# Error Handling and Recovery
error_handling:
error_recovery:
# Hook failure recovery strategies
recovery_strategies:
- error_type: "timeout"
recovery: "continue_without_hook"
log_level: "warning"
- error_type: "dependency_missing"
recovery: "graceful_degradation"
log_level: "info"
- error_type: "critical_failure"
recovery: "abort_and_cleanup"
log_level: "error"
resilience_patterns:
# Make hook execution resilient
resilience_features:
retry_failed_hooks: true
max_retries: 2
retry_backoff: "exponential" # linear, exponential
graceful_degradation: true
fallback_to_basic: true
preserve_essential_hooks: ["session_start", "stop"]
error_isolation: true
prevent_error_cascade: true
maintain_session_integrity: true
# Hook Lifecycle Management
lifecycle_management:
hook_states:
# Track hook execution states
state_tracking:
- pending
- initializing
- running
- completed
- failed
- skipped
- timeout
lifecycle_events:
# Events during hook execution
event_handlers:
before_hook_execution:
actions: ["validate_dependencies", "check_resources", "prepare_context"]
after_hook_execution:
actions: ["update_metrics", "cache_results", "trigger_dependent_hooks"]
hook_failure:
actions: ["log_error", "attempt_recovery", "notify_dependent_hooks"]
monitoring:
# Monitor hook execution
performance_tracking:
track_execution_time: true
track_resource_usage: true
track_success_rate: true
track_dependency_resolution: true
health_monitoring:
hook_health_checks: true
dependency_health_checks: true
performance_degradation_detection: true
# Dynamic Configuration
dynamic_configuration:
adaptive_execution:
# Adapt execution patterns based on performance
adaptation_triggers:
- performance_degradation: ">20%"
action: "switch_to_fast_path"
- error_rate: ">10%"
action: "enable_resilience_mode"
- resource_pressure: ">90%"
action: "reduce_hook_scope"
learning_integration:
# Learn from hook execution patterns
learning_features:
learn_optimal_execution_order: true
learn_user_preferences: true
learn_performance_patterns: true
adapt_to_project_context: true
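The hard `depends_on` edges in the dependency_resolution section above form a directed acyclic graph, so one valid sequential execution order can be computed topologically. A minimal sketch (the graph below copies only the mandatory dependencies; `optional_depends` is omitted for brevity):

```python
from graphlib import TopologicalSorter

# Hard dependencies from the dependency_resolution section above
DEPENDS_ON = {
    "session_start": [],
    "pre_tool_use": ["session_start"],
    "mcp_intelligence": ["pre_tool_use"],
    "post_tool_use": ["pre_tool_use"],
    "learning_engine": ["post_tool_use"],
    "stop": ["session_start"],
}

def execution_order(depends_on: dict) -> list:
    """Return one valid sequential execution order for the hooks."""
    return list(TopologicalSorter(depends_on).static_order())
```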


@@ -1,181 +0,0 @@
# Intelligence Patterns Configuration
# Core learning intelligence patterns for SuperClaude Framework-Hooks
# Defines multi-dimensional pattern recognition, adaptive learning, and intelligence behaviors
# Metadata
version: "1.0.0"
last_updated: "2025-01-06"
description: "Core intelligence patterns for declarative learning and adaptation"

# Learning Intelligence Configuration
learning_intelligence:
  pattern_recognition:
    # Multi-dimensional pattern analysis
    dimensions:
      primary:
        - context_type       # Type of operation context
        - complexity_score   # Operation complexity (0.0-1.0)
        - operation_type     # Category of operation
        - performance_score  # Performance effectiveness (0.0-1.0)
      secondary:
        - file_count       # Number of files involved
        - directory_count  # Number of directories
        - mcp_server       # MCP server involved
        - user_expertise   # Detected user skill level
    # Pattern signature generation
    signature_generation:
      method: "multi_dimensional_hash"
      include_context: true
      fallback_signature: "unknown_pattern"
      max_signature_length: 128
    # Pattern clustering for similar behavior grouping
    clustering:
      algorithm: "k_means"
      min_cluster_size: 3
      max_clusters: 20
      similarity_threshold: 0.8
      recalculate_interval: 100 # operations
  adaptive_learning:
    # Dynamic learning rate adjustment
    learning_rate:
      initial: 0.7
      min: 0.1
      max: 1.0
      adaptation_strategy: "confidence_based"
    # Confidence scoring
    confidence_scoring:
      base_confidence: 0.5
      consistency_weight: 0.4
      frequency_weight: 0.3
      recency_weight: 0.3
    # Effectiveness thresholds
    effectiveness_thresholds:
      learn_threshold: 0.7      # Minimum effectiveness to create adaptation
      confidence_threshold: 0.6 # Minimum confidence to apply adaptation
      forget_threshold: 0.3     # Below this, remove adaptation
  pattern_quality:
    # Pattern validation rules
    validation_rules:
      min_usage_count: 3
      max_consecutive_perfect_scores: 10
      effectiveness_variance_limit: 0.5
      required_dimensions: ["context_type", "operation_type"]
    # Quality scoring
    quality_metrics:
      diversity_score_weight: 0.4
      consistency_score_weight: 0.3
      usage_frequency_weight: 0.3

# Pattern Analysis Configuration
pattern_analysis:
  anomaly_detection:
    # Detect unusual patterns that might indicate issues
    anomaly_patterns:
      - name: "overfitting_detection"
        condition: {consecutive_perfect_scores: ">10"}
        severity: "medium"
        action: "flag_for_review"
      - name: "pattern_stagnation"
        condition: {no_new_patterns: ">30_days"}
        severity: "low"
        action: "suggest_pattern_diversity"
      - name: "effectiveness_degradation"
        condition: {effectiveness_trend: "decreasing", duration: ">7_days"}
        severity: "high"
        action: "trigger_pattern_analysis"
  trend_analysis:
    # Track learning trends over time
    tracking_windows:
      short_term: 24    # hours
      medium_term: 168  # hours (1 week)
      long_term: 720    # hours (30 days)
    trend_indicators:
      - effectiveness_trend
      - pattern_diversity_trend
      - confidence_trend
      - usage_frequency_trend

# Intelligence Enhancement Patterns
intelligence_enhancement:
  predictive_capabilities:
    # Predictive pattern matching
    prediction_horizon: 5 # operations ahead
    prediction_confidence_threshold: 0.7
    prediction_accuracy_tracking: true
  context_awareness:
    # Context understanding and correlation
    context_correlation:
      enable_cross_session: true
      enable_project_correlation: true
      enable_user_correlation: true
      correlation_strength_threshold: 0.6
  adaptive_strategies:
    # Strategy adaptation based on performance
    strategy_adaptation:
      performance_window: 20 # operations
      adaptation_threshold: 0.8
      rollback_threshold: 0.5
      max_adaptations_per_session: 5

# Pattern Lifecycle Management
lifecycle_management:
  pattern_evolution:
    # How patterns evolve over time
    evolution_triggers:
      - usage_count_milestone: [10, 50, 100, 500]
      - effectiveness_improvement: 0.1
      - confidence_improvement: 0.1
    evolution_actions:
      - promote_to_global
      - increase_weight
      - expand_context
      - merge_similar_patterns
  pattern_cleanup:
    # Automatic pattern cleanup
    cleanup_triggers:
      max_patterns: 1000
      unused_pattern_age: 30 # days
      low_effectiveness_threshold: 0.3
    cleanup_strategies:
      - archive_unused
      - merge_similar
      - remove_ineffective
      - compress_historical

# Integration Configuration
integration:
  cache_management:
    # Pattern caching for performance
    cache_patterns: true
    cache_duration: 3600 # seconds
    max_cache_size: 100  # patterns
    cache_invalidation: "smart" # smart, time_based, usage_based
  performance_optimization:
    # Performance tuning
    lazy_loading: true
    batch_processing: true
    background_analysis: true
    max_processing_time_ms: 100
  compatibility:
    # Backwards compatibility
    support_legacy_patterns: true
    migration_assistance: true
    graceful_degradation: true
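The confidence_scoring weights above (base 0.5, consistency 0.4, frequency 0.3, recency 0.3) can be sketched as a weighted blend. Two details are assumptions here: falling back to `base_confidence` when no signals exist, and letting the weighted sum replace rather than adjust the base:

```python
BASE_CONFIDENCE = 0.5
WEIGHTS = {"consistency": 0.4, "frequency": 0.3, "recency": 0.3}

def confidence_score(signals: dict) -> float:
    """Blend pattern signals (each in 0.0-1.0) into a confidence score.

    Returns base_confidence when no signals are available; otherwise a
    weighted sum clamped to [0.0, 1.0].
    """
    if not signals:
        return BASE_CONFIDENCE
    score = sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)
    return max(0.0, min(1.0, score))
```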


@@ -1,70 +0,0 @@
# SuperClaude-Lite Logging Configuration
# Simple logging configuration for hook execution monitoring
# Core Logging Settings
logging:
  enabled: false
  level: "ERROR" # ERROR, WARNING, INFO, DEBUG

# File Settings
file_settings:
  log_directory: "cache/logs"
  retention_days: 30
  rotation_strategy: "daily"

# Hook Logging Settings
hook_logging:
  log_lifecycle: false # Log hook start/end events
  log_decisions: false # Log decision points
  log_errors: false    # Log error events
  log_timing: false    # Include timing information

# Performance Settings
performance:
  max_overhead_ms: 1    # Maximum acceptable logging overhead
  async_logging: false  # Keep simple for now

# Privacy Settings
privacy:
  sanitize_user_content: true
  exclude_sensitive_data: true
  anonymize_session_ids: false # Keep for correlation

# Hook-Specific Configuration
hook_configuration:
  pre_tool_use:
    enabled: true
    log_tool_selection: true
    log_input_validation: true
  post_tool_use:
    enabled: true
    log_output_processing: true
    log_integration_success: true
  session_start:
    enabled: true
    log_initialization: true
    log_configuration_loading: true
  pre_compact:
    enabled: true
    log_compression_decisions: true
  notification:
    enabled: true
    log_notification_handling: true
  stop:
    enabled: true
    log_cleanup_operations: true
  subagent_stop:
    enabled: true
    log_subagent_cleanup: true

# Development Settings
development:
  verbose_errors: false
  include_stack_traces: false # Keep logs clean
  debug_mode: false


@@ -1,308 +0,0 @@
# MCP Orchestration Configuration
# Intelligent server selection, coordination, and load balancing patterns
# Enables smart MCP server orchestration based on context and performance
# Metadata
version: "1.0.0"
last_updated: "2025-01-06"
description: "MCP server orchestration intelligence patterns"
# Server Selection Intelligence
server_selection:
decision_tree:
# UI/Design Operations
- name: "ui_component_operations"
conditions:
keywords: ["component", "ui", "design", "frontend", "jsx", "tsx", "css"]
OR:
- operation_type: ["build", "implement", "design"]
- file_extensions: [".jsx", ".tsx", ".vue", ".css", ".scss"]
primary_server: "magic"
support_servers: ["context7"]
coordination_mode: "parallel"
confidence: 0.9
# Analysis and Architecture Operations
- name: "complex_analysis"
conditions:
AND:
- complexity_score: ">0.7"
- operation_type: ["analyze", "review", "debug", "troubleshoot"]
OR:
- file_count: ">10"
- keywords: ["architecture", "system", "complex"]
primary_server: "sequential"
support_servers: ["context7", "serena"]
coordination_mode: "sequential"
confidence: 0.85
# Code Refactoring and Transformation
- name: "code_refactoring"
conditions:
AND:
- operation_type: ["refactor", "transform", "modify"]
OR:
- file_count: ">5"
- complexity_score: ">0.5"
- keywords: ["refactor", "cleanup", "optimize"]
primary_server: "serena"
support_servers: ["morphllm", "sequential"]
coordination_mode: "hybrid"
confidence: 0.8
# Documentation and Learning
- name: "documentation_operations"
conditions:
keywords: ["document", "explain", "guide", "tutorial", "learn"]
OR:
- operation_type: ["document", "explain"]
- file_extensions: [".md", ".rst", ".txt"]
primary_server: "context7"
support_servers: ["sequential"]
coordination_mode: "sequential"
confidence: 0.85
# Testing and Validation
- name: "testing_operations"
conditions:
keywords: ["test", "validate", "check", "verify", "e2e"]
OR:
- operation_type: ["test", "validate"]
- file_patterns: ["*test*", "*spec*", "*e2e*"]
primary_server: "playwright"
support_servers: ["sequential", "magic"]
coordination_mode: "parallel"
confidence: 0.8
# Fast Edits and Transformations
- name: "fast_edits"
conditions:
AND:
- complexity_score: "<0.4"
- file_count: "<5"
operation_type: ["edit", "modify", "fix", "update"]
primary_server: "morphllm"
support_servers: ["serena"]
coordination_mode: "fallback"
confidence: 0.7
# Fallback Strategy
fallback_chain:
default_primary: "sequential"
fallback_sequence: ["context7", "serena", "morphllm", "magic", "playwright"]
fallback_threshold: 3.0 # seconds timeout
# Load Balancing Intelligence
load_balancing:
health_monitoring:
# Server health check configuration
check_interval: 30 # seconds
timeout: 5 # seconds
retry_count: 3
health_metrics:
- response_time
- error_rate
- request_queue_size
- availability_percentage
performance_thresholds:
# Performance-based routing thresholds
response_time:
excellent: 500 # ms
good: 1000 # ms
warning: 2000 # ms
critical: 5000 # ms
error_rate:
excellent: 0.01 # 1%
good: 0.03 # 3%
warning: 0.05 # 5%
critical: 0.15 # 15%
queue_size:
excellent: 0
good: 2
warning: 5
critical: 10
routing_strategies:
# Load balancing algorithms
primary_strategy: "weighted_performance"
strategies:
round_robin:
description: "Distribute requests evenly across healthy servers"
weight_factor: "equal"
weighted_performance:
description: "Route based on server performance metrics"
weight_factors:
response_time: 0.4
error_rate: 0.3
availability: 0.3
least_connections:
description: "Route to server with fewest active connections"
connection_tracking: true
performance_based:
description: "Route to best-performing server"
performance_window: 300 # seconds
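The `weighted_performance` strategy above (response_time 0.4, error_rate 0.3, availability 0.3) can be sketched as a per-server score, with routing to the highest scorer. The normalization is an assumption: response time is scaled against the 5000 ms critical threshold, and lower-is-better metrics are inverted so higher scores always mean a better routing target:

```python
def weighted_performance_score(response_time_ms: float,
                               error_rate: float,
                               availability: float) -> float:
    """Score a server in [0.0, 1.0] under the weighted_performance strategy."""
    # Invert lower-is-better metrics so all terms reward healthier servers
    rt = max(0.0, 1.0 - min(response_time_ms, 5000.0) / 5000.0)
    err = max(0.0, 1.0 - min(error_rate, 1.0))
    return 0.4 * rt + 0.3 * err + 0.3 * availability
```

Routing would then pick `max(servers, key=...)` over these scores among healthy servers.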
# Cross-Server Coordination
coordination_patterns:
sequential_coordination:
# When servers work in sequence
patterns:
- name: "analysis_then_implementation"
sequence: ["sequential", "morphllm"]
trigger: {operation: "implement", analysis_required: true}
- name: "research_then_build"
sequence: ["context7", "magic"]
trigger: {operation: "build", research_required: true}
- name: "plan_then_execute"
sequence: ["sequential", "serena", "morphllm"]
trigger: {complexity: ">0.7", operation: "refactor"}
parallel_coordination:
# When servers work simultaneously
patterns:
- name: "ui_with_docs"
parallel: ["magic", "context7"]
trigger: {operation: "build", component_type: "ui"}
synchronization: "merge_results"
- name: "test_with_validation"
parallel: ["playwright", "sequential"]
trigger: {operation: "test", validation_required: true}
synchronization: "wait_all"
hybrid_coordination:
# Mixed coordination patterns
patterns:
- name: "comprehensive_refactoring"
phases:
- phase: 1
servers: ["sequential"] # Analysis
wait_for_completion: true
- phase: 2
servers: ["serena", "morphllm"] # Parallel execution
synchronization: "coordinate_changes"
# Dynamic Server Capabilities
capability_assessment:
dynamic_capabilities:
# Assess server capabilities in real-time
assessment_interval: 60 # seconds
capability_metrics:
- processing_speed
- accuracy_score
- specialization_match
- current_load
capability_mapping:
# Map operations to server capabilities
magic:
specializations: ["ui", "components", "design", "frontend"]
performance_profile: "medium_latency_high_quality"
optimal_load: 3
sequential:
specializations: ["analysis", "debugging", "complex_reasoning"]
performance_profile: "high_latency_high_quality"
optimal_load: 2
context7:
specializations: ["documentation", "learning", "research"]
performance_profile: "low_latency_medium_quality"
optimal_load: 5
serena:
specializations: ["refactoring", "large_codebases", "semantic_analysis"]
performance_profile: "medium_latency_high_precision"
optimal_load: 3
morphllm:
specializations: ["fast_edits", "transformations", "pattern_matching"]
performance_profile: "low_latency_medium_quality"
optimal_load: 4
playwright:
specializations: ["testing", "validation", "browser_automation"]
performance_profile: "high_latency_specialized"
optimal_load: 2
# Error Handling and Recovery
error_handling:
retry_strategies:
# Server error retry patterns
exponential_backoff:
initial_delay: 1 # seconds
max_delay: 60 # seconds
multiplier: 2
max_retries: 3
graceful_degradation:
# Fallback when servers fail
degradation_levels:
- level: 1
strategy: "use_secondary_server"
performance_impact: "minimal"
- level: 2
strategy: "reduce_functionality"
performance_impact: "moderate"
- level: 3
strategy: "basic_operation_only"
performance_impact: "significant"
circuit_breaker:
# Circuit breaker pattern for failing servers
failure_threshold: 5 # failures before opening circuit
recovery_timeout: 30 # seconds before attempting recovery
half_open_requests: 3 # test requests during recovery
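The circuit_breaker settings above map onto the standard pattern: open the circuit after 5 consecutive failures, wait 30 seconds, then let 3 half-open test requests through. A minimal Python sketch under those values; the state-machine details beyond the three configured numbers are assumptions, not the hooks' actual implementation:

```python
import time

class CircuitBreaker:
    """Circuit breaker per the config above: open after failure_threshold
    consecutive failures, retry after recovery_timeout seconds, and allow
    half_open_requests probe requests while recovering."""

    def __init__(self, failure_threshold=5, recovery_timeout=30, half_open_requests=3):
        self.failure_threshold = failure_threshold
        self.recovery_timeout = recovery_timeout
        self.half_open_requests = half_open_requests
        self.failures = 0
        self.state = "closed"
        self.opened_at = 0.0
        self.half_open_used = 0

    def allow_request(self):
        if self.state == "open":
            # After the recovery timeout, move to half-open and allow probes
            if time.monotonic() - self.opened_at >= self.recovery_timeout:
                self.state = "half_open"
                self.half_open_used = 0
            else:
                return False
        if self.state == "half_open":
            if self.half_open_used >= self.half_open_requests:
                return False
            self.half_open_used += 1
        return True

    def record_success(self):
        self.failures = 0
        self.state = "closed"

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.failure_threshold:
            self.state = "open"
            self.opened_at = time.monotonic()
```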
# Performance Optimization
performance_optimization:
caching:
# Server response caching
enable_response_caching: true
cache_duration: 300 # seconds
max_cache_size: 100 # responses
cache_key_strategy: "operation_context_hash"
request_optimization:
# Request batching and optimization
enable_request_batching: true
batch_size: 3
batch_timeout: 1000 # ms
predictive_routing:
# Predict optimal server based on patterns
enable_prediction: true
prediction_model: "pattern_based"
prediction_confidence_threshold: 0.7
# Monitoring and Analytics
monitoring:
metrics_collection:
# Collect orchestration metrics
collect_routing_decisions: true
collect_performance_metrics: true
collect_error_patterns: true
retention_days: 30
analytics:
# Server orchestration analytics
routing_accuracy_tracking: true
performance_trend_analysis: true
optimization_recommendations: true
alerts:
# Alert thresholds
high_error_rate: 0.1 # 10%
slow_response_time: 5000 # ms
server_unavailable: true
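The weighted_performance routing strategy and its health thresholds can be illustrated with a small scoring function. This is a hypothetical sketch: the normalization of response time against the 5000 ms critical threshold and the shape of the metrics dictionary are assumptions for illustration, not the framework's actual code.

```python
def weighted_performance_score(metrics, weights=None):
    """Score one server for routing; higher is better.

    Assumes per-server metrics: response_time (ms), error_rate in [0, 1],
    availability in [0, 1], matching the configured weight_factors.
    """
    weights = weights or {"response_time": 0.4, "error_rate": 0.3, "availability": 0.3}
    # Scale response time against the 'critical' threshold (5000 ms)
    rt_score = max(0.0, 1.0 - metrics["response_time"] / 5000)
    err_score = 1.0 - min(metrics["error_rate"], 1.0)
    avail_score = metrics["availability"]
    return (weights["response_time"] * rt_score
            + weights["error_rate"] * err_score
            + weights["availability"] * avail_score)

def route(servers):
    """Pick the server with the best weighted-performance score."""
    return max(servers, key=lambda name: weighted_performance_score(servers[name]))
```

A health monitor would refresh each server's metrics every 30 seconds (the configured check_interval) before calling `route`.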


@@ -1,59 +0,0 @@
# Mode detection patterns for SuperClaude-Lite
mode_detection:
brainstorming:
enabled: true
trigger_patterns:
- "I want to build"
- "thinking about"
- "not sure"
- "maybe.*could"
- "brainstorm"
- "explore"
- "figure out"
- "unclear.*requirements"
- "ambiguous.*needs"
confidence_threshold: 0.7
auto_activate: true
task_management:
enabled: true
trigger_patterns:
- "multiple.*tasks"
- "complex.*system"
- "build.*comprehensive"
- "coordinate.*work"
- "large-scale.*operation"
- "manage.*operations"
- "comprehensive.*refactoring"
- "authentication.*system"
confidence_threshold: 0.7
auto_activate: true
auto_activation_thresholds:
file_count: 3
complexity_score: 0.4
token_efficiency:
enabled: true
trigger_patterns:
- "brief"
- "concise"
- "compressed"
- "efficient.*output"
- "token.*optimization"
- "short.*response"
- "running.*low.*context"
confidence_threshold: 0.75
auto_activate: true
introspection:
enabled: true
trigger_patterns:
- "analyze.*reasoning"
- "examine.*decision"
- "reflect.*on"
- "meta.*cognitive"
- "thinking.*process"
- "reasoning.*process"
- "decision.*made"
confidence_threshold: 0.6
auto_activate: true
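Mode detection as configured above amounts to regex matching against the user's message with a confidence threshold. A minimal sketch, assuming confidence is derived from the number of matched patterns (two or more hits saturate at 1.0); the real scoring scheme is not specified in this config:

```python
import re

# A subset of the trigger_patterns above, for illustration
MODE_PATTERNS = {
    "brainstorming": ["I want to build", "thinking about", "not sure",
                      "maybe.*could", "brainstorm"],
    "token_efficiency": ["brief", "concise", "compressed",
                         "running.*low.*context"],
}

def detect_modes(message, thresholds=None):
    """Return modes whose pattern-match confidence meets the threshold."""
    thresholds = thresholds or {"brainstorming": 0.7, "token_efficiency": 0.75}
    active = []
    for mode, patterns in MODE_PATTERNS.items():
        hits = sum(1 for p in patterns if re.search(p, message, re.IGNORECASE))
        # Assumed scoring: one hit = 0.5 confidence, two or more = 1.0
        confidence = min(hits / 2, 1.0)
        if confidence >= thresholds.get(mode, 0.7):
            active.append(mode)
    return active
```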


@@ -1,97 +0,0 @@
# Orchestrator routing patterns
routing_patterns:
context7:
triggers:
- "library.*documentation"
- "framework.*patterns"
- "react|vue|angular"
- "official.*way"
- "React Query"
- "integrate.*library"
capabilities: ["documentation", "patterns", "integration"]
priority: "medium"
sequential:
triggers:
- "analyze.*complex"
- "debug.*systematic"
- "troubleshoot.*bottleneck"
- "investigate.*root"
- "debug.*why"
- "detailed.*analysis"
- "running.*slowly"
- "performance.*bottleneck"
- "bundle.*size"
capabilities: ["analysis", "debugging", "systematic"]
priority: "high"
magic:
triggers:
- "component.*ui"
- "responsive.*modal"
- "navigation.*component"
- "mobile.*friendly"
- "responsive.*dashboard"
- "charts.*real-time"
- "build.*dashboard"
capabilities: ["ui", "components", "responsive"]
priority: "medium"
playwright:
triggers:
- "test.*workflow"
- "browser.*automation"
- "cross-browser.*testing"
- "performance.*testing"
- "end-to-end.*tests"
- "checkout.*flow"
- "e2e.*tests"
capabilities: ["testing", "automation", "e2e"]
priority: "medium"
morphllm:
triggers:
- "edit.*file"
- "simple.*modification"
- "quick.*change"
capabilities: ["editing", "modification"]
priority: "low"
serena:
triggers:
- "refactor.*codebase"
- "complex.*analysis"
- "multi.*file"
- "refactor.*entire"
- "new.*API.*patterns"
capabilities: ["refactoring", "semantic", "large-scale"]
priority: "high"
# Auto-activation thresholds
auto_activation:
complexity_thresholds:
enable_delegation:
file_count: 3
directory_count: 2
complexity_score: 0.4
enable_sequential:
complexity_score: 0.6
enable_validation:
risk_level: ["high", "critical"]
# Hybrid intelligence selection
hybrid_intelligence:
morphllm_vs_serena:
morphllm_criteria:
file_count_max: 10
complexity_max: 0.6
preferred_operations: ["edit", "modify", "simple_refactor"]
serena_criteria:
file_count_min: 5
complexity_min: 0.4
preferred_operations: ["refactor", "analyze", "extract", "move"]
# Performance optimization
performance_optimization:
resource_management:
token_threshold_percent: 75
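The morphllm_vs_serena criteria reduce to a simple decision function. A sketch, with the fallback to sequential taken from the fallback_chain default_primary; the precedence rules when neither criteria set matches are assumptions:

```python
def choose_edit_server(file_count, complexity, operation):
    """Choose between morphllm and serena per the hybrid_intelligence criteria.

    morphllm: up to 10 files, complexity up to 0.6, simple edit operations.
    serena: 5+ files, complexity 0.4+, semantic refactoring operations.
    The preferred_operations sets are disjoint, so at most one matches.
    """
    morph_ok = (file_count <= 10 and complexity <= 0.6
                and operation in {"edit", "modify", "simple_refactor"})
    serena_ok = (file_count >= 5 and complexity >= 0.4
                 and operation in {"refactor", "analyze", "extract", "move"})
    if morph_ok:
        return "morphllm"
    if serena_ok:
        return "serena"
    return "sequential"  # fallback_chain default_primary
```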


@@ -1,346 +0,0 @@
# SuperClaude-Lite Performance Configuration
# Performance targets, thresholds, and optimization strategies
# Hook Performance Targets
hook_targets:
session_start:
target_ms: 50
warning_threshold_ms: 75
critical_threshold_ms: 100
optimization_priority: "critical"
pre_tool_use:
target_ms: 200
warning_threshold_ms: 300
critical_threshold_ms: 500
optimization_priority: "high"
post_tool_use:
target_ms: 100
warning_threshold_ms: 150
critical_threshold_ms: 250
optimization_priority: "medium"
pre_compact:
target_ms: 150
warning_threshold_ms: 200
critical_threshold_ms: 300
optimization_priority: "high"
notification:
target_ms: 100
warning_threshold_ms: 150
critical_threshold_ms: 200
optimization_priority: "medium"
stop:
target_ms: 200
warning_threshold_ms: 300
critical_threshold_ms: 500
optimization_priority: "low"
subagent_stop:
target_ms: 150
warning_threshold_ms: 200
critical_threshold_ms: 300
optimization_priority: "medium"
# System Performance Targets
system_targets:
overall_session_efficiency: 0.75
mcp_coordination_efficiency: 0.70
compression_effectiveness: 0.50
learning_adaptation_rate: 0.80
user_satisfaction_target: 0.75
resource_utilization:
memory_target_mb: 100
memory_warning_mb: 150
memory_critical_mb: 200
cpu_target_percent: 40
cpu_warning_percent: 60
cpu_critical_percent: 80
token_efficiency_target: 0.40
token_warning_threshold: 0.20
token_critical_threshold: 0.10
# MCP Server Performance
mcp_server_performance:
context7:
activation_target_ms: 150
response_target_ms: 500
cache_hit_ratio_target: 0.70
quality_score_target: 0.90
sequential:
activation_target_ms: 200
response_target_ms: 1000
analysis_depth_target: 0.80
reasoning_quality_target: 0.85
magic:
activation_target_ms: 120
response_target_ms: 800
component_quality_target: 0.85
generation_speed_target: 0.75
playwright:
activation_target_ms: 300
response_target_ms: 2000
test_reliability_target: 0.90
automation_efficiency_target: 0.80
morphllm:
activation_target_ms: 80
response_target_ms: 400
edit_accuracy_target: 0.95
processing_efficiency_target: 0.85
serena:
activation_target_ms: 100
response_target_ms: 600
semantic_accuracy_target: 0.90
memory_efficiency_target: 0.80
# Compression Performance
compression_performance:
target_compression_ratio: 0.50
quality_preservation_minimum: 0.95
processing_speed_target_chars_per_ms: 100
level_targets:
minimal:
compression_ratio: 0.15
quality_preservation: 0.98
processing_time_factor: 1.0
efficient:
compression_ratio: 0.40
quality_preservation: 0.95
processing_time_factor: 1.2
compressed:
compression_ratio: 0.60
quality_preservation: 0.90
processing_time_factor: 1.5
critical:
compression_ratio: 0.75
quality_preservation: 0.85
processing_time_factor: 1.8
emergency:
compression_ratio: 0.85
quality_preservation: 0.80
processing_time_factor: 2.0
# Learning Engine Performance
learning_performance:
adaptation_response_time_ms: 200
pattern_detection_accuracy: 0.80
effectiveness_prediction_accuracy: 0.75
learning_rates:
user_preference_learning: 0.85
operation_pattern_learning: 0.80
performance_optimization_learning: 0.75
error_recovery_learning: 0.90
memory_efficiency:
learning_data_compression_ratio: 0.30
memory_cleanup_efficiency: 0.90
cache_hit_ratio: 0.70
# Quality Gate Performance
quality_gate_performance:
validation_speed_targets:
syntax_validation_ms: 50
type_analysis_ms: 100
code_quality_ms: 150
security_assessment_ms: 200
performance_analysis_ms: 250
accuracy_targets:
rule_compliance_detection: 0.95
principle_alignment_assessment: 0.90
quality_scoring_accuracy: 0.85
security_vulnerability_detection: 0.98
comprehensive_validation_target_ms: 500
# Task Management Performance
task_management_performance:
delegation_efficiency_targets:
file_based_delegation: 0.65
folder_based_delegation: 0.70
auto_delegation: 0.75
wave_orchestration_targets:
coordination_overhead_max: 0.20
wave_synchronization_efficiency: 0.85
parallel_execution_speedup: 1.50
task_completion_targets:
success_rate: 0.90
quality_score: 0.80
time_efficiency: 0.75
# Mode-Specific Performance
mode_performance:
brainstorming:
dialogue_response_time_ms: 300
convergence_efficiency: 0.80
brief_generation_quality: 0.85
user_satisfaction_target: 0.85
task_management:
coordination_overhead_max: 0.15
delegation_efficiency: 0.70
parallel_execution_benefit: 1.40
analytics_generation_time_ms: 500
token_efficiency:
compression_processing_time_ms: 150
efficiency_gain_target: 0.40
quality_preservation_target: 0.95
user_acceptance_rate: 0.80
introspection:
analysis_depth_target: 0.80
insight_quality_target: 0.75
transparency_effectiveness: 0.85
learning_value_target: 0.70
# Performance Monitoring
performance_monitoring:
real_time_tracking:
enabled: true
sampling_interval_ms: 100
metric_aggregation_window_s: 60
alert_threshold_breaches: 3
metrics_collection:
execution_times: true
resource_utilization: true
quality_scores: true
user_satisfaction: true
error_rates: true
alerting:
performance_degradation: true
resource_exhaustion: true
quality_threshold_breach: true
user_satisfaction_drop: true
reporting:
hourly_summaries: true
daily_analytics: true
weekly_trends: true
monthly_optimization_reports: true
# Optimization Strategies
optimization_strategies:
caching:
intelligent_caching: true
cache_warming: true
predictive_loading: true
cache_invalidation: "smart"
parallel_processing:
auto_detection: true
optimal_concurrency: "dynamic"
load_balancing: "intelligent"
resource_coordination: "adaptive"
resource_management:
memory_optimization: true
cpu_optimization: true
token_optimization: true
storage_optimization: true
adaptive_performance:
dynamic_target_adjustment: true
context_aware_optimization: true
learning_based_improvement: true
user_preference_integration: true
# Performance Thresholds
performance_thresholds:
green_zone: # 0-70% resource usage
all_optimizations_available: true
proactive_caching: true
full_feature_set: true
normal_verbosity: true
yellow_zone: # 70-85% resource usage
efficiency_mode_activation: true
cache_optimization: true
reduced_verbosity: true
non_critical_feature_deferral: true
orange_zone: # 85-95% resource usage
aggressive_optimization: true
compression_activation: true
feature_reduction: true
essential_operations_only: true
red_zone: # 95%+ resource usage
emergency_mode: true
maximum_compression: true
minimal_features: true
critical_operations_only: true
# Fallback Performance
fallback_performance:
graceful_degradation:
feature_prioritization: true
quality_vs_speed_tradeoffs: "intelligent"
user_notification: true
automatic_recovery: true
emergency_protocols:
resource_exhaustion: "immediate_compression"
timeout_protection: "operation_cancellation"
error_cascade_prevention: "circuit_breaker"
recovery_strategies:
performance_restoration: "gradual"
feature_reactivation: "conditional"
quality_normalization: "monitored"
# Benchmarking and Testing
benchmarking:
performance_baselines:
establish_on_startup: true
regular_recalibration: true
environment_specific: true
load_testing:
synthetic_workloads: true
stress_testing: true
endurance_testing: true
regression_testing:
performance_regression_detection: true
quality_regression_detection: true
feature_regression_detection: true
# Integration Performance
integration_performance:
cross_hook_coordination: 0.90
mcp_server_orchestration: 0.85
mode_switching_efficiency: 0.80
learning_engine_responsiveness: 0.85
end_to_end_targets:
session_initialization: 500 # ms
complex_operation_completion: 5000 # ms
session_termination: 1000 # ms
system_health_indicators:
overall_efficiency: 0.75
user_experience_quality: 0.80
system_reliability: 0.95
adaptation_effectiveness: 0.70
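The hook_targets thresholds translate directly into a timing classifier. A sketch using two of the hooks above; the function and dictionary names are illustrative, not taken from the implementation:

```python
# Thresholds copied from the hook_targets section (ms)
HOOK_TARGETS = {
    "session_start": {"target_ms": 50, "warning_threshold_ms": 75,
                      "critical_threshold_ms": 100},
    "pre_tool_use": {"target_ms": 200, "warning_threshold_ms": 300,
                     "critical_threshold_ms": 500},
}

def classify_hook_timing(hook, elapsed_ms):
    """Classify one hook execution as ok, warning, or critical."""
    t = HOOK_TARGETS[hook]
    if elapsed_ms >= t["critical_threshold_ms"]:
        return "critical"
    if elapsed_ms >= t["warning_threshold_ms"]:
        return "warning"
    return "ok"
```

Per the performance_monitoring section, three consecutive threshold breaches (alert_threshold_breaches) would raise an alert.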


@@ -1,299 +0,0 @@
# Performance Intelligence Configuration
# Adaptive performance patterns, auto-optimization, and resource management
# Enables intelligent performance monitoring and self-optimization
# Metadata
version: "1.0.0"
last_updated: "2025-01-06"
description: "Performance intelligence and auto-optimization patterns"
# Adaptive Performance Targets
adaptive_targets:
baseline_management:
# Dynamic baseline adjustment based on system performance
adjustment_strategy: "rolling_average"
adjustment_window: 50 # operations
adjustment_sensitivity: 0.15 # 15% threshold for adjustment
min_samples: 10 # minimum samples before adjustment
baseline_metrics:
response_time:
initial_target: 500 # ms
acceptable_variance: 0.3
improvement_threshold: 0.1
resource_usage:
initial_target: 0.7 # 70%
acceptable_variance: 0.2
critical_threshold: 0.9
error_rate:
initial_target: 0.02 # 2%
acceptable_variance: 0.01
critical_threshold: 0.1
target_adaptation:
# How targets adapt to system capabilities
adaptation_triggers:
- condition: {performance_improvement: ">20%", duration: ">7_days"}
action: "tighten_targets"
adjustment: 0.15
- condition: {performance_degradation: ">15%", duration: ">3_days"}
action: "relax_targets"
adjustment: 0.2
- condition: {system_upgrade_detected: true}
action: "recalibrate_baselines"
reset_period: "24_hours"
adaptation_limits:
max_target_tightening: 0.5 # Don't make targets too aggressive
max_target_relaxation: 2.0 # Don't make targets too loose
adaptation_cooldown: 3600 # seconds between major adjustments
# Auto-Optimization Engine
auto_optimization:
optimization_triggers:
# Automatic optimization triggers
performance_triggers:
- name: "response_time_degradation"
condition: {avg_response_time: ">target*1.3", samples: ">10"}
urgency: "high"
actions: ["enable_aggressive_caching", "reduce_analysis_depth", "parallel_processing"]
- name: "memory_pressure"
condition: {memory_usage: ">0.85", duration: ">300_seconds"}
urgency: "critical"
actions: ["garbage_collection", "cache_cleanup", "reduce_context_size"]
- name: "cpu_saturation"
condition: {cpu_usage: ">0.9", duration: ">60_seconds"}
urgency: "high"
actions: ["reduce_concurrent_operations", "defer_non_critical", "enable_throttling"]
- name: "error_rate_spike"
condition: {error_rate: ">0.1", recent_window: "5_minutes"}
urgency: "critical"
actions: ["enable_fallback_mode", "increase_timeouts", "reduce_complexity"]
optimization_strategies:
# Available optimization strategies
aggressive_caching:
description: "Enable aggressive caching of results and computations"
performance_impact: 0.3 # Expected improvement
resource_cost: 0.1 # Memory cost
duration: 1800 # seconds
parallel_processing:
description: "Increase parallelization where possible"
performance_impact: 0.25
resource_cost: 0.2
duration: 3600
reduce_analysis_depth:
description: "Reduce depth of analysis to improve speed"
performance_impact: 0.4
quality_impact: -0.1 # Slight quality reduction
duration: 1800
intelligent_batching:
description: "Batch similar operations for efficiency"
performance_impact: 0.2
resource_cost: -0.05 # Reduces resource usage
duration: 3600
# Resource Management Intelligence
resource_management:
resource_zones:
# Performance zones with different strategies
green_zone:
threshold: 0.60 # Below 60% resource usage
strategy: "optimal_performance"
features_enabled: ["full_analysis", "comprehensive_caching", "background_optimization"]
yellow_zone:
threshold: 0.75 # 60-75% resource usage
strategy: "balanced_optimization"
features_enabled: ["standard_analysis", "selective_caching", "reduced_background"]
optimizations: ["defer_non_critical", "reduce_verbosity"]
orange_zone:
threshold: 0.85 # 75-85% resource usage
strategy: "performance_preservation"
features_enabled: ["essential_analysis", "minimal_caching"]
optimizations: ["aggressive_caching", "parallel_where_safe", "reduce_context"]
red_zone:
threshold: 0.95 # 85-95% resource usage
strategy: "resource_conservation"
features_enabled: ["critical_only"]
optimizations: ["emergency_cleanup", "minimal_processing", "fail_fast"]
critical_zone:
threshold: 1.0 # Above 95% resource usage
strategy: "emergency_mode"
features_enabled: []
optimizations: ["immediate_cleanup", "operation_rejection", "system_protection"]
dynamic_allocation:
# Intelligent resource allocation
allocation_strategies:
workload_based:
description: "Allocate based on current workload patterns"
factors: ["operation_complexity", "expected_duration", "priority"]
predictive:
description: "Allocate based on predicted resource needs"
factors: ["historical_patterns", "operation_type", "context_size"]
adaptive:
description: "Adapt allocation based on real-time performance"
factors: ["current_performance", "resource_availability", "optimization_goals"]
# Performance Regression Detection
regression_detection:
detection_algorithms:
# Algorithms for detecting performance regression
statistical_analysis:
algorithm: "t_test"
confidence_level: 0.95
minimum_samples: 20
window_size: 100 # operations
trend_analysis:
algorithm: "linear_regression"
trend_threshold: 0.1 # 10% degradation trend
analysis_window: 168 # hours (1 week)
anomaly_detection:
algorithm: "isolation_forest"
contamination: 0.1 # Expected anomaly rate
sensitivity: 0.8
regression_patterns:
# Common regression patterns to detect
gradual_degradation:
pattern: {performance_trend: "decreasing", duration: ">5_days"}
severity: "medium"
investigation: "check_for_memory_leaks"
sudden_degradation:
pattern: {performance_drop: ">30%", timeframe: "<1_hour"}
severity: "high"
investigation: "check_recent_changes"
periodic_degradation:
pattern: {performance_cycles: "detected", frequency: "regular"}
severity: "low"
investigation: "analyze_periodic_patterns"
# Intelligent Resource Optimization
intelligent_optimization:
predictive_optimization:
# Predict and prevent performance issues
prediction_models:
resource_exhaustion:
model_type: "time_series"
prediction_horizon: 3600 # seconds
accuracy_threshold: 0.8
performance_degradation:
model_type: "pattern_matching"
pattern_library: "historical_degradations"
confidence_threshold: 0.7
proactive_actions:
- prediction: "memory_exhaustion"
lead_time: 1800 # seconds
actions: ["preemptive_cleanup", "cache_optimization", "context_reduction"]
- prediction: "cpu_saturation"
lead_time: 600 # seconds
actions: ["reduce_parallelism", "defer_background_tasks", "enable_throttling"]
optimization_recommendations:
# Generate optimization recommendations
recommendation_engine:
analysis_depth: "comprehensive"
recommendation_confidence: 0.8
implementation_difficulty: "user_friendly"
recommendation_types:
configuration_tuning:
description: "Suggest configuration changes for better performance"
impact_assessment: "quantified"
resource_allocation:
description: "Recommend better resource allocation strategies"
cost_benefit_analysis: true
workflow_optimization:
description: "Suggest workflow improvements"
user_experience_impact: "minimal"
# Performance Monitoring Intelligence
monitoring_intelligence:
intelligent_metrics:
# Smart metric collection and analysis
adaptive_sampling:
base_sampling_rate: 1.0 # Sample every operation
high_load_rate: 0.5 # Reduce sampling under load
critical_load_rate: 0.1 # Minimal sampling in critical situations
contextual_metrics:
# Collect different metrics based on context
ui_operations:
focus_metrics: ["response_time", "render_time", "user_interaction_delay"]
analysis_operations:
focus_metrics: ["processing_time", "memory_usage", "accuracy_score"]
batch_operations:
focus_metrics: ["throughput", "resource_efficiency", "completion_rate"]
performance_insights:
# Generate performance insights
insight_generation:
pattern_recognition: true
correlation_analysis: true
root_cause_analysis: true
improvement_suggestions: true
insight_types:
bottleneck_identification:
description: "Identify performance bottlenecks"
priority: "high"
optimization_opportunities:
description: "Find optimization opportunities"
priority: "medium"
capacity_planning:
description: "Predict capacity requirements"
priority: "low"
# Performance Validation
performance_validation:
validation_framework:
# Validate performance improvements
a_b_testing:
enable_automatic_testing: true
test_duration: 3600 # seconds
statistical_significance: 0.95
performance_benchmarking:
benchmark_frequency: "weekly"
regression_threshold: 0.05 # 5% regression tolerance
continuous_improvement:
# Continuous performance improvement
improvement_tracking:
track_optimization_effectiveness: true
measure_user_satisfaction: true
monitor_system_health: true
feedback_loops:
performance_feedback: "real_time"
user_feedback_integration: true
system_learning_integration: true
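The baseline_management settings describe a rolling-average target that only moves when the average drifts past the 15% adjustment_sensitivity, after at least min_samples observations. A sketch under those assumptions; the exact update rule is not specified in the config:

```python
from collections import deque

class AdaptiveBaseline:
    """Rolling-average performance baseline per baseline_management:
    window of 50 operations, 15% sensitivity, minimum 10 samples.
    Assumes a nonzero initial target."""

    def __init__(self, initial_target, window=50, sensitivity=0.15, min_samples=10):
        self.target = initial_target
        self.samples = deque(maxlen=window)
        self.sensitivity = sensitivity
        self.min_samples = min_samples

    def record(self, value):
        """Record one observation and return the (possibly adjusted) target."""
        self.samples.append(value)
        if len(self.samples) < self.min_samples:
            return self.target
        avg = sum(self.samples) / len(self.samples)
        # Only move the target when the rolling average drifts beyond sensitivity
        if abs(avg - self.target) / self.target > self.sensitivity:
            self.target = avg
        return self.target
```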


@@ -1,351 +0,0 @@
# SuperClaude-Lite Session Configuration
# SessionStart/Stop lifecycle management and analytics
# Session Lifecycle Configuration
session_lifecycle:
initialization:
performance_target_ms: 50
auto_project_detection: true
context_loading_strategy: "selective"
framework_exclusion_enabled: true
default_modes:
- "adaptive_intelligence"
- "performance_monitoring"
intelligence_activation:
pattern_detection: true
mcp_routing: true
learning_integration: true
compression_optimization: true
termination:
performance_target_ms: 200
analytics_generation: true
learning_consolidation: true
session_persistence: true
cleanup_optimization: true
# Project Type Detection
project_detection:
file_indicators:
nodejs:
- "package.json"
- "node_modules/"
- "yarn.lock"
- "pnpm-lock.yaml"
python:
- "pyproject.toml"
- "setup.py"
- "requirements.txt"
- "__pycache__/"
- ".py"
rust:
- "Cargo.toml"
- "Cargo.lock"
- "src/main.rs"
- "src/lib.rs"
go:
- "go.mod"
- "go.sum"
- "main.go"
web_frontend:
- "index.html"
- "public/"
- "dist/"
- "build/"
- "src/components/"
framework_detection:
react:
- "react"
- "next.js"
- "@types/react"
vue:
- "vue"
- "nuxt"
- "@vue/cli"
angular:
- "@angular/core"
- "angular.json"
express:
- "express"
- "app.js"
- "server.js"
# Intelligence Activation Rules
intelligence_activation:
mode_detection:
brainstorming:
triggers:
- "new project"
- "not sure"
- "thinking about"
- "explore"
- "brainstorm"
confidence_threshold: 0.7
auto_activate: true
task_management:
triggers:
- "multiple files"
- "complex operation"
- "system-wide"
- "comprehensive"
file_count_threshold: 3
complexity_threshold: 0.4
auto_activate: true
token_efficiency:
triggers:
- "resource constraint"
- "brevity"
- "compressed"
- "efficient"
resource_threshold_percent: 75
conversation_length_threshold: 100
auto_activate: true
mcp_server_activation:
context7:
triggers:
- "library"
- "documentation"
- "framework"
- "api reference"
project_indicators:
- "external_dependencies"
- "framework_detected"
auto_activate: true
sequential:
triggers:
- "analyze"
- "debug"
- "complex"
- "systematic"
complexity_threshold: 0.6
auto_activate: true
magic:
triggers:
- "component"
- "ui"
- "frontend"
- "design"
project_type_match: ["web_frontend", "react", "vue", "angular"]
auto_activate: true
playwright:
triggers:
- "test"
- "automation"
- "browser"
- "e2e"
project_indicators:
- "has_tests"
- "test_framework_detected"
auto_activate: false # Manual activation preferred
morphllm:
triggers:
- "edit"
- "modify"
- "quick change"
file_count_max: 10
complexity_max: 0.6
auto_activate: true
serena:
triggers:
- "navigate"
- "find"
- "search"
- "analyze"
file_count_min: 5
complexity_min: 0.4
auto_activate: true
# Session Analytics Configuration
session_analytics:
performance_tracking:
enabled: true
metrics:
- "operation_count"
- "tool_usage_patterns"
- "mcp_server_effectiveness"
- "error_rates"
- "completion_times"
- "resource_utilization"
effectiveness_measurement:
enabled: true
    factors:
      productivity:
        weight: 0.4
      quality:
        weight: 0.3
      user_satisfaction:
        weight: 0.2
      learning_value:
        weight: 0.1
learning_consolidation:
enabled: true
pattern_detection: true
adaptation_creation: true
effectiveness_feedback: true
insight_generation: true
# Session Persistence
session_persistence:
enabled: true
storage_strategy: "intelligent_compression"
retention_policy:
session_data_days: 90
analytics_data_days: 365
learning_data_persistent: true
compression_settings:
session_metadata: "efficient" # 40-70% compression
analytics_data: "compressed" # 70-85% compression
learning_data: "minimal" # Preserve learning quality
cleanup_automation:
enabled: true
old_session_cleanup: true
max_sessions_retained: 50
storage_optimization: true
# Notification Processing
notifications:
enabled: true
just_in_time_loading: true
pattern_updates: true
intelligence_updates: true
priority_handling:
critical: "immediate_processing"
high: "fast_track_processing"
medium: "standard_processing"
low: "background_processing"
caching_strategy:
documentation_cache_minutes: 30
pattern_cache_minutes: 60
intelligence_cache_minutes: 15
# Task Management Integration
task_management:
enabled: true
delegation_strategies:
files: "file_based_delegation"
folders: "directory_based_delegation"
auto: "intelligent_auto_detection"
wave_orchestration:
enabled: true
complexity_threshold: 0.4
file_count_threshold: 3
operation_types_threshold: 2
performance_optimization:
parallel_execution: true
resource_management: true
coordination_efficiency: true
# User Experience Configuration
user_experience:
session_feedback:
enabled: true
satisfaction_tracking: true
improvement_suggestions: true
personalization:
enabled: true
preference_learning: true
adaptation_application: true
context_awareness: true
progressive_enhancement:
enabled: true
capability_discovery: true
feature_introduction: true
learning_curve_optimization: true
# Performance Targets
performance_targets:
session_start_ms: 50
session_stop_ms: 200
context_loading_ms: 500
analytics_generation_ms: 1000
efficiency_targets:
productivity_score: 0.7
quality_score: 0.8
satisfaction_score: 0.7
learning_value: 0.6
resource_utilization:
memory_efficient: true
cpu_optimization: true
token_management: true
storage_optimization: true
# Error Handling and Recovery
error_handling:
graceful_degradation: true
fallback_strategies: true
error_learning: true
recovery_optimization: true
session_recovery:
auto_recovery: true
state_preservation: true
context_restoration: true
learning_retention: true
error_patterns:
detection: true
prevention: true
learning_integration: true
adaptation_triggers: true
# Integration Configuration
integration:
mcp_servers:
coordination: "seamless"
fallback_handling: "automatic"
performance_monitoring: "continuous"
learning_engine:
session_learning: true
pattern_recognition: true
effectiveness_tracking: true
adaptation_application: true
compression_engine:
session_data_compression: true
quality_preservation: true
selective_application: true
quality_gates:
session_validation: true
analytics_verification: true
learning_quality_assurance: true
# Development and Debugging
development_support:
session_debugging: true
performance_profiling: true
analytics_validation: true
learning_verification: true
metrics_collection:
detailed_timing: true
resource_tracking: true
effectiveness_measurement: true
quality_assessment: true
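The file_indicators block supports a simple presence-count ranking. A sketch that checks filenames only; real detection would also weigh directory indicators and dependency contents, and the scoring rule here is an assumption:

```python
# A subset of the file_indicators above, for illustration
FILE_INDICATORS = {
    "nodejs": ["package.json", "yarn.lock", "pnpm-lock.yaml"],
    "python": ["pyproject.toml", "setup.py", "requirements.txt"],
    "rust": ["Cargo.toml", "Cargo.lock"],
    "go": ["go.mod", "go.sum", "main.go"],
}

def detect_project_types(filenames):
    """Rank candidate project types by how many indicator files are present."""
    present = set(filenames)
    scores = {ptype: sum(1 for f in indicators if f in present)
              for ptype, indicators in FILE_INDICATORS.items()}
    # Highest indicator count first; types with no hits are dropped
    return sorted((p for p in scores if scores[p] > 0), key=lambda p: -scores[p])
```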


@@ -1,385 +0,0 @@
# User Experience Intelligence Configuration
# UX optimization, project detection, and user-centric intelligence patterns
# Enables intelligent user experience through smart defaults and proactive assistance
# Metadata
version: "1.0.0"
last_updated: "2025-01-06"
description: "User experience optimization and project intelligence patterns"
# Project Type Detection
project_detection:
detection_patterns:
# Detect project types based on files and structure
frontend_frameworks:
react_project:
file_indicators:
- "package.json"
- "*.tsx"
- "*.jsx"
- "react" # in package.json dependencies
directory_indicators:
- "src/components"
- "public"
- "node_modules"
confidence_threshold: 0.8
recommendations:
mcp_servers: ["magic", "context7", "playwright"]
compression_level: "minimal"
performance_focus: "ui_responsiveness"
vue_project:
file_indicators:
- "package.json"
- "*.vue"
- "vue.config.js"
- "vue" # in dependencies
directory_indicators:
- "src/components"
- "src/views"
confidence_threshold: 0.8
recommendations:
mcp_servers: ["magic", "context7"]
compression_level: "standard"
angular_project:
file_indicators:
- "angular.json"
- "*.component.ts"
- "@angular" # in dependencies
directory_indicators:
- "src/app"
- "e2e"
confidence_threshold: 0.9
recommendations:
mcp_servers: ["magic", "context7", "playwright"]
backend_frameworks:
python_project:
file_indicators:
- "requirements.txt"
- "pyproject.toml"
- "setup.py"
- "*.py"
directory_indicators:
- "src"
- "tests"
- "__pycache__"
confidence_threshold: 0.7
recommendations:
mcp_servers: ["serena", "sequential", "context7"]
compression_level: "standard"
validation_level: "enhanced"
node_backend:
file_indicators:
- "package.json"
- "*.js"
- "express" # in dependencies
- "server.js"
directory_indicators:
- "routes"
- "controllers"
- "middleware"
confidence_threshold: 0.8
recommendations:
mcp_servers: ["sequential", "context7", "serena"]
data_science:
jupyter_project:
file_indicators:
- "*.ipynb"
- "requirements.txt"
- "conda.yml"
directory_indicators:
- "notebooks"
- "data"
confidence_threshold: 0.9
recommendations:
mcp_servers: ["sequential", "context7"]
compression_level: "minimal"
analysis_depth: "comprehensive"
documentation:
docs_project:
file_indicators:
- "*.md"
- "docs/"
- "README.md"
- "mkdocs.yml"
directory_indicators:
- "docs"
- "documentation"
confidence_threshold: 0.8
recommendations:
mcp_servers: ["context7", "sequential"]
focus_areas: ["clarity", "completeness"]
# User Preference Intelligence
user_preferences:
preference_learning:
# Learn user preferences from behavior patterns
interaction_patterns:
command_preferences:
track_command_usage: true
track_flag_preferences: true
track_workflow_patterns: true
learning_window: 100 # operations
performance_preferences:
speed_vs_quality_preference:
indicators: ["timeout_tolerance", "quality_acceptance", "performance_complaints"]
classification: ["speed_focused", "quality_focused", "balanced"]
detail_level_preference:
indicators: ["verbose_mode_usage", "summary_requests", "detail_requests"]
classification: ["concise", "detailed", "adaptive"]
preference_adaptation:
# Adapt behavior based on learned preferences
adaptation_strategies:
speed_focused_user:
optimizations: ["aggressive_caching", "parallel_execution", "reduced_analysis"]
ui_changes: ["shorter_responses", "quick_suggestions", "minimal_explanations"]
quality_focused_user:
optimizations: ["comprehensive_analysis", "detailed_validation", "thorough_documentation"]
ui_changes: ["detailed_responses", "comprehensive_suggestions", "full_explanations"]
efficiency_focused_user:
optimizations: ["smart_defaults", "workflow_automation", "predictive_suggestions"]
ui_changes: ["proactive_help", "automated_optimizations", "efficiency_tips"]
# Proactive User Assistance
proactive_assistance:
intelligent_suggestions:
# Provide intelligent suggestions based on context
optimization_suggestions:
- trigger: {repeated_operations: ">5", same_pattern: true}
suggestion: "Consider creating a script or alias for this repeated operation"
confidence: 0.8
category: "workflow_optimization"
- trigger: {performance_issues: "detected", duration: ">3_sessions"}
suggestion: "Performance optimization recommendations available"
action: "show_performance_guide"
confidence: 0.9
category: "performance"
- trigger: {error_pattern: "recurring", count: ">3"}
suggestion: "Automated error recovery pattern available"
action: "enable_auto_recovery"
confidence: 0.85
category: "error_prevention"
contextual_help:
# Provide contextual help and guidance
help_triggers:
- context: {new_user: true, session_count: "<5"}
help_type: "onboarding_guidance"
content: "Getting started tips and best practices"
- context: {error_rate: ">10%", recent_errors: ">3"}
help_type: "troubleshooting_assistance"
content: "Common error solutions and debugging tips"
- context: {complex_operation: true, user_expertise: "beginner"}
help_type: "step_by_step_guidance"
content: "Detailed guidance for complex operations"
# Smart Defaults Intelligence
smart_defaults:
context_aware_defaults:
# Generate smart defaults based on context
project_based_defaults:
react_project:
default_mcp_servers: ["magic", "context7"]
default_compression: "minimal"
default_analysis_depth: "ui_focused"
default_validation: "component_focused"
python_project:
default_mcp_servers: ["serena", "sequential"]
default_compression: "standard"
default_analysis_depth: "comprehensive"
default_validation: "enhanced"
documentation_project:
default_mcp_servers: ["context7"]
default_compression: "minimal"
default_analysis_depth: "content_focused"
default_validation: "readability_focused"
dynamic_configuration:
# Dynamically adjust configuration
configuration_adaptation:
performance_based:
triggers:
- condition: {system_performance: "high"}
adjustments: {analysis_depth: "comprehensive", features: "all_enabled"}
- condition: {system_performance: "low"}
adjustments: {analysis_depth: "essential", features: "performance_focused"}
user_expertise_based:
triggers:
- condition: {user_expertise: "expert"}
adjustments: {verbosity: "minimal", automation: "high", warnings: "reduced"}
- condition: {user_expertise: "beginner"}
adjustments: {verbosity: "detailed", automation: "guided", warnings: "comprehensive"}
# Error Recovery Intelligence
error_recovery:
intelligent_error_handling:
# Smart error handling and recovery
error_classification:
user_errors:
- type: "syntax_error"
recovery: "suggest_correction"
user_guidance: "detailed"
- type: "configuration_error"
recovery: "auto_fix_with_approval"
user_guidance: "educational"
- type: "workflow_error"
recovery: "suggest_alternative_approach"
user_guidance: "workflow_tips"
system_errors:
- type: "performance_degradation"
recovery: "automatic_optimization"
user_notification: "informational"
- type: "resource_exhaustion"
recovery: "resource_management_mode"
user_notification: "status_update"
- type: "service_unavailable"
recovery: "graceful_fallback"
user_notification: "service_status"
recovery_learning:
# Learn from error recovery patterns
recovery_effectiveness:
track_recovery_success: true
learn_recovery_patterns: true
improve_recovery_strategies: true
user_recovery_preferences:
learn_preferred_recovery: true
adapt_recovery_approach: true
personalize_error_handling: true
# User Expertise Detection
expertise_detection:
expertise_indicators:
# Detect user expertise level
behavioral_indicators:
command_proficiency:
indicators: ["advanced_flags", "complex_operations", "custom_configurations"]
weight: 0.4
error_recovery_ability:
indicators: ["self_correction", "minimal_help_needed", "independent_problem_solving"]
weight: 0.3
workflow_sophistication:
indicators: ["efficient_workflows", "automation_usage", "advanced_patterns"]
weight: 0.3
expertise_adaptation:
# Adapt interface based on expertise
beginner_adaptations:
interface: ["detailed_explanations", "step_by_step_guidance", "comprehensive_warnings"]
defaults: ["safe_options", "guided_workflows", "educational_mode"]
intermediate_adaptations:
interface: ["balanced_explanations", "contextual_help", "smart_suggestions"]
defaults: ["optimized_workflows", "intelligent_automation", "performance_focused"]
expert_adaptations:
interface: ["minimal_explanations", "advanced_options", "efficiency_focused"]
defaults: ["maximum_automation", "performance_optimization", "minimal_interruptions"]
# User Satisfaction Intelligence
satisfaction_intelligence:
satisfaction_tracking:
# Track user satisfaction indicators
satisfaction_metrics:
task_completion_rate:
weight: 0.3
target_threshold: 0.85
error_resolution_speed:
weight: 0.25
target_threshold: "fast"
feature_adoption_rate:
weight: 0.2
target_threshold: 0.6
user_feedback_sentiment:
weight: 0.25
target_threshold: "positive"
satisfaction_optimization:
# Optimize for user satisfaction
optimization_strategies:
low_satisfaction_triggers:
- trigger: {completion_rate: "<0.7"}
action: "simplify_workflows"
priority: "high"
- trigger: {error_rate: ">15%"}
action: "improve_error_prevention"
priority: "critical"
- trigger: {feature_adoption: "<0.3"}
action: "improve_feature_discoverability"
priority: "medium"
# Personalization Engine
personalization:
adaptive_interface:
# Personalize interface based on user patterns
interface_personalization:
layout_preferences:
learn_preferred_layouts: true
adapt_information_density: true
customize_interaction_patterns: true
content_personalization:
learn_content_preferences: true
adapt_explanation_depth: true
customize_suggestion_types: true
workflow_optimization:
# Optimize workflows for individual users
personal_workflow_learning:
common_task_patterns: true
workflow_efficiency_analysis: true
personalized_shortcuts: true
workflow_recommendations:
suggest_workflow_improvements: true
recommend_automation_opportunities: true
provide_efficiency_insights: true
# Accessibility Intelligence
accessibility:
adaptive_accessibility:
# Adapt interface for accessibility needs
accessibility_detection:
detect_accessibility_needs: true
learn_accessibility_preferences: true
adapt_interface_accordingly: true
inclusive_design:
# Ensure inclusive user experience
inclusive_features:
multiple_interaction_modes: true
flexible_interface_scaling: true
comprehensive_keyboard_support: true
screen_reader_optimization: true

@@ -1,291 +0,0 @@
# SuperClaude-Lite Validation Configuration
# RULES.md + PRINCIPLES.md enforcement and quality standards
# Core SuperClaude Rules Validation
rules_validation:
file_operations:
read_before_write:
enabled: true
severity: "error"
message: "RULES violation: No Read operation detected before Write/Edit"
check_recent_tools: 3
exceptions: ["new_file_creation"]
absolute_paths_only:
enabled: true
severity: "error"
message: "RULES violation: Relative path used"
path_parameters: ["file_path", "path", "directory", "output_path"]
allowed_prefixes: ["http://", "https://", "/"]
validate_before_execution:
enabled: true
severity: "warning"
message: "RULES recommendation: High-risk operation should include validation"
high_risk_operations: ["delete", "refactor", "deploy", "migrate"]
complexity_threshold: 0.7
security_requirements:
input_validation:
enabled: true
severity: "error"
message: "RULES violation: User input handling without validation"
check_patterns: ["user_input", "external_data", "api_input"]
no_hardcoded_secrets:
enabled: true
severity: "critical"
message: "RULES violation: Hardcoded sensitive information detected"
patterns: ["password", "api_key", "secret", "token"]
production_safety:
enabled: true
severity: "error"
message: "RULES violation: Unsafe operation in production context"
production_indicators: ["is_production", "prod_env", "production"]
# SuperClaude Principles Validation
principles_validation:
evidence_over_assumptions:
enabled: true
severity: "warning"
message: "PRINCIPLES: Provide evidence to support assumptions"
check_for_assumptions: true
require_evidence: true
confidence_threshold: 0.7
code_over_documentation:
enabled: true
severity: "warning"
message: "PRINCIPLES: Documentation should follow working code, not precede it"
documentation_operations: ["document", "readme", "guide"]
require_working_code: true
efficiency_over_verbosity:
enabled: true
severity: "suggestion"
message: "PRINCIPLES: Consider token efficiency techniques for large outputs"
output_size_threshold: 5000
verbosity_indicators: ["repetitive_content", "unnecessary_detail"]
test_driven_development:
enabled: true
severity: "warning"
message: "PRINCIPLES: Logic changes should include tests"
logic_operations: ["write", "edit", "generate", "implement"]
test_file_patterns: ["*test*", "*spec*", "test_*", "*_test.*"]
single_responsibility:
enabled: true
severity: "suggestion"
message: "PRINCIPLES: Functions/classes should have single responsibility"
complexity_indicators: ["multiple_purposes", "large_function", "many_parameters"]
error_handling_required:
enabled: true
severity: "warning"
message: "PRINCIPLES: Error handling not implemented"
critical_operations: ["write", "edit", "deploy", "api_calls"]
# Quality Standards
quality_standards:
code_quality:
minimum_score: 0.7
factors:
- syntax_correctness
- logical_consistency
- error_handling_presence
- documentation_adequacy
- test_coverage
security_compliance:
minimum_score: 0.8
checks:
- input_validation
- output_sanitization
- authentication_checks
- authorization_verification
- secure_communication
performance_standards:
response_time_threshold_ms: 2000
resource_efficiency_min: 0.6
optimization_indicators:
- algorithm_efficiency
- memory_usage
- processing_speed
maintainability:
minimum_score: 0.6
factors:
- code_clarity
- documentation_quality
- modular_design
- consistent_style
# Validation Workflow
validation_workflow:
pre_validation:
enabled: true
quick_checks:
- syntax_validation
- basic_security_scan
- rule_compliance_check
post_validation:
enabled: true
comprehensive_checks:
- quality_assessment
- principle_alignment
- effectiveness_measurement
- learning_opportunity_detection
continuous_validation:
enabled: true
real_time_monitoring:
- pattern_violation_detection
- quality_degradation_alerts
- performance_regression_detection
# Error Classification and Handling
error_classification:
critical_errors:
severity_level: "critical"
block_execution: true
examples:
- security_vulnerabilities
- data_corruption_risk
- system_instability
standard_errors:
severity_level: "error"
block_execution: false
require_acknowledgment: true
examples:
- rule_violations
- quality_failures
- incomplete_implementation
warnings:
severity_level: "warning"
block_execution: false
examples:
- principle_deviations
- optimization_opportunities
- best_practice_suggestions
suggestions:
severity_level: "suggestion"
informational: true
examples:
- code_improvements
- efficiency_enhancements
- learning_recommendations
# Effectiveness Measurement
effectiveness_measurement:
success_indicators:
task_completion: "weight: 0.4"
quality_achievement: "weight: 0.3"
user_satisfaction: "weight: 0.2"
learning_value: "weight: 0.1"
performance_metrics:
execution_time: "target: <2000ms"
resource_efficiency: "target: >0.6"
error_rate: "target: <0.1"
validation_accuracy: "target: >0.9"
quality_metrics:
code_quality_score: "target: >0.7"
security_compliance: "target: >0.8"
principle_alignment: "target: >0.7"
rule_compliance: "target: >0.9"
# Learning Integration
learning_integration:
pattern_detection:
success_patterns: true
failure_patterns: true
optimization_patterns: true
user_preference_patterns: true
effectiveness_feedback:
real_time_collection: true
user_satisfaction_tracking: true
quality_trend_analysis: true
adaptation_triggers: true
continuous_improvement:
threshold_adjustment: true
rule_refinement: true
principle_enhancement: true
validation_optimization: true
# Context-Aware Validation
context_awareness:
project_type_adaptations:
frontend_projects:
additional_checks: ["accessibility", "responsive_design", "browser_compatibility"]
backend_projects:
additional_checks: ["api_security", "data_validation", "performance_optimization"]
full_stack_projects:
additional_checks: ["integration_testing", "end_to_end_validation", "deployment_safety"]
user_expertise_adjustments:
beginner:
validation_verbosity: "high"
educational_suggestions: true
step_by_step_guidance: true
intermediate:
validation_verbosity: "medium"
best_practice_suggestions: true
optimization_recommendations: true
expert:
validation_verbosity: "low"
advanced_optimization_suggestions: true
architectural_guidance: true
# Performance Configuration
performance_configuration:
validation_targets:
processing_time_ms: 100
memory_usage_mb: 50
cpu_utilization_percent: 30
optimization_strategies:
parallel_validation: true
cached_results: true
incremental_validation: true
smart_rule_selection: true
resource_management:
max_validation_time_ms: 500
memory_limit_mb: 100
cpu_limit_percent: 50
fallback_on_resource_limit: true
# Integration Points
integration_points:
mcp_servers:
serena: "semantic_validation_support"
morphllm: "edit_validation_coordination"
sequential: "complex_validation_analysis"
learning_engine:
effectiveness_tracking: true
pattern_learning: true
adaptation_feedback: true
compression_engine:
validation_result_compression: true
quality_preservation_verification: true
other_hooks:
pre_tool_use: "validation_preparation"
session_start: "validation_configuration"
stop: "validation_summary_generation"

@@ -1,347 +0,0 @@
# Validation Intelligence Configuration
# Health scoring, diagnostic patterns, and proactive system validation
# Enables intelligent health monitoring and predictive diagnostics
# Metadata
version: "1.0.0"
last_updated: "2025-01-06"
description: "Validation intelligence and health scoring patterns"
# Health Scoring Framework
health_scoring:
component_weights:
# Weighted importance of different system components
learning_system: 0.25 # 25% - Core intelligence
performance_system: 0.20 # 20% - System performance
mcp_coordination: 0.20 # 20% - Server coordination
hook_system: 0.15 # 15% - Hook execution
configuration_system: 0.10 # 10% - Configuration management
cache_system: 0.10 # 10% - Caching and storage
scoring_metrics:
learning_system:
pattern_diversity:
weight: 0.3
healthy_range: [0.6, 0.95] # Not too low, not perfect
critical_threshold: 0.3
measurement: "pattern_signature_entropy"
effectiveness_consistency:
weight: 0.3
healthy_range: [0.7, 0.9] # Consistent but not perfect
critical_threshold: 0.5
measurement: "effectiveness_score_variance"
adaptation_responsiveness:
weight: 0.2
healthy_range: [0.6, 1.0]
critical_threshold: 0.4
measurement: "adaptation_success_rate"
learning_velocity:
weight: 0.2
healthy_range: [0.5, 1.0]
critical_threshold: 0.3
measurement: "patterns_learned_per_session"
performance_system:
response_time_stability:
weight: 0.4
healthy_range: [0.7, 1.0] # Low variance preferred
critical_threshold: 0.4
measurement: "response_time_coefficient_variation"
resource_efficiency:
weight: 0.3
healthy_range: [0.6, 0.85] # Efficient but not resource-starved
critical_threshold: 0.4
measurement: "resource_utilization_efficiency"
error_rate:
weight: 0.3
healthy_range: [0.95, 1.0] # Low error rate (inverted)
critical_threshold: 0.8
measurement: "success_rate"
mcp_coordination:
server_selection_accuracy:
weight: 0.4
healthy_range: [0.8, 1.0]
critical_threshold: 0.6
measurement: "optimal_server_selection_rate"
coordination_efficiency:
weight: 0.3
healthy_range: [0.7, 1.0]
critical_threshold: 0.5
measurement: "coordination_overhead_ratio"
server_availability:
weight: 0.3
healthy_range: [0.9, 1.0]
critical_threshold: 0.7
measurement: "average_server_availability"
# Proactive Diagnostic Patterns
proactive_diagnostics:
early_warning_patterns:
# Detect issues before they become critical
learning_system_warnings:
- name: "pattern_overfitting"
pattern:
consecutive_perfect_scores: ">15"
pattern_diversity: "<0.5"
severity: "medium"
lead_time: "2-5_days"
recommendation: "Increase pattern complexity or add noise"
remediation: "automatic_pattern_diversification"
- name: "learning_stagnation"
pattern:
new_patterns_per_day: "<0.1"
effectiveness_improvement: "<0.01"
duration: ">7_days"
severity: "low"
lead_time: "1-2_weeks"
recommendation: "Review learning triggers and thresholds"
- name: "adaptation_failure"
pattern:
failed_adaptations: ">30%"
confidence_scores: "decreasing"
duration: ">3_days"
severity: "high"
lead_time: "1-3_days"
recommendation: "Review adaptation logic and data quality"
performance_warnings:
- name: "performance_degradation_trend"
pattern:
response_time_trend: "increasing"
degradation_rate: ">5%_per_week"
duration: ">10_days"
severity: "medium"
lead_time: "1-2_weeks"
recommendation: "Investigate resource leaks or optimize bottlenecks"
- name: "memory_leak_indication"
pattern:
memory_usage_trend: "steadily_increasing"
memory_cleanup_efficiency: "decreasing"
duration: ">5_days"
severity: "high"
lead_time: "3-7_days"
recommendation: "Check for memory leaks and optimize garbage collection"
- name: "cache_inefficiency"
pattern:
cache_hit_rate: "decreasing"
cache_size: "growing"
cache_cleanup_frequency: "increasing"
severity: "low"
lead_time: "1_week"
recommendation: "Optimize cache strategies and cleanup policies"
coordination_warnings:
- name: "server_selection_degradation"
pattern:
suboptimal_selections: "increasing"
selection_confidence: "decreasing"
user_satisfaction: "decreasing"
severity: "medium"
lead_time: "2-5_days"
recommendation: "Retrain server selection algorithms"
- name: "coordination_overhead_increase"
pattern:
coordination_time: "increasing"
coordination_complexity: "increasing"
efficiency_metrics: "decreasing"
severity: "medium"
lead_time: "1_week"
recommendation: "Optimize coordination protocols"
# Predictive Health Analysis
predictive_analysis:
health_prediction:
# Predict future health issues
prediction_models:
trend_analysis:
model_type: "linear_regression"
prediction_horizon: 14 # days
confidence_threshold: 0.8
pattern_matching:
model_type: "similarity_search"
historical_window: 90 # days
pattern_similarity_threshold: 0.85
anomaly_prediction:
model_type: "isolation_forest"
anomaly_threshold: 0.1
prediction_accuracy_target: 0.75
health_forecasting:
# Forecast health scores
forecasting_metrics:
- metric: "overall_health_score"
horizon: [1, 7, 14, 30] # days
accuracy_target: 0.8
- metric: "component_health_scores"
horizon: [1, 7, 14] # days
accuracy_target: 0.75
- metric: "critical_issue_probability"
horizon: [1, 3, 7] # days
accuracy_target: 0.85
# Diagnostic Intelligence
diagnostic_intelligence:
intelligent_diagnosis:
# Smart diagnostic capabilities
symptom_analysis:
symptom_correlation: true
root_cause_analysis: true
multi_component_diagnosis: true
diagnostic_algorithms:
decision_tree:
algorithm: "gradient_boosted_trees"
feature_importance_threshold: 0.1
pattern_matching:
algorithm: "k_nearest_neighbors"
similarity_metric: "cosine"
k_value: 5
statistical_analysis:
algorithm: "hypothesis_testing"
confidence_level: 0.95
automated_remediation:
# Automated remediation suggestions
remediation_patterns:
- symptom: "high_error_rate"
diagnosis: "configuration_issue"
remediation: "reset_to_known_good_config"
automation_level: "suggest"
- symptom: "memory_leak"
diagnosis: "cache_overflow"
remediation: "aggressive_cache_cleanup"
automation_level: "auto_with_approval"
- symptom: "performance_degradation"
diagnosis: "resource_exhaustion"
remediation: "resource_optimization_mode"
automation_level: "automatic"
# Validation Rules
validation_rules:
system_consistency:
# Validate system consistency
consistency_checks:
configuration_coherence:
check_type: "cross_reference"
validation_frequency: "on_change"
error_threshold: 0
data_integrity:
check_type: "checksum_validation"
validation_frequency: "hourly"
error_threshold: 0
dependency_resolution:
check_type: "graph_validation"
validation_frequency: "on_startup"
error_threshold: 0
performance_validation:
# Validate performance expectations
performance_checks:
response_time_validation:
expected_range: [100, 2000] # ms
validation_window: 20 # operations
failure_threshold: 0.2 # 20% failures allowed
resource_usage_validation:
expected_range: [0.1, 0.9] # utilization
validation_frequency: "continuous"
alert_threshold: 0.85
throughput_validation:
expected_minimum: 0.5 # operations per second
validation_window: 60 # seconds
degradation_threshold: 0.3 # 30% degradation
# Health Monitoring Intelligence
monitoring_intelligence:
adaptive_monitoring:
# Adapt monitoring based on system state
monitoring_intensity:
healthy_state:
sampling_rate: 0.1 # 10% sampling
check_frequency: 300 # seconds
warning_state:
sampling_rate: 0.5 # 50% sampling
check_frequency: 60 # seconds
critical_state:
sampling_rate: 1.0 # 100% sampling
check_frequency: 10 # seconds
intelligent_alerting:
# Smart alerting to reduce noise
alert_intelligence:
alert_correlation: true # Correlate related alerts
alert_suppression: true # Suppress duplicate alerts
alert_escalation: true # Escalate based on severity
alert_thresholds:
health_score_critical: 0.6
health_score_warning: 0.8
component_failure: true
performance_degradation: 0.3 # 30% degradation
# Continuous Validation
continuous_validation:
validation_cycles:
# Continuous validation cycles
real_time_validation:
validation_frequency: "per_operation"
validation_scope: "critical_path"
performance_impact: "minimal"
periodic_validation:
validation_frequency: "hourly"
validation_scope: "comprehensive"
performance_impact: "low"
deep_validation:
validation_frequency: "daily"
validation_scope: "exhaustive"
performance_impact: "acceptable"
validation_evolution:
# Evolve validation based on findings
learning_from_failures: true
adaptive_validation_rules: true
validation_effectiveness_tracking: true
# Quality Assurance Integration
quality_assurance:
quality_gates:
# Integration with quality gates
gate_validation:
syntax_validation: "automatic"
performance_validation: "threshold_based"
integration_validation: "comprehensive"
continuous_improvement:
# Continuous quality improvement
quality_metrics_tracking: true
validation_accuracy_tracking: true
false_positive_reduction: true
diagnostic_accuracy_improvement: true

@@ -1,400 +0,0 @@
# Configuration Guide
Framework-Hooks uses YAML configuration files to control hook behavior, performance targets, and system features. This guide covers the essential configuration options for customizing the system.
## Configuration Files Overview
The `config/` directory contains 12+ YAML files that control different aspects of the hook system:
```
config/
├── session.yaml # Session lifecycle settings
├── performance.yaml # Performance targets and limits
├── compression.yaml # Context compression settings
├── modes.yaml # Mode activation thresholds
├── mcp_orchestration.yaml # MCP server coordination
├── orchestrator.yaml # General orchestration settings
├── logging.yaml # Logging configuration
├── validation.yaml # System validation rules
└── [other specialized configs]
```
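At startup, each hook overlays values parsed from these files onto its built-in defaults. The sketch below illustrates that merge behavior only; the `merge_config` helper and the default values shown are hypothetical, and actual loading is handled by the shared `yaml_loader` module.

```python
from copy import deepcopy

# Hypothetical built-in defaults a hook might ship with.
DEFAULTS = {
    "logging": {"enabled": False, "level": "ERROR"},
    "performance_targets": {"session_start": 50, "pre_tool_use": 200},
}

def merge_config(defaults, overrides):
    """Recursively overlay values parsed from a config/*.yaml file onto defaults."""
    merged = deepcopy(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(merged[key], value)
        else:
            merged[key] = value
    return merged

# e.g. the user enabled INFO logging in logging.yaml:
config = merge_config(DEFAULTS, {"logging": {"enabled": True, "level": "INFO"}})
print(config["logging"])              # {'enabled': True, 'level': 'INFO'}
print(config["performance_targets"])  # untouched defaults
```

Unspecified keys keep their defaults, so a config file only needs to list the values it changes.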
## Essential Configuration Files
### logging.yaml - System Logging
Controls logging behavior and output:
```yaml
# Core Logging Settings
logging:
  enabled: false # Default: disabled for performance
  level: "ERROR" # ERROR, WARNING, INFO, DEBUG
file_settings:
  log_directory: "cache/logs"
  retention_days: 30
  rotation_strategy: "daily"
hook_logging:
  log_lifecycle: false # Log hook start/end events
  log_decisions: false # Log decision points
  log_errors: false # Log error events
  log_timing: false # Include timing information
performance:
  max_overhead_ms: 1 # Maximum logging overhead
  async_logging: false # Keep simple for now
privacy:
  sanitize_user_content: true
  exclude_sensitive_data: true
  anonymize_session_ids: false
```
**Common customizations:**
```yaml
# Enable basic logging
logging:
  enabled: true
  level: "INFO"

# Enable debugging
logging:
  enabled: true
  level: "DEBUG"
hook_logging:
  log_lifecycle: true
  log_decisions: true
  log_timing: true
development:
  verbose_errors: true
  include_stack_traces: true
  debug_mode: true
```
### performance.yaml - Performance Targets
Defines execution time targets for each hook:
```yaml
performance_targets:
  session_start: 50 # ms - Session initialization
  pre_tool_use: 200 # ms - Tool preparation
  post_tool_use: 100 # ms - Tool usage recording
  pre_compact: 150 # ms - Context compression
  notification: 50 # ms - Notification handling
  stop: 100 # ms - Session cleanup
  subagent_stop: 100 # ms - Subagent coordination

# Pattern loading performance
pattern_loading:
  minimal_patterns: 100 # ms - Basic patterns (3-5KB each)
  dynamic_patterns: 200 # ms - Feature patterns (8-12KB each)
  learned_patterns: 300 # ms - User patterns (10-20KB each)

# Cache operation limits
cache_operations:
  read_timeout: 10 # ms
  write_timeout: 50 # ms

# Timeouts (from settings.json)
hook_timeouts:
  default: 10 # seconds
  extended: 15 # seconds (pre_tool_use, pre_compact, etc.)
```
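These targets can be enforced by timing each hook run and comparing against the configured budget. The sketch below shows the idea with the targets inlined as a plain dict; `run_with_budget` is a hypothetical helper, not part of the shipped hook runner.

```python
import time

# Per-hook targets as they appear in performance.yaml (milliseconds).
PERFORMANCE_TARGETS = {"session_start": 50, "pre_tool_use": 200, "post_tool_use": 100}

def run_with_budget(hook_name, hook_fn):
    """Run a hook callable and report whether it met its configured target."""
    start = time.perf_counter()
    result = hook_fn()
    elapsed_ms = (time.perf_counter() - start) * 1000
    target_ms = PERFORMANCE_TARGETS.get(hook_name)
    # Hooks without a configured target are never flagged.
    within_budget = target_ms is None or elapsed_ms <= target_ms
    return result, elapsed_ms, within_budget

result, elapsed_ms, ok = run_with_budget("session_start", lambda: "initialized")
print(result, ok)
```

A hook that exceeds its budget would be reported rather than aborted, since the hard cutoff comes from the `hook_timeouts` values in `settings.json`.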
### session.yaml - Session Management
Controls session lifecycle behavior:
```yaml
session_lifecycle:
  initialization:
    load_minimal_patterns: true
    enable_project_detection: true
    activate_learning_engine: true
  context_management:
    preserve_user_content: true
    compress_framework_content: false # Keep framework content uncompressed
    apply_selective_compression: true
  cleanup:
    save_learning_data: true
    persist_adaptations: true
    cleanup_temp_files: true
```
### compression.yaml - Context Compression
Controls how the compression engine handles content:
```yaml
compression_settings:
  enabled: true
  # Content classification for selective compression
  content_types:
    framework_content:
      compression_level: 0 # No compression for SuperClaude framework
      exclusion_patterns:
        - "/SuperClaude/"
        - "~/.claude/"
        - ".claude/"
        - "framework_*"
    session_data:
      compression_level: 0.4 # 40% compression for session operational data
      apply_to:
        - "session_metadata"
        - "checkpoint_data"
        - "cache_content"
    user_content:
      compression_level: 0 # No compression for user content
      preserve_patterns:
        - "project_files"
        - "user_documentation"
        - "source_code"
        - "configuration_*"
  compression_levels:
    minimal: 0.40 # 40% compression
    efficient: 0.70 # 70% compression
    compressed: 0.85 # 85% compression
    critical: 0.95 # 95% compression
  quality_targets:
    preservation_minimum: 0.95 # 95% information preservation required
    processing_time_limit: 100 # ms
```
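The selective-compression logic reduces to classifying each piece of content and returning a level. A minimal sketch, with the exclusion patterns adapted to glob style for illustration (the real engine lives in the shared `compression_engine` module):

```python
from fnmatch import fnmatch

# Patterns adapted from compression.yaml for glob matching (illustrative only).
FRAMEWORK_PATTERNS = ["*SuperClaude*", "*.claude*", "framework_*"]
SESSION_DATA_TYPES = {"session_metadata", "checkpoint_data", "cache_content"}

def compression_level(content_id):
    """Map a path or content type to a compression level (0.0 = untouched)."""
    if any(fnmatch(content_id, pattern) for pattern in FRAMEWORK_PATTERNS):
        return 0.0  # framework content is excluded from compression
    if content_id in SESSION_DATA_TYPES:
        return 0.4  # session operational data: 40% compression
    return 0.0      # user content is preserved verbatim

print(compression_level("/home/user/SuperClaude/CLAUDE.md"))  # 0.0
print(compression_level("checkpoint_data"))                   # 0.4
```

Note the asymmetry: only session operational data is ever compressed; both framework and user content fall through to level 0.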
## Hook-Specific Configuration
Each hook can be configured individually; the sections below show the general pattern.
### Hook Enable/Disable
In `logging.yaml`:
```yaml
hook_configuration:
  pre_tool_use:
    enabled: true # Hook enabled/disabled
    log_tool_selection: true
    log_input_validation: true
  post_tool_use:
    enabled: true
    log_output_processing: true
    log_integration_success: true
  # Similar blocks for other hooks
```
### MCP Server Coordination
In `mcp_orchestration.yaml`:
```yaml
mcp_servers:
  context7:
    enabled: true
    auto_activation_patterns:
      - "external library imports"
      - "framework-specific questions"
  sequential:
    enabled: true
    auto_activation_patterns:
      - "complex debugging scenarios"
      - "system design questions"
  magic:
    enabled: true
    auto_activation_patterns:
      - "UI component requests"
      - "design system queries"
  # Configuration for other MCP servers
```
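Auto-activation amounts to matching a request against each server's configured patterns. The naive keyword-overlap matcher below is illustrative only; the actual routing is implemented in `pre_tool_use.py`.

```python
# Auto-activation patterns mirroring mcp_orchestration.yaml.
MCP_ACTIVATION = {
    "context7": ["external library imports", "framework-specific questions"],
    "sequential": ["complex debugging scenarios", "system design questions"],
    "magic": ["UI component requests", "design system queries"],
}

def suggest_servers(request):
    """Return servers whose activation patterns share keywords with the request."""
    request_words = set(request.lower().split())
    matches = []
    for server, patterns in MCP_ACTIVATION.items():
        if any(request_words & set(p.lower().split()) for p in patterns):
            matches.append(server)
    return matches

print(suggest_servers("debugging a complex race condition"))  # ['sequential']
```

Disabling a server in the config (`enabled: false`) would simply drop it from this candidate map.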
## Pattern System Configuration
The 3-tier pattern system can be customized:
### Pattern Loading
```yaml
# In session.yaml or dedicated pattern config
pattern_system:
  minimal_patterns:
    always_load: true
    size_limit_kb: 5
  dynamic_patterns:
    load_on_demand: true
    size_limit_kb: 12
  learned_patterns:
    adaptation_enabled: true
    size_limit_kb: 20
    evolution_threshold: 0.8
```
### Project Detection Patterns
Add custom patterns in `patterns/minimal/` directories following existing YAML structure:
```yaml
# Example: patterns/minimal/custom-project.yaml
pattern_type: "project_detection"
name: "custom_project"
triggers:
  - "specific-file-pattern"
  - "directory-structure"
features_to_activate:
  - "custom_mode"
  - "specific_mcp_servers"
```
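One plausible way detection could score such a pattern is the fraction of its triggers matched by the project's file list, compared against a confidence threshold. This scoring function is an assumption for illustration, not the shipped detector:

```python
from fnmatch import fnmatch

def trigger_confidence(project_files, triggers):
    """Fraction of a pattern's triggers matched by the project's file list."""
    if not triggers:
        return 0.0
    matched = sum(
        any(fnmatch(path, trigger) for path in project_files)
        for trigger in triggers
    )
    return matched / len(triggers)

files = ["package.json", "src/components/App.tsx", "README.md"]
# Two of three React indicators present:
print(round(trigger_confidence(files, ["package.json", "*.tsx", "*.jsx"]), 2))
```

A pattern would activate its features only when the score reaches its configured `confidence_threshold`.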
## Advanced Configuration
### Learning Engine Tuning
```yaml
# In learning configuration
learning_engine:
  adaptation_settings:
    learning_rate: 0.1
    adaptation_threshold: 0.75
    persistence_enabled: true
  data_retention:
    session_history_days: 90
    pattern_evolution_days: 30
    cache_cleanup_interval: 7
```
### Validation Rules
```yaml
# In validation.yaml
validation_rules:
  hook_performance:
    enforce_timing_targets: true
    alert_on_timeout: true
  configuration_integrity:
    yaml_syntax_check: true
    required_fields_check: true
  system_health:
    file_permissions_check: true
    directory_structure_check: true
```
## Environment-Specific Configuration
### Development Environment
```yaml
# Enable verbose logging and debugging
logging:
  enabled: true
  level: "DEBUG"
development:
  verbose_errors: true
  include_stack_traces: true
  debug_mode: true
performance_targets:
  # Relaxed targets for development
  session_start: 100
  pre_tool_use: 500
```
### Production Environment
```yaml
# Minimal logging, strict performance
logging:
  enabled: false
  level: "ERROR"
performance_targets:
  # Strict targets for production
  session_start: 30
  pre_tool_use: 150
compression_settings:
  # Aggressive optimization
  enabled: true
  default_level: "efficient"
```
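There is no built-in environment switch, but one could be layered on top with an environment variable that selects which override file to apply. Both the `FRAMEWORK_ENV` variable and the overlay filenames below are hypothetical:

```python
import os

# Hypothetical overlay files; the base configs live in config/.
ENV_OVERLAYS = {
    "development": "config/overrides.dev.yaml",
    "production": "config/overrides.prod.yaml",
}

def overlay_for_env(default="production"):
    """Choose an overlay file from an (assumed) FRAMEWORK_ENV variable."""
    env = os.environ.get("FRAMEWORK_ENV", default)
    # Unknown environment names fall back to the default overlay.
    return ENV_OVERLAYS.get(env, ENV_OVERLAYS[default])

os.environ["FRAMEWORK_ENV"] = "development"
print(overlay_for_env())  # config/overrides.dev.yaml
```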
## Configuration Validation
### Manual Validation
Test configuration changes:
```bash
cd Framework-Hooks/hooks/shared
python3 validate_system.py --check-config
```
### YAML Syntax Check
```bash
python3 - <<'EOF'
import glob
import yaml

for path in sorted(glob.glob('config/*.yaml')):
    try:
        with open(path) as f:
            yaml.safe_load(f)
        print(f'{path}: OK')
    except Exception as e:
        print(f'{path}: ERROR - {e}')
EOF
```
## Configuration Best Practices
### Performance Optimization
1. **Keep logging disabled** in production for best performance
2. **Set realistic timing targets** based on your hardware
3. **Enable selective compression** to balance performance and quality
4. **Tune pattern loading** based on project complexity
### Debugging and Development
1. **Enable comprehensive logging** during development
2. **Use debug mode** for detailed error information
3. **Test configuration changes** with validation tools
4. **Monitor hook performance** against targets
### Customization Guidelines
1. **Back up configurations** before making changes
2. **Test changes incrementally** rather than bulk modifications
3. **Use validation tools** to verify configuration integrity
4. **Document custom patterns** for team collaboration
## Configuration Troubleshooting
### Common Issues
- **YAML Syntax Errors**: Use Python YAML validation or online checkers
- **Performance Degradation**: Review enabled features and logging verbosity
- **Hook Failures**: Check that required configuration fields are present
- **Pattern Loading Issues**: Verify pattern file sizes and structure
### Reset to Defaults
```bash
# Reset all configurations (backup first!)
git checkout config/*.yaml
# Or restore from installation backup
```
The configuration system provides extensive customization while maintaining sensible defaults for immediate usability.

# Compression Configuration (`compression.yaml`)
## Overview
The `compression.yaml` file defines intelligent token optimization strategies for the SuperClaude-Lite framework. This configuration implements the Token Efficiency Mode with adaptive compression levels, selective content preservation, and quality-gated optimization.
## Purpose and Role
The compression configuration serves as the foundation for:
- **Token Efficiency Mode**: Implements intelligent token optimization with 30-50% reduction targets
- **Selective Compression**: Protects framework content while optimizing session data
- **Quality Preservation**: Maintains ≥95% information fidelity during compression
- **Symbol Systems**: Provides efficient communication through standardized symbols
- **Abbreviation Systems**: Intelligent abbreviation for technical terminology
- **Adaptive Intelligence**: Context-aware compression based on user expertise and task complexity
## Configuration Structure
### 1. Compression Levels (`compression_levels`)
The framework implements 5 compression levels with specific targets and use cases:
#### Minimal Compression (0-40%)
```yaml
minimal:
symbol_systems: false
abbreviation_systems: false
structural_optimization: false
quality_threshold: 0.98
use_cases: ["user_content", "low_resource_usage", "high_quality_required"]
```
**Purpose**: Preserves maximum quality for critical content
**Usage**: User project code, important documentation, complex technical content
**Quality**: 98% preservation guarantee
#### Efficient Compression (40-70%)
```yaml
efficient:
symbol_systems: true
abbreviation_systems: false
structural_optimization: true
quality_threshold: 0.95
use_cases: ["moderate_resource_usage", "balanced_efficiency"]
```
**Purpose**: Balanced optimization for standard operations
**Usage**: Session metadata, working artifacts, analysis results
**Quality**: 95% preservation with symbol enhancement
#### Compressed Level (70-85%)
```yaml
compressed:
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
quality_threshold: 0.90
use_cases: ["high_resource_usage", "user_requests_brevity"]
```
**Purpose**: Aggressive optimization for resource constraints
**Usage**: Large-scale operations, user-requested brevity
**Quality**: 90% preservation with full optimization suite
#### Critical Compression (85-95%)
```yaml
critical:
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
advanced_techniques: true
quality_threshold: 0.85
use_cases: ["resource_constraints", "emergency_compression"]
```
**Purpose**: Maximum optimization for severe constraints
**Usage**: Resource exhaustion scenarios, emergency situations
**Quality**: 85% preservation with advanced techniques
#### Emergency Compression (95%+)
```yaml
emergency:
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
advanced_techniques: true
aggressive_optimization: true
quality_threshold: 0.80
use_cases: ["critical_resource_constraints", "emergency_situations"]
```
**Purpose**: Ultra-compression for critical resource constraints
**Usage**: System overload, critical memory constraints
**Quality**: 80% preservation with aggressive optimization
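Selecting between the five levels can be sketched as a simple threshold lookup over resource usage, mirroring the 0-40% / 40-70% / 70-85% / 85-95% / 95%+ ranges in the headings above. This is an illustrative sketch; `select_compression_level` is a hypothetical helper, not framework API:

```python
# Map resource usage (0.0-1.0) to a compression level using the
# range boundaries documented for each level above.
LEVELS = [
    (0.40, "minimal"),
    (0.70, "efficient"),
    (0.85, "compressed"),
    (0.95, "critical"),
]

def select_compression_level(resource_usage: float) -> str:
    for upper_bound, level in LEVELS:
        if resource_usage < upper_bound:
            return level
    return "emergency"  # 95%+ resource pressure
```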
### 2. Selective Compression (`selective_compression`)
#### Framework Exclusions
```yaml
framework_exclusions:
patterns:
- "~/.claude/"
- ".claude/"
- "SuperClaude/*"
- "CLAUDE.md"
- "FLAGS.md"
- "PRINCIPLES.md"
- "ORCHESTRATOR.md"
- "MCP_*.md"
- "MODE_*.md"
- "SESSION_LIFECYCLE.md"
compression_level: "preserve" # 0% compression
reasoning: "Framework content must be preserved for proper operation"
```
**Critical Protection**: Framework components receive zero compression to ensure operational integrity.
#### User Content Preservation
```yaml
user_content_preservation:
patterns:
- "project_files"
- "user_documentation"
- "source_code"
- "configuration_files"
- "custom_content"
compression_level: "minimal" # Light compression only
reasoning: "User content requires high fidelity preservation"
```
**Quality Guarantee**: User content maintains 98% fidelity through minimal compression only.
#### Session Data Optimization
```yaml
session_data_optimization:
patterns:
- "session_metadata"
- "checkpoint_data"
- "cache_content"
- "working_artifacts"
- "analysis_results"
compression_level: "efficient" # 40-70% compression
reasoning: "Session data can be compressed while maintaining utility"
```
**Balanced Approach**: Session operational data compressed efficiently while preserving utility.
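The three selective-compression tiers reduce to a first-match pattern table. The sketch below condenses the patterns above with `fnmatch`-style globs; the user-content globs and the `classify_content` name are illustrative assumptions:

```python
import fnmatch

# First matching rule wins: framework exclusions, then user content,
# then session data as the default tier.
RULES = [
    (["~/.claude/*", ".claude/*", "SuperClaude/*", "CLAUDE.md",
      "FLAGS.md", "MCP_*.md", "MODE_*.md"], "preserve"),
    (["*.py", "*.md", "src/*"], "minimal"),          # user content (assumed globs)
    (["session_*", "checkpoint_*", "cache/*"], "efficient"),
]

def classify_content(path: str) -> str:
    for patterns, level in RULES:
        if any(fnmatch.fnmatch(path, p) for p in patterns):
            return level
    return "efficient"  # session-data default
```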
### 3. Symbol Systems (`symbol_systems`)
#### Core Logic and Flow Symbols
```yaml
core_logic_flow:
enabled: true
mappings:
"leads to": "→"
"implies": "→"
"transforms to": "⇒"
"rollback": "←"
"bidirectional": "⇄"
"and": "&"
"separator": "|"
"sequence": "»"
"therefore": "∴"
"because": "∵"
"equivalent": "≡"
"approximately": "≈"
"not equal": "≠"
```
**Purpose**: Express logical relationships and flow with mathematical precision
**Token Savings**: 50-70% reduction in logical expression length
#### Status and Progress Symbols
```yaml
status_progress:
enabled: true
mappings:
"completed": "✅"
"failed": "❌"
"warning": "⚠️"
"information": "ℹ️"
"in progress": "🔄"
"waiting": "⏳"
"critical": "🚨"
"target": "🎯"
"metrics": "📊"
"insight": "💡"
```
**Purpose**: Visual status communication with immediate recognition
**User Experience**: Enhanced readability through universal symbols
#### Technical Domain Symbols
```yaml
technical_domains:
enabled: true
mappings:
"performance": "⚡"
"analysis": "🔍"
"configuration": "🔧"
"security": "🛡️"
"deployment": "📦"
"design": "🎨"
"network": "🌐"
"mobile": "📱"
"architecture": "🏗️"
"components": "🧩"
```
**Purpose**: Domain-specific communication with contextual relevance
**Persona Integration**: Symbols adapt to active persona domains
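Applying a symbol mapping is a straightforward phrase substitution, with longer phrases replaced first so they are not split by shorter matches. A minimal sketch over a subset of the mappings above (`apply_symbols` is an illustrative name):

```python
# Subset of the core_logic_flow and status_progress mappings above.
SYMBOLS = {
    "leads to": "→",
    "transforms to": "⇒",
    "therefore": "∴",
    "because": "∵",
    "completed": "✅",
    "failed": "❌",
}

def apply_symbols(text: str) -> str:
    # Longest phrase first so "transforms to" isn't partially matched.
    for phrase in sorted(SYMBOLS, key=len, reverse=True):
        text = text.replace(phrase, SYMBOLS[phrase])
    return text
```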
### 4. Abbreviation Systems (`abbreviation_systems`)
#### System and Architecture
```yaml
system_architecture:
enabled: true
mappings:
"configuration": "cfg"
"implementation": "impl"
"architecture": "arch"
"performance": "perf"
"operations": "ops"
"environment": "env"
```
**Technical Focus**: Core system terminology with consistent abbreviations
#### Development Process
```yaml
development_process:
enabled: true
mappings:
"requirements": "req"
"dependencies": "deps"
"validation": "val"
"testing": "test"
"documentation": "docs"
"standards": "std"
```
**Workflow Integration**: Development lifecycle terminology optimization
#### Quality Analysis
```yaml
quality_analysis:
enabled: true
mappings:
"quality": "qual"
"security": "sec"
"error": "err"
"recovery": "rec"
"severity": "sev"
"optimization": "opt"
```
**Quality Focus**: Quality assurance and analysis terminology
### 5. Structural Optimization (`structural_optimization`)
#### Whitespace Optimization
```yaml
whitespace_optimization:
enabled: true
remove_redundant_spaces: true
normalize_line_breaks: true
preserve_code_formatting: true
```
**Code Safety**: Preserves code formatting while optimizing prose content
#### Phrase Simplification
```yaml
phrase_simplification:
enabled: true
common_phrase_replacements:
"in order to": "to"
"it is important to note that": "note:"
"please be aware that": "note:"
"for the purpose of": "for"
"with regard to": "regarding"
```
**Natural Language**: Simplifies verbose phrasing while maintaining meaning
#### Redundancy Removal
```yaml
redundancy_removal:
enabled: true
remove_articles: ["the", "a", "an"] # Only in high compression levels
remove_filler_words: ["very", "really", "quite", "rather"]
combine_repeated_concepts: true
```
**Intelligent Reduction**: Context-aware redundancy elimination
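The phrase-simplification table above can be applied with word-boundary regex substitution so replacements never fire inside larger words. A minimal sketch (`simplify_phrases` is an illustrative helper):

```python
import re

# Replacements from the common_phrase_replacements table above,
# applied longest-first with word boundaries.
PHRASES = {
    "in order to": "to",
    "it is important to note that": "note:",
    "for the purpose of": "for",
    "with regard to": "regarding",
}

def simplify_phrases(text: str) -> str:
    for phrase in sorted(PHRASES, key=len, reverse=True):
        text = re.sub(r"\b" + re.escape(phrase) + r"\b", PHRASES[phrase], text)
    return text
```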
### 6. Quality Preservation (`quality_preservation`)
#### Minimum Thresholds
```yaml
minimum_thresholds:
information_preservation: 0.95
semantic_accuracy: 0.95
technical_correctness: 0.98
user_content_fidelity: 0.99
```
**Quality Gates**: Enforces minimum quality standards across all compression levels
#### Validation Criteria
```yaml
validation_criteria:
key_concept_retention: true
technical_term_preservation: true
code_example_accuracy: true
reference_link_preservation: true
```
**Content Integrity**: Ensures critical content elements remain intact
#### Quality Monitoring
```yaml
quality_monitoring:
real_time_validation: true
effectiveness_tracking: true
user_feedback_integration: true
adaptive_threshold_adjustment: true
```
**Continuous Improvement**: Real-time quality assessment and adaptation
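Enforcing the minimum thresholds amounts to a single gate check over the computed quality scores. A sketch, assuming the scores themselves are produced elsewhere (e.g. by a semantic-similarity check) and that `passes_quality_gate` is an illustrative name:

```python
# minimum_thresholds from the quality_preservation block above.
THRESHOLDS = {
    "information_preservation": 0.95,
    "semantic_accuracy": 0.95,
    "technical_correctness": 0.98,
    "user_content_fidelity": 0.99,
}

def passes_quality_gate(scores: dict) -> bool:
    # A missing score counts as failing, which keeps the gate conservative.
    return all(scores.get(key, 0.0) >= minimum
               for key, minimum in THRESHOLDS.items())
```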
### 7. Adaptive Compression (`adaptive_compression`)
#### Context Awareness
```yaml
context_awareness:
user_expertise_factor: true
project_complexity_factor: true
domain_specific_optimization: true
```
**Personalization**: Compression adapts to user expertise and project context
#### Learning Integration
```yaml
learning_integration:
effectiveness_feedback: true
user_preference_learning: true
pattern_optimization: true
```
**Machine Learning**: Continuous improvement through usage patterns
#### Dynamic Adjustment
```yaml
dynamic_adjustment:
resource_pressure_response: true
quality_threshold_adaptation: true
performance_optimization: true
```
**Real-Time Adaptation**: Adjusts compression based on system state
## Performance Targets
### Processing Performance
```yaml
performance_targets:
processing_time_ms: 150
compression_ratio_target: 0.50 # 50% compression
quality_preservation_target: 0.95
token_efficiency_gain: 0.40 # 40% token reduction
```
**Optimization Goals**: Balances speed, compression, and quality
### Compression Level Performance
Each compression level has specific performance characteristics:
- **Minimal**: 1.0x processing time, 98% quality
- **Efficient**: 1.2x processing time, 95% quality
- **Compressed**: 1.5x processing time, 90% quality
- **Critical**: 1.8x processing time, 85% quality
- **Emergency**: 2.0x processing time, 80% quality
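These multipliers translate into concrete time estimates against the 150 ms `processing_time_ms` baseline — a small sketch, where `estimated_time_ms` is an illustrative helper rather than framework API:

```python
# Multipliers from the per-level performance list above, applied to
# the processing_time_ms baseline target.
BASELINE_MS = 150
MULTIPLIERS = {"minimal": 1.0, "efficient": 1.2, "compressed": 1.5,
               "critical": 1.8, "emergency": 2.0}

def estimated_time_ms(level: str) -> float:
    return BASELINE_MS * MULTIPLIERS[level]
```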
## Integration Points
### MCP Server Integration
```yaml
integration:
mcp_servers:
morphllm: "coordinate_compression_with_editing"
serena: "memory_compression_strategies"
modes:
token_efficiency: "primary_compression_mode"
task_management: "session_data_compression"
```
**System Coordination**: Integrates with MCP servers for coordinated optimization
### Learning Engine Integration
```yaml
learning_engine:
effectiveness_tracking: true
pattern_learning: true
adaptation_feedback: true
```
**Continuous Learning**: Improves compression effectiveness through usage analysis
## Cache Configuration
### Compression Results Caching
```yaml
caching:
compression_results:
enabled: true
cache_duration_minutes: 30
max_cache_size_mb: 50
invalidation_strategy: "content_change_detection"
```
**Performance Optimization**: Caches compression results for repeated content
### Pattern Recognition Caching
```yaml
pattern_recognition:
enabled: true
adaptive_pattern_learning: true
user_specific_patterns: true
```
**Intelligent Caching**: Learns and caches user-specific compression patterns
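The `content_change_detection` invalidation strategy can be sketched as a content-addressed cache: the key is a hash of the input, so any content change automatically misses, and a TTL covers the 30-minute duration. The class name and structure are illustrative:

```python
import hashlib
import time

class CompressionCache:
    """Content-hashed cache with TTL, sketching content_change_detection."""

    def __init__(self, ttl_seconds: float = 30 * 60):
        self.ttl = ttl_seconds
        self._store = {}

    def _key(self, content: str) -> str:
        return hashlib.sha256(content.encode()).hexdigest()

    def get(self, content: str):
        entry = self._store.get(self._key(content))
        if entry and time.monotonic() - entry[0] < self.ttl:
            return entry[1]
        return None  # miss: content changed or entry expired

    def put(self, content: str, result: str):
        self._store[self._key(content)] = (time.monotonic(), result)
```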
## Best Practices
### 1. Content Classification
**Always classify content before compression**:
- Framework content → Zero compression
- User project content → Minimal compression
- Session data → Efficient compression
- Temporary data → Compressed/Critical levels
### 2. Quality Monitoring
**Monitor compression effectiveness**:
- Track quality preservation metrics
- Monitor user satisfaction with compressed content
- Adjust thresholds based on effectiveness feedback
### 3. Context-Aware Application
**Adapt compression to context**:
- User expertise level (beginner → minimal, expert → compressed)
- Project complexity (simple → efficient, complex → minimal)
- Resource pressure (low → minimal, high → critical)
### 4. Performance Optimization
**Balance compression with performance**:
- Use caching for repeated content
- Monitor processing time vs. quality trade-offs
- Adjust compression levels based on system resources
### 5. Learning Integration
**Enable continuous improvement**:
- Track compression effectiveness
- Learn user preferences for compression levels
- Adapt symbol and abbreviation usage based on domain
## Troubleshooting
### Common Issues
#### Quality Degradation
- **Symptom**: Users report information loss or confusion
- **Solution**: Reduce compression level, adjust quality thresholds
- **Prevention**: Enable real-time quality monitoring
#### Performance Issues
- **Symptom**: Compression takes too long (>150ms target)
- **Solution**: Enable caching, reduce compression complexity
- **Monitoring**: Track processing time per compression level
#### Symbol/Abbreviation Confusion
- **Symptom**: Users don't understand compressed content
- **Solution**: Adjust to user expertise level, provide symbol legend
- **Adaptation**: Learn user preference patterns
#### Cache Issues
- **Symptom**: Stale compression results, cache bloat
- **Solution**: Adjust cache invalidation strategy, reduce cache size
- **Maintenance**: Enable automatic cache cleanup
### Configuration Validation
The framework validates compression configuration:
- **Range Validation**: Quality thresholds between 0.0-1.0
- **Performance Validation**: Processing time targets achievable
- **Pattern Validation**: Symbol and abbreviation mappings are valid
- **Integration Validation**: MCP server and mode coordination settings
## Related Documentation
- **Token Efficiency Mode**: See `MODE_Token_Efficiency.md` for behavioral patterns
- **Pre-Compact Hook**: Review hook implementation for compression execution
- **MCP Integration**: Reference Morphllm documentation for editing coordination
- **Quality Gates**: See validation documentation for quality preservation
## Version History
- **v1.0.0**: Initial compression configuration with 5-level strategy
- Selective compression with framework protection
- Symbol and abbreviation systems implementation
- Adaptive compression with learning integration
- Quality preservation with real-time monitoring


# Hook Coordination Configuration (`hook_coordination.yaml`)
## Overview
The `hook_coordination.yaml` file configures intelligent hook execution patterns, dependency resolution, and optimization strategies for the SuperClaude-Lite framework. This configuration enables smart coordination of all Framework-Hooks lifecycle events.
## Purpose and Role
This configuration provides:
- **Execution Patterns**: Parallel, sequential, and conditional execution strategies
- **Dependency Resolution**: Smart dependency management between hooks
- **Performance Optimization**: Resource management and caching strategies
- **Error Handling**: Resilient execution with graceful degradation
- **Context Awareness**: Adaptive execution based on operation context
## Configuration Structure
### 1. Execution Patterns
#### Parallel Execution
```yaml
parallel_execution:
groups:
- name: "independent_analysis"
hooks: ["compression_engine", "pattern_detection"]
max_parallel: 2
timeout: 5000 # ms
```
**Purpose**: Run independent hooks simultaneously for performance
**Groups**: Logical groupings of hooks that can execute in parallel
**Limits**: Maximum concurrent hooks and timeout protection
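A parallel group maps naturally onto a bounded worker pool with a per-hook timeout. The sketch below uses plain callables; the real hooks are separate processes fed JSON on stdin, so `run_parallel_group` here is only an illustration of the coordination pattern:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def run_parallel_group(hooks, max_parallel=2, timeout_ms=5000):
    """Run one parallel group, honoring max_parallel and timeout above."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        futures = {name: pool.submit(fn) for name, fn in hooks.items()}
        for name, fut in futures.items():
            try:
                results[name] = fut.result(timeout=timeout_ms / 1000)
            except FutureTimeout:
                results[name] = None  # continue_without_hook recovery
    return results
```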
#### Sequential Execution
```yaml
sequential_execution:
chains:
- name: "session_lifecycle"
sequence: ["session_start", "pre_tool_use", "post_tool_use", "stop"]
mandatory: true
break_on_error: true
```
**Purpose**: Enforce execution order for dependent operations
**Chains**: Named sequences with defined order and error handling
**Control**: Mandatory sequences with error breaking behavior
#### Conditional Execution
```yaml
conditional_execution:
rules:
- hook: "compression_engine"
conditions:
- resource_usage: ">0.75"
- conversation_length: ">50"
- enable_compression: true
priority: "high"
```
**Purpose**: Execute hooks based on runtime conditions
**Conditions**: Logical rules for hook activation
**Priority**: Execution priority for resource management
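Evaluating rules like `resource_usage: ">0.75"` against a runtime context is a small comparison dispatch. A sketch covering only the operators shown above, with the conditions flattened into a dict for brevity:

```python
import operator

OPS = {">": operator.gt, "<": operator.lt}

def condition_met(spec, value):
    if isinstance(spec, bool):          # e.g. enable_compression: true
        return value == spec
    op, threshold = spec[0], float(spec[1:])  # e.g. ">0.75"
    return OPS[op](value, threshold)

def should_run(conditions: dict, context: dict) -> bool:
    return all(condition_met(spec, context.get(key, 0))
               for key, spec in conditions.items())
```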
### 2. Dependency Resolution
#### Hook Dependencies
```yaml
hook_dependencies:
session_start:
requires: []
provides: ["session_context", "initial_state"]
pre_tool_use:
requires: ["session_context"]
provides: ["tool_context", "pre_analysis"]
depends_on: ["session_start"]
```
**Dependencies**: What each hook requires and provides
**Resolution**: Automatic dependency chain calculation
**Optional**: Soft dependencies that don't block execution
#### Resolution Strategies
```yaml
resolution_strategies:
missing_dependency:
strategy: "graceful_degradation"
fallback: "skip_optional"
circular_dependency:
strategy: "break_weakest_link"
priority_order: ["session_start", "pre_tool_use", "post_tool_use", "stop"]
```
**Graceful Degradation**: Continue execution without non-critical dependencies
**Circular Resolution**: Break cycles using priority ordering
**Timeout Handling**: Continue execution when dependencies timeout
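Given the `requires`/`depends_on` edges above, the execution order is a topological sort. A minimal sketch using the standard library (Python 3.9+):

```python
from graphlib import TopologicalSorter

# Each hook maps to the set of hooks it depends on, taken from the
# hook_dependencies block above.
graph = {
    "session_start": set(),
    "pre_tool_use": {"session_start"},
    "post_tool_use": {"pre_tool_use"},
    "stop": {"post_tool_use"},
}

def execution_order(graph):
    # Raises graphlib.CycleError on circular dependencies, which is
    # where the break_weakest_link strategy would take over.
    return list(TopologicalSorter(graph).static_order())
```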
### 3. Performance Optimization
#### Execution Paths
```yaml
fast_path:
conditions:
- complexity_score: "<0.3"
- operation_type: ["simple", "basic"]
optimizations:
- skip_non_essential_hooks: true
- enable_aggressive_caching: true
- parallel_where_possible: true
```
**Fast Path**: Optimized execution for simple operations
**Comprehensive Path**: Full analysis for complex operations
**Resource Budgets**: CPU, memory, and time limits
#### Caching Strategies
```yaml
cacheable_hooks:
- hook: "pattern_detection"
cache_key: ["session_context", "operation_type"]
cache_duration: 300 # seconds
```
**Hook Caching**: Cache hook results to avoid recomputation
**Cache Keys**: Contextual keys for cache invalidation
**TTL Management**: Time-based cache expiration
### 4. Context Awareness
#### Operation Context
```yaml
context_patterns:
- context_type: "ui_development"
hook_priorities: ["mcp_intelligence", "pattern_detection", "compression_engine"]
preferred_execution: "fast_parallel"
```
**Adaptive Execution**: Adjust hook execution based on operation type
**Priority Ordering**: Context-specific hook priority
**Execution Preference**: Optimal execution strategy per context
#### User Preferences
```yaml
preference_patterns:
- user_type: "performance_focused"
optimizations: ["aggressive_caching", "parallel_execution", "skip_optional"]
```
**User Adaptation**: Adapt to user preferences and patterns
**Performance Profiles**: Optimize for speed, quality, or balance
**Learning Integration**: Improve based on user behavior patterns
### 5. Error Handling and Recovery
#### Recovery Strategies
```yaml
recovery_strategies:
- error_type: "timeout"
recovery: "continue_without_hook"
log_level: "warning"
- error_type: "critical_failure"
recovery: "abort_and_cleanup"
log_level: "error"
```
**Error Types**: Different failure modes with appropriate responses
**Recovery Actions**: Continue, retry, degrade, or abort
**Logging**: Appropriate log levels for different error types
#### Resilience Features
```yaml
resilience_features:
retry_failed_hooks: true
max_retries: 2
graceful_degradation: true
error_isolation: true
```
**Retry Logic**: Automatic retry with backoff for transient failures
**Degradation**: Continue with reduced functionality when possible
**Isolation**: Prevent error cascade across hook execution
### 6. Lifecycle Management
#### State Tracking
```yaml
state_tracking:
- pending
- initializing
- running
- completed
- failed
- skipped
- timeout
```
**Hook States**: Complete lifecycle state management
**Monitoring**: Performance and health tracking
**Events**: Before/after hook execution handling
### 7. Dynamic Configuration
#### Adaptive Execution
```yaml
adaptation_triggers:
- performance_degradation: ">20%"
action: "switch_to_fast_path"
- error_rate: ">10%"
action: "enable_resilience_mode"
```
**Performance Adaptation**: Switch execution strategies based on performance
**Error Response**: Enable resilience mode when error rates increase
**Resource Management**: Reduce scope when resources are constrained
## Configuration Guidelines
### Performance Tuning
- **Fast Path**: Enable for simple operations to reduce overhead
- **Parallel Groups**: Group independent hooks for concurrent execution
- **Caching**: Cache expensive operations like pattern detection
- **Resource Budgets**: Set appropriate limits for your environment
### Reliability Configuration
- **Error Recovery**: Configure appropriate recovery strategies
- **Dependency Management**: Use optional dependencies for non-critical hooks
- **Resilience**: Enable retry and graceful degradation features
- **Monitoring**: Track hook performance and health
### Context Optimization
- **Operation Types**: Define context patterns for your common workflows
- **User Preferences**: Adapt to user performance vs quality preferences
- **Learning**: Enable learning features for continuous improvement
## Integration Points
### Hook Integration
- All Framework-Hooks use this coordination configuration
- Hook execution follows defined patterns and dependencies
- Performance targets integrated with hook implementations
### Resource Management
- Coordinates with performance monitoring systems
- Integrates with caching and optimization frameworks
- Manages resource allocation across hook execution
## Troubleshooting
### Performance Issues
- **Slow Execution**: Check if comprehensive path is being used unnecessarily
- **Resource Usage**: Monitor CPU and memory budgets
- **Caching**: Verify cache hit rates for expensive operations
### Execution Problems
- **Missing Dependencies**: Check dependency resolution strategies
- **Hook Failures**: Review error recovery configuration
- **Timeout Issues**: Adjust timeout values for your environment
### Context Issues
- **Wrong Path Selection**: Review context pattern matching
- **User Preferences**: Check preference pattern configuration
- **Adaptation**: Monitor adaptation trigger effectiveness
## Related Documentation
- **Hook Implementation**: Individual hook documentation for specific behavior
- **Performance Configuration**: `performance.yaml.md` for performance targets
- **Error Handling**: Framework error handling and logging configuration

# Intelligence Patterns Configuration (`intelligence_patterns.yaml`)
## Overview
The `intelligence_patterns.yaml` file defines core learning intelligence patterns for SuperClaude Framework-Hooks. This configuration enables multi-dimensional pattern recognition, adaptive learning, and intelligent behavior adaptation.
## Purpose and Role
This configuration provides:
- **Pattern Recognition**: Multi-dimensional analysis of operation patterns
- **Adaptive Learning**: Dynamic learning rate and confidence adjustment
- **Behavior Intelligence**: Context-aware decision making and optimization
- **Performance Intelligence**: Success pattern recognition and optimization
## Key Configuration Areas
### 1. Pattern Recognition
- **Multi-Dimensional Analysis**: Context type, complexity, operation type, performance
- **Signature Generation**: Unique pattern identification for caching and learning
- **Pattern Clustering**: Groups similar patterns for behavioral optimization
- **Similarity Thresholds**: Controls pattern matching sensitivity
### 2. Adaptive Learning
- **Dynamic Learning Rates**: Confidence-based learning rate adjustment (0.1-1.0)
- **Confidence Scoring**: Multi-factor confidence assessment
- **Learning Windows**: Time-based and operation-based learning boundaries
- **Adaptation Strategies**: How the system adapts to new patterns
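The confidence-based learning rate in the 0.1-1.0 range can be sketched as a linear mapping where low-confidence patterns learn fast and high-confidence patterns are updated conservatively. Both the direction and the linear form are assumptions for illustration:

```python
def learning_rate(confidence: float) -> float:
    """Map pattern confidence (0.0-1.0) onto the 0.1-1.0 rate range."""
    lo, hi = 0.1, 1.0
    return hi - confidence * (hi - lo)
```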
### 3. Intelligence Behaviors
- **Context Intelligence**: Situation-aware decision making
- **Performance Intelligence**: Success pattern recognition and replication
- **User Intelligence**: User behavior pattern learning and adaptation
- **System Intelligence**: System performance pattern optimization
## Configuration Structure
The file includes detailed configurations for:
- Learning intelligence parameters and thresholds
- Pattern recognition algorithms and clustering
- Confidence scoring and adaptation strategies
- Intelligence behavior definitions and triggers
## Integration Points
### Hook Integration
- Pattern recognition runs during hook execution
- Learning updates occur post-operation
- Intelligence behaviors influence hook coordination
### Performance Integration
- Performance patterns inform optimization decisions
- Success patterns guide resource allocation
- Failure patterns trigger adaptation strategies
## Usage Guidelines
This is an advanced configuration file that controls the core learning and intelligence capabilities of the Framework-Hooks system. Most users should not need to modify these settings, as they are tuned for optimal performance across different use cases.
## Related Documentation
- **Hook Coordination**: `hook_coordination.yaml.md` for execution patterns
- **Performance**: `performance.yaml.md` for performance optimization
- **User Experience**: `user_experience.yaml.md` for user-focused intelligence

# Logging Configuration (`logging.yaml`)
## Overview
The `logging.yaml` file defines the logging configuration for the SuperClaude-Lite framework hooks. This configuration provides comprehensive logging capabilities while maintaining performance and privacy standards for production environments.
## Purpose and Role
The logging configuration serves as:
- **Execution Monitoring**: Tracks hook lifecycle events and execution patterns
- **Performance Analysis**: Logs timing information for optimization analysis
- **Error Tracking**: Captures and logs error events with appropriate detail
- **Privacy Protection**: Sanitizes user content while preserving debugging capability
- **Development Support**: Provides configurable verbosity for development and troubleshooting
## Configuration Structure
### 1. Core Logging Settings (`logging`)
#### Basic Configuration
```yaml
logging:
enabled: false
level: "ERROR" # ERROR, WARNING, INFO, DEBUG
```
**Purpose**: Controls overall logging enablement and verbosity level
**Levels**: ERROR (critical only) → WARNING (issues) → INFO (operations) → DEBUG (detailed)
**Default**: Logging is disabled by default; when enabled, it starts at the ERROR level to minimize overhead
#### File Settings
```yaml
file_settings:
log_directory: "cache/logs"
retention_days: 30
rotation_strategy: "daily"
```
**Log Directory**: Stores logs in cache directory for easy cleanup
**Retention Policy**: 30-day retention balances storage with debugging needs
**Rotation Strategy**: Daily rotation prevents large log files
#### Hook Logging Settings
```yaml
hook_logging:
log_lifecycle: false # Log hook start/end events
log_decisions: false # Log decision points
log_errors: false # Log error events
log_timing: false # Include timing information
```
**Lifecycle Logging**: Disabled by default for performance
**Decision Logging**: Disabled by default to reduce overhead
**Error Logging**: Disabled by default (can be enabled for debugging)
**Timing Logging**: Disabled by default to minimize performance impact
#### Performance Settings
```yaml
performance:
max_overhead_ms: 1 # Maximum acceptable logging overhead
async_logging: false # Keep simple for now
```
**Overhead Limit**: 1ms maximum overhead ensures logging doesn't impact performance
**Synchronous Logging**: Simple synchronous approach for reliability and consistency
#### Privacy Settings
```yaml
privacy:
sanitize_user_content: true
exclude_sensitive_data: true
anonymize_session_ids: false # Keep for correlation
```
**Content Sanitization**: Removes or masks user content from logs
**Sensitive Data Protection**: Excludes passwords, tokens, and personal information
**Session Correlation**: Preserves session IDs for debugging while protecting user identity
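Content sanitization before a line reaches the log file can be sketched as a small regex pass. The patterns below are illustrative, not the framework's actual sanitization rules:

```python
import re

# Mask common credential patterns like "api_key=..." or "token: ...".
SENSITIVE = [
    re.compile(r"(?i)(api[_-]?key|token|password)\s*[:=]\s*\S+"),
]

def sanitize(line: str) -> str:
    for pattern in SENSITIVE:
        line = pattern.sub(r"\1=[REDACTED]", line)
    return line
```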
### 2. Hook-Specific Configuration (`hook_configuration`)
#### Pre-Tool Use Hook
```yaml
pre_tool_use:
enabled: true
log_tool_selection: true
log_input_validation: true
```
**Tool Selection Logging**: Records MCP server routing decisions
**Input Validation Logging**: Tracks validation results and failures
**Purpose**: Debug routing logic and validate input processing
#### Post-Tool Use Hook
```yaml
post_tool_use:
enabled: true
log_output_processing: true
log_integration_success: true
```
**Output Processing**: Logs quality validation and rule compliance checks
**Integration Success**: Records successful framework integration outcomes
**Purpose**: Monitor quality gates and integration effectiveness
#### Session Start Hook
```yaml
session_start:
enabled: true
log_initialization: true
log_configuration_loading: true
```
**Initialization Logging**: Tracks project detection and mode activation
**Configuration Loading**: Records YAML configuration loading and validation
**Purpose**: Debug session startup issues and configuration problems
#### Pre-Compact Hook
```yaml
pre_compact:
enabled: true
log_compression_decisions: true
```
**Compression Decisions**: Records compression level selection and strategy choices
**Purpose**: Optimize compression effectiveness and debug quality issues
#### Notification Hook
```yaml
notification:
enabled: true
log_notification_handling: true
```
**Notification Handling**: Tracks notification processing and pattern updates
**Purpose**: Debug notification system and monitor pattern update effectiveness
#### Stop Hook
```yaml
stop:
enabled: true
log_cleanup_operations: true
```
**Cleanup Operations**: Records session analytics generation and cleanup processes
**Purpose**: Monitor session termination and ensure proper cleanup
#### Subagent Stop Hook
```yaml
subagent_stop:
enabled: true
log_subagent_cleanup: true
```
**Subagent Cleanup**: Tracks task management analytics and coordination cleanup
**Purpose**: Debug task management delegation and monitor coordination effectiveness
### 3. Development Settings (`development`)
```yaml
development:
verbose_errors: false
include_stack_traces: false # Keep logs clean
debug_mode: false
```
**Verbose Errors**: Disabled by default for minimal output
**Stack Traces**: Disabled by default to keep logs clean and readable
**Debug Mode**: Disabled for production performance, can be enabled for deep debugging
## Default Values and Meanings
### Log Levels
- **ERROR**: Only critical errors that prevent operation (default for production)
- **WARNING**: Issues that don't prevent operation but should be addressed
- **INFO**: Normal operational information and key decision points (recommended when logging is enabled)
- **DEBUG**: Detailed execution information for deep troubleshooting
### Retention Policy
- **30 Days**: Balances debugging capability with storage requirements
- **Daily Rotation**: Prevents large log files, enables efficient log management
- **Automatic Cleanup**: Prevents log directory bloat over time
### Privacy Defaults
- **Sanitize User Content**: Always enabled to protect user privacy
- **Exclude Sensitive Data**: Always enabled to prevent credential exposure
- **Session ID Preservation**: Enabled for debugging correlation while protecting user identity
## Integration with Hooks
### 1. Hook Execution Logging
Each hook logs key execution events:
```
[INFO] [SessionStart] Hook execution started - session_id: abc123
[INFO] [SessionStart] Project type detected: nodejs
[INFO] [SessionStart] Mode activated: task_management
[INFO] [SessionStart] Hook execution completed - duration: 125ms
```
### 2. Decision Point Logging
Critical decisions are logged for analysis:
```
[INFO] [PreToolUse] MCP server selected: serena - confidence: 0.85
[INFO] [PreToolUse] Routing decision: multi_file_operation detected
[WARNING] [PreToolUse] Fallback activated: serena unavailable
```
### 3. Performance Logging
Timing information for optimization:
```
[INFO] [PostToolUse] Quality validation completed - duration: 45ms
[WARNING] [PreCompact] Compression exceeded target - duration: 200ms (target: 150ms)
```
### 4. Error Logging
Comprehensive error capture:
```
[ERROR] [Stop] Analytics generation failed - error: connection_timeout
[ERROR] [Stop] Fallback: basic session cleanup activated
```
## Performance Implications
### 1. Logging Overhead
#### Synchronous Logging Impact
- **Per Log Entry**: <1ms overhead (within target)
- **File I/O**: Batched writes for efficiency
- **String Processing**: Minimal formatting overhead
#### Performance Monitoring
- **Overhead Tracking**: Monitors logging performance impact
- **Threshold Alerts**: Warns when overhead exceeds 1ms target
- **Auto-Adjustment**: Can reduce logging verbosity if performance degrades
### 2. Storage Impact
#### Log File Sizes
- **Typical Session**: 50-200KB log data
- **Daily Logs**: 1-10MB depending on activity
- **Storage Growth**: ~300MB per month with 30-day retention
#### Disk I/O Impact
- **Write Operations**: Minimal impact through batching
- **Log Rotation**: Daily rotation minimizes individual file sizes
- **Cleanup**: Automatic cleanup prevents storage bloat
### 3. Memory Impact
#### Log Buffer Management
- **Buffer Size**: 10KB typical buffer size
- **Flush Strategy**: Regular flushes prevent memory buildup
- **Memory Usage**: <5MB memory overhead for logging system
## Configuration Best Practices
### 1. Production Configuration
```yaml
logging:
enabled: true
level: "INFO"
privacy:
sanitize_user_content: true
exclude_sensitive_data: true
performance:
max_overhead_ms: 1
```
**Recommendations**:
- Use INFO level for production (balances information with performance)
- Always enable privacy protection in production
- Maintain 1ms overhead limit for performance
### 2. Development Configuration
```yaml
logging:
level: "DEBUG"
development:
verbose_errors: true
debug_mode: true
privacy:
sanitize_user_content: false # Only for development
```
**Development Settings**:
- DEBUG level for detailed troubleshooting
- Verbose errors for comprehensive debugging
- Reduced privacy restrictions (development only)
### 3. Performance-Critical Configuration
```yaml
logging:
level: "ERROR"
hook_logging:
log_timing: false
performance:
max_overhead_ms: 0.5
```
**Optimization Settings**:
- ERROR level only for minimal overhead
- Disable timing logs for performance
- Stricter overhead limits
### 4. Debugging Configuration
```yaml
logging:
level: "DEBUG"
hook_logging:
log_lifecycle: true
log_decisions: true
log_timing: true
development:
verbose_errors: true
include_stack_traces: true
```
**Debug Settings**:
- Maximum verbosity for troubleshooting
- All logging features enabled
- Stack traces for deep debugging
## Log File Structure
### 1. Log Entry Format
```
[TIMESTAMP] [LEVEL] [HOOK_NAME] Message - context_key: value
```
**Example**:
```
[2024-12-15T14:30:22Z] [INFO] [PreToolUse] MCP routing completed - server: serena, confidence: 0.85, duration: 125ms
```
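A formatter producing this entry shape might look like the following sketch; the function name and keyword-argument context convention are assumptions for illustration.

```python
from datetime import datetime, timezone

def format_entry(level: str, hook: str, message: str, **context) -> str:
    """Render one log line in the [TIMESTAMP] [LEVEL] [HOOK_NAME] format above."""
    timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    # Context pairs are appended after " - " as "key: value" pairs
    suffix = ", ".join(f"{key}: {value}" for key, value in context.items())
    line = f"[{timestamp}] [{level}] [{hook}] {message}"
    return f"{line} - {suffix}" if suffix else line

print(format_entry("INFO", "PreToolUse", "MCP routing completed",
                   server="serena", confidence=0.85))
```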
### 2. Log Directory Structure
```
cache/logs/
├── superclaude-hooks-2024-12-15.log
├── superclaude-hooks-2024-12-14.log
├── superclaude-hooks-2024-12-13.log
└── archived/
└── older-logs...
```
### 3. Log Rotation Management
- **Daily Files**: New log file each day
- **Automatic Cleanup**: Removes files older than retention period
- **Archive Option**: Can archive old logs instead of deletion
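Given the daily filename convention shown above, the cleanup pass can select expired files by parsing the date out of each name. The sketch below assumes the `superclaude-hooks-YYYY-MM-DD.log` pattern and a 30-day retention window; it is illustrative, not the actual implementation.

```python
from datetime import date, timedelta
from pathlib import Path

RETENTION_DAYS = 30

def expired_logs(log_dir: Path, today: date) -> list:
    """Return daily log files older than the retention window."""
    cutoff = today - timedelta(days=RETENTION_DAYS)
    stale = []
    for path in log_dir.glob("superclaude-hooks-*.log"):
        try:
            file_date = date.fromisoformat(path.stem.removeprefix("superclaude-hooks-"))
        except ValueError:
            continue  # skip files that don't match the daily naming pattern
        if file_date < cutoff:
            stale.append(path)
    return stale
```

The caller would then either delete each returned path or move it into `archived/`, matching the archive option above.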
## Troubleshooting
### Common Logging Issues
#### No Logs Generated
- **Check**: Logging enabled in configuration
- **Verify**: Log directory permissions and existence
- **Test**: Hook execution and error handling
- **Debug**: Basic logging functionality
#### Performance Impact
- **Symptoms**: Slow hook execution, high overhead
- **Solutions**: Reduce log level, disable timing logs
- **Monitoring**: Track logging overhead metrics
- **Optimization**: Adjust performance settings
#### Log File Issues
- **Symptoms**: Missing logs, rotation problems
- **Solutions**: Check file permissions, disk space
- **Prevention**: Monitor log directory size
- **Maintenance**: Regular log cleanup
#### Privacy Concerns
- **Symptoms**: User data in logs, sensitive information exposure
- **Solutions**: Enable sanitization, review privacy settings
- **Validation**: Audit log content for sensitive data
- **Compliance**: Ensure privacy settings meet requirements
### Log Analysis
#### Performance Analysis
```bash
# Analyze hook execution times
grep -o "duration: [0-9]*ms" superclaude-hooks-*.log | sort -t' ' -k2 -n
# Find performance outliers
grep "exceeded target" superclaude-hooks-*.log
```
#### Error Analysis
```bash
# Review error patterns
grep "ERROR" superclaude-hooks-*.log
# Analyze fallback activation frequency
grep "Fallback activated" superclaude-hooks-*.log
```
#### Effectiveness Analysis
```bash
# Monitor MCP server selection patterns
grep "MCP server selected" superclaude-hooks-*.log
# Track mode activation patterns
grep "Mode activated" superclaude-hooks-*.log
```
## Related Documentation
- **Hook Implementation**: See individual hook documentation for specific logging patterns
- **Performance Configuration**: Reference `performance.yaml.md` for performance monitoring integration
- **Privacy Guidelines**: Review framework privacy standards for logging compliance
- **Development Support**: See development configuration for debugging techniques
## Version History
- **v1.0.0**: Initial logging configuration
- Simple, performance-focused logging system
- Comprehensive privacy protection
- Hook-specific logging customization
- Development and production configuration support
@@ -1,73 +0,0 @@
# MCP Orchestration Configuration (`mcp_orchestration.yaml`)
## Overview
The `mcp_orchestration.yaml` file configures MCP (Model Context Protocol) server coordination, intelligent routing, and optimization strategies for the SuperClaude-Lite framework.
## Purpose and Role
This configuration provides:
- **MCP Server Routing**: Intelligent selection of MCP servers based on context
- **Server Coordination**: Multi-server coordination and fallback strategies
- **Performance Optimization**: Caching, load balancing, and resource management
- **Context Awareness**: Operation-specific server selection and configuration
## Key Configuration Areas
### 1. Server Selection Patterns
- **Context-Based Routing**: Route requests to appropriate MCP servers based on operation type
- **Confidence Thresholds**: Minimum confidence levels for server selection
- **Fallback Chains**: Backup server selection when primary servers unavailable
- **Performance-Based Selection**: Choose servers based on historical performance
### 2. Multi-Server Coordination
- **Parallel Execution**: Coordinate multiple servers for complex operations
- **Result Aggregation**: Combine results from multiple servers intelligently
- **Conflict Resolution**: Handle conflicting recommendations from different servers
- **Load Distribution**: Balance requests across available servers
### 3. Performance Optimization
- **Response Caching**: Cache server responses to reduce latency
- **Connection Pooling**: Manage persistent connections to MCP servers
- **Request Batching**: Batch similar requests for efficiency
- **Timeout Management**: Handle server timeouts gracefully
### 4. Context Intelligence
- **Operation Type Detection**: Identify operation types for optimal server selection
- **Project Context Awareness**: Route based on detected project characteristics
- **User Preference Integration**: Consider user preferences in server selection
- **Historical Performance**: Learn from past server performance
## Configuration Structure
The file typically includes:
- Server capability mappings (which servers handle which operations)
- Routing rules and decision trees
- Performance thresholds and optimization settings
- Fallback and error handling strategies
## Integration Points
### Hook Integration
- **Pre-Tool Use**: Server selection and preparation
- **Post-Tool Use**: Performance tracking and result validation
- **Session Start**: Server availability checking and initialization
### Framework Integration
- Works with mode detection to optimize server selection
- Integrates with performance monitoring for optimization
- Coordinates with user experience settings for personalization
## Usage Guidelines
This configuration controls how the framework routes operations to different MCP servers. Key considerations:
- **Server Availability**: Configure appropriate fallback chains
- **Performance Tuning**: Adjust timeout and caching settings for your environment
- **Context Mapping**: Ensure operation types map to appropriate servers
- **Load Management**: Configure load balancing for high-usage scenarios
## Related Documentation
- **Hook Coordination**: `hook_coordination.yaml.md` for execution patterns
- **Performance**: `performance.yaml.md` for performance monitoring
- **User Experience**: `user_experience.yaml.md` for user-focused routing
@@ -1,169 +0,0 @@
# Modes Configuration (`modes.yaml`)
## Overview
The `modes.yaml` file defines mode detection patterns for the SuperClaude-Lite framework. This configuration controls trigger patterns and activation thresholds for behavioral mode detection.
## Purpose and Role
The modes configuration provides:
- **Pattern-Based Detection**: Regex and keyword patterns for automatic mode activation
- **Confidence Thresholds**: Minimum confidence levels required for mode activation
- **Auto-Activation Control**: Enable/disable automatic mode detection
- **Performance Tuning**: File count and complexity thresholds for task management mode
## Configuration Structure
### Basic Structure
```yaml
mode_detection:
[mode_name]:
enabled: true/false
trigger_patterns: [list of patterns]
confidence_threshold: 0.0-1.0
auto_activate: true/false
```
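To make the schema concrete, detection can be sketched as scoring user input against each mode's trigger patterns and comparing the result to the threshold. The scoring formula below (ratio of matched patterns) is an assumption for illustration; the actual implementation in `pattern_detection` may weight patterns differently.

```python
import re

# Trimmed-down copies of the mode definitions above, for illustration
MODES = {
    "brainstorming": {
        "patterns": ["I want to build", "not sure", "brainstorm", "explore"],
        "threshold": 0.7,
    },
    "token_efficiency": {
        "patterns": ["brief", "concise", "token.*optimization"],
        "threshold": 0.75,
    },
}

def detect_modes(user_input: str) -> list:
    """Return modes whose pattern-match ratio meets the confidence threshold."""
    active = []
    for mode, config in MODES.items():
        hits = sum(bool(re.search(p, user_input, re.IGNORECASE))
                   for p in config["patterns"])
        confidence = hits / len(config["patterns"])  # naive ratio scoring
        if confidence >= config["threshold"]:
            active.append(mode)
    return active
```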
### 1. Brainstorming Mode
```yaml
brainstorming:
enabled: true
trigger_patterns:
- "I want to build"
- "thinking about"
- "not sure"
- "maybe.*could"
- "brainstorm"
- "explore"
- "figure out"
- "unclear.*requirements"
- "ambiguous.*needs"
confidence_threshold: 0.7
auto_activate: true
```
**Purpose**: Detects exploration and requirement discovery needs
**Patterns**: Matches uncertain language and exploration keywords
**Threshold**: 70% confidence required for activation
### 2. Task Management Mode
```yaml
task_management:
enabled: true
trigger_patterns:
- "multiple.*tasks"
- "complex.*system"
- "build.*comprehensive"
- "coordinate.*work"
- "large-scale.*operation"
- "manage.*operations"
- "comprehensive.*refactoring"
- "authentication.*system"
confidence_threshold: 0.7
auto_activate: true
auto_activation_thresholds:
file_count: 3
complexity_score: 0.4
```
**Purpose**: Detects complex, multi-step operations requiring coordination
**Patterns**: Matches system-level and coordination keywords
**Thresholds**: 70% confidence, 3+ files, 0.4+ complexity score
### 3. Token Efficiency Mode
```yaml
token_efficiency:
enabled: true
trigger_patterns:
- "brief"
- "concise"
- "compressed"
- "efficient.*output"
- "token.*optimization"
- "short.*response"
- "running.*low.*context"
confidence_threshold: 0.75
auto_activate: true
```
**Purpose**: Detects requests for compressed or efficient output
**Patterns**: Matches brevity and efficiency requests
**Threshold**: 75% confidence required for activation
### 4. Introspection Mode
```yaml
introspection:
enabled: true
trigger_patterns:
- "analyze.*reasoning"
- "examine.*decision"
- "reflect.*on"
- "meta.*cognitive"
- "thinking.*process"
- "reasoning.*process"
- "decision.*made"
confidence_threshold: 0.6
auto_activate: true
```
**Purpose**: Detects requests for meta-cognitive analysis
**Patterns**: Matches reasoning and analysis language
**Threshold**: 60% confidence (lower threshold for broader detection)
## Configuration Guidelines
### Pattern Design
- Use regex patterns for flexible matching
- Include variations of key concepts
- Balance specificity with coverage
- Test patterns against common user inputs
### Threshold Tuning
- **Higher thresholds** (0.8+): Reduce false positives, increase precision
- **Lower thresholds** (0.5-0.6): Increase detection, may include false positives
- **Balanced thresholds** (0.7): Good default for most use cases
### Performance Considerations
- Pattern matching adds ~10-50ms per mode evaluation
- More complex regex patterns increase processing time
- Consider disabling unused modes to improve performance
## Integration Points
### Hook Integration
- **Session Start**: Mode detection runs during session initialization
- **Pre-Tool Use**: Mode coordination affects tool selection
- **Post-Tool Use**: Mode effectiveness tracking and validation
### MCP Server Coordination
- Detected modes influence MCP server routing
- Mode-specific optimization strategies applied
- Performance profiles adapted based on active modes
## Troubleshooting
### Mode Not Activating
- **Check pattern matching**: Test patterns against actual user input
- **Lower threshold**: Reduce confidence threshold for broader detection
- **Add patterns**: Include additional trigger patterns for edge cases
### Wrong Mode Activating
- **Increase threshold**: Raise confidence threshold for more selective activation
- **Refine patterns**: Make patterns more specific to reduce false matches
- **Pattern conflicts**: Check for overlapping patterns between modes
### Performance Issues
- **Disable unused modes**: Set `enabled: false` for unused modes
- **Simplify patterns**: Use simpler regex patterns for better performance
- **Monitor timing**: Track mode detection overhead in logs
## Related Documentation
- **Mode Implementation**: See individual mode documentation (MODE_*.md files)
- **Hook Integration**: Reference `session_start.py` for mode initialization
- **Performance Configuration**: See `performance.yaml.md` for performance monitoring
@@ -1,530 +0,0 @@
# Orchestrator Configuration (`orchestrator.yaml`)
## Overview
The `orchestrator.yaml` file defines intelligent routing patterns and coordination strategies for the SuperClaude-Lite framework. This configuration implements the ORCHESTRATOR.md patterns through automated MCP server selection, hybrid intelligence coordination, and performance optimization strategies.
## Purpose and Role
The orchestrator configuration serves as:
- **Intelligent Routing Engine**: Automatically selects optimal MCP servers based on task characteristics
- **Hybrid Intelligence Coordinator**: Manages coordination between Morphllm and Serena for optimal editing strategies
- **Performance Optimizer**: Implements caching, parallel processing, and resource management strategies
- **Fallback Manager**: Provides graceful degradation when preferred servers are unavailable
- **Learning Coordinator**: Tracks routing effectiveness and adapts selection strategies
## Configuration Structure
### 1. MCP Server Routing Patterns (`routing_patterns`)
#### UI Components Routing
```yaml
ui_components:
triggers: ["component", "button", "form", "modal", "dialog", "card", "input", "design", "frontend", "ui", "interface"]
mcp_server: "magic"
persona: "frontend-specialist"
confidence_threshold: 0.8
priority: "high"
performance_profile: "standard"
capabilities: ["ui_generation", "design_systems", "component_patterns"]
```
**Purpose**: Routes UI-related requests to Magic MCP server with frontend persona activation
**Triggers**: Comprehensive UI terminology detection
**Performance**: Standard performance profile with high priority routing
#### Deep Analysis Routing
```yaml
deep_analysis:
triggers: ["analyze", "complex", "system-wide", "architecture", "debug", "troubleshoot", "investigate", "root cause"]
mcp_server: "sequential"
thinking_mode: "--think-hard"
confidence_threshold: 0.75
priority: "high"
performance_profile: "intensive"
capabilities: ["complex_reasoning", "systematic_analysis", "hypothesis_testing"]
context_expansion: true
```
**Purpose**: Routes complex analysis requests to Sequential with enhanced thinking modes
**Thinking Integration**: Automatically activates `--think-hard` for systematic analysis
**Context Expansion**: Enables broader context analysis for complex problems
#### Library Documentation Routing
```yaml
library_documentation:
triggers: ["library", "framework", "package", "import", "dependency", "documentation", "docs", "api", "reference"]
mcp_server: "context7"
persona: "architect"
confidence_threshold: 0.85
priority: "medium"
performance_profile: "standard"
capabilities: ["documentation_access", "framework_patterns", "best_practices"]
```
**Purpose**: Routes documentation requests to Context7 with architect persona
**High Confidence**: 85% threshold ensures precise documentation routing
**Best Practices**: Integrates framework patterns and best practices into responses
#### Testing Automation Routing
```yaml
testing_automation:
triggers: ["test", "testing", "e2e", "end-to-end", "browser", "automation", "validation", "verify"]
mcp_server: "playwright"
confidence_threshold: 0.8
priority: "medium"
performance_profile: "intensive"
capabilities: ["browser_automation", "testing_frameworks", "performance_testing"]
```
**Purpose**: Routes testing requests to Playwright for browser automation
**Manual Preference**: No auto-activation, prefers manual confirmation for testing operations
**Intensive Profile**: Uses intensive performance profile for testing workloads
#### Intelligent Editing Routing
```yaml
intelligent_editing:
triggers: ["edit", "modify", "refactor", "update", "change", "fix", "improve"]
mcp_server: "morphllm"
confidence_threshold: 0.7
priority: "medium"
performance_profile: "lightweight"
capabilities: ["pattern_application", "fast_apply", "intelligent_editing"]
complexity_threshold: 0.6
file_count_threshold: 10
```
**Purpose**: Routes editing requests to Morphllm for fast, intelligent modifications
**Thresholds**: Complexity ≤0.6 and file count ≤10 for optimal Morphllm performance
**Lightweight Profile**: Optimized for speed and efficiency
#### Semantic Analysis Routing
```yaml
semantic_analysis:
triggers: ["semantic", "symbol", "reference", "find", "search", "navigate", "explore"]
mcp_server: "serena"
confidence_threshold: 0.8
priority: "high"
performance_profile: "standard"
capabilities: ["semantic_understanding", "project_context", "memory_management"]
```
**Purpose**: Routes semantic analysis to Serena for deep project understanding
**High Priority**: Essential for project navigation and context management
**Symbol Operations**: Optimal for symbol-level operations and refactoring
#### Multi-File Operations Routing
```yaml
multi_file_operations:
triggers: ["multiple files", "batch", "bulk", "project-wide", "codebase", "entire"]
mcp_server: "serena"
confidence_threshold: 0.9
priority: "high"
performance_profile: "intensive"
capabilities: ["multi_file_coordination", "project_analysis"]
```
**Purpose**: Routes large-scale operations to Serena for comprehensive project handling
**High Confidence**: 90% threshold ensures accurate detection of multi-file operations
**Intensive Profile**: Resources allocated for complex project-wide operations
### 2. Hybrid Intelligence Selection (`hybrid_intelligence`)
#### Morphllm vs Serena Decision Matrix
```yaml
morphllm_vs_serena:
decision_factors:
- file_count
- complexity_score
- operation_type
- symbol_operations_required
- project_size
morphllm_criteria:
file_count_max: 10
complexity_max: 0.6
preferred_operations: ["edit", "modify", "update", "pattern_application"]
optimization_focus: "token_efficiency"
serena_criteria:
file_count_min: 5
complexity_min: 0.4
preferred_operations: ["analyze", "refactor", "navigate", "symbol_operations"]
optimization_focus: "semantic_understanding"
fallback_strategy:
- try_primary_choice
- fallback_to_alternative
- use_native_tools
```
**Decision Logic**: Multi-factor analysis determines optimal server selection
**Clear Boundaries**: Morphllm for simple edits, Serena for complex analysis
**Fallback Chain**: Graceful degradation through alternative servers to native tools
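The decision matrix above can be sketched as a small selector function. Note that the Morphllm and Serena criteria deliberately overlap (5-10 files, 0.4-0.6 complexity), with Morphllm preferred when its criteria are fully met; the function below is an illustrative reading of those rules, not the actual routing code.

```python
def select_editing_server(file_count: int, complexity: float,
                          operation: str) -> str:
    """Choose between Morphllm, Serena, and native tools per the matrix above."""
    morphllm_ops = {"edit", "modify", "update", "pattern_application"}
    serena_ops = {"analyze", "refactor", "navigate", "symbol_operations"}
    # Morphllm: small scope, low complexity, pattern-style edits
    if file_count <= 10 and complexity <= 0.6 and operation in morphllm_ops:
        return "morphllm"  # optimized for token efficiency
    # Serena: larger scope, higher complexity, or semantic operations
    if file_count >= 5 or complexity >= 0.4 or operation in serena_ops:
        return "serena"  # optimized for semantic understanding
    return "native_tools"  # final step in the fallback chain
```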
### 3. Auto-Activation Rules (`auto_activation`)
#### Complexity Thresholds
```yaml
complexity_thresholds:
enable_sequential:
complexity_score: 0.6
file_count: 5
operation_types: ["analyze", "debug", "complex"]
enable_delegation:
file_count: 3
directory_count: 2
complexity_score: 0.4
enable_validation:
is_production: true
risk_level: ["high", "critical"]
operation_types: ["deploy", "refactor", "delete"]
```
**Sequential Activation**: Complex operations with 0.6+ complexity or 5+ files
**Delegation Triggers**: Multi-file operations exceeding thresholds
**Validation Requirements**: Production and high-risk operations
### 4. Performance Optimization (`performance_optimization`)
#### Parallel Execution Strategy
```yaml
parallel_execution:
file_threshold: 3
estimated_speedup_min: 1.4
max_concurrency: 7
```
**Threshold Management**: 3+ files required for parallel processing
**Performance Guarantee**: Minimum 1.4x speedup required for activation
**Concurrency Limits**: Maximum 7 concurrent operations for resource management
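The activation gates above reduce to two small checks; the helpers below are a sketch of how a hook might apply them (function names are assumptions).

```python
def should_parallelize(file_count: int, estimated_speedup: float) -> bool:
    """Activate parallel execution only at 3+ files with >=1.4x expected speedup."""
    return file_count >= 3 and estimated_speedup >= 1.4

def worker_count(file_count: int, max_concurrency: int = 7) -> int:
    """Cap concurrent operations at the configured maximum of 7."""
    return min(file_count, max_concurrency)
```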
#### Caching Strategy
```yaml
caching_strategy:
enable_for_operations: ["documentation_lookup", "analysis_results", "pattern_matching"]
cache_duration_minutes: 30
max_cache_size_mb: 100
```
**Selective Caching**: Focuses on high-benefit operations
**Duration Management**: 30-minute cache lifetime balances freshness with performance
**Size Limits**: 100MB cache prevents excessive memory usage
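A minimal time-bounded cache implementing the 30-minute lifetime might look like the sketch below. It omits the 100MB size cap for brevity and is not the framework's actual cache; the class and method names are assumptions.

```python
import time

class TTLCache:
    """Minimal response cache with per-entry time-to-live expiry."""

    def __init__(self, ttl_seconds: float = 30 * 60):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (stored_at, value)

    def get(self, key: str):
        entry = self._entries.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._entries[key]  # expired: evict and report a miss
            return None
        return value

    def put(self, key: str, value) -> None:
        self._entries[key] = (time.monotonic(), value)
```

Enforcing the size limit would additionally require tracking entry sizes and evicting oldest entries when the total exceeds the configured maximum.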
#### Resource Management
```yaml
resource_management:
memory_threshold_percent: 85
token_threshold_percent: 75
fallback_to_lightweight: true
```
**Memory Protection**: 85% memory threshold triggers resource optimization
**Token Management**: 75% token usage threshold activates efficiency mode
**Automatic Fallback**: Switches to lightweight alternatives under pressure
### 5. Quality Gates Integration (`quality_gates`)
#### Validation Levels
```yaml
validation_levels:
basic: ["syntax_validation"]
standard: ["syntax_validation", "type_analysis", "code_quality"]
comprehensive: ["syntax_validation", "type_analysis", "code_quality", "security_assessment", "performance_analysis"]
production: ["syntax_validation", "type_analysis", "code_quality", "security_assessment", "performance_analysis", "integration_testing", "deployment_validation"]
```
**Progressive Validation**: Escalating validation complexity based on operation risk
**Production Standards**: Comprehensive 7-step validation for production operations
#### Trigger Conditions
```yaml
trigger_conditions:
comprehensive:
- is_production: true
- complexity_score: ">0.7"
- operation_types: ["refactor", "architecture"]
production:
- is_production: true
- operation_types: ["deploy", "release"]
```
**Comprehensive Triggers**: Production context or high complexity operations
**Production Triggers**: Deploy and release operations receive maximum validation
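The trigger conditions above can be read as a priority-ordered selector: production deploys get the strictest level, then comprehensive triggers, then the standard default. The function below is an illustrative sketch of that reading (names and the "standard" default are assumptions).

```python
def validation_level(is_production: bool, complexity: float,
                     operation: str) -> str:
    """Map operation context to a validation level per the triggers above."""
    if is_production and operation in {"deploy", "release"}:
        return "production"  # maximum 7-step validation
    if is_production or complexity > 0.7 or operation in {"refactor", "architecture"}:
        return "comprehensive"
    return "standard"  # assumed default for routine operations
```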
### 6. Fallback Strategies (`fallback_strategies`)
#### MCP Server Unavailable
```yaml
mcp_server_unavailable:
context7: ["web_search", "cached_documentation", "native_analysis"]
sequential: ["native_step_by_step", "basic_analysis"]
magic: ["manual_component_generation", "template_suggestions"]
playwright: ["manual_testing_suggestions", "test_case_generation"]
morphllm: ["native_edit_tools", "manual_editing"]
serena: ["basic_file_operations", "simple_search"]
```
**Graceful Degradation**: Each server has specific fallback alternatives
**Functionality Preservation**: Maintains core functionality even with server failures
**User Guidance**: Provides manual alternatives when automation unavailable
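Walking one of these chains amounts to trying the primary server, then each fallback in order, until an available capability is found. The sketch below shows the idea with two of the chains above; the `available` set stands in for whatever health-check the framework actually performs.

```python
# Two of the fallback chains from the configuration above
FALLBACKS = {
    "context7": ["web_search", "cached_documentation", "native_analysis"],
    "serena": ["basic_file_operations", "simple_search"],
}

def resolve_with_fallback(server: str, available: set):
    """Return the first available capability in the server's fallback chain."""
    for candidate in [server, *FALLBACKS.get(server, [])]:
        if candidate in available:
            return candidate
    return None  # nothing usable: surface a clear error to the user
```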
#### Performance Degradation
```yaml
performance_degradation:
high_latency: ["reduce_analysis_depth", "enable_caching", "parallel_processing"]
resource_constraints: ["lightweight_alternatives", "compression_mode", "minimal_features"]
```
**Latency Management**: Reduces analysis depth and increases caching
**Resource Protection**: Switches to lightweight alternatives and compression
#### Quality Issues
```yaml
quality_issues:
validation_failures: ["increase_validation_depth", "manual_review", "rollback_capability"]
error_rates_high: ["enable_pre_validation", "reduce_complexity", "step_by_step_execution"]
```
**Quality Recovery**: Increases validation and enables manual review
**Error Prevention**: Pre-validation and complexity reduction strategies
### 7. Learning Integration (`learning_integration`)
#### Effectiveness Tracking
```yaml
effectiveness_tracking:
track_server_performance: true
track_routing_decisions: true
track_user_satisfaction: true
```
**Performance Monitoring**: Tracks server performance and routing accuracy
**User Feedback**: Incorporates user satisfaction into learning algorithms
**Decision Analysis**: Analyzes routing decision effectiveness over time
#### Adaptation Triggers
```yaml
adaptation_triggers:
effectiveness_threshold: 0.6
confidence_threshold: 0.7
usage_count_min: 3
```
**Effectiveness Gates**: 60% effectiveness threshold triggers adaptation
**Confidence Requirements**: 70% confidence required for routing changes
**Statistical Significance**: Minimum 3 usage instances for pattern recognition
#### Optimization Feedback
```yaml
optimization_feedback:
performance_degradation: "adjust_routing_weights"
user_preference_detected: "update_server_priorities"
error_patterns_found: "enhance_fallback_strategies"
```
**Dynamic Optimization**: Adjusts routing weights based on performance
**Personalization**: Updates priorities based on user preferences
**Error Learning**: Enhances fallback strategies based on error patterns
### 8. Mode Integration (`mode_integration`)
#### Brainstorming Mode
```yaml
brainstorming:
preferred_servers: ["sequential", "context7"]
thinking_modes: ["--think", "--think-hard"]
```
**Server Preference**: Sequential for reasoning, Context7 for documentation
**Enhanced Thinking**: Activates thinking modes for deeper analysis
#### Task Management Mode
```yaml
task_management:
coordination_servers: ["serena", "morphllm"]
delegation_strategies: ["files", "folders", "auto"]
```
**Coordination Focus**: Serena for analysis, Morphllm for execution
**Delegation Options**: Multiple strategies for different operation types
#### Token Efficiency Mode
```yaml
token_efficiency:
optimization_servers: ["morphllm"]
compression_strategies: ["symbol_systems", "abbreviations"]
```
**Efficiency Focus**: Morphllm for token-optimized operations
**Compression Integration**: Symbol systems and abbreviation strategies
## Performance Implications
### 1. Routing Decision Overhead
#### Decision Time Analysis
- **Pattern Matching**: 10-50ms per routing pattern evaluation
- **Confidence Calculation**: 5-20ms per server option
- **Total Routing Decision**: 50-200ms for complete routing analysis
#### Memory Usage
- **Pattern Storage**: 20-50KB for all routing patterns
- **Decision State**: 10-20KB during routing evaluation
- **Cache Storage**: Up to 100MB for cached results
### 2. MCP Server Coordination
#### Server Communication
- **Activation Time**: 100-500ms per MCP server activation
- **Coordination Overhead**: 50-200ms for multi-server operations
- **Fallback Detection**: 100-300ms to detect and switch to fallback
#### Resource Allocation
- **Memory Per Server**: 50-200MB depending on server type
- **CPU Usage**: 20-60% during intensive server operations
- **Network Usage**: Varies by server, cached where possible
### 3. Learning System Impact
#### Learning Overhead
- **Effectiveness Tracking**: 5-20ms per operation for metrics collection
- **Pattern Analysis**: 100-500ms for pattern recognition updates
- **Adaptation Application**: 200ms-2s for routing weight adjustments
#### Storage Requirements
- **Learning Data**: 500KB-2MB per session for effectiveness tracking
- **Pattern Storage**: 100KB-1MB for persistent patterns
- **Cache Data**: Up to 100MB for performance optimization
## Configuration Best Practices
### 1. Production Orchestrator Configuration
```yaml
# Optimize for reliability and performance
routing_patterns:
ui_components:
confidence_threshold: 0.9 # Higher confidence for production
auto_activation:
enable_validation:
is_production: true
risk_level: ["medium", "high", "critical"] # More conservative
```
### 2. Development Orchestrator Configuration
```yaml
# Enable more experimentation and learning
learning_integration:
adaptation_triggers:
effectiveness_threshold: 0.4 # More aggressive learning
usage_count_min: 1 # Learn from fewer samples
```
### 3. Performance-Optimized Configuration
```yaml
# Minimize overhead for performance-critical environments
performance_optimization:
parallel_execution:
file_threshold: 5 # Higher threshold to reduce overhead
caching_strategy:
cache_duration_minutes: 60 # Longer cache for better performance
```
### 4. Learning-Optimized Configuration
```yaml
# Maximum learning and adaptation
learning_integration:
effectiveness_tracking:
detailed_analytics: true
user_interaction_tracking: true
optimization_feedback:
continuous_adaptation: true
```
## Troubleshooting
### Common Orchestration Issues
#### Wrong Server Selected
- **Symptoms**: Suboptimal server choice for task type
- **Analysis**: Review trigger patterns and confidence thresholds
- **Solution**: Adjust routing patterns or increase confidence thresholds
- **Testing**: Test routing with sample inputs and monitor effectiveness
#### Server Unavailable Issues
- **Symptoms**: Frequent fallback activation, degraded functionality
- **Diagnosis**: Check MCP server availability and network connectivity
- **Resolution**: Verify server configurations and fallback strategies
- **Prevention**: Implement server health monitoring
#### Performance Degradation
- **Symptoms**: Slow routing decisions, high overhead
- **Analysis**: Profile routing decision time and resource usage
- **Optimization**: Adjust confidence thresholds, enable caching
- **Monitoring**: Track routing performance metrics
#### Fallback Chain Failures
- **Symptoms**: Complete functionality loss when primary server fails
- **Investigation**: Review fallback strategy completeness
- **Enhancement**: Add more fallback options and manual alternatives
- **Testing**: Test fallback chains under various failure scenarios
### Learning System Troubleshooting
#### No Learning Observed
- **Check**: Learning integration enabled and collecting data
- **Verify**: Effectiveness metrics being calculated and stored
- **Debug**: Review adaptation trigger thresholds
- **Fix**: Ensure learning data persistence and pattern recognition
#### Poor Routing Decisions
- **Analysis**: Review routing effectiveness metrics and user feedback
- **Adjustment**: Modify confidence thresholds and trigger patterns
- **Validation**: Test routing decisions with controlled scenarios
- **Monitoring**: Track long-term routing accuracy trends
#### Resource Usage Issues
- **Monitoring**: Track memory and CPU usage during orchestration
- **Optimization**: Adjust cache sizes and parallel processing limits
- **Tuning**: Optimize resource thresholds and fallback triggers
- **Balancing**: Balance learning sophistication with resource constraints
## Integration with Other Configurations
### 1. MCP Server Coordination
The orchestrator configuration works closely with:
- **superclaude-config.json**: MCP server definitions and capabilities
- **performance.yaml**: Performance targets and optimization strategies
- **modes.yaml**: Mode-specific server preferences and coordination
### 2. Hook Integration
Orchestrator patterns are implemented through:
- **Pre-Tool Use Hook**: Server selection and routing decisions
- **Post-Tool Use Hook**: Effectiveness tracking and learning
- **Session Start Hook**: Initial server availability assessment
### 3. Quality Gates Coordination
Quality validation levels integrate with:
- **validation.yaml**: Specific validation rules and standards
- Trigger conditions for comprehensive and production validation
- Performance monitoring for validation effectiveness
## Related Documentation
- **ORCHESTRATOR.md**: Framework orchestration patterns and principles
- **MCP Server Documentation**: Individual server capabilities and integration
- **Hook Documentation**: Implementation details for orchestration hooks
- **Performance Configuration**: Performance targets and optimization strategies
## Version History
- **v1.0.0**: Initial orchestrator configuration
- Comprehensive MCP server routing with 6 server types
- Hybrid intelligence coordination between Morphllm and Serena
- Multi-level quality gates integration with production safeguards
- Learning system integration with effectiveness tracking
- Performance optimization with caching and parallel processing
- Robust fallback strategies for graceful degradation

# Performance Configuration (`performance.yaml`)
## Overview
The `performance.yaml` file defines comprehensive performance targets, thresholds, and optimization strategies for the SuperClaude-Lite framework. This configuration establishes performance standards across all hooks, MCP servers, modes, and system components while providing monitoring and optimization guidance.
## Purpose and Role
The performance configuration serves as:
- **Performance Standards Definition**: Establishes specific targets for all framework components
- **Threshold Management**: Defines warning and critical thresholds for proactive optimization
- **Optimization Strategy Guide**: Provides systematic approaches to performance improvement
- **Monitoring Framework**: Enables comprehensive performance tracking and alerting
- **Resource Management**: Balances system resources across competing framework demands
## Configuration Structure
### 1. Hook Performance Targets (`hook_targets`)
#### Session Start Hook
```yaml
session_start:
target_ms: 50
warning_threshold_ms: 75
critical_threshold_ms: 100
optimization_priority: "critical"
```
**Purpose**: Fastest initialization for immediate user engagement
**Rationale**: Session start is user-facing and sets performance expectations
**Optimization Priority**: Critical due to user experience impact
#### Pre-Tool Use Hook
```yaml
pre_tool_use:
target_ms: 200
warning_threshold_ms: 300
critical_threshold_ms: 500
optimization_priority: "high"
```
**Purpose**: MCP routing and orchestration decisions
**Complexity**: Higher target accommodates intelligent routing analysis
**Priority**: High due to frequency of execution
#### Post-Tool Use Hook
```yaml
post_tool_use:
target_ms: 100
warning_threshold_ms: 150
critical_threshold_ms: 250
optimization_priority: "medium"
```
**Purpose**: Quality validation and rule compliance
**Balance**: Moderate target balances thoroughness with responsiveness
**Priority**: Medium due to quality importance vs. frequency
#### Pre-Compact Hook
```yaml
pre_compact:
target_ms: 150
warning_threshold_ms: 200
critical_threshold_ms: 300
optimization_priority: "high"
```
**Purpose**: Token efficiency analysis and compression decisions
**Complexity**: Moderate target for compression analysis
**Priority**: High due to token efficiency impact on overall performance
#### Notification Hook
```yaml
notification:
target_ms: 100
warning_threshold_ms: 150
critical_threshold_ms: 200
optimization_priority: "medium"
```
**Purpose**: Documentation loading and pattern updates
**Efficiency**: Fast target for notification processing
**Priority**: Medium due to background nature of operation
#### Stop Hook
```yaml
stop:
target_ms: 200
warning_threshold_ms: 300
critical_threshold_ms: 500
optimization_priority: "low"
```
**Purpose**: Session analytics and cleanup
**Tolerance**: Higher target acceptable for session termination
**Priority**: Low due to end-of-session timing flexibility
#### Subagent Stop Hook
```yaml
subagent_stop:
target_ms: 150
warning_threshold_ms: 200
critical_threshold_ms: 300
optimization_priority: "medium"
```
**Purpose**: Task management analytics and coordination cleanup
**Balance**: Moderate target for coordination analysis
**Priority**: Medium due to task management efficiency impact
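The target/warning/critical bands above amount to a simple latency classification. A minimal sketch (function and status names are illustrative, not part of the framework; thresholds shown are the `session_start` values):

```python
# Classify a hook's measured latency against its configured
# target / warning / critical thresholds (names hypothetical).
def classify_latency(elapsed_ms, target_ms, warning_ms, critical_ms):
    if elapsed_ms <= target_ms:
        return "on_target"
    if elapsed_ms <= warning_ms:
        return "acceptable"      # above target, below warning threshold
    if elapsed_ms <= critical_ms:
        return "warning"
    return "critical"

# Using the session_start thresholds (50 / 75 / 100 ms):
print(classify_latency(40, 50, 75, 100))   # on_target
print(classify_latency(90, 50, 75, 100))   # warning
```

The same three-band check applies uniformly to every hook entry in `hook_targets`; only the numbers differ.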
### 2. System Performance Targets (`system_targets`)
#### Overall Efficiency Targets
```yaml
overall_session_efficiency: 0.75
mcp_coordination_efficiency: 0.70
compression_effectiveness: 0.50
learning_adaptation_rate: 0.80
user_satisfaction_target: 0.75
```
**Session Efficiency**: 75% overall efficiency across all operations
**MCP Coordination**: 70% efficiency in server selection and coordination
**Compression**: 50% token reduction through intelligent compression
**Learning Rate**: 80% successful adaptation based on feedback
**User Satisfaction**: 75% positive user experience target
#### Resource Utilization Targets
```yaml
resource_utilization:
memory_target_mb: 100
memory_warning_mb: 150
memory_critical_mb: 200
cpu_target_percent: 40
cpu_warning_percent: 60
cpu_critical_percent: 80
token_efficiency_target: 0.40
token_warning_threshold: 0.20
token_critical_threshold: 0.10
```
**Memory Management**: Progressive thresholds for memory optimization
**CPU Utilization**: Conservative targets to prevent system impact
**Token Efficiency**: Aggressive efficiency targets for context optimization
### 3. MCP Server Performance (`mcp_server_performance`)
#### Context7 Performance
```yaml
context7:
activation_target_ms: 150
response_target_ms: 500
cache_hit_ratio_target: 0.70
quality_score_target: 0.90
```
**Purpose**: Documentation lookup and framework patterns
**Cache Strategy**: 70% cache hit ratio for documentation efficiency
**Quality Assurance**: 90% quality score for documentation accuracy
#### Sequential Performance
```yaml
sequential:
activation_target_ms: 200
response_target_ms: 1000
analysis_depth_target: 0.80
reasoning_quality_target: 0.85
```
**Purpose**: Complex reasoning and systematic analysis
**Analysis Depth**: 80% comprehensive analysis coverage
**Quality Focus**: 85% reasoning quality for reliable analysis
#### Magic Performance
```yaml
magic:
activation_target_ms: 120
response_target_ms: 800
component_quality_target: 0.85
generation_speed_target: 0.75
```
**Purpose**: UI component generation and design systems
**Component Quality**: 85% quality for generated UI components
**Generation Speed**: 75% efficiency in component creation
#### Playwright Performance
```yaml
playwright:
activation_target_ms: 300
response_target_ms: 2000
test_reliability_target: 0.90
automation_efficiency_target: 0.80
```
**Purpose**: Browser automation and testing
**Test Reliability**: 90% reliable test execution
**Automation Efficiency**: 80% successful automation operations
#### Morphllm Performance
```yaml
morphllm:
activation_target_ms: 80
response_target_ms: 400
edit_accuracy_target: 0.95
processing_efficiency_target: 0.85
```
**Purpose**: Intelligent editing with fast apply
**Edit Accuracy**: 95% accurate edits for reliable modifications
**Processing Efficiency**: 85% efficient processing for speed optimization
#### Serena Performance
```yaml
serena:
activation_target_ms: 100
response_target_ms: 600
semantic_accuracy_target: 0.90
memory_efficiency_target: 0.80
```
**Purpose**: Semantic analysis and memory management
**Semantic Accuracy**: 90% accurate semantic understanding
**Memory Efficiency**: 80% efficient memory operations
### 4. Compression Performance (`compression_performance`)
#### Core Compression Targets
```yaml
target_compression_ratio: 0.50
quality_preservation_minimum: 0.95
processing_speed_target_chars_per_ms: 100
```
**Compression Ratio**: 50% token reduction target across all compression operations
**Quality Preservation**: 95% minimum information preservation
**Processing Speed**: 100 characters per millisecond processing target
#### Level-Specific Targets
```yaml
level_targets:
minimal:
compression_ratio: 0.15
quality_preservation: 0.98
processing_time_factor: 1.0
efficient:
compression_ratio: 0.40
quality_preservation: 0.95
processing_time_factor: 1.2
compressed:
compression_ratio: 0.60
quality_preservation: 0.90
processing_time_factor: 1.5
critical:
compression_ratio: 0.75
quality_preservation: 0.85
processing_time_factor: 1.8
emergency:
compression_ratio: 0.85
quality_preservation: 0.80
processing_time_factor: 2.0
```
**Progressive Compression**: Higher compression with acceptable quality and time trade-offs
**Time Factors**: Processing time scales predictably with compression level
**Quality Preservation**: Maintains minimum quality standards at all levels
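The level table above can be read as a lookup: choose the lowest level whose compression ratio meets the required token reduction, accepting its quality trade-off. A sketch under that reading (the selection function is illustrative; the ratios and quality floors are copied from `level_targets`):

```python
# (level, compression_ratio, quality_preservation) from level_targets above.
LEVEL_TARGETS = [
    ("minimal",    0.15, 0.98),
    ("efficient",  0.40, 0.95),
    ("compressed", 0.60, 0.90),
    ("critical",   0.75, 0.85),
    ("emergency",  0.85, 0.80),
]

def select_level(required_reduction):
    # Pick the least aggressive level that achieves the needed reduction.
    for name, ratio, quality in LEVEL_TARGETS:
        if ratio >= required_reduction:
            return name, quality
    return "emergency", 0.80  # best available if nothing suffices

print(select_level(0.50))  # ('compressed', 0.9)
```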
### 5. Learning Engine Performance (`learning_performance`)
#### Core Learning Targets
```yaml
adaptation_response_time_ms: 200
pattern_detection_accuracy: 0.80
effectiveness_prediction_accuracy: 0.75
```
**Adaptation Speed**: 200ms response time for learning adaptations
**Pattern Accuracy**: 80% accurate pattern detection for reliable learning
**Prediction Accuracy**: 75% accurate effectiveness predictions
#### Learning Rate Targets
```yaml
learning_rates:
user_preference_learning: 0.85
operation_pattern_learning: 0.80
performance_optimization_learning: 0.75
error_recovery_learning: 0.90
```
**User Preferences**: 85% successful learning of user patterns
**Operation Patterns**: 80% successful operation pattern recognition
**Performance Learning**: 75% successful performance optimization
**Error Recovery**: 90% successful error pattern learning
#### Memory Efficiency
```yaml
memory_efficiency:
learning_data_compression_ratio: 0.30
memory_cleanup_efficiency: 0.90
cache_hit_ratio: 0.70
```
**Data Compression**: 30% compression of learning data for storage efficiency
**Cleanup Efficiency**: 90% effective memory cleanup operations
**Cache Performance**: 70% cache hit ratio for learning data access
### 6. Quality Gate Performance (`quality_gate_performance`)
#### Validation Speed Targets
```yaml
validation_speed_targets:
syntax_validation_ms: 50
type_analysis_ms: 100
code_quality_ms: 150
security_assessment_ms: 200
performance_analysis_ms: 250
```
**Progressive Timing**: Validation complexity increases with analysis depth
**Fast Basics**: Quick syntax and type validation for immediate feedback
**Comprehensive Analysis**: Longer time allowance for security and performance
#### Accuracy Targets
```yaml
accuracy_targets:
rule_compliance_detection: 0.95
principle_alignment_assessment: 0.90
quality_scoring_accuracy: 0.85
security_vulnerability_detection: 0.98
```
**Rule Compliance**: 95% accurate rule violation detection
**Principle Alignment**: 90% accurate principle assessment
**Quality Scoring**: 85% accurate quality assessment
**Security Detection**: 98% accurate security vulnerability detection
### 7. Task Management Performance (`task_management_performance`)
#### Delegation Efficiency Targets
```yaml
delegation_efficiency_targets:
file_based_delegation: 0.65
folder_based_delegation: 0.70
auto_delegation: 0.75
```
**Progressive Efficiency**: Auto-delegation provides highest efficiency
**File-Based**: 65% efficiency for individual file delegation
**Folder-Based**: 70% efficiency for directory-level delegation
**Auto-Delegation**: 75% efficiency through intelligent strategy selection
#### Wave Orchestration Targets
```yaml
wave_orchestration_targets:
coordination_overhead_max: 0.20
wave_synchronization_efficiency: 0.85
parallel_execution_speedup: 1.50
```
**Coordination Overhead**: Maximum 20% overhead for coordination
**Synchronization**: 85% efficient wave synchronization
**Parallel Speedup**: Minimum 1.5x speedup from parallel execution
#### Task Completion Targets
```yaml
task_completion_targets:
success_rate: 0.90
quality_score: 0.80
time_efficiency: 0.75
```
**Success Rate**: 90% successful task completion
**Quality Score**: 80% quality standard maintenance
**Time Efficiency**: 75% time efficiency compared to baseline
### 8. Mode-Specific Performance (`mode_performance`)
#### Brainstorming Mode
```yaml
brainstorming:
dialogue_response_time_ms: 300
convergence_efficiency: 0.80
brief_generation_quality: 0.85
user_satisfaction_target: 0.85
```
**Dialogue Speed**: 300ms response time for interactive dialogue
**Convergence**: 80% efficient convergence to requirements
**Brief Quality**: 85% quality in generated briefs
**User Experience**: 85% user satisfaction target
#### Task Management Mode
```yaml
task_management:
coordination_overhead_max: 0.15
delegation_efficiency: 0.70
parallel_execution_benefit: 1.40
analytics_generation_time_ms: 500
```
**Coordination Efficiency**: Maximum 15% coordination overhead
**Delegation**: 70% delegation efficiency across operations
**Parallel Benefit**: Minimum 1.4x benefit from parallel execution
**Analytics Speed**: 500ms for analytics generation
#### Token Efficiency Mode
```yaml
token_efficiency:
compression_processing_time_ms: 150
efficiency_gain_target: 0.40
quality_preservation_target: 0.95
user_acceptance_rate: 0.80
```
**Processing Speed**: 150ms compression processing time
**Efficiency Gain**: 40% token efficiency improvement
**Quality Preservation**: 95% information preservation
**User Acceptance**: 80% user acceptance of compressed content
#### Introspection Mode
```yaml
introspection:
analysis_depth_target: 0.80
insight_quality_target: 0.75
transparency_effectiveness: 0.85
learning_value_target: 0.70
```
**Analysis Depth**: 80% comprehensive analysis coverage
**Insight Quality**: 75% quality of generated insights
**Transparency**: 85% effective transparency in analysis
**Learning Value**: 70% learning value from introspection
### 9. Performance Monitoring (`performance_monitoring`)
#### Real-Time Tracking
```yaml
real_time_tracking:
enabled: true
sampling_interval_ms: 100
metric_aggregation_window_s: 60
alert_threshold_breaches: 3
```
**Monitoring Frequency**: 100ms sampling for responsive monitoring
**Aggregation Window**: 60-second windows for trend analysis
**Alert Sensitivity**: 3 threshold breaches trigger alerts
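The `alert_threshold_breaches: 3` rule can be sketched as a breach counter over recent samples. This is an assumption about the mechanics (the window handling here is simplified to the last N samples rather than a true 60-second aggregation window):

```python
from collections import deque

class BreachAlerter:
    """Hypothetical sketch: alert once a metric exceeds its threshold
    max_breaches times within the recent sample window."""
    def __init__(self, threshold, max_breaches=3, window=600):
        self.threshold = threshold
        self.max_breaches = max_breaches
        self.samples = deque(maxlen=window)  # one entry per 100ms sample

    def record(self, value):
        self.samples.append(value > self.threshold)
        return sum(self.samples) >= self.max_breaches  # True => raise alert

alerter = BreachAlerter(threshold=200)
print([alerter.record(v) for v in (150, 250, 250, 250)])
# [False, False, False, True]
```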
#### Metrics Collection
```yaml
metrics_collection:
execution_times: true
resource_utilization: true
quality_scores: true
user_satisfaction: true
error_rates: true
```
**Comprehensive Coverage**: All key performance dimensions tracked
**Quality Focus**: Quality scores and user satisfaction prioritized
**Error Tracking**: Error rates monitored for reliability
#### Alerting Configuration
```yaml
alerting:
performance_degradation: true
resource_exhaustion: true
quality_threshold_breach: true
user_satisfaction_drop: true
```
**Proactive Alerting**: Early warning for performance issues
**Resource Protection**: Alerts prevent resource exhaustion
**Quality Assurance**: Quality threshold breaches trigger immediate attention
### 10. Performance Thresholds (`performance_thresholds`)
#### Green Zone (0-70% resource usage)
```yaml
green_zone:
all_optimizations_available: true
proactive_caching: true
full_feature_set: true
normal_verbosity: true
```
**Optimal Operation**: All features and optimizations available
**Proactive Measures**: Caching and optimization enabled
**Full Functionality**: Complete feature set accessible
#### Yellow Zone (70-85% resource usage)
```yaml
yellow_zone:
efficiency_mode_activation: true
cache_optimization: true
reduced_verbosity: true
non_critical_feature_deferral: true
```
**Efficiency Focus**: Activates efficiency optimizations
**Resource Conservation**: Reduces non-essential features
**Performance Priority**: Prioritizes core functionality
#### Orange Zone (85-95% resource usage)
```yaml
orange_zone:
aggressive_optimization: true
compression_activation: true
feature_reduction: true
essential_operations_only: true
```
**Aggressive Measures**: Activates all optimization strategies
**Feature Limitation**: Reduces to essential operations only
**Compression**: Activates token efficiency for resource relief
#### Red Zone (95%+ resource usage)
```yaml
red_zone:
emergency_mode: true
maximum_compression: true
minimal_features: true
critical_operations_only: true
```
**Emergency Response**: Activates emergency resource management
**Maximum Optimization**: All optimization strategies active
**Critical Only**: Only critical operations permitted
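The four zones above partition resource usage into contiguous bands, so zone selection reduces to a threshold ladder. A minimal sketch (function name is illustrative; boundaries match the zone headings):

```python
# Map resource utilisation (%) to the operating zone described above.
def resource_zone(usage_percent):
    if usage_percent < 70:
        return "green"    # all optimizations and features available
    if usage_percent < 85:
        return "yellow"   # efficiency mode, reduced verbosity
    if usage_percent < 95:
        return "orange"   # aggressive optimization, essential ops only
    return "red"          # emergency mode, critical ops only

print([resource_zone(u) for u in (40, 72, 90, 97)])
# ['green', 'yellow', 'orange', 'red']
```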
## Performance Implications
### 1. Target Achievement Rates
#### Hook Performance Achievement
- **Session Start**: 95% operations under 50ms target
- **Pre-Tool Use**: 90% operations under 200ms target
- **Post-Tool Use**: 92% operations under 100ms target
- **Pre-Compact**: 88% operations under 150ms target
#### MCP Server Performance Achievement
- **Context7**: 85% cache hit ratio, 92% quality score achievement
- **Sequential**: 78% analysis depth achievement, 83% reasoning quality
- **Magic**: 82% component quality, 73% generation speed target
- **Morphllm**: 96% edit accuracy, 87% processing efficiency
### 2. Resource Usage Patterns
#### Memory Utilization
- **Typical Usage**: 80-120MB across all hooks and servers
- **Peak Usage**: 150-200MB during complex operations
- **Critical Threshold**: 200MB triggers resource optimization
#### CPU Utilization
- **Average Usage**: 30-50% during active operations
- **Peak Usage**: 60-80% during intensive analysis or parallel operations
- **Critical Threshold**: 80% triggers efficiency mode activation
#### Token Efficiency Impact
- **Compression Effectiveness**: 45-55% token reduction achieved
- **Quality Preservation**: 96% average information preservation
- **Processing Overhead**: 120-180ms average compression time
### 3. Learning System Performance Impact
#### Learning Overhead
- **Metrics Collection**: 2-8ms per operation overhead
- **Pattern Analysis**: 50-200ms for pattern updates
- **Adaptation Application**: 100-500ms for parameter adjustments
#### Effectiveness Improvement
- **User Preference Learning**: 12% improvement in satisfaction over 30 days
- **Operation Optimization**: 18% improvement in efficiency over time
- **Error Recovery**: 25% reduction in repeated errors through learning
## Configuration Best Practices
### 1. Production Performance Configuration
```yaml
# Conservative targets for reliability
hook_targets:
session_start:
target_ms: 75 # Slightly relaxed for stability
critical_threshold_ms: 150
system_targets:
user_satisfaction_target: 0.80 # Higher satisfaction requirement
```
### 2. Development Performance Configuration
```yaml
# Relaxed targets for development flexibility
hook_targets:
session_start:
target_ms: 100 # More relaxed for development
warning_threshold_ms: 150
performance_monitoring:
real_time_tracking:
sampling_interval_ms: 500 # Less frequent sampling
```
### 3. High-Performance Configuration
```yaml
# Aggressive targets for performance-critical environments
hook_targets:
session_start:
target_ms: 25 # Very aggressive target
optimization_priority: "critical"
performance_thresholds:
yellow_zone:
threshold: 60 # Earlier efficiency activation
```
### 4. Resource-Constrained Configuration
```yaml
# Conservative resource usage
system_targets:
memory_target_mb: 50 # Lower memory target
cpu_target_percent: 25 # Lower CPU target
performance_thresholds:
orange_zone:
threshold: 70 # Earlier aggressive optimization
```
## Troubleshooting
### Common Performance Issues
#### Hook Performance Degradation
- **Symptoms**: Hooks consistently exceeding target times
- **Analysis**: Review execution logs and identify bottlenecks
- **Solutions**: Optimize configuration loading, enable caching, reduce feature complexity
- **Monitoring**: Track performance trends and identify patterns
#### MCP Server Latency
- **Symptoms**: High response times from MCP servers
- **Diagnosis**: Check server availability, network connectivity, resource constraints
- **Optimization**: Enable caching, implement server health monitoring
- **Fallbacks**: Ensure fallback strategies are effective
#### Resource Exhaustion
- **Symptoms**: High memory or CPU usage, frequent threshold breaches
- **Immediate Response**: Activate efficiency mode, reduce feature set
- **Long-term Solutions**: Optimize resource usage, implement better cleanup
- **Prevention**: Monitor trends and adjust thresholds proactively
#### Quality vs Performance Trade-offs
- **Symptoms**: Quality targets missed due to performance constraints
- **Analysis**: Review quality-performance balance in configuration
- **Adjustment**: Find optimal balance for specific use case requirements
- **Monitoring**: Track both quality and performance metrics continuously
### Performance Optimization Strategies
#### Caching Optimization
```yaml
# Optimize caching for better performance
caching_strategy:
enable_for_operations: ["all_frequent_operations"]
cache_duration_minutes: 60 # Longer cache duration
max_cache_size_mb: 200 # Larger cache size
```
#### Resource Management Optimization
```yaml
# More aggressive resource management
performance_thresholds:
green_zone: 60 # Smaller green zone for earlier optimization
yellow_zone: 75 # Earlier efficiency activation
```
#### Learning System Optimization
```yaml
# Balance learning with performance
learning_performance:
adaptation_response_time_ms: 100 # Faster adaptations
pattern_detection_accuracy: 0.85 # Higher accuracy requirement
```
## Related Documentation
- **Hook Documentation**: See individual hook documentation for performance implementation details
- **MCP Server Performance**: Reference MCP server documentation for server-specific optimization
- **Mode Performance**: Review mode documentation for mode-specific performance characteristics
- **Monitoring Integration**: See logging configuration for performance monitoring implementation
## Version History
- **v1.0.0**: Initial performance configuration
- Comprehensive performance targets across all framework components
- Progressive threshold management with zone-based optimization
- MCP server performance standards with quality targets
- Mode-specific performance profiles and optimization strategies
- Real-time monitoring with proactive alerting
- Learning system performance integration with effectiveness tracking

# Performance Intelligence Configuration (`performance_intelligence.yaml`)
## Overview
The `performance_intelligence.yaml` file configures intelligent performance monitoring, optimization patterns, and adaptive performance management for the SuperClaude-Lite framework.
## Purpose and Role
This configuration provides:
- **Performance Pattern Recognition**: Learn from performance trends and patterns
- **Adaptive Optimization**: Automatically adjust settings based on performance data
- **Resource Intelligence**: Smart resource allocation and management
- **Predictive Performance**: Anticipate performance issues before they occur
## Key Configuration Areas
### 1. Performance Pattern Learning
- **Metric Tracking**: Track execution times, resource usage, and success rates
- **Pattern Recognition**: Identify performance patterns across operations
- **Trend Analysis**: Detect performance degradation or improvement trends
- **Correlation Analysis**: Understand relationships between different performance factors
### 2. Adaptive Optimization
- **Dynamic Thresholds**: Adjust performance targets based on system capabilities
- **Auto-Optimization**: Automatically enable optimizations when performance degrades
- **Resource Scaling**: Scale resource allocation based on demand patterns
- **Configuration Adaptation**: Modify settings to maintain performance targets
### 3. Predictive Intelligence
- **Performance Forecasting**: Predict future performance based on current trends
- **Bottleneck Prediction**: Identify potential bottlenecks before they impact users
- **Capacity Planning**: Recommend resource adjustments for optimal performance
- **Proactive Optimization**: Apply optimizations before performance issues occur
### 4. Intelligent Monitoring
- **Context-Aware Monitoring**: Monitor different metrics based on operation context
- **Anomaly Detection**: Identify unusual performance patterns
- **Health Scoring**: Generate overall system health scores
- **Performance Alerting**: Intelligent alerting based on pattern analysis
## Configuration Structure
The file includes:
- Performance learning algorithms and parameters
- Adaptive optimization triggers and thresholds
- Predictive modeling configuration
- Monitoring and alerting rules
## Integration Points
### Framework Integration
- Works with all hooks to collect performance data
- Integrates with hook coordination for optimization
- Provides input to user experience optimization
- Coordinates with resource management systems
### Learning Integration
- Feeds performance patterns to intelligence systems
- Learns from user behavior and performance preferences
- Adapts to project-specific performance characteristics
- Improves optimization strategies over time
## Usage Guidelines
This configuration controls the intelligent performance monitoring and optimization capabilities:
- **Monitoring Depth**: Balance monitoring detail with performance overhead
- **Learning Speed**: Configure how quickly the system adapts to performance changes
- **Optimization Aggressiveness**: Control how aggressively optimizations are applied
- **Prediction Accuracy**: Tune predictive models for your use patterns
## Related Documentation
- **Performance Configuration**: `performance.yaml.md` for basic performance settings
- **Intelligence Patterns**: `intelligence_patterns.yaml.md` for core learning patterns
- **Hook Coordination**: `hook_coordination.yaml.md` for performance-aware execution

# Session Configuration (`session.yaml`)
## Overview
The `session.yaml` file defines session lifecycle management and analytics configuration for the SuperClaude-Lite framework. This configuration controls session initialization, termination, project detection, intelligence activation, and comprehensive session analytics across the framework.
## Purpose and Role
The session configuration serves as:
- **Session Lifecycle Manager**: Controls initialization and termination patterns for optimal user experience
- **Project Intelligence Engine**: Automatically detects project types and activates appropriate framework features
- **Mode Activation Coordinator**: Manages intelligent activation of behavioral modes based on context
- **Analytics and Learning System**: Tracks session effectiveness and enables continuous framework improvement
- **Performance Optimizer**: Manages session-level performance targets and resource utilization
## Configuration Structure
### 1. Session Lifecycle Configuration (`session_lifecycle`)
#### Initialization Settings
```yaml
initialization:
performance_target_ms: 50
auto_project_detection: true
context_loading_strategy: "selective"
framework_exclusion_enabled: true
default_modes:
- "adaptive_intelligence"
- "performance_monitoring"
intelligence_activation:
pattern_detection: true
mcp_routing: true
learning_integration: true
compression_optimization: true
```
**Performance Target**: 50ms initialization for immediate user engagement
**Selective Loading**: Loads only necessary context for fast startup
**Framework Exclusion**: Protects framework content from modification
**Default Modes**: Activates adaptive intelligence and performance monitoring by default
#### Termination Settings
```yaml
termination:
performance_target_ms: 200
analytics_generation: true
learning_consolidation: true
session_persistence: true
cleanup_optimization: true
```
**Analytics Generation**: Creates comprehensive session analytics on termination
**Learning Consolidation**: Consolidates session learnings for future improvement
**Session Persistence**: Saves session state for potential recovery
**Cleanup Optimization**: Optimizes resource cleanup for performance
### 2. Project Type Detection (`project_detection`)
#### File Indicators
```yaml
file_indicators:
nodejs:
- "package.json"
- "node_modules/"
- "yarn.lock"
- "pnpm-lock.yaml"
python:
- "pyproject.toml"
- "setup.py"
- "requirements.txt"
- "__pycache__/"
- ".py"
rust:
- "Cargo.toml"
- "Cargo.lock"
- "src/main.rs"
- "src/lib.rs"
go:
- "go.mod"
- "go.sum"
- "main.go"
web_frontend:
- "index.html"
- "public/"
- "dist/"
- "build/"
- "src/components/"
```
**Purpose**: Automatically detects project type based on characteristic files
**Multi-Language Support**: Supports major programming languages and frameworks
**Progressive Detection**: Multiple indicators increase detection confidence
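Indicator-based detection can be sketched as scoring each project type by how many of its indicator files exist, with the highest score winning. The scoring scheme here is an assumption (the actual hook may weight indicators differently); indicator lists are abbreviated from the configuration above:

```python
from pathlib import Path

# Abbreviated from the file_indicators section above.
FILE_INDICATORS = {
    "nodejs": ["package.json", "yarn.lock", "pnpm-lock.yaml"],
    "python": ["pyproject.toml", "setup.py", "requirements.txt"],
    "rust":   ["Cargo.toml", "Cargo.lock"],
    "go":     ["go.mod", "go.sum", "main.go"],
}

def detect_project_type(root):
    """Score each type by the number of indicator paths present."""
    root = Path(root)
    scores = {
        ptype: sum((root / ind).exists() for ind in indicators)
        for ptype, indicators in FILE_INDICATORS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

More matching indicators raise a type's score, which is the "progressive detection" property noted above.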
#### Framework Detection
```yaml
framework_detection:
react:
- "react"
- "next.js"
- "@types/react"
vue:
- "vue"
- "nuxt"
- "@vue/cli"
angular:
- "@angular/core"
- "angular.json"
express:
- "express"
- "app.js"
- "server.js"
```
**Framework Intelligence**: Detects specific frameworks within project types
**Package Analysis**: Analyzes package.json and similar files for framework indicators
**Enhanced Context**: Framework detection enables specialized optimizations
### 3. Intelligence Activation Rules (`intelligence_activation`)
#### Mode Detection Patterns
```yaml
mode_detection:
brainstorming:
triggers:
- "new project"
- "not sure"
- "thinking about"
- "explore"
- "brainstorm"
confidence_threshold: 0.7
auto_activate: true
task_management:
triggers:
- "multiple files"
- "complex operation"
- "system-wide"
- "comprehensive"
file_count_threshold: 3
complexity_threshold: 0.4
auto_activate: true
token_efficiency:
triggers:
- "resource constraint"
- "brevity"
- "compressed"
- "efficient"
resource_threshold_percent: 75
conversation_length_threshold: 100
auto_activate: true
```
**Automatic Mode Activation**: Intelligent detection and activation based on user patterns
**Confidence Thresholds**: Ensures accurate mode selection
**Context-Aware**: Considers project characteristics and resource constraints
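One plausible reading of the trigger-plus-threshold rule: each matched trigger phrase raises a confidence score, and the mode activates when the score clears `confidence_threshold`. The scoring constants below are hypothetical (the real scoring lives in the hook implementation); the triggers and threshold are from the `brainstorming` entry above:

```python
BRAINSTORMING = {
    "triggers": ["new project", "not sure", "thinking about", "explore", "brainstorm"],
    "confidence_threshold": 0.7,
}

def should_activate(mode, user_input, base_confidence=0.5, boost=0.25):
    # Hypothetical scoring: each trigger hit adds `boost`, capped at 1.0.
    text = user_input.lower()
    hits = sum(t in text for t in mode["triggers"])
    confidence = min(1.0, base_confidence + boost * hits)
    return confidence >= mode["confidence_threshold"]

print(should_activate(BRAINSTORMING, "I'm not sure, let's brainstorm ideas"))  # True
print(should_activate(BRAINSTORMING, "fix this bug"))                          # False
```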
#### MCP Server Activation
```yaml
mcp_server_activation:
context7:
triggers:
- "library"
- "documentation"
- "framework"
- "api reference"
project_indicators:
- "external_dependencies"
- "framework_detected"
auto_activate: true
sequential:
triggers:
- "analyze"
- "debug"
- "complex"
- "systematic"
complexity_threshold: 0.6
auto_activate: true
magic:
triggers:
- "component"
- "ui"
- "frontend"
- "design"
project_type_match: ["web_frontend", "react", "vue", "angular"]
auto_activate: true
serena:
triggers:
- "navigate"
- "find"
- "search"
- "analyze"
file_count_min: 5
complexity_min: 0.4
auto_activate: true
```
**Intelligent Server Selection**: Automatic MCP server activation based on task requirements
**Project Context**: Server selection considers project type and characteristics
**Threshold Management**: Prevents unnecessary server activation through intelligent thresholds
### 4. Session Analytics Configuration (`session_analytics`)
#### Performance Tracking
```yaml
performance_tracking:
enabled: true
metrics:
- "operation_count"
- "tool_usage_patterns"
- "mcp_server_effectiveness"
- "error_rates"
- "completion_times"
- "resource_utilization"
```
**Comprehensive Metrics**: Tracks all key performance dimensions
**Usage Patterns**: Analyzes tool and server usage for optimization
**Error Tracking**: Monitors error rates for reliability improvement
#### Effectiveness Measurement
```yaml
effectiveness_measurement:
enabled: true
factors:
productivity: "weight: 0.4"
quality: "weight: 0.3"
user_satisfaction: "weight: 0.2"
learning_value: "weight: 0.1"
```
**Weighted Effectiveness**: Balanced assessment across multiple factors
**Productivity Focus**: Highest weight on productivity outcomes
**Quality Assurance**: Significant weight on quality maintenance
**User Experience**: Important consideration for user satisfaction
**Learning Value**: Tracks framework learning and improvement
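The weighted factors above imply a straightforward weighted sum, with each factor scored in [0, 1]. A sketch (the combining function is illustrative; the weights are the configured values):

```python
# Weights from the effectiveness_measurement factors above.
WEIGHTS = {"productivity": 0.4, "quality": 0.3,
           "user_satisfaction": 0.2, "learning_value": 0.1}

def session_effectiveness(scores):
    """Weighted sum of per-factor scores; missing factors count as 0."""
    return sum(WEIGHTS[f] * scores.get(f, 0.0) for f in WEIGHTS)

score = session_effectiveness({
    "productivity": 0.8, "quality": 0.9,
    "user_satisfaction": 0.7, "learning_value": 0.6,
})
print(round(score, 2))  # 0.79
```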
#### Learning Consolidation
```yaml
learning_consolidation:
enabled: true
pattern_detection: true
adaptation_creation: true
effectiveness_feedback: true
insight_generation: true
```
**Pattern Learning**: Identifies successful patterns for replication
**Adaptive Improvement**: Creates adaptations based on session outcomes
**Feedback Integration**: Incorporates effectiveness feedback into learning
**Insight Generation**: Generates actionable insights for framework improvement
### 5. Session Persistence (`session_persistence`)
#### Storage Strategy
```yaml
enabled: true
storage_strategy: "intelligent_compression"
retention_policy:
session_data_days: 90
analytics_data_days: 365
learning_data_persistent: true
compression_settings:
session_metadata: "efficient" # 40-70% compression
analytics_data: "compressed" # 70-85% compression
learning_data: "minimal" # Preserve learning quality
```
**Intelligent Compression**: Applies appropriate compression based on data type
**Retention Management**: Balances storage with analytical value
**Learning Preservation**: Maintains high fidelity for learning data
#### Cleanup Automation
```yaml
cleanup_automation:
enabled: true
old_session_cleanup: true
max_sessions_retained: 50
storage_optimization: true
```
**Automatic Cleanup**: Prevents storage bloat through automated cleanup
**Session Limits**: Maintains reasonable number of retained sessions
**Storage Optimization**: Continuously optimizes storage usage
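The `max_sessions_retained` limit above can be enforced with a simple oldest-first pruning pass. This is a sketch under the assumption that sessions are stored one file each in a flat directory; the framework's real storage layout may differ:

```python
import os

def cleanup_sessions(session_dir: str, max_retained: int = 50) -> list:
    """Delete the oldest session files beyond the retention limit.

    Mirrors max_sessions_retained above and returns the deleted paths.
    Assumes one file per session in session_dir (illustrative layout).
    """
    files = sorted(
        (os.path.join(session_dir, name) for name in os.listdir(session_dir)),
        key=os.path.getmtime,  # oldest first
    )
    excess = files[:-max_retained] if len(files) > max_retained else []
    for path in excess:
        os.remove(path)
    return excess
```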
### 6. Notification Processing (`notifications`)
#### Core Notification Settings
```yaml
enabled: true
just_in_time_loading: true
pattern_updates: true
intelligence_updates: true
priority_handling:
critical: "immediate_processing"
high: "fast_track_processing"
medium: "standard_processing"
low: "background_processing"
```
**Just-in-Time Loading**: Loads documentation and patterns as needed
**Priority Processing**: Handles notifications based on priority levels
**Intelligence Updates**: Updates framework intelligence based on new patterns
#### Caching Strategy
```yaml
caching_strategy:
documentation_cache_minutes: 30
pattern_cache_minutes: 60
intelligence_cache_minutes: 15
```
**Documentation Caching**: 30-minute cache for documentation lookup
**Pattern Caching**: 60-minute cache for pattern recognition
**Intelligence Caching**: 15-minute cache for intelligence updates
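The three cache lifetimes above can be modeled as per-category TTL caches. A minimal sketch, assuming simple lazy expiry on read (the framework's actual cache implementation is not documented here):

```python
import time

class TTLCache:
    """Minimal TTL cache; lifetimes in minutes, as in caching_strategy."""

    def __init__(self, ttl_minutes: float):
        self.ttl = ttl_minutes * 60
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, time.monotonic())

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:  # lazy expiry
            del self._store[key]
            return default
        return value

# One cache per category, with the documented lifetimes
documentation_cache = TTLCache(30)
pattern_cache = TTLCache(60)
intelligence_cache = TTLCache(15)
```

`time.monotonic()` is used rather than wall-clock time so that system clock adjustments cannot prematurely expire or resurrect entries.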
### 7. Task Management Integration (`task_management`)
#### Delegation Strategies
```yaml
enabled: true
delegation_strategies:
files: "file_based_delegation"
folders: "directory_based_delegation"
auto: "intelligent_auto_detection"
wave_orchestration:
enabled: true
complexity_threshold: 0.4
file_count_threshold: 3
operation_types_threshold: 2
```
**Multi-Strategy Support**: Supports file, folder, and auto-delegation strategies
**Wave Orchestration**: Enables complex multi-step operation coordination
**Intelligent Thresholds**: Activates advanced features based on operation complexity
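The three `wave_orchestration` thresholds above translate into a small activation predicate. Requiring all three to be met is an assumption for this sketch; the actual hook may combine them differently (any-of, or weighted):

```python
def should_activate_waves(complexity: float, file_count: int,
                          operation_types: int) -> bool:
    """Apply the wave_orchestration thresholds from the configuration.

    All-of combination is assumed, not documented framework behavior.
    """
    return (
        complexity >= 0.4          # complexity_threshold
        and file_count >= 3        # file_count_threshold
        and operation_types >= 2   # operation_types_threshold
    )
```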
#### Performance Optimization
```yaml
performance_optimization:
parallel_execution: true
resource_management: true
coordination_efficiency: true
```
**Parallel Processing**: Enables parallel execution for performance
**Resource Management**: Optimizes resource allocation across tasks
**Coordination**: Efficient coordination of multiple operations
### 8. User Experience Configuration (`user_experience`)
#### Session Feedback
```yaml
session_feedback:
enabled: true
satisfaction_tracking: true
improvement_suggestions: true
```
**Satisfaction Tracking**: Monitors user satisfaction throughout session
**Improvement Suggestions**: Provides suggestions for enhanced experience
#### Personalization
```yaml
personalization:
enabled: true
preference_learning: true
adaptation_application: true
context_awareness: true
```
**Preference Learning**: Learns user preferences over time
**Adaptive Application**: Applies learned preferences to improve experience
**Context Awareness**: Considers context in personalization decisions
#### Progressive Enhancement
```yaml
progressive_enhancement:
enabled: true
capability_discovery: true
feature_introduction: true
learning_curve_optimization: true
```
**Capability Discovery**: Gradually discovers and introduces new capabilities
**Feature Introduction**: Introduces features at appropriate times
**Learning Curve**: Optimizes learning curve for user adoption
### 9. Performance Targets (`performance_targets`)
#### Session Performance
```yaml
session_start_ms: 50
session_stop_ms: 200
context_loading_ms: 500
analytics_generation_ms: 1000
```
**Fast Startup**: 50ms session start for immediate engagement
**Efficient Termination**: 200ms session stop with analytics
**Context Loading**: 500ms context loading for comprehensive initialization
**Analytics**: 1000ms analytics generation for comprehensive insights
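A monitoring layer can flag phases that miss these targets with a simple comparison pass. The phase names mirror the `performance_targets` keys above; the function itself is a sketch, not the framework's actual monitoring code:

```python
# Targets mirror the session performance configuration above (milliseconds).
PERFORMANCE_TARGETS_MS = {
    "session_start": 50,
    "session_stop": 200,
    "context_loading": 500,
    "analytics_generation": 1000,
}

def target_violations(measured_ms: dict) -> dict:
    """Return phases whose measured duration exceeds the target.

    Unmeasured phases are treated as 0 ms (i.e. passing); illustrative only.
    """
    return {
        phase: measured_ms[phase]
        for phase, target in PERFORMANCE_TARGETS_MS.items()
        if measured_ms.get(phase, 0) > target
    }
```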
#### Efficiency Targets
```yaml
efficiency_targets:
productivity_score: 0.7
quality_score: 0.8
satisfaction_score: 0.7
learning_value: 0.6
```
**Productivity**: 70% productivity score target
**Quality**: 80% quality score maintenance
**Satisfaction**: 70% user satisfaction target
**Learning**: 60% learning value extraction
#### Resource Utilization
```yaml
resource_utilization:
memory_efficient: true
cpu_optimization: true
token_management: true
storage_optimization: true
```
**Comprehensive Optimization**: Optimizes all resource dimensions
**Token Management**: Intelligent token usage optimization
**Storage Efficiency**: Efficient storage utilization and cleanup
### 10. Error Handling and Recovery (`error_handling`)
#### Core Error Handling
```yaml
graceful_degradation: true
fallback_strategies: true
error_learning: true
recovery_optimization: true
```
**Graceful Degradation**: Maintains functionality during errors
**Fallback Strategies**: Multiple fallback options for resilience
**Error Learning**: Learns from errors to prevent recurrence
#### Session Recovery
```yaml
session_recovery:
auto_recovery: true
state_preservation: true
context_restoration: true
learning_retention: true
```
**Automatic Recovery**: Attempts automatic recovery from errors
**State Preservation**: Preserves session state during recovery
**Context Restoration**: Restores context after recovery
**Learning Retention**: Maintains learning data through recovery
#### Error Pattern Detection
```yaml
error_patterns:
detection: true
prevention: true
learning_integration: true
adaptation_triggers: true
```
**Pattern Detection**: Identifies recurring error patterns
**Prevention**: Implements prevention strategies for known patterns
**Learning Integration**: Integrates error learning with overall framework learning
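At its simplest, recurring-error detection is frequency counting over a log of classified errors. A sketch under the assumption that errors carry a `type` field and that three occurrences constitute a pattern (neither the field name nor the threshold is documented here):

```python
from collections import Counter

def detect_error_patterns(error_log: list, threshold: int = 3) -> set:
    """Flag error types that recur at least `threshold` times.

    A minimal sketch of the error_patterns detection idea; the real
    hook's pattern model and threshold are not documented.
    """
    counts = Counter(err["type"] for err in error_log)
    return {etype for etype, n in counts.items() if n >= threshold}
```

Flagged types would then feed the prevention and learning-integration steps described above.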
## Integration Points
### 1. Hook Integration (`integration`)
#### MCP Server Coordination
```yaml
mcp_servers:
coordination: "seamless"
fallback_handling: "automatic"
performance_monitoring: "continuous"
```
**Seamless Coordination**: Smooth integration across all MCP servers
**Automatic Fallbacks**: Automatic fallback handling for server issues
**Continuous Monitoring**: Real-time performance monitoring
#### Learning Engine Integration
```yaml
learning_engine:
session_learning: true
pattern_recognition: true
effectiveness_tracking: true
adaptation_application: true
```
**Session Learning**: Comprehensive learning from session patterns
**Pattern Recognition**: Identifies successful session patterns
**Effectiveness Tracking**: Tracks session effectiveness over time
**Adaptation**: Applies learned patterns to improve future sessions
#### Quality Gates Integration
```yaml
quality_gates:
session_validation: true
analytics_verification: true
learning_quality_assurance: true
```
**Session Validation**: Validates session outcomes against quality standards
**Analytics Verification**: Ensures analytics accuracy and completeness
**Learning QA**: Quality assurance for learning data and insights
### 2. Development Support (`development_support`)
```yaml
session_debugging: true
performance_profiling: true
analytics_validation: true
learning_verification: true
metrics_collection:
detailed_timing: true
resource_tracking: true
effectiveness_measurement: true
quality_assessment: true
```
**Debugging Support**: Enhanced debugging capabilities for development
**Performance Profiling**: Detailed performance analysis tools
**Metrics Collection**: Comprehensive metrics for analysis and optimization
## Performance Implications
### 1. Session Lifecycle Performance
#### Initialization Impact
- **Startup Time**: 45-55ms typical session initialization
- **Context Loading**: 400-600ms for selective context loading
- **Memory Usage**: 50-100MB initial memory allocation
- **CPU Usage**: 20-40% during initialization
#### Termination Impact
- **Analytics Generation**: 800ms-1.2s for comprehensive analytics
- **Learning Consolidation**: 200-500ms for learning data processing
- **Cleanup Operations**: 100-300ms for resource cleanup
- **Storage Operations**: 50-200ms for session persistence
### 2. Project Detection Performance
#### Detection Speed
- **File System Scanning**: 10-50ms for project type detection
- **Framework Analysis**: 20-100ms for framework detection
- **Dependency Analysis**: 50-200ms for dependency graph analysis
- **Total Detection**: 100-400ms for complete project analysis
#### Memory Impact
- **Detection Data**: 10-50KB for project detection information
- **Framework Metadata**: 20-100KB for framework-specific data
- **Dependency Cache**: 100KB-1MB for dependency information
### 3. Analytics and Learning Performance
#### Analytics Generation
- **Metrics Collection**: 50-200ms for comprehensive metrics gathering
- **Effectiveness Calculation**: 100-500ms for effectiveness analysis
- **Pattern Analysis**: 200ms-1s for pattern recognition
- **Insight Generation**: 300ms-2s for actionable insights
#### Learning System Impact
- **Pattern Learning**: 100-500ms for pattern updates
- **Adaptation Creation**: 200ms-1s for adaptation generation
- **Effectiveness Feedback**: 50-200ms for feedback integration
- **Storage Updates**: 100-400ms for learning data persistence
## Configuration Best Practices
### 1. Production Session Configuration
```yaml
# Optimize for reliability and performance
session_lifecycle:
initialization:
performance_target_ms: 75 # Slightly relaxed for stability
framework_exclusion_enabled: true # Always protect framework
session_analytics:
performance_tracking:
enabled: true # Essential for production monitoring
session_persistence:
retention_policy:
session_data_days: 30 # Shorter retention for production
analytics_data_days: 180 # Sufficient for trend analysis
```
### 2. Development Session Configuration
```yaml
# Enhanced debugging and learning
development_support:
session_debugging: true
performance_profiling: true
detailed_timing: true
session_analytics:
learning_consolidation:
effectiveness_feedback: true
adaptation_creation: true # Enable aggressive learning
```
### 3. Performance-Optimized Configuration
```yaml
# Minimize overhead for performance-critical environments
session_lifecycle:
initialization:
performance_target_ms: 25 # Aggressive target
context_loading_strategy: "minimal" # Minimal context loading
session_analytics:
performance_tracking:
metrics: ["operation_count", "completion_times"] # Essential metrics only
```
### 4. Learning-Optimized Configuration
```yaml
# Maximum learning and adaptation
session_analytics:
learning_consolidation:
enabled: true
pattern_detection: true
adaptation_creation: true
insight_generation: true
user_experience:
personalization:
preference_learning: true
adaptation_application: true
```
## Troubleshooting
### Common Session Issues
#### Slow Session Initialization
- **Symptoms**: Session startup exceeds 50ms target consistently
- **Analysis**: Check project detection performance, context loading strategy
- **Solutions**: Optimize project detection patterns, reduce initial context loading
- **Monitoring**: Track initialization components and identify bottlenecks
#### Project Detection Failures
- **Symptoms**: Incorrect project type detection or missing framework detection
- **Diagnosis**: Review project indicators and framework patterns
- **Resolution**: Add missing patterns, adjust detection confidence thresholds
- **Validation**: Test detection with various project structures
#### Analytics Generation Issues
- **Symptoms**: Slow or incomplete analytics generation at session end
- **Investigation**: Check metrics collection performance and data completeness
- **Optimization**: Reduce analytics complexity, optimize metrics calculation
- **Quality**: Ensure analytics accuracy while maintaining performance
#### Learning System Problems
- **Symptoms**: No learning observed, ineffective adaptations
- **Analysis**: Review learning data collection and pattern recognition
- **Enhancement**: Adjust learning thresholds, improve pattern detection
- **Validation**: Test learning effectiveness with controlled scenarios
### Performance Troubleshooting
#### Memory Usage Issues
- **Monitoring**: Track session memory usage patterns and growth
- **Optimization**: Optimize context loading, implement better cleanup
- **Limits**: Set appropriate memory limits and cleanup triggers
- **Analysis**: Profile memory usage during different session phases
#### CPU Usage Problems
- **Identification**: Monitor CPU usage during session operations
- **Optimization**: Optimize project detection, reduce analytics complexity
- **Balancing**: Balance functionality with CPU usage requirements
- **Profiling**: Use profiling tools to identify CPU bottlenecks
#### Storage and Persistence Issues
- **Management**: Monitor storage usage and cleanup effectiveness
- **Optimization**: Optimize compression settings, adjust retention policies
- **Maintenance**: Implement regular cleanup and optimization routines
- **Analysis**: Track storage growth patterns and optimize accordingly
## Related Documentation
- **Session Lifecycle**: See SESSION_LIFECYCLE.md for comprehensive session management patterns
- **Hook Integration**: Reference hook documentation for session-hook coordination
- **Analytics and Learning**: Review learning system documentation for detailed analytics
- **Performance Monitoring**: See performance.yaml.md for performance targets and monitoring
## Version History
- **v1.0.0**: Initial session configuration
  - Comprehensive session lifecycle management with 50ms initialization target
  - Multi-language project detection with framework intelligence
  - Automatic mode and MCP server activation based on context
  - Session analytics with effectiveness measurement and learning consolidation
  - User experience optimization with personalization and progressive enhancement
  - Error handling and recovery with pattern detection and prevention

# Hook Settings Configuration (`settings.json`)
## Overview
The `settings.json` file defines the Claude Code hook configuration for the SuperClaude-Lite framework. It registers all framework hooks with Claude Code and specifies their execution parameters.
## Purpose and Role
This configuration provides:
- **Hook Registration**: Registers all 7 SuperClaude hooks with Claude Code
- **Execution Configuration**: Defines command paths, timeouts, and execution patterns
- **Universal Matching**: Applies hooks to all operations through `"matcher": "*"`
- **Timeout Management**: Establishes execution time limits for each hook
## Configuration Structure
### Basic Pattern
```json
{
"hooks": {
"HookName": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/script.py",
"timeout": 15
}
]
}
]
}
}
```
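Each registered command receives a JSON payload on stdin and writes a JSON response to stdout. A minimal hook skeleton might look like the following; the payload and response field names are illustrative assumptions, since the exact schema is defined by Claude Code:

```python
#!/usr/bin/env python3
"""Minimal hook skeleton. Payload/response fields are illustrative
assumptions; consult the Claude Code hook documentation for the schema."""
import json
import sys

def process(payload: dict) -> dict:
    # Hook-specific logic goes here; echo a status for this sketch.
    return {"status": "ok", "received_keys": sorted(payload)}

def main() -> None:
    """Entry point: read the JSON payload from stdin, reply on stdout."""
    payload = json.load(sys.stdin)
    json.dump(process(payload), sys.stdout)
```

An installed hook script would end with the standard `if __name__ == "__main__": main()` guard so Claude Code can invoke it directly.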
### Hook Definitions
The actual configuration registers these hooks:
#### SessionStart Hook
```json
"SessionStart": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/session_start.py",
"timeout": 10
}
]
}
]
```
**Purpose**: Initialize sessions and detect project context
**Timeout**: 10 seconds for session initialization
**Execution**: Runs at the start of every Claude Code session
#### PreToolUse Hook
```json
"PreToolUse": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/pre_tool_use.py",
"timeout": 15
}
]
}
]
```
**Purpose**: Pre-process tool usage and provide intelligent routing
**Timeout**: 15 seconds for analysis and routing decisions
**Execution**: Runs before every tool use operation
#### PostToolUse Hook
```json
"PostToolUse": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/post_tool_use.py",
"timeout": 10
}
]
}
]
```
**Purpose**: Post-process tool results and apply quality gates
**Timeout**: 10 seconds for result analysis and validation
**Execution**: Runs after every tool use operation
#### PreCompact Hook
```json
"PreCompact": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/pre_compact.py",
"timeout": 15
}
]
}
]
```
**Purpose**: Apply intelligent compression before context compaction
**Timeout**: 15 seconds for compression analysis and application
**Execution**: Runs before Claude Code compacts conversation context
#### Notification Hook
```json
"Notification": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/notification.py",
"timeout": 10
}
]
}
]
```
**Purpose**: Handle notifications and update learning patterns
**Timeout**: 10 seconds for notification processing
**Execution**: Runs when Claude Code sends notifications
#### Stop Hook
```json
"Stop": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/stop.py",
"timeout": 15
}
]
}
]
```
**Purpose**: Session cleanup and analytics generation
**Timeout**: 15 seconds for cleanup and analysis
**Execution**: Runs when Claude Code session ends
#### SubagentStop Hook
```json
"SubagentStop": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/subagent_stop.py",
"timeout": 15
}
]
}
]
```
**Purpose**: Subagent coordination and task management analytics
**Timeout**: 15 seconds for subagent cleanup
**Execution**: Runs when Claude Code subagent sessions end
## Key Configuration Elements
### Universal Matcher
- **Pattern**: `"matcher": "*"`
- **Effect**: All hooks apply to every operation
- **Purpose**: Ensures consistent framework behavior across all interactions
### Command Type
- **Type**: `"command"`
- **Execution**: Runs external Python scripts
- **Environment**: Uses system Python 3 installation
### File Paths
- **Location**: `~/.claude/hooks/`
- **Naming**: Matches hook names in snake_case (e.g., `session_start.py`)
- **Permissions**: Scripts must be executable
### Timeout Values
- **SessionStart**: 10 seconds (session initialization)
- **PreToolUse**: 15 seconds (analysis and routing)
- **PostToolUse**: 10 seconds (result processing)
- **PreCompact**: 15 seconds (compression)
- **Notification**: 10 seconds (notification handling)
- **Stop**: 15 seconds (cleanup and analytics)
- **SubagentStop**: 15 seconds (subagent coordination)
## Installation Requirements
### File Installation
The framework installation process must:
1. Copy Python hook scripts to `~/.claude/hooks/`
2. Set executable permissions on all hook scripts
3. Install this `settings.json` file for Claude Code to read
4. Verify Python 3 is available in the system PATH
### Dependencies
- Python 3.7+ installation
- Required Python packages (see hook implementations)
- Read/write access to `~/.claude/hooks/` directory
- Network access for MCP server communication (if used)
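The installation steps above can be spot-checked programmatically. A verification sketch, using the seven script names from the hook definitions documented earlier (the check itself is illustrative, not part of the installer):

```python
import os

# Script names from the settings.json hook commands documented above.
HOOK_SCRIPTS = [
    "session_start.py", "pre_tool_use.py", "post_tool_use.py",
    "pre_compact.py", "notification.py", "stop.py", "subagent_stop.py",
]

def missing_hooks(hooks_dir: str) -> list:
    """Return hook scripts that are absent or not executable."""
    problems = []
    for name in HOOK_SCRIPTS:
        path = os.path.join(hooks_dir, name)
        if not (os.path.isfile(path) and os.access(path, os.X_OK)):
            problems.append(name)
    return problems
```

Running `missing_hooks(os.path.expanduser("~/.claude/hooks"))` after installation should return an empty list.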
## Troubleshooting
### Hook Not Executing
- **Check file paths**: Verify scripts exist at specified locations
- **Check permissions**: Ensure scripts are executable
- **Check Python**: Verify Python 3 is available in PATH
- **Check timeouts**: Increase timeout if hooks are timing out
### Performance Issues
- **Timeout Tuning**: Adjust timeout values for your system performance
- **Hook Optimization**: Review hook configuration files for performance settings
- **Parallel Execution**: Some hooks can be optimized for parallel execution
### Path Issues
- **Absolute Paths**: Use absolute paths if relative paths cause issues
- **User Directory**: Ensure `~/.claude/hooks/` expands correctly in your environment
- **File Permissions**: Verify both read and execute permissions on hook files
## Related Documentation
- **Hook Implementation**: Individual hook Python files for specific behavior
- **Configuration Files**: YAML configuration files for hook behavior tuning
- **Installation Guide**: Framework installation and setup documentation

# SuperClaude Master Configuration (`superclaude-config.json`)
## Overview
The `superclaude-config.json` file serves as the master configuration file for the SuperClaude-Lite framework. This comprehensive JSON configuration controls all aspects of hook execution, MCP server integration, mode coordination, and quality gates within the framework.
## Purpose and Role
The master configuration file acts as the central control system for:
- **Hook Configuration Management**: Defines behavior and settings for all 7 framework hooks
- **MCP Server Integration**: Coordinates intelligent routing and fallback strategies across servers
- **Mode Orchestration**: Manages behavioral mode activation and coordination patterns
- **Quality Gate Enforcement**: Implements the 8-step validation cycle throughout operations
- **Performance Monitoring**: Establishes targets and thresholds for optimization
- **Learning System Integration**: Enables cross-hook learning and adaptation
## File Structure and Organization
### 1. Framework Metadata
```json
{
"superclaude": {
"description": "SuperClaude-Lite Framework Configuration",
"version": "1.0.0",
"framework": "superclaude-lite",
"enabled": true
}
}
```
**Purpose**: Identifies framework version and overall enablement status.
### 2. Hook Configurations (`hook_configurations`)
The master configuration defines settings for all 7 SuperClaude hooks:
#### Session Start Hook
- **Performance Target**: 50ms initialization
- **Features**: Smart project context loading, automatic mode detection, MCP intelligence routing
- **Configuration**: Auto-detection, framework exclusion, intelligence activation
- **Error Handling**: Graceful fallback with context preservation
#### Pre-Tool Use Hook
- **Performance Target**: 200ms routing decision
- **Features**: Intelligent tool routing, MCP server selection, real-time adaptation
- **Integration**: All 6 MCP servers with quality gates and learning engine
- **Configuration**: Pattern detection, learning adaptations, fallback strategies
#### Post-Tool Use Hook
- **Performance Target**: 100ms validation
- **Features**: Quality validation, rules compliance, effectiveness measurement
- **Validation Levels**: Basic → Standard → Comprehensive → Production
- **Configuration**: Rules validation, principles alignment, learning integration
#### Pre-Compact Hook
- **Performance Target**: 150ms compression decision
- **Features**: Intelligent compression strategy selection, selective content preservation
- **Compression Levels**: Minimal (0-40%) → Emergency (95%+)
- **Configuration**: Framework protection, quality preservation target (95%)
#### Notification Hook
- **Performance Target**: 100ms processing
- **Features**: Just-in-time documentation loading, dynamic pattern updates
- **Caching**: Documentation (30min), patterns (60min), intelligence (15min)
- **Configuration**: Real-time learning, performance optimization through caching
#### Stop Hook
- **Performance Target**: 200ms analytics generation
- **Features**: Comprehensive session analytics, learning consolidation
- **Analytics**: Performance metrics, effectiveness measurement, optimization recommendations
- **Configuration**: Session persistence, performance tracking, recommendation generation
#### Subagent Stop Hook
- **Performance Target**: 150ms coordination analytics
- **Features**: Subagent performance analytics, delegation effectiveness measurement
- **Task Management**: Wave orchestration, parallel coordination, performance optimization
- **Configuration**: Delegation analytics, coordination measurement, learning integration
### 3. Global Configuration (`global_configuration`)
#### Framework Integration
- **SuperClaude Compliance**: Ensures adherence to framework standards
- **YAML-Driven Logic**: Hot-reload configuration capability
- **Cross-Hook Coordination**: Enables hooks to share context and learnings
#### Performance Monitoring
- **Real-Time Tracking**: Continuous performance measurement
- **Target Enforcement**: Automatic optimization when targets are missed
- **Analytics**: Performance trend analysis and optimization suggestions
#### Learning System
- **Cross-Hook Learning**: Shared knowledge across hook executions
- **Adaptation Application**: Real-time improvement based on effectiveness
- **Pattern Recognition**: Identifies successful operational patterns
#### Security
- **Input Validation**: Protects against malicious input
- **Path Traversal Protection**: Prevents unauthorized file access
- **Resource Limits**: Prevents resource exhaustion attacks
### 4. MCP Server Integration (`mcp_server_integration`)
Defines integration patterns for all 6 MCP servers:
#### Server Definitions
- **Context7**: Library documentation and framework patterns (standard profile)
- **Sequential**: Multi-step reasoning and complex analysis (intensive profile)
- **Magic**: UI component generation and design systems (standard profile)
- **Playwright**: Browser automation and testing (intensive profile)
- **Morphllm**: Intelligent editing with fast apply (lightweight profile)
- **Serena**: Semantic analysis and memory management (standard profile)
#### Coordination Settings
- **Intelligent Routing**: Automatic server selection based on task requirements
- **Fallback Strategies**: Graceful degradation when servers are unavailable
- **Performance Optimization**: Load balancing and resource management
- **Learning Adaptation**: Real-time improvement of routing decisions
### 5. Mode Integration (`mode_integration`)
#### Supported Modes
- **Brainstorming**: Interactive requirements discovery (sequential, context7)
- **Task Management**: Multi-layer task orchestration (serena, morphllm)
- **Token Efficiency**: Intelligent token optimization (morphllm)
- **Introspection**: Meta-cognitive analysis (sequential)
#### Mode-Hook Coordination
Each mode specifies which hooks it integrates with and which MCP servers it prefers.
### 6. Quality Gates (`quality_gates`)
Implements the 8-step validation cycle:
1. **Syntax Validation**: Language-specific syntax checking
2. **Type Analysis**: Type compatibility and inference
3. **Code Quality**: Linting rules and quality standards
4. **Security Assessment**: Vulnerability scanning and OWASP compliance
5. **Testing Validation**: Test coverage and quality assurance
6. **Performance Analysis**: Performance benchmarking and optimization
7. **Documentation Verification**: Documentation completeness and accuracy
8. **Integration Testing**: End-to-end validation and deployment readiness
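The cycle above is an ordered pipeline of checks. A sketch of how such a cycle might be driven, assuming fail-fast ordering (which is an assumption about the cycle, not documented framework behavior) and pluggable per-gate predicates:

```python
# Gate names follow the 8-step validation cycle above, in order.
QUALITY_GATES = [
    "syntax_validation", "type_analysis", "code_quality",
    "security_assessment", "testing_validation", "performance_analysis",
    "documentation_verification", "integration_testing",
]

def run_gates(result: dict, checks: dict) -> dict:
    """Run the gates in order, stopping at the first failure.

    `checks` maps gate name -> predicate over the tool result; gates
    without a registered predicate are skipped. Illustrative only.
    """
    for gate in QUALITY_GATES:
        check = checks.get(gate)
        if check is not None and not check(result):
            return {"passed": False, "failed_at": gate}
    return {"passed": True, "failed_at": None}
```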
#### Hook Integration
- **Pre-Tool Use**: Steps 1-2 (validation preparation)
- **Post-Tool Use**: Steps 3-5 (comprehensive validation)
- **Stop**: Steps 6-8 (final validation and analytics)
### 7. Cache Configuration (`cache_configuration`)
#### Cache Settings
- **Cache Directory**: `./cache` for all cached data
- **Retention Policies**: Learning data (90 days), session data (30 days), performance data (365 days)
- **Automatic Cleanup**: Prevents cache bloat through scheduled cleanup
### 8. Logging Configuration (`logging_configuration`)
#### Logging Levels
- **Log Level**: INFO (configurable: ERROR, WARNING, INFO, DEBUG)
- **Specialized Logging**: Performance, error, learning, and hook execution logging
- **Privacy**: Sanitizes user content while preserving correlation data
### 9. Development Support (`development_support`)
#### Development Features
- **Debugging**: Optional debugging mode (disabled by default)
- **Performance Profiling**: Optional profiling capabilities
- **Verbose Logging**: Enhanced logging for development
- **Test Mode**: Specialized testing configuration
## Key Configuration Sections
### Performance Targets
Each hook has specific performance targets:
- **Session Start**: 50ms (critical priority)
- **Pre-Tool Use**: 200ms (high priority)
- **Post-Tool Use**: 100ms (medium priority)
- **Pre-Compact**: 150ms (high priority)
- **Notification**: 100ms (medium priority)
- **Stop**: 200ms (low priority)
- **Subagent Stop**: 150ms (medium priority)
### Default Values and Meanings
#### Hook Enablement
All hooks are enabled by default (`"enabled": true`) to provide full framework functionality.
#### Performance Monitoring
Real-time tracking is enabled with target enforcement and optimization suggestions.
#### Learning System
Cross-hook learning is enabled to continuously improve framework effectiveness.
#### Security Settings
All security features are enabled by default for production-ready security.
## Integration with Hooks
### Configuration Loading
Hooks load configuration through the shared YAML loader system, enabling:
- **Hot Reload**: Configuration changes without restart
- **Environment-Specific**: Different configs for development/production
- **Validation**: Configuration validation before application
### Cross-Hook Communication
The configuration enables hooks to:
- **Share Context**: Pass relevant information between hooks
- **Coordinate Actions**: Avoid conflicts through intelligent coordination
- **Learn Together**: Share effectiveness insights across hook executions
## Performance Implications
### Memory Usage
- **Configuration Size**: ~50KB typical configuration
- **Cache Impact**: Up to 100MB cache with automatic cleanup
- **Learning Data**: Persistent learning data with compression
### Processing Overhead
- **Configuration Loading**: <10ms initial load
- **Validation**: <5ms per configuration access
- **Hot Reload**: <50ms configuration refresh
### Network Impact
- **MCP Coordination**: Intelligent caching reduces network calls
- **Documentation Loading**: Just-in-time loading minimizes bandwidth usage
## Configuration Best Practices
### 1. Performance Tuning
```json
{
"hook_configurations": {
"session_start": {
"performance_target_ms": 50,
"configuration": {
"auto_project_detection": true,
"performance_monitoring": true
}
}
}
}
```
**Recommendation**: Keep performance targets aggressive but achievable for your environment.
### 2. Security Hardening
```json
{
"global_configuration": {
"security": {
"input_validation": true,
"path_traversal_protection": true,
"timeout_protection": true,
"resource_limits": true
}
}
}
```
**Recommendation**: Never disable security features in production environments.
### 3. Learning Optimization
```json
{
"global_configuration": {
"learning_system": {
"enabled": true,
"cross_hook_learning": true,
"effectiveness_tracking": true,
"pattern_recognition": true
}
}
}
```
**Recommendation**: Enable learning system for continuous improvement, but monitor resource usage.
### 4. Mode Configuration
```json
{
"mode_integration": {
"enabled": true,
"modes": {
"token_efficiency": {
"hooks": ["pre_compact", "session_start"],
"mcp_servers": ["morphllm"]
}
}
}
}
```
**Recommendation**: Configure modes based on your primary use cases and available MCP servers.
### 5. Cache Management
```json
{
"cache_configuration": {
"learning_data_retention_days": 90,
"session_data_retention_days": 30,
"automatic_cleanup": true
}
}
```
**Recommendation**: Balance retention periods with storage requirements and privacy needs.
## Troubleshooting
### Common Configuration Issues
#### Performance Degradation
- **Symptoms**: Hooks exceeding performance targets
- **Solutions**: Adjust performance targets, enable caching, reduce feature complexity
- **Monitoring**: Check `performance_monitoring` settings
#### MCP Server Failures
- **Symptoms**: Routing failures, fallback activation
- **Solutions**: Verify MCP server availability, check fallback strategies
- **Configuration**: Review `mcp_server_integration` settings
#### Learning System Issues
- **Symptoms**: No adaptation observed, effectiveness not improving
- **Solutions**: Check learning data retention, verify effectiveness tracking
- **Debug**: Enable verbose learning logging
#### Memory Usage Issues
- **Symptoms**: High memory consumption, cache bloat
- **Solutions**: Reduce cache retention periods, enable automatic cleanup
- **Monitoring**: Review cache configuration and usage patterns
### Configuration Validation
The framework validates configuration on startup:
- **Schema Validation**: Ensures proper JSON structure
- **Value Validation**: Checks ranges and dependencies
- **Integration Validation**: Verifies hook and MCP server consistency
- **Security Validation**: Ensures security settings are appropriate
## Related Documentation
- **Hook Implementation**: See individual hook documentation in `/docs/Hooks/`
- **MCP Integration**: Reference MCP server documentation for specific server configurations
- **Mode Documentation**: Review mode-specific documentation for behavioral patterns
- **Performance Monitoring**: See performance configuration documentation for optimization strategies
## Version History
- **v1.0.0**: Initial SuperClaude-Lite configuration with all 7 hooks and 6 MCP servers
  - Full hook lifecycle support with learning and performance monitoring
  - Comprehensive quality gates implementation
  - Mode integration with behavioral pattern support

# User Experience Configuration (`user_experience.yaml`)
## Overview
The `user_experience.yaml` file configures UX optimization, project detection, and user-centric intelligence patterns for the SuperClaude-Lite framework. This configuration enables an intelligent user experience through smart defaults, proactive assistance, and adaptive interfaces.
## Purpose and Role
This configuration provides:
- **Project Detection**: Automatically detect project types and optimize accordingly
- **User Preference Learning**: Learn and adapt to user behavior patterns
- **Proactive Assistance**: Provide intelligent suggestions and contextual help
- **Smart Defaults**: Generate context-aware default configurations
- **Error Recovery**: Intelligent error handling with user-focused recovery
## Configuration Structure
### 1. Project Type Detection
#### Frontend Frameworks
```yaml
react_project:
file_indicators:
- "package.json"
- "*.tsx"
- "*.jsx"
- "react" # in package.json dependencies
directory_indicators:
- "src/components"
- "public"
- "node_modules"
confidence_threshold: 0.8
recommendations:
mcp_servers: ["magic", "context7", "playwright"]
compression_level: "minimal"
performance_focus: "ui_responsiveness"
```
**Detection Logic**: File and directory pattern matching with confidence scoring
**Recommendations**: Automatic MCP server selection and optimization settings
**Thresholds**: Confidence levels for reliable project type detection
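As an illustration of this detection logic, the sketch below scores a project against one profile by matching its file and directory indicators. The function name and the equal weighting of indicators are assumptions; the real implementation may weight indicators differently and also inspect `package.json` dependencies (the `"react"` dependency indicator is omitted here for brevity).

```python
from fnmatch import fnmatch
from pathlib import PurePosixPath

def detection_confidence(profile, files, directories):
    """Fraction of a profile's indicators found in the project (hypothetical scoring)."""
    file_hits = sum(
        any(fnmatch(PurePosixPath(f).name, pattern) for f in files)
        for pattern in profile["file_indicators"]
    )
    dir_hits = sum(d in directories for d in profile["directory_indicators"])
    total = len(profile["file_indicators"]) + len(profile["directory_indicators"])
    return (file_hits + dir_hits) / total if total else 0.0

react_profile = {
    "file_indicators": ["package.json", "*.tsx", "*.jsx"],
    "directory_indicators": ["src/components", "public", "node_modules"],
    "confidence_threshold": 0.8,
}

files = ["package.json", "src/App.tsx", "src/index.jsx"]
dirs = ["src/components", "public", "node_modules"]
score = detection_confidence(react_profile, files, dirs)
is_react = score >= react_profile["confidence_threshold"]
```

A project only triggers the profile's recommendations when its score clears the configured `confidence_threshold`, which is what keeps false positives down.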
#### Backend Frameworks
```yaml
python_project:
file_indicators:
- "requirements.txt"
- "pyproject.toml"
- "*.py"
recommendations:
mcp_servers: ["serena", "sequential", "context7"]
compression_level: "standard"
validation_level: "enhanced"
```
**Language Detection**: Python, Node.js, and other backend frameworks
**Tool Selection**: Appropriate MCP servers for backend development
**Configuration**: Optimized settings for backend workflows
### 2. User Preference Intelligence
#### Preference Learning
```yaml
preference_learning:
interaction_patterns:
command_preferences:
track_command_usage: true
track_flag_preferences: true
track_workflow_patterns: true
learning_window: 100 # operations
```
**Pattern Tracking**: Monitor user command and workflow preferences
**Learning Window**: Number of operations used for preference analysis
**Behavioral Analysis**: Speed vs quality preferences, detail level preferences
#### Adaptation Strategies
```yaml
adaptation_strategies:
speed_focused_user:
optimizations: ["aggressive_caching", "parallel_execution", "reduced_analysis"]
ui_changes: ["shorter_responses", "quick_suggestions", "minimal_explanations"]
quality_focused_user:
optimizations: ["comprehensive_analysis", "detailed_validation", "thorough_documentation"]
ui_changes: ["detailed_responses", "comprehensive_suggestions", "full_explanations"]
```
**User Profiles**: Speed-focused, quality-focused, and efficiency-focused adaptations
**Optimization**: Performance tuning based on user preferences
**Interface Adaptation**: UI changes to match user preferences
### 3. Proactive User Assistance
#### Intelligent Suggestions
```yaml
optimization_suggestions:
- trigger: {repeated_operations: ">5", same_pattern: true}
suggestion: "Consider creating a script or alias for this repeated operation"
confidence: 0.8
category: "workflow_optimization"
- trigger: {performance_issues: "detected", duration: ">3_sessions"}
suggestion: "Performance optimization recommendations available"
action: "show_performance_guide"
confidence: 0.9
```
**Pattern Recognition**: Detect repeated operations and inefficiencies
**Contextual Suggestions**: Provide relevant optimization recommendations
**Confidence Scoring**: Reliability ratings for suggestions
#### Contextual Help
```yaml
help_triggers:
- context: {new_user: true, session_count: "<5"}
help_type: "onboarding_guidance"
content: "Getting started tips and best practices"
- context: {error_rate: ">10%", recent_errors: ">3"}
help_type: "troubleshooting_assistance"
content: "Common error solutions and debugging tips"
```
**Trigger-Based Help**: Automatic help based on user context and behavior
**Adaptive Content**: Different help types for different situations
**User Journey**: Onboarding, troubleshooting, and advanced guidance
### 4. Smart Defaults Intelligence
#### Project-Based Defaults
```yaml
project_based_defaults:
react_project:
default_mcp_servers: ["magic", "context7"]
default_compression: "minimal"
default_analysis_depth: "ui_focused"
default_validation: "component_focused"
python_project:
default_mcp_servers: ["serena", "sequential"]
default_compression: "standard"
default_analysis_depth: "comprehensive"
default_validation: "enhanced"
```
**Context-Aware Configuration**: Automatic configuration based on detected project type
**Framework Optimization**: Defaults optimized for specific development frameworks
**Workflow Enhancement**: Pre-configured settings for common development patterns
#### Dynamic Configuration
```yaml
configuration_adaptation:
performance_based:
- condition: {system_performance: "high"}
adjustments: {analysis_depth: "comprehensive", features: "all_enabled"}
- condition: {system_performance: "low"}
adjustments: {analysis_depth: "essential", features: "performance_focused"}
```
**Performance Adaptation**: Adjust configuration based on system performance
**Expertise-Based**: Different defaults for beginner vs expert users
**Resource Management**: Optimize based on available system resources
### 5. Error Recovery Intelligence
#### Error Classification
```yaml
error_classification:
user_errors:
- type: "syntax_error"
recovery: "suggest_correction"
user_guidance: "detailed"
- type: "configuration_error"
recovery: "auto_fix_with_approval"
user_guidance: "educational"
system_errors:
- type: "performance_degradation"
recovery: "automatic_optimization"
user_notification: "informational"
```
**Error Types**: Classification of user vs system errors
**Recovery Strategies**: Appropriate recovery actions for each error type
**User Guidance**: Educational vs informational responses
#### Recovery Learning
```yaml
recovery_effectiveness:
track_recovery_success: true
learn_recovery_patterns: true
improve_recovery_strategies: true
user_recovery_preferences:
learn_preferred_recovery: true
adapt_recovery_approach: true
personalize_error_handling: true
```
**Pattern Learning**: Learn from successful error recovery patterns
**Personalization**: Adapt error handling to user preferences
**Continuous Improvement**: Improve recovery strategies over time
### 6. User Expertise Detection
#### Behavioral Indicators
```yaml
expertise_indicators:
command_proficiency:
indicators: ["advanced_flags", "complex_operations", "custom_configurations"]
weight: 0.4
error_recovery_ability:
indicators: ["self_correction", "minimal_help_needed", "independent_problem_solving"]
weight: 0.3
workflow_sophistication:
indicators: ["efficient_workflows", "automation_usage", "advanced_patterns"]
weight: 0.3
```
**Multi-Factor Detection**: Command proficiency, error recovery, workflow sophistication
**Weighted Scoring**: Balanced assessment of different expertise indicators
**Dynamic Assessment**: Continuous evaluation of user expertise level
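The weighted scoring above can be sketched as follows. The per-indicator scores and the expert/beginner cutoffs are invented for illustration — the configuration defines only the weights, not how indicator scores are observed or where the level boundaries sit.

```python
WEIGHTS = {
    "command_proficiency": 0.4,
    "error_recovery_ability": 0.3,
    "workflow_sophistication": 0.3,
}

def expertise_score(indicator_scores):
    """Weighted sum of per-indicator scores, each assumed to be in [0, 1]."""
    return sum(WEIGHTS[name] * indicator_scores.get(name, 0.0) for name in WEIGHTS)

def classify(score, expert_cutoff=0.75, beginner_cutoff=0.4):
    # Cutoffs are hypothetical; the config does not specify them.
    if score >= expert_cutoff:
        return "expert"
    if score <= beginner_cutoff:
        return "beginner"
    return "intermediate"

score = expertise_score({
    "command_proficiency": 0.9,
    "error_recovery_ability": 0.8,
    "workflow_sophistication": 0.7,
})
level = classify(score)
```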
#### Expertise Adaptation
```yaml
beginner_adaptations:
interface: ["detailed_explanations", "step_by_step_guidance", "comprehensive_warnings"]
defaults: ["safe_options", "guided_workflows", "educational_mode"]
expert_adaptations:
interface: ["minimal_explanations", "advanced_options", "efficiency_focused"]
defaults: ["maximum_automation", "performance_optimization", "minimal_interruptions"]
```
**Progressive Interface**: Interface complexity matches user expertise
**Default Optimization**: Appropriate defaults for each expertise level
**Learning Curve**: Smooth progression from beginner to expert experience
### 7. Satisfaction Intelligence
#### Satisfaction Metrics
```yaml
satisfaction_metrics:
task_completion_rate:
weight: 0.3
target_threshold: 0.85
error_resolution_speed:
weight: 0.25
target_threshold: "fast"
feature_adoption_rate:
weight: 0.2
target_threshold: 0.6
```
**Multi-Dimensional Tracking**: Completion rates, error resolution, feature adoption
**Weighted Scoring**: Balanced assessment of satisfaction factors
**Target Thresholds**: Performance targets for satisfaction metrics
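A minimal sketch of threshold checking for these metrics follows. Only the numeric targets are modeled; the qualitative `"fast"` target for `error_resolution_speed` is left out, and the observed values are made up for the example.

```python
METRICS = {
    "task_completion_rate": {"weight": 0.3, "target": 0.85},
    "feature_adoption_rate": {"weight": 0.2, "target": 0.6},
}

def below_target(observed):
    """Return the metric names whose observed value misses the target threshold."""
    return [
        name for name, spec in METRICS.items()
        if observed.get(name, 0.0) < spec["target"]
    ]

misses = below_target({"task_completion_rate": 0.9, "feature_adoption_rate": 0.5})
```

A non-empty `misses` list is the kind of signal that would feed the optimization triggers described in the next subsection.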
#### Optimization Strategies
```yaml
optimization_strategies:
low_satisfaction_triggers:
- trigger: {completion_rate: "<0.7"}
action: "simplify_workflows"
priority: "high"
- trigger: {error_rate: ">15%"}
action: "improve_error_prevention"
priority: "critical"
```
**Trigger-Based Optimization**: Automatic improvements based on satisfaction metrics
**Priority Management**: Critical vs high vs medium priority improvements
**Continuous Optimization**: Ongoing satisfaction improvement processes
### 8. Personalization Engine
#### Interface Personalization
```yaml
interface_personalization:
layout_preferences:
learn_preferred_layouts: true
adapt_information_density: true
customize_interaction_patterns: true
content_personalization:
learn_content_preferences: true
adapt_explanation_depth: true
customize_suggestion_types: true
```
**Adaptive Interface**: Layout and content adapted to user preferences
**Information Density**: Adjust detail level based on user preferences
**Interaction Patterns**: Customize based on user behavior patterns
#### Workflow Optimization
```yaml
personal_workflow_learning:
common_task_patterns: true
workflow_efficiency_analysis: true
personalized_shortcuts: true
workflow_recommendations:
suggest_workflow_improvements: true
recommend_automation_opportunities: true
provide_efficiency_insights: true
```
**Pattern Learning**: Learn individual user workflow patterns
**Efficiency Analysis**: Identify optimization opportunities
**Personalized Recommendations**: Workflow improvements tailored to user
## Configuration Guidelines
### Project Detection Tuning
- **Confidence Thresholds**: Higher thresholds reduce false positives
- **File Indicators**: Add project-specific files for better detection
- **Directory Structure**: Include common directory patterns
- **Recommendations**: Align MCP server selection with project needs
### Preference Learning
- **Learning Window**: Adjust based on user activity level
- **Adaptation Speed**: Balance responsiveness with stability
- **Pattern Recognition**: Include relevant behavioral indicators
- **Privacy**: Ensure user preference data remains private
### Proactive Assistance
- **Suggestion Timing**: Avoid interrupting user flow
- **Relevance**: Ensure suggestions are contextually appropriate
- **Frequency**: Balance helpfulness with intrusiveness
- **User Control**: Allow users to adjust assistance level
## Integration Points
### Hook Integration
- **Session Start**: Project detection and user preference loading
- **Pre-Tool Use**: Context-aware defaults and proactive suggestions
- **Post-Tool Use**: Satisfaction tracking and pattern learning
### MCP Server Coordination
- **Server Selection**: Project-based and preference-based routing
- **Configuration**: Context-aware MCP server configuration
- **Performance**: User preference-based optimization
## Troubleshooting
### Project Detection Issues
- **False Positives**: Increase confidence thresholds
- **False Negatives**: Add more file/directory indicators
- **Conflicting Types**: Review indicator specificity
### Preference Learning Problems
- **Slow Adaptation**: Reduce learning window size
- **Wrong Preferences**: Review behavioral indicators
- **Privacy Concerns**: Ensure data anonymization
### Satisfaction Issues
- **Low Completion Rates**: Review workflow complexity
- **High Error Rates**: Improve error prevention
- **Poor Feature Adoption**: Enhance feature discoverability
## Related Documentation
- **Project Detection**: Framework project type detection patterns
- **User Analytics**: User behavior analysis and learning systems
- **Error Recovery**: Comprehensive error handling and recovery strategies

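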
# Validation Configuration (`validation.yaml`)
## Overview
The `validation.yaml` file defines comprehensive quality validation rules and standards for the SuperClaude-Lite framework. This configuration implements RULES.md and PRINCIPLES.md enforcement through automated validation cycles, quality standards, and continuous improvement mechanisms.
## Purpose and Role
The validation configuration serves as:
- **Rules Enforcement Engine**: Implements SuperClaude RULES.md validation with automatic detection and correction
- **Principles Alignment Validator**: Ensures adherence to PRINCIPLES.md through systematic validation
- **Quality Standards Framework**: Establishes minimum quality thresholds across code, security, performance, and maintainability
- **Validation Workflow Orchestrator**: Manages pre-validation, post-validation, and continuous validation cycles
- **Learning Integration System**: Incorporates validation results into framework learning and adaptation
## Configuration Structure
### 1. Core SuperClaude Rules Validation (`rules_validation`)
#### File Operations Validation
```yaml
file_operations:
read_before_write:
enabled: true
severity: "error"
message: "RULES violation: No Read operation detected before Write/Edit"
check_recent_tools: 3
exceptions: ["new_file_creation"]
```
**Purpose**: Enforces mandatory Read operations before Write/Edit operations
**Severity**: Error level prevents execution without compliance
**Recent Tools Check**: Examines last 3 tool operations for Read operations
**Exceptions**: Allows new file creation without prior Read requirement
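The rule can be sketched as a check over recent tool history. The tool-record shape (`name`, `operation` keys) is an assumption for illustration; the actual hook reads this context from stdin JSON.

```python
def check_read_before_write(tool_history, current_tool,
                            check_recent=3, exceptions=("new_file_creation",)):
    """Return a violation dict, or None if the rule passes (illustrative shape)."""
    if current_tool["name"] not in ("Write", "Edit"):
        return None  # Rule only applies to write-type tools
    if current_tool.get("operation") in exceptions:
        return None  # New file creation needs no prior Read
    recent = tool_history[-check_recent:]
    if any(t["name"] == "Read" for t in recent):
        return None
    return {
        "severity": "error",
        "message": "RULES violation: No Read operation detected before Write/Edit",
    }

history = [{"name": "Grep"}, {"name": "Bash"}, {"name": "Glob"}]
violation = check_read_before_write(history, {"name": "Write"})
```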
```yaml
absolute_paths_only:
enabled: true
severity: "error"
message: "RULES violation: Relative path used"
path_parameters: ["file_path", "path", "directory", "output_path"]
allowed_prefixes: ["http://", "https://", "/"]
```
**Purpose**: Prevents security issues through relative path usage
**Parameter Validation**: Checks all path-related parameters
**Allowed Prefixes**: Permits absolute paths and URLs only
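The prefix check above reduces to a simple scan of the configured path parameters — a sketch under the assumption that a tool's input arrives as a flat dict:

```python
PATH_PARAMETERS = ("file_path", "path", "directory", "output_path")
ALLOWED_PREFIXES = ("http://", "https://", "/")

def find_relative_paths(tool_input):
    """Return the parameter names whose values use a disallowed relative path."""
    return [
        param for param in PATH_PARAMETERS
        if param in tool_input
        and not str(tool_input[param]).startswith(ALLOWED_PREFIXES)
    ]

bad = find_relative_paths({"file_path": "src/app.py", "output_path": "/tmp/out"})
```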
```yaml
validate_before_execution:
enabled: true
severity: "warning"
message: "RULES recommendation: High-risk operation should include validation"
high_risk_operations: ["delete", "refactor", "deploy", "migrate"]
complexity_threshold: 0.7
```
**Purpose**: Recommends validation before high-risk operations
**Risk Assessment**: Identifies operations requiring additional validation
**Complexity Consideration**: Higher complexity operations require validation
#### Security Requirements Validation
```yaml
security_requirements:
input_validation:
enabled: true
severity: "error"
message: "RULES violation: User input handling without validation"
check_patterns: ["user_input", "external_data", "api_input"]
no_hardcoded_secrets:
enabled: true
severity: "critical"
message: "RULES violation: Hardcoded sensitive information detected"
patterns: ["password", "api_key", "secret", "token"]
production_safety:
enabled: true
severity: "error"
message: "RULES violation: Unsafe operation in production context"
production_indicators: ["is_production", "prod_env", "production"]
```
**Input Validation**: Ensures user input is properly validated
**Secret Detection**: Prevents hardcoded sensitive information
**Production Safety**: Protects against unsafe production operations
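A rough sketch of the secret scan follows, using the patterns listed in the configuration. Real detection would need entropy analysis and far better false-positive handling than this keyword-and-quotes heuristic.

```python
import re

SECRET_PATTERNS = ("password", "api_key", "secret", "token")
# Match e.g.  api_key = "..."  or  password: "..."  (illustrative heuristic)
ASSIGNMENT = re.compile(
    r'(?i)\b(' + '|'.join(SECRET_PATTERNS) + r')\b\s*[=:]\s*["\'][^"\']+["\']'
)

def scan_for_secrets(source):
    """Return the 1-based line numbers containing a credential-like assignment."""
    return [
        lineno for lineno, line in enumerate(source.splitlines(), start=1)
        if ASSIGNMENT.search(line)
    ]

code = 'api_key = "sk-123456"\nname = "example"\npassword: "hunter2"'
hits = scan_for_secrets(code)
```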
### 2. SuperClaude Principles Validation (`principles_validation`)
#### Evidence Over Assumptions
```yaml
evidence_over_assumptions:
enabled: true
severity: "warning"
message: "PRINCIPLES: Provide evidence to support assumptions"
check_for_assumptions: true
require_evidence: true
confidence_threshold: 0.7
```
**Purpose**: Enforces evidence-based reasoning and decision-making
**Assumption Detection**: Identifies assumptions requiring evidence support
**Confidence Threshold**: 70% confidence required for assumption validation
#### Code Over Documentation
```yaml
code_over_documentation:
enabled: true
severity: "warning"
message: "PRINCIPLES: Documentation should follow working code, not precede it"
documentation_operations: ["document", "readme", "guide"]
require_working_code: true
```
**Purpose**: Ensures documentation follows working code implementation
**Documentation Operations**: Identifies documentation-focused operations
**Working Code Requirement**: Validates existence of working code before documentation
#### Efficiency Over Verbosity
```yaml
efficiency_over_verbosity:
enabled: true
severity: "suggestion"
message: "PRINCIPLES: Consider token efficiency techniques for large outputs"
output_size_threshold: 5000
verbosity_indicators: ["repetitive_content", "unnecessary_detail"]
```
**Purpose**: Promotes token efficiency and concise communication
**Size Threshold**: outputs above 5000 tokens trigger efficiency recommendations
**Verbosity Detection**: Identifies repetitive or unnecessarily detailed content
#### Test-Driven Development
```yaml
test_driven_development:
enabled: true
severity: "warning"
message: "PRINCIPLES: Logic changes should include tests"
logic_operations: ["write", "edit", "generate", "implement"]
test_file_patterns: ["*test*", "*spec*", "test_*", "*_test.*"]
```
**Purpose**: Promotes test-driven development practices
**Logic Operations**: Identifies operations requiring test coverage
**Test Pattern Recognition**: Recognizes various test file naming conventions
#### Single Responsibility Principle
```yaml
single_responsibility:
enabled: true
severity: "suggestion"
message: "PRINCIPLES: Functions/classes should have single responsibility"
complexity_indicators: ["multiple_purposes", "large_function", "many_parameters"]
```
**Purpose**: Enforces single responsibility principle in code design
**Complexity Detection**: Identifies functions/classes violating single responsibility
#### Error Handling Requirement
```yaml
error_handling_required:
enabled: true
severity: "warning"
message: "PRINCIPLES: Error handling not implemented"
critical_operations: ["write", "edit", "deploy", "api_calls"]
```
**Purpose**: Ensures proper error handling in critical operations
**Critical Operations**: Identifies operations requiring error handling
### 3. Quality Standards (`quality_standards`)
#### Code Quality Standards
```yaml
code_quality:
minimum_score: 0.7
factors:
- syntax_correctness
- logical_consistency
- error_handling_presence
- documentation_adequacy
- test_coverage
```
**Minimum Score**: 70% quality score required for code acceptance
**Multi-Factor Assessment**: Comprehensive quality evaluation across multiple dimensions
#### Security Compliance Standards
```yaml
security_compliance:
minimum_score: 0.8
checks:
- input_validation
- output_sanitization
- authentication_checks
- authorization_verification
- secure_communication
```
**Security Score**: 80% security compliance required (higher than code quality)
**Comprehensive Security**: Covers all major security aspects
#### Performance Standards
```yaml
performance_standards:
response_time_threshold_ms: 2000
resource_efficiency_min: 0.6
optimization_indicators:
- algorithm_efficiency
- memory_usage
- processing_speed
```
**Response Time**: 2-second maximum response time threshold
**Resource Efficiency**: 60% minimum resource efficiency requirement
**Optimization Focus**: Algorithm efficiency, memory usage, and processing speed
#### Maintainability Standards
```yaml
maintainability:
minimum_score: 0.6
factors:
- code_clarity
- documentation_quality
- modular_design
- consistent_style
```
**Maintainability Score**: 60% minimum maintainability score
**Sustainability Focus**: Emphasizes long-term code maintainability
### 4. Validation Workflow (`validation_workflow`)
#### Pre-Validation
```yaml
pre_validation:
enabled: true
quick_checks:
- syntax_validation
- basic_security_scan
- rule_compliance_check
```
**Purpose**: Fast validation before operation execution
**Quick Checks**: Essential validations that execute rapidly
**Blocking**: Can prevent operation execution based on results
#### Post-Validation
```yaml
post_validation:
enabled: true
comprehensive_checks:
- quality_assessment
- principle_alignment
- effectiveness_measurement
- learning_opportunity_detection
```
**Purpose**: Comprehensive validation after operation completion
**Thorough Analysis**: Complete quality and principle assessment
**Learning Integration**: Identifies opportunities for framework learning
#### Continuous Validation
```yaml
continuous_validation:
enabled: true
real_time_monitoring:
- pattern_violation_detection
- quality_degradation_alerts
- performance_regression_detection
```
**Purpose**: Ongoing validation throughout operation lifecycle
**Real-Time Monitoring**: Immediate detection of issues as they arise
**Proactive Alerts**: Early warning system for quality issues
### 5. Error Classification and Handling (`error_classification`)
#### Critical Errors
```yaml
critical_errors:
severity_level: "critical"
block_execution: true
examples:
- security_vulnerabilities
- data_corruption_risk
- system_instability
```
**Execution Blocking**: Critical errors prevent operation execution
**System Protection**: Prevents system-level damage or security breaches
#### Standard Errors
```yaml
standard_errors:
severity_level: "error"
block_execution: false
require_acknowledgment: true
examples:
- rule_violations
- quality_failures
- incomplete_implementation
```
**Acknowledgment Required**: User must acknowledge errors before proceeding
**Non-Blocking**: Allows execution with user awareness of issues
#### Warnings and Suggestions
```yaml
warnings:
severity_level: "warning"
block_execution: false
examples:
- principle_deviations
- optimization_opportunities
- best_practice_suggestions
suggestions:
severity_level: "suggestion"
informational: true
examples:
- code_improvements
- efficiency_enhancements
- learning_recommendations
```
**Non-Blocking**: Warnings and suggestions don't prevent execution
**Educational Value**: Provides learning opportunities and improvement suggestions
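The four severity levels above amount to a small policy table. The sketch below mirrors the configured `block_execution` / `require_acknowledgment` behavior; the issue records themselves are invented.

```python
POLICY = {
    "critical":   {"block_execution": True,  "require_acknowledgment": False},
    "error":      {"block_execution": False, "require_acknowledgment": True},
    "warning":    {"block_execution": False, "require_acknowledgment": False},
    "suggestion": {"block_execution": False, "require_acknowledgment": False},
}

def should_block(issues):
    """An operation is blocked only if some issue carries a blocking severity."""
    return any(POLICY[i["severity"]]["block_execution"] for i in issues)

issues = [{"severity": "warning"}, {"severity": "error"}]
blocked = should_block(issues)
```

This is why an `error` surfaces to the user yet still lets the operation run, while a single `critical` finding stops it outright.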
### 6. Effectiveness Measurement (`effectiveness_measurement`)
#### Success Indicators
```yaml
success_indicators:
task_completion: "weight: 0.4"
quality_achievement: "weight: 0.3"
user_satisfaction: "weight: 0.2"
learning_value: "weight: 0.1"
```
**Weighted Assessment**: Balanced evaluation across multiple success dimensions
**Task Completion**: Highest weight on successful task completion
**Quality Focus**: Significant weight on quality achievement
**User Experience**: Important consideration for user satisfaction
**Learning Value**: Framework learning and improvement value
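Combining the indicators reduces to a weighted average. The weights below come from the configuration; the observed values are made up for the example, and each indicator is assumed to be normalized to [0, 1].

```python
SUCCESS_WEIGHTS = {
    "task_completion": 0.4,
    "quality_achievement": 0.3,
    "user_satisfaction": 0.2,
    "learning_value": 0.1,
}

def success_score(observed):
    """Weighted average of the four success indicators."""
    return sum(w * observed.get(name, 0.0) for name, w in SUCCESS_WEIGHTS.items())

overall = success_score({
    "task_completion": 1.0,
    "quality_achievement": 0.8,
    "user_satisfaction": 0.9,
    "learning_value": 0.5,
})
```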
#### Performance Metrics
```yaml
performance_metrics:
execution_time: "target: <2000ms"
resource_efficiency: "target: >0.6"
error_rate: "target: <0.1"
validation_accuracy: "target: >0.9"
```
**Performance Targets**: Specific measurable targets for performance assessment
**Error Rate**: Low error rate target for system reliability
**Validation Accuracy**: High accuracy target for validation effectiveness
#### Quality Metrics
```yaml
quality_metrics:
code_quality_score: "target: >0.7"
security_compliance: "target: >0.8"
principle_alignment: "target: >0.7"
rule_compliance: "target: >0.9"
```
**Quality Targets**: Specific targets for different quality dimensions
**High Compliance**: Very high rule compliance target (90%)
**Strong Security**: High security compliance target (80%)
### 7. Learning Integration (`learning_integration`)
#### Pattern Detection
```yaml
pattern_detection:
success_patterns: true
failure_patterns: true
optimization_patterns: true
user_preference_patterns: true
```
**Comprehensive Pattern Learning**: Learns from all types of patterns
**Success and Failure**: Learns from both positive and negative outcomes
**User Preferences**: Adapts to individual user patterns and preferences
#### Effectiveness Feedback
```yaml
effectiveness_feedback:
real_time_collection: true
user_satisfaction_tracking: true
quality_trend_analysis: true
adaptation_triggers: true
```
**Real-Time Learning**: Immediate learning from validation outcomes
**User Satisfaction**: Incorporates user satisfaction into learning
**Trend Analysis**: Identifies quality trends over time
**Adaptive Triggers**: Triggers adaptations based on learning insights
#### Continuous Improvement
```yaml
continuous_improvement:
threshold_adjustment: true
rule_refinement: true
principle_enhancement: true
validation_optimization: true
```
**Dynamic Optimization**: Continuously improves validation effectiveness
**Rule Evolution**: Refines rules based on effectiveness data
**Validation Enhancement**: Optimizes validation processes over time
### 8. Context-Aware Validation (`context_awareness`)
#### Project Type Adaptations
```yaml
project_type_adaptations:
frontend_projects:
additional_checks: ["accessibility", "responsive_design", "browser_compatibility"]
backend_projects:
additional_checks: ["api_security", "data_validation", "performance_optimization"]
full_stack_projects:
additional_checks: ["integration_testing", "end_to_end_validation", "deployment_safety"]
```
**Project-Specific Validation**: Adapts validation to project characteristics
**Domain-Specific Checks**: Includes relevant checks for each project type
**Comprehensive Coverage**: Ensures all relevant aspects are validated
#### User Expertise Adjustments
```yaml
user_expertise_adjustments:
beginner:
validation_verbosity: "high"
educational_suggestions: true
step_by_step_guidance: true
intermediate:
validation_verbosity: "medium"
best_practice_suggestions: true
optimization_recommendations: true
expert:
validation_verbosity: "low"
advanced_optimization_suggestions: true
architectural_guidance: true
```
**Expertise-Aware Validation**: Adapts validation approach to user expertise level
**Educational Value**: Provides appropriate learning opportunities
**Efficiency Optimization**: Reduces noise for expert users while maintaining quality
### 9. Performance Configuration (`performance_configuration`)
#### Validation Targets
```yaml
validation_targets:
processing_time_ms: 100
memory_usage_mb: 50
cpu_utilization_percent: 30
```
**Performance Limits**: Ensures validation doesn't impact system performance
**Resource Constraints**: Reasonable resource usage for validation processes
#### Optimization Strategies
```yaml
optimization_strategies:
parallel_validation: true
cached_results: true
incremental_validation: true
smart_rule_selection: true
```
**Performance Optimization**: Multiple strategies to optimize validation speed
**Intelligent Caching**: Caches validation results for repeated operations
**Smart Selection**: Applies only relevant rules based on context
#### Resource Management
```yaml
resource_management:
max_validation_time_ms: 500
memory_limit_mb: 100
cpu_limit_percent: 50
fallback_on_resource_limit: true
```
**Resource Protection**: Prevents validation from consuming excessive resources
**Graceful Fallback**: Falls back to basic validation if resource limits exceeded
### 10. Integration Points (`integration_points`)
#### MCP Server Integration
```yaml
mcp_servers:
serena: "semantic_validation_support"
morphllm: "edit_validation_coordination"
sequential: "complex_validation_analysis"
```
**Server-Specific Integration**: Leverages MCP server capabilities for validation
**Semantic Validation**: Uses Serena for semantic analysis validation
**Edit Coordination**: Coordinates with Morphllm for edit validation
#### Learning Engine Integration
```yaml
learning_engine:
effectiveness_tracking: true
pattern_learning: true
adaptation_feedback: true
```
**Learning Coordination**: Integrates validation results with learning system
**Pattern Learning**: Learns patterns from validation outcomes
**Adaptive Feedback**: Provides feedback for learning adaptation
#### Other Hook Integration
```yaml
other_hooks:
pre_tool_use: "validation_preparation"
session_start: "validation_configuration"
stop: "validation_summary_generation"
```
**Hook Coordination**: Integrates validation across hook lifecycle
**Preparation**: Prepares validation context before tool use
**Summary**: Generates validation summaries at session end
## Performance Implications
### 1. Validation Processing Performance
#### Rule Validation Performance
- **File Operation Rules**: 5-20ms per rule validation
- **Security Rules**: 10-50ms per security check
- **Principle Validation**: 20-100ms per principle assessment
- **Total Rule Validation**: 50-200ms for complete rule validation
#### Quality Assessment Performance
- **Code Quality**: 100-500ms for comprehensive quality assessment
- **Security Compliance**: 200ms-1s for security analysis
- **Performance Analysis**: 150-750ms for performance validation
- **Maintainability**: 50-300ms for maintainability assessment
### 2. Learning Integration Performance
#### Pattern Learning Impact
- **Pattern Detection**: 50-200ms for pattern recognition
- **Learning Updates**: 100-500ms for learning data updates
- **Adaptation Application**: 200ms-1s for adaptation implementation
#### Effectiveness Tracking
- **Metrics Collection**: 10-50ms per validation operation
- **Trend Analysis**: 100-500ms for trend calculation
- **User Satisfaction**: 20-100ms for satisfaction tracking
### 3. Resource Usage
#### Memory Usage
- **Rule Storage**: 100-500KB for validation rules
- **Pattern Data**: 500KB-2MB for learned patterns
- **Validation State**: 50-200KB during validation execution
#### CPU Usage
- **Validation Processing**: 20-60% CPU during comprehensive validation
- **Learning Processing**: 10-40% CPU for pattern learning
- **Background Monitoring**: <5% CPU for continuous validation
## Configuration Best Practices
### 1. Production Validation Configuration
```yaml
# Strict validation for production reliability
rules_validation:
file_operations:
read_before_write:
severity: "critical" # Stricter enforcement
security_requirements:
production_safety:
enabled: true
severity: "critical"
quality_standards:
security_compliance:
minimum_score: 0.9 # Higher security requirement
```
### 2. Development Validation Configuration
```yaml
# Educational and learning-focused validation
user_expertise_adjustments:
default_level: "beginner"
educational_suggestions: true
verbose_explanations: true
learning_integration:
continuous_improvement:
adaptation_triggers: "aggressive" # More learning
```
### 3. Performance-Optimized Configuration
```yaml
# Minimal validation for performance-critical environments
performance_configuration:
optimization_strategies:
parallel_validation: true
cached_results: true
smart_rule_selection: true
resource_management:
max_validation_time_ms: 200 # Stricter time limits
```
### 4. Learning-Optimized Configuration
```yaml
# Maximum learning and adaptation
learning_integration:
pattern_detection:
detailed_analysis: true
cross_session_learning: true
effectiveness_feedback:
real_time_collection: true
detailed_metrics: true
```
## Troubleshooting
### Common Validation Issues
#### False Positive Rule Violations
- **Symptoms**: Valid operations flagged as rule violations
- **Analysis**: Review rule patterns and exception handling
- **Solutions**: Refine rule patterns, add appropriate exceptions
- **Testing**: Test rules with edge cases and valid scenarios
#### Performance Impact
- **Symptoms**: Validation causing significant delays
- **Diagnosis**: Profile validation performance and identify bottlenecks
- **Optimization**: Enable caching, parallel processing, smart rule selection
- **Monitoring**: Track validation performance metrics continuously
#### Learning System Issues
- **Symptoms**: Validation not improving over time, poor adaptations
- **Investigation**: Review learning data collection and pattern recognition
- **Enhancement**: Adjust learning parameters, improve pattern detection
- **Validation**: Test learning effectiveness with controlled scenarios
#### Quality Standards Conflicts
- **Symptoms**: Conflicting quality requirements or unrealistic standards
- **Analysis**: Review quality standard interactions and dependencies
- **Resolution**: Adjust standards based on project requirements and constraints
- **Balancing**: Balance quality with practical implementation constraints
### Validation System Optimization
#### Rule Optimization
```yaml
# Optimize rule execution for performance
rules_validation:
smart_rule_selection:
context_aware: true
performance_optimized: true
minimal_redundancy: true
```
#### Quality Standard Tuning
```yaml
# Adjust quality standards based on project needs
quality_standards:
adaptive_thresholds: true
project_specific_adjustments: true
user_expertise_consideration: true
```
#### Learning System Tuning
```yaml
# Optimize learning for specific environments
learning_integration:
learning_rate_adjustment: "environment_specific"
pattern_recognition_sensitivity: "adaptive"
effectiveness_measurement_accuracy: "high"
```
## Related Documentation
- **RULES.md**: Core SuperClaude rules being enforced through validation
- **PRINCIPLES.md**: SuperClaude principles being validated for alignment
- **Quality Gates**: Integration with 8-step quality validation cycle
- **Hook Integration**: Post-tool use hook implementation for validation execution
## Version History
- **v1.0.0**: Initial validation configuration
- Comprehensive RULES.md enforcement with automatic detection
- PRINCIPLES.md alignment validation with evidence-based requirements
- Multi-dimensional quality standards (code, security, performance, maintainability)
- Context-aware validation with project type and user expertise adaptations
- Learning integration with pattern detection and continuous improvement
- Performance optimization with parallel processing and intelligent caching



@@ -1,75 +0,0 @@
# Validation Intelligence Configuration (`validation_intelligence.yaml`)
## Overview
The `validation_intelligence.yaml` file configures intelligent validation patterns, adaptive quality gates, and smart validation optimization for the SuperClaude-Lite framework.
## Purpose and Role
This configuration provides:
- **Intelligent Validation**: Context-aware validation rules and patterns
- **Adaptive Quality Gates**: Dynamic quality thresholds based on context
- **Validation Learning**: Learn from validation patterns and outcomes
- **Smart Optimization**: Optimize validation processes for efficiency and accuracy
## Key Configuration Areas
### 1. Intelligent Validation Patterns
- **Context-Aware Rules**: Apply different validation rules based on operation context
- **Pattern-Based Validation**: Use learned patterns to improve validation accuracy
- **Risk Assessment**: Assess validation risk based on operation characteristics
- **Adaptive Thresholds**: Adjust validation strictness based on context and history
### 2. Quality Gate Intelligence
- **Dynamic Quality Metrics**: Adjust quality requirements based on operation type
- **Multi-Dimensional Quality**: Consider multiple quality factors simultaneously
- **Quality Learning**: Learn what quality means in different contexts
- **Progressive Quality**: Apply increasingly sophisticated quality checks
### 3. Validation Optimization
- **Efficiency Patterns**: Learn which validations provide the most value
- **Validation Caching**: Cache validation results to avoid redundant checks
- **Selective Validation**: Apply validation selectively based on risk assessment
- **Performance-Quality Balance**: Optimize the trade-off between speed and thoroughness
### 4. Learning and Adaptation
- **Validation Effectiveness**: Track which validations catch real issues
- **False Positive Learning**: Reduce false positive validation failures
- **Pattern Recognition**: Recognize validation patterns across operations
- **Continuous Improvement**: Continuously improve validation accuracy and efficiency
## Configuration Structure
The file includes:
- Intelligent validation rule definitions
- Context-aware quality gate configurations
- Learning and adaptation parameters
- Optimization strategies and thresholds
## Integration Points
### Framework Integration
- Works with all hooks that perform validation
- Integrates with quality gate systems
- Provides input to performance optimization
- Coordinates with error handling and recovery
### Learning Integration
- Learns from validation outcomes and user feedback
- Adapts to project-specific quality requirements
- Improves validation patterns over time
- Shares learning with other intelligence systems
## Usage Guidelines
This configuration controls the intelligent validation capabilities:
- **Validation Depth**: Balance thorough validation with performance needs
- **Learning Sensitivity**: Configure how quickly validation patterns adapt
- **Quality Standards**: Set appropriate quality thresholds for your use cases
- **Optimization Balance**: Balance validation thoroughness with efficiency
## Related Documentation
- **Validation Configuration**: `validation.yaml.md` for basic validation settings
- **Intelligence Patterns**: `intelligence_patterns.yaml.md` for core learning patterns
- **Quality Gates**: Framework quality gate documentation for validation integration

File diff suppressed because it is too large


@@ -1,487 +0,0 @@
# Post-Tool-Use Hook Documentation
## Purpose
The `post_tool_use` hook analyzes tool execution results and provides validation, quality assessment, and learning feedback after every tool execution in Claude Code. It implements validation against SuperClaude principles and records learning events for continuous improvement.
**Core Implementation**: A 794-line Python implementation that validates tool results against RULES.md and PRINCIPLES.md, measures effectiveness, detects error patterns, and records learning events with a target execution time of <100ms.
## Execution Context
The post_tool_use hook runs after every tool execution in Claude Code. According to `settings.json`, it has a 10-second timeout and executes via: `python3 ~/.claude/hooks/post_tool_use.py`
**Actual Execution Flow:**
1. Receives tool execution result from Claude Code via stdin (JSON)
2. Initializes PostToolUseHook class with shared module components
3. Processes tool result through `process_tool_result()` method
4. Validates results against SuperClaude principles and measures effectiveness
5. Outputs comprehensive validation report via stdout (JSON)
6. Falls back gracefully on errors with basic validation report
**Input Analysis:**
- Extracts execution context (tool name, status, timing, parameters, results, errors)
- Analyzes operation outcome (success, performance, quality indicators)
- Evaluates quality indicators (code quality, security compliance, performance efficiency)
**Output Reporting:**
- Validation results (quality score, issues, warnings, suggestions)
- Effectiveness metrics (overall effectiveness, quality/performance/satisfaction scores)
- Learning analysis (patterns detected, success/failure factors, optimization opportunities)
- Compliance assessment (rules compliance, principles alignment, SuperClaude score)
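The stdin/stdout JSON contract described above can be sketched as a minimal hook skeleton. The payload field names (`tool_name`, `status`) are assumptions for illustration; the real 794-line hook consumes the schema Claude Code actually emits:

```python
import json
import sys

def process_tool_result(payload: dict) -> dict:
    """Build a basic validation report from a tool-result payload."""
    issues = []
    if payload.get("status") == "error":
        issues.append("tool reported an error")
    return {
        "tool": payload.get("tool_name", "unknown"),
        "validation": {"issues": issues, "passed": not issues},
    }

def main() -> None:
    try:
        payload = json.load(sys.stdin)         # 1. receive result via stdin
        report = process_tool_result(payload)  # 2-4. analyze and validate
    except Exception:
        # 6. graceful fallback: emit a basic report instead of crashing
        report = {"tool": "unknown", "validation": {"issues": [], "passed": True}}
    json.dump(report, sys.stdout)              # 5. emit report via stdout
```

The try/except around the whole pipeline is what makes step 6 possible: a malformed payload degrades to a basic report rather than a hook failure.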
## Performance Target
**Primary Target: <100ms execution time**
The hook is designed to provide comprehensive validation while maintaining minimal impact on overall system performance.
**Performance Breakdown:**
- **Initialization**: <20ms (component loading and configuration)
- **Context Extraction**: <15ms (analyzing tool results and parameters)
- **Validation Processing**: <35ms (RULES.md and PRINCIPLES.md compliance checking)
- **Learning Analysis**: <20ms (pattern detection and effectiveness measurement)
- **Report Generation**: <10ms (creating comprehensive validation report)
**Performance Monitoring:**
- Real-time execution time tracking with target enforcement
- Automatic performance degradation detection and alerts
- Resource usage monitoring (memory, CPU utilization)
- Fallback mechanisms for performance constraint scenarios
**Optimization Strategies:**
- Parallel validation processing for independent checks
- Cached validation results for repeated patterns
- Incremental validation for large operations
- Smart rule selection based on operation context
## Validation Levels
The hook implements four distinct validation levels, each providing increasing depth of analysis:
### Basic Level
**Focus**: Syntax and fundamental correctness
- **Syntax Validation**: Ensures generated code is syntactically correct
- **Basic Security Scan**: Detects obvious security vulnerabilities
- **Rule Compliance Check**: Validates core RULES.md requirements
- **Performance Target**: <50ms execution time
- **Use Cases**: Simple operations, low-risk contexts, performance-critical scenarios
### Standard Level (Default)
**Focus**: Comprehensive quality and type safety
- **All Basic Level checks**
- **Type Analysis**: Deep type compatibility checking and inference
- **Code Quality Assessment**: Maintainability, readability, and best practices
- **Principle Alignment**: Verification against PRINCIPLES.md guidelines
- **Performance Target**: <100ms execution time
- **Use Cases**: Regular development operations, standard complexity tasks
### Comprehensive Level
**Focus**: Security and performance optimization
- **All Standard Level checks**
- **Security Assessment**: Vulnerability analysis and threat modeling
- **Performance Analysis**: Bottleneck identification and optimization recommendations
- **Error Pattern Detection**: Advanced pattern recognition for failure modes
- **Learning Integration**: Enhanced effectiveness measurement and adaptation
- **Performance Target**: <150ms execution time
- **Use Cases**: High-risk operations, production deployments, security-sensitive contexts
### Production Level
**Focus**: Integration and deployment readiness
- **All Comprehensive Level checks**
- **Integration Testing**: Cross-component compatibility verification
- **Deployment Validation**: Production readiness assessment
- **Quality Gate Enforcement**: Complete 8-step validation cycle
- **Comprehensive Reporting**: Detailed compliance and quality documentation
- **Performance Target**: <200ms execution time
- **Use Cases**: Production deployments, critical system changes, release preparation
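The four levels can be read as an escalation ladder driven by operation context. A hypothetical selector, with illustrative thresholds that are not taken from the implementation:

```python
# Check lists mirror the validation_levels section of superclaude-config.json.
LEVEL_CHECKS = {
    "basic": ["syntax_validation"],
    "standard": ["syntax_validation", "type_analysis", "code_quality"],
    "comprehensive": ["syntax_validation", "type_analysis", "code_quality",
                      "security_assessment", "performance_analysis"],
    "production": ["syntax_validation", "type_analysis", "code_quality",
                   "security_assessment", "performance_analysis",
                   "integration_testing", "deployment_validation"],
}

def select_validation_level(complexity: float, high_risk: bool, production: bool) -> str:
    """Escalate strictly: production context dominates, then risk, then complexity."""
    if production:
        return "production"
    if high_risk:
        return "comprehensive"
    if complexity > 0.5:
        return "standard"
    return "basic"
```

Each step up the ladder is a superset of the checks below it, so escalation never loses coverage, only adds it.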
## RULES.md Compliance
The hook implements comprehensive enforcement of SuperClaude's core operational rules:
### File Operation Rules
**Read Before Write/Edit Enforcement:**
- Validates that Read operations precede Write/Edit operations
- Checks recent tool history (last 3 operations) for compliance
- Issues errors for violations with clear remediation guidance
- Provides exceptions for new file creation scenarios
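The history check above reduces to scanning the last three operations for a Read of the same file. A sketch, where the history entry shape (`tool`, `file_path`) is an assumption for illustration:

```python
def read_precedes_write(tool_history: list, target_path: str) -> bool:
    """Check the last three operations for a Read of the target file."""
    recent = tool_history[-3:]
    return any(
        op.get("tool") == "Read" and op.get("file_path") == target_path
        for op in recent
    )

history = [
    {"tool": "Read", "file_path": "/src/app.py"},
    {"tool": "Bash", "command": "pytest"},
]
```

A Write to `/src/app.py` would pass against this history; a Write to a file never read within the window would be flagged (unless the new-file exception applies).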
**Absolute Path Validation:**
- Scans all path parameters (file_path, path, directory, output_path)
- Blocks relative path usage with specific violation reporting
- Allows approved prefixes (http://, https://, absolute paths)
- Prevents path traversal attacks and ensures operation security
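The path scan is a prefix check over the known path parameters. A minimal sketch, assuming the parameter names listed above:

```python
ALLOWED_PREFIXES = ("http://", "https://", "/")
PATH_PARAMS = ("file_path", "path", "directory", "output_path")

def find_path_violations(params: dict) -> list:
    """Return path parameters that are neither absolute nor an approved URL."""
    violations = []
    for key in PATH_PARAMS:
        value = params.get(key)
        if isinstance(value, str) and not value.startswith(ALLOWED_PREFIXES):
            violations.append(key)
    return violations
```

Note that `str.startswith` accepts a tuple of prefixes, so the whole allow-list is checked in one call. A fuller implementation would also normalize `..` segments inside otherwise-absolute paths to catch traversal.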
**High-Risk Operation Validation:**
- Identifies high-risk operations (delete, refactor, deploy, migrate)
- Recommends validation for complex operations (complexity > 0.7)
- Provides warnings for operations lacking pre-validation
- Tracks validation compliance across operation types
### Security Requirements
**Input Validation Enforcement:**
- Detects user input handling patterns without validation
- Scans for external data processing vulnerabilities
- Validates API input sanitization and error handling
- Reports security violations with severity classification
**Secret Management Validation:**
- Scans for hardcoded sensitive information (passwords, API keys, tokens)
- Issues critical alerts for secret exposure risks
- Validates secure credential handling patterns
- Provides guidance for proper secret management
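A secret scan of this kind is typically a set of regex heuristics over the tool output. The patterns below are simplified illustrations; a real scanner would add entropy checks and provider-specific token formats:

```python
import re

SECRET_PATTERNS = [
    # password = "..." / passwd = '...'
    re.compile(r'(?i)(password|passwd)\s*=\s*["\'][^"\']+["\']'),
    # api_key / api-key / token / secret assigned a quoted literal
    re.compile(r'(?i)(api[_-]?key|token|secret)\s*=\s*["\'][^"\']+["\']'),
]

def scan_for_secrets(text: str) -> list:
    """Return matched snippets that look like hardcoded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```

Assignments from the environment (for example `password = os.environ["PW"]`) do not match, since the patterns require a quoted literal on the right-hand side.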
**Production Safety Checks:**
- Identifies production context indicators
- Validates safety measures for production operations
- Blocks unsafe operations in production environments
- Ensures proper rollback and recovery mechanisms
### Systematic Code Changes
**Project-Wide Discovery Validation:**
- Ensures comprehensive discovery before systematic changes
- Validates search completeness across all file types
- Confirms impact assessment documentation
- Verifies coordinated change execution planning
## PRINCIPLES.md Alignment
The hook validates adherence to SuperClaude's core development principles:
### Evidence-Based Decision Making
**Evidence Over Assumptions:**
- Detects assumption-based reasoning without supporting evidence
- Requires measurable data for significant decisions
- Validates hypothesis testing and empirical verification
- Promotes evidence-based development practices
**Decision Documentation:**
- Ensures decision rationale is recorded and accessible
- Validates trade-off analysis and alternative consideration
- Requires evidence for architectural and design choices
- Supports future decision review and learning
### Development Priority Validation
**Code Over Documentation:**
- Validates that documentation follows working code implementation
- Prevents documentation-first development anti-patterns
- Ensures documentation accuracy reflects actual implementation
- Promotes iterative development with validated outcomes
**Working Software Priority:**
- Verifies working implementations before extensive documentation
- Validates incremental development with functional milestones
- Ensures user value delivery through functional software
- Supports rapid prototyping and validation cycles
### Efficiency and Quality Balance
**Efficiency Over Verbosity:**
- Analyzes output size and complexity for unnecessary verbosity
- Recommends token efficiency techniques for large outputs
- Validates communication clarity without redundancy
- Promotes concise, actionable guidance and documentation
**Quality Without Compromise:**
- Ensures efficiency improvements don't sacrifice quality
- Validates testing and validation coverage during optimization
- Maintains code clarity and maintainability standards
- Balances development speed with long-term sustainability
## Learning Integration
The hook implements sophisticated learning mechanisms to continuously improve framework effectiveness:
### Effectiveness Measurement
**Multi-Dimensional Scoring:**
- **Overall Effectiveness**: Weighted combination of quality, performance, and satisfaction
- **Quality Score**: Code quality, security compliance, and principle alignment
- **Performance Score**: Execution time efficiency and resource utilization
- **User Satisfaction Estimate**: Success rate and error impact assessment
- **Learning Value**: Complexity, novelty, and insight generation potential
**Effectiveness Calculation:**
```yaml
effectiveness_weights:
quality_score: 30% # Code quality and compliance
performance_score: 25% # Execution efficiency
user_satisfaction: 35% # Perceived value and success
learning_value: 10% # Knowledge generation potential
```
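In code, the overall score is a weighted sum of the four dimension scores (each in 0.0-1.0) using the weights above. A sketch, not the framework's exact function:

```python
EFFECTIVENESS_WEIGHTS = {
    "quality_score": 0.30,
    "performance_score": 0.25,
    "user_satisfaction": 0.35,
    "learning_value": 0.10,
}

def overall_effectiveness(scores: dict) -> float:
    """Weighted combination of the four dimensions; missing scores count as 0."""
    return sum(
        weight * scores.get(dimension, 0.0)
        for dimension, weight in EFFECTIVENESS_WEIGHTS.items()
    )
```

Because the weights sum to 1.0, the result stays in the same 0.0-1.0 range as the inputs.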
### Pattern Recognition and Adaptation
**Success Pattern Detection:**
- Identifies effective tool usage patterns and MCP server coordination
- Recognizes high-quality output characteristics and optimal performance
- Records successful validation patterns and compliance strategies
- Builds pattern library for future operation optimization
**Failure Pattern Analysis:**
- Detects recurring error patterns and failure modes
- Analyzes root causes and contributing factors
- Identifies improvement opportunities and prevention strategies
- Generates targeted recommendations for specific failure types
**Adaptation Mechanisms:**
- **Real-Time Adjustment**: Dynamic threshold modification based on effectiveness
- **Rule Refinement**: Continuous improvement of validation rules and criteria
- **Principle Enhancement**: Evolution of principle interpretation and application
- **Validation Optimization**: Performance tuning based on usage patterns
### Learning Event Recording
**Operation Pattern Learning:**
- Records tool usage effectiveness with context and outcomes
- Tracks MCP server coordination patterns and success rates
- Documents user preference patterns and adaptation opportunities
- Builds comprehensive operation effectiveness database
**Error Recovery Learning:**
- Captures error context, recovery actions, and success rates
- Identifies effective error handling patterns and prevention strategies
- Records recovery time and resource requirements
- Builds error pattern knowledge base for future prevention
## Error Pattern Detection
The hook implements advanced error pattern detection to identify and prevent recurring issues:
### Error Classification System
**Severity-Based Classification:**
- **Critical Errors**: Security vulnerabilities, data corruption risks, system instability
- **Standard Errors**: Rule violations, quality failures, incomplete implementations
- **Warnings**: Principle deviations, optimization opportunities, best practice suggestions
- **Suggestions**: Code improvements, efficiency enhancements, learning recommendations
**Pattern Recognition Engine:**
- **Temporal Pattern Detection**: Identifies error trends over time and contexts
- **Contextual Pattern Analysis**: Recognizes error patterns specific to operation types
- **Cross-Operation Correlation**: Detects error patterns spanning multiple tool executions
- **User-Specific Pattern Learning**: Identifies individual user error tendencies
### Error Prevention Strategies
**Proactive Prevention:**
- **Pre-Validation Recommendations**: Suggests validation for similar high-risk operations
- **Security Check Integration**: Implements automated security validation checks
- **Performance Optimization**: Recommends parallel execution for large operations
- **Pattern-Based Warnings**: Provides early warnings for known problematic patterns
**Reactive Learning:**
- **Error Recovery Documentation**: Records successful recovery strategies
- **Pattern Knowledge Base**: Builds comprehensive error pattern database
- **Adaptation Recommendations**: Generates specific guidance for error prevention
- **User Education**: Provides learning opportunities from error analysis
## Configuration
The hook's behavior is controlled through multiple configuration layers:
### Primary Configuration Source
**superclaude-config.json - post_tool_use section:**
```json
{
"post_tool_use": {
"enabled": true,
"performance_target_ms": 100,
"features": [
"quality_validation",
"rules_compliance_checking",
"principles_alignment_verification",
"effectiveness_measurement",
"error_pattern_detection",
"learning_opportunity_identification"
],
"configuration": {
"rules_validation": true,
"principles_validation": true,
"quality_standards_enforcement": true,
"effectiveness_tracking": true,
"learning_integration": true
},
"validation_levels": {
"basic": ["syntax_validation"],
"standard": ["syntax_validation", "type_analysis", "code_quality"],
"comprehensive": ["syntax_validation", "type_analysis", "code_quality", "security_assessment", "performance_analysis"],
"production": ["syntax_validation", "type_analysis", "code_quality", "security_assessment", "performance_analysis", "integration_testing", "deployment_validation"]
}
}
}
```
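Resolving the check list for a given level from this section is a small JSON lookup with a fallback to the `standard` default. A sketch, assuming the structure shown above:

```python
import json

# Abbreviated copy of the post_tool_use section for demonstration.
CONFIG_JSON = """
{"post_tool_use": {"enabled": true, "performance_target_ms": 100,
 "validation_levels": {"basic": ["syntax_validation"],
                       "standard": ["syntax_validation", "type_analysis", "code_quality"]}}}
"""

def checks_for_level(config_text: str, level: str) -> list:
    """Resolve the check list for a validation level, defaulting to 'standard'."""
    section = json.loads(config_text).get("post_tool_use", {})
    levels = section.get("validation_levels", {})
    return levels.get(level, levels.get("standard", []))
```

An unknown or unconfigured level degrades to the `standard` list rather than to no validation at all.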
### Detailed Validation Configuration
**config/validation.yaml** provides comprehensive validation rule definitions:
**Rules Validation Configuration:**
- File operation rules (read_before_write, absolute_paths_only, validate_before_execution)
- Security requirements (input_validation, no_hardcoded_secrets, production_safety)
- Error severity levels and blocking behavior
- Context-aware validation adjustments
**Principles Validation Configuration:**
- Evidence-based decision making requirements
- Code-over-documentation enforcement
- Efficiency-over-verbosity thresholds
- Test-driven development validation
**Quality Standards:**
- Minimum quality scores for different assessment areas
- Performance thresholds and optimization indicators
- Security compliance requirements and checks
- Maintainability factors and measurement criteria
### Performance and Resource Configuration
**Performance Targets:**
```yaml
performance_configuration:
validation_targets:
processing_time_ms: 100 # Primary performance target
memory_usage_mb: 50 # Memory utilization limit
cpu_utilization_percent: 30 # CPU usage threshold
optimization_strategies:
parallel_validation: true # Enable parallel processing
cached_results: true # Cache validation results
incremental_validation: true # Optimize for repeated operations
smart_rule_selection: true # Context-aware rule application
```
**Resource Management:**
- Maximum validation time limits with fallback mechanisms
- Memory and CPU usage constraints with monitoring
- Automatic resource optimization and constraint handling
- Performance degradation detection and response
## Quality Gates Integration
The post_tool_use hook is integral to SuperClaude's 8-step validation cycle, contributing to multiple quality gates:
### Step 3: Code Quality Assessment
**Comprehensive Quality Analysis:**
- **Code Structure**: Evaluates organization, modularity, and architectural patterns
- **Maintainability**: Assesses readability, documentation, and modification ease
- **Best Practices**: Validates adherence to language and framework conventions
- **Technical Debt**: Identifies accumulation and provides reduction recommendations
**Quality Metrics:**
- Code quality score calculation (target: >0.7)
- Maintainability index with trend analysis
- Technical debt assessment and prioritization
- Best practice compliance percentage
### Step 4: Security Assessment
**Multi-Layer Security Validation:**
- **Vulnerability Analysis**: Scans for common security vulnerabilities (OWASP Top 10)
- **Input Validation**: Ensures proper sanitization and validation of external inputs
- **Authentication/Authorization**: Validates proper access control implementation
- **Data Protection**: Verifies secure data handling and storage practices
**Security Compliance:**
- Security score calculation (target: >0.8)
- Vulnerability severity assessment and prioritization
- Compliance reporting for security standards
- Threat modeling and risk assessment integration
### Step 5: Testing Validation
**Test Coverage and Quality:**
- **Test Presence**: Validates existence of appropriate tests for code changes
- **Coverage Analysis**: Measures test coverage depth and breadth
- **Test Quality**: Assesses test effectiveness and maintainability
- **Integration Testing**: Validates cross-component test coverage
**Testing Metrics:**
- Unit test coverage percentage (target: ≥80%)
- Integration test coverage (target: ≥70%)
- Test quality score and effectiveness measurement
- Testing best practice compliance validation
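The two coverage targets can be enforced with a simple gate that compares measured ratios against the thresholds. An illustrative sketch, not the hook's actual code:

```python
# Step-5 targets: >=80% unit coverage, >=70% integration coverage.
COVERAGE_TARGETS = {"unit": 0.80, "integration": 0.70}

def coverage_gate(measured: dict) -> dict:
    """Return pass/fail per coverage kind; unmeasured kinds fail closed."""
    return {
        kind: measured.get(kind, 0.0) >= target
        for kind, target in COVERAGE_TARGETS.items()
    }
```

Failing closed on missing measurements means a change without any reported coverage is flagged rather than silently passed.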
### Integration with Other Quality Gates
**Coordination with Pre-Tool-Use (Steps 1-2):**
- Receives syntax and type validation results for enhanced analysis
- Builds upon initial validation with deeper quality assessment
- Provides feedback for future pre-validation optimization
**Coordination with Session End (Steps 6-8):**
- Contributes validation results to performance analysis
- Provides quality metrics for documentation verification
- Supports integration testing with operation effectiveness data
### Quality Gate Reporting
**Comprehensive Quality Reports:**
- Step-by-step validation results with detailed findings
- Quality score breakdowns by category and importance
- Trend analysis and improvement recommendations
- Compliance status with actionable remediation steps
**Integration Metrics:**
- Overall quality gate passage rate
- Step-specific success rates and failure analysis
- Quality improvement trends over time
- Framework effectiveness measurement and optimization
## Advanced Features
### Context-Aware Validation
**Project Type Adaptations:**
- **Frontend Projects**: Additional accessibility, responsive design, and browser compatibility checks
- **Backend Projects**: Enhanced API security, data validation, and performance optimization focus
- **Full-Stack Projects**: Integration testing, end-to-end validation, and deployment safety verification
**User Expertise Adjustments:**
- **Beginner Users**: High validation verbosity, educational suggestions, step-by-step guidance
- **Intermediate Users**: Medium verbosity, best practice suggestions, optimization recommendations
- **Expert Users**: Low verbosity, advanced optimization suggestions, architectural guidance
### Learning System Integration
**Cross-Hook Learning:**
- Shares effectiveness data with pre_tool_use hook for optimization
- Coordinates with session_start hook for user preference learning
- Integrates with stop hook for comprehensive session analysis
**Adaptive Behavior:**
- Adjusts validation thresholds based on user expertise and project context
- Learns from validation effectiveness and user feedback
- Optimizes rule selection and severity based on operation patterns
### Error Recovery and Resilience
**Graceful Degradation:**
- Maintains essential validation even during system constraints
- Provides fallback validation reports on processing errors
- Preserves user context and operation continuity during failures
**Learning from Failures:**
- Records validation hook errors for system improvement
- Analyzes failure patterns to prevent future issues
- Generates insights from error recovery experiences
## Integration Examples
### MCP Server Coordination
**Serena Integration:**
- Receives semantic validation support for code structure analysis
- Coordinates edit validation for complex refactoring operations
- Leverages project context for enhanced validation accuracy
**Morphllm Integration:**
- Validates intelligent editing operations and pattern applications
- Coordinates edit effectiveness measurement and optimization
- Provides feedback for fast-apply optimization
**Sequential Integration:**
- Leverages complex validation analysis for multi-step operations
- Coordinates systematic validation for architectural changes
- Integrates reasoning validation with decision documentation
### Hook Ecosystem Integration
**Pre-Tool-Use Coordination:**
- Receives validation preparation data for enhanced analysis
- Provides effectiveness feedback for future operation optimization
- Coordinates rule enforcement across the complete execution cycle
**Session Management Integration:**
- Contributes validation metrics to session analytics
- Provides quality insights for session summary generation
- Supports cross-session learning and pattern recognition
## Conclusion
The post_tool_use hook anchors SuperClaude's quality assurance and continuous improvement system. It validates every tool execution against RULES.md and PRINCIPLES.md, measures effectiveness, and records learning events, so each operation feeds back into the framework while staying within the <100ms performance target.
Its graduated validation levels, error pattern detection, and learning mechanisms let the framework adapt over time, improving the reliability of its development assistance without relaxing the rules and principles it enforces.


@@ -1,658 +0,0 @@
# pre_compact Hook Technical Documentation
## Purpose
The `pre_compact` hook implements token optimization before context compaction in Claude Code. It analyzes content for compression opportunities, applies selective compression strategies, and maintains quality preservation targets while reducing token usage.
**Core Implementation**: A Python implementation of the MODE_Token_Efficiency.md compression algorithms, using selective content classification, symbol systems, and quality-gated compression, with a target execution time of <150ms.
## Execution Context
The pre_compact hook runs before context compaction in Claude Code. According to `settings.json`, it has a 15-second timeout and executes via: `python3 ~/.claude/hooks/pre_compact.py`
**Actual Execution Flow:**
1. Receives compaction request from Claude Code via stdin (JSON)
2. Initializes PreCompactHook class with compression engine and shared modules
3. Processes request through `process_pre_compact()` method
4. Analyzes content characteristics and determines compression strategy
5. Outputs compression configuration via stdout (JSON)
6. Falls back gracefully on errors with no compression applied
## Performance Target
**Performance Target: <150ms execution time**
The hook operates within strict performance constraints to ensure real-time compression decisions:
### Performance Benchmarks
- **Target Execution Time**: 150ms maximum
- **Typical Performance**: 50-100ms for standard content
- **Efficiency Metric**: 100 characters per millisecond processing rate
- **Resource Overhead**: <5% additional memory usage during compression
### Performance Monitoring
```python
# Computed per run from the measured execution_time (ms), the input
# content_length (chars), and the derived chars_per_ms rate.
performance_metrics = {
    'compression_time_ms': execution_time,
    'target_met': execution_time < 150,          # <150ms target
    'efficiency_score': chars_per_ms / 100,      # 1.0 = 100 chars/ms baseline
    'processing_rate': content_length / execution_time
}
```
### Optimization Strategies
- **Parallel Content Analysis**: Concurrent processing of content sections
- **Intelligent Caching**: Reuse compression results for similar content patterns
- **Early Exit Strategies**: Skip compression for framework content immediately
- **Selective Processing**: Apply compression only where beneficial
## Compression Levels
**5-Level Compression Strategy** providing adaptive optimization based on resource constraints and content characteristics:
### Level 1: Minimal (0-40% compression)
```yaml
compression_level: minimal
symbol_systems: false
abbreviation_systems: false
structural_optimization: false
quality_threshold: 0.98
use_cases:
- user_content
- low_resource_usage
- high_quality_required
```
**Application**: User project files, documentation, source code requiring high fidelity preservation.
### Level 2: Efficient (40-70% compression)
```yaml
compression_level: efficient
symbol_systems: true
abbreviation_systems: false
structural_optimization: true
quality_threshold: 0.95
use_cases:
- moderate_resource_usage
- balanced_efficiency
```
**Application**: Session metadata, checkpoint data, working artifacts with acceptable optimization trade-offs.
### Level 3: Compressed (70-85% compression)
```yaml
compression_level: compressed
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
quality_threshold: 0.90
use_cases:
- high_resource_usage
- user_requests_brevity
```
**Application**: Analysis results, cached data, temporary working content with aggressive optimization.
### Level 4: Critical (85-95% compression)
```yaml
compression_level: critical
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
advanced_techniques: true
quality_threshold: 0.85
use_cases:
- resource_constraints
- emergency_compression
```
**Application**: Emergency resource situations, historical session data, highly repetitive content.
### Level 5: Emergency (95%+ compression)
```yaml
compression_level: emergency
symbol_systems: true
abbreviation_systems: true
structural_optimization: true
advanced_techniques: true
aggressive_optimization: true
quality_threshold: 0.80
use_cases:
- critical_resource_constraints
- emergency_situations
```
**Application**: Critical resource exhaustion scenarios with maximum token conservation priority.
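The five levels form a ladder keyed to resource pressure, with each level's quality threshold acting as a back-off limit. A hypothetical selector (the usage ceilings map the compression ranges above onto a 0.0-1.0 resource-usage scale; the function is illustrative, not the hook's code):

```python
def select_compression_level(resource_usage: float, quality_floor: float = 0.80) -> str:
    """Pick the compression level implied by resource pressure, backing off
    to a lighter level if a level's quality target would fall below
    quality_floor."""
    # (level, usage ceiling, quality preserved at that level)
    ladder = [
        ("minimal", 0.40, 0.98),
        ("efficient", 0.70, 0.95),
        ("compressed", 0.85, 0.90),
        ("critical", 0.95, 0.85),
        ("emergency", float("inf"), 0.80),
    ]
    chosen = "minimal"
    for level, usage_ceiling, quality in ladder:
        if quality < quality_floor:
            break  # more aggressive levels would degrade quality too far
        chosen = level
        if resource_usage <= usage_ceiling:
            break
    return chosen
```

For example, heavy resource pressure combined with a strict `quality_floor=0.95` stops escalation at `efficient`, even though pressure alone would call for `critical`.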
## Selective Compression
**Framework exclusion and content classification** ensuring optimal compression strategies based on content type and preservation requirements:
### Content Classification System
#### Framework Content (0% compression)
```yaml
framework_exclusions:
patterns:
- "~/.claude/"
- ".claude/"
- "SuperClaude/*"
- "CLAUDE.md"
- "FLAGS.md"
- "PRINCIPLES.md"
- "ORCHESTRATOR.md"
- "MCP_*.md"
- "MODE_*.md"
- "SESSION_LIFECYCLE.md"
compression_level: "preserve"
reasoning: "Framework content must be preserved for proper operation"
```
**Protection Strategy**: Complete exclusion from all compression algorithms with immediate early exit upon framework content detection.
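The early-exit check described above can be sketched as a simple pattern match. This is a hypothetical illustration (function names and the `engine.apply` interface are assumptions, not the hook's actual code); the patterns mirror the `framework_exclusions` list:

```python
import re

# Patterns from the framework_exclusions configuration above.
FRAMEWORK_PATTERNS = [
    r"~/\.claude/", r"\.claude/", r"SuperClaude/", r"CLAUDE\.md",
    r"FLAGS\.md", r"PRINCIPLES\.md", r"ORCHESTRATOR\.md",
    r"MCP_.*\.md", r"MODE_.*\.md", r"SESSION_LIFECYCLE\.md",
]

def is_framework_content(path: str) -> bool:
    """Return True when a path matches any framework exclusion pattern."""
    return any(re.search(pattern, path) for pattern in FRAMEWORK_PATTERNS)

def compress(path: str, content: str, engine) -> str:
    # Early exit: framework content bypasses every compression algorithm.
    # `engine` is a hypothetical compression backend with an apply() method.
    if is_framework_content(path):
        return content
    return engine.apply(content)
```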
#### User Content Preservation (Minimal compression)
```yaml
user_content_preservation:
patterns:
- "project_files"
- "user_documentation"
- "source_code"
- "configuration_files"
- "custom_content"
compression_level: "minimal"
reasoning: "User content requires high fidelity preservation"
```
**Protection Strategy**: Light compression with whitespace optimization only, preserving semantic accuracy and technical correctness.
#### Session Data Optimization (Efficient compression)
```yaml
session_data_optimization:
patterns:
- "session_metadata"
- "checkpoint_data"
- "cache_content"
- "working_artifacts"
- "analysis_results"
compression_level: "efficient"
reasoning: "Session data can be compressed while maintaining utility"
```
**Optimization Strategy**: Symbol systems and structural optimization applied with 95% quality preservation target.
### Content Detection Algorithm
```python
def _analyze_content_sources(self, content: str, metadata: dict) -> Tuple[float, float]:
"""Analyze ratio of framework vs user content."""
framework_indicators = [
'SuperClaude', 'CLAUDE.md', 'FLAGS.md', 'PRINCIPLES.md',
'ORCHESTRATOR.md', 'MCP_', 'MODE_', 'SESSION_LIFECYCLE'
]
user_indicators = [
'project_files', 'user_documentation', 'source_code',
'configuration_files', 'custom_content'
]
```
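The excerpt above omits the ratio computation. A minimal completion might count indicator occurrences and normalize; this is a sketch under that assumption, not the hook's actual logic:

```python
from typing import Tuple

FRAMEWORK_INDICATORS = ['SuperClaude', 'CLAUDE.md', 'FLAGS.md', 'PRINCIPLES.md',
                        'ORCHESTRATOR.md', 'MCP_', 'MODE_', 'SESSION_LIFECYCLE']
USER_INDICATORS = ['project_files', 'user_documentation', 'source_code',
                   'configuration_files', 'custom_content']

def analyze_content_sources(content: str) -> Tuple[float, float]:
    """Return (framework_ratio, user_ratio) based on indicator counts."""
    framework_hits = sum(content.count(ind) for ind in FRAMEWORK_INDICATORS)
    user_hits = sum(content.count(ind) for ind in USER_INDICATORS)
    total = framework_hits + user_hits
    if total == 0:
        return 0.0, 0.0  # no indicators found: treat content as neutral
    return framework_hits / total, user_hits / total
```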
## Symbol Systems
**Symbol systems replace verbose text** with standardized symbols for efficient communication while preserving semantic meaning:
### Core Logic & Flow Symbols
| Symbol | Meaning | Example Usage |
|--------|---------|---------------|
| → | leads to, implies | `auth.js:45 → security risk` |
| ⇒ | transforms to | `input ⇒ validated_output` |
| ← | rollback, reverse | `migration ← rollback` |
| ⇄ | bidirectional | `sync ⇄ remote` |
| & | and, combine | `security & performance` |
| \| | separator, or | `react\|vue\|angular` |
| : | define, specify | `scope: file\|module` |
| » | sequence, then | `build » test » deploy` |
| ∴ | therefore | `tests fail ∴ code broken` |
| ∵ | because | `slow ∵ O(n²) algorithm` |
| ≡ | equivalent | `method1 ≡ method2` |
| ≈ | approximately | `≈2.5K tokens` |
| ≠ | not equal | `actual ≠ expected` |
### Status & Progress Symbols
| Symbol | Meaning | Context |
|--------|---------|---------|
| ✅ | completed, passed | Task completion, validation success |
| ❌ | failed, error | Operation failure, validation error |
| ⚠️ | warning | Non-critical issues, attention required |
| ℹ️ | information | Informational messages, context |
| 🔄 | in progress | Active operations, processing |
| ⏳ | waiting, pending | Queued operations, dependencies |
| 🚨 | critical, urgent | High-priority issues, immediate action |
| 🎯 | target, goal | Objectives, milestones |
| 📊 | metrics, data | Performance data, analytics |
| 💡 | insight, learning | Discoveries, optimizations |
### Technical Domain Symbols
| Symbol | Domain | Usage Context |
|--------|---------|---------------|
| ⚡ | Performance | Speed optimization, efficiency |
| 🔍 | Analysis | Investigation, examination |
| 🔧 | Configuration | Setup, tool configuration |
| 🛡️ | Security | Protection, vulnerability analysis |
| 📦 | Deployment | Packaging, distribution |
| 🎨 | Design | UI/UX, frontend development |
| 🌐 | Network | Web services, connectivity |
| 📱 | Mobile | Responsive design, mobile apps |
| 🏗️ | Architecture | System structure, design patterns |
| 🧩 | Components | Modular design, composability |
### Symbol System Implementation
```python
symbol_systems = {
'core_logic_flow': {
'enabled': True,
'mappings': {
'leads to': '→',
'transforms to': '⇒',
'therefore': '∴',
'because': '∵'
}
},
'status_progress': {
'enabled': True,
'mappings': {
'completed': '✅',
'failed': '❌',
'warning': '⚠️',
'in progress': '🔄'
}
}
}
```
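Applying these mappings amounts to phrase substitution; a minimal sketch (the function name and longest-first ordering are assumptions, not the engine's actual pass):

```python
# Combined mappings from the symbol tables above; longer phrases are
# replaced before shorter ones so multi-word phrases are matched first.
SYMBOL_MAPPINGS = {
    'leads to': '→', 'transforms to': '⇒', 'therefore': '∴', 'because': '∵',
    'completed': '✅', 'failed': '❌', 'warning': '⚠️', 'in progress': '🔄',
}

def apply_symbols(text: str) -> str:
    for phrase in sorted(SYMBOL_MAPPINGS, key=len, reverse=True):
        text = text.replace(phrase, SYMBOL_MAPPINGS[phrase])
    return text
```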
## Abbreviation Systems
**Technical abbreviations for efficiency** providing domain-specific shorthand while maintaining clarity and context:
### System & Architecture Abbreviations
| Full Term | Abbreviation | Context |
|-----------|--------------|---------|
| configuration | cfg | System settings, setup files |
| settings | cfg | Configuration parameters |
| implementation | impl | Code structure, algorithms |
| code structure | impl | Software architecture |
| architecture | arch | System design, patterns |
| system design | arch | Architectural decisions |
| performance | perf | Optimization, benchmarks |
| optimization | perf | Efficiency improvements |
| operations | ops | Deployment, DevOps |
| deployment | ops | Release processes |
| environment | env | Runtime context, settings |
| runtime context | env | Execution environment |
### Development Process Abbreviations
| Full Term | Abbreviation | Context |
|-----------|--------------|---------|
| requirements | req | Project specifications |
| dependencies | deps | Package management |
| packages | deps | Library dependencies |
| validation | val | Testing, verification |
| verification | val | Quality assurance |
| testing | test | Quality validation |
| quality assurance | test | Testing processes |
| documentation | docs | Technical writing |
| guides | docs | User documentation |
| standards | std | Coding conventions |
| conventions | std | Style guidelines |
### Quality & Analysis Abbreviations
| Full Term | Abbreviation | Context |
|-----------|--------------|---------|
| quality | qual | Code quality, maintainability |
| maintainability | qual | Long-term code health |
| security | sec | Safety measures, vulnerabilities |
| safety measures | sec | Security protocols |
| error | err | Exception handling |
| exception handling | err | Error management |
| recovery | rec | Resilience, fault tolerance |
| resilience | rec | System robustness |
| severity | sev | Priority levels, criticality |
| priority level | sev | Issue classification |
| optimization | opt | Performance improvements |
| improvement | opt | Enhancement strategies |
### Abbreviation System Implementation
```python
abbreviation_systems = {
'system_architecture': {
'enabled': True,
'mappings': {
'configuration': 'cfg',
'implementation': 'impl',
'architecture': 'arch',
'performance': 'perf'
}
},
'development_process': {
'enabled': True,
'mappings': {
'requirements': 'req',
'dependencies': 'deps',
'validation': 'val',
'testing': 'test'
}
}
}
```
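Unlike symbol substitution, abbreviation replacement needs word boundaries so a term is not shortened inside a longer identifier. A hedged sketch of that pass (names are illustrative):

```python
import re

# Subset of the abbreviation tables above; \b keeps 'configuration' from
# matching inside longer tokens such as 'reconfiguration'.
ABBREVIATIONS = {
    'configuration': 'cfg', 'implementation': 'impl',
    'architecture': 'arch', 'performance': 'perf',
    'requirements': 'req', 'dependencies': 'deps', 'validation': 'val',
}

def abbreviate(text: str) -> str:
    for term, abbr in ABBREVIATIONS.items():
        text = re.sub(rf"\b{term}\b", abbr, text)
    return text
```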
## Quality Preservation
**95% information retention target** through comprehensive quality validation and evidence-based compression effectiveness monitoring:
### Quality Preservation Standards
```yaml
quality_preservation:
minimum_thresholds:
information_preservation: 0.95
semantic_accuracy: 0.95
technical_correctness: 0.98
user_content_fidelity: 0.99
validation_criteria:
key_concept_retention: true
technical_term_preservation: true
code_example_accuracy: true
reference_link_preservation: true
```
### Quality Validation Framework
```python
def _validate_compression_quality(self, compression_results, strategy) -> dict:
"""Validate compression quality against standards."""
validation = {
'overall_quality_met': True,
'preservation_score': 0.0,
'compression_efficiency': 0.0,
'quality_issues': [],
'quality_warnings': []
}
# Calculate preservation score
total_preservation = sum(result.preservation_score for result in compression_results.values())
validation['preservation_score'] = total_preservation / len(compression_results)
# Quality threshold validation
if validation['preservation_score'] < strategy.quality_threshold:
validation['overall_quality_met'] = False
validation['quality_issues'].append(
f"Preservation score {validation['preservation_score']:.2f} below threshold {strategy.quality_threshold}"
)
```
### Quality Monitoring Metrics
- **Information Preservation**: Semantic content retention measurement
- **Technical Correctness**: Code accuracy and reference preservation
- **Compression Efficiency**: Token reduction vs. quality trade-off analysis
- **User Content Fidelity**: Project-specific content preservation verification
### Quality Gate Integration
```python
quality_validation = self._validate_compression_quality(
compression_results, compression_strategy
)
if not quality_validation['overall_quality_met']:
log_decision(
"pre_compact",
"quality_validation",
"failed",
f"Preservation score: {quality_validation['preservation_score']:.2f}"
)
```
## Configuration
**Settings from compression.yaml** providing comprehensive configuration management for adaptive compression strategies:
### Core Configuration Structure
```yaml
# Performance Targets
performance_targets:
processing_time_ms: 150
compression_ratio_target: 0.50
quality_preservation_target: 0.95
token_efficiency_gain: 0.40
# Adaptive Compression Strategy
adaptive_compression:
context_awareness:
user_expertise_factor: true
project_complexity_factor: true
domain_specific_optimization: true
learning_integration:
effectiveness_feedback: true
user_preference_learning: true
pattern_optimization: true
```
### Compression Level Configuration
```python
def __init__(self):
# Load compression configuration
try:
self.compression_config = config_loader.load_config('compression')
except FileNotFoundError:
self.compression_config = self.hook_config.get('configuration', {})
# Performance tracking
self.performance_target_ms = config_loader.get_hook_config(
'pre_compact', 'performance_target_ms', 150
)
```
### Dynamic Configuration Management
- **Context-Aware Settings**: Automatic adjustment based on content type and resource state
- **Learning Integration**: User preference adaptation and pattern optimization
- **Performance Monitoring**: Real-time configuration tuning based on effectiveness metrics
- **Fallback Strategies**: Graceful degradation when configuration loading fails
### Integration with SuperClaude Framework
```yaml
integration:
mcp_servers:
morphllm: "coordinate_compression_with_editing"
serena: "memory_compression_strategies"
modes:
token_efficiency: "primary_compression_mode"
task_management: "session_data_compression"
learning_engine:
effectiveness_tracking: true
pattern_learning: true
adaptation_feedback: true
```
## MODE_Token_Efficiency Integration
**Implementation of MODE_Token_Efficiency compression algorithms** providing seamless integration with SuperClaude's token optimization behavioral mode:
### Mode Integration Architecture
```python
# MODE_Token_Efficiency.md → pre_compact.py implementation
class PreCompactHook:
"""
Pre-compact hook implementing SuperClaude token efficiency intelligence.
Implements MODE_Token_Efficiency.md algorithms:
- 5-level compression strategy
- Selective content classification
- Symbol systems optimization
- Quality preservation validation
"""
```
### Behavioral Mode Coordination
- **Auto-Activation**: Resource usage >75%, large-scale operations, user brevity requests
- **Compression Strategy Selection**: Adaptive algorithm based on MODE configuration
- **Quality Gate Integration**: Validation against MODE preservation targets
- **Performance Compliance**: Sub-150ms execution aligned with MODE efficiency requirements
### MODE Configuration Inheritance
```yaml
# MODE_Token_Efficiency.md settings → compression.yaml
compression_levels:
minimal: # MODE: 0-40% compression
quality_threshold: 0.98
symbol_systems: false
efficient: # MODE: 40-70% compression
quality_threshold: 0.95
symbol_systems: true
compressed: # MODE: 70-85% compression
quality_threshold: 0.90
abbreviation_systems: true
```
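The level bands defined earlier (minimal 0-40%, efficient 40-70%, compressed 70-85%, critical 85-95%, emergency 95%+) suggest a simple threshold selection. This sketch keys the choice to resource usage alone for illustration; the actual engine weighs several signals (conversation length, brevity requests, complexity):

```python
def determine_compression_level(resource_usage_percent: float) -> str:
    """Illustrative threshold mapping; thresholds mirror the level bands above."""
    if resource_usage_percent >= 95:
        return 'emergency'
    if resource_usage_percent >= 85:
        return 'critical'
    if resource_usage_percent >= 70:
        return 'compressed'
    if resource_usage_percent >= 40:
        return 'efficient'
    return 'minimal'
```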
### Real-Time Mode Synchronization
```python
def _determine_compression_strategy(self, context: dict, content_analysis: dict) -> CompressionStrategy:
"""Determine optimal compression strategy aligned with MODE_Token_Efficiency."""
# MODE-compliant compression level determination
compression_level = self.compression_engine.determine_compression_level({
'resource_usage_percent': context.get('token_usage_percent', 0),
'conversation_length': context.get('conversation_length', 0),
'user_requests_brevity': context.get('user_requests_compression', False),
'complexity_score': context.get('content_complexity', 0.0)
})
```
### Learning Integration with MODE
```python
def _record_compression_learning(self, context, compression_results, quality_validation):
"""Record compression learning aligned with MODE adaptation."""
self.learning_engine.record_learning_event(
LearningType.PERFORMANCE_OPTIMIZATION,
AdaptationScope.USER,
context,
{
'compression_level': compression_level.value,
'preservation_score': quality_validation['preservation_score'],
'compression_efficiency': quality_validation['compression_efficiency']
},
overall_effectiveness,
0.9 # High confidence in MODE-aligned compression metrics
)
```
### Framework Compliance Validation
- **Symbol Systems**: Direct implementation of MODE symbol mappings
- **Abbreviation Systems**: MODE-compliant technical abbreviation patterns
- **Quality Preservation**: MODE 95% information retention standards
- **Selective Compression**: MODE content classification and protection strategies
## Key Features
### Intelligent Compression Strategy Selection
```python
def _determine_compression_strategy(self, context: dict, content_analysis: dict) -> CompressionStrategy:
"""
Adaptive compression strategy based on:
- Resource constraints and token usage
- Content type classification
- User preferences and expertise level
- Quality preservation requirements
"""
```
### Selective Content Preservation
- **Framework Exclusion**: Zero compression for SuperClaude components
- **User Content Protection**: High-fidelity preservation for project files
- **Session Data Optimization**: Efficient compression for operational data
- **Quality-Gated Processing**: Real-time validation against preservation targets
### Symbol Systems Optimization
- **Logic Flow Enhancement**: Mathematical and directional symbols
- **Status Communication**: Visual progress and state indicators
- **Domain-Specific Symbols**: Technical context-aware representations
- **Persona-Aware Selection**: Symbol choice based on active domain expertise
### Abbreviation Systems
- **Technical Efficiency**: Domain-specific shorthand for common terms
- **Context-Sensitive Application**: Intelligent abbreviation based on user familiarity
- **Quality Preservation**: Abbreviations that maintain semantic clarity
- **Learning Integration**: Pattern optimization based on effectiveness feedback
### Quality-Gated Compression
- **Real-Time Validation**: Continuous quality monitoring during compression
- **Preservation Score Tracking**: Quantitative information retention measurement
- **Adaptive Threshold Management**: Dynamic quality targets based on content type
- **Fallback Strategies**: Graceful degradation when quality targets are not met
## Implementation Details
### Compression Engine Architecture
```python
from compression_engine import (
CompressionEngine, CompressionLevel, ContentType,
CompressionResult, CompressionStrategy
)
class PreCompactHook:
def __init__(self):
self.compression_engine = CompressionEngine()
self.performance_target_ms = 150
```
### Content Analysis Pipeline
1. **Content Characteristics Analysis**: Complexity, repetition, technical density
2. **Source Classification**: Framework vs. user vs. session content identification
3. **Compressibility Assessment**: Potential optimization opportunity evaluation
4. **Strategy Selection**: Optimal compression level and technique determination
5. **Quality Validation**: Real-time preservation score monitoring
### Performance Optimization Techniques
- **Early Exit Strategy**: Framework content bypass for immediate exclusion
- **Parallel Processing**: Concurrent analysis of content sections
- **Intelligent Caching**: Compression result reuse for similar patterns
- **Selective Application**: Compression only where beneficial and safe
### Error Handling and Fallback
```python
def _create_fallback_compression_config(self, compact_request: dict, error: str) -> dict:
"""Create fallback compression configuration on error."""
return {
'compression_enabled': False,
'fallback_mode': True,
'error': error,
'quality': {
'preservation_score': 1.0, # No compression = perfect preservation
'quality_met': False, # But failed to optimize
'issues': [f"Compression hook error: {error}"]
}
}
```
## Results and Benefits
### Typical Performance Metrics
- **Token Reduction**: 30-50% typical savings with quality preservation
- **Processing Speed**: 50-100ms typical execution time (well under 150ms target)
- **Quality Preservation**: ≥95% information retention consistently achieved
- **Framework Protection**: 100% exclusion success rate for SuperClaude components
### Integration Benefits
- **Seamless MODE Integration**: Direct implementation of MODE_Token_Efficiency algorithms
- **Real-Time Optimization**: Sub-150ms compression decisions during active sessions
- **Quality-First Approach**: Preservation targets never compromised for efficiency gains
- **Adaptive Intelligence**: Learning-based optimization for improved effectiveness over time
### User Experience Improvements
- **Transparent Operation**: Compression applied without user intervention or awareness
- **Quality Assurance**: Technical correctness and semantic accuracy maintained
- **Performance Enhancement**: Faster response times through optimized token usage
- **Contextual Adaptation**: Compression strategies tailored to specific use cases and domains
---
*This hook serves as the core implementation of SuperClaude's intelligent token optimization system, providing evidence-based compression with adaptive strategies and quality-first preservation standards.*

# Pre-Tool-Use Hook Technical Documentation
## Purpose
The `pre_tool_use` hook analyzes tool requests and provides intelligent routing decisions before tool execution in Claude Code. It determines optimal MCP server coordination, performance optimizations, and execution strategies based on tool characteristics and context.
**Core Implementation**: A 648-line Python implementation that processes tool requests, analyzes operation characteristics, routes to appropriate MCP servers, and provides enhanced tool configurations with a target execution time of <200ms.
---
## Execution Context
The pre_tool_use hook runs before every tool execution in Claude Code. According to `settings.json`, it has a 15-second timeout and executes via: `python3 ~/.claude/hooks/pre_tool_use.py`
**Actual Execution Flow:**
1. Receives tool request from Claude Code via stdin (JSON)
2. Initializes PreToolUseHook class with shared module components
3. Processes tool request through `process_tool_use()` method
4. Analyzes operation characteristics and routing patterns
5. Outputs enhanced tool configuration via stdout (JSON)
6. Falls back gracefully on errors with basic tool configuration
**Input Processing:**
- Extracts tool context including tool name, parameters, user intent
- Analyzes operation characteristics (file count, complexity, parallelizability)
- Identifies tool chain patterns (read-edit, multi-file, analysis chains)
**Output Configuration:**
- Enhanced mode flag and MCP server coordination
- Performance optimization settings (parallel execution, caching)
- Quality enhancement settings (validation, error recovery)
- Execution metadata (estimated time, complexity score, intelligence level)
---
## Performance Target
### Primary Target: <200ms Execution Time
- **Requirement**: Complete routing analysis and configuration within 200ms
- **Measurement**: End-to-end hook execution time from input to enhanced configuration
- **Validation**: Real-time performance tracking with target compliance reporting
- **Optimization**: Cached pattern recognition, pre-computed routing tables, intelligent fallbacks
### Performance Architecture
```yaml
Performance Zones:
green_zone: 0-150ms # Optimal performance with full intelligence
yellow_zone: 150-200ms # Target compliance with efficiency mode
red_zone: 200ms+ # Performance fallback with reduced intelligence
```
### Efficiency Calculation
```python
efficiency_score = (
time_efficiency * 0.4 + # Execution speed relative to target
complexity_efficiency * 0.3 + # Handling complexity appropriately
resource_efficiency * 0.3 # Resource utilization optimization
)
```
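The weighted formula above, packaged as a runnable function; the three component scores are assumed to be normalized to the 0.0-1.0 range:

```python
def efficiency_score(time_eff: float, complexity_eff: float, resource_eff: float) -> float:
    """Weighted efficiency: speed 40%, complexity handling 30%, resources 30%."""
    return time_eff * 0.4 + complexity_eff * 0.3 + resource_eff * 0.3
```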
---
## Core Features
### 1. Intelligent Tool Routing
**Pattern-Based Tool Analysis**:
- Analyzes tool name, parameters, and context to determine optimal execution strategy
- Detects operation complexity (0.0-1.0 scale) based on file count, operation type, and requirements
- Identifies parallelization opportunities for multi-file operations
- Determines intelligence requirements for analysis and generation tasks
**Operation Categorization**:
```python
Operation Types:
- READ: File reading, search, navigation
- WRITE: File creation, editing, updates
- BUILD: Implementation, generation, creation
- TEST: Validation, testing, verification
- ANALYZE: Analysis, debugging, investigation
```
**Complexity Scoring Algorithm**:
```python
base_complexity = {
'READ': 0.0,
'WRITE': 0.2,
'BUILD': 0.4,
'TEST': 0.1,
'ANALYZE': 0.3
}
file_multiplier = (file_count - 1) * 0.1
directory_multiplier = (directory_count - 1) * 0.05
intelligence_bonus = 0.2 if requires_intelligence else 0.0
complexity_score = base_complexity + file_multiplier + directory_multiplier + intelligence_bonus
```
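The scoring rules above, restated as a function. Clamping to 1.0 is an assumption here, since the documented scale is 0.0-1.0:

```python
BASE_COMPLEXITY = {'READ': 0.0, 'WRITE': 0.2, 'BUILD': 0.4, 'TEST': 0.1, 'ANALYZE': 0.3}

def complexity_score(op_type: str, file_count: int = 1, directory_count: int = 1,
                     requires_intelligence: bool = False) -> float:
    """Base complexity plus file/directory multipliers and intelligence bonus."""
    score = (BASE_COMPLEXITY.get(op_type, 0.0)
             + (file_count - 1) * 0.1
             + (directory_count - 1) * 0.05
             + (0.2 if requires_intelligence else 0.0))
    return min(score, 1.0)  # assumed clamp to the documented 0.0-1.0 scale
```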
### 2. Context-Aware Configuration
**Session Context Integration**:
- Tracks tool usage patterns across session for optimization opportunities
- Analyzes tool chain patterns (Read→Edit, Multi-file operations, Analysis chains)
- Applies session-specific optimizations based on detected patterns
- Maintains resource state awareness for performance tuning
**Operation Chain Analysis**:
```python
Pattern Detection:
- read_edit_pattern: Read followed by Edit operations
- multi_file_pattern: Multiple file operations in sequence
- analysis_chain: Sequential analysis operations with caching opportunities
```
### 3. Real-Time Adaptation
**Learning Engine Integration**:
- Records tool usage effectiveness for routing optimization
- Adapts routing decisions based on historical performance
- Applies user-specific and project-specific routing preferences
- Continuous improvement through effectiveness measurement
**Adaptation Scopes**:
- **User Level**: Personal routing preferences and patterns
- **Project Level**: Project-specific tool effectiveness patterns
- **Session Level**: Real-time adaptation within current session
---
## MCP Server Routing Logic
### Server Capability Matching
The hook implements sophisticated capability matching to select optimal MCP servers:
```python
Server Capabilities Map:
context7: [documentation_access, framework_patterns, best_practices]
sequential: [complex_reasoning, systematic_analysis, hypothesis_testing]
magic: [ui_generation, design_systems, component_patterns]
playwright: [browser_automation, testing_frameworks, performance_testing]
morphllm: [pattern_application, fast_apply, intelligent_editing]
serena: [semantic_understanding, project_context, memory_management]
```
### Routing Decision Matrix
#### Single Server Selection
```yaml
Context7 Triggers:
- Library/framework keywords in user intent
- Documentation-related operations
- API reference needs
- Best practices queries
Sequential Triggers:
- Complexity score > 0.6
- Multi-step analysis required
- Debugging complex issues
- System architecture analysis
Magic Triggers:
- UI/component keywords
- Frontend development operations
- Design system integration
- Component generation requests
Playwright Triggers:
- Testing operations
- Browser automation needs
- Performance testing requirements
- E2E validation requests
Morphllm Triggers:
- Pattern-based editing
- Fast apply suitable operations
- Token optimization critical
- Simple to moderate complexity
Serena Triggers:
- File count > 5
- Symbol-level operations
- Project-wide analysis
- Memory operations
```
#### Multi-Server Coordination
```python
Coordination Strategies:
- single_server: One MCP server handles the operation
- collaborative: Multiple servers work together
- sequential_handoff: Primary server → Secondary server
- parallel_coordination: Servers work on different aspects simultaneously
```
### Server Selection Algorithm
```python
def select_mcp_servers(context, requirements):
servers = []
# Primary capability matching
for server, capabilities in server_capabilities.items():
if any(cap in requirements['capabilities_needed'] for cap in capabilities):
servers.append(server)
# Context-specific routing
if context['complexity_score'] > 0.6:
servers.append('sequential')
if context['file_count'] > 5:
servers.append('serena')
# User intent analysis
intent_lower = context.get('user_intent', '').lower()
if any(word in intent_lower for word in ['component', 'ui', 'frontend']):
servers.append('magic')
# Deduplication and prioritization
return list(dict.fromkeys(servers)) # Preserve order, remove duplicates
```
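A self-contained variant of the selection algorithm can be run directly; the capability map is the one documented above, and plain dicts stand in for the hook's context objects:

```python
SERVER_CAPABILITIES = {
    'context7': ['documentation_access', 'framework_patterns', 'best_practices'],
    'sequential': ['complex_reasoning', 'systematic_analysis', 'hypothesis_testing'],
    'magic': ['ui_generation', 'design_systems', 'component_patterns'],
    'serena': ['semantic_understanding', 'project_context', 'memory_management'],
}

def select_mcp_servers(context: dict, requirements: dict) -> list:
    # Primary capability matching against the documented capability map.
    servers = [s for s, caps in SERVER_CAPABILITIES.items()
               if any(c in requirements.get('capabilities_needed', []) for c in caps)]
    # Context-specific routing thresholds from the decision matrix.
    if context.get('complexity_score', 0.0) > 0.6:
        servers.append('sequential')
    if context.get('file_count', 1) > 5:
        servers.append('serena')
    # User intent analysis for UI-related keywords.
    intent = context.get('user_intent', '').lower()
    if any(word in intent for word in ('component', 'ui', 'frontend')):
        servers.append('magic')
    return list(dict.fromkeys(servers))  # preserve order, drop duplicates
```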
---
## Fallback Strategies
### Hierarchy of Fallback Options
#### Level 1: Preferred MCP Server Unavailable
```python
Strategy: Alternative Server Selection
- Sequential unavailable → Use Morphllm for analysis
- Serena unavailable → Use native tools with manual coordination
- Magic unavailable → Generate basic components with Context7 patterns
```
#### Level 2: Multiple MCP Servers Unavailable
```python
Strategy: Capability Degradation
- Disable enhanced intelligence features
- Fall back to native Claude Code tools
- Maintain basic functionality with warnings
- Preserve user context and intent
```
#### Level 3: All MCP Servers Unavailable
```python
Strategy: Native Tool Execution
- Execute original tool request without enhancement
- Log degradation for performance analysis
- Provide clear feedback about reduced capabilities
- Maintain operational continuity
```
### Fallback Configuration Generation
```python
def create_fallback_tool_config(tool_request, error):
return {
'tool_name': tool_request.get('tool_name'),
'enhanced_mode': False,
'fallback_mode': True,
'error': error,
'mcp_integration': {
'enabled': False,
'servers': [],
'coordination_strategy': 'none'
},
'performance_optimization': {
'parallel_execution': False,
'caching_enabled': False,
'optimizations': []
},
'performance_metrics': {
'routing_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
```
### Error Recovery Mechanisms
- **Graceful Degradation**: Reduce capability rather than failing completely
- **Context Preservation**: Maintain user intent and session context during fallback
- **Performance Continuity**: Ensure operations continue with acceptable performance
- **Learning Integration**: Record fallback events for routing improvement
---
## Configuration
### Hook-Specific Configuration (superclaude-config.json)
```json
{
"pre_tool_use": {
"enabled": true,
"description": "ORCHESTRATOR + MCP routing intelligence for optimal tool selection",
"performance_target_ms": 200,
"features": [
"intelligent_tool_routing",
"mcp_server_selection",
"performance_optimization",
"context_aware_configuration",
"fallback_strategy_implementation",
"real_time_adaptation"
],
"configuration": {
"mcp_intelligence": true,
"pattern_detection": true,
"learning_adaptations": true,
"performance_optimization": true,
"fallback_strategies": true
},
"integration": {
"mcp_servers": ["context7", "sequential", "magic", "playwright", "morphllm", "serena"],
"quality_gates": true,
"learning_engine": true
}
}
}
```
### MCP Server Integration Configuration
```json
{
"mcp_server_integration": {
"enabled": true,
"servers": {
"context7": {
"description": "Library documentation and framework patterns",
"capabilities": ["documentation_access", "framework_patterns", "best_practices"],
"performance_profile": "standard"
},
"sequential": {
"description": "Multi-step reasoning and complex analysis",
"capabilities": ["complex_reasoning", "systematic_analysis", "hypothesis_testing"],
"performance_profile": "intensive"
},
"serena": {
"description": "Semantic analysis and memory management",
"capabilities": ["semantic_understanding", "project_context", "memory_management"],
"performance_profile": "standard"
}
},
"coordination": {
"intelligent_routing": true,
"fallback_strategies": true,
"performance_optimization": true,
"learning_adaptation": true
}
}
}
```
### Runtime Configuration Loading
```python
class PreToolUseHook:
def __init__(self):
# Load hook-specific configuration
self.hook_config = config_loader.get_hook_config('pre_tool_use')
# Load orchestrator configuration (YAML or fallback)
try:
self.orchestrator_config = config_loader.load_config('orchestrator')
except FileNotFoundError:
self.orchestrator_config = self.hook_config.get('configuration', {})
# Performance targets from configuration
self.performance_target_ms = config_loader.get_hook_config(
'pre_tool_use', 'performance_target_ms', 200
)
```
---
## Learning Integration
### Learning Data Collection
**Operation Pattern Recording**:
```python
def record_tool_learning(context, tool_config):
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
{
'tool_name': context['tool_name'],
'mcp_servers_used': tool_config.get('mcp_integration', {}).get('servers', []),
'execution_strategy': tool_config.get('execution_metadata', {}).get('intelligence_level'),
'optimizations_applied': tool_config.get('performance_optimization', {}).get('optimizations', [])
},
effectiveness_score=0.8, # Updated after execution
confidence_score=0.7,
metadata={'hook': 'pre_tool_use', 'version': '1.0'}
)
```
### Adaptive Routing Enhancement
**Learning-Based Routing Improvements**:
- **User Preferences**: Learn individual user's tool and server preferences
- **Project Patterns**: Adapt to project-specific optimal routing strategies
- **Performance Optimization**: Route based on historical performance data
- **Error Pattern Recognition**: Avoid routing strategies that historically failed
### Learning Scope Hierarchy
```python
Learning Scopes:
1. Session Level: Real-time adaptation within current session
2. User Level: Personal routing preferences across sessions
3. Project Level: Project-specific optimization patterns
4. Global Level: Framework-wide routing intelligence
```
### Effectiveness Measurement
```python
Effectiveness Metrics:
- execution_time: Actual vs estimated execution time
- success_rate: Successful operation completion rate
- quality_score: Output quality assessment
- user_satisfaction: Implicit feedback from continued usage
- resource_efficiency: Resource utilization optimization
```
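One way to fold these metrics into a single effectiveness value is a weighted average over whichever metrics are available. The weights below are illustrative assumptions, not values from the hook implementation:

```python
# Hypothetical weights over the documented effectiveness metrics.
WEIGHTS = {'execution_time': 0.25, 'success_rate': 0.30, 'quality_score': 0.20,
           'user_satisfaction': 0.15, 'resource_efficiency': 0.10}

def effectiveness(metrics: dict) -> float:
    """Weighted average over the metrics present (each scored 0.0-1.0)."""
    present = {k: w for k, w in WEIGHTS.items() if k in metrics}
    total_weight = sum(present.values())
    if total_weight == 0:
        return 0.0
    return sum(metrics[k] * w for k, w in present.items()) / total_weight
```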
---
## Performance Optimization
### Caching Strategies
#### Pattern Recognition Cache
```python
Cache Structure:
- Key: hash(user_intent + tool_name + context_hash)
- Value: routing_decision + confidence_score
- TTL: 60 minutes for pattern stability
- Size: 1000 entries with LRU eviction
```
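A cache with those parameters (1000 entries, 60-minute TTL, LRU eviction) can be sketched with an `OrderedDict`; this is a minimal illustration, not the hook's cache implementation:

```python
import time
from collections import OrderedDict

class PatternCache:
    def __init__(self, max_size: int = 1000, ttl_seconds: float = 3600):
        self.max_size, self.ttl = max_size, ttl_seconds
        self._store: OrderedDict = OrderedDict()  # key -> (value, stored_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]          # expired: drop and miss
            return None
        self._store.move_to_end(key)      # mark as recently used
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic())
        self._store.move_to_end(key)
        if len(self._store) > self.max_size:
            self._store.popitem(last=False)  # evict least recently used
```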
#### MCP Server Response Cache
```python
Cache Strategy:
- Documentation lookups: 30 minutes TTL
- Analysis results: Session-scoped cache
- Pattern templates: 1 hour TTL
- Server availability: 5 minutes TTL
```
#### Performance Optimizations
```python
Optimization Techniques:
1. Pre-computed Routing Tables: Common patterns pre-calculated
2. Lazy Loading: Load components only when needed
3. Parallel Analysis: Run pattern detection and MCP planning concurrently
4. Result Reuse: Cache and reuse analysis results within session
5. Intelligent Fallbacks: Fast fallback paths for common failure modes
```
### Resource Management
```python
Resource Optimization:
- Memory: Bounded caches with intelligent eviction
- CPU: Parallel processing for independent operations
- I/O: Batch operations where possible
- Network: Connection pooling for MCP servers
```
### Execution Time Optimization
```python
Time Budget Allocation:
- Pattern Detection: 50ms (25%)
- MCP Server Selection: 30ms (15%)
- Configuration Generation: 40ms (20%)
- Learning Integration: 20ms (10%)
- Buffer/Safety Margin: 60ms (30%)
Total Target: 200ms
```
---
## Integration with ORCHESTRATOR.md
### Pattern Matching Implementation
The hook implements the ORCHESTRATOR.md pattern matching system:
```python
# Quick Pattern Matching from ORCHESTRATOR.md
pattern_mappings = {
'ui_component': {
'keywords': ['component', 'design', 'frontend', 'UI'],
'mcp_server': 'magic',
'persona': 'frontend'
},
'deep_analysis': {
'keywords': ['architecture', 'complex', 'system-wide'],
'mcp_server': 'sequential',
'thinking_mode': 'think_hard'
},
'large_scope': {
'keywords': ['many files', 'entire codebase'],
'mcp_server': 'serena',
'delegation': True
},
'symbol_operations': {
'keywords': ['rename', 'refactor', 'extract', 'move'],
'mcp_server': 'serena',
'precision': 'lsp'
},
'pattern_edits': {
'keywords': ['framework', 'style', 'cleanup'],
'mcp_server': 'morphllm',
'optimization': 'token'
}
}
```
### Resource Zone Implementation
```python
def get_resource_zone(resource_usage):
if resource_usage <= 0.75:
return 'green_zone' # Full capabilities
elif resource_usage <= 0.85:
return 'yellow_zone' # Efficiency mode
else:
return 'red_zone' # Essential operations only
```
### Tool Selection Guide Integration
The hook implements the ORCHESTRATOR.md tool selection guide:
```python
def apply_orchestrator_routing(context, user_intent):
"""Apply ORCHESTRATOR.md routing patterns"""
# MCP Server selection based on ORCHESTRATOR.md
if any(word in user_intent.lower() for word in ['library', 'docs', 'framework']):
return ['context7']
if any(word in user_intent.lower() for word in ['complex', 'analysis', 'debug']):
return ['sequential']
if any(word in user_intent.lower() for word in ['component', 'ui', 'design']):
return ['magic']
if context.get('file_count', 1) > 5:
return ['serena']
# Default to native tools for simple operations
return []
```
### Quality Gate Integration
```python
Quality Gates Applied:
- Step 1 (Syntax Validation): Tool parameter validation
- Step 2 (Type Analysis): Context type checking and compatibility
- Performance Monitoring: Real-time execution time tracking
- Fallback Validation: Ensure fallback strategies maintain functionality
```
### Auto-Activation Rules Implementation
The hook implements ORCHESTRATOR.md auto-activation rules:
```python
def apply_auto_activation_rules(context):
"""Apply ORCHESTRATOR.md auto-activation patterns"""
activations = []
# Enable Sequential for complex operations
if (context.get('complexity_score', 0) > 0.6 or
context.get('requires_intelligence')):
activations.append('sequential')
# Enable Serena for multi-file operations
if (context.get('file_count', 1) > 5 or
any(op in context.get('operation_sequence', []) for op in ['rename', 'extract'])):
activations.append('serena')
# Enable delegation for large operations
if (context.get('file_count', 1) > 3 or
context.get('directory_count', 1) > 2):
activations.append('delegation')
return activations
```
---
## Technical Implementation Details
### Core Architecture Components
#### 1. Framework Logic Integration
```python
from framework_logic import FrameworkLogic, OperationContext, OperationType, RiskLevel
# Provides SuperClaude framework intelligence
self.framework_logic = FrameworkLogic()
```
#### 2. Pattern Detection Engine
```python
from pattern_detection import PatternDetector, PatternMatch
# Analyzes patterns for routing decisions
detection_result = self.pattern_detector.detect_patterns(
user_intent, context, operation_data
)
```
#### 3. MCP Intelligence Coordination
```python
from mcp_intelligence import MCPIntelligence, MCPActivationPlan
# Creates optimal MCP server activation plans
mcp_plan = self.mcp_intelligence.create_activation_plan(
user_intent, context, operation_data
)
```
#### 4. Learning Engine Integration
```python
from learning_engine import LearningEngine
# Applies learned adaptations and records new patterns
enhanced_routing = self.learning_engine.apply_adaptations(context, base_routing)
```
### Error Handling Architecture
```python
Exception Handling Strategy:
1. Catch all exceptions during routing analysis
2. Log error with context for debugging
3. Generate fallback configuration
4. Preserve user intent and operation continuity
5. Record error for learning and improvement
```
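The five-step strategy above reduces to a wrapper like this sketch; `broken_router` is a stand-in used only to demonstrate the fallback path:

```python
def route_with_fallback(tool_name: str, parameters: dict, router) -> dict:
    """Wrap routing analysis so failures degrade to a pass-through config."""
    try:
        return router(tool_name, parameters)
    except Exception as exc:                    # step 1: catch everything
        # Steps 2-5 condensed: log context, fall back, preserve the
        # operation, and keep the error available for learning
        return {
            "tool_name": tool_name,             # operation continuity
            "enhanced_mode": False,
            "mcp_integration": {"enabled": False},
            "fallback": True,
            "error": str(exc),                  # context for debugging
        }

def broken_router(tool_name, parameters):
    raise RuntimeError("pattern detector unavailable")

result = route_with_fallback("Read", {"file_path": "/src/app.py"}, broken_router)
print(result["fallback"], result["tool_name"])  # → True Read
```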
### Performance Monitoring
```python
Performance Tracking:
- Initialization time measurement
- Per-operation execution time tracking
- Target compliance validation (<200ms)
- Efficiency score calculation
- Resource utilization monitoring
```
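A minimal sketch of this tracking as a context manager (the real hook records additional fields):

```python
import time
from contextlib import contextmanager

@contextmanager
def tracked(metrics: dict, target_ms: float = 200.0):
    """Record execution time and target compliance into `metrics`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed_ms = (time.perf_counter() - start) * 1000
        metrics["routing_time_ms"] = elapsed_ms
        metrics["target_met"] = elapsed_ms < target_ms

metrics = {}
with tracked(metrics):
    _ = sum(range(1000))  # stand-in for routing analysis work
print(metrics["target_met"])  # → True
```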
---
## Usage Examples
### Example 1: Simple File Read
Input request:
```json
{
  "tool_name": "Read",
  "parameters": {"file_path": "/src/components/Button.tsx"},
  "user_intent": "read button component"
}
```
Hook enhancement:
```json
{
  "tool_name": "Read",
  "enhanced_mode": false,
  "mcp_integration": {"enabled": false},
  "execution_metadata": {
    "complexity_score": 0.0,
    "intelligence_level": "low"
  }
}
```
### Example 2: Complex Multi-File Analysis
Input request:
```json
{
  "tool_name": "Analyze",
  "parameters": {"directory": "/src/**/*.ts"},
  "user_intent": "analyze typescript architecture patterns"
}
```
Hook enhancement:
```json
{
  "tool_name": "Analyze",
  "enhanced_mode": true,
  "mcp_integration": {
    "enabled": true,
    "servers": ["sequential", "serena"],
    "coordination_strategy": "collaborative"
  },
  "performance_optimization": {
    "parallel_execution": true,
    "optimizations": ["parallel_file_processing"]
  },
  "execution_metadata": {
    "complexity_score": 0.75,
    "intelligence_level": "high"
  }
}
```
### Example 3: UI Component Generation
Input request:
```json
{
  "tool_name": "Generate",
  "parameters": {"component_type": "form"},
  "user_intent": "create login form component"
}
```
Hook enhancement:
```json
{
  "tool_name": "Generate",
  "enhanced_mode": true,
  "mcp_integration": {
    "enabled": true,
    "servers": ["magic", "context7"],
    "coordination_strategy": "sequential_handoff"
  },
  "build_operations": {
    "framework_integration": true,
    "component_generation": true,
    "quality_validation": true
  }
}
```
---
## Monitoring and Debugging
### Performance Metrics
```python
Tracked Metrics:
- routing_time_ms: Time spent in routing analysis
- target_met: Boolean indicating <200ms compliance
- efficiency_score: Overall routing effectiveness (0.0-1.0)
- mcp_servers_activated: Count of MCP servers coordinated
- optimizations_applied: List of performance optimizations
- fallback_triggered: Boolean indicating fallback usage
```
### Logging Integration
```python
Log Events:
- Hook start: Tool name and parameters
- Routing decisions: MCP server selection rationale
- Execution strategy: Chosen execution approach
- Performance metrics: Timing and efficiency data
- Error events: Failures and fallback triggers
- Hook completion: Success/failure status
```
### Debug Information
```python
Debug Output:
- Pattern detection results with confidence scores
- MCP server capability matching analysis
- Optimization opportunity identification
- Learning adaptation application
- Configuration generation process
- Performance target validation
```
---
## Related Documentation
- **ORCHESTRATOR.md**: Core routing patterns and coordination strategies
- **Framework Integration**: Quality gates and mode coordination
- **MCP Server Documentation**: Individual server capabilities and integration
- **Learning Engine**: Adaptive intelligence and pattern recognition
- **Performance Monitoring**: System-wide performance tracking and optimization
---
*The pre_tool_use hook serves as the intelligent routing engine for the SuperClaude framework, ensuring optimal tool selection, MCP server coordination, and performance optimization for every operation within Claude Code.*

# Session Start Hook Technical Documentation
## Purpose
The session_start hook initializes Claude Code sessions with SuperClaude framework intelligence. It analyzes project context, detects patterns, and configures appropriate modes and MCP servers based on the actual session requirements.
**Core Implementation**: A 704-line Python module that performs lazy loading, pattern detection, MCP intelligence routing, compression configuration, and learning adaptations with a target execution time of <50ms.
## Execution Context
The session_start hook runs automatically at the beginning of every Claude Code session. According to `settings.json`, it has a 10-second timeout and executes via: `python3 ~/.claude/hooks/session_start.py`
**Actual Execution Flow:**
1. Receives session data from Claude Code via stdin (JSON)
2. Initializes SessionStartHook class with lazy loading of components
3. Processes session initialization with project analysis and pattern detection
4. Outputs enhanced session configuration via stdout (JSON)
5. Falls back gracefully on errors with basic session configuration
**Performance**: Target <50ms execution time (configurable via superclaude-config.json)
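The stdin/stdout contract in steps 1 and 4 can be sketched as follows; fields beyond `session_id` and `superclaude_enabled` are illustrative:

```python
import json, sys

def handle_session_start(raw: str) -> str:
    """Minimal stdin -> stdout shape of the hook: JSON in, JSON out."""
    session = json.loads(raw)
    config = {
        "session_id": session.get("session_id", "unknown"),
        "superclaude_enabled": True,
        "active_modes": [],                        # filled in by pattern detection
        "mcp_servers": {"servers_to_activate": []},
    }
    return json.dumps(config)

# In the real hook this would be:
#   sys.stdout.write(handle_session_start(sys.stdin.read()))
print(handle_session_start('{"session_id": "s1", "user_input": "fix bug"}'))
```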
## Performance Target
**Target: <50ms execution time**
This aggressive performance target is critical for maintaining a seamless user experience during session initialization. The hook must complete its analysis and configuration within this window to avoid perceptible delays.
**Why 50ms Matters:**
- **User Experience**: Sub-perceptible delay maintains natural interaction flow
- **Session Efficiency**: Fast bootstrap enables immediate intelligent behavior
- **Resource Optimization**: Efficient initialization preserves compute budget for actual work
- **Learning System**: Quick analysis allows for real-time adaptation without latency
**Performance Monitoring:**
- Real-time execution time tracking with detailed metrics
- Efficiency score calculation based on target achievement
- Performance degradation alerts and optimization recommendations
- Historical performance analysis for continuous improvement
## Core Features
### 1. Project Structure Analysis
**Implementation**: The `_analyze_project_structure()` method performs quick project analysis by examining key files and directories.
**What it actually does:**
- Enumerates at most 100 files for performance (limited via `files[:100]`)
- Detects project type by checking for manifest files:
- `package.json` → nodejs
- `pyproject.toml` or `setup.py` → python
- `Cargo.toml` → rust
- `go.mod` → go
- Identifies frameworks by parsing package.json dependencies (React, Vue, Angular, Express)
- Checks for test directories and production indicators (Dockerfile, .env.production)
- Returns analysis dict with project_type, framework_detected, has_tests, is_production, etc.
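The manifest table above maps directly to a lookup; a minimal sketch of the project-type step:

```python
from pathlib import Path

# Manifest file -> project type, mirroring the table above
MANIFEST_TO_TYPE = {
    "package.json": "nodejs",
    "pyproject.toml": "python",
    "setup.py": "python",
    "Cargo.toml": "rust",
    "go.mod": "go",
}

def detect_project_type(files: list[str]) -> str:
    """Return the first matching project type, or 'unknown'."""
    names = {Path(f).name for f in files[:100]}  # same 100-file cap as the hook
    for manifest, project_type in MANIFEST_TO_TYPE.items():
        if manifest in names:
            return project_type
    return "unknown"

print(detect_project_type(["README.md", "Cargo.toml", "src/main.rs"]))  # → rust
```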
### 2. User Intent Analysis and Mode Detection
**Implementation**: The `_analyze_user_intent()` method examines user input to determine operation type and complexity.
**What it actually does:**
- Analyzes user input text for operation keywords:
- "build", "create", "implement" → BUILD operation (complexity +0.3)
- "fix", "debug", "troubleshoot" → ANALYZE operation (complexity +0.2)
- "refactor", "restructure" → REFACTOR operation (complexity +0.4)
- "test", "validate" → TEST operation (complexity +0.1)
- Detects brainstorming needs via keywords: "not sure", "thinking about", "maybe", "brainstorm"
- Calculates complexity score (0.0-1.0) based on operation type and complexity indicators
- The `_activate_intelligent_modes()` method activates modes based on detected patterns:
- brainstorming mode if `brainstorming_likely` is True
- task_management mode if recommended by pattern detection
- token_efficiency mode if recommended by pattern detection
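The keyword rules above can be sketched as a lookup table plus a scoring function (thresholds and tie-breaking are simplified here):

```python
# Operation -> (trigger keywords, complexity increment), per the rules above
OPERATION_KEYWORDS = {
    "BUILD":    (["build", "create", "implement"], 0.3),
    "ANALYZE":  (["fix", "debug", "troubleshoot"], 0.2),
    "REFACTOR": (["refactor", "restructure"], 0.4),
    "TEST":     (["test", "validate"], 0.1),
}
BRAINSTORM_KEYWORDS = ["not sure", "thinking about", "maybe", "brainstorm"]

def analyze_user_intent(user_input: str) -> dict:
    """Keyword-based intent analysis, mirroring the documented rules."""
    text = user_input.lower()
    operation, complexity = "UNKNOWN", 0.0
    for op, (keywords, weight) in OPERATION_KEYWORDS.items():
        if any(k in text for k in keywords):
            operation, complexity = op, weight
            break
    return {
        "operation_type": operation,
        "complexity_score": min(complexity, 1.0),
        "brainstorming_likely": any(k in text for k in BRAINSTORM_KEYWORDS),
    }

print(analyze_user_intent("not sure, maybe refactor the auth module"))
```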
### 3. MCP Server Configuration
**Implementation**: The `_create_mcp_activation_plan()` and `_configure_mcp_servers()` methods determine which MCP servers to activate.
**What it actually does:**
- Uses MCPIntelligence class to create activation plans based on:
- User intent analysis
- Context characteristics (file count, complexity score, operation type)
- Project analysis results
- Returns MCP plan with:
- `servers_to_activate`: List of servers to enable
- `activation_order`: Sequence for server activation
- `coordination_strategy`: How servers should work together
- `estimated_cost_ms`: Performance impact estimate
- `fallback_strategy`: Backup plan if servers fail
### 4. Learning Engine Integration
**Implementation**: The `_apply_learning_adaptations()` method applies learned patterns to improve session configuration.
**What it actually does:**
- Uses LearningEngine (initialized with `~/.claude/cache` directory) to:
- Apply previous adaptations to current recommendations
- Store user preferences (preferred tools per operation type)
- Update project-specific information (project type, framework)
- Record learning events for future sessions
- The `_record_session_learning()` method stores session initialization patterns for continuous improvement
### 5. Lazy Loading Architecture
**Implementation**: The hook uses lazy loading via Python properties to minimize initialization time.
**What it actually does:**
- Core components are loaded immediately: `FrameworkLogic()`
- Other components use lazy loading properties:
- `pattern_detector` property loads `PatternDetector()` only when first accessed
- `mcp_intelligence` property loads `MCPIntelligence()` only when needed
- `compression_engine` property loads `CompressionEngine()` only when used
- `learning_engine` property loads `LearningEngine()` only when required
- This reduces initialization overhead and improves the <50ms performance target
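The property pattern looks like this sketch, with `object()` standing in for the real component classes:

```python
class SessionStartHook:
    """Shape of the lazy-loading pattern described above; the stand-in
    object replaces PatternDetector(), MCPIntelligence(), etc."""

    def __init__(self):
        self._pattern_detector = None   # not constructed yet: __init__ stays fast

    @property
    def pattern_detector(self):
        if self._pattern_detector is None:
            # Constructed only on first access, then cached for the session
            self._pattern_detector = object()  # stand-in for PatternDetector()
        return self._pattern_detector

hook = SessionStartHook()
print(hook._pattern_detector is None)     # → True (nothing loaded yet)
first = hook.pattern_detector
print(first is hook.pattern_detector)     # → True (same cached instance)
```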
## Implementation Details
### Architecture Pattern
The session_start hook implements a layered architecture with clear separation of concerns:
**Layer 1: Context Extraction**
```python
def _extract_session_context(self, session_data: dict) -> dict:
# Enriches basic session data with project analysis and user intent detection
context = {
'session_id': session_data.get('session_id', 'unknown'),
'project_path': session_data.get('project_path', ''),
'user_input': session_data.get('user_input', ''),
# ... additional context enrichment
}
```
**Layer 2: Intelligence Analysis**
```python
def _detect_session_patterns(self, context: dict) -> dict:
# Pattern detection using SuperClaude's pattern recognition algorithms
detection_result = self.pattern_detector.detect_patterns(
context.get('user_input', ''),
context,
operation_data
)
```
**Layer 3: Configuration Generation**
```python
def _generate_session_config(self, context: dict, recommendations: dict,
mcp_plan: dict, compression_config: dict) -> dict:
# Comprehensive session configuration assembly
return comprehensive_session_configuration
```
### Error Handling Strategy
**Graceful Degradation**: The hook implements comprehensive error handling that ensures session functionality even when intelligence systems fail.
```python
def initialize_session(self, session_context: dict) -> dict:
try:
# Full intelligence initialization
return enhanced_session_config
except Exception as e:
# Graceful fallback
return self._create_fallback_session_config(session_context, str(e))
```
**Fallback Configuration:**
- Disables SuperClaude intelligence features
- Maintains basic Claude Code functionality
- Provides error context for debugging
- Enables recovery for subsequent sessions
### Performance Measurement
**Real-Time Metrics:**
```python
# Performance tracking integration
execution_time = (time.time() - start_time) * 1000
session_config['performance_metrics'] = {
'initialization_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'efficiency_score': self._calculate_initialization_efficiency(execution_time)
}
```
## Configuration
### Hook-Specific Configuration (superclaude-config.json)
```json
{
"hook_configurations": {
"session_start": {
"enabled": true,
"description": "SESSION_LIFECYCLE + FLAGS logic with intelligent bootstrap",
"performance_target_ms": 50,
"features": [
"smart_project_context_loading",
"automatic_mode_detection",
"mcp_server_intelligence_routing",
"user_preference_adaptation",
"performance_optimized_initialization"
],
"configuration": {
"auto_project_detection": true,
"framework_exclusion_enabled": true,
"intelligence_activation": true,
"learning_integration": true,
"performance_monitoring": true
},
"error_handling": {
"graceful_fallback": true,
"preserve_user_context": true,
"error_learning": true
}
}
}
}
```
### Configuration Loading Strategy
**Primary Configuration Source**: superclaude-config.json hook_configurations.session_start
**Fallback Strategy**: YAML configuration files in config/ directory
**Runtime Adaptation**: Learning engine modifications applied during execution
```python
# Configuration loading with fallback
self.hook_config = config_loader.get_hook_config('session_start')
try:
self.session_config = config_loader.load_config('session')
except FileNotFoundError:
self.session_config = self.hook_config.get('configuration', {})
```
## Pattern Loading Strategy
### Minimal Pattern Bootstrap
The hook implements a strategic pattern loading approach that loads only essential patterns during initialization to meet the 50ms performance target.
**Pattern Loading Phases:**
**Phase 1: Critical Patterns (Target: 3-5KB)**
- Core operation type detection patterns
- Basic project structure recognition
- Essential mode activation triggers
- Primary MCP server routing logic
**Phase 2: Context-Specific Patterns (Lazy Loaded)**
- Framework-specific intelligence patterns
- Advanced optimization strategies
- Historical learning adaptations
- Complex coordination algorithms
**Implementation Strategy:**
```python
def _detect_session_patterns(self, context: dict) -> dict:
# Load minimal patterns for fast detection
detection_result = self.pattern_detector.detect_patterns(
context.get('user_input', ''),
context,
operation_data # Contains only essential pattern data
)
```
**Pattern Optimization Techniques:**
- **Compressed Pattern Storage**: Use efficient data structures for pattern representation
- **Selective Pattern Loading**: Load only patterns relevant to detected project type
- **Cached Pattern Results**: Reuse pattern analysis for similar contexts
- **Progressive Pattern Enhancement**: Enable additional patterns as session progresses
## Shared Modules Used
### framework_logic.py
**Purpose**: Provides SuperClaude framework decision-making capabilities.
**Used in session_start.py:**
- `FrameworkLogic` class for quality gate configuration
- `OperationContext` dataclass for structured context management
- `OperationType` enum for operation classification (READ, WRITE, BUILD, etc.)
- `RiskLevel` enum for risk assessment
- Used in `_configure_quality_gates()` method to determine appropriate quality gates based on operation context
### pattern_detection.py
**Purpose**: Analyzes patterns in user input and context for intelligent routing.
**Used in session_start.py:**
- `PatternDetector` class (lazy loaded)
- `detect_patterns()` method for analyzing user intent, context, and operation data
- Returns pattern matches, recommended modes, recommended MCP servers, and confidence scores
### mcp_intelligence.py
**Purpose**: Provides MCP server activation planning and coordination strategies.
**Used in session_start.py:**
- `MCPIntelligence` class (lazy loaded)
- `create_activation_plan()` method for determining optimal MCP server coordination
- Returns activation plans with servers, order, cost estimates, and coordination strategies
### compression_engine.py
**Purpose**: Handles compression strategy selection for token efficiency.
**Used in session_start.py:**
- `CompressionEngine` class (lazy loaded)
- `determine_compression_level()` method for context-based compression decisions
- Used in `_configure_compression()` to set session compression strategy
### learning_engine.py
**Purpose**: Provides learning and adaptation capabilities for continuous improvement.
**Used in session_start.py:**
- `LearningEngine` class (lazy loaded, initialized with `~/.claude/cache` directory)
- `apply_adaptations()` method for applying learned patterns
- `record_learning_event()` method for storing session initialization data
- `update_project_info()` and preference tracking methods
### yaml_loader.py
**Purpose**: Configuration loading with fallback strategies.
**Used in session_start.py:**
- `config_loader.get_hook_config()` for hook-specific configuration
- `config_loader.load_config()` for YAML configuration files with FileNotFoundError handling
- Fallback to hook configuration when YAML files are missing
### logger.py
**Purpose**: Structured logging for hook execution tracking.
**Used in session_start.py:**
- `log_hook_start()` and `log_hook_end()` for execution timing
- `log_decision()` for mode activation and MCP server selection decisions
- `log_error()` for error context preservation
## Error Handling
**Implementation**: The main `initialize_session()` method includes comprehensive error handling with graceful fallback.
**What actually happens on errors:**
1. **Exception Handling**: All errors are caught in the main try-except block
2. **Error Logging**: Errors are logged via `log_error()` with context
3. **Fallback Configuration**: `_create_fallback_session_config()` returns:
```python
{
'session_id': session_context.get('session_id', 'unknown'),
'superclaude_enabled': False,
'fallback_mode': True,
'error': error,
'basic_config': {
'compression_level': 'minimal',
'mcp_servers_enabled': False,
'learning_disabled': True
}
}
```
4. **Session Continuity**: Basic Claude Code functionality is preserved even when SuperClaude features fail
### Error Learning Integration
**Error Pattern Recognition:**
```python
def _record_session_learning(self, context: dict, session_config: dict):
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
session_config,
success_score,
confidence_score,
metadata
)
```
**Recovery Optimization:**
- Errors are analyzed for pattern recognition
- Successful recovery strategies are learned and applied
- Error frequency analysis drives system improvements
- Proactive error prevention based on historical patterns
## Session Context Enhancement
### Context Enrichment Process
The session_start hook transforms basic Claude Code session data into rich, intelligent context that enables advanced SuperClaude behaviors throughout the session.
**Input Context (Basic):**
- session_id: Basic session identifier
- project_path: File system path
- user_input: Initial user request
- conversation_length: Basic metrics
**Enhanced Context (SuperClaude):**
- Project analysis with technology stack detection
- User intent analysis with complexity scoring
- Mode activation recommendations
- MCP server routing plans
- Performance optimization settings
- Learning adaptations from previous sessions
### Context Preservation Strategy
**Session Configuration Generation:**
```python
def _generate_session_config(self, context: dict, recommendations: dict,
mcp_plan: dict, compression_config: dict) -> dict:
return {
'session_id': context['session_id'],
'superclaude_enabled': True,
'active_modes': recommendations.get('recommended_modes', []),
'mcp_servers': mcp_plan,
'compression': compression_config,
'performance': performance_config,
'learning': learning_config,
'context': context_preservation,
'quality_gates': quality_gate_config
}
```
**Context Utilization Throughout Session:**
- **MCP Server Routing**: Uses project analysis for intelligent server selection
- **Mode Activation**: Applies detected patterns for behavioral mode triggers
- **Performance Optimization**: Uses complexity analysis for resource allocation
- **Quality Gates**: Applies context-appropriate validation levels
- **Learning Integration**: Captures session patterns for future improvement
### Long-Term Context Evolution
**Cross-Session Learning:**
- Session patterns are analyzed and stored for future sessions
- User preferences are extracted and applied automatically
- Project-specific optimizations are learned and reused
- Error patterns are identified and proactively avoided
**Context Continuity:**
- Enhanced context from session_start provides foundation for entire session
- Context elements influence all subsequent hook behaviors
- Learning from current session feeds into future session_start executions
- Continuous improvement cycle maintains and enhances context quality over time
## Integration Points
### SuperClaude Framework Integration
**SESSION_LIFECYCLE.md Compliance:**
- Implements initialization phase of session lifecycle pattern
- Provides foundation for checkpoint and persistence systems
- Enables context continuity across session boundaries
**FLAGS.md Logic Implementation:**
- Automatically detects and applies appropriate flag combinations
- Implements flag precedence and conflict resolution
- Provides intelligent default flag selection based on context
**ORCHESTRATOR.md Pattern Integration:**
- Implements intelligent routing patterns for MCP server selection
- Applies resource management strategies during initialization
- Establishes foundation for quality gate enforcement
### Hook Ecosystem Coordination
**Downstream Hook Preparation:**
- pre_tool_use: Receives enhanced context for intelligent tool routing
- post_tool_use: Gets quality gate configuration for validation
- pre_compact: Receives compression configuration for optimization
- stop: Gets learning configuration for session analytics
**Cross-Hook Data Flow:**
```
session_start → Enhanced Context → All Subsequent Hooks
Learning Engine ← Session Analytics ← stop Hook
```
This documentation describes how the session_start hook operates as the foundational intelligence layer of the SuperClaude-Lite framework, transforming basic Claude Code sessions into adaptive, optimized sessions.

# Stop Hook Documentation
## Purpose
The `stop` hook provides session analytics and persistence when Claude Code sessions end. It implements session summarization, learning consolidation, and data storage for continuous framework improvement.
**Core Implementation**: Analyzes complete session history, consolidates learning events, generates performance metrics, and persists session data for future analysis with a target execution time of <200ms.
## Execution Context
The stop hook runs at Claude Code session termination. According to `settings.json`, it has a 15-second timeout and executes via: `python3 ~/.claude/hooks/stop.py`
**Actual Execution Flow:**
1. Receives session termination data via stdin (JSON)
2. Initializes StopHook class with analytics and learning components
3. Analyzes complete session history and performance data
4. Consolidates learning events and generates session insights
5. Persists session data and analytics for future reference
6. Outputs session summary and analytics via stdout (JSON)
## Performance Target
**Primary Target**: <200ms execution time for complete session analytics
### Performance Benchmarks
- **Initialization**: <50ms for component loading
- **Analytics Generation**: <100ms for comprehensive analysis
- **Session Persistence**: <30ms for data storage
- **Learning Consolidation**: <20ms for learning events processing
- **Total Processing**: <200ms end-to-end execution
### Performance Monitoring
```python
execution_time = (time.time() - start_time) * 1000
target_met = execution_time < self.performance_target_ms
```
## Session Analytics
### Comprehensive Performance Metrics
#### Overall Score Calculation
```python
overall_score = (
productivity * 0.4 +
effectiveness * 0.4 +
(1.0 - error_rate) * 0.2
)
```
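A worked example of the formula, e.g. with productivity 0.78, effectiveness 0.92, and a 5% error rate:

```python
def overall_score(productivity: float, effectiveness: float, error_rate: float) -> float:
    # Same weighting as the formula above
    return productivity * 0.4 + effectiveness * 0.4 + (1.0 - error_rate) * 0.2

print(round(overall_score(0.78, 0.92, 0.05), 3))  # → 0.87
```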
#### Performance Categories
- **Productivity Score**: Operations per minute, completion rates
- **Quality Score**: Error rates, operation success rates
- **Intelligence Utilization**: MCP server usage, SuperClaude effectiveness
- **Resource Efficiency**: Memory, CPU, token usage optimization
- **User Satisfaction Estimate**: Derived from session patterns and outcomes
#### Analytics Components
```yaml
performance_metrics:
overall_score: 0.85 # Combined performance indicator
productivity_score: 0.78 # Operations efficiency
quality_score: 0.92 # Error-free execution rate
efficiency_score: 0.84 # Resource utilization
satisfaction_estimate: 0.87 # Estimated user satisfaction
```
### Bottleneck Identification
- **High Error Rate**: >20% operation failure rate
- **Low Productivity**: <50% productivity score
- **Underutilized Intelligence**: <30% MCP usage with SuperClaude enabled
- **Resource Constraints**: Memory/CPU/token usage optimization opportunities
### Optimization Opportunities Detection
- **Tool Usage Optimization**: >10 unique tools suggest coordination improvement
- **MCP Server Coordination**: <2 servers with >5 operations suggest better orchestration
- **Workflow Enhancement**: Pattern analysis for efficiency improvements
## Learning Consolidation
### Learning Events Processing
The hook consolidates all learning events generated during the session:
```python
def _consolidate_learning_events(self, context: dict) -> dict:
# Generate learning insights from session
insights = self.learning_engine.generate_learning_insights()
# Session-specific learning metrics
session_learning = {
'session_effectiveness': context.get('superclaude_effectiveness', 0),
'performance_score': context.get('session_productivity', 0),
'mcp_coordination_effectiveness': min(context.get('mcp_usage_ratio', 0) * 2, 1.0),
'error_recovery_success': 1.0 - context.get('error_rate', 0)
}
```
### Learning Categories
- **Effectiveness Feedback**: Session performance patterns
- **User Preferences**: Interaction and usage patterns
- **Technical Patterns**: Tool usage and coordination effectiveness
- **Error Recovery**: Success patterns for error handling
### Adaptation Creation
- **Session-Level Adaptations**: Immediate session pattern learning
- **User-Level Adaptations**: Long-term preference learning
- **Technical Adaptations**: Tool and workflow optimization patterns
## Session Persistence
### Intelligent Storage Strategy
#### Data Classification
- **Session Analytics**: Complete performance and effectiveness data
- **Learning Events**: Consolidated learning insights and adaptations
- **Context Data**: Session operational context and metadata
- **Recommendations**: Generated suggestions for future sessions
#### Compression Logic
```python
# Apply compression for large session data
if len(analytics_data) > 10000: # 10KB threshold
compression_result = self.compression_engine.compress_content(
analytics_data,
context,
{'content_type': 'session_data'}
)
```
#### Storage Optimization
- **Session Cleanup**: Maintains 50 most recent sessions
- **Automatic Pruning**: Removes sessions older than retention policy
- **Compression**: Applied to sessions >10KB for storage efficiency
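The 50-session retention policy can be sketched as a prune-by-mtime pass; the demo below runs against a throwaway temp directory, not the real session store:

```python
import os, tempfile

def prune_sessions(session_dir: str, keep: int = 50) -> list[str]:
    """Delete all but the `keep` most recently modified session files."""
    paths = [os.path.join(session_dir, f) for f in os.listdir(session_dir)]
    paths.sort(key=os.path.getmtime, reverse=True)  # newest first
    removed = paths[keep:]
    for path in removed:
        os.remove(path)
    return removed

# Demo with four fake sessions and a retention count of 2
with tempfile.TemporaryDirectory() as d:
    for i in range(4):
        path = os.path.join(d, f"session_{i}.json")
        with open(path, "w") as f:
            f.write("{}")
        os.utime(path, (i, i))  # deterministic modification times
    removed = prune_sessions(d, keep=2)
    print(len(removed), len(os.listdir(d)))  # → 2 2
```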
### Persistence Results
```yaml
persistence_result:
persistence_enabled: true
session_data_saved: true
analytics_saved: true
learning_data_saved: true
compression_applied: true
compression_ratio: 0.65
storage_optimized: true
```
## Recommendations Generation
### Performance Improvements
Generated when overall score <70%:
- Focus on reducing error rate through validation
- Enable more SuperClaude intelligence features
- Optimize tool selection and usage patterns
### SuperClaude Optimizations
Based on framework effectiveness analysis:
- **Low Effectiveness (<60%)**: Enable more MCP servers, use delegation features
- **Disabled Framework**: Recommend SuperClaude enablement for productivity
- **Underutilization**: Activate compression and intelligence features
### Learning Suggestions
- **Low Learning Events (<3)**: Engage with more complex operations
- **Pattern Recognition**: Suggestions based on successful session patterns
- **Workflow Enhancement**: Recommendations for process improvements
### Workflow Enhancements
Based on error patterns and efficiency analysis:
- **High Error Rate (>10%)**: Use validation hooks, enable pre-tool intelligence
- **Resource Optimization**: Memory, CPU, token usage improvements
- **Coordination Enhancement**: Better MCP server and tool coordination
## Configuration
### Hook Configuration
Loaded from `superclaude-config.json` hook configuration:
```yaml
stop_hook:
performance_target_ms: 200
analytics:
comprehensive_metrics: true
learning_consolidation: true
recommendation_generation: true
persistence:
enabled: true
compression_threshold_bytes: 10000
session_retention_count: 50
learning:
session_adaptations: true
user_preference_tracking: true
technical_pattern_learning: true
```
### Session Configuration
Falls back to session.yaml configuration when available:
```yaml
session:
analytics_enabled: true
learning_consolidation: true
performance_tracking: true
recommendation_generation: true
persistence_optimization: true
```
## Integration with /sc:save
### Command Implementation
The Stop Hook directly implements the `/sc:save` command logic:
#### Core /sc:save Features
- **Session Analytics**: Complete session performance analysis
- **Learning Consolidation**: All learning events processed and stored
- **Intelligent Persistence**: Session data saved with optimization
- **Recommendation Generation**: Actionable suggestions for improvement
- **Performance Tracking**: <200ms execution time monitoring
#### /sc:save Workflow Integration
```python
def process_session_stop(self, session_data: dict) -> dict:
# 1. Extract session context
context = self._extract_session_context(session_data)
# 2. Analyze session performance (/sc:save analytics)
performance_analysis = self._analyze_session_performance(context)
# 3. Consolidate learning events (/sc:save learning)
learning_consolidation = self._consolidate_learning_events(context)
# 4. Generate session analytics (/sc:save metrics)
session_analytics = self._generate_session_analytics(...)
# 5. Perform session persistence (/sc:save storage)
persistence_result = self._perform_session_persistence(...)
# 6. Generate recommendations (/sc:save recommendations)
recommendations = self._generate_recommendations(...)
```
### /sc:save Output Format
```yaml
session_report:
session_id: "session_2025-01-31_14-30-00"
session_completed: true
  completion_timestamp: 1738333800
analytics:
session_summary: {...}
performance_metrics: {...}
superclaude_effectiveness: {...}
quality_analysis: {...}
learning_summary: {...}
persistence:
persistence_enabled: true
analytics_saved: true
compression_applied: true
recommendations:
performance_improvements: [...]
superclaude_optimizations: [...]
learning_suggestions: [...]
workflow_enhancements: [...]
```
## Quality Assessment
### Session Success Criteria
A session is considered successful when:
- **Overall Score**: >60% performance score
- **SuperClaude Effectiveness**: >60% when framework enabled
- **Learning Achievement**: >0 insights generated
- **Recommendations**: Actionable suggestions provided
### Quality Metrics
```yaml
quality_analysis:
error_rate: 0.05 # 5% error rate
operation_success_rate: 0.95 # 95% success rate
bottlenecks: ["low_productivity"] # Identified issues
optimization_opportunities: [...] # Improvement areas
```
### Success Indicators
- **Session Success**: `overall_score > 0.6`
- **SuperClaude Effective**: `effectiveness_score > 0.6`
- **Learning Achieved**: `insights_generated > 0`
- **Recommendations Generated**: `total_recommendations > 0`
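These four indicators reduce to simple boolean checks. A minimal sketch, assuming the analytics fields carry the names used above:

```python
def success_indicators(analytics: dict) -> dict:
    """Boolean success flags per the thresholds listed above."""
    return {
        "session_success": analytics.get("overall_score", 0.0) > 0.6,
        "superclaude_effective": analytics.get("effectiveness_score", 0.0) > 0.6,
        "learning_achieved": analytics.get("insights_generated", 0) > 0,
        "recommendations_generated": analytics.get("total_recommendations", 0) > 0,
    }
```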
### User Satisfaction Estimation
```python
def _estimate_user_satisfaction(self, context: dict) -> float:
satisfaction_factors = []
# Error rate impact
satisfaction_factors.append(1.0 - error_rate)
# Productivity impact
satisfaction_factors.append(productivity)
# SuperClaude effectiveness impact
if superclaude_enabled:
satisfaction_factors.append(effectiveness)
# Session duration optimization (15-60 minutes optimal)
satisfaction_factors.append(duration_satisfaction)
return statistics.mean(satisfaction_factors)
```
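The excerpt above references variables computed elsewhere in the hook; a self-contained version of the same estimation might look like this (input fields and the duration scoring are assumptions, not the hook's exact code):

```python
import statistics

def estimate_user_satisfaction(context: dict) -> float:
    """Estimate satisfaction from 0.0-1.0 session scores."""
    factors = [1.0 - context.get("error_rate", 0.0),   # error rate impact
               context.get("productivity", 0.5)]        # productivity impact
    if context.get("superclaude_enabled"):
        factors.append(context.get("effectiveness", 0.5))  # framework effectiveness impact
    # Duration satisfaction peaks in the 15-60 minute window described above.
    minutes = context.get("duration_minutes", 30)
    factors.append(1.0 if 15 <= minutes <= 60 else 0.5)
    return statistics.mean(factors)
```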
## Error Handling
### Graceful Degradation
When errors occur during hook execution:
```python
except Exception as e:
log_error("stop", str(e), {"session_data": session_data})
return self._create_fallback_report(session_data, str(e))
```
### Fallback Reporting
```yaml
fallback_report:
session_completed: false
error: "Analysis engine failure"
fallback_mode: true
analytics:
performance_metrics:
overall_score: 0.0
persistence:
persistence_enabled: false
```
### Recovery Strategies
- **Analytics Failure**: Provide basic session summary
- **Persistence Failure**: Continue with recommendations generation
- **Learning Engine Error**: Skip learning consolidation, continue with core analytics
- **Complete Failure**: Return minimal session completion report
## Performance Optimization
### Efficiency Strategies
- **Lazy Loading**: Components initialized only when needed
- **Batch Processing**: Multiple analytics operations combined
- **Compression**: Large session data automatically compressed
- **Caching**: Learning insights cached for reuse
### Resource Management
- **Memory Optimization**: Session cleanup after processing
- **Storage Efficiency**: Old sessions automatically pruned
- **Processing Time**: <200ms target with continuous monitoring
- **Token Efficiency**: Compressed analytics data when appropriate
## Future Enhancements
### Planned Features
- **Cross-Session Analytics**: Performance trends across multiple sessions
- **Predictive Recommendations**: ML-based optimization suggestions
- **Real-Time Monitoring**: Live session analytics during execution
- **Collaborative Learning**: Shared learning patterns across users
- **Advanced Compression**: Context-aware compression algorithms
### Integration Opportunities
- **Dashboard Integration**: Real-time analytics visualization
- **Notification System**: Alerts for performance degradation
- **API Endpoints**: Session analytics via REST API
- **Export Capabilities**: Analytics data export for external analysis
---
*The Stop Hook completes session management in the SuperClaude Framework, providing session analytics, learning consolidation, and intelligent persistence to support continuous improvement of user productivity.*

# Subagent Stop Hook Documentation
## Purpose
The `subagent_stop` hook analyzes subagent task completion and provides delegation effectiveness measurement after subagent operations. It implements MODE_Task_Management delegation coordination analytics for multi-agent collaboration optimization.
**Core Implementation**: Measures delegation effectiveness, analyzes cross-agent coordination patterns, and optimizes wave orchestration strategies with a target execution time of <150ms.
## Execution Context
The subagent_stop hook runs after subagent operations complete in Claude Code. According to `settings.json`, it has a 15-second timeout and executes via: `python3 ~/.claude/hooks/subagent_stop.py`
**Execution Triggers:**
- Individual subagent task completion
- Multi-agent coordination end
- Wave orchestration completion
- Delegation strategy assessment
**Actual Processing:**
1. Receives subagent completion data via stdin (JSON)
2. Analyzes delegation effectiveness and coordination patterns
3. Measures multi-agent collaboration success
4. Records learning events for delegation optimization
5. Outputs coordination analytics via stdout (JSON)
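The stdin/stdout contract described above can be sketched as a minimal hook skeleton. This is a simplified illustration — the real `subagent_stop.py` performs full coordination analytics:

```python
import json
import sys
import time

def process_subagent_stop(subagent_data: dict) -> dict:
    """Placeholder analysis producing a minimal coordination report."""
    start = time.time()
    report = {
        "subagent_id": subagent_data.get("subagent_id", "unknown"),
        "task_id": subagent_data.get("task_id", "unknown"),
        "delegation_success": subagent_data.get("task_success", False),
    }
    report["coordination_analysis_time_ms"] = (time.time() - start) * 1000
    return report

def main() -> None:
    data = json.load(sys.stdin)                         # completion data arrives on stdin
    json.dump(process_subagent_stop(data), sys.stdout)  # analytics leave on stdout
```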
## Performance Target
**Target Execution Time: <150ms**
The hook maintains strict performance requirements to ensure minimal overhead during delegation analytics:
```python
# Performance configuration
self.performance_target_ms = config_loader.get_hook_config('subagent_stop', 'performance_target_ms', 150)
# Performance tracking
execution_time = (time.time() - start_time) * 1000
coordination_report['performance_metrics'] = {
'coordination_analysis_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'coordination_efficiency': self._calculate_coordination_efficiency(context, execution_time)
}
```
**Performance Optimization Features:**
- **Fast Context Extraction**: Efficient subagent data parsing and context enrichment
- **Streamlined Analytics**: Optimized delegation effectiveness calculations
- **Batched Operations**: Grouped analysis operations for efficiency
- **Cached Learning**: Reuse of previous coordination patterns for faster analysis
## Delegation Analytics
The hook measures **delegation effectiveness** along multiple analytical dimensions:
### Task Completion Analysis
```python
def _analyze_task_completion(self, context: dict) -> dict:
"""Analyze task completion performance."""
task_analysis = {
'completion_success': context.get('task_success', False),
'completion_quality': context.get('output_quality', 0.0),
'completion_efficiency': context.get('resource_efficiency', 0.0),
'completion_time_performance': 0.0,
'success_factors': [],
'improvement_areas': []
}
```
**Key Metrics:**
- **Completion Success Rate**: Binary success/failure tracking for delegated tasks
- **Output Quality Assessment**: Quality scoring (0.0-1.0) based on validation results and error indicators
- **Resource Efficiency**: Memory, CPU, and time utilization effectiveness measurement
- **Time Performance**: Actual vs. expected execution time analysis
- **Success Factor Identification**: Patterns that lead to successful delegation outcomes
- **Improvement Area Detection**: Areas requiring optimization in future delegations
### Delegation Effectiveness Measurement
```python
def _analyze_delegation_effectiveness(self, context: dict, task_analysis: dict) -> dict:
"""Analyze effectiveness of task delegation."""
delegation_analysis = {
'delegation_strategy': context.get('delegation_strategy', 'unknown'),
'delegation_success': context.get('task_success', False),
'delegation_efficiency': 0.0,
'coordination_overhead': 0.0,
'parallel_benefit': 0.0,
'delegation_value': 0.0
}
```
**Delegation Strategies Analyzed:**
- **Files Strategy**: Individual file-based delegation effectiveness
- **Folders Strategy**: Directory-level delegation performance
- **Auto Strategy**: Intelligent delegation strategy effectiveness
- **Custom Strategies**: User-defined delegation pattern analysis
**Effectiveness Dimensions:**
- **Delegation Efficiency**: Ratio of productive work to coordination overhead
- **Coordination Overhead**: Time and resource cost of agent coordination
- **Parallel Benefit**: Actual speedup achieved through parallel execution
- **Overall Delegation Value**: Composite score weighing quality, efficiency, and parallel benefits
## Wave Orchestration
The hook provides **multi-agent coordination analysis** for wave-based task orchestration:
### Wave Coordination Success
```python
def _analyze_coordination_patterns(self, context: dict, delegation_analysis: dict) -> dict:
"""Analyze coordination patterns and effectiveness."""
coordination_analysis = {
'coordination_strategy': 'unknown',
'synchronization_effectiveness': 0.0,
'data_flow_efficiency': 0.0,
'wave_coordination_success': 0.0,
'cross_agent_learning': 0.0,
'coordination_patterns_detected': []
}
```
**Wave Orchestration Features:**
- **Progressive Enhancement**: Iterative improvement through multiple coordination waves
- **Systematic Analysis**: Comprehensive methodical analysis across wave cycles
- **Adaptive Coordination**: Dynamic strategy adjustment based on wave performance
- **Enterprise Orchestration**: Large-scale coordination for complex multi-agent operations
### Wave Performance Metrics
```python
def _update_wave_orchestration_metrics(self, context: dict, coordination_analysis: dict) -> dict:
"""Update wave orchestration performance metrics."""
wave_metrics = {
'wave_performance': 0.0,
'orchestration_efficiency': 0.0,
'wave_learning_value': 0.0,
'next_wave_recommendations': []
}
```
**Wave Strategy Analysis:**
- **Wave Position Tracking**: Current position within multi-wave coordination
- **Inter-Wave Communication**: Data flow and synchronization between waves
- **Wave Success Metrics**: Performance measurement across wave cycles
- **Orchestration Efficiency**: Resource utilization effectiveness in wave coordination
## Cross-Agent Learning
The hook implements **learning mechanisms** for continuous delegation improvement:
### Learning Event Recording
```python
def _record_coordination_learning(self, context: dict, delegation_analysis: dict,
optimization_insights: dict):
"""Record coordination learning for future optimization."""
# Record delegation effectiveness
self.learning_engine.record_learning_event(
LearningType.PERFORMANCE_OPTIMIZATION,
AdaptationScope.PROJECT,
context,
{
'delegation_strategy': context.get('delegation_strategy'),
'task_type': context.get('task_type'),
'delegation_value': delegation_analysis['delegation_value'],
'coordination_overhead': delegation_analysis['coordination_overhead'],
'parallel_benefit': delegation_analysis['parallel_benefit']
},
delegation_analysis['delegation_value'],
0.8,
{'hook': 'subagent_stop', 'coordination_learning': True}
)
```
**Learning Categories:**
- **Performance Optimization**: Delegation strategy effectiveness patterns
- **Operation Patterns**: Successful task completion patterns
- **Coordination Patterns**: Effective multi-agent coordination strategies
- **Error Recovery**: Learning from delegation failures and recovery strategies
**Learning Scopes:**
- **Project-Level Learning**: Delegation patterns specific to current project
- **User-Level Learning**: Cross-project delegation preferences and patterns
- **System-Level Learning**: Framework-wide coordination optimization patterns
## Parallel Execution Tracking
The hook provides comprehensive **parallel operation performance analysis**:
### Parallel Benefit Calculation
```python
# Calculate parallel benefit
parallel_tasks = context.get('parallel_tasks', [])
if len(parallel_tasks) > 1:
# Estimate parallel benefit based on task coordination
parallel_efficiency = context.get('parallel_efficiency', 1.0)
theoretical_speedup = len(parallel_tasks)
actual_speedup = theoretical_speedup * parallel_efficiency
delegation_analysis['parallel_benefit'] = actual_speedup / theoretical_speedup
```
**Parallel Performance Metrics:**
- **Theoretical vs. Actual Speedup**: Comparison of expected and achieved parallel performance
- **Parallel Efficiency**: Effectiveness of parallel task coordination
- **Synchronization Overhead**: Cost of coordinating parallel operations
- **Resource Contention Analysis**: Impact of resource sharing on parallel performance
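Restated as a standalone function for inspection (a sketch mirroring the excerpt above, with assumed field names):

```python
def parallel_metrics(n_tasks: int, parallel_efficiency: float) -> dict:
    """Compute theoretical vs. actual speedup and the resulting parallel benefit."""
    if n_tasks <= 1:
        return {"theoretical_speedup": 1.0, "actual_speedup": 1.0, "parallel_benefit": 0.0}
    theoretical = float(n_tasks)
    actual = theoretical * parallel_efficiency
    # As in the excerpt, the actual/theoretical ratio reduces to the efficiency itself.
    return {"theoretical_speedup": theoretical,
            "actual_speedup": actual,
            "parallel_benefit": actual / theoretical}
```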
### Coordination Pattern Detection
```python
# Detect coordination patterns
if delegation_analysis['delegation_value'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('effective_delegation')
if coordination_analysis['synchronization_effectiveness'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('efficient_synchronization')
if coordination_analysis['wave_coordination_success'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('successful_wave_orchestration')
```
**Pattern Categories:**
- **Effective Delegation**: High-value delegation strategies
- **Efficient Synchronization**: Optimal coordination mechanisms
- **Successful Wave Orchestration**: High-performing wave coordination patterns
- **Resource Optimization**: Efficient resource utilization patterns
## Configuration
The hook is configured through `superclaude-config.json` with comprehensive settings for delegation analytics:
### Core Configuration
```json
{
"hooks": {
"subagent_stop": {
"enabled": true,
"priority": 7,
"performance_target_ms": 150,
"delegation_analytics": {
"enabled": true,
"strategy_analysis": ["files", "folders", "auto"],
"effectiveness_threshold": 0.6,
"coordination_overhead_threshold": 0.3
},
"wave_orchestration": {
"enabled": true,
"wave_strategies": ["progressive", "systematic", "adaptive", "enterprise"],
"success_threshold": 0.7,
"learning_enabled": true
},
"parallel_tracking": {
"efficiency_threshold": 0.7,
"synchronization_tracking": true,
"resource_contention_analysis": true
},
"learning_configuration": {
"coordination_learning": true,
"pattern_detection": true,
"cross_agent_learning": true,
"performance_learning": true
}
}
}
}
```
### Task Management Configuration
```json
{
"session": {
"task_management": {
"delegation_strategies": ["files", "folders", "auto"],
"wave_orchestration": {
"enabled": true,
"strategies": ["progressive", "systematic", "adaptive", "enterprise"],
"complexity_threshold": 0.4,
"min_wave_tasks": 3
},
"parallel_coordination": {
"max_parallel_agents": 7,
"synchronization_timeout_ms": 5000,
"resource_sharing_enabled": true
},
"learning_integration": {
"delegation_learning": true,
"wave_learning": true,
"cross_session_learning": true
}
}
}
}
```
## MODE_Task_Management Integration
The hook implements **MODE_Task_Management** through comprehensive integration with the task management framework:
### Task Management Layer Integration
```python
# Load task management configuration
self.task_config = config_loader.get_section('session', 'task_management', {})
# Integration with task management layers
# Layer 1: TodoRead/TodoWrite (Session Tasks) - Real-time state management
# Layer 2: /task Command (Project Management) - Cross-session persistence
# Layer 3: /spawn Command (Meta-Orchestration) - Complex multi-domain operations
# Layer 4: /loop Command (Iterative Enhancement) - Progressive refinement workflows
```
**Framework Integration Points:**
- **Session Task Tracking**: Integration with TodoWrite for task completion analytics
- **Project Task Coordination**: Cross-session task management integration
- **Meta-Orchestration**: Complex multi-domain operation coordination
- **Iterative Enhancement**: Progressive refinement and quality improvement cycles
### Auto-Activation Patterns
The hook supports MODE_Task_Management auto-activation patterns:
```python
# Auto-activation triggers from MODE_Task_Management:
# - Sub-Agent Delegation: >2 directories OR >3 files OR complexity >0.4
# - Wave Mode: complexity ≥0.4 AND files >3 AND operation_types >2
# - Loop Mode: polish, refine, enhance, improve keywords detected
```
**Detection Patterns:**
- **Multi-Step Operations**: 3+ step sequences with dependency analysis
- **Complexity Thresholds**: Operations exceeding 0.4 complexity score
- **File Count Triggers**: 3+ files for delegation, 2+ directories for coordination
- **Performance Opportunities**: Auto-detect parallelizable operations with time estimates
## Coordination Effectiveness
The hook provides comprehensive **success metrics for delegation** through multiple measurement dimensions:
### Overall Effectiveness Calculation
```python
'performance_summary': {
'overall_effectiveness': (
task_analysis['completion_quality'] * 0.4 +
delegation_analysis['delegation_value'] * 0.3 +
coordination_analysis['synchronization_effectiveness'] * 0.3
),
'delegation_success': delegation_analysis['delegation_value'] > 0.6,
'coordination_success': coordination_analysis['synchronization_effectiveness'] > 0.7,
'learning_value': wave_metrics.get('wave_learning_value', 0.5)
}
```
**Effectiveness Dimensions:**
- **Task Quality (40%)**: Output quality and completion success
- **Delegation Value (30%)**: Effectiveness of delegation strategy and execution
- **Coordination Success (30%)**: Synchronization and coordination effectiveness
### Success Thresholds
```python
# Success criteria
delegation_success = delegation_analysis['delegation_value'] > 0.6
coordination_success = coordination_analysis['synchronization_effectiveness'] > 0.7
wave_success = wave_metrics['wave_performance'] > 0.8
```
**Performance Benchmarks:**
- **Delegation Success**: >60% delegation value threshold
- **Coordination Success**: >70% synchronization effectiveness threshold
- **Wave Success**: >80% wave performance threshold
- **Overall Effectiveness**: Composite score incorporating all dimensions
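The weights and thresholds above combine into the performance summary's composite score; a minimal sketch (function signature is illustrative):

```python
def overall_effectiveness(completion_quality: float, delegation_value: float,
                          synchronization_effectiveness: float) -> dict:
    """Weighted composite score (40/30/30) plus the success flags from the benchmarks."""
    score = (completion_quality * 0.4 +
             delegation_value * 0.3 +
             synchronization_effectiveness * 0.3)
    return {"overall_effectiveness": score,
            "delegation_success": delegation_value > 0.6,
            "coordination_success": synchronization_effectiveness > 0.7}
```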
### Optimization Recommendations
```python
def _generate_optimization_insights(self, context: dict, task_analysis: dict,
delegation_analysis: dict, coordination_analysis: dict) -> dict:
"""Generate optimization insights for future delegations."""
insights = {
'delegation_optimizations': [],
'coordination_improvements': [],
'wave_strategy_recommendations': [],
'performance_enhancements': [],
'learning_opportunities': []
}
```
**Recommendation Categories:**
- **Delegation Optimizations**: Alternative strategies, overhead reduction, task partitioning improvements
- **Coordination Improvements**: Synchronization mechanism optimization, data exchange pattern improvements
- **Wave Strategy Recommendations**: Orchestration strategy adjustments, task distribution optimization
- **Performance Enhancements**: Execution speed optimization, resource utilization improvements
- **Learning Opportunities**: Pattern recognition, cross-agent learning, continuous improvement areas
## Error Handling and Resilience
The hook implements robust error handling with graceful degradation:
```python
def _create_fallback_report(self, subagent_data: dict, error: str) -> dict:
"""Create fallback coordination report on error."""
return {
'subagent_id': subagent_data.get('subagent_id', 'unknown'),
'task_id': subagent_data.get('task_id', 'unknown'),
'completion_timestamp': time.time(),
'error': error,
'fallback_mode': True,
'task_completion': {
'success': False,
'quality_score': 0.0,
'efficiency_score': 0.0,
'error_occurred': True
}
}
```
**Error Recovery Strategies:**
- **Graceful Degradation**: Fallback coordination reports when analysis fails
- **Context Preservation**: Maintain essential coordination data even during errors
- **Error Logging**: Comprehensive error tracking for debugging and improvement
- **Performance Monitoring**: Continue performance tracking even in error conditions
## Integration with SuperClaude Framework
The hook integrates with the broader SuperClaude framework:
### Framework Components
- **Learning Engine Integration**: Records coordination patterns for continuous improvement
- **Pattern Detection**: Identifies successful delegation and coordination patterns
- **MCP Intelligence**: Coordinates with MCP servers for enhanced analysis
- **Compression Engine**: Optimizes data storage and transfer for coordination analytics
- **Framework Logic**: Implements SuperClaude operational principles and patterns
### Quality Gates Integration
The hook contributes to SuperClaude's 8-step quality validation cycle:
- **Step 2.5**: Task management validation during orchestration operations
- **Step 7.5**: Session completion verification and summary documentation
- **Continuous**: Real-time metrics collection and performance monitoring
- **Post-Session**: Comprehensive session analytics and completion reporting
### Future Enhancements
Planned improvements for enhanced delegation coordination:
- **Predictive Delegation**: ML-based delegation strategy recommendation
- **Cross-Project Learning**: Delegation pattern sharing across projects
- **Real-Time Optimization**: Dynamic delegation adjustment during execution
- **Advanced Wave Strategies**: More sophisticated wave orchestration patterns
- **Resource Prediction**: Predictive resource allocation for delegated tasks

# Installation Guide
Framework-Hooks provides intelligent session management for Claude Code through Python hooks that run at specific lifecycle events.
## Prerequisites
- Python 3.8+ (hook scripts are invoked via `python3`)
- Claude Code application
- Write access to your system's hook installation directory
## Installation Steps
### 1. Verify Python Installation
```bash
python3 --version
# Should show Python 3.8 or higher
```
### 2. Clone or Extract Framework-Hooks
Place the Framework-Hooks directory in your SuperClaude installation:
```
YourProject/
├── SuperClaude/
│ └── Framework-Hooks/ # This repository
└── other-files...
```
### 3. Install Hook Scripts
Framework-Hooks includes pre-configured hook registration files:
- `settings.json` - Claude Code hook configuration
- `superclaude-config.json` - SuperClaude framework settings
These files configure 7 hooks to run at specific lifecycle events:
- `session_start.py` - Session initialization (<50ms target)
- `pre_tool_use.py` - Tool preparation (<200ms target)
- `post_tool_use.py` - Tool usage recording (<100ms target)
- `pre_compact.py` - Context compression (<150ms target)
- `notification.py` - Notification handling (<50ms target)
- `stop.py` - Session cleanup (<100ms target)
- `subagent_stop.py` - Subagent coordination (<100ms target)
### 4. Directory Structure Verification
After installation, verify this structure exists:
```
Framework-Hooks/
├── hooks/
│ ├── session_start.py
│ ├── pre_tool_use.py
│ ├── post_tool_use.py
│ ├── pre_compact.py
│ ├── notification.py
│ ├── stop.py
│ ├── subagent_stop.py
│ └── shared/ # 9 shared modules
│ ├── framework_logic.py
│ ├── compression_engine.py
│ ├── learning_engine.py
│ ├── mcp_intelligence.py
│ ├── pattern_detection.py
│ ├── intelligence_engine.py
│ ├── logger.py
│ ├── yaml_loader.py
│ └── validate_system.py
├── config/ # 12+ YAML configuration files
│ ├── session.yaml
│ ├── performance.yaml
│ ├── compression.yaml
│ ├── modes.yaml
│ ├── mcp_orchestration.yaml
│ ├── orchestrator.yaml
│ ├── logging.yaml
│ └── validation.yaml
├── patterns/ # 3-tier pattern system
│ ├── minimal/ # Basic patterns (3-5KB each)
│ ├── dynamic/ # Feature-specific (8-12KB each)
│ └── learned/ # User adaptations (10-20KB each)
├── cache/ # Runtime cache directory
└── docs/ # Documentation
```
### 5. Configuration Check
The system ships with conservative defaults:
- **Logging**: Disabled by default (`logging.yaml` has `enabled: false`)
- **Performance targets**: session_start <50ms, pre_tool_use <200ms
- **Timeouts**: 10-15 seconds per hook execution
- **All hooks enabled**: via settings.json configuration
### 6. Test Installation
Run the validation system to verify installation:
```bash
cd Framework-Hooks/hooks/shared
python3 validate_system.py --check-installation
```
This will verify:
- Python dependencies available
- All hook files are executable
- Configuration files are valid YAML
- Required directories exist
- Shared modules can be imported
## Verification
### Quick Test
Start a Claude Code session. If logging is enabled (edit `config/logging.yaml` to enable it), you should see hook execution recorded in the logs.
### Check Hook Registration
The hooks register automatically through:
- `settings.json` - Defines 7 hooks with 10-15 second timeouts
- Commands like `python3 ~/.claude/hooks/session_start.py`
- Universal matcher `"*"` applies to all sessions
### Performance Verification
Hooks should execute within performance targets:
- Session start: <50ms
- Pre tool use: <200ms
- Post tool use: <100ms
- Other hooks: <100-150ms each
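A measured execution time can be checked against these targets with a small helper. This sketch simply encodes the target table above; it is not part of the shipped validation tooling:

```python
PERFORMANCE_TARGETS_MS = {  # per-hook targets from the list above
    "session_start": 50, "pre_tool_use": 200, "post_tool_use": 100,
    "pre_compact": 150, "notification": 50, "stop": 100, "subagent_stop": 100,
}

def check_target(hook_name: str, execution_time_ms: float) -> bool:
    """Return True when a measured hook run met its performance target."""
    return execution_time_ms < PERFORMANCE_TARGETS_MS[hook_name]
```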
## Configuration
### Default Settings
All hooks start **enabled** with conservative defaults:
- Logging **disabled** (`config/logging.yaml`)
- Error-only log level
- 30-day log retention
- Privacy-safe logging (sanitizes user content)
### Enable Logging (Optional)
To see hook activity, edit `config/logging.yaml`:
```yaml
logging:
enabled: true
level: "INFO" # or DEBUG for verbose output
```
### Pattern System
The 3-tier pattern system loads automatically:
- **minimal/**: Basic project detection (always loaded)
- **dynamic/**: Feature-specific patterns (loaded on demand)
- **learned/**: User-specific adaptations (evolve with usage)
## Next Steps
After installation:
1. **Normal Usage**: Start Claude Code sessions normally - hooks run automatically
2. **Monitor Performance**: Check that hooks execute within target times
3. **Review Logs**: Enable logging to see hook decisions and learning
4. **Customize Patterns**: Add project-specific patterns to `patterns/` directories
## What Framework-Hooks Does
Once installed, the system automatically:
1. **Detects project type** and loads appropriate patterns
2. **Activates relevant modes** and MCP servers based on context
3. **Applies learned preferences** from previous sessions
4. **Optimizes performance** based on resource constraints
5. **Learns from usage patterns** to improve future sessions
The system operates transparently - no manual invocation required.
## Troubleshooting
If you encounter issues, see [TROUBLESHOOTING.md](TROUBLESHOOTING.md) for common problems and solutions.

# Framework-Hooks Integration with SuperClaude
## Overview
The Framework-Hooks system implements SuperClaude framework patterns through Claude Code lifecycle hooks. The system executes 7 Python hooks during session lifecycle events to provide mode detection, MCP server routing, and configuration management.
## 1. Hook Implementation Architecture
### Lifecycle Hook Integration
The Framework-Hooks system implements SuperClaude patterns through 7 Python hooks:
```
┌─────────────────────────────────────────────────────────────┐
│ Claude Code Runtime │
├─────────────────────────────────────────────────────────────┤
│ SessionStart → PreTool → PostTool → PreCompact → Notify │
│ ↓ ↓ ↓ ↓ ↓ │
│ Mode/MCP Server Learning Token Pattern │
│ Detection Selection Tracking Compression Updates │
└─────────────────────────────────────────────────────────────┘
```
### SuperClaude Framework Implementation
Each hook implements specific SuperClaude framework aspects:
- **session_start.py**: MODE detection patterns from MODE_*.md files
- **pre_tool_use.py**: MCP server routing from ORCHESTRATOR.md patterns
- **post_tool_use.py**: Learning and effectiveness tracking
- **pre_compact.py**: Token efficiency patterns from MODE_Token_Efficiency.md
- **stop.py/subagent_stop.py**: Session analytics and coordination tracking
### Configuration Integration
Hook behavior is configured through:
- **settings.json**: Hook timeouts and execution commands
- **performance.yaml**: Performance targets (50ms session_start, 200ms pre_tool_use, etc.)
- **modes.yaml**: Mode detection patterns and triggers
- **pattern files**: Project-specific behavior in minimal/, dynamic/, learned/ directories
## 2. Hook Lifecycle Integration
### Hook Execution Flow
The hooks execute during specific Claude Code lifecycle events:
```yaml
Hook Execution Sequence:
1. SessionStart (10s timeout)
- Detects project type (Python, React, etc.)
- Loads appropriate pattern files
- Activates SuperClaude modes based on user input
- Routes to MCP servers
2. PreToolUse (15s timeout)
- Analyzes operation type and complexity
- Selects optimal MCP servers
- Applies performance optimizations
3. PostToolUse (10s timeout)
- Validates operation results
- Records learning data and effectiveness metrics
- Updates user preferences
4. PreCompact (15s timeout)
- Applies token compression strategies
- Preserves framework content (0% compression)
- Uses symbols and abbreviations for efficiency
5. Notification (10s timeout)
- Updates pattern caches
- Refreshes configurations
- Handles runtime notifications
6. Stop (15s timeout)
- Generates session analytics
- Saves learning data to files
- Creates performance metrics
7. SubagentStop (15s timeout)
- Tracks delegation performance
- Records coordination effectiveness
```
### Integration Points
- **Pattern Loading**: Minimal patterns loaded during session_start for project-specific behavior
- **Learning Persistence**: User preferences and effectiveness data saved to learned/ directory
- **Performance Monitoring**: Hook execution times tracked against targets in performance.yaml
- **Configuration Updates**: YAML configuration changes applied during runtime
## 3. MCP Server Coordination
### Server Routing Logic
The pre_tool_use hook routes operations to MCP servers based on detected patterns:
```yaml
MCP Server Selection:
  Magic:
    - Triggers: UI keywords (component, button, form, modal)
    - Use case: UI component generation and design
  Sequential:
    - Triggers: Analysis keywords (analyze, debug, complex)
    - Use case: Multi-step reasoning and systematic analysis
  Context7:
    - Triggers: Documentation keywords (library, framework, api)
    - Use case: Library documentation and best practices
  Playwright:
    - Triggers: Testing keywords (test, e2e, browser)
    - Use case: Browser automation and testing
  Morphllm vs Serena:
    - Morphllm: Simple edits (<10 files, token optimization)
    - Serena: Complex operations (>5 files, semantic analysis)
  Auto-activation:
    - Project patterns trigger appropriate server combinations
    - User preferences influence server selection
    - Fallback strategies for unavailable servers
```
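The keyword-triggered selection above can be sketched in a few lines. This is a simplified illustration, not the actual `mcp_intelligence.py` API; the trigger lists are abridged from the table, substring matching stands in for the real pattern engine, and the <10-file threshold for Morphllm is one reading of the overlapping ranges:

```python
# Trigger keywords abridged from the routing table above.
MCP_TRIGGERS = {
    "magic":      ["component", "button", "form", "modal"],
    "sequential": ["analyze", "debug", "complex"],
    "context7":   ["library", "framework", "api"],
    "playwright": ["test", "e2e", "browser"],
}


def select_mcp_servers(user_input: str, files_touched: int = 0) -> list[str]:
    """Return MCP servers whose trigger keywords appear in the request."""
    text = user_input.lower()
    servers = [name for name, words in MCP_TRIGGERS.items()
               if any(word in text for word in words)]

    # Edit-scope heuristic: Morphllm for small edits, Serena for larger ones.
    if files_touched:
        servers.append("morphllm" if files_touched < 10 else "serena")
    return servers
```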
### Server Configuration
Server routing is configured through:
- **mcp_intelligence.py** (31KB) - Core routing logic and server capability matching
- **mcp_activation.yaml** - Dynamic patterns for server activation
- **Project patterns** - Server preferences by project type (e.g., python_project.yaml specifies Serena + Context7)
- **Learning data** - User preferences for server selection stored in learned/ directory
## 4. SuperClaude Mode Integration
### Mode Detection
The session_start hook detects user intent and activates SuperClaude modes:
```yaml
Mode Detection Patterns:
  Brainstorming Mode:
    - Triggers: "not sure", "thinking about", "explore", ambiguous requests
    - Implementation: Activates interactive requirements discovery
  Task Management Mode:
    - Triggers: Multi-file operations, "build", "implement", complexity >0.4
    - Implementation: Enables delegation and wave orchestration
  Token Efficiency Mode:
    - Triggers: Resource constraints >75%, "--uc", "brief"
    - Implementation: Activates compression in pre_compact hook
  Introspection Mode:
    - Triggers: "analyze reasoning", meta-cognitive requests
    - Implementation: Enables framework compliance analysis
```
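The trigger table above reduces to keyword and threshold checks. A minimal sketch of that detection logic (trigger phrases abridged from the table; the function name and context keys are illustrative, not the session_start.py internals):

```python
# Trigger phrases abridged from the detection table above.
MODE_TRIGGERS = {
    "brainstorming": ["not sure", "thinking about", "explore"],
    "introspection": ["analyze reasoning"],
}


def detect_modes(user_input: str, context: dict) -> list[str]:
    """Return SuperClaude modes whose triggers match the input or context."""
    text = user_input.lower()
    modes = [mode for mode, phrases in MODE_TRIGGERS.items()
             if any(phrase in text for phrase in phrases)]

    # Context-driven activations, thresholds taken from the table above.
    if context.get("complexity_score", 0.0) > 0.4 or "implement" in text or "build" in text:
        modes.append("task_management")
    if context.get("resource_usage_percent", 0) > 75 or "--uc" in text or "brief" in text:
        modes.append("token_efficiency")
    return modes
```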
### Mode Implementation
Modes are implemented across multiple hooks:
- **session_start.py**: Detects mode triggers and sets activation flags
- **pre_compact.py**: Implements token efficiency compression strategies
- **post_tool_use.py**: Validates mode-specific behaviors and tracks effectiveness
- **stop.py**: Records mode usage analytics and learning data
## 5. Configuration and Validation
### Configuration Management
The system uses 19 YAML configuration files to define behavior:
- **performance.yaml** (345 lines): Performance targets and monitoring thresholds
- **modes.yaml**: Mode detection patterns and activation triggers
- **validation.yaml**: Quality gate definitions and validation rules
- **compression.yaml**: Token efficiency settings and compression levels
- **session.yaml**: Session lifecycle and analytics configuration
### Validation Implementation
Validation is distributed across hooks:
- **pre_tool_use.py**: Basic validation before tool execution
- **post_tool_use.py**: Results validation and quality assessment
- **validate_system.py** (32KB): System health checks and validation utilities
- **stop.py**: Final session validation and analytics generation
### Learning and Analytics
The system tracks effectiveness and adapts behavior:
- **learning_engine.py** (40KB): Records user preferences and operation effectiveness
- **Learned patterns**: Stored in patterns/learned/ directory
- **Performance tracking**: Hook execution times and success rates
- **User preferences**: Saved across sessions for personalized behavior
## 6. Session Management
### Session Integration
Framework-Hooks integrates with Claude Code session lifecycle:
- **Session Start**: session_start hook runs when Claude Code sessions begin
- **Tool Execution**: pre/post_tool_use hooks run for each tool operation
- **Token Optimization**: pre_compact hook runs during token compression
- **Session End**: stop hook runs when sessions complete
### Data Persistence
Session data is persisted through:
- **Learning Records**: User preferences saved to patterns/learned/ directory
- **Performance Metrics**: Hook execution times and success rates logged
- **Session Analytics**: Summary data generated by stop hook
- **Pattern Updates**: Dynamic patterns updated based on usage
### Performance Monitoring
The system tracks performance against configuration targets:
- **Hook Timing**: Each hook execution timed and compared to performance.yaml targets
- **Resource Usage**: Memory and CPU monitoring during hook execution
- **Success Rates**: Operation effectiveness tracked by learning_engine.py
- **User Satisfaction**: Implicit feedback through continued usage patterns
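Hook timing against configured targets amounts to a wrapper that measures execution and flags overruns. A sketch of that check, assuming targets in milliseconds (the two values shown come from this documentation's stated targets; the dict would really be loaded from performance.yaml, and the function name is illustrative):

```python
import time
from typing import Callable

# Per-hook targets in ms; in the real system these come from performance.yaml.
HOOK_TARGETS_MS = {"session_start": 50, "pre_tool_use": 200}


def timed_hook(name: str, hook_fn: Callable[[dict], dict], payload: dict) -> dict:
    """Run a hook and flag executions that exceed their configured target."""
    start = time.perf_counter()
    result = hook_fn(payload)
    elapsed_ms = (time.perf_counter() - start) * 1000

    result["execution_time_ms"] = round(elapsed_ms, 2)
    result["within_target"] = elapsed_ms <= HOOK_TARGETS_MS.get(name, 1000)
    return result
```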
## 7. Pattern System
### Pattern Directory Structure
The system uses a three-tier pattern organization:
```yaml
patterns/
  minimal/    # Essential patterns loaded during session start
    - python_project.yaml: Python project detection and configuration
    - react_project.yaml: React project patterns and MCP routing
  dynamic/    # Runtime patterns for adaptive behavior
    - mode_detection.yaml: SuperClaude mode triggers and activation
    - mcp_activation.yaml: MCP server routing patterns
  learned/    # User preference and effectiveness data
    - user_preferences.yaml: Personal configuration adaptations
    - project_optimizations.yaml: Project-specific learned patterns
```
### Pattern Processing
Pattern loading and application:
- **pattern_detection.py** (45KB): Core pattern recognition and matching logic
- **Session startup**: Minimal patterns loaded based on detected project type
- **Runtime updates**: Dynamic patterns applied during hook execution
- **Learning updates**: Successful patterns saved to learned/ directory for future use
### Pattern Configuration
Patterns define:
- **Project detection**: File patterns and dependency analysis for project type identification
- **MCP server routing**: Which servers to activate for different operation types
- **Mode triggers**: Keywords and contexts that activate SuperClaude modes
- **Performance targets**: Project-specific timing and resource goals
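The three-tier layering above can be illustrated as a merge: minimal patterns first, learned overrides last. A sketch assuming PyYAML and the directory layout shown earlier (the function name and merge order are illustrative; the real loading lives in yaml_loader.py and pattern_detection.py):

```python
from pathlib import Path

import yaml  # PyYAML; yaml_loader.py wraps this with caching in the real system


def load_patterns(patterns_dir: str, project_type: str) -> dict:
    """Merge minimal patterns for the detected project type with learned overrides."""
    base = Path(patterns_dir)
    merged: dict = {}

    # Tier 1: essential project pattern loaded at session start.
    minimal = base / "minimal" / f"{project_type}_project.yaml"
    if minimal.exists():
        merged.update(yaml.safe_load(minimal.read_text()) or {})

    # Tier 3: learned user preferences override defaults when present.
    learned = base / "learned" / "user_preferences.yaml"
    if learned.exists():
        merged.update(yaml.safe_load(learned.read_text()) or {})
    return merged
```

Dynamic-tier patterns are applied at hook runtime rather than merged up front, which keeps session startup within its timing target.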
## 8. Implementation Summary
### System Implementation
The Framework-Hooks system implements SuperClaude framework patterns through:
**Core Components:**
- 7 Python lifecycle hooks (17 Python files total)
- 19 YAML configuration files
- 3-tier pattern system (minimal/dynamic/learned)
- 9 shared modules providing common functionality
**Key Features:**
- Project type detection and pattern-based configuration
- SuperClaude mode activation based on user input patterns
- MCP server routing with fallback strategies
- Token compression with selective framework protection
- Learning system that adapts to user preferences
- Performance monitoring against configured targets
**Integration Points:**
- Claude Code lifecycle hooks via settings.json
- SuperClaude framework mode implementations
- MCP server coordination and routing
- Pattern-based project and operation detection
- Cross-session learning and preference persistence
The system provides a Python-based implementation of SuperClaude framework concepts, enabling intelligent behavior through configuration-driven lifecycle hooks that execute during Claude Code sessions.

# SuperClaude Framework Hooks - Shared Modules Overview
## Architecture Summary
The SuperClaude Framework Hooks shared modules provide the intelligent foundation for all 7 Claude Code hooks. These nine shared modules implement the core SuperClaude framework patterns from RULES.md, PRINCIPLES.md, and ORCHESTRATOR.md, delivering executable intelligence that transforms static configuration into dynamic, adaptive behavior.
## Module Architecture
```
hooks/shared/
├── __init__.py # Module exports and initialization
├── framework_logic.py # Core SuperClaude decision algorithms
├── pattern_detection.py # Pattern matching and mode activation
├── mcp_intelligence.py # MCP server routing and coordination
├── compression_engine.py # Token efficiency and optimization
├── learning_engine.py # Adaptive learning and feedback
├── intelligence_engine.py # Generic YAML pattern interpreter
├── validate_system.py # YAML-driven system validation
├── yaml_loader.py # Configuration loading and management
├── logger.py # Structured logging utilities
└── tests/ # Test suite for shared modules
```
## Core Design Principles
### 1. **Evidence-Based Intelligence**
All modules implement measurable decision-making with metrics, performance targets, and validation cycles. No assumptions without evidence.
### 2. **Adaptive Learning System**
Cross-hook learning engine that continuously improves effectiveness through pattern recognition, user preference adaptation, and performance optimization.
### 3. **Configuration-Driven Behavior**
YAML-based configuration system supporting hot-reload, environment interpolation, and modular includes for flexible deployment.
### 4. **Performance-First Design**
Sub-200ms operation targets with intelligent caching, optimized algorithms, and resource-aware processing.
### 5. **Quality-Gated Operations**
Every operation includes validation, error handling, fallback strategies, and comprehensive logging for reliability.
## Module Responsibilities
### Intelligence Layer
- **framework_logic.py**: Core SuperClaude decision algorithms and validation
- **pattern_detection.py**: Intelligent pattern matching for automatic activation
- **mcp_intelligence.py**: Smart MCP server selection and coordination
- **intelligence_engine.py**: Generic YAML pattern interpreter for hot-reloadable intelligence
### Optimization Layer
- **compression_engine.py**: Token efficiency with quality preservation
- **learning_engine.py**: Continuous adaptation and improvement
### Infrastructure Layer
- **yaml_loader.py**: High-performance configuration management
- **logger.py**: Structured event logging and analysis
- **validate_system.py**: YAML-driven system health validation and diagnostics
## Key Features
### Intelligent Decision Making
- **Complexity Scoring**: 0.0-1.0 complexity assessment for operation routing
- **Risk Assessment**: Low/Medium/High/Critical risk evaluation
- **Performance Estimation**: Time and resource impact prediction
- **Quality Validation**: Multi-step validation with quality scores
### Pattern Recognition
- **Mode Triggers**: Automatic detection of brainstorming, task management, efficiency needs
- **MCP Server Selection**: Context-aware server activation based on operation patterns
- **Persona Detection**: Domain expertise hints for specialized routing
- **Complexity Indicators**: Multi-file, architectural, and system-wide operation detection
### Adaptive Learning
- **User Preference Learning**: Personalization based on effectiveness feedback
- **Operation Pattern Recognition**: Optimization of common workflows
- **Performance Feedback Integration**: Continuous improvement through metrics
- **Cross-Hook Knowledge Sharing**: Shared learning across all hook implementations
### Configuration Management
- **Dual-Format Support**: JSON (Claude Code settings) + YAML (SuperClaude configs)
- **Hot-Reload Capability**: File modification detection with <1s response time
- **Environment Interpolation**: ${VAR} and ${VAR:default} syntax support
- **Modular Configuration**: Include/merge support for complex deployments
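The `${VAR}` / `${VAR:default}` interpolation above is a small regex substitution over the raw config text. A sketch of that mechanism (the function name is illustrative; the real implementation sits in yaml_loader.py):

```python
import os
import re

# Matches ${VAR} and ${VAR:default} as described above.
_VAR = re.compile(r"\$\{(\w+)(?::([^}]*))?\}")


def interpolate_env(text: str) -> str:
    """Replace ${VAR} / ${VAR:default} with environment values before parsing."""
    def _sub(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        # Fall back to the inline default, or empty string if none was given.
        return os.environ.get(name, default if default is not None else "")
    return _VAR.sub(_sub, text)
```

Running interpolation before YAML parsing keeps the syntax uniform across JSON and YAML configs.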
### Performance Optimization
- **Token Compression**: 30-50% reduction with ≥95% quality preservation
- **Intelligent Caching**: Sub-10ms configuration access with change detection
- **Resource Management**: Adaptive behavior based on usage thresholds
- **Parallel Processing**: Coordination strategies for multi-server operations
## Integration Points
### Hook Integration
Each hook imports and uses shared modules for:
```python
from shared import (
    FrameworkLogic,       # Decision making
    PatternDetector,      # Pattern recognition
    MCPIntelligence,      # Server coordination
    CompressionEngine,    # Token optimization
    LearningEngine,       # Adaptive learning
    UnifiedConfigLoader,  # Configuration
)
# Additional modules available for direct import:
from shared.intelligence_engine import IntelligenceEngine # YAML pattern interpreter
from shared.validate_system import YAMLValidationEngine # System health validation
from shared.logger import get_logger # Logging utilities
```
### SuperClaude Framework Compliance
- **RULES.md**: Operational security, validation requirements, systematic approaches
- **PRINCIPLES.md**: Evidence-based decisions, quality standards, error handling
- **ORCHESTRATOR.md**: Intelligent routing, resource management, quality gates
### MCP Server Coordination
- **Context7**: Library documentation and framework patterns
- **Sequential**: Complex analysis and multi-step reasoning
- **Magic**: UI component generation and design systems
- **Playwright**: Testing automation and validation
- **Morphllm**: Intelligent editing with pattern application
- **Serena**: Semantic analysis and project-wide context
## Performance Characteristics
### Operation Timings
- **Configuration Loading**: <10ms (cached), <50ms (reload)
- **Pattern Detection**: <25ms for complex analysis
- **Decision Making**: <15ms for framework logic operations
- **Compression Processing**: <100ms with quality validation
- **Learning Adaptation**: <30ms for preference application
### Memory Efficiency
- **Configuration Cache**: ~2-5KB per config file
- **Pattern Cache**: ~1-3KB per compiled pattern set
- **Learning Records**: ~500B per learning event
- **Compression Cache**: Dynamic based on content size
### Quality Metrics
- **Decision Accuracy**: >90% correct routing decisions
- **Pattern Recognition**: >85% confidence for auto-activation
- **Compression Quality**: ≥95% information preservation
- **Configuration Reliability**: <0.1% cache invalidation errors
## Error Handling Strategy
### Graceful Degradation
- **Module Failures**: Fallback to simpler algorithms
- **Configuration Errors**: Default values with warnings
- **Pattern Recognition Failures**: Manual routing options
- **Learning System Errors**: Continue without adaptation
### Recovery Mechanisms
- **Configuration Reload**: Automatic retry on file corruption
- **Cache Regeneration**: Intelligent cache rebuilding
- **Performance Fallbacks**: Resource constraint adaptation
- **Error Logging**: Comprehensive error context capture
## Usage Patterns
### Basic Hook Integration
```python
# Initialize shared modules
framework_logic = FrameworkLogic()
pattern_detector = PatternDetector()
mcp_intelligence = MCPIntelligence()
# Use in hook implementation
context = {...}
complexity_score = framework_logic.calculate_complexity_score(context)
detection_result = pattern_detector.detect_patterns(user_input, context, operation_data)
activation_plan = mcp_intelligence.create_activation_plan(user_input, context, operation_data)
```
### Advanced Learning Integration
```python
# Record learning events
learning_engine.record_learning_event(
    LearningType.USER_PREFERENCE,
    AdaptationScope.USER,
    context,
    pattern,
    effectiveness_score=0.85
)
# Apply learned adaptations
enhanced_recommendations = learning_engine.apply_adaptations(
    context, base_recommendations
)
```
## Future Enhancements
### Planned Features
- **Multi-Language Support**: Expanded pattern recognition for polyglot projects
- **Cloud Configuration**: Remote configuration management with caching
- **Advanced Analytics**: Deeper learning insights and recommendation engines
- **Real-Time Monitoring**: Live performance dashboards and alerting
### Architecture Evolution
- **Plugin System**: Extensible module architecture for custom intelligence
- **Distributed Learning**: Cross-instance learning coordination
- **Enhanced Caching**: Redis/memcached integration for enterprise deployments
- **API Integration**: REST/GraphQL endpoints for external system integration
## Related Documentation
- **Individual Module Documentation**: See module-specific .md files in this directory
- **Hook Implementation Guides**: /docs/Hooks/ directory
- **Configuration Reference**: /docs/Configuration/ directory
- **Performance Tuning**: /docs/Performance/ directory
---
*This overview provides the architectural foundation for understanding how SuperClaude's intelligent hooks system transforms static configuration into adaptive, evidence-based automation.*

# compression_engine.py - Intelligent Token Optimization Engine
## Overview
The `compression_engine.py` module implements intelligent token optimization through MODE_Token_Efficiency.md algorithms, providing adaptive compression, symbol systems, and quality-gated validation. This module enables 30-50% token reduction while maintaining ≥95% information preservation through selective compression strategies and evidence-based validation.
## Purpose and Responsibilities
### Primary Functions
- **Adaptive Compression**: 5-level compression strategy from minimal to emergency
- **Selective Content Processing**: Framework/user content protection with intelligent classification
- **Symbol Systems**: Mathematical and logical relationship compression using Unicode symbols
- **Abbreviation Systems**: Technical domain abbreviation with context awareness
- **Quality Validation**: Real-time compression effectiveness monitoring with preservation targets
### Intelligence Capabilities
- **Content Type Classification**: Automatic detection of framework vs user vs session content
- **Compression Level Determination**: Context-aware selection of optimal compression level
- **Quality-Gated Processing**: ≥95% information preservation validation
- **Performance Monitoring**: Sub-100ms processing with effectiveness tracking
## Core Classes and Data Structures
### Enumerations
#### CompressionLevel
```python
class CompressionLevel(Enum):
    MINIMAL = "minimal"        # 0-40% compression - Full detail preservation
    EFFICIENT = "efficient"    # 40-70% compression - Balanced optimization
    COMPRESSED = "compressed"  # 70-85% compression - Aggressive optimization
    CRITICAL = "critical"      # 85-95% compression - Maximum compression
    EMERGENCY = "emergency"    # 95%+ compression - Ultra-compression
```
#### ContentType
```python
class ContentType(Enum):
    FRAMEWORK_CONTENT = "framework"  # SuperClaude framework - EXCLUDE
    SESSION_DATA = "session"         # Session metadata - COMPRESS
    USER_CONTENT = "user"            # User project files - PRESERVE
    WORKING_ARTIFACTS = "artifacts"  # Analysis results - COMPRESS
```
### Data Classes
#### CompressionResult
```python
@dataclass
class CompressionResult:
    original_length: int        # Original content length
    compressed_length: int      # Compressed content length
    compression_ratio: float    # Compression ratio achieved
    quality_score: float        # 0.0 to 1.0 quality preservation
    techniques_used: List[str]  # Compression techniques applied
    preservation_score: float   # Information preservation score
    processing_time_ms: float   # Processing time in milliseconds
```
#### CompressionStrategy
```python
@dataclass
class CompressionStrategy:
    level: CompressionLevel                   # Target compression level
    symbol_systems_enabled: bool              # Enable symbol replacements
    abbreviation_systems_enabled: bool        # Enable abbreviation systems
    structural_optimization: bool             # Enable structural optimizations
    selective_preservation: Dict[str, bool]   # Content type preservation rules
    quality_threshold: float                  # Minimum quality threshold
```
## Content Classification System
### classify_content()
```python
def classify_content(self, content: str, metadata: Dict[str, Any]) -> ContentType:
    file_path = metadata.get('file_path', '')
    context_type = metadata.get('context_type', '')
    # Framework content - complete exclusion
    framework_patterns = [
        '~/.claude/',
        '.claude/',
        'SuperClaude/',
        'CLAUDE.md',
        'FLAGS.md',
        'PRINCIPLES.md',
        'ORCHESTRATOR.md',
        'MCP_',
        'MODE_',
        'SESSION_LIFECYCLE.md'
    ]
    for pattern in framework_patterns:
        if pattern in file_path or pattern in content:
            return ContentType.FRAMEWORK_CONTENT
    # Session data - apply compression
    if context_type in ['session_metadata', 'checkpoint_data', 'cache_content']:
        return ContentType.SESSION_DATA
    # Working artifacts - apply compression
    if context_type in ['analysis_results', 'processing_data', 'working_artifacts']:
        return ContentType.WORKING_ARTIFACTS
    # Default to user content preservation
    return ContentType.USER_CONTENT
```
**Classification Logic**:
1. **Framework Content**: Complete exclusion from compression (0% compression)
2. **Session Data**: Session metadata and operational data (apply compression)
3. **Working Artifacts**: Analysis results and processing data (apply compression)
4. **User Content**: Project code, documentation, configurations (minimal compression only)
## Compression Level Determination
### determine_compression_level()
```python
def determine_compression_level(self, context: Dict[str, Any]) -> CompressionLevel:
    resource_usage = context.get('resource_usage_percent', 0)
    conversation_length = context.get('conversation_length', 0)
    user_requests_brevity = context.get('user_requests_brevity', False)
    complexity_score = context.get('complexity_score', 0.0)
    # Emergency compression for critical resource constraints
    if resource_usage >= 95:
        return CompressionLevel.EMERGENCY
    # Critical compression for high resource usage
    if resource_usage >= 85 or conversation_length > 200:
        return CompressionLevel.CRITICAL
    # Compressed level for moderate constraints
    if resource_usage >= 70 or conversation_length > 100 or user_requests_brevity:
        return CompressionLevel.COMPRESSED
    # Efficient level for mild constraints or complex operations
    if resource_usage >= 40 or complexity_score > 0.6:
        return CompressionLevel.EFFICIENT
    # Minimal compression for normal operations
    return CompressionLevel.MINIMAL
```
**Level Selection Criteria**:
- **Emergency (95%+)**: Resource usage ≥95%
- **Critical (85-95%)**: Resource usage ≥85% OR conversation >200 messages
- **Compressed (70-85%)**: Resource usage ≥70% OR conversation >100 OR user requests brevity
- **Efficient (40-70%)**: Resource usage ≥40% OR complexity >0.6
- **Minimal (0-40%)**: Normal operations
## Symbol Systems Framework
### Symbol Mappings
```python
def _load_symbol_mappings(self) -> Dict[str, str]:
    return {
        # Core Logic & Flow
        'leads to': '→', 'implies': '→',
        'transforms to': '⇒', 'converts to': '⇒',
        'rollback': '←', 'reverse': '←',
        'bidirectional': '⇄', 'sync': '⇄',
        'and': '&', 'combine': '&',
        'separator': '|', 'or': '|',
        'define': ':', 'specify': ':',
        'sequence': '»', 'then': '»',
        'therefore': '∴', 'because': '∵',
        'equivalent': '≡', 'approximately': '≈',
        'not equal': '≠',
        # Status & Progress
        'completed': '✅', 'passed': '✅',
        'failed': '❌', 'error': '❌',
        'warning': '⚠️', 'information': 'ℹ️',
        'in progress': '🔄', 'processing': '🔄',
        'waiting': '⏳', 'pending': '⏳',
        'critical': '🚨', 'urgent': '🚨',
        'target': '🎯', 'goal': '🎯',
        'metrics': '📊', 'data': '📊',
        'insight': '💡', 'learning': '💡',
        # Technical Domains
        'performance': '⚡', 'optimization': '⚡',
        'analysis': '🔍', 'investigation': '🔍',
        'configuration': '🔧', 'setup': '🔧',
        'security': '🛡️', 'protection': '🛡️',
        'deployment': '📦', 'package': '📦',
        'design': '🎨', 'frontend': '🎨',
        'network': '🌐', 'connectivity': '🌐',
        'mobile': '📱', 'responsive': '📱',
        'architecture': '🏗️', 'system structure': '🏗️',
        'components': '🧩', 'modular': '🧩'
    }
```
### Symbol Application
```python
def _apply_symbol_systems(self, content: str) -> Tuple[str, List[str]]:
    compressed = content
    techniques = []
    # Apply symbol mappings with word boundary protection
    for phrase, symbol in self.symbol_mappings.items():
        pattern = r'\b' + re.escape(phrase) + r'\b'
        if re.search(pattern, compressed, re.IGNORECASE):
            compressed = re.sub(pattern, symbol, compressed, flags=re.IGNORECASE)
            techniques.append(f"symbol_{phrase.replace(' ', '_')}")
    return compressed, techniques
```
## Abbreviation Systems Framework
### Abbreviation Mappings
```python
def _load_abbreviation_mappings(self) -> Dict[str, str]:
    return {
        # System & Architecture
        'configuration': 'cfg', 'settings': 'cfg',
        'implementation': 'impl', 'code structure': 'impl',
        'architecture': 'arch', 'system design': 'arch',
        'performance': 'perf', 'optimization': 'perf',
        'operations': 'ops', 'deployment': 'ops',
        'environment': 'env', 'runtime context': 'env',
        # Development Process
        'requirements': 'req', 'dependencies': 'deps',
        'packages': 'deps', 'validation': 'val',
        'verification': 'val', 'testing': 'test',
        'quality assurance': 'test', 'documentation': 'docs',
        'guides': 'docs', 'standards': 'std',
        'conventions': 'std',
        # Quality & Analysis
        'quality': 'qual', 'maintainability': 'qual',
        'security': 'sec', 'safety measures': 'sec',
        'error': 'err', 'exception handling': 'err',
        'recovery': 'rec', 'resilience': 'rec',
        'severity': 'sev', 'priority level': 'sev',
        'optimization': 'opt', 'improvement': 'opt'
    }
```
### Abbreviation Application
```python
def _apply_abbreviation_systems(self, content: str) -> Tuple[str, List[str]]:
    compressed = content
    techniques = []
    # Apply abbreviation mappings with context awareness
    for phrase, abbrev in self.abbreviation_mappings.items():
        pattern = r'\b' + re.escape(phrase) + r'\b'
        if re.search(pattern, compressed, re.IGNORECASE):
            compressed = re.sub(pattern, abbrev, compressed, flags=re.IGNORECASE)
            techniques.append(f"abbrev_{phrase.replace(' ', '_')}")
    return compressed, techniques
```
## Structural Optimization
### _apply_structural_optimization()
```python
def _apply_structural_optimization(self, content: str, level: CompressionLevel) -> Tuple[str, List[str]]:
    compressed = content
    techniques = []
    # Remove redundant whitespace
    compressed = re.sub(r'\s+', ' ', compressed)
    compressed = re.sub(r'\n\s*\n', '\n', compressed)
    techniques.append('whitespace_optimization')
    # Aggressive optimizations for higher compression levels
    if level in [CompressionLevel.COMPRESSED, CompressionLevel.CRITICAL, CompressionLevel.EMERGENCY]:
        # Remove redundant words
        compressed = re.sub(r'\b(the|a|an)\s+', '', compressed, flags=re.IGNORECASE)
        techniques.append('article_removal')
        # Simplify common phrases
        phrase_simplifications = {
            r'in order to': 'to',
            r'it is important to note that': 'note:',
            r'please be aware that': 'note:',
            r'it should be noted that': 'note:',
            r'for the purpose of': 'for',
            r'with regard to': 'regarding',
            r'in relation to': 'regarding'
        }
        for pattern, replacement in phrase_simplifications.items():
            if re.search(pattern, compressed, re.IGNORECASE):
                compressed = re.sub(pattern, replacement, compressed, flags=re.IGNORECASE)
                techniques.append(f'phrase_simplification_{replacement}')
    return compressed, techniques
```
## Compression Strategy Creation
### _create_compression_strategy()
```python
def _create_compression_strategy(self, level: CompressionLevel, content_type: ContentType) -> CompressionStrategy:
    level_configs = {
        CompressionLevel.MINIMAL: {
            'symbol_systems': True,   # Changed: Enable basic optimizations even for minimal
            'abbreviations': False,
            'structural': True,       # Changed: Enable basic structural optimization
            'quality_threshold': 0.98
        },
        CompressionLevel.EFFICIENT: {
            'symbol_systems': True,
            'abbreviations': False,
            'structural': True,
            'quality_threshold': 0.95
        },
        CompressionLevel.COMPRESSED: {
            'symbol_systems': True,
            'abbreviations': True,
            'structural': True,
            'quality_threshold': 0.90
        },
        CompressionLevel.CRITICAL: {
            'symbol_systems': True,
            'abbreviations': True,
            'structural': True,
            'quality_threshold': 0.85
        },
        CompressionLevel.EMERGENCY: {
            'symbol_systems': True,
            'abbreviations': True,
            'structural': True,
            'quality_threshold': 0.80
        }
    }
    config = level_configs[level]
    # Adjust for content type
    if content_type == ContentType.USER_CONTENT:
        # More conservative for user content
        config['quality_threshold'] = min(config['quality_threshold'] + 0.1, 1.0)
    return CompressionStrategy(
        level=level,
        symbol_systems_enabled=config['symbol_systems'],
        abbreviation_systems_enabled=config['abbreviations'],
        structural_optimization=config['structural'],
        selective_preservation={},
        quality_threshold=config['quality_threshold']
    )
```
## Quality Validation Framework
### Compression Quality Validation
```python
def _validate_compression_quality(self, original: str, compressed: str, strategy: CompressionStrategy) -> float:
    # Check if key information is preserved
    original_words = set(re.findall(r'\b\w+\b', original.lower()))
    compressed_words = set(re.findall(r'\b\w+\b', compressed.lower()))
    # Word preservation ratio
    word_preservation = len(compressed_words & original_words) / len(original_words) if original_words else 1.0
    # Length efficiency (not too aggressive)
    length_ratio = len(compressed) / len(original) if original else 1.0
    # Penalize over-compression
    if length_ratio < 0.3:
        word_preservation *= 0.8
    quality_score = (word_preservation * 0.7) + (min(length_ratio * 2, 1.0) * 0.3)
    return min(quality_score, 1.0)
```
### Information Preservation Score
```python
def _calculate_information_preservation(self, original: str, compressed: str) -> float:
    # Extract key concepts (capitalized words, technical terms); the extension
    # alternative uses a non-capturing group so findall returns whole matches
    # rather than just the captured extension
    concept_pattern = r'\b[A-Z][a-z]+\b|\b\w+\.(?:js|py|md|yaml|json)\b'
    original_concepts = set(re.findall(concept_pattern, original))
    compressed_concepts = set(re.findall(concept_pattern, compressed))
    if not original_concepts:
        return 1.0
    preservation_ratio = len(compressed_concepts & original_concepts) / len(original_concepts)
    return preservation_ratio
```
## Main Compression Interface
### compress_content()
```python
def compress_content(self,
                     content: str,
                     context: Dict[str, Any],
                     metadata: Dict[str, Any] = None) -> CompressionResult:
    import time
    start_time = time.time()
    if metadata is None:
        metadata = {}
    # Classify content type
    content_type = self.classify_content(content, metadata)
    # Framework content - no compression
    if content_type == ContentType.FRAMEWORK_CONTENT:
        return CompressionResult(
            original_length=len(content),
            compressed_length=len(content),
            compression_ratio=0.0,
            quality_score=1.0,
            techniques_used=['framework_exclusion'],
            preservation_score=1.0,
            processing_time_ms=(time.time() - start_time) * 1000
        )
    # User content - minimal compression only
    if content_type == ContentType.USER_CONTENT:
        compression_level = CompressionLevel.MINIMAL
    else:
        compression_level = self.determine_compression_level(context)
    # Create compression strategy
    strategy = self._create_compression_strategy(compression_level, content_type)
    # Apply compression techniques
    compressed_content = content
    techniques_used = []
    if strategy.symbol_systems_enabled:
        compressed_content, symbol_techniques = self._apply_symbol_systems(compressed_content)
        techniques_used.extend(symbol_techniques)
    if strategy.abbreviation_systems_enabled:
        compressed_content, abbrev_techniques = self._apply_abbreviation_systems(compressed_content)
        techniques_used.extend(abbrev_techniques)
    if strategy.structural_optimization:
        compressed_content, struct_techniques = self._apply_structural_optimization(
            compressed_content, compression_level
        )
        techniques_used.extend(struct_techniques)
    # Calculate metrics
    original_length = len(content)
    compressed_length = len(compressed_content)
    compression_ratio = (original_length - compressed_length) / original_length if original_length > 0 else 0.0
    # Quality validation
    quality_score = self._validate_compression_quality(content, compressed_content, strategy)
    preservation_score = self._calculate_information_preservation(content, compressed_content)
    processing_time = (time.time() - start_time) * 1000
    # Cache result for performance
    cache_key = hashlib.md5(content.encode()).hexdigest()
    self.compression_cache[cache_key] = compressed_content
    return CompressionResult(
        original_length=original_length,
        compressed_length=compressed_length,
        compression_ratio=compression_ratio,
        quality_score=quality_score,
        techniques_used=techniques_used,
        preservation_score=preservation_score,
        processing_time_ms=processing_time
    )
```
## Performance Monitoring and Recommendations
### get_compression_recommendations()
```python
def get_compression_recommendations(self, context: Dict[str, Any]) -> Dict[str, Any]:
    recommendations = []
    current_level = self.determine_compression_level(context)
    resource_usage = context.get('resource_usage_percent', 0)
    # Resource-based recommendations
    if resource_usage > 85:
        recommendations.append("Enable emergency compression mode for critical resource constraints")
    elif resource_usage > 70:
        recommendations.append("Consider compressed mode for better resource efficiency")
    elif resource_usage < 40:
        recommendations.append("Resource usage low - minimal compression sufficient")
    # Performance recommendations
    if context.get('processing_time_ms', 0) > 500:
        recommendations.append("Compression processing time high - consider caching strategies")
    return {
        'current_level': current_level.value,
        'recommendations': recommendations,
        'estimated_savings': self._estimate_compression_savings(current_level),
        'quality_impact': self._estimate_quality_impact(current_level),
        'performance_metrics': self.performance_metrics
    }
```
### Compression Savings Estimation
```python
def _estimate_compression_savings(self, level: CompressionLevel) -> Dict[str, float]:
    savings_map = {
        CompressionLevel.MINIMAL: {'token_reduction': 0.15, 'time_savings': 0.05},
        CompressionLevel.EFFICIENT: {'token_reduction': 0.40, 'time_savings': 0.15},
        CompressionLevel.COMPRESSED: {'token_reduction': 0.60, 'time_savings': 0.25},
        CompressionLevel.CRITICAL: {'token_reduction': 0.75, 'time_savings': 0.35},
        CompressionLevel.EMERGENCY: {'token_reduction': 0.85, 'time_savings': 0.45}
    }
    return savings_map.get(level, {'token_reduction': 0.0, 'time_savings': 0.0})
```
## Integration with Hooks
### Hook Usage Pattern
```python
# Initialize compression engine
compression_engine = CompressionEngine()

# Compress content with context awareness
context = {
    'resource_usage_percent': 75,
    'conversation_length': 120,
    'user_requests_brevity': False,
    'complexity_score': 0.5
}
metadata = {
    'file_path': '/project/src/component.js',
    'context_type': 'user_content'
}

result = compression_engine.compress_content(
    content="This is a complex React component implementation with multiple state management patterns and performance optimizations.",
    context=context,
    metadata=metadata
)

print(f"Original length: {result.original_length}")            # 142
print(f"Compressed length: {result.compressed_length}")        # 95
print(f"Compression ratio: {result.compression_ratio:.2%}")    # 33%
print(f"Quality score: {result.quality_score:.2f}")            # 0.95
print(f"Preservation score: {result.preservation_score:.2f}")  # 0.98
print(f"Techniques used: {result.techniques_used}")            # ['symbol_performance', 'abbrev_implementation']
print(f"Processing time: {result.processing_time_ms:.1f}ms")   # 15.2ms
```
### Compression Strategy Analysis
```python
# Get compression recommendations
recommendations = compression_engine.get_compression_recommendations(context)
print(f"Current level: {recommendations['current_level']}") # 'compressed'
print(f"Recommendations: {recommendations['recommendations']}") # ['Consider compressed mode for better resource efficiency']
print(f"Estimated savings: {recommendations['estimated_savings']}") # {'token_reduction': 0.6, 'time_savings': 0.25}
print(f"Quality impact: {recommendations['quality_impact']}") # 0.90
```
## Performance Characteristics
### Processing Performance
- **Content Classification**: <5ms for typical content analysis
- **Compression Level Determination**: <3ms for context evaluation
- **Symbol System Application**: <10ms for comprehensive replacement
- **Abbreviation System Application**: <8ms for domain-specific replacement
- **Structural Optimization**: <15ms for aggressive optimization
- **Quality Validation**: <20ms for comprehensive validation
### Memory Efficiency
- **Symbol Mappings Cache**: ~2-3KB for all symbol definitions
- **Abbreviation Cache**: ~1-2KB for abbreviation mappings
- **Compression Cache**: Dynamic based on content, LRU eviction
- **Strategy Objects**: ~100-200B per strategy instance
### Quality Metrics
- **Information Preservation**: ≥95% for all compression levels
- **Quality Score Accuracy**: 90%+ correlation with human assessment
- **Processing Reliability**: <0.1% compression failures
- **Cache Hit Rate**: 85%+ for repeated content compression
## Error Handling Strategies
### Compression Failures
```python
try:
    # Apply compression techniques
    compressed_content, techniques = self._apply_symbol_systems(content)
except Exception as e:
    # Fall back to original content with warning
    logger.log_error("compression_engine", f"Symbol system application failed: {e}")
    compressed_content = content
    techniques = ['compression_failed']
```
### Quality Validation Failures
- **Invalid Quality Score**: Use fallback quality estimation
- **Preservation Score Errors**: Default to 1.0 (full preservation)
- **Validation Timeout**: Skip validation, proceed with compression
### Graceful Degradation
- **Pattern Compilation Errors**: Skip problematic patterns, continue with others
- **Resource Constraints**: Reduce compression level automatically
- **Performance Issues**: Enable compression caching, reduce processing complexity
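The degradation rules above amount to wrapping each compression step in a fallback guard. A minimal standalone sketch (the helper name `compress_with_fallback` and the plain `logging` logger are illustrative assumptions, not the module's actual API):

```python
import logging

logger = logging.getLogger("compression_engine")

def compress_with_fallback(compress_fn, content: str):
    """Run one compression step; on any failure, fall back to the
    original content and flag the attempt as failed."""
    try:
        compressed, techniques = compress_fn(content)
        return compressed, techniques
    except Exception as exc:
        logger.error("compression step failed: %s", exc)
        return content, ['compression_failed']
```

This keeps a single broken pattern from losing user content: the caller always receives usable text plus a technique list it can inspect.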
## Configuration Requirements
### Compression Configuration
```yaml
compression:
  enabled: true
  cache_size_mb: 10
  quality_threshold: 0.95
  processing_timeout_ms: 100

  levels:
    minimal:
      symbol_systems: false
      abbreviations: false
      structural: false
      quality_threshold: 0.98
    efficient:
      symbol_systems: true
      abbreviations: false
      structural: true
      quality_threshold: 0.95
    compressed:
      symbol_systems: true
      abbreviations: true
      structural: true
      quality_threshold: 0.90
```
### Content Classification Rules
```yaml
content_classification:
  framework_exclusions:
    - "~/.claude/"
    - "CLAUDE.md"
    - "FLAGS.md"
    - "PRINCIPLES.md"
  compressible_patterns:
    - "session_metadata"
    - "checkpoint_data"
    - "analysis_results"
  preserve_patterns:
    - "source_code"
    - "user_documentation"
    - "project_files"
```
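The classification rules above can be applied with a simple check that mirrors the YAML lists; a hedged sketch (the function name and return labels are assumptions for illustration):

```python
def classify_content(file_path: str, context_type: str) -> str:
    """Classify content per the classification rules: framework files are
    never compressed, session artifacts are compressible, everything else
    is preserved."""
    framework_exclusions = ("~/.claude/", "CLAUDE.md", "FLAGS.md", "PRINCIPLES.md")
    compressible = {"session_metadata", "checkpoint_data", "analysis_results"}

    if any(marker in file_path for marker in framework_exclusions):
        return "framework_excluded"  # protected, no compression applied
    if context_type in compressible:
        return "compressible"
    return "preserve"
```

Exclusion is checked first, so a framework file is protected even when its context type would otherwise be compressible.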
## Usage Examples
### Framework Content Protection
```python
result = compression_engine.compress_content(
    content="Content from ~/.claude/CLAUDE.md with framework patterns",
    context={'resource_usage_percent': 90},
    metadata={'file_path': '~/.claude/CLAUDE.md'}
)

print(f"Compression ratio: {result.compression_ratio}")  # 0.0 (no compression)
print(f"Techniques used: {result.techniques_used}")      # ['framework_exclusion']
```
### Emergency Compression
```python
result = compression_engine.compress_content(
    content="This is a very long document with lots of redundant information that needs to be compressed for emergency situations where resources are critically constrained and every token matters.",
    context={'resource_usage_percent': 96},
    metadata={'context_type': 'session_data'}
)

print(f"Compression ratio: {result.compression_ratio:.2%}")  # 85%+ compression
print(f"Quality preserved: {result.quality_score:.2f}")      # ≥0.80
```
## Dependencies and Relationships
### Internal Dependencies
- **yaml_loader**: Configuration loading for compression settings
- **Standard Libraries**: re, json, hashlib, time, typing, dataclasses, enum
### Framework Integration
- **MODE_Token_Efficiency.md**: Direct implementation of token optimization patterns
- **Selective Compression**: Framework content protection with user content preservation
- **Quality Gates**: Real-time validation with measurable preservation targets
### Hook Coordination
- Used by all hooks for consistent token optimization
- Provides standardized compression interface and quality validation
- Enables cross-hook performance monitoring and efficiency tracking
---
*This module serves as the intelligent token optimization engine for the SuperClaude framework, ensuring efficient resource usage while maintaining information quality and framework compliance through selective, quality-gated compression strategies.*

# framework_logic.py - Core SuperClaude Framework Decision Engine
## Overview
The `framework_logic.py` module implements the core decision-making algorithms from the SuperClaude framework, translating RULES.md, PRINCIPLES.md, and ORCHESTRATOR.md patterns into executable intelligence. This module serves as the central nervous system for all hook operations, providing evidence-based decision making, complexity assessment, risk evaluation, and quality validation.
## Purpose and Responsibilities
### Primary Functions
- **Decision Algorithm Implementation**: Executable versions of SuperClaude framework rules
- **Complexity Assessment**: Multi-factor scoring system for operation routing decisions
- **Risk Evaluation**: Context-aware risk assessment with mitigation strategies
- **Quality Validation**: Multi-step validation cycles with measurable quality scores
- **Performance Estimation**: Resource impact prediction and optimization recommendations
### Framework Pattern Implementation
- **RULES.md Compliance**: Read-before-write validation, systematic codebase changes, session lifecycle rules
- **PRINCIPLES.md Integration**: Evidence-based decisions, quality standards, error handling patterns
- **ORCHESTRATOR.md Logic**: Intelligent routing, resource management, quality gate enforcement
## Core Classes and Data Structures
### Enumerations
#### OperationType
```python
class OperationType(Enum):
    READ = "read"          # File reading operations
    WRITE = "write"        # File creation operations
    EDIT = "edit"          # File modification operations
    ANALYZE = "analyze"    # Code analysis operations
    BUILD = "build"        # Build/compilation operations
    TEST = "test"          # Testing operations
    DEPLOY = "deploy"      # Deployment operations
    REFACTOR = "refactor"  # Code restructuring operations
```
#### RiskLevel
```python
class RiskLevel(Enum):
    LOW = "low"            # Minimal impact, safe operations
    MEDIUM = "medium"      # Moderate impact, requires validation
    HIGH = "high"          # Significant impact, requires approval
    CRITICAL = "critical"  # System-wide impact, maximum validation
```
### Data Classes
#### OperationContext
```python
@dataclass
class OperationContext:
    operation_type: OperationType  # Type of operation being performed
    file_count: int                # Number of files involved
    directory_count: int           # Number of directories involved
    has_tests: bool                # Whether tests are available
    is_production: bool            # Production environment flag
    user_expertise: str            # beginner|intermediate|expert
    project_type: str              # web|api|cli|library|etc
    complexity_score: float        # 0.0 to 1.0 complexity rating
    risk_level: RiskLevel          # Assessed risk level
```
#### ValidationResult
```python
@dataclass
class ValidationResult:
    is_valid: bool          # Overall validation status
    issues: List[str]       # Critical issues found
    warnings: List[str]     # Non-critical warnings
    suggestions: List[str]  # Improvement recommendations
    quality_score: float    # 0.0 to 1.0 quality rating
```
## Core Methods and Algorithms
### Framework Rule Implementation
#### should_use_read_before_write()
```python
def should_use_read_before_write(self, context: OperationContext) -> bool:
    """RULES.md: Always use Read tool before Write or Edit operations."""
    return context.operation_type in [OperationType.WRITE, OperationType.EDIT]
```
**Implementation Details**:
- Direct mapping from RULES.md operational security requirements
- Returns True for any operation that modifies existing files
- Used by hooks to enforce read-before-write validation
#### should_enable_validation()
```python
def should_enable_validation(self, context: OperationContext) -> bool:
    """ORCHESTRATOR.md: Enable validation for production code or high-risk operations."""
    return (
        context.is_production or
        context.risk_level in [RiskLevel.HIGH, RiskLevel.CRITICAL] or
        context.operation_type in [OperationType.DEPLOY, OperationType.REFACTOR]
    )
```
### Complexity Assessment Algorithm
#### calculate_complexity_score()
Multi-factor complexity scoring with weighted components:
**File Count Factor (0.0 to 0.3)**:
- 1 file: 0.0
- 2-3 files: 0.1
- 4-10 files: 0.2
- 10+ files: 0.3
**Directory Factor (0.0 to 0.2)**:
- 1 directory: 0.0
- 2 directories: 0.1
- 3+ directories: 0.2
**Operation Type Factor (0.0 to 0.3)**:
- Refactor/Architecture: 0.3
- Build/Implement/Migrate: 0.2
- Fix/Update/Improve: 0.1
- Read/Analyze: 0.0
**Language/Framework Factor (0.0 to 0.2)**:
- Multi-language projects: 0.2
- Framework changes: 0.1
- Single language/no framework: 0.0
**Total Score**: Sum of all factors, capped at 1.0
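The weighted factors above combine as in this standalone sketch (the real method lives on `FrameworkLogic`; the exact `operation_data` field names here are assumptions based on the usage examples later in this document):

```python
def calculate_complexity_score(operation_data: dict) -> float:
    """Sum the weighted complexity factors, capped at 1.0 (illustrative sketch)."""
    score = 0.0

    # File count factor (0.0 to 0.3)
    files = operation_data.get('file_count', 1)
    if files > 10:
        score += 0.3
    elif files >= 4:
        score += 0.2
    elif files >= 2:
        score += 0.1

    # Directory factor (0.0 to 0.2)
    dirs = operation_data.get('directory_count', 1)
    if dirs >= 3:
        score += 0.2
    elif dirs == 2:
        score += 0.1

    # Operation type factor (0.0 to 0.3)
    op_weights = {'refactor': 0.3, 'architecture': 0.3,
                  'build': 0.2, 'implement': 0.2, 'migrate': 0.2,
                  'fix': 0.1, 'update': 0.1, 'improve': 0.1}
    score += op_weights.get(operation_data.get('operation_type', 'read'), 0.0)

    # Language/framework factor (0.0 to 0.2)
    if operation_data.get('multi_language'):
        score += 0.2
    elif operation_data.get('framework_changes'):
        score += 0.1

    return min(score, 1.0)
```

For the refactor example used later (15 files, 3 directories, framework changes), this yields 0.3 + 0.2 + 0.3 + 0.1 = 0.9.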
### Risk Assessment Algorithm
#### assess_risk_level()
Context-based risk evaluation with escalation rules:
1. **Production Environment**: Automatic HIGH risk
2. **Complexity > 0.7**: HIGH risk
3. **Complexity > 0.4**: MEDIUM risk
4. **File Count > 10**: MEDIUM risk
5. **Default**: LOW risk
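These escalation rules evaluate in order, first match winning; a minimal self-contained sketch (with `RiskLevel` redeclared so the example runs on its own):

```python
from enum import Enum

class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    CRITICAL = "critical"

def assess_risk_level(is_production: bool, complexity_score: float,
                      file_count: int) -> RiskLevel:
    """Apply the escalation rules in order; the first rule that fires wins."""
    if is_production:
        return RiskLevel.HIGH
    if complexity_score > 0.7:
        return RiskLevel.HIGH
    if complexity_score > 0.4:
        return RiskLevel.MEDIUM
    if file_count > 10:
        return RiskLevel.MEDIUM
    return RiskLevel.LOW
```

Note that CRITICAL is never assigned by these rules alone; in the actual module it presumably comes from explicit context flags rather than this escalation ladder.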
### Quality Validation Framework
#### validate_operation()
Multi-criteria validation with quality scoring:
**Evidence-Based Validation**:
- Evidence provided: Quality maintained
- No evidence: -0.1 quality score, warning generated
**Error Handling Validation**:
- Write/Edit/Deploy operations require error handling
- Missing error handling: -0.2 quality score, issue generated
**Test Coverage Validation**:
- Logic changes should have tests
- Missing tests: -0.1 quality score, suggestion generated
**Documentation Validation**:
- Public APIs require documentation
- Missing docs: -0.1 quality score, suggestion generated
**Security Validation**:
- User input handling requires validation
- Missing input validation: -0.3 quality score, critical issue
**Quality Thresholds**:
- Valid operation: No issues AND quality_score ≥ 0.7
- Final quality_score: max(calculated_score, 0.0)
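The deduction rules above can be sketched as a standalone function returning a plain dict (the real method returns a `ValidationResult`; message strings follow the deduction list here, so the resulting score may differ slightly from the full implementation's):

```python
def validate_operation(operation_data: dict) -> dict:
    """Apply the validation deductions described above (illustrative sketch)."""
    issues, warnings, suggestions = [], [], []
    quality_score = 1.0

    # Evidence-based validation: -0.1 and a warning
    if operation_data.get('assumptions') and not operation_data.get('evidence'):
        quality_score -= 0.1
        warnings.append("Assumptions made without supporting evidence")

    # Error handling: required for write/edit/deploy, -0.2 and an issue
    if operation_data.get('operation_type') in ('write', 'edit', 'deploy') \
            and not operation_data.get('has_error_handling'):
        quality_score -= 0.2
        issues.append("Missing error handling")

    # Test coverage: -0.1 and a suggestion
    if operation_data.get('affects_logic') and not operation_data.get('has_tests'):
        quality_score -= 0.1
        suggestions.append("Add unit tests for new logic")

    # Documentation: -0.1 and a suggestion
    if operation_data.get('is_public_api') and not operation_data.get('has_documentation'):
        quality_score -= 0.1
        suggestions.append("Add API documentation")

    # Security: -0.3 and a critical issue
    if operation_data.get('handles_user_input') \
            and not operation_data.get('has_input_validation'):
        quality_score -= 0.3
        issues.append("User input handling without validation")

    quality_score = max(quality_score, 0.0)
    return {
        'is_valid': not issues and quality_score >= 0.7,
        'quality_score': quality_score,
        'issues': issues,
        'warnings': warnings,
        'suggestions': suggestions,
    }
```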
### Thinking Mode Selection
#### determine_thinking_mode()
Complexity-based thinking mode selection:
- **Complexity ≥ 0.8**: `--ultrathink` (32K token analysis)
- **Complexity ≥ 0.6**: `--think-hard` (10K token analysis)
- **Complexity ≥ 0.3**: `--think` (4K token analysis)
- **Complexity < 0.3**: No thinking mode required
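The threshold ladder above maps directly to a small function; a sketch under the assumption that it takes the bare complexity score:

```python
from typing import Optional

def determine_thinking_mode(complexity_score: float) -> Optional[str]:
    """Map a 0.0-1.0 complexity score to a thinking-mode flag, or None."""
    if complexity_score >= 0.8:
        return "--ultrathink"   # 32K token analysis
    if complexity_score >= 0.6:
        return "--think-hard"   # 10K token analysis
    if complexity_score >= 0.3:
        return "--think"        # 4K token analysis
    return None                 # no thinking mode required
```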
### Delegation Decision Logic
#### should_enable_delegation()
Multi-factor delegation assessment:
```python
def should_enable_delegation(self, context: OperationContext) -> Tuple[bool, str]:
    if context.file_count > 3:
        return True, "files"    # File-based delegation
    elif context.directory_count > 2:
        return True, "folders"  # Folder-based delegation
    elif context.complexity_score > 0.4:
        return True, "auto"     # Automatic strategy selection
    else:
        return False, "none"    # No delegation needed
```
## Performance Target Management
### Configuration Integration
```python
def __init__(self):
    # Load performance targets from SuperClaude configuration
    self.performance_targets = {}

    # Hook-specific targets
    self.performance_targets['session_start_ms'] = config_loader.get_hook_config(
        'session_start', 'performance_target_ms', 50
    )
    self.performance_targets['tool_routing_ms'] = config_loader.get_hook_config(
        'pre_tool_use', 'performance_target_ms', 200
    )
    # ... additional targets
```
### Performance Impact Estimation
```python
def estimate_performance_impact(self, context: OperationContext) -> Dict[str, Any]:
    base_time = 100  # ms
    estimated_time = base_time * (1 + context.complexity_score * 3)

    # Factor in file count impact
    if context.file_count > 5:
        estimated_time *= 1.5

    # Generate optimization suggestions
    optimizations = []
    if context.file_count > 3:
        optimizations.append("Consider parallel processing")
    if context.complexity_score > 0.6:
        optimizations.append("Enable delegation mode")

    return {
        'estimated_time_ms': int(estimated_time),
        'performance_risk': 'high' if estimated_time > 1000 else 'low',
        'suggested_optimizations': optimizations,
        'efficiency_gains_possible': len(optimizations) > 0
    }
```
## Quality Gates Integration
### get_quality_gates()
Dynamic quality gate selection based on operation context:
**Base Gates** (All Operations):
- `syntax_validation`: Language-specific syntax checking
**Write/Edit Operations**:
- `type_analysis`: Type compatibility validation
- `code_quality`: Linting and style checking
**High-Risk Operations**:
- `security_assessment`: Vulnerability scanning
- `performance_analysis`: Performance impact analysis
**Test-Available Operations**:
- `test_validation`: Test execution and coverage
**Deployment Operations**:
- `integration_testing`: End-to-end validation
- `deployment_validation`: Environment compatibility
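The gate assembly described above reduces to a simple accumulation; a hedged sketch (the parameter shapes are assumptions, since the real method takes an `OperationContext`):

```python
def get_quality_gates(operation_type: str, risk_level: str, has_tests: bool) -> list:
    """Assemble the quality-gate list for an operation (illustrative sketch)."""
    gates = ['syntax_validation']  # base gate for all operations

    if operation_type in ('write', 'edit'):
        gates += ['type_analysis', 'code_quality']
    if risk_level in ('high', 'critical'):
        gates += ['security_assessment', 'performance_analysis']
    if has_tests:
        gates.append('test_validation')
    if operation_type == 'deploy':
        gates += ['integration_testing', 'deployment_validation']

    return gates
```

Gates accumulate rather than replace one another, so a high-risk deployment with tests runs the full ladder.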
## SuperClaude Principles Application
### apply_superclaude_principles()
Automatic principle enforcement with recommendations:
**Evidence > Assumptions**:
```python
if 'assumptions' in enhanced_data and not enhanced_data.get('evidence'):
    enhanced_data['recommendations'].append(
        "Gather evidence to validate assumptions"
    )
```
**Code > Documentation**:
```python
if enhanced_data.get('operation_type') == 'document' and not enhanced_data.get('has_working_code'):
    enhanced_data['warnings'].append(
        "Ensure working code exists before extensive documentation"
    )
```
**Efficiency > Verbosity**:
```python
if enhanced_data.get('output_length', 0) > 1000 and not enhanced_data.get('justification_for_length'):
    enhanced_data['efficiency_suggestions'].append(
        "Consider token efficiency techniques for long outputs"
    )
```
## Integration with Hooks
### Hook Implementation Pattern
```python
# Hook initialization
framework_logic = FrameworkLogic()

# Operation context creation
context = OperationContext(
    operation_type=OperationType.EDIT,
    file_count=file_count,
    directory_count=dir_count,
    has_tests=has_tests,
    is_production=is_production,
    user_expertise="intermediate",
    project_type="web",
    complexity_score=0.0,     # Will be calculated
    risk_level=RiskLevel.LOW  # Will be assessed
)

# Calculate complexity and assess risk
context.complexity_score = framework_logic.calculate_complexity_score(operation_data)
context.risk_level = framework_logic.assess_risk_level(context)

# Make framework-compliant decisions
should_validate = framework_logic.should_enable_validation(context)
should_delegate, delegation_strategy = framework_logic.should_enable_delegation(context)
thinking_mode = framework_logic.determine_thinking_mode(context)

# Validate operation
validation_result = framework_logic.validate_operation(operation_data)
if not validation_result.is_valid:
    # Handle validation issues
    handle_validation_issues(validation_result)
```
## Error Handling Strategies
### Graceful Degradation
- **Configuration Errors**: Use default performance targets
- **Calculation Errors**: Return safe default values
- **Validation Failures**: Provide detailed error context
### Fallback Mechanisms
- **Complexity Calculation**: Default to 0.5 if calculation fails
- **Risk Assessment**: Default to MEDIUM risk if assessment fails
- **Quality Validation**: Default to valid with warnings if validation fails
## Performance Characteristics
### Operation Timings
- **Complexity Calculation**: <5ms for typical operations
- **Risk Assessment**: <3ms for context evaluation
- **Quality Validation**: <10ms for comprehensive validation
- **Performance Estimation**: <2ms for impact calculation
### Memory Efficiency
- **Context Objects**: ~200-400 bytes per context
- **Validation Results**: ~500-1000 bytes with full details
- **Configuration Cache**: ~1-2KB for performance targets
## Configuration Requirements
### Required Configuration Sections
```yaml
# Performance targets for each hook
hook_configurations:
  session_start:
    performance_target_ms: 50
  pre_tool_use:
    performance_target_ms: 200
  post_tool_use:
    performance_target_ms: 100
  pre_compact:
    performance_target_ms: 150

# Global performance settings
global_configuration:
  performance_monitoring:
    enabled: true
    target_percentile: 95
    alert_threshold_ms: 500
```
## Usage Examples
### Basic Decision Making
```python
framework_logic = FrameworkLogic()

# Create operation context
context = OperationContext(
    operation_type=OperationType.REFACTOR,
    file_count=15,
    directory_count=3,
    has_tests=True,
    is_production=False,
    user_expertise="expert",
    project_type="web",
    complexity_score=0.0,
    risk_level=RiskLevel.LOW
)

# Calculate complexity and assess risk
context.complexity_score = framework_logic.calculate_complexity_score({
    'file_count': 15,
    'directory_count': 3,
    'operation_type': 'refactor',
    'multi_language': False,
    'framework_changes': True
})
context.risk_level = framework_logic.assess_risk_level(context)

# Make decisions
should_read_first = framework_logic.should_use_read_before_write(context)      # False (refactor)
should_validate = framework_logic.should_enable_validation(context)            # True (refactor)
should_delegate, strategy = framework_logic.should_enable_delegation(context)  # True, "files"
thinking_mode = framework_logic.determine_thinking_mode(context)               # "--think-hard"
```
### Quality Validation
```python
operation_data = {
    'operation_type': 'write',
    'affects_logic': True,
    'has_tests': False,
    'is_public_api': True,
    'has_documentation': False,
    'handles_user_input': True,
    'has_input_validation': False,
    'has_error_handling': True
}

validation_result = framework_logic.validate_operation(operation_data)

print(f"Valid: {validation_result.is_valid}")               # False
print(f"Quality Score: {validation_result.quality_score}")  # 0.4
print(f"Issues: {validation_result.issues}")                # ['User input handling without validation']
print(f"Warnings: {validation_result.warnings}")            # ['No tests found for logic changes', 'Public API lacks documentation']
print(f"Suggestions: {validation_result.suggestions}")      # ['Add unit tests for new logic', 'Add API documentation']
```
## Dependencies and Relationships
### Internal Dependencies
- **yaml_loader**: Configuration loading and management
- **Standard Libraries**: json, time, dataclasses, enum, typing
### Framework Integration
- **RULES.md**: Direct implementation of operational rules
- **PRINCIPLES.md**: Quality standards and decision-making principles
- **ORCHESTRATOR.md**: Intelligent routing and resource management patterns
### Hook Coordination
- Used by all 7 hooks for consistent decision-making
- Provides standardized context and validation interfaces
- Enables cross-hook performance monitoring and optimization
---
*This module serves as the foundational intelligence layer for the entire SuperClaude framework, ensuring that all hook operations are evidence-based, quality-validated, and optimally routed according to established patterns and principles.*

# intelligence_engine.py - Generic YAML Pattern Interpreter
## Overview
The `intelligence_engine.py` module provides a generic YAML pattern interpreter that enables hot-reloadable intelligence without code changes. This module consumes declarative YAML patterns to provide intelligent services, enabling the Framework-Hooks system to adapt behavior dynamically based on configuration rather than requiring code modifications.
## Purpose and Responsibilities
### Primary Functions
- **Hot-Reload YAML Intelligence Patterns**: Dynamically load and reload YAML configuration patterns
- **Context-Aware Pattern Matching**: Evaluate contexts against patterns with intelligent matching logic
- **Decision Tree Execution**: Execute complex decision trees defined in YAML configurations
- **Recommendation Generation**: Generate intelligent recommendations based on pattern analysis
- **Performance Optimization**: Cache pattern evaluations and optimize processing
- **Multi-Pattern Coordination**: Coordinate multiple pattern types for comprehensive intelligence
### Intelligence Capabilities
- **Pattern-Based Decision Making**: Executable intelligence defined in YAML rather than hardcoded logic
- **Real-Time Pattern Updates**: Change intelligence behavior without code deployment
- **Context Evaluation**: Smart context analysis with flexible condition matching
- **Performance Caching**: Sub-300ms pattern evaluation with intelligent caching
## Core Classes and Data Structures
### IntelligenceEngine
```python
class IntelligenceEngine:
    """
    Generic YAML pattern interpreter for declarative intelligence.

    Features:
    - Hot-reload YAML intelligence patterns
    - Context-aware pattern matching
    - Decision tree execution
    - Recommendation generation
    - Performance optimization
    - Multi-pattern coordination
    """

    def __init__(self):
        self.patterns: Dict[str, Dict[str, Any]] = {}
        self.pattern_cache: Dict[str, Any] = {}
        self.pattern_timestamps: Dict[str, float] = {}
        self.evaluation_cache: Dict[str, Tuple[Any, float]] = {}
        self.cache_duration = 300  # 5 minutes
```
## Pattern Loading and Management
### _load_all_patterns()
```python
def _load_all_patterns(self):
    """Load all intelligence pattern configurations."""
    pattern_files = [
        'intelligence_patterns',
        'mcp_orchestration',
        'hook_coordination',
        'performance_intelligence',
        'validation_intelligence',
        'user_experience'
    ]

    for pattern_file in pattern_files:
        try:
            patterns = config_loader.load_config(pattern_file)
            self.patterns[pattern_file] = patterns
            self.pattern_timestamps[pattern_file] = time.time()
        except Exception as e:
            print(f"Warning: Could not load {pattern_file} patterns: {e}")
            self.patterns[pattern_file] = {}
```
### reload_patterns()
```python
def reload_patterns(self, force: bool = False) -> bool:
    """
    Reload patterns if they have changed.

    Args:
        force: Force reload even if no changes detected

    Returns:
        True if patterns were reloaded
    """
    reloaded = False

    for pattern_file in self.patterns.keys():
        try:
            if force:
                patterns = config_loader.load_config(pattern_file, force_reload=True)
                self.patterns[pattern_file] = patterns
                self.pattern_timestamps[pattern_file] = time.time()
                reloaded = True
            else:
                # Check if pattern file has been updated
                current_patterns = config_loader.load_config(pattern_file)
                pattern_hash = self._compute_pattern_hash(current_patterns)
                cached_hash = self.pattern_cache.get(f"{pattern_file}_hash")

                if pattern_hash != cached_hash:
                    self.patterns[pattern_file] = current_patterns
                    self.pattern_cache[f"{pattern_file}_hash"] = pattern_hash
                    self.pattern_timestamps[pattern_file] = time.time()
                    reloaded = True
        except Exception as e:
            print(f"Warning: Could not reload {pattern_file} patterns: {e}")

    if reloaded:
        # Clear evaluation cache when patterns change
        self.evaluation_cache.clear()

    return reloaded
```
## Context Evaluation Framework
### evaluate_context()
```python
def evaluate_context(self, context: Dict[str, Any], pattern_type: str) -> Dict[str, Any]:
    """
    Evaluate context against patterns to generate recommendations.

    Args:
        context: Current operation context
        pattern_type: Type of patterns to evaluate (e.g., 'mcp_orchestration')

    Returns:
        Dictionary with recommendations and metadata
    """
    # Check cache first
    cache_key = f"{pattern_type}_{self._compute_context_hash(context)}"
    if cache_key in self.evaluation_cache:
        result, timestamp = self.evaluation_cache[cache_key]
        if time.time() - timestamp < self.cache_duration:
            return result

    # Hot-reload patterns if needed
    self.reload_patterns()

    # Get patterns for this type
    patterns = self.patterns.get(pattern_type, {})
    if not patterns:
        return {'recommendations': {}, 'confidence': 0.0, 'source': 'no_patterns'}

    # Evaluate patterns
    recommendations = {}
    if pattern_type == 'mcp_orchestration':
        recommendations = self._evaluate_mcp_patterns(context, patterns)
    elif pattern_type == 'hook_coordination':
        recommendations = self._evaluate_hook_patterns(context, patterns)
    elif pattern_type == 'performance_intelligence':
        recommendations = self._evaluate_performance_patterns(context, patterns)
    elif pattern_type == 'validation_intelligence':
        recommendations = self._evaluate_validation_patterns(context, patterns)
    elif pattern_type == 'user_experience':
        recommendations = self._evaluate_ux_patterns(context, patterns)
    elif pattern_type == 'intelligence_patterns':
        recommendations = self._evaluate_learning_patterns(context, patterns)

    # Overall confidence comes from the matched pattern's own confidence value
    overall_confidence = recommendations.get('confidence', 0.0)

    result = {
        'recommendations': recommendations,
        'confidence': overall_confidence,
        'source': pattern_type,
        'timestamp': time.time()
    }

    # Cache result
    self.evaluation_cache[cache_key] = (result, time.time())
    return result
```
## Pattern Evaluation Methods
### MCP Orchestration Pattern Evaluation
```python
def _evaluate_mcp_patterns(self, context: Dict[str, Any], patterns: Dict[str, Any]) -> Dict[str, Any]:
    """Evaluate MCP orchestration patterns."""
    server_selection = patterns.get('server_selection', {})
    decision_tree = server_selection.get('decision_tree', [])

    recommendations = {
        'primary_server': None,
        'support_servers': [],
        'coordination_mode': 'sequential',
        'confidence': 0.0
    }

    # Evaluate decision tree
    for rule in decision_tree:
        if self._matches_conditions(context, rule.get('conditions', {})):
            recommendations['primary_server'] = rule.get('primary_server')
            recommendations['support_servers'] = rule.get('support_servers', [])
            recommendations['coordination_mode'] = rule.get('coordination_mode', 'sequential')
            recommendations['confidence'] = rule.get('confidence', 0.5)
            break

    # Apply fallback if no match
    if not recommendations['primary_server']:
        fallback = server_selection.get('fallback_chain', {})
        recommendations['primary_server'] = fallback.get('default_primary', 'sequential')
        recommendations['confidence'] = 0.3

    return recommendations
```
### Performance Intelligence Pattern Evaluation
```python
def _evaluate_performance_patterns(self, context: Dict[str, Any], patterns: Dict[str, Any]) -> Dict[str, Any]:
    """Evaluate performance intelligence patterns."""
    auto_optimization = patterns.get('auto_optimization', {})
    optimization_triggers = auto_optimization.get('optimization_triggers', [])

    recommendations = {
        'optimizations': [],
        'resource_zone': 'green',
        'performance_actions': []
    }

    # Check optimization triggers
    for trigger in optimization_triggers:
        if self._matches_conditions(context, trigger.get('condition', {})):
            recommendations['optimizations'].extend(trigger.get('actions', []))
            recommendations['performance_actions'].append({
                'trigger': trigger.get('name'),
                'urgency': trigger.get('urgency', 'medium')
            })

    # Determine resource zone
    resource_usage = context.get('resource_usage', 0.5)
    resource_zones = patterns.get('resource_management', {}).get('resource_zones', {})
    for zone_name, zone_config in resource_zones.items():
        threshold = zone_config.get('threshold', 1.0)
        if resource_usage <= threshold:
            recommendations['resource_zone'] = zone_name
            break

    return recommendations
```
## Condition Matching Logic
### _matches_conditions()
```python
def _matches_conditions(self, context: Dict[str, Any], conditions: Union[Dict, List]) -> bool:
    """Check if context matches pattern conditions."""
    if isinstance(conditions, list):
        # List of conditions (AND logic)
        return all(self._matches_single_condition(context, cond) for cond in conditions)
    elif isinstance(conditions, dict):
        if 'AND' in conditions:
            return all(self._matches_single_condition(context, cond) for cond in conditions['AND'])
        elif 'OR' in conditions:
            return any(self._matches_single_condition(context, cond) for cond in conditions['OR'])
        else:
            return self._matches_single_condition(context, conditions)
    return False

def _matches_single_condition(self, context: Dict[str, Any], condition: Dict[str, Any]) -> bool:
    """Check if context matches a single condition (every key must match)."""
    for key, expected_value in condition.items():
        context_value = context.get(key)
        if context_value is None:
            return False

        # Handle string comparison operators (e.g. ">0.6", "<10")
        if isinstance(expected_value, str) and expected_value.startswith('>'):
            if not float(context_value) > float(expected_value[1:]):
                return False
        elif isinstance(expected_value, str) and expected_value.startswith('<'):
            if not float(context_value) < float(expected_value[1:]):
                return False
        elif isinstance(expected_value, list):
            if context_value not in expected_value:
                return False
        elif context_value != expected_value:
            return False

    return True
```
## Performance and Caching
### Pattern Hash Computation
```python
def _compute_pattern_hash(self, patterns: Dict[str, Any]) -> str:
    """Compute hash of pattern configuration for change detection."""
    pattern_str = str(sorted(patterns.items()))
    return hashlib.md5(pattern_str.encode()).hexdigest()

def _compute_context_hash(self, context: Dict[str, Any]) -> str:
    """Compute hash of context for caching."""
    context_str = str(sorted(context.items()))
    return hashlib.md5(context_str.encode()).hexdigest()[:8]
```
### Intelligence Summary
```python
def get_intelligence_summary(self) -> Dict[str, Any]:
    """Get summary of current intelligence state."""
    return {
        'loaded_patterns': list(self.patterns.keys()),
        'cache_entries': len(self.evaluation_cache),
        'last_reload': max(self.pattern_timestamps.values()) if self.pattern_timestamps else 0,
        'pattern_status': {name: 'loaded' for name in self.patterns.keys()}
    }
```
## Integration with Hooks
### Hook Usage Pattern
```python
# Initialize intelligence engine
intelligence_engine = IntelligenceEngine()

# Evaluate MCP orchestration patterns
context = {
    'operation_type': 'complex_analysis',
    'file_count': 15,
    'complexity_score': 0.8,
    'user_expertise': 'expert'
}

mcp_recommendations = intelligence_engine.evaluate_context(context, 'mcp_orchestration')
print(f"Primary server: {mcp_recommendations['recommendations']['primary_server']}")
print(f"Support servers: {mcp_recommendations['recommendations']['support_servers']}")
print(f"Confidence: {mcp_recommendations['confidence']}")

# Evaluate performance intelligence
performance_recommendations = intelligence_engine.evaluate_context(context, 'performance_intelligence')
print(f"Resource zone: {performance_recommendations['recommendations']['resource_zone']}")
print(f"Optimizations: {performance_recommendations['recommendations']['optimizations']}")
```
## YAML Pattern Examples
### MCP Orchestration Pattern
```yaml
server_selection:
decision_tree:
- conditions:
operation_type: "complex_analysis"
complexity_score: ">0.6"
primary_server: "sequential"
support_servers: ["context7", "serena"]
coordination_mode: "parallel"
confidence: 0.9
- conditions:
operation_type: "ui_component"
primary_server: "magic"
support_servers: ["context7"]
coordination_mode: "sequential"
confidence: 0.8
fallback_chain:
default_primary: "sequential"
```
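A sketch of how such a decision tree might be walked — first matching entry wins, with the fallback chain applied when nothing matches. The structure mirrors the YAML above; function names are illustrative:

```python
def condition_holds(expected, actual):
    # String values beginning with '>' or '<' encode numeric thresholds
    if isinstance(expected, str) and expected[:1] in ('>', '<'):
        threshold = float(expected[1:])
        return actual > threshold if expected[0] == '>' else actual < threshold
    return expected == actual

def select_server(tree, context):
    """Return (primary_server, confidence) for the first matching entry."""
    for entry in tree['decision_tree']:
        if all(condition_holds(v, context.get(k)) for k, v in entry['conditions'].items()):
            return entry['primary_server'], entry['confidence']
    return tree['fallback_chain']['default_primary'], 0.0

tree = {
    'decision_tree': [
        {'conditions': {'operation_type': 'complex_analysis', 'complexity_score': '>0.6'},
         'primary_server': 'sequential', 'confidence': 0.9},
        {'conditions': {'operation_type': 'ui_component'},
         'primary_server': 'magic', 'confidence': 0.8},
    ],
    'fallback_chain': {'default_primary': 'sequential'},
}

print(select_server(tree, {'operation_type': 'complex_analysis', 'complexity_score': 0.8}))  # ('sequential', 0.9)
print(select_server(tree, {'operation_type': 'ui_component'}))                               # ('magic', 0.8)
print(select_server(tree, {'operation_type': 'documentation'}))                              # ('sequential', 0.0)
```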
### Performance Intelligence Pattern
```yaml
auto_optimization:
optimization_triggers:
- name: "high_complexity_parallel"
condition:
complexity_score: ">0.7"
file_count: ">5"
actions:
- "enable_parallel_processing"
- "increase_cache_size"
urgency: "high"
- name: "resource_constraint"
condition:
resource_usage: ">0.8"
actions:
- "enable_compression"
- "reduce_verbosity"
urgency: "critical"
resource_management:
resource_zones:
green:
threshold: 0.6
yellow:
threshold: 0.75
red:
threshold: 0.9
```
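The resource zones above are defined by upper thresholds. A classifier sketch — treating usage beyond the red threshold as `'critical'` is an assumption for illustration, not documented behavior:

```python
ZONES = [('green', 0.6), ('yellow', 0.75), ('red', 0.9)]

def classify_resource_zone(usage):
    """Map a 0.0-1.0 resource usage figure to its zone."""
    for zone, threshold in ZONES:
        if usage <= threshold:
            return zone
    return 'critical'  # assumption: anything beyond the red threshold

print(classify_resource_zone(0.45))  # green
print(classify_resource_zone(0.70))  # yellow
print(classify_resource_zone(0.85))  # red
print(classify_resource_zone(0.95))  # critical
```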
## Performance Characteristics
### Operation Timings
- **Pattern Loading**: <50ms for complete pattern set
- **Pattern Reload Check**: <5ms for change detection
- **Context Evaluation**: <25ms for complex pattern matching
- **Cache Lookup**: <1ms for cached results
- **Pattern Hash Computation**: <3ms for configuration changes
### Memory Efficiency
- **Pattern Storage**: ~2-10KB per pattern file depending on complexity
- **Evaluation Cache**: ~500B-2KB per cached evaluation
- **Pattern Cache**: ~1KB for pattern hashes and metadata
- **Total Memory**: <50KB for typical pattern sets
### Quality Metrics
- **Pattern Match Accuracy**: >95% correct pattern application
- **Cache Hit Rate**: 85%+ for repeated evaluations
- **Hot-Reload Responsiveness**: <1s pattern update detection
- **Evaluation Reliability**: <0.1% pattern matching errors
## Error Handling Strategies
### Pattern Loading Failures
- **Malformed YAML**: Skip problematic patterns, log warnings, continue with valid patterns
- **Missing Pattern Files**: Use empty pattern sets with warnings
- **Permission Errors**: Graceful fallback to default recommendations
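The skip-and-continue strategy for malformed pattern files can be sketched generically. Here the parser is a pluggable callable, with `json.loads` standing in for the YAML parser since PyYAML may not be available:

```python
import json

def load_patterns(sources, parse=json.loads):
    """Parse each source; skip malformed ones instead of failing the whole load."""
    patterns, warnings = {}, []
    for name, text in sources.items():
        try:
            patterns[name] = parse(text)
        except Exception as exc:
            warnings.append(f"skipped {name}: {exc}")  # log warning, continue
    return patterns, warnings

sources = {
    'mcp_orchestration': '{"default_primary": "sequential"}',
    'broken_pattern': '{not valid',
}
patterns, warnings = load_patterns(sources)
print(sorted(patterns))  # ['mcp_orchestration']
print(len(warnings))     # 1
```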
### Evaluation Failures
- **Invalid Context**: Return no-match result with appropriate metadata
- **Pattern Execution Errors**: Log error, return fallback recommendations
- **Cache Corruption**: Clear cache, re-evaluate patterns
### Performance Degradation
- **Memory Pressure**: Reduce cache size, increase eviction frequency
- **High Latency**: Skip non-critical pattern evaluations
- **Resource Constraints**: Disable complex pattern matching temporarily
## Dependencies and Relationships
### Internal Dependencies
- **yaml_loader**: Configuration loading for YAML pattern files
- **Standard Libraries**: time, hashlib, typing, pathlib
### Framework Integration
- **YAML Configuration**: Consumes intelligence patterns from config/ directory
- **Hot-Reload Capability**: Real-time pattern updates without code changes
- **Performance Caching**: Optimized for hook performance requirements
### Hook Coordination
- Used by hooks for intelligent decision making based on YAML patterns
- Provides standardized pattern evaluation interface
- Enables configuration-driven intelligence across all hook operations
---
*This module enables the SuperClaude framework to evolve its intelligence through configuration rather than code changes, providing hot-reloadable, pattern-based decision making that adapts to changing requirements and optimizes based on operational data.*



@@ -1,760 +0,0 @@
# learning_engine.py - Adaptive Learning and Feedback System
## Overview
The `learning_engine.py` module provides a cross-hook adaptation system that learns from user patterns, operation effectiveness, and system performance to continuously improve SuperClaude intelligence. It implements user preference learning, operation pattern recognition, performance feedback integration, and cross-hook coordination for personalized and project-specific adaptations.
## Purpose and Responsibilities
### Primary Functions
- **User Preference Learning**: Personalization based on effectiveness feedback and usage patterns
- **Operation Pattern Recognition**: Identification and optimization of common workflows
- **Performance Feedback Integration**: Continuous improvement through effectiveness metrics
- **Cross-Hook Knowledge Sharing**: Shared learning across all hook implementations
- **Effectiveness Measurement**: Validation of adaptation success and continuous refinement
### Intelligence Capabilities
- **Pattern Signature Generation**: Unique identification of learning patterns for reuse
- **Adaptation Creation**: Automatic generation of behavioral modifications from patterns
- **Context Matching**: Intelligent matching of current context to learned adaptations
- **Effectiveness Tracking**: Longitudinal monitoring of adaptation success rates
## Core Classes and Data Structures
### Enumerations
#### LearningType
```python
class LearningType(Enum):
USER_PREFERENCE = "user_preference" # Personal preference patterns
OPERATION_PATTERN = "operation_pattern" # Workflow optimization patterns
PERFORMANCE_OPTIMIZATION = "performance_optimization" # Performance improvement patterns
ERROR_RECOVERY = "error_recovery" # Error handling and recovery patterns
EFFECTIVENESS_FEEDBACK = "effectiveness_feedback" # Feedback on adaptation effectiveness
```
#### AdaptationScope
```python
class AdaptationScope(Enum):
SESSION = "session" # Apply only to current session
PROJECT = "project" # Apply to current project
USER = "user" # Apply across all user sessions
GLOBAL = "global" # Apply to all users (anonymized)
```
### Data Classes
#### LearningRecord
```python
@dataclass
class LearningRecord:
timestamp: float # When the learning event occurred
learning_type: LearningType # Type of learning pattern
scope: AdaptationScope # Scope of application
context: Dict[str, Any] # Context in which learning occurred
pattern: Dict[str, Any] # The pattern or behavior observed
effectiveness_score: float # 0.0 to 1.0 effectiveness rating
confidence: float # 0.0 to 1.0 confidence in learning
metadata: Dict[str, Any] # Additional learning metadata
```
#### Adaptation
```python
@dataclass
class Adaptation:
adaptation_id: str # Unique adaptation identifier
pattern_signature: str # Pattern signature for matching
trigger_conditions: Dict[str, Any] # Conditions that trigger this adaptation
modifications: Dict[str, Any] # Modifications to apply
effectiveness_history: List[float] # Historical effectiveness scores
usage_count: int # Number of times applied
last_used: float # Timestamp of last usage
confidence_score: float # Current confidence in adaptation
```
#### LearningInsight
```python
@dataclass
class LearningInsight:
insight_type: str # Type of insight discovered
description: str # Human-readable description
evidence: List[str] # Supporting evidence for insight
recommendations: List[str] # Actionable recommendations
confidence: float # Confidence in insight accuracy
impact_score: float # Expected impact of implementing insight
```
## Learning Record Management
### record_learning_event()
```python
def record_learning_event(self,
learning_type: LearningType,
scope: AdaptationScope,
context: Dict[str, Any],
pattern: Dict[str, Any],
effectiveness_score: float,
confidence: float = 1.0,
metadata: Dict[str, Any] = None) -> str:
record = LearningRecord(
timestamp=time.time(),
learning_type=learning_type,
scope=scope,
context=context,
pattern=pattern,
effectiveness_score=effectiveness_score,
confidence=confidence,
metadata=metadata
)
self.learning_records.append(record)
# Trigger adaptation creation if pattern is significant
if effectiveness_score > 0.7 and confidence > 0.6:
self._create_adaptation_from_record(record)
self._save_learning_data()
return f"learning_{int(record.timestamp)}"
```
**Learning Event Processing**:
1. **Record Creation**: Capture learning event with full context
2. **Significance Assessment**: Evaluate effectiveness and confidence thresholds
3. **Adaptation Trigger**: Create adaptations for significant patterns
4. **Persistence**: Save learning data for future sessions
5. **ID Generation**: Return unique learning record identifier
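Steps 2–3 and 5 reduce to a threshold gate plus a timestamp-derived identifier; a standalone sketch using the thresholds from the code above:

```python
def is_significant(effectiveness_score, confidence):
    """Only strong, trusted patterns trigger adaptation creation."""
    return effectiveness_score > 0.7 and confidence > 0.6

def learning_id(timestamp):
    """Learning record IDs are derived from the event timestamp."""
    return f"learning_{int(timestamp)}"

print(is_significant(0.85, 0.9))  # True  -> adaptation created
print(is_significant(0.85, 0.5))  # False -> recorded only
print(learning_id(1700000000.5))  # learning_1700000000
```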
## Pattern Recognition and Adaptation
### Pattern Signature Generation
```python
def _generate_pattern_signature(self, pattern: Dict[str, Any], context: Dict[str, Any]) -> str:
key_elements = []
# Pattern type
if 'type' in pattern:
key_elements.append(f"type:{pattern['type']}")
# Context elements
if 'operation_type' in context:
key_elements.append(f"op:{context['operation_type']}")
if 'complexity_score' in context:
complexity_bucket = int(context['complexity_score'] * 10) / 10 # Round to 0.1
key_elements.append(f"complexity:{complexity_bucket}")
if 'file_count' in context:
file_bucket = min(context['file_count'], 10) # Cap at 10 for grouping
key_elements.append(f"files:{file_bucket}")
# Pattern-specific elements
for key in ['mcp_server', 'mode', 'compression_level', 'delegation_strategy']:
if key in pattern:
key_elements.append(f"{key}:{pattern[key]}")
return "_".join(sorted(key_elements))
```
**Signature Components**:
- **Pattern Type**: Core pattern classification
- **Operation Context**: Operation type, complexity, file count
- **Domain Elements**: MCP server, mode, compression level, delegation strategy
- **Normalization**: Bucketing and sorting for consistent matching
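Applying those normalization rules by hand shows why similar contexts collapse to the same signature. A standalone re-implementation for illustration (reduced to a subset of the pattern keys):

```python
def pattern_signature(pattern, context):
    elements = []
    if 'type' in pattern:
        elements.append(f"type:{pattern['type']}")
    if 'operation_type' in context:
        elements.append(f"op:{context['operation_type']}")
    if 'complexity_score' in context:
        bucket = int(context['complexity_score'] * 10) / 10  # round down to 0.1
        elements.append(f"complexity:{bucket}")
    if 'file_count' in context:
        elements.append(f"files:{min(context['file_count'], 10)}")  # cap at 10
    for key in ('mcp_server', 'mode'):
        if key in pattern:
            elements.append(f"{key}:{pattern[key]}")
    return "_".join(sorted(elements))

sig = pattern_signature(
    {'type': 'preference', 'mcp_server': 'serena'},
    {'operation_type': 'build', 'complexity_score': 0.67, 'file_count': 12},
)
print(sig)  # complexity:0.6_files:10_mcp_server:serena_op:build_type:preference
```

Note how `complexity_score=0.67` buckets to `0.6` and `file_count=12` caps at `10`, so nearby contexts match the same adaptation.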
### Adaptation Creation
```python
def _create_adaptation_from_record(self, record: LearningRecord):
pattern_signature = self._generate_pattern_signature(record.pattern, record.context)
# Check if adaptation already exists
if pattern_signature in self.adaptations:
adaptation = self.adaptations[pattern_signature]
adaptation.effectiveness_history.append(record.effectiveness_score)
adaptation.usage_count += 1
adaptation.last_used = record.timestamp
# Update confidence based on consistency
if len(adaptation.effectiveness_history) > 1:
            consistency = 1.0 - statistics.stdev(adaptation.effectiveness_history[-5:]) / max(statistics.mean(adaptation.effectiveness_history[-5:]), 0.1)
            adaptation.confidence_score = max(0.0, min(consistency * record.confidence, 1.0))  # clamp to [0, 1]
else:
# Create new adaptation
adaptation_id = f"adapt_{int(record.timestamp)}_{len(self.adaptations)}"
adaptation = Adaptation(
adaptation_id=adaptation_id,
pattern_signature=pattern_signature,
trigger_conditions=self._extract_trigger_conditions(record.context),
modifications=self._extract_modifications(record.pattern),
effectiveness_history=[record.effectiveness_score],
usage_count=1,
last_used=record.timestamp,
confidence_score=record.confidence
)
self.adaptations[pattern_signature] = adaptation
```
**Adaptation Logic**:
- **Existing Adaptation**: Update effectiveness history and confidence based on consistency
- **New Adaptation**: Create adaptation with initial effectiveness and confidence scores
- **Confidence Calculation**: Based on consistency of effectiveness scores over time
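The consistency-based confidence update is worth seeing numerically: for a stable history the standard deviation is small relative to the mean, so confidence stays high, while a volatile history drives it toward zero. A standalone sketch of the same formula:

```python
import statistics

def confidence_from_history(history, base_confidence=1.0):
    """Confidence rises with consistent effectiveness scores, falls with volatility."""
    recent = history[-5:]
    consistency = 1.0 - statistics.stdev(recent) / max(statistics.mean(recent), 0.1)
    return max(0.0, min(consistency * base_confidence, 1.0))

print(round(confidence_from_history([0.8, 0.9, 1.0]), 3))  # stable history, high confidence
print(round(confidence_from_history([0.2, 0.9, 0.3]), 3))  # volatile history, low confidence
```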
## Context Matching and Application
### Context Matching
```python
def _matches_trigger_conditions(self, conditions: Dict[str, Any], context: Dict[str, Any]) -> bool:
for key, expected_value in conditions.items():
if key not in context:
continue
context_value = context[key]
# Exact match for strings and booleans
if isinstance(expected_value, (str, bool)):
if context_value != expected_value:
return False
# Range match for numbers
elif isinstance(expected_value, (int, float)):
tolerance = 0.1 if isinstance(expected_value, float) else 1
if abs(context_value - expected_value) > tolerance:
return False
return True
```
**Matching Strategies**:
- **Exact Match**: String and boolean values must match exactly
- **Range Match**: Numeric values within tolerance (0.1 for floats, 1 for integers)
- **Missing Values**: Ignore missing context keys (graceful degradation)
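A worked example of those matching rules — exact for strings and booleans, tolerance-based for numbers, with keys absent from the context silently ignored (standalone sketch):

```python
def matches_trigger(conditions, context):
    for key, expected in conditions.items():
        if key not in context:
            continue  # missing context keys are ignored (graceful degradation)
        actual = context[key]
        if isinstance(expected, (str, bool)):
            if actual != expected:
                return False
        elif isinstance(expected, (int, float)):
            tolerance = 0.1 if isinstance(expected, float) else 1
            if abs(actual - expected) > tolerance:
                return False
    return True

conditions = {'operation_type': 'build', 'complexity_score': 0.6, 'file_count': 8}
print(matches_trigger(conditions, {'operation_type': 'build', 'complexity_score': 0.65, 'file_count': 9}))  # True
print(matches_trigger(conditions, {'operation_type': 'build', 'file_count': 11}))                           # False
```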
### Adaptation Application
```python
def apply_adaptations(self,
context: Dict[str, Any],
base_recommendations: Dict[str, Any]) -> Dict[str, Any]:
relevant_adaptations = self.get_adaptations_for_context(context)
enhanced_recommendations = base_recommendations.copy()
for adaptation in relevant_adaptations:
# Apply modifications from adaptation
for modification_type, modification_value in adaptation.modifications.items():
if modification_type == 'preferred_mcp_server':
# Enhance MCP server selection
if 'recommended_mcp_servers' not in enhanced_recommendations:
enhanced_recommendations['recommended_mcp_servers'] = []
servers = enhanced_recommendations['recommended_mcp_servers']
if modification_value not in servers:
servers.insert(0, modification_value) # Prioritize learned preference
elif modification_type == 'preferred_mode':
# Enhance mode selection
if 'recommended_modes' not in enhanced_recommendations:
enhanced_recommendations['recommended_modes'] = []
modes = enhanced_recommendations['recommended_modes']
if modification_value not in modes:
modes.insert(0, modification_value)
elif modification_type == 'suggested_flags':
# Enhance flag suggestions
if 'suggested_flags' not in enhanced_recommendations:
enhanced_recommendations['suggested_flags'] = []
for flag in modification_value:
if flag not in enhanced_recommendations['suggested_flags']:
enhanced_recommendations['suggested_flags'].append(flag)
# Update usage tracking
adaptation.usage_count += 1
adaptation.last_used = time.time()
return enhanced_recommendations
```
**Application Process**:
1. **Context Matching**: Find adaptations that match current context
2. **Recommendation Enhancement**: Apply learned preferences to base recommendations
3. **Prioritization**: Insert learned preferences at the beginning of recommendation lists
4. **Usage Tracking**: Update usage statistics for applied adaptations
5. **Metadata Addition**: Include adaptation metadata in enhanced recommendations
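The prioritization step — a learned preference is inserted at the head of a recommendation list only if not already present — can be sketched as:

```python
def prioritize(recommendations, key, learned_value):
    """Insert a learned preference at the head of a recommendation list if absent."""
    enhanced = dict(recommendations)
    values = list(enhanced.get(key, []))
    if learned_value not in values:
        values.insert(0, learned_value)
    enhanced[key] = values
    return enhanced

base = {'recommended_mcp_servers': ['morphllm', 'sequential']}
print(prioritize(base, 'recommended_mcp_servers', 'serena'))      # serena moves to the front
print(prioritize(base, 'recommended_mcp_servers', 'sequential'))  # already present, unchanged
```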
## Learning Insights Generation
### generate_learning_insights()
```python
def generate_learning_insights(self) -> List[LearningInsight]:
insights = []
# User preference insights
insights.extend(self._analyze_user_preferences())
# Performance pattern insights
insights.extend(self._analyze_performance_patterns())
# Error pattern insights
insights.extend(self._analyze_error_patterns())
# Effectiveness insights
insights.extend(self._analyze_effectiveness_patterns())
return insights
```
### User Preference Analysis
```python
def _analyze_user_preferences(self) -> List[LearningInsight]:
insights = []
# Analyze MCP server preferences
mcp_usage = {}
for record in self.learning_records:
if record.learning_type == LearningType.USER_PREFERENCE:
server = record.pattern.get('mcp_server')
if server:
if server not in mcp_usage:
mcp_usage[server] = []
mcp_usage[server].append(record.effectiveness_score)
if mcp_usage:
# Find most effective server
server_effectiveness = {
server: statistics.mean(scores)
for server, scores in mcp_usage.items()
if len(scores) >= 3
}
if server_effectiveness:
best_server = max(server_effectiveness, key=server_effectiveness.get)
best_score = server_effectiveness[best_server]
if best_score > 0.8:
insights.append(LearningInsight(
insight_type="user_preference",
description=f"User consistently prefers {best_server} MCP server",
evidence=[f"Effectiveness score: {best_score:.2f}", f"Usage count: {len(mcp_usage[best_server])}"],
recommendations=[f"Auto-suggest {best_server} for similar operations"],
confidence=min(best_score, 1.0),
impact_score=0.7
))
return insights
```
### Performance Pattern Analysis
```python
def _analyze_performance_patterns(self) -> List[LearningInsight]:
insights = []
# Analyze delegation effectiveness
delegation_records = [
r for r in self.learning_records
if r.learning_type == LearningType.PERFORMANCE_OPTIMIZATION
and 'delegation' in r.pattern
]
if len(delegation_records) >= 5:
avg_effectiveness = statistics.mean([r.effectiveness_score for r in delegation_records])
if avg_effectiveness > 0.75:
insights.append(LearningInsight(
insight_type="performance_optimization",
description="Delegation consistently improves performance",
evidence=[f"Average effectiveness: {avg_effectiveness:.2f}", f"Sample size: {len(delegation_records)}"],
recommendations=["Enable delegation for multi-file operations", "Lower delegation threshold"],
confidence=avg_effectiveness,
impact_score=0.8
))
return insights
```
### Error Pattern Analysis
```python
def _analyze_error_patterns(self) -> List[LearningInsight]:
insights = []
error_records = [
r for r in self.learning_records
if r.learning_type == LearningType.ERROR_RECOVERY
]
if len(error_records) >= 3:
# Analyze common error contexts
error_contexts = {}
for record in error_records:
context_key = record.context.get('operation_type', 'unknown')
if context_key not in error_contexts:
error_contexts[context_key] = []
error_contexts[context_key].append(record)
for context, records in error_contexts.items():
if len(records) >= 2:
avg_recovery_effectiveness = statistics.mean([r.effectiveness_score for r in records])
insights.append(LearningInsight(
insight_type="error_recovery",
description=f"Error patterns identified for {context} operations",
evidence=[f"Occurrence count: {len(records)}", f"Recovery effectiveness: {avg_recovery_effectiveness:.2f}"],
recommendations=[f"Add proactive validation for {context} operations"],
confidence=min(len(records) / 5, 1.0),
impact_score=0.6
))
return insights
```
### Effectiveness Trend Analysis
```python
def _analyze_effectiveness_patterns(self) -> List[LearningInsight]:
insights = []
if len(self.learning_records) >= 10:
recent_records = sorted(self.learning_records, key=lambda r: r.timestamp)[-10:]
avg_effectiveness = statistics.mean([r.effectiveness_score for r in recent_records])
if avg_effectiveness > 0.8:
insights.append(LearningInsight(
insight_type="effectiveness_trend",
description="SuperClaude effectiveness is high and improving",
evidence=[f"Recent average effectiveness: {avg_effectiveness:.2f}"],
recommendations=["Continue current learning patterns", "Consider expanding adaptation scope"],
confidence=avg_effectiveness,
impact_score=0.9
))
elif avg_effectiveness < 0.6:
insights.append(LearningInsight(
insight_type="effectiveness_concern",
description="SuperClaude effectiveness below optimal",
evidence=[f"Recent average effectiveness: {avg_effectiveness:.2f}"],
recommendations=["Review recent adaptations", "Gather more user feedback", "Adjust learning thresholds"],
confidence=1.0 - avg_effectiveness,
impact_score=0.8
))
return insights
```
## Effectiveness Feedback Integration
### record_effectiveness_feedback()
```python
def record_effectiveness_feedback(self,
adaptation_ids: List[str],
effectiveness_score: float,
context: Dict[str, Any]):
for adaptation_id in adaptation_ids:
# Find adaptation by ID
adaptation = None
for adapt in self.adaptations.values():
if adapt.adaptation_id == adaptation_id:
adaptation = adapt
break
if adaptation:
adaptation.effectiveness_history.append(effectiveness_score)
# Update confidence based on consistency
if len(adaptation.effectiveness_history) > 2:
recent_scores = adaptation.effectiveness_history[-5:]
consistency = 1.0 - statistics.stdev(recent_scores) / max(statistics.mean(recent_scores), 0.1)
                adaptation.confidence_score = max(0.0, min(consistency, 1.0))  # clamp to [0, 1]
# Record learning event
self.record_learning_event(
LearningType.EFFECTIVENESS_FEEDBACK,
AdaptationScope.USER,
context,
{'adaptation_id': adaptation_id},
effectiveness_score,
adaptation.confidence_score
)
```
**Feedback Processing**:
1. **Adaptation Lookup**: Find adaptation by unique ID
2. **Effectiveness Update**: Append new effectiveness score to history
3. **Confidence Recalculation**: Update confidence based on score consistency
4. **Learning Event Recording**: Create feedback learning record for future analysis
## Data Persistence and Management
### Data Storage
```python
def _save_learning_data(self):
try:
# Save learning records
records_file = self.cache_dir / "learning_records.json"
with open(records_file, 'w') as f:
json.dump([asdict(record) for record in self.learning_records], f, indent=2)
# Save adaptations
adaptations_file = self.cache_dir / "adaptations.json"
with open(adaptations_file, 'w') as f:
json.dump({k: asdict(v) for k, v in self.adaptations.items()}, f, indent=2)
# Save user preferences
preferences_file = self.cache_dir / "user_preferences.json"
with open(preferences_file, 'w') as f:
json.dump(self.user_preferences, f, indent=2)
# Save project patterns
patterns_file = self.cache_dir / "project_patterns.json"
with open(patterns_file, 'w') as f:
json.dump(self.project_patterns, f, indent=2)
        except Exception:
            # Cache persistence is best-effort; failures must never break hook execution
            pass
```
### Data Cleanup
```python
def cleanup_old_data(self, max_age_days: int = 30):
cutoff_time = time.time() - (max_age_days * 24 * 60 * 60)
# Remove old learning records
self.learning_records = [
record for record in self.learning_records
if record.timestamp > cutoff_time
]
# Remove unused adaptations
self.adaptations = {
k: v for k, v in self.adaptations.items()
if v.last_used > cutoff_time or v.usage_count > 5
}
self._save_learning_data()
```
**Cleanup Strategy**:
- **Learning Records**: Remove records older than max_age_days
- **Adaptations**: Keep adaptations used within max_age_days OR with usage_count > 5
- **Automatic Cleanup**: Triggered during initialization and periodically
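The retention rule for adaptations — keep anything recently used OR proven by heavy use — in isolation (standalone sketch; `Adaptation` is reduced to a dict for brevity):

```python
import time

def retained(adaptation, cutoff_time):
    """Keep adaptations that are recent OR have usage_count > 5."""
    return adaptation['last_used'] > cutoff_time or adaptation['usage_count'] > 5

now = time.time()
cutoff = now - 30 * 24 * 60 * 60  # 30-day default retention window
old_but_popular = {'last_used': cutoff - 1000, 'usage_count': 12}
old_and_rare = {'last_used': cutoff - 1000, 'usage_count': 2}
recent = {'last_used': now, 'usage_count': 1}

print(retained(old_but_popular, cutoff))  # True
print(retained(old_and_rare, cutoff))     # False
print(retained(recent, cutoff))           # True
```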
## Integration with Hooks
### Hook Usage Pattern
```python
# Initialize learning engine
learning_engine = LearningEngine(cache_dir=Path("cache"))
# Record learning event during hook operation
learning_engine.record_learning_event(
learning_type=LearningType.USER_PREFERENCE,
scope=AdaptationScope.USER,
context={
'operation_type': 'build',
'complexity_score': 0.6,
'file_count': 8,
'user_expertise': 'intermediate'
},
pattern={
'mcp_server': 'serena',
'mode': 'task_management',
'flags': ['--delegate', '--think-hard']
},
effectiveness_score=0.85,
confidence=0.9
)
# Apply learned adaptations to recommendations
base_recommendations = {
'recommended_mcp_servers': ['morphllm'],
'recommended_modes': ['brainstorming'],
'suggested_flags': ['--think']
}
enhanced_recommendations = learning_engine.apply_adaptations(
context={
'operation_type': 'build',
'complexity_score': 0.6,
'file_count': 8
},
base_recommendations=base_recommendations
)
print(f"Enhanced servers: {enhanced_recommendations['recommended_mcp_servers']}") # ['serena', 'morphllm']
print(f"Enhanced modes: {enhanced_recommendations['recommended_modes']}") # ['task_management', 'brainstorming']
print(f"Enhanced flags: {enhanced_recommendations['suggested_flags']}") # ['--delegate', '--think-hard', '--think']
```
### Learning Insights Usage
```python
# Generate insights from learning patterns
insights = learning_engine.generate_learning_insights()
for insight in insights:
print(f"Insight Type: {insight.insight_type}")
print(f"Description: {insight.description}")
print(f"Evidence: {insight.evidence}")
print(f"Recommendations: {insight.recommendations}")
print(f"Confidence: {insight.confidence:.2f}")
print(f"Impact Score: {insight.impact_score:.2f}")
print("---")
```
### Effectiveness Feedback Integration
```python
# Record effectiveness feedback after operation completion
applied = enhanced_recommendations.get('applied_adaptations', [])
if applied:
    adaptation_ids = [adapt['id'] for adapt in applied]
    learning_engine.record_effectiveness_feedback(
        adaptation_ids=adaptation_ids,
        effectiveness_score=0.92,
        context={'operation_result': 'success', 'user_satisfaction': 'high'}
    )
```
## Performance Characteristics
### Learning Operations
- **Learning Event Recording**: <5ms for single event with persistence
- **Pattern Signature Generation**: <3ms for typical context and pattern
- **Adaptation Creation**: <10ms including condition extraction and modification setup
- **Context Matching**: <2ms per adaptation for trigger condition evaluation
- **Adaptation Application**: <15ms for typical enhancement with multiple adaptations
### Memory Efficiency
- **Learning Records**: ~500B per record with full context and metadata
- **Adaptations**: ~300-500B per adaptation with effectiveness history
- **Pattern Signatures**: ~50-100B per signature for matching
- **Cache Storage**: JSON serialization with compression for large datasets
### Effectiveness Metrics
- **Adaptation Accuracy**: >85% correct context matching for learned adaptations
- **Effectiveness Prediction**: 80%+ correlation between predicted and actual effectiveness
- **Learning Convergence**: 3-5 similar events required for stable adaptation creation
- **Data Persistence Reliability**: <0.1% data loss rate with automatic recovery
## Error Handling Strategies
### Learning Event Failures
```python
try:
self.record_learning_event(learning_type, scope, context, pattern, effectiveness_score)
except Exception as e:
# Log error but continue operation
logger.log_error("learning_engine", f"Failed to record learning event: {e}")
# Return dummy learning ID for caller consistency
return f"learning_failed_{int(time.time())}"
```
### Adaptation Application Failures
- **Context Matching Errors**: Skip problematic adaptations, continue with others
- **Modification Application Errors**: Log warning, apply partial modifications
- **Effectiveness Tracking Errors**: Continue without tracking, log for later analysis
### Data Persistence Failures
- **File Write Errors**: Cache in memory, retry on next operation
- **Data Corruption**: Use backup files, regenerate from memory if needed
- **Permission Errors**: Fall back to temporary storage, warn user
## Configuration Requirements
### Learning Configuration
```yaml
learning_engine:
enabled: true
cache_directory: "cache/learning"
max_learning_records: 10000
max_adaptations: 1000
cleanup_interval_days: 30
thresholds:
significant_effectiveness: 0.7
significant_confidence: 0.6
adaptation_usage_threshold: 5
insights:
min_records_for_analysis: 10
min_pattern_occurrences: 3
confidence_threshold: 0.6
```
### Adaptation Scopes
```yaml
adaptation_scopes:
session:
enabled: true
max_adaptations: 100
project:
enabled: true
max_adaptations: 500
user:
enabled: true
max_adaptations: 1000
global:
enabled: false # Privacy-sensitive, disabled by default
anonymization_required: true
```
## Usage Examples
### Basic Learning Integration
```python
learning_engine = LearningEngine(cache_dir=Path("cache/learning"))
# Record successful MCP server selection
learning_engine.record_learning_event(
LearningType.USER_PREFERENCE,
AdaptationScope.USER,
context={'operation_type': 'analyze', 'complexity_score': 0.7},
pattern={'mcp_server': 'sequential'},
effectiveness_score=0.9
)
# Apply learned preferences
recommendations = learning_engine.apply_adaptations(
context={'operation_type': 'analyze', 'complexity_score': 0.7},
base_recommendations={'recommended_mcp_servers': ['morphllm']}
)
print(recommendations['recommended_mcp_servers']) # ['sequential', 'morphllm']
```
### Performance Optimization Learning
```python
# Record performance optimization success
learning_engine.record_learning_event(
LearningType.PERFORMANCE_OPTIMIZATION,
AdaptationScope.PROJECT,
context={'file_count': 25, 'operation_type': 'refactor'},
pattern={'delegation': 'auto', 'flags': ['--delegate', 'auto']},
effectiveness_score=0.85,
metadata={'time_saved_ms': 3000, 'quality_preserved': 0.95}
)
# Generate performance insights
insights = learning_engine.generate_learning_insights()
performance_insights = [i for i in insights if i.insight_type == "performance_optimization"]
```
## Dependencies and Relationships
### Internal Dependencies
- **yaml_loader**: Configuration loading for learning settings
- **Standard Libraries**: json, time, statistics, pathlib, typing, dataclasses, enum
### Framework Integration
- **Cross-Hook Learning**: Shared learning across all 7 hook implementations
- **Pattern Recognition**: Integration with pattern_detection.py for enhanced recommendations
- **Performance Monitoring**: Effectiveness tracking for framework optimization
### Hook Coordination
- Used by all hooks for consistent learning and adaptation
- Provides standardized learning interfaces and effectiveness tracking
- Enables cross-hook knowledge sharing and personalization
---
*This module serves as the intelligent learning and adaptation system for the SuperClaude framework, enabling continuous improvement through user preference learning, pattern recognition, and effectiveness feedback integration across all hook operations.*


@@ -1,688 +0,0 @@
# logger.py - Structured Logging Utilities for SuperClaude Hooks
## Overview
The `logger.py` module captures hook lifecycle events, decisions, and errors as structured JSON for later analysis. It deliberately avoids complex logging features, prioritizing low overhead and machine-readable data collection for operational analysis and debugging.
## Purpose and Responsibilities
### Primary Functions
- **Hook Lifecycle Logging**: Structured capture of hook start/end events with timing
- **Decision Logging**: Record decision-making processes and rationale within hooks
- **Error Logging**: Comprehensive error capture with context and recovery information
- **Performance Monitoring**: Timing and performance metrics collection for optimization
- **Session Tracking**: Correlation of events across hook executions with session IDs
### Design Philosophy
- **Structured Data**: JSON-formatted logs for machine readability and analysis
- **Performance First**: Minimal overhead with efficient logging operations
- **Operational Focus**: Data collection for debugging and operational insights
- **Simple Interface**: Easy integration with hooks without complex configuration
## Core Architecture
### HookLogger Class
```python
class HookLogger:
"""Simple logger for SuperClaude-Lite hooks."""
def __init__(self, log_dir: str = None, retention_days: int = None):
"""
Initialize the logger.
Args:
log_dir: Directory to store log files. Defaults to cache/logs/
retention_days: Number of days to keep log files. Defaults to 30.
"""
```
### Initialization and Configuration
```python
def __init__(self, log_dir: str = None, retention_days: int = None):
# Load configuration
self.config = self._load_config()
# Check if logging is enabled
if not self.config.get('logging', {}).get('enabled', True):
self.enabled = False
return
self.enabled = True
# Set up log directory
if log_dir is None:
root_dir = Path(__file__).parent.parent.parent
log_dir_config = self.config.get('logging', {}).get('file_settings', {}).get('log_directory', 'cache/logs')
log_dir = root_dir / log_dir_config
self.log_dir = Path(log_dir)
self.log_dir.mkdir(parents=True, exist_ok=True)
# Session ID for correlating events
self.session_id = str(uuid.uuid4())[:8]
```
**Initialization Features**:
- **Configurable Enablement**: Logging can be disabled via configuration
- **Flexible Directory**: Log directory configurable via parameter or configuration
- **Session Correlation**: Unique session ID for event correlation
- **Automatic Cleanup**: Old log file cleanup on initialization
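Session correlation can be sketched in a few lines: every event emitted during one hook run carries the same short session ID, generated as above with `str(uuid.uuid4())[:8]`. The event field names here are illustrative:

```python
import json
import uuid

session_id = str(uuid.uuid4())[:8]  # short ID, unique enough per session

def make_event(event_type, **fields):
    """Build one structured JSON log line tagged with the session ID."""
    return json.dumps({'session_id': session_id, 'event': event_type, **fields})

start = json.loads(make_event('hook_start', hook='pre_tool_use'))
end = json.loads(make_event('hook_end', hook='pre_tool_use', duration_ms=42))
print(start['session_id'] == end['session_id'])  # True: both events correlate
print(len(session_id))                           # 8
```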
## Configuration Management
### Configuration Loading
```python
def _load_config(self) -> Dict[str, Any]:
"""Load logging configuration from YAML file."""
if UnifiedConfigLoader is None:
# Return default configuration if loader not available
return {
'logging': {
'enabled': True,
'level': 'INFO',
'file_settings': {
'log_directory': 'cache/logs',
'retention_days': 30
}
}
}
try:
# Get project root
root_dir = Path(__file__).parent.parent.parent
loader = UnifiedConfigLoader(root_dir)
# Load logging configuration
config = loader.load_yaml('logging')
return config or {}
except Exception:
# Return default configuration on error
return {
'logging': {
'enabled': True,
'level': 'INFO',
'file_settings': {
'log_directory': 'cache/logs',
'retention_days': 30
}
}
}
```
**Configuration Fallback Strategy**:
1. **Primary**: Load from logging.yaml via UnifiedConfigLoader
2. **Fallback**: Use hardcoded default configuration if loader unavailable
3. **Error Recovery**: Default configuration on any loading error
4. **Graceful Degradation**: Continue operation even with configuration issues
### Configuration Structure
```python
default_config = {
'logging': {
'enabled': True, # Enable/disable logging
'level': 'INFO', # Log level (DEBUG, INFO, WARNING, ERROR)
'file_settings': {
'log_directory': 'cache/logs', # Log file directory
'retention_days': 30 # Days to keep log files
}
},
'hook_configuration': {
'hook_name': {
'enabled': True # Per-hook logging control
}
}
}
```
## Python Logger Integration
### Logger Setup
```python
def _setup_logger(self):
"""Set up the Python logger with JSON formatting."""
self.logger = logging.getLogger("superclaude_lite_hooks")
# Set log level from configuration
log_level_str = self.config.get('logging', {}).get('level', 'INFO').upper()
log_level = getattr(logging, log_level_str, logging.INFO)
self.logger.setLevel(log_level)
# Remove existing handlers to avoid duplicates
self.logger.handlers.clear()
# Create daily log file
today = datetime.now().strftime("%Y-%m-%d")
log_file = self.log_dir / f"superclaude-lite-{today}.log"
# File handler
handler = logging.FileHandler(log_file, mode='a', encoding='utf-8')
handler.setLevel(logging.INFO)
# Simple formatter - just output the message (which is already JSON)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
self.logger.addHandler(handler)
```
**Logger Features**:
- **Daily Log Files**: Separate log file per day for easy management
- **JSON Message Format**: Messages are pre-formatted JSON for structure
- **UTF-8 Encoding**: Support for international characters
- **Configurable Log Level**: Logger level set from configuration (note that the file handler itself is pinned at INFO, so DEBUG-level events do not reach the log file)
- **Handler Management**: Automatic cleanup of duplicate handlers
## Structured Event Logging
### Event Structure
```python
def _create_event(self, event_type: str, hook_name: str, data: Dict[str, Any] = None) -> Dict[str, Any]:
"""Create a structured event."""
event = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"session": self.session_id,
"hook": hook_name,
"event": event_type
}
if data:
event["data"] = data
return event
```
**Event Structure Components**:
- **timestamp**: ISO 8601 UTC timestamp for precise timing
- **session**: 8-character session ID for event correlation
- **hook**: Hook name for operation identification
- **event**: Event type (start, end, decision, error)
- **data**: Optional additional data specific to event type
### Event Filtering
```python
def _should_log_event(self, hook_name: str, event_type: str) -> bool:
"""Check if this event should be logged based on configuration."""
if not self.enabled:
return False
# Check hook-specific configuration
hook_config = self.config.get('hook_configuration', {}).get(hook_name, {})
if not hook_config.get('enabled', True):
return False
# Check event type configuration
hook_logging = self.config.get('logging', {}).get('hook_logging', {})
event_mapping = {
'start': 'log_lifecycle',
'end': 'log_lifecycle',
'decision': 'log_decisions',
'error': 'log_errors'
}
config_key = event_mapping.get(event_type, 'log_lifecycle')
return hook_logging.get(config_key, True)
```
**Filtering Logic**:
1. **Global Enable Check**: Respect global logging enabled/disabled setting
2. **Hook-Specific Check**: Allow per-hook logging control
3. **Event Type Check**: Filter by event type (lifecycle, decisions, errors)
4. **Default Behavior**: Log all events if configuration not specified
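The interplay of the three checks is easiest to see with a concrete configuration. A hypothetical standalone `should_log` helper (mirroring the method above, not part of the module):

```python
EVENT_MAPPING = {"start": "log_lifecycle", "end": "log_lifecycle",
                 "decision": "log_decisions", "error": "log_errors"}

def should_log(config, hook_name, event_type):
    # Standalone sketch of the three-stage filter chain.
    if not config.get("logging", {}).get("enabled", True):
        return False
    if not config.get("hook_configuration", {}).get(hook_name, {}).get("enabled", True):
        return False
    hook_logging = config.get("logging", {}).get("hook_logging", {})
    return hook_logging.get(EVENT_MAPPING.get(event_type, "log_lifecycle"), True)

config = {"logging": {"enabled": True, "hook_logging": {"log_lifecycle": False}},
          "hook_configuration": {"post_tool_use": {"enabled": False}}}
should_log(config, "pre_tool_use", "error")   # True  (errors still logged)
should_log(config, "pre_tool_use", "start")   # False (lifecycle disabled)
should_log(config, "post_tool_use", "error")  # False (hook disabled entirely)
```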
## Core Logging Methods
### Hook Lifecycle Logging
```python
def log_hook_start(self, hook_name: str, context: Optional[Dict[str, Any]] = None):
"""Log the start of a hook execution."""
if not self._should_log_event(hook_name, 'start'):
return
event = self._create_event("start", hook_name, context)
self.logger.info(json.dumps(event))
def log_hook_end(self, hook_name: str, duration_ms: int, success: bool, result: Optional[Dict[str, Any]] = None):
"""Log the end of a hook execution."""
if not self._should_log_event(hook_name, 'end'):
return
data = {
"duration_ms": duration_ms,
"success": success
}
if result:
data["result"] = result
event = self._create_event("end", hook_name, data)
self.logger.info(json.dumps(event))
```
**Lifecycle Event Data**:
- **Start Events**: Hook name, optional context data, timestamp
- **End Events**: Duration in milliseconds, success/failure status, optional results
- **Session Correlation**: All events include session ID for correlation
### Decision Logging
```python
def log_decision(self, hook_name: str, decision_type: str, choice: str, reason: str):
"""Log a decision made by a hook."""
if not self._should_log_event(hook_name, 'decision'):
return
data = {
"type": decision_type,
"choice": choice,
"reason": reason
}
event = self._create_event("decision", hook_name, data)
self.logger.info(json.dumps(event))
```
**Decision Event Components**:
- **type**: Category of decision (e.g., "mcp_server_selection", "mode_activation")
- **choice**: The decision made (e.g., "sequential", "brainstorming_mode")
- **reason**: Explanation for the decision (e.g., "complexity_score > 0.6")
### Error Logging
```python
def log_error(self, hook_name: str, error: str, context: Optional[Dict[str, Any]] = None):
"""Log an error that occurred in a hook."""
if not self._should_log_event(hook_name, 'error'):
return
data = {
"error": error
}
if context:
data["context"] = context
event = self._create_event("error", hook_name, data)
self.logger.info(json.dumps(event))
```
**Error Event Components**:
- **error**: Error message or description
- **context**: Optional additional context about error conditions
- **Timestamp**: Precise timing for error correlation
## Log File Management
### Automatic Cleanup
```python
def _cleanup_old_logs(self):
"""Remove log files older than retention_days."""
if self.retention_days <= 0:
return
cutoff_date = datetime.now() - timedelta(days=self.retention_days)
# Find all log files
log_pattern = self.log_dir / "superclaude-lite-*.log"
for log_file in glob.glob(str(log_pattern)):
try:
# Extract date from filename
filename = os.path.basename(log_file)
date_str = filename.replace("superclaude-lite-", "").replace(".log", "")
file_date = datetime.strptime(date_str, "%Y-%m-%d")
# Remove if older than cutoff
if file_date < cutoff_date:
os.remove(log_file)
except (ValueError, OSError):
# Skip files that don't match expected format or can't be removed
continue
```
**Cleanup Features**:
- **Configurable Retention**: Retention period set via configuration
- **Date-Based**: Log files named with YYYY-MM-DD format for easy parsing
- **Error Resilience**: Skip problematic files rather than failing entire cleanup
- **Initialization Cleanup**: Cleanup performed during logger initialization
### Log File Naming Convention
```
superclaude-lite-2024-12-15.log
superclaude-lite-2024-12-16.log
superclaude-lite-2024-12-17.log
```
**Naming Benefits**:
- **Chronological Sorting**: Files sort naturally by name
- **Easy Filtering**: Date-based filtering for log analysis
- **Rotation-Friendly**: Daily rotation without complex log rotation tools
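The cleanup logic relies directly on this convention. A hypothetical `is_expired` helper isolating the date-parsing step from `_cleanup_old_logs`:

```python
from datetime import datetime, timedelta

def is_expired(filename: str, retention_days: int, today: datetime) -> bool:
    # Extract YYYY-MM-DD from "superclaude-lite-YYYY-MM-DD.log".
    date_str = filename.replace("superclaude-lite-", "").replace(".log", "")
    file_date = datetime.strptime(date_str, "%Y-%m-%d")
    return file_date < today - timedelta(days=retention_days)

is_expired("superclaude-lite-2024-11-01.log", 30, datetime(2024, 12, 15))  # True
is_expired("superclaude-lite-2024-12-10.log", 30, datetime(2024, 12, 15))  # False
```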
## Global Interface and Convenience Functions
### Global Logger Instance
```python
# Global logger instance
_logger = None
def get_logger() -> HookLogger:
"""Get the global logger instance."""
global _logger
if _logger is None:
_logger = HookLogger()
return _logger
```
### Convenience Functions
```python
def log_hook_start(hook_name: str, context: Optional[Dict[str, Any]] = None):
"""Log the start of a hook execution."""
get_logger().log_hook_start(hook_name, context)
def log_hook_end(hook_name: str, duration_ms: int, success: bool, result: Optional[Dict[str, Any]] = None):
"""Log the end of a hook execution."""
get_logger().log_hook_end(hook_name, duration_ms, success, result)
def log_decision(hook_name: str, decision_type: str, choice: str, reason: str):
"""Log a decision made by a hook."""
get_logger().log_decision(hook_name, decision_type, choice, reason)
def log_error(hook_name: str, error: str, context: Optional[Dict[str, Any]] = None):
"""Log an error that occurred in a hook."""
get_logger().log_error(hook_name, error, context)
```
**Global Interface Benefits**:
- **Simplified Import**: Single import for all logging functions
- **Consistent Configuration**: Shared configuration across all hooks
- **Lazy Initialization**: Logger created only when first used
- **Memory Efficiency**: Single logger instance for entire application
## Hook Integration Patterns
### Basic Hook Integration
```python
from shared.logger import log_hook_start, log_hook_end, log_decision, log_error
import time
def pre_tool_use_hook(context):
start_time = time.time()
# Log hook start
log_hook_start("pre_tool_use", {"operation_type": context.get("operation_type")})
try:
# Hook logic
if context.get("complexity_score", 0) > 0.6:
# Log decision
log_decision("pre_tool_use", "delegation_activation", "enabled", "complexity_score > 0.6")
result = {"delegation_enabled": True}
else:
result = {"delegation_enabled": False}
# Log successful completion
duration_ms = int((time.time() - start_time) * 1000)
log_hook_end("pre_tool_use", duration_ms, True, result)
return result
except Exception as e:
# Log error
log_error("pre_tool_use", str(e), {"context": context})
# Log failed completion
duration_ms = int((time.time() - start_time) * 1000)
log_hook_end("pre_tool_use", duration_ms, False)
raise
```
### Advanced Integration with Context
```python
def session_start_hook(context):
# Start with rich context
log_hook_start("session_start", {
"project_path": context.get("project_path"),
"user_expertise": context.get("user_expertise", "intermediate"),
"session_type": context.get("session_type", "interactive")
})
# Log multiple decisions
log_decision("session_start", "configuration_load", "superclaude-config.json", "project configuration detected")
log_decision("session_start", "learning_engine", "enabled", "user preference learning available")
# Complex result logging
result = {
"configuration_loaded": True,
"hooks_initialized": 7,
"performance_targets": {
"session_start_ms": 50,
"pre_tool_use_ms": 200
}
}
log_hook_end("session_start", 45, True, result)
```
## Log Analysis and Monitoring
### Log Entry Format
```json
{
"timestamp": "2024-12-15T14:30:22.123456Z",
"session": "abc12345",
"hook": "pre_tool_use",
"event": "start",
"data": {
"operation_type": "build",
"complexity_score": 0.7
}
}
```
### Example Log Sequence
```json
{"timestamp": "2024-12-15T14:30:22.123Z", "session": "abc12345", "hook": "pre_tool_use", "event": "start", "data": {"operation_type": "build"}}
{"timestamp": "2024-12-15T14:30:22.125Z", "session": "abc12345", "hook": "pre_tool_use", "event": "decision", "data": {"type": "mcp_server_selection", "choice": "sequential", "reason": "complex analysis required"}}
{"timestamp": "2024-12-15T14:30:22.148Z", "session": "abc12345", "hook": "pre_tool_use", "event": "end", "data": {"duration_ms": 25, "success": true, "result": {"mcp_servers": ["sequential"]}}}
```
### Analysis Queries
```bash
# Find all errors in the last day
jq 'select(.event == "error")' superclaude-lite-2024-12-15.log
# Calculate average hook execution times
jq 'select(.event == "end") | .data.duration_ms' superclaude-lite-2024-12-15.log | awk '{sum+=$1; count++} END {print sum/count}'
# Find all decisions made by specific hook
jq 'select(.hook == "pre_tool_use" and .event == "decision")' superclaude-lite-2024-12-15.log
# Track session completion rates
jq 'select(.hook == "session_start" and .event == "end") | .data.success' superclaude-lite-2024-12-15.log
```
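The same analyses are straightforward in Python, since each line is a self-contained JSON object. A hypothetical `average_durations` helper that averages `end`-event durations per hook from JSONL lines:

```python
import json
from collections import defaultdict

def average_durations(lines):
    # Average "end"-event duration_ms per hook, given JSONL log lines.
    totals, counts = defaultdict(float), defaultdict(int)
    for line in lines:
        event = json.loads(line)
        if event.get("event") == "end":
            totals[event["hook"]] += event["data"]["duration_ms"]
            counts[event["hook"]] += 1
    return {hook: totals[hook] / counts[hook] for hook in totals}

sample = [
    '{"hook": "pre_tool_use", "event": "end", "data": {"duration_ms": 25, "success": true}}',
    '{"hook": "pre_tool_use", "event": "end", "data": {"duration_ms": 35, "success": true}}',
]
average_durations(sample)  # {"pre_tool_use": 30.0}
```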
## Performance Characteristics
### Logging Performance
- **Event Creation**: <1ms for structured event creation
- **File Writing**: <5ms for typical log entry with JSON serialization
- **Configuration Loading**: <10ms during initialization
- **Cleanup Operations**: <50ms for cleanup of old log files (depends on file count)
### Memory Efficiency
- **Logger Instance**: ~1-2KB for logger instance with configuration
- **Session Tracking**: ~100B for session ID and correlation data
- **Event Buffer**: Direct write-through, no event buffering for reliability
- **Configuration Cache**: ~500B for logging configuration
### File System Impact
- **Daily Log Files**: Automatic daily rotation with configurable retention
- **Log File Size**: Typical ~10-50KB per day depending on hook activity
- **Directory Structure**: Simple flat file structure in configurable directory
- **Cleanup Efficiency**: O(n) cleanup where n is number of log files
## Error Handling and Reliability
### Logging Error Handling
```python
def log_hook_start(self, hook_name: str, context: Optional[Dict[str, Any]] = None):
"""Log the start of a hook execution."""
try:
if not self._should_log_event(hook_name, 'start'):
return
event = self._create_event("start", hook_name, context)
self.logger.info(json.dumps(event))
except Exception:
# Silent failure - logging should never break hook execution
pass
```
### Reliability Features
- **Silent Failure**: Logging errors never interrupt hook execution
- **Graceful Degradation**: Continue operation even if logging fails
- **Configuration Fallback**: Default configuration if loading fails
- **File System Resilience**: Handle permission errors and disk space issues
### Recovery Mechanisms
- **Logger Recreation**: Recreate logger if file handle issues occur
- **Directory Creation**: Automatically create log directory if missing
- **Permission Handling**: Graceful fallback if log directory not writable
- **Disk Space**: Continue operation even if disk space limited
## Configuration Examples
### Basic Configuration (logging.yaml)
```yaml
logging:
enabled: true
level: INFO
file_settings:
log_directory: cache/logs
retention_days: 30
hook_logging:
log_lifecycle: true
log_decisions: true
log_errors: true
```
### Advanced Configuration
```yaml
logging:
enabled: true
level: DEBUG
file_settings:
log_directory: ${LOG_DIR:./logs}
retention_days: ${LOG_RETENTION:7}
max_file_size_mb: 10
hook_logging:
log_lifecycle: true
log_decisions: true
log_errors: true
log_performance: true
hook_configuration:
session_start:
enabled: true
pre_tool_use:
enabled: true
post_tool_use:
enabled: false # Disable logging for this hook
pre_compact:
enabled: true
```
### Production Configuration
```yaml
logging:
enabled: true
level: WARNING # Reduce verbosity in production
file_settings:
log_directory: /var/log/superclaude
retention_days: 90
hook_logging:
log_lifecycle: false # Disable lifecycle logging
log_decisions: true # Keep decision logging
log_errors: true # Always log errors
```
## Usage Examples
### Basic Logging
```python
from shared.logger import log_hook_start, log_hook_end, log_decision, log_error
import time

# Simple hook with logging
def my_hook(context):
    start = time.time()
    log_hook_start("my_hook")
    try:
        result = perform_operation()  # the actual work
        log_hook_end("my_hook", int((time.time() - start) * 1000), True, {"result": result})
        return result
    except Exception as e:
        log_error("my_hook", str(e))
        log_hook_end("my_hook", int((time.time() - start) * 1000), False)
        raise
```
### Decision Logging
```python
def intelligent_hook(context):
log_hook_start("intelligent_hook", {"complexity": context.get("complexity_score")})
# Log decision-making process
if context.get("complexity_score", 0) > 0.6:
log_decision("intelligent_hook", "server_selection", "sequential", "high complexity detected")
server = "sequential"
else:
log_decision("intelligent_hook", "server_selection", "morphllm", "low complexity operation")
server = "morphllm"
log_hook_end("intelligent_hook", 85, True, {"selected_server": server})
```
### Error Context Logging
```python
def error_prone_hook(context):
log_hook_start("error_prone_hook")
try:
risky_operation()
except SpecificError as e:
log_error("error_prone_hook", f"Specific error: {e}", {
"context": context,
"error_type": "SpecificError",
"recovery_attempted": True
})
# Attempt recovery
recovery_operation()
except Exception as e:
log_error("error_prone_hook", f"Unexpected error: {e}", {
"context": context,
"error_type": type(e).__name__
})
raise
```
## Dependencies and Relationships
### Internal Dependencies
- **yaml_loader**: Configuration loading (optional, fallback available)
- **Standard Libraries**: json, logging, os, time, datetime, pathlib, glob, uuid
### Framework Integration
- **Hook Lifecycle**: Integrated into all 7 SuperClaude hooks for consistent logging
- **Global Interface**: Shared logger instance across all hooks and modules
- **Configuration Management**: Unified configuration via yaml_loader integration
### External Analysis
- **JSON Format**: Structured logs for analysis with jq, logstash, elasticsearch
- **Daily Rotation**: Compatible with log analysis tools expecting daily files
- **Session Correlation**: Event correlation for debugging and monitoring
---
*This module provides the essential logging infrastructure for the SuperClaude framework, enabling comprehensive operational monitoring, debugging, and analysis through structured, high-performance event logging with reliable error handling and flexible configuration.*

# mcp_intelligence.py - Intelligent MCP Server Management Engine
## Overview
The `mcp_intelligence.py` module provides intelligent MCP server activation, coordination, and optimization based on ORCHESTRATOR.md patterns and real-time context analysis. It implements smart server selection, performance-optimized activation sequences, fallback strategies, cross-server coordination, and real-time adaptation based on effectiveness metrics.
## Purpose and Responsibilities
### Primary Functions
- **Smart Server Selection**: Context-aware MCP server recommendation and activation
- **Performance Optimization**: Optimized activation sequences with cost/benefit analysis
- **Fallback Strategy Management**: Robust error handling with alternative server routing
- **Cross-Server Coordination**: Intelligent coordination strategies for multi-server operations
- **Real-Time Adaptation**: Dynamic adaptation based on server effectiveness and availability
### Intelligence Capabilities
- **Hybrid Intelligence Routing**: Morphllm vs Serena decision matrix based on complexity
- **Resource-Aware Activation**: Adaptive server selection based on resource constraints
- **Performance Monitoring**: Real-time tracking of activation costs and effectiveness
- **Coordination Strategy Selection**: Dynamic coordination patterns based on operation characteristics
## Core Classes and Data Structures
### Enumerations
#### MCPServerState
```python
class MCPServerState(Enum):
AVAILABLE = "available" # Server ready for activation
UNAVAILABLE = "unavailable" # Server not accessible
LOADING = "loading" # Server currently activating
ERROR = "error" # Server in error state
```
### Data Classes
#### MCPServerCapability
```python
@dataclass
class MCPServerCapability:
server_name: str # Server identifier
primary_functions: List[str] # Core capabilities list
performance_profile: str # lightweight|standard|intensive
activation_cost_ms: int # Activation time in milliseconds
token_efficiency: float # 0.0 to 1.0 efficiency rating
quality_impact: float # 0.0 to 1.0 quality improvement rating
```
#### MCPActivationPlan
```python
@dataclass
class MCPActivationPlan:
servers_to_activate: List[str] # Servers to enable
activation_order: List[str] # Optimal activation sequence
estimated_cost_ms: int # Total activation time estimate
efficiency_gains: Dict[str, float] # Expected gains per server
fallback_strategy: Dict[str, str] # Fallback mappings
coordination_strategy: str # Coordination approach
```
## Server Capability Definitions
### Server Specifications
```python
def _load_server_capabilities(self) -> Dict[str, MCPServerCapability]:
capabilities = {}
capabilities['context7'] = MCPServerCapability(
server_name='context7',
primary_functions=['library_docs', 'framework_patterns', 'best_practices'],
performance_profile='standard',
activation_cost_ms=150,
token_efficiency=0.8,
quality_impact=0.9
)
capabilities['sequential'] = MCPServerCapability(
server_name='sequential',
primary_functions=['complex_analysis', 'multi_step_reasoning', 'debugging'],
performance_profile='intensive',
activation_cost_ms=200,
token_efficiency=0.6,
quality_impact=0.95
)
capabilities['magic'] = MCPServerCapability(
server_name='magic',
primary_functions=['ui_components', 'design_systems', 'frontend_generation'],
performance_profile='standard',
activation_cost_ms=120,
token_efficiency=0.85,
quality_impact=0.9
)
capabilities['playwright'] = MCPServerCapability(
server_name='playwright',
primary_functions=['e2e_testing', 'browser_automation', 'performance_testing'],
performance_profile='intensive',
activation_cost_ms=300,
token_efficiency=0.7,
quality_impact=0.85
)
capabilities['morphllm'] = MCPServerCapability(
server_name='morphllm',
primary_functions=['intelligent_editing', 'pattern_application', 'fast_apply'],
performance_profile='lightweight',
activation_cost_ms=80,
token_efficiency=0.9,
quality_impact=0.8
)
capabilities['serena'] = MCPServerCapability(
server_name='serena',
primary_functions=['semantic_analysis', 'project_context', 'memory_management'],
performance_profile='standard',
activation_cost_ms=100,
token_efficiency=0.75,
quality_impact=0.95
)
```
## Intelligent Activation Planning
### create_activation_plan()
```python
def create_activation_plan(self,
user_input: str,
context: Dict[str, Any],
operation_data: Dict[str, Any]) -> MCPActivationPlan:
```
**Planning Pipeline**:
1. **Pattern Detection**: Use PatternDetector to identify server needs
2. **Intelligent Optimization**: Apply context-aware server selection
3. **Activation Sequencing**: Calculate optimal activation order
4. **Cost Estimation**: Predict activation costs and efficiency gains
5. **Fallback Strategy**: Create robust error handling plan
6. **Coordination Strategy**: Determine multi-server coordination approach
### Server Selection Optimization
#### Hybrid Intelligence Decision Matrix
```python
def _optimize_server_selection(self,
                               recommended_servers: List[str],
                               context: Dict[str, Any],
                               operation_data: Dict[str, Any]) -> List[str]:
    optimized = set(recommended_servers)
    # Morphllm vs Serena intelligence selection
    file_count = operation_data.get('file_count', 1)
    complexity_score = operation_data.get('complexity_score', 0.0)
    if 'morphllm' in optimized and 'serena' in optimized:
        # Choose the more appropriate server based on complexity
        if file_count > 10 or complexity_score > 0.6:
            optimized.remove('morphllm')  # Use Serena for complex operations
        else:
            optimized.remove('serena')  # Use Morphllm for efficient operations
```
**Decision Criteria**:
- **Serena Optimal**: file_count > 10 OR complexity_score > 0.6
- **Morphllm Optimal**: file_count ≤ 10 AND complexity_score ≤ 0.6
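The decision matrix reduces to a few lines. A hypothetical standalone sketch (not the module's method):

```python
def choose_edit_server(file_count: int, complexity_score: float) -> str:
    # Standalone version of the Morphllm-vs-Serena decision matrix.
    if file_count > 10 or complexity_score > 0.6:
        return "serena"    # complex or large-scope operations
    return "morphllm"      # fast, token-efficient edits

choose_edit_server(3, 0.4)   # "morphllm"
choose_edit_server(25, 0.7)  # "serena"
```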
#### Resource Constraint Optimization
```python
# Resource constraint optimization
resource_usage = context.get('resource_usage_percent', 0)
if resource_usage > 85:
# Remove intensive servers under resource constraints
intensive_servers = {
name for name, cap in self.server_capabilities.items()
if cap.performance_profile == 'intensive'
}
optimized -= intensive_servers
```
#### Context-Based Auto-Addition
```python
# Performance optimization based on operation type
operation_type = operation_data.get('operation_type', '')
if operation_type in ['read', 'analyze'] and 'sequential' not in optimized:
# Add Sequential for analysis operations
optimized.add('sequential')
# Auto-add Context7 if external libraries detected
if operation_data.get('has_external_dependencies', False):
optimized.add('context7')
```
## Activation Sequencing
### Optimal Activation Order
```python
def _calculate_activation_order(self, servers: List[str], context: Dict[str, Any]) -> List[str]:
ordered = []
# 1. Serena first if present (provides context for others)
if 'serena' in servers:
ordered.append('serena')
servers = [s for s in servers if s != 'serena']
# 2. Context7 early for documentation context
if 'context7' in servers:
ordered.append('context7')
servers = [s for s in servers if s != 'context7']
# 3. Remaining servers by activation cost (lightweight first)
remaining_costs = [
(server, self.server_capabilities[server].activation_cost_ms)
for server in servers
]
remaining_costs.sort(key=lambda x: x[1])
ordered.extend([server for server, _ in remaining_costs])
return ordered
```
**Activation Priorities**:
1. **Serena**: Provides project context for other servers
2. **Context7**: Supplies documentation context early
3. **Remaining**: Sorted by activation cost (lightweight → intensive)
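Using the activation costs from the capability table above, the ordering rule can be sketched as a standalone function (hypothetical `activation_order`, mirroring `_calculate_activation_order`):

```python
# Activation costs (ms) from the server capability definitions above.
ACTIVATION_COST_MS = {
    "context7": 150, "sequential": 200, "magic": 120,
    "playwright": 300, "morphllm": 80, "serena": 100,
}

def activation_order(servers):
    # Serena first (project context), Context7 second (docs context),
    # remaining servers by ascending activation cost.
    ordered = [s for s in ("serena", "context7") if s in servers]
    rest = [s for s in servers if s not in ordered]
    ordered.extend(sorted(rest, key=ACTIVATION_COST_MS.get))
    return ordered

activation_order(["playwright", "morphllm", "serena", "sequential"])
# → ["serena", "morphllm", "sequential", "playwright"]
```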
## Performance Estimation
### Activation Cost Calculation
```python
def _calculate_activation_cost(self, servers: List[str]) -> int:
"""Calculate total activation cost in milliseconds."""
return sum(
self.server_capabilities[server].activation_cost_ms
for server in servers
if server in self.server_capabilities
)
```
### Efficiency Gains Calculation
```python
def _calculate_efficiency_gains(self, servers: List[str], operation_data: Dict[str, Any]) -> Dict[str, float]:
gains = {}
for server in servers:
capability = self.server_capabilities[server]
# Base efficiency gain
base_gain = capability.token_efficiency * capability.quality_impact
# Context-specific adjustments
if server == 'morphllm' and operation_data.get('file_count', 1) <= 5:
gains[server] = base_gain * 1.2 # Extra efficiency for small operations
elif server == 'serena' and operation_data.get('complexity_score', 0) > 0.6:
gains[server] = base_gain * 1.3 # Extra value for complex operations
elif server == 'sequential' and 'debug' in operation_data.get('operation_type', ''):
gains[server] = base_gain * 1.4 # Extra value for debugging
else:
gains[server] = base_gain
return gains
```
## Fallback Strategy Management
### Fallback Mappings
```python
def _create_fallback_strategy(self, servers: List[str]) -> Dict[str, str]:
"""Create fallback strategy for server failures."""
fallback_map = {
'morphllm': 'serena', # Serena can handle editing
'serena': 'morphllm', # Morphllm can handle simple edits
'sequential': 'context7', # Context7 for documentation-based analysis
'context7': 'sequential', # Sequential for complex analysis
'magic': 'morphllm', # Morphllm for component generation
'playwright': 'sequential' # Sequential for test planning
}
fallbacks = {}
for server in servers:
fallback = fallback_map.get(server)
if fallback and fallback not in servers:
fallbacks[server] = fallback
else:
fallbacks[server] = 'native_tools' # Fall back to native Claude tools
return fallbacks
```
## Coordination Strategy Selection
### Strategy Determination
```python
def _determine_coordination_strategy(self, servers: List[str], operation_data: Dict[str, Any]) -> str:
if len(servers) <= 1:
return 'single_server'
# Sequential coordination for complex analysis
if 'sequential' in servers and operation_data.get('complexity_score', 0) > 0.6:
return 'sequential_lead'
# Serena coordination for multi-file operations
if 'serena' in servers and operation_data.get('file_count', 1) > 5:
return 'serena_lead'
# Parallel coordination for independent operations
if len(servers) >= 3:
return 'parallel_with_sync'
return 'collaborative'
```
**Coordination Strategies**:
- **single_server**: Single server operation
- **sequential_lead**: Sequential server coordinates analysis
- **serena_lead**: Serena server coordinates multi-file operations
- **parallel_with_sync**: Parallel execution with synchronization points
- **collaborative**: Equal collaboration between servers
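The selection rules above, expressed as a hypothetical standalone function:

```python
def coordination_strategy(servers, complexity_score=0.0, file_count=1):
    # Standalone sketch of the documented selection rules.
    if len(servers) <= 1:
        return "single_server"
    if "sequential" in servers and complexity_score > 0.6:
        return "sequential_lead"
    if "serena" in servers and file_count > 5:
        return "serena_lead"
    if len(servers) >= 3:
        return "parallel_with_sync"
    return "collaborative"

coordination_strategy(["sequential", "serena"], complexity_score=0.7)  # "sequential_lead"
coordination_strategy(["magic", "morphllm"])                           # "collaborative"
```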
## Activation Plan Execution
### execute_activation_plan()
```python
def execute_activation_plan(self, plan: MCPActivationPlan, context: Dict[str, Any]) -> Dict[str, Any]:
start_time = time.time()
activated_servers = []
failed_servers = []
fallback_activations = []
for server in plan.activation_order:
try:
# Check server availability
if self.server_states.get(server) == MCPServerState.UNAVAILABLE:
failed_servers.append(server)
self._handle_server_fallback(server, plan, fallback_activations)
continue
# Activate server (simulated - real implementation would call MCP)
self.server_states[server] = MCPServerState.LOADING
activation_start = time.time()
# Simulate activation with realistic variance
expected_cost = self.server_capabilities[server].activation_cost_ms
actual_cost = expected_cost * (0.8 + 0.4 * (hash(server) % 1000) / 1000)  # factor in [0.8, 1.2)
self.server_states[server] = MCPServerState.AVAILABLE
activated_servers.append(server)
# Track performance metrics
activation_time = (time.time() - activation_start) * 1000
self.performance_metrics[server] = {
'last_activation_ms': activation_time,
'expected_ms': expected_cost,
'efficiency_ratio': expected_cost / max(activation_time, 1)
}
except Exception as e:
failed_servers.append(server)
self.server_states[server] = MCPServerState.ERROR
self._handle_server_fallback(server, plan, fallback_activations)
total_time = (time.time() - start_time) * 1000
return {
'activated_servers': activated_servers,
'failed_servers': failed_servers,
'fallback_activations': fallback_activations,
'total_activation_time_ms': total_time,
'coordination_strategy': plan.coordination_strategy,
'performance_metrics': self.performance_metrics
}
```
## Performance Monitoring and Optimization
### Real-Time Performance Tracking
```python
# Track activation performance
self.performance_metrics[server] = {
'last_activation_ms': activation_time,
'expected_ms': expected_cost,
'efficiency_ratio': expected_cost / max(activation_time, 1)
}
# Maintain activation history
self.activation_history.append({
'timestamp': time.time(),
'plan': plan,
'activated': activated_servers,
'failed': failed_servers,
'fallbacks': fallback_activations,
'total_time_ms': total_time
})
```
### Optimization Recommendations
```python
def get_optimization_recommendations(self, context: Dict[str, Any]) -> Dict[str, Any]:
recommendations = []
# Analyze recent activation patterns
if len(self.activation_history) >= 5:
recent_activations = self.activation_history[-5:]
# Check for frequently failing servers
failed_counts = {}
for activation in recent_activations:
for failed in activation['failed']:
failed_counts[failed] = failed_counts.get(failed, 0) + 1
for server, count in failed_counts.items():
if count >= 3:
recommendations.append(f"Server {server} failing frequently - consider fallback strategy")
# Check for performance issues
avg_times = {}
for activation in recent_activations:
total_time = activation['total_time_ms']
server_count = len(activation['activated'])
if server_count > 0:
avg_time_per_server = total_time / server_count
avg_times[len(activation['activated'])] = avg_time_per_server
if avg_times and max(avg_times.values()) > 500:
recommendations.append("Consider reducing concurrent server activations for better performance")
return {
'recommendations': recommendations,
'performance_metrics': self.performance_metrics,
'server_states': {k: v.value for k, v in self.server_states.items()},
'efficiency_score': self._calculate_overall_efficiency()
}
```
## Integration with Hooks
### Hook Usage Pattern
```python
# Initialize MCP intelligence
mcp_intelligence = MCPIntelligence()
# Create activation plan
activation_plan = mcp_intelligence.create_activation_plan(
user_input="I need to analyze this complex React application and optimize its performance",
context={
'resource_usage_percent': 65,
'user_expertise': 'intermediate',
'project_type': 'web'
},
operation_data={
'file_count': 25,
'complexity_score': 0.7,
'operation_type': 'analyze',
'has_external_dependencies': True
}
)
# Execute activation plan
execution_result = mcp_intelligence.execute_activation_plan(activation_plan, context)
# Process results
activated_servers = execution_result['activated_servers'] # ['serena', 'context7', 'sequential']
coordination_strategy = execution_result['coordination_strategy'] # 'sequential_lead'
total_time = execution_result['total_activation_time_ms'] # 450ms
```
### Activation Plan Analysis
```python
print(f"Servers to activate: {activation_plan.servers_to_activate}")
print(f"Activation order: {activation_plan.activation_order}")
print(f"Estimated cost: {activation_plan.estimated_cost_ms}ms")
print(f"Efficiency gains: {activation_plan.efficiency_gains}")
print(f"Fallback strategy: {activation_plan.fallback_strategy}")
print(f"Coordination: {activation_plan.coordination_strategy}")
```
### Performance Optimization
```python
# Get optimization recommendations
recommendations = mcp_intelligence.get_optimization_recommendations(context)
print(f"Recommendations: {recommendations['recommendations']}")
print(f"Efficiency score: {recommendations['efficiency_score']}")
print(f"Server states: {recommendations['server_states']}")
```
## Performance Characteristics
### Activation Planning
- **Pattern Detection Integration**: <25ms for pattern analysis
- **Server Selection Optimization**: <10ms for decision matrix
- **Activation Sequencing**: <5ms for ordering calculation
- **Cost Estimation**: <3ms for performance prediction
### Execution Performance
- **Single Server Activation**: 80-300ms depending on server type
- **Multi-Server Coordination**: 200-800ms for parallel activation
- **Fallback Handling**: <50ms additional overhead per failure
- **Performance Tracking**: <5ms per server for metrics collection
### Memory Efficiency
- **Server Capability Cache**: ~2-3KB for all server definitions
- **Activation History**: ~500B per activation record
- **Performance Metrics**: ~200B per server per activation
- **State Tracking**: ~100B per server state
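Bounds like these are easiest to hold with a fixed-size history; a minimal sketch using a capped `deque` (the `MAX_HISTORY` value is an assumption for illustration, not the framework's actual limit):

```python
from collections import deque

# Bounded activation history: old records are evicted automatically,
# keeping memory roughly constant (~500B per record * MAX_HISTORY).
MAX_HISTORY = 100  # illustrative cap, not taken from the implementation

activation_history = deque(maxlen=MAX_HISTORY)

for i in range(250):
    activation_history.append({'activated': [], 'failed': [], 'total_time_ms': i})

# Only the most recent MAX_HISTORY records survive.
print(len(activation_history))                 # 100
print(activation_history[0]['total_time_ms'])  # 150
```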
## Error Handling Strategies
### Server Failure Handling
```python
def _handle_server_fallback(self, failed_server: str, plan: MCPActivationPlan, fallback_activations: List[str]):
"""Handle server activation failure with fallback strategy."""
fallback = plan.fallback_strategy.get(failed_server)
if fallback and fallback != 'native_tools' and fallback not in plan.servers_to_activate:
# Try to activate fallback server
if self.server_states.get(fallback) == MCPServerState.AVAILABLE:
fallback_activations.append(f"{failed_server}->{fallback}")
```
### Graceful Degradation
- **Server Unavailable**: Use fallback server or native tools
- **Activation Timeout**: Mark as failed, attempt fallback
- **Performance Issues**: Recommend optimization strategies
- **Resource Constraints**: Auto-disable intensive servers
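A minimal sketch of how those degradation rules compose into a fallback chain; `resolve_server` is a hypothetical helper (the real code tracks `MCPServerState` enum values, simplified here to strings):

```python
def resolve_server(server: str, states: dict, fallbacks: dict) -> str:
    """Walk the fallback chain until an available server is found,
    ending at native tools. Hypothetical helper illustrating the
    degradation rules above."""
    seen = set()
    current = server
    while current != 'native_tools' and current not in seen:
        if states.get(current) == 'available':
            return current
        seen.add(current)  # guard against cycles in the fallback map
        current = fallbacks.get(current, 'native_tools')
    return 'native_tools'

states = {'serena': 'failed', 'morphllm': 'available'}
fallbacks = {'serena': 'morphllm'}
print(resolve_server('serena', states, fallbacks))  # morphllm
```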
### Recovery Mechanisms
- **Automatic Retry**: One retry attempt for transient failures
- **State Reset**: Clear error states after successful operations
- **History Cleanup**: Remove old activation history to prevent memory issues
- **Performance Adjustment**: Adapt expectations based on actual performance
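The single-retry behavior can be sketched as a thin wrapper around the activation call; `activate_with_retry` and the `TimeoutError` transient-failure signal are assumptions for illustration, not the module's actual interface:

```python
def activate_with_retry(activate, server: str) -> bool:
    """One retry attempt for transient failures, as described above.
    `activate` is any callable returning True on success."""
    for attempt in range(2):  # initial try + one retry
        try:
            if activate(server):
                return True
        except TimeoutError:
            pass  # transient failure: fall through to the retry
    return False

calls = []
def flaky(server):
    """Fails once, then succeeds: a stand-in for a transient fault."""
    calls.append(server)
    if len(calls) == 1:
        raise TimeoutError
    return True

print(activate_with_retry(flaky, 'context7'))  # True (succeeded on retry)
```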
## Configuration Requirements
### MCP Server Configuration
```yaml
mcp_server_integration:
servers:
context7:
enabled: true
activation_cost_ms: 150
performance_profile: "standard"
primary_functions:
- "library_docs"
- "framework_patterns"
- "best_practices"
sequential:
enabled: true
activation_cost_ms: 200
performance_profile: "intensive"
primary_functions:
- "complex_analysis"
- "multi_step_reasoning"
- "debugging"
```
### Orchestrator Configuration
```yaml
routing_patterns:
complexity_thresholds:
serena_threshold: 0.6
morphllm_threshold: 0.6
file_count_threshold: 10
resource_constraints:
intensive_disable_threshold: 85
performance_warning_threshold: 75
coordination_strategies:
sequential_lead_complexity: 0.6
serena_lead_files: 5
parallel_threshold: 3
```
## Usage Examples
### Basic Activation Planning
```python
mcp_intelligence = MCPIntelligence()
plan = mcp_intelligence.create_activation_plan(
user_input="Build a responsive React component with accessibility features",
context={'resource_usage_percent': 40, 'user_expertise': 'expert'},
operation_data={'file_count': 3, 'complexity_score': 0.4, 'operation_type': 'build'}
)
print(f"Recommended servers: {plan.servers_to_activate}") # ['magic', 'morphllm']
print(f"Activation order: {plan.activation_order}") # ['morphllm', 'magic']
print(f"Coordination: {plan.coordination_strategy}") # 'collaborative'
print(f"Estimated cost: {plan.estimated_cost_ms}ms") # 200ms
```
### Complex Multi-Server Operation
```python
plan = mcp_intelligence.create_activation_plan(
user_input="Analyze and refactor this large codebase with comprehensive testing",
context={'resource_usage_percent': 30, 'is_production': True},
operation_data={
'file_count': 50,
'complexity_score': 0.8,
'operation_type': 'refactor',
'has_tests': True,
'has_external_dependencies': True
}
)
print(f"Servers: {plan.servers_to_activate}") # ['serena', 'context7', 'sequential', 'playwright']
print(f"Order: {plan.activation_order}") # ['serena', 'context7', 'sequential', 'playwright']
print(f"Strategy: {plan.coordination_strategy}") # 'serena_lead'
print(f"Cost: {plan.estimated_cost_ms}ms") # 750ms
```
## Dependencies and Relationships
### Internal Dependencies
- **pattern_detection**: PatternDetector for intelligent server selection
- **yaml_loader**: Configuration loading for server capabilities
- **Standard Libraries**: time, typing, dataclasses, enum
### Framework Integration
- **ORCHESTRATOR.md**: Intelligent routing and coordination patterns
- **Performance Targets**: Sub-200ms activation goals with optimization
- **Quality Gates**: Server activation validation and monitoring
### Hook Coordination
- Used by all hooks for consistent MCP server management
- Provides standardized activation planning and execution
- Enables cross-hook performance monitoring and optimization
---
*This module serves as the intelligent orchestration layer for MCP server management, ensuring optimal server selection, efficient activation sequences, and robust error handling for all SuperClaude hook operations.*

# pattern_detection.py - Intelligent Pattern Recognition Engine
## Overview
The `pattern_detection.py` module provides intelligent pattern detection for automatic mode activation, MCP server selection, and operational optimization. It analyzes user input, context, and operation patterns to make smart recommendations about which SuperClaude modes should be activated, which MCP servers are needed, and what optimization flags to apply.
## Purpose and Responsibilities
### Primary Functions
- **Mode Trigger Detection**: Automatic identification of when SuperClaude modes should be activated
- **MCP Server Selection**: Context-aware recommendation of which MCP servers to enable
- **Complexity Analysis**: Pattern-based complexity assessment and scoring
- **Persona Recognition**: Detection of domain expertise hints in user requests
- **Performance Optimization**: Pattern-based performance optimization recommendations
### Intelligence Capabilities
- **Regex Pattern Matching**: Compiled patterns for efficient text analysis
- **Context-Aware Analysis**: Integration of user input, session context, and operation data
- **Confidence Scoring**: Probabilistic assessment of pattern matches
- **Multi-Factor Decision Making**: Combination of multiple pattern types for comprehensive analysis
## Core Classes and Data Structures
### Enumerations
#### PatternType
```python
class PatternType(Enum):
MODE_TRIGGER = "mode_trigger" # SuperClaude mode activation patterns
MCP_SERVER = "mcp_server" # MCP server selection patterns
OPERATION_TYPE = "operation_type" # Operation classification patterns
COMPLEXITY_INDICATOR = "complexity_indicator" # Complexity assessment patterns
PERSONA_HINT = "persona_hint" # Domain expertise detection patterns
PERFORMANCE_HINT = "performance_hint" # Performance optimization patterns
```
### Data Classes
#### PatternMatch
```python
@dataclass
class PatternMatch:
pattern_type: PatternType # Type of pattern detected
pattern_name: str # Specific pattern identifier
confidence: float # 0.0 to 1.0 confidence score
matched_text: str # Text that triggered the match
suggestions: List[str] # Actionable recommendations
metadata: Dict[str, Any] # Additional pattern-specific data
```
#### DetectionResult
```python
@dataclass
class DetectionResult:
matches: List[PatternMatch] # All detected pattern matches
recommended_modes: List[str] # SuperClaude modes to activate
recommended_mcp_servers: List[str] # MCP servers to enable
suggested_flags: List[str] # Command-line flags to apply
complexity_score: float # Overall complexity assessment
confidence_score: float # Overall confidence in detection
```
## Pattern Detection Engine
### Initialization and Configuration
```python
def __init__(self):
self.patterns = config_loader.load_config('modes')
self.mcp_patterns = config_loader.load_config('orchestrator')
self._compile_patterns()
```
**Pattern Compilation Process**:
1. Load mode detection patterns from YAML configuration
2. Load MCP routing patterns from orchestrator configuration
3. Compile regex patterns for efficient matching
4. Cache compiled patterns for performance
### Core Detection Method
#### detect_patterns()
```python
def detect_patterns(self,
user_input: str,
context: Dict[str, Any],
operation_data: Dict[str, Any]) -> DetectionResult:
```
**Detection Pipeline**:
1. **Mode Pattern Detection**: Identify SuperClaude mode triggers
2. **MCP Server Pattern Detection**: Determine required MCP servers
3. **Complexity Pattern Detection**: Assess operation complexity indicators
4. **Persona Pattern Detection**: Detect domain expertise hints
5. **Score Calculation**: Compute overall complexity and confidence scores
6. **Recommendation Generation**: Generate actionable recommendations
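The six stages above compose as a simple fold over stage detectors. This runnable sketch uses stub detectors and deliberately simplified scoring, so the names and formulas are illustrative rather than the module's actual internals:

```python
from dataclasses import dataclass, field

@dataclass
class Result:
    matches: list = field(default_factory=list)
    complexity_score: float = 0.0
    confidence_score: float = 0.0

def run_pipeline(user_input, context, operation_data, detectors):
    matches = []
    for detect in detectors:                  # stages 1-4: collect matches
        matches.extend(detect(user_input, context, operation_data))
    # stage 5: score calculation (simplified boost-per-match formula)
    base = operation_data.get('complexity_score', 0.0)
    complexity = min(base + 0.1 * len(matches), 1.0)
    confidence = (sum(m['confidence'] for m in matches) / len(matches)
                  if matches else 0.0)
    # stage 6: recommendation assembly
    return Result(matches, complexity, confidence)

stub = lambda ui, ctx, op: [{'confidence': 0.8}] if 'analyze' in ui else []
result = run_pipeline("analyze this app", {}, {'complexity_score': 0.3}, [stub])
print(round(result.complexity_score, 2), result.confidence_score)  # 0.4 0.8
```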
## Mode Detection Patterns
### Brainstorming Mode Detection
**Trigger Indicators**:
```python
brainstorm_indicators = [
r"(?:i want to|thinking about|not sure|maybe|could we)\s+(?:build|create|make)",
r"(?:brainstorm|explore|figure out|discuss)",
r"(?:new project|startup idea|feature concept)",
r"(?:ambiguous|uncertain|unclear)\s+(?:requirements|needs)"
]
```
**Pattern Match Example**:
```python
PatternMatch(
pattern_type=PatternType.MODE_TRIGGER,
pattern_name="brainstorming",
confidence=0.8,
matched_text="thinking about building",
suggestions=["Enable brainstorming mode for requirements discovery"],
metadata={"mode": "brainstorming", "auto_activate": True}
)
```
### Task Management Mode Detection
**Trigger Indicators**:
```python
task_management_indicators = [
r"(?:multiple|many|several)\s+(?:tasks|files|components)",
r"(?:build|implement|create)\s+(?:system|feature|application)",
r"(?:complex|comprehensive|large-scale)",
r"(?:manage|coordinate|orchestrate)\s+(?:work|tasks|operations)"
]
```
### Token Efficiency Mode Detection
**Trigger Indicators**:
```python
efficiency_indicators = [
r"(?:brief|concise|compressed|short)",
r"(?:token|resource|memory)\s+(?:limit|constraint|optimization)",
r"(?:efficient|optimized|minimal)\s+(?:output|response)"
]
```
**Automatic Resource-Based Activation**:
```python
resource_usage = context.get('resource_usage_percent', 0)
if resource_usage > 75:
# Auto-enable token efficiency mode
match = PatternMatch(
pattern_type=PatternType.MODE_TRIGGER,
pattern_name="token_efficiency",
confidence=0.85,
matched_text="high_resource_usage",
suggestions=["Auto-enable token efficiency due to resource constraints"],
metadata={"mode": "token_efficiency", "trigger": "resource_constraint"}
)
```
## MCP Server Detection Patterns
### Context7 (Library Documentation)
**Trigger Patterns**:
```python
context7_patterns = [
r"(?:library|framework|package)\s+(?:documentation|docs|patterns)",
r"(?:react|vue|angular|express|django|flask)",
r"(?:import|require|install|dependency)",
r"(?:official|standard|best practice)\s+(?:way|pattern|approach)"
]
```
### Sequential (Complex Analysis)
**Trigger Patterns**:
```python
sequential_patterns = [
r"(?:analyze|debug|troubleshoot|investigate)",
r"(?:complex|complicated|multi-step|systematic)",
r"(?:architecture|system|design)\s+(?:review|analysis)",
r"(?:root cause|performance|bottleneck)"
]
```
### Magic (UI Components)
**Trigger Patterns**:
```python
magic_patterns = [
r"(?:component|button|form|modal|dialog)",
r"(?:ui|frontend|interface|design)",
r"(?:react|vue|angular)\s+(?:component|element)",
r"(?:responsive|mobile|accessibility)"
]
```
### Playwright (Testing)
**Trigger Patterns**:
```python
playwright_patterns = [
r"(?:test|testing|e2e|end-to-end)",
r"(?:browser|cross-browser|automation)",
r"(?:performance|visual|regression)\s+(?:test|testing)",
r"(?:validate|verify|check)\s+(?:functionality|behavior)"
]
```
### Hybrid Intelligence Selection (Morphllm vs Serena)
```python
def _detect_mcp_patterns(self, user_input: str, context: Dict[str, Any], operation_data: Dict[str, Any]):
file_count = operation_data.get('file_count', 1)
complexity = operation_data.get('complexity_score', 0.0)
if file_count > 10 or complexity > 0.6:
# Recommend Serena for complex operations
return PatternMatch(
pattern_type=PatternType.MCP_SERVER,
pattern_name="serena",
confidence=0.9,
matched_text="high_complexity_operation",
suggestions=["Use Serena for complex multi-file operations"],
metadata={"mcp_server": "serena", "reason": "complexity_threshold"}
)
elif file_count <= 10 and complexity <= 0.6:
# Recommend Morphllm for efficient operations
return PatternMatch(
pattern_type=PatternType.MCP_SERVER,
pattern_name="morphllm",
confidence=0.8,
matched_text="moderate_complexity_operation",
suggestions=["Use Morphllm for efficient editing operations"],
metadata={"mcp_server": "morphllm", "reason": "efficiency_optimized"}
)
```
## Complexity Detection Patterns
### High Complexity Indicators
```python
high_complexity_patterns = [
r"(?:entire|whole|complete)\s+(?:codebase|system|application)",
r"(?:refactor|migrate|restructure)\s+(?:all|everything|entire)",
r"(?:architecture|system-wide|comprehensive)\s+(?:change|update|redesign)",
r"(?:complex|complicated|sophisticated)\s+(?:logic|algorithm|system)"
]
```
**Pattern Processing**:
```python
for pattern in high_complexity_patterns:
    match = re.search(pattern, user_input, re.IGNORECASE)
    if match:
        matches.append(PatternMatch(
            pattern_type=PatternType.COMPLEXITY_INDICATOR,
            pattern_name="high_complexity",
            confidence=0.8,
            matched_text=match.group(),  # reuse the match instead of searching twice
            suggestions=["Consider delegation and thinking modes"],
            metadata={"complexity_level": "high", "score_boost": 0.3}
        ))
```
### File Count-Based Complexity
```python
file_count = operation_data.get('file_count', 1)
if file_count > 5:
matches.append(PatternMatch(
pattern_type=PatternType.COMPLEXITY_INDICATOR,
pattern_name="multi_file_operation",
confidence=0.9,
matched_text=f"{file_count}_files",
suggestions=["Enable delegation for multi-file operations"],
metadata={"file_count": file_count, "delegation_recommended": True}
))
```
## Persona Detection Patterns
### Domain-Specific Patterns
```python
persona_patterns = {
"architect": [r"(?:architecture|design|structure|system)\s+(?:review|analysis|planning)"],
"performance": [r"(?:performance|optimization|speed|efficiency|bottleneck)"],
"security": [r"(?:security|vulnerability|audit|secure|safety)"],
"frontend": [r"(?:ui|frontend|interface|component|design|responsive)"],
"backend": [r"(?:api|server|database|backend|service)"],
"devops": [r"(?:deploy|deployment|ci|cd|infrastructure|docker|kubernetes)"],
"testing": [r"(?:test|testing|qa|quality|coverage|validation)"]
}
```
**Pattern Matching Process**:
```python
for persona, patterns in persona_patterns.items():
    for pattern in patterns:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            matches.append(PatternMatch(
                pattern_type=PatternType.PERSONA_HINT,
                pattern_name=persona,
                confidence=0.7,
                matched_text=match.group(),  # reuse the match instead of searching twice
                suggestions=[f"Consider {persona} persona for specialized expertise"],
                metadata={"persona": persona, "domain_specific": True}
            ))
```
## Scoring Algorithms
### Complexity Score Calculation
```python
def _calculate_complexity_score(self, matches: List[PatternMatch], operation_data: Dict[str, Any]) -> float:
base_score = operation_data.get('complexity_score', 0.0)
# Add complexity from pattern matches
for match in matches:
if match.pattern_type == PatternType.COMPLEXITY_INDICATOR:
score_boost = match.metadata.get('score_boost', 0.1)
base_score += score_boost
return min(base_score, 1.0)
```
### Confidence Score Calculation
```python
def _calculate_confidence_score(self, matches: List[PatternMatch]) -> float:
if not matches:
return 0.0
total_confidence = sum(match.confidence for match in matches)
return min(total_confidence / len(matches), 1.0)
```
## Recommendation Generation
### Mode Recommendations
```python
def _get_recommended_modes(self, matches: List[PatternMatch], complexity_score: float) -> List[str]:
modes = set()
# Add modes from pattern matches
for match in matches:
if match.pattern_type == PatternType.MODE_TRIGGER:
modes.add(match.pattern_name)
# Auto-activate based on complexity
if complexity_score > 0.6:
modes.add("task_management")
return list(modes)
```
### Flag Suggestions
```python
def _get_suggested_flags(self, matches: List[PatternMatch], complexity_score: float, context: Dict[str, Any]) -> List[str]:
flags = []
# Thinking flags based on complexity
if complexity_score >= 0.8:
flags.append("--ultrathink")
elif complexity_score >= 0.6:
flags.append("--think-hard")
elif complexity_score >= 0.3:
flags.append("--think")
# Delegation flags
for match in matches:
if match.metadata.get("delegation_recommended"):
flags.append("--delegate auto")
break
# Efficiency flags
for match in matches:
if match.metadata.get("compression_needed") or context.get('resource_usage_percent', 0) > 75:
flags.append("--uc")
break
# Validation flags for high-risk operations
if complexity_score > 0.7 or context.get('is_production', False):
flags.append("--validate")
return flags
```
## Performance Characteristics
### Pattern Compilation
- **Initialization Time**: <50ms for full pattern compilation
- **Memory Usage**: ~5-10KB for compiled pattern cache
- **Pattern Count**: ~50-100 patterns across all categories
### Detection Performance
- **Single Pattern Match**: <1ms average
- **Full Detection Pipeline**: <25ms for complex analysis
- **Regex Operations**: Optimized with compiled patterns
- **Context Processing**: <5ms for typical context sizes
### Cache Efficiency
- **Pattern Reuse**: 95%+ pattern reuse across requests
- **Compilation Avoidance**: Patterns compiled once per session
- **Memory Efficiency**: Patterns shared across all detection calls
## Integration with Hooks
### Hook Usage Pattern
```python
# Initialize pattern detector
pattern_detector = PatternDetector()
# Perform pattern detection
detection_result = pattern_detector.detect_patterns(
user_input="I want to build a complex web application with multiple components",
context={
'resource_usage_percent': 45,
'conversation_length': 25,
'user_expertise': 'intermediate'
},
operation_data={
'file_count': 12,
'complexity_score': 0.0, # Will be enhanced by detection
'operation_type': 'build'
}
)
# Apply recommendations
recommended_modes = detection_result.recommended_modes # ['brainstorming', 'task_management']
recommended_servers = detection_result.recommended_mcp_servers # ['serena', 'magic']
suggested_flags = detection_result.suggested_flags # ['--think-hard', '--delegate auto']
complexity_score = detection_result.complexity_score # 0.7
```
### Pattern Match Processing
```python
for match in detection_result.matches:
if match.pattern_type == PatternType.MODE_TRIGGER:
# Activate detected modes
activate_mode(match.pattern_name)
elif match.pattern_type == PatternType.MCP_SERVER:
# Enable recommended MCP servers
enable_mcp_server(match.pattern_name)
elif match.pattern_type == PatternType.COMPLEXITY_INDICATOR:
# Apply complexity-based optimizations
apply_complexity_optimizations(match.metadata)
```
## Configuration Requirements
### Mode Configuration (modes.yaml)
```yaml
mode_detection:
brainstorming:
trigger_patterns:
- "(?:i want to|thinking about|not sure)\\s+(?:build|create)"
- "(?:brainstorm|explore|figure out)"
- "(?:new project|startup idea)"
confidence_threshold: 0.7
task_management:
trigger_patterns:
- "(?:multiple|many)\\s+(?:files|components)"
- "(?:complex|comprehensive)"
- "(?:build|implement)\\s+(?:system|feature)"
confidence_threshold: 0.6
```
### MCP Routing Configuration (orchestrator.yaml)
```yaml
routing_patterns:
context7:
triggers:
- "(?:library|framework)\\s+(?:docs|patterns)"
- "(?:react|vue|angular)"
- "(?:official|standard)\\s+(?:way|approach)"
activation_threshold: 0.8
sequential:
triggers:
- "(?:analyze|debug|troubleshoot)"
- "(?:complex|multi-step)"
- "(?:architecture|system)\\s+(?:analysis|review)"
activation_threshold: 0.75
```
## Error Handling Strategies
### Pattern Compilation Errors
```python
def _compile_patterns(self):
    """Compile regex patterns for efficient matching."""
    self.compiled_patterns = {}
    # Compile per pattern so one bad regex doesn't abort the whole set
    for mode_name, mode_config in self.patterns.get('mode_detection', {}).items():
        compiled = []
        for pattern in mode_config.get('trigger_patterns', []):
            try:
                compiled.append(re.compile(pattern, re.IGNORECASE))
            except re.error as e:
                # Log and skip only the offending pattern; the rest still compile
                logger.log_error("pattern_detection", f"Pattern compilation error in '{mode_name}': {e}")
        self.compiled_patterns[f"mode_{mode_name}"] = compiled
```
### Detection Failures
- **Regex Errors**: Skip problematic patterns, continue with others
- **Context Errors**: Use default values for missing context keys
- **Scoring Errors**: Return safe default scores (0.5 complexity, 0.0 confidence)
### Graceful Degradation
- **Configuration Missing**: Use hardcoded fallback patterns
- **Pattern Compilation Failed**: Continue with available patterns
- **Performance Issues**: Implement timeout mechanisms for complex patterns
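The safe-default behavior can be wrapped around any detector; `safe_detect` is a hypothetical convenience, and the trimmed `DetectionResult` below only mirrors the dataclass documented earlier:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DetectionResult:  # trimmed copy of the dataclass documented above
    matches: List = field(default_factory=list)
    recommended_modes: List[str] = field(default_factory=list)
    recommended_mcp_servers: List[str] = field(default_factory=list)
    suggested_flags: List[str] = field(default_factory=list)
    complexity_score: float = 0.5   # safe default complexity
    confidence_score: float = 0.0   # safe default confidence

def safe_detect(detect, *args):
    """Degrade to safe defaults on any detection failure instead of
    propagating the error (hypothetical wrapper)."""
    try:
        return detect(*args)
    except Exception:
        return DetectionResult()

def broken_detector(*args):
    raise ValueError("simulated regex failure")

result = safe_detect(broken_detector, "input", {}, {})
print(result.complexity_score, result.confidence_score)  # 0.5 0.0
```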
## Usage Examples
### Basic Pattern Detection
```python
detector = PatternDetector()
result = detector.detect_patterns(
user_input="I need to analyze the performance bottlenecks in this complex React application",
context={'resource_usage_percent': 60, 'user_expertise': 'expert'},
operation_data={'file_count': 25, 'operation_type': 'analyze'}
)
print(f"Detected modes: {result.recommended_modes}") # ['task_management']
print(f"MCP servers: {result.recommended_mcp_servers}") # ['sequential', 'context7']
print(f"Suggested flags: {result.suggested_flags}") # ['--think-hard', '--delegate auto']
print(f"Complexity score: {result.complexity_score}") # 0.7
```
### Pattern Match Analysis
```python
for match in result.matches:
print(f"Pattern: {match.pattern_name}")
print(f"Type: {match.pattern_type.value}")
print(f"Confidence: {match.confidence}")
print(f"Matched text: {match.matched_text}")
print(f"Suggestions: {match.suggestions}")
print(f"Metadata: {match.metadata}")
print("---")
```
## Dependencies and Relationships
### Internal Dependencies
- **yaml_loader**: Configuration loading for pattern definitions
- **Standard Libraries**: re, json, typing, dataclasses, enum
### Framework Integration
- **MODE Detection**: Triggers for SuperClaude behavioral modes
- **MCP Coordination**: Server selection for intelligent tool routing
- **Performance Optimization**: Flag suggestions for efficiency improvements
### Hook Coordination
- Used by all hooks for consistent pattern-based decision making
- Provides standardized detection interface and result formats
- Enables cross-hook pattern learning and optimization
---
*This module serves as the intelligent pattern recognition system that transforms user input and context into actionable recommendations, enabling SuperClaude to automatically adapt its behavior based on detected patterns and requirements.*

# validate_system.py - YAML-Driven System Validation Engine
## Overview
The `validate_system.py` module provides a comprehensive YAML-driven system validation engine for the SuperClaude Framework-Hooks. It implements intelligent health scoring, proactive diagnostics, and predictive analysis by consuming declarative patterns from `validation_intelligence.yaml`, so system health can be monitored without hardcoded validation logic.
## Purpose and Responsibilities
### Primary Functions
- **YAML-Driven Validation Patterns**: Hot-reloadable validation patterns for comprehensive system analysis
- **Health Scoring**: Weighted component-based health scoring with configurable thresholds
- **Proactive Diagnostic Pattern Matching**: Early warning system based on pattern recognition
- **Predictive Health Analysis**: Trend analysis and predictive health assessments
- **Automated Remediation Suggestions**: Intelligence-driven remediation recommendations
- **Continuous Validation Cycles**: Ongoing system health monitoring and alerting
### Intelligence Capabilities
- **Pattern-Based Health Assessment**: Configurable health scoring based on YAML intelligence patterns
- **Component-Weighted Scoring**: Intelligent weighting of system components for overall health
- **Proactive Issue Detection**: Early warning patterns that predict potential system issues
- **Automated Fix Application**: Safe auto-remediation for known fixable issues
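Component-weighted scoring reduces per-component health scores to a single overall figure; a minimal sketch, with weights that are illustrative stand-ins for the `component_weights` section of validation_intelligence.yaml:

```python
def overall_health(scores: dict, weights: dict) -> float:
    """Weighted average of component health scores, normalized over the
    weights of the components actually present."""
    total_weight = sum(weights.get(c, 0.0) for c in scores)
    if total_weight == 0:
        return 0.0  # nothing scorable: unknown health
    return sum(scores[c] * weights.get(c, 0.0) for c in scores) / total_weight

# Illustrative weights and scores, not the framework's actual values
weights = {'learning_system': 0.25, 'configuration_system': 0.25,
           'performance_system': 0.5}
scores = {'learning_system': 0.9, 'configuration_system': 0.8,
          'performance_system': 0.6}
print(round(overall_health(scores, weights), 3))  # 0.725
```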
## Core Classes and Data Structures
### Enumerations
#### ValidationSeverity
```python
class ValidationSeverity(Enum):
INFO = "info" # Informational notices
LOW = "low" # Minor issues, no immediate action required
MEDIUM = "medium" # Moderate issues, should be addressed
HIGH = "high" # Significant issues, requires attention
CRITICAL = "critical" # System-threatening issues, immediate action required
```
#### HealthStatus
```python
class HealthStatus(Enum):
HEALTHY = "healthy" # System operating normally
WARNING = "warning" # Some issues detected, monitoring needed
CRITICAL = "critical" # Serious issues, immediate intervention required
UNKNOWN = "unknown" # Health status cannot be determined
```
### Data Classes
#### ValidationIssue
```python
@dataclass
class ValidationIssue:
component: str # System component with the issue
issue_type: str # Type of issue identified
severity: ValidationSeverity # Severity level of the issue
description: str # Human-readable description
evidence: List[str] # Supporting evidence for the issue
recommendations: List[str] # Suggested remediation actions
remediation_action: Optional[str] # Automated fix action if available
auto_fixable: bool # Whether the issue can be auto-fixed
timestamp: float # When the issue was detected
```
#### HealthScore
```python
@dataclass
class HealthScore:
component: str # Component name
score: float # Health score 0.0 to 1.0
status: HealthStatus # Overall health status
contributing_factors: List[str] # Factors that influenced the score
trend: str # improving|stable|degrading
last_updated: float # Timestamp of last update
```
#### DiagnosticResult
```python
@dataclass
class DiagnosticResult:
component: str # Component being diagnosed
diagnosis: str # Diagnostic conclusion
confidence: float # Confidence in diagnosis (0.0 to 1.0)
symptoms: List[str] # Observed symptoms
root_cause: Optional[str] # Identified root cause
recommendations: List[str] # Recommended actions
predicted_impact: str # Expected impact if not addressed
timeline: str # Timeline for resolution
```
## Core Validation Engine
### YAMLValidationEngine
```python
class YAMLValidationEngine:
"""
YAML-driven validation engine that consumes intelligence patterns.
Features:
- Hot-reloadable YAML validation patterns
- Component-based health scoring
- Proactive diagnostic pattern matching
- Predictive health analysis
- Intelligent remediation suggestions
"""
def __init__(self, framework_root: Path, fix_issues: bool = False):
self.framework_root = Path(framework_root)
self.fix_issues = fix_issues
self.cache_dir = self.framework_root / "cache"
self.config_dir = self.framework_root / "config"
# Initialize intelligence engine for YAML patterns
self.intelligence_engine = IntelligenceEngine()
# Validation state
self.issues: List[ValidationIssue] = []
self.fixes_applied: List[str] = []
self.health_scores: Dict[str, HealthScore] = {}
self.diagnostic_results: List[DiagnosticResult] = []
# Load validation intelligence patterns
self.validation_patterns = self._load_validation_patterns()
```
## System Context Gathering
### _gather_system_context()
```python
def _gather_system_context(self) -> Dict[str, Any]:
"""Gather current system context for validation analysis."""
context = {
'timestamp': time.time(),
'framework_root': str(self.framework_root),
'cache_directory_exists': self.cache_dir.exists(),
'config_directory_exists': self.config_dir.exists(),
}
# Learning system context
learning_records_path = self.cache_dir / "learning_records.json"
if learning_records_path.exists():
try:
with open(learning_records_path, 'r') as f:
records = json.load(f)
context['learning_records_count'] = len(records)
if records:
context['recent_learning_activity'] = len([
r for r in records
if r.get('timestamp', 0) > time.time() - 86400 # Last 24h
])
        except (OSError, json.JSONDecodeError):
            # Unreadable or corrupt records file: fall back to empty-state defaults
            context['learning_records_count'] = 0
            context['recent_learning_activity'] = 0
# Adaptations context
adaptations_path = self.cache_dir / "adaptations.json"
if adaptations_path.exists():
try:
with open(adaptations_path, 'r') as f:
adaptations = json.load(f)
context['adaptations_count'] = len(adaptations)
# Calculate effectiveness statistics
all_effectiveness = []
for adaptation in adaptations.values():
history = adaptation.get('effectiveness_history', [])
all_effectiveness.extend(history)
if all_effectiveness:
context['average_effectiveness'] = statistics.mean(all_effectiveness)
context['effectiveness_variance'] = statistics.variance(all_effectiveness) if len(all_effectiveness) > 1 else 0
context['perfect_score_count'] = sum(1 for score in all_effectiveness if score == 1.0)
        except (OSError, json.JSONDecodeError):
            context['adaptations_count'] = 0
# Configuration files context
yaml_files = list(self.config_dir.glob("*.yaml")) if self.config_dir.exists() else []
context['yaml_config_count'] = len(yaml_files)
context['intelligence_patterns_available'] = len([
f for f in yaml_files
if f.name in ['intelligence_patterns.yaml', 'mcp_orchestration.yaml',
'hook_coordination.yaml', 'performance_intelligence.yaml',
'validation_intelligence.yaml', 'user_experience.yaml']
])
return context
```
## Component Validation Methods
### Learning System Validation
```python
def _validate_learning_system(self, context: Dict[str, Any], intelligence: Dict[str, Any]):
"""Validate learning system using YAML patterns."""
print("📊 Validating learning system...")
component_weight = self.validation_patterns.get('component_weights', {}).get('learning_system', 0.25)
scoring_metrics = self.validation_patterns.get('scoring_metrics', {}).get('learning_system', {})
issues = []
score_factors = []
# Pattern diversity validation
adaptations_count = context.get('adaptations_count', 0)
if adaptations_count > 0:
# Simplified diversity calculation
diversity_score = min(adaptations_count / 50.0, 0.95) # Cap at 0.95
pattern_diversity_config = scoring_metrics.get('pattern_diversity', {})
healthy_range = pattern_diversity_config.get('healthy_range', [0.6, 0.95])
if diversity_score < healthy_range[0]:
issues.append(ValidationIssue(
component="learning_system",
issue_type="pattern_diversity",
severity=ValidationSeverity.MEDIUM,
description=f"Pattern diversity low: {diversity_score:.2f}",
evidence=[f"Only {adaptations_count} unique patterns learned"],
recommendations=["Expose system to more diverse operational patterns"]
))
score_factors.append(diversity_score)
# Effectiveness consistency validation
effectiveness_variance = context.get('effectiveness_variance', 0)
if effectiveness_variance is not None:
consistency_score = max(0, 1.0 - effectiveness_variance)
effectiveness_config = scoring_metrics.get('effectiveness_consistency', {})
healthy_range = effectiveness_config.get('healthy_range', [0.7, 0.9])
if consistency_score < healthy_range[0]:
issues.append(ValidationIssue(
component="learning_system",
issue_type="effectiveness_consistency",
severity=ValidationSeverity.LOW,
description=f"Effectiveness variance high: {effectiveness_variance:.3f}",
evidence=[f"Effectiveness consistency score: {consistency_score:.2f}"],
recommendations=["Review learning patterns for instability"]
))
score_factors.append(consistency_score)
# Calculate health score
component_health = statistics.mean(score_factors) if score_factors else 0.5
health_status = (
HealthStatus.HEALTHY if component_health >= 0.8 else
HealthStatus.WARNING if component_health >= 0.6 else
HealthStatus.CRITICAL
)
self.health_scores['learning_system'] = HealthScore(
component='learning_system',
score=component_health,
status=health_status,
contributing_factors=["pattern_diversity", "effectiveness_consistency"],
trend="stable" # Would need historical data to determine trend
)
self.issues.extend(issues)
```
### Configuration System Validation
```python
def _validate_configuration_system(self, context: Dict[str, Any], intelligence: Dict[str, Any]):
"""Validate configuration system using YAML patterns."""
print("📝 Validating configuration system...")
issues = []
score_factors = []
# Check YAML configuration files
expected_intelligence_files = [
'intelligence_patterns.yaml',
'mcp_orchestration.yaml',
'hook_coordination.yaml',
'performance_intelligence.yaml',
'validation_intelligence.yaml',
'user_experience.yaml'
]
available_files = [f.name for f in self.config_dir.glob("*.yaml")] if self.config_dir.exists() else []
missing_files = [f for f in expected_intelligence_files if f not in available_files]
if missing_files:
issues.append(ValidationIssue(
component="configuration_system",
issue_type="missing_intelligence_configs",
severity=ValidationSeverity.HIGH,
description=f"Missing {len(missing_files)} intelligence configuration files",
evidence=[f"Missing files: {', '.join(missing_files)}"],
recommendations=["Ensure all intelligence pattern files are available"]
))
score_factors.append(0.5)
else:
score_factors.append(0.9)
# Validate YAML syntax
yaml_issues = 0
if self.config_dir.exists():
for yaml_file in self.config_dir.glob("*.yaml"):
try:
config_loader.load_config(yaml_file.stem)  # loading via the config loader validates syntax
except Exception as e:
yaml_issues += 1
issues.append(ValidationIssue(
component="configuration_system",
issue_type="yaml_syntax_error",
severity=ValidationSeverity.HIGH,
description=f"YAML syntax error in {yaml_file.name}",
evidence=[f"Error: {str(e)}"],
recommendations=[f"Fix YAML syntax in {yaml_file.name}"]
))
syntax_score = max(0, 1.0 - yaml_issues * 0.2)
score_factors.append(syntax_score)
overall_score = statistics.mean(score_factors) if score_factors else 0.5
self.health_scores['configuration_system'] = HealthScore(
component='configuration_system',
score=overall_score,
status=HealthStatus.HEALTHY if overall_score >= 0.8 else
HealthStatus.WARNING if overall_score >= 0.6 else
HealthStatus.CRITICAL,
contributing_factors=["file_availability", "yaml_syntax", "intelligence_patterns"],
trend="stable"
)
self.issues.extend(issues)
```
## Proactive Diagnostics
### _run_proactive_diagnostics()
```python
def _run_proactive_diagnostics(self, context: Dict[str, Any]):
"""Run proactive diagnostic pattern matching from YAML."""
print("🔮 Running proactive diagnostics...")
# Get early warning patterns from YAML
early_warning_patterns = self.validation_patterns.get(
'proactive_diagnostics', {}
).get('early_warning_patterns', {})
# Check learning system warnings
learning_warnings = early_warning_patterns.get('learning_system_warnings', [])
for warning_pattern in learning_warnings:
if self._matches_warning_pattern(context, warning_pattern):
severity_map = {
'low': ValidationSeverity.LOW,
'medium': ValidationSeverity.MEDIUM,
'high': ValidationSeverity.HIGH,
'critical': ValidationSeverity.CRITICAL
}
self.issues.append(ValidationIssue(
component="learning_system",
issue_type=warning_pattern.get('name', 'unknown_warning'),
severity=severity_map.get(warning_pattern.get('severity', 'medium'), ValidationSeverity.MEDIUM),
description=f"Proactive warning: {warning_pattern.get('name')}",
evidence=[f"Pattern matched: {warning_pattern.get('pattern', {})}"],
recommendations=[warning_pattern.get('recommendation', 'Review system state')],
remediation_action=warning_pattern.get('remediation')
))
```
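The helper `_matches_warning_pattern()` is referenced above but not shown in this document. A minimal sketch, assuming each warning pattern carries a `pattern` mapping of context keys to `min`/`max` threshold bounds (the key names and threshold shape are assumptions, not the actual implementation):

```python
from typing import Any, Dict

def matches_warning_pattern(context: Dict[str, Any], warning_pattern: Dict[str, Any]) -> bool:
    """Return True when every thresholded context value falls inside the pattern's bounds."""
    pattern = warning_pattern.get('pattern', {})
    if not pattern:
        return False
    for key, bounds in pattern.items():
        value = context.get(key)
        if value is None:
            return False  # missing context data never triggers a warning
        if 'min' in bounds and value < bounds['min']:
            return False
        if 'max' in bounds and value > bounds['max']:
            return False
    return True

# Hypothetical pattern that fires when few adaptations have been learned
low_learning = {
    'name': 'stagnant_learning',
    'severity': 'medium',
    'pattern': {'adaptations_count': {'max': 5}},
}
print(matches_warning_pattern({'adaptations_count': 3}, low_learning))   # True
print(matches_warning_pattern({'adaptations_count': 40}, low_learning))  # False
```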
## Health Score Calculation
### _calculate_overall_health_score()
```python
def _calculate_overall_health_score(self):
"""Calculate overall system health score using YAML component weights."""
component_weights = self.validation_patterns.get('component_weights', {
'learning_system': 0.25,
'performance_system': 0.20,
'mcp_coordination': 0.20,
'hook_system': 0.15,
'configuration_system': 0.10,
'cache_system': 0.10
})
weighted_score = 0.0
total_weight = 0.0
for component, weight in component_weights.items():
if component in self.health_scores:
weighted_score += self.health_scores[component].score * weight
total_weight += weight
overall_score = weighted_score / total_weight if total_weight > 0 else 0.0
overall_status = (
HealthStatus.HEALTHY if overall_score >= 0.8 else
HealthStatus.WARNING if overall_score >= 0.6 else
HealthStatus.CRITICAL
)
self.health_scores['overall'] = HealthScore(
component='overall_system',
score=overall_score,
status=overall_status,
contributing_factors=list(component_weights.keys()),
trend="stable"
)
```
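The weighted average above can be checked by hand. With hypothetical scores for just two of the six weighted components (missing components simply drop out of the average, since both the numerator and the weight total shrink):

```python
# Illustrative values, not measurements from a real run
component_weights = {'learning_system': 0.25, 'performance_system': 0.20}
health_scores = {'learning_system': 0.9, 'performance_system': 0.7}

weighted_score = sum(health_scores[c] * w for c, w in component_weights.items())
total_weight = sum(component_weights.values())
overall = weighted_score / total_weight  # (0.9*0.25 + 0.7*0.20) / 0.45 ≈ 0.811

status = ('healthy' if overall >= 0.8 else
          'warning' if overall >= 0.6 else
          'critical')
print(f"{overall:.3f} -> {status}")  # 0.811 -> healthy
```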
## Automated Remediation
### _generate_remediation_suggestions()
```python
def _generate_remediation_suggestions(self):
"""Generate intelligent remediation suggestions based on issues found."""
auto_fixable_issues = [issue for issue in self.issues if issue.auto_fixable]
if auto_fixable_issues and self.fix_issues:
for issue in auto_fixable_issues:
if issue.remediation_action == "create_cache_directory":
try:
self.cache_dir.mkdir(parents=True, exist_ok=True)
self.fixes_applied.append(f"✅ Created cache directory: {self.cache_dir}")
except Exception as e:
print(f"Failed to create cache directory: {e}")
```
## Main Validation Interface
### validate_all()
```python
def validate_all(self) -> Tuple[List[ValidationIssue], List[str], Dict[str, HealthScore]]:
"""
Run comprehensive YAML-driven validation.
Returns:
Tuple of (issues, fixes_applied, health_scores)
"""
print("🔍 Starting YAML-driven framework validation...")
# Clear previous state
self.issues.clear()
self.fixes_applied.clear()
self.health_scores.clear()
self.diagnostic_results.clear()
# Get current system context
context = self._gather_system_context()
# Run validation intelligence analysis
validation_intelligence = self.intelligence_engine.evaluate_context(
context, 'validation_intelligence'
)
# Core component validations using YAML patterns
self._validate_learning_system(context, validation_intelligence)
self._validate_performance_system(context, validation_intelligence)
self._validate_mcp_coordination(context, validation_intelligence)
self._validate_hook_system(context, validation_intelligence)
self._validate_configuration_system(context, validation_intelligence)
self._validate_cache_system(context, validation_intelligence)
# Run proactive diagnostics
self._run_proactive_diagnostics(context)
# Calculate overall health score
self._calculate_overall_health_score()
# Generate remediation recommendations
self._generate_remediation_suggestions()
return self.issues, self.fixes_applied, self.health_scores
```
## Results Reporting
### print_results()
```python
def print_results(self, verbose: bool = False):
"""Print comprehensive validation results."""
print("\n" + "="*70)
print("🎯 YAML-DRIVEN VALIDATION RESULTS")
print("="*70)
# Overall health score
overall_health = self.health_scores.get('overall')
if overall_health:
status_emoji = {
HealthStatus.HEALTHY: "🟢",
HealthStatus.WARNING: "🟡",
HealthStatus.CRITICAL: "🔴",
HealthStatus.UNKNOWN: "⚪"
}
print(f"\n{status_emoji.get(overall_health.status, '')} Overall Health Score: {overall_health.score:.2f}/1.0 ({overall_health.status.value})")
# Component health scores
if verbose and len(self.health_scores) > 1:
print(f"\n📊 Component Health Scores:")
for component, health in self.health_scores.items():
if component != 'overall':
status_emoji = {
HealthStatus.HEALTHY: "🟢",
HealthStatus.WARNING: "🟡",
HealthStatus.CRITICAL: "🔴"
}
print(f" {status_emoji.get(health.status, '')} {component}: {health.score:.2f}")
# Issues found
if not self.issues:
print("\n✅ All validations passed! System appears healthy.")
else:
severity_counts = {}
for issue in self.issues:
severity_counts[issue.severity] = severity_counts.get(issue.severity, 0) + 1
print(f"\n🔍 Found {len(self.issues)} issues:")
for severity in [ValidationSeverity.CRITICAL, ValidationSeverity.HIGH,
ValidationSeverity.MEDIUM, ValidationSeverity.LOW, ValidationSeverity.INFO]:
if severity in severity_counts:
severity_emoji = {
ValidationSeverity.CRITICAL: "🚨",
ValidationSeverity.HIGH: "⚠️ ",
ValidationSeverity.MEDIUM: "🟡",
ValidationSeverity.LOW: " ",
ValidationSeverity.INFO: "💡"
}
print(f" {severity_emoji.get(severity, '')} {severity.value.title()}: {severity_counts[severity]}")
```
## CLI Interface
### main()
```python
def main():
"""Main entry point for YAML-driven validation."""
parser = argparse.ArgumentParser(
description="YAML-driven Framework-Hooks validation engine"
)
parser.add_argument("--fix", action="store_true",
help="Attempt to fix auto-fixable issues")
parser.add_argument("--verbose", action="store_true",
help="Verbose output with detailed results")
parser.add_argument("--framework-root",
default=".",
help="Path to Framework-Hooks directory")
args = parser.parse_args()
framework_root = Path(args.framework_root).resolve()
if not framework_root.exists():
print(f"❌ Framework root directory not found: {framework_root}")
sys.exit(1)
# Initialize YAML-driven validation engine
validator = YAMLValidationEngine(framework_root, args.fix)
# Run comprehensive validation
issues, fixes, health_scores = validator.validate_all()
# Print results
validator.print_results(args.verbose)
# Exit with health score as return code (0 = perfect, higher = issues)
overall_health = health_scores.get('overall')
health_score = overall_health.score if overall_health else 0.0
exit_code = max(0, min(10, int((1.0 - health_score) * 10))) # 0-10 range
sys.exit(exit_code)
```
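The exit-code mapping compresses the 0.0-1.0 health score into the 0-10 range shells expect, so callers can treat any non-zero status as degraded health. A quick check of the arithmetic:

```python
def health_to_exit_code(health_score: float) -> int:
    """Map a 0.0-1.0 health score to a 0-10 exit code (0 = perfect health)."""
    return max(0, min(10, int((1.0 - health_score) * 10)))

print(health_to_exit_code(1.0))   # 0: perfectly healthy
print(health_to_exit_code(0.75))  # 2: minor issues
print(health_to_exit_code(0.0))   # 10: critical
```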
## Performance Characteristics
### Operation Timings
- **System Context Gathering**: <50ms for comprehensive context analysis
- **Component Validation**: <100ms per component with full pattern matching
- **Proactive Diagnostics**: <25ms for early warning pattern evaluation
- **Health Score Calculation**: <10ms for weighted component scoring
- **Remediation Generation**: <15ms for intelligent suggestion generation
### Memory Efficiency
- **Validation State**: ~5-15KB for complete validation run
- **Health Scores**: ~200-500B per component score
- **Issue Storage**: ~500B-2KB per validation issue
- **Intelligence Cache**: Shared with IntelligenceEngine (~50KB)
### Quality Targets
- **Health Score Accuracy**: 95%+ correlation with actual system health
- **Issue Detection Rate**: 90%+ detection of actual system problems
- **False Positive Rate**: <5% for critical and high severity issues
- **Auto-Fix Success Rate**: 98%+ for auto-fixable issues
## Error Handling Strategies
### Validation Failures
- **Component Validation Errors**: Skip problematic components, log warnings, continue with others
- **Pattern Matching Failures**: Use fallback scoring, proceed with available data
- **Context Gathering Errors**: Use partial context, note missing information
### YAML Pattern Errors
- **Malformed Intelligence Patterns**: Skip invalid patterns, use defaults where possible
- **Missing Configuration**: Provide default component weights and thresholds
- **Permission Issues**: Log errors, continue with available patterns
### Auto-Fix Failures
- **Remediation Errors**: Log failures, provide manual remediation instructions
- **Permission Denied**: Skip auto-fixes, recommend manual intervention
- **Partial Fixes**: Apply successful fixes, report failures for manual resolution
## Dependencies and Relationships
### Internal Dependencies
- **intelligence_engine**: YAML pattern interpretation and hot-reload capability
- **yaml_loader**: Configuration loading for validation intelligence patterns
- **Standard Libraries**: os, json, time, statistics, sys, argparse, pathlib
### Framework Integration
- **validation_intelligence.yaml**: Consumes validation patterns and health scoring rules
- **System Health Monitoring**: Continuous validation with configurable thresholds
- **Proactive Diagnostics**: Early warning system for predictive issue detection
### Hook Coordination
- Provides system health validation for all hook operations
- Enables proactive health monitoring with intelligent diagnostics
- Supports automated remediation for common system issues
---
*This module provides comprehensive, intelligence-driven system validation that adapts to changing requirements through YAML configuration, enabling proactive health monitoring and automated remediation for the SuperClaude Framework-Hooks system.*

# yaml_loader.py - Unified Configuration Management System
## Overview
The `yaml_loader.py` module provides unified configuration loading with support for both JSON and YAML formats, featuring intelligent caching, hot-reload capabilities, and comprehensive error handling. It serves as the central configuration management system for all SuperClaude hooks, supporting Claude Code settings.json, SuperClaude superclaude-config.json, and YAML configuration files.
## Purpose and Responsibilities
### Primary Functions
- **Dual-Format Support**: JSON (Claude Code + SuperClaude) and YAML configuration handling
- **Intelligent Caching**: Sub-10ms configuration access with file modification detection
- **Hot-Reload Capability**: Automatic detection and reload of configuration changes
- **Environment Interpolation**: ${VAR} and ${VAR:default} syntax support for dynamic configuration
- **Modular Configuration**: Include/merge support for complex deployment scenarios
### Performance Characteristics
- **Sub-10ms Access**: Cached configuration retrieval for optimal hook performance
- **<50ms Reload**: Configuration file reload when changes detected
- **1-Second Check Interval**: Rate-limited file modification checks for efficiency
- **Comprehensive Error Handling**: Graceful degradation with fallback configurations
## Core Architecture
### UnifiedConfigLoader Class
```python
class UnifiedConfigLoader:
"""
Intelligent configuration loader with support for JSON and YAML formats.
Features:
- Dual-configuration support (Claude Code + SuperClaude)
- File modification detection for hot-reload
- In-memory caching for performance (<10ms access)
- Comprehensive error handling and validation
- Environment variable interpolation
- Include/merge support for modular configs
- Unified configuration interface
"""
```
### Configuration Source Registry
```python
def __init__(self, project_root: Union[str, Path]):
self.project_root = Path(project_root)
self.config_dir = self.project_root / "config"
# Configuration file paths
self.claude_settings_path = self.project_root / "settings.json"
self.superclaude_config_path = self.project_root / "superclaude-config.json"
# Configuration source registry
self._config_sources = {
'claude_settings': self.claude_settings_path,
'superclaude_config': self.superclaude_config_path
}
```
**Supported Configuration Sources**:
- **claude_settings**: Claude Code settings.json file
- **superclaude_config**: SuperClaude superclaude-config.json file
- **YAML Files**: config/*.yaml files for modular configuration
## Intelligent Caching System
### Cache Structure
```python
# Cache for all configuration sources
self._cache: Dict[str, Dict[str, Any]] = {}
self._file_hashes: Dict[str, str] = {}
self._last_check: Dict[str, float] = {}
self.check_interval = 1.0 # Check files every 1 second max
```
### Cache Validation
```python
def _should_use_cache(self, config_name: str, config_path: Path) -> bool:
if config_name not in self._cache:
return False
# Rate limit file checks
now = time.time()
if now - self._last_check.get(config_name, 0) < self.check_interval:
return True
# Check if file changed
current_hash = self._compute_hash(config_path)
return current_hash == self._file_hashes.get(config_name)
```
**Cache Invalidation Strategy**:
1. **Rate Limiting**: File checks limited to once per second per configuration
2. **Hash-Based Detection**: File modification detection using mtime and size hash
3. **Automatic Reload**: Cache invalidation triggers automatic configuration reload
4. **Memory Optimization**: Only cache active configurations to minimize memory usage
### File Change Detection
```python
def _compute_hash(self, file_path: Path) -> str:
"""Compute file hash for change detection."""
stat = file_path.stat()
return hashlib.md5(f"{stat.st_mtime}:{stat.st_size}".encode()).hexdigest()
```
**Hash Components**:
- **Modification Time**: File system mtime for change detection
- **File Size**: Content size changes for additional validation
- **MD5 Hash**: Combined hash for efficient comparison
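The mtime+size hash trades exactness for speed: it never reads file contents, so a rewrite that preserved both size and modification time would go undetected. A standalone demonstration of the detection behavior, reproducing `_compute_hash()` against a temporary file:

```python
import hashlib
import os
import tempfile
from pathlib import Path

def compute_hash(file_path: Path) -> str:
    """Same scheme as _compute_hash(): hash of mtime and size, not contents."""
    stat = file_path.stat()
    return hashlib.md5(f"{stat.st_mtime}:{stat.st_size}".encode()).hexdigest()

with tempfile.TemporaryDirectory() as tmp:
    cfg = Path(tmp) / "demo.yaml"
    cfg.write_text("level: INFO\n")
    before = compute_hash(cfg)

    cfg.write_text("level: DEBUG\n")          # content (and size) change
    os.utime(cfg, (1700000000, 1700000000))   # force a distinct mtime
    after = compute_hash(cfg)

    print(before != after)  # True: the cached config would be reloaded
```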
## Configuration Loading Interface
### Primary Loading Method
```python
def load_config(self, config_name: str, force_reload: bool = False) -> Dict[str, Any]:
"""
Load configuration with intelligent caching (supports JSON and YAML).
Args:
config_name: Name of config file or special config identifier
- For YAML: config file name without .yaml extension
- For JSON: 'claude_settings' or 'superclaude_config'
force_reload: Force reload even if cached
Returns:
Parsed configuration dictionary
Raises:
FileNotFoundError: If config file doesn't exist
ValueError: If config parsing fails
"""
```
**Loading Logic**:
1. **Source Identification**: Determine if request is for JSON or YAML configuration
2. **Cache Validation**: Check if cached version is still valid
3. **File Loading**: Read and parse configuration file if reload needed
4. **Environment Interpolation**: Process ${VAR} and ${VAR:default} syntax
5. **Include Processing**: Handle __include__ directives for modular configuration
6. **Cache Update**: Store parsed configuration with metadata
### Specialized Access Methods
#### Section Access with Dot Notation
```python
def get_section(self, config_name: str, section_path: str, default: Any = None) -> Any:
"""
Get specific section from configuration using dot notation.
Args:
config_name: Configuration file name or identifier
section_path: Dot-separated path (e.g., 'routing.ui_components')
default: Default value if section not found
Returns:
Configuration section value or default
"""
config = self.load_config(config_name)
try:
result = config
for key in section_path.split('.'):
result = result[key]
return result
except (KeyError, TypeError):
return default
```
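The dot-notation walk is just repeated dictionary indexing with a catch-all fallback. The same traversal can be shown standalone (the example config keys are illustrative only):

```python
from typing import Any

def get_path(config: dict, section_path: str, default: Any = None) -> Any:
    """Walk a nested dict along a dot-separated path, falling back on any miss."""
    result = config
    try:
        for key in section_path.split('.'):
            result = result[key]
        return result
    except (KeyError, TypeError):
        return default

config = {'routing': {'ui_components': ['magic'], 'timeout_ms': 200}}
print(get_path(config, 'routing.timeout_ms'))           # 200
print(get_path(config, 'routing.missing.deep', 'n/a'))  # 'n/a': missing key falls back
```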
#### Hook-Specific Configuration
```python
def get_hook_config(self, hook_name: str, section_path: str = None, default: Any = None) -> Any:
"""
Get hook-specific configuration from SuperClaude config.
Args:
hook_name: Hook name (e.g., 'session_start', 'pre_tool_use')
section_path: Optional dot-separated path within hook config
default: Default value if not found
Returns:
Hook configuration or specific section
"""
base_path = f"hook_configurations.{hook_name}"
if section_path:
full_path = f"{base_path}.{section_path}"
else:
full_path = base_path
return self.get_section('superclaude_config', full_path, default)
```
#### Claude Code Integration
```python
def get_claude_hooks(self) -> Dict[str, Any]:
"""Get Claude Code hook definitions from settings.json."""
return self.get_section('claude_settings', 'hooks', {})
def get_superclaude_config(self, section_path: str = None, default: Any = None) -> Any:
"""Get SuperClaude framework configuration."""
if section_path:
return self.get_section('superclaude_config', section_path, default)
else:
return self.load_config('superclaude_config')
```
#### MCP Server Configuration
```python
def get_mcp_server_config(self, server_name: str = None) -> Dict[str, Any]:
"""Get MCP server configuration."""
if server_name:
return self.get_section('superclaude_config', f'mcp_server_integration.servers.{server_name}', {})
else:
return self.get_section('superclaude_config', 'mcp_server_integration', {})
def get_performance_targets(self) -> Dict[str, Any]:
"""Get performance targets for all components."""
return self.get_section('superclaude_config', 'global_configuration.performance_monitoring', {})
```
## Environment Variable Interpolation
### Interpolation Processing
```python
def _interpolate_env_vars(self, content: str) -> str:
"""Replace environment variables in YAML content."""
import re
def replace_env_var(match):
var_name = match.group(1)
default_value = match.group(2) if match.group(2) else ""
return os.getenv(var_name, default_value)
# Support ${VAR} and ${VAR:default} syntax
pattern = r'\$\{([^}:]+)(?::([^}]*))?\}'
return re.sub(pattern, replace_env_var, content)
```
**Supported Syntax**:
- **${VAR_NAME}**: Replace with environment variable value or empty string
- **${VAR_NAME:default_value}**: Replace with environment variable or default value
- **Nested Variables**: Support for complex environment variable combinations
### Usage Examples
```yaml
# Configuration with environment interpolation
database:
host: ${DB_HOST:localhost}
port: ${DB_PORT:5432}
username: ${DB_USER}
password: ${DB_PASS:}
logging:
level: ${LOG_LEVEL:INFO}
directory: ${LOG_DIR:./logs}
```
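The interpolation regex can be exercised directly. This standalone sketch reproduces `_interpolate_env_vars()` and shows both syntaxes against a controlled environment (the `DEMO_*` variable names are made up for the demonstration):

```python
import os
import re

def interpolate_env_vars(content: str) -> str:
    """Replace ${VAR} and ${VAR:default} references with environment values."""
    def replace_env_var(match):
        var_name = match.group(1)
        default_value = match.group(2) if match.group(2) else ""
        return os.getenv(var_name, default_value)
    return re.sub(r'\$\{([^}:]+)(?::([^}]*))?\}', replace_env_var, content)

os.environ['DEMO_DB_HOST'] = 'db.internal'   # simulate a deployment variable
os.environ.pop('DEMO_DB_PORT', None)         # ensure the default path is taken
os.environ.pop('DEMO_DB_USER', None)         # unset and no default -> empty string

print(interpolate_env_vars("host: ${DEMO_DB_HOST:localhost}"))  # host: db.internal
print(interpolate_env_vars("port: ${DEMO_DB_PORT:5432}"))       # port: 5432
print(interpolate_env_vars("user: ${DEMO_DB_USER}"))            # user: (empty value)
```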
## Modular Configuration Support
### Include Directive Processing
```python
def _process_includes(self, config: Dict[str, Any], base_dir: Path) -> Dict[str, Any]:
"""Process include directives in configuration."""
if not isinstance(config, dict):
return config
# Handle special include key
if '__include__' in config:
includes = config.pop('__include__')
if isinstance(includes, str):
includes = [includes]
for include_file in includes:
include_path = base_dir / include_file
if include_path.exists():
with open(include_path, 'r', encoding='utf-8') as f:
included_config = yaml.safe_load(f.read())
if isinstance(included_config, dict):
# Merge included config (current config takes precedence)
included_config.update(config)
config = included_config
return config
```
### Modular Configuration Example
```yaml
# main.yaml
__include__:
- "common/logging.yaml"
- "environments/production.yaml"
application:
name: "SuperClaude Hooks"
version: "1.0.0"
# Override included values
logging:
level: "DEBUG" # Overrides value from logging.yaml
```
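The merge direction matters: `included_config.update(config)` copies the including file's keys over the included ones, so the current file always wins. A minimal in-memory check of that precedence (no files involved, dictionaries stand in for the parsed YAML):

```python
# Stand-ins for parsed YAML: the included file and the including file
included_config = {'logging': {'level': 'INFO'}, 'retention_days': 30}        # logging.yaml
config = {'logging': {'level': 'DEBUG'}, 'application': 'SuperClaude Hooks'}  # main.yaml

included_config.update(config)  # current config takes precedence
config = included_config

print(config['logging'])         # {'level': 'DEBUG'} -- override wins
print(config['retention_days'])  # 30 -- inherited from the include
```

Note the merge is shallow: the including file's entire `logging` mapping replaces the included one rather than being merged key by key, so sibling keys under an overridden section are lost.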
## JSON Configuration Support
### JSON Loading with Error Handling
```python
def _load_json_config(self, config_name: str, force_reload: bool = False) -> Dict[str, Any]:
"""Load JSON configuration file."""
config_path = self._config_sources[config_name]
if not config_path.exists():
raise FileNotFoundError(f"Configuration file not found: {config_path}")
# Check if we need to reload
if not force_reload and self._should_use_cache(config_name, config_path):
return self._cache[config_name]
# Load and parse the JSON configuration
try:
with open(config_path, 'r', encoding='utf-8') as f:
content = f.read()
# Environment variable interpolation
content = self._interpolate_env_vars(content)
# Parse JSON
config = json.loads(content)
# Update cache
self._cache[config_name] = config
self._file_hashes[config_name] = self._compute_hash(config_path)
self._last_check[config_name] = time.time()
return config
except json.JSONDecodeError as e:
raise ValueError(f"JSON parsing error in {config_path}: {e}")
except Exception as e:
raise RuntimeError(f"Error loading JSON config {config_name}: {e}")
```
**JSON Support Features**:
- **Environment Interpolation**: ${VAR} syntax support in JSON files
- **Error Handling**: Comprehensive JSON parsing error messages
- **Cache Integration**: Same caching behavior as YAML configurations
- **Encoding Support**: UTF-8 encoding for international character support
## Configuration Validation and Error Handling
### Error Handling Strategy
```python
def load_config(self, config_name: str, force_reload: bool = False) -> Dict[str, Any]:
try:
# Configuration loading logic
pass
except yaml.YAMLError as e:
raise ValueError(f"YAML parsing error in {config_path}: {e}")
except json.JSONDecodeError as e:
raise ValueError(f"JSON parsing error in {config_path}: {e}")
except FileNotFoundError as e:
raise FileNotFoundError(f"Configuration file not found: {config_path}")
except Exception as e:
raise RuntimeError(f"Error loading config {config_name}: {e}")
```
**Error Categories**:
- **File Not Found**: Configuration file missing or inaccessible
- **Parsing Errors**: YAML or JSON syntax errors with detailed messages
- **Permission Errors**: File system permission issues
- **General Errors**: Unexpected errors with full context
### Graceful Degradation
```python
def get_section(self, config_name: str, section_path: str, default: Any = None) -> Any:
try:
result = config
for key in section_path.split('.'):
result = result[key]
return result
except (KeyError, TypeError):
return default # Graceful fallback to default value
```
## Performance Optimization
### Cache Reload Management
```python
def reload_all(self) -> None:
"""Force reload of all cached configurations."""
for config_name in list(self._cache.keys()):
self.load_config(config_name, force_reload=True)
```
### Hook Status Checking
```python
def is_hook_enabled(self, hook_name: str) -> bool:
"""Check if a specific hook is enabled."""
return self.get_hook_config(hook_name, 'enabled', False)
```
**Performance Optimizations**:
- **Selective Reloading**: Only reload changed configurations
- **Rate-Limited Checks**: File modification checks limited to once per second
- **Memory Efficient**: Cache only active configurations
- **Batch Operations**: Multiple configuration accesses use cached versions
## Integration with Hooks
### Global Instance
```python
# Global instance for shared use across hooks
config_loader = UnifiedConfigLoader(".")
```
### Hook Usage Pattern
```python
from shared.yaml_loader import config_loader
# Load hook-specific configuration
hook_config = config_loader.get_hook_config('pre_tool_use')
performance_target = config_loader.get_hook_config('pre_tool_use', 'performance_target_ms', 200)
# Load MCP server configuration
mcp_config = config_loader.get_mcp_server_config('sequential')
all_mcp_servers = config_loader.get_mcp_server_config()
# Load global performance targets
performance_targets = config_loader.get_performance_targets()
# Check if hook is enabled
if config_loader.is_hook_enabled('pre_tool_use'):
# Execute hook logic
pass
```
### Configuration Structure Examples
#### SuperClaude Configuration (superclaude-config.json)
```json
{
"hook_configurations": {
"session_start": {
"enabled": true,
"performance_target_ms": 50,
"initialization_timeout_ms": 1000
},
"pre_tool_use": {
"enabled": true,
"performance_target_ms": 200,
"pattern_detection_enabled": true,
"mcp_intelligence_enabled": true
}
},
"mcp_server_integration": {
"servers": {
"sequential": {
"enabled": true,
"activation_cost_ms": 200,
"performance_profile": "intensive"
},
"context7": {
"enabled": true,
"activation_cost_ms": 150,
"performance_profile": "standard"
}
}
},
"global_configuration": {
"performance_monitoring": {
"enabled": true,
"target_percentile": 95,
"alert_threshold_ms": 500
}
}
}
```
#### YAML Configuration (config/logging.yaml)
```yaml
logging:
enabled: true
level: ${LOG_LEVEL:INFO}
file_settings:
log_directory: ${LOG_DIR:cache/logs}
retention_days: ${LOG_RETENTION:30}
max_file_size_mb: 10
hook_logging:
log_lifecycle: true
log_decisions: true
log_errors: true
log_performance: true
# Include common configuration
__include__:
- "common/base.yaml"
```
## Performance Characteristics
### Access Performance
- **Cached Access**: <10ms average for configuration retrieval
- **Initial Load**: <50ms for typical configuration files
- **Hot Reload**: <75ms for configuration file changes
- **Bulk Access**: <5ms per additional section access from cached config
### Memory Efficiency
- **Configuration Cache**: ~1-5KB per cached configuration file
- **File Hash Cache**: ~50B per tracked configuration file
- **Include Processing**: Dynamic memory usage based on included file sizes
- **Memory Cleanup**: Automatic cleanup of unused cached configurations
### File System Optimization
- **Rate-Limited Checks**: Maximum one file system check per second per configuration
- **Efficient Hashing**: mtime + size based change detection
- **Batch Processing**: Multiple configuration accesses use single file check
- **Error Caching**: Failed configuration loads are cached to prevent repeated failures
## Error Handling and Recovery
### Configuration Loading Failures
```python
# Graceful degradation for missing configurations
try:
config = config_loader.load_config('optional_config')
except FileNotFoundError:
config = {} # Use empty configuration
except ValueError as e:
logger.log_error("config_loader", f"Configuration parsing failed: {e}")
config = {} # Use empty configuration with error logging
```
### Cache Corruption Recovery
- **Hash Mismatch**: Automatic cache invalidation and reload
- **Memory Corruption**: Cache clearing and fresh reload
- **File Permission Changes**: Graceful fallback to default values
- **Network File System Issues**: Retry logic with exponential backoff
### Environment Variable Issues
- **Missing Variables**: Use default values or empty strings as specified
- **Invalid Syntax**: Log warning and use literal value
- **Circular References**: Detection and prevention of infinite loops
## Configuration Best Practices
### File Organization
```
project_root/
├── settings.json # Claude Code settings
├── superclaude-config.json # SuperClaude framework config
└── config/
├── logging.yaml # Logging configuration
├── orchestrator.yaml # MCP server routing
├── modes.yaml # Mode detection patterns
└── common/
└── base.yaml # Shared configuration elements
```
### Configuration Conventions
- **JSON for Integration**: Use JSON for Claude Code and SuperClaude integration configs
- **YAML for Modularity**: Use YAML for complex, hierarchical configurations
- **Environment Variables**: Use ${VAR} syntax for deployment-specific values
- **Include Files**: Use __include__ for shared configuration elements
## Usage Examples
### Basic Configuration Loading
```python
from shared.yaml_loader import config_loader
# Load hook configuration
hook_config = config_loader.get_hook_config('pre_tool_use')
print(f"Hook enabled: {hook_config.get('enabled', False)}")
print(f"Performance target: {hook_config.get('performance_target_ms', 200)}ms")
# Load MCP server configuration
sequential_config = config_loader.get_mcp_server_config('sequential')
print(f"Sequential activation cost: {sequential_config.get('activation_cost_ms', 200)}ms")
```
### Advanced Configuration Access
```python
# Get nested configuration with dot notation
logging_level = config_loader.get_section('logging', 'file_settings.log_level', 'INFO')
performance_target = config_loader.get_section('superclaude_config', 'hook_configurations.pre_tool_use.performance_target_ms', 200)
# Check hook status
if config_loader.is_hook_enabled('mcp_intelligence'):
# Initialize MCP intelligence
pass
# Force reload all configurations
config_loader.reload_all()
```
### Environment Variable Integration
```python
# Configuration automatically processes environment variables
# In config/database.yaml:
# database:
# host: ${DB_HOST:localhost}
# port: ${DB_PORT:5432}
db_config = config_loader.get_section('database', 'host') # Uses DB_HOST env var or 'localhost'
```
## Dependencies and Relationships
### Internal Dependencies
- **Standard Libraries**: os, json, yaml, time, hashlib, pathlib, re
- **No External Dependencies**: Self-contained configuration management system
### Framework Integration
- **Hook Configuration**: Centralized configuration for all 7 SuperClaude hooks
- **MCP Server Integration**: Configuration management for MCP server coordination
- **Performance Monitoring**: Configuration-driven performance target management
### Global Availability
- **Shared Instance**: config_loader global instance available to all hooks
- **Consistent Interface**: Standardized configuration access across all modules
- **Hot-Reload Support**: Dynamic configuration updates without hook restart
---
*This module serves as the foundational configuration management system for the entire SuperClaude framework, providing high-performance, flexible, and reliable configuration loading with comprehensive error handling and hot-reload capabilities.*

# Framework-Hooks System Overview
## System Architecture
The Framework-Hooks system provides lifecycle hooks for Claude Code that implement SuperClaude framework patterns. The system consists of:
### Core Components
1. **Lifecycle Hooks** - 7 Python modules (session_start.py, pre_tool_use.py, post_tool_use.py, pre_compact.py, notification.py, stop.py, subagent_stop.py)
2. **Shared Modules** - 9 Python modules providing shared functionality (framework_logic.py, pattern_detection.py, mcp_intelligence.py, learning_engine.py, compression_engine.py, intelligence_engine.py, validate_system.py, yaml_loader.py, logger.py)
3. **Configuration System** - 19 YAML files defining behavior and settings
4. **Pattern System** - YAML pattern files in minimal/, dynamic/, and learned/ directories
### Architecture Layers
```
┌─────────────────────────────────────────┐
│ Claude Code Interface │
├─────────────────────────────────────────┤
│ Lifecycle Hooks │
│ ┌─────┬─────┬─────┬─────┬─────┬─────┐ │
│ │Start│Pre │Post │Pre │Notif│Stop │ │
│ │ │Tool │Tool │Comp │ │ │ │
│ └─────┴─────┴─────┴─────┴─────┴─────┘ │
├─────────────────────────────────────────┤
│ Shared Intelligence │
│ ┌────────────┬─────────────┬─────────┐ │
│ │ Framework │ Pattern │Learning │ │
│ │ Logic │ Detection │Engine │ │
│ └────────────┴─────────────┴─────────┘ │
├─────────────────────────────────────────┤
│ YAML Configuration │
│ ┌─────────────┬────────────┬─────────┐ │
│ │Performance │Modes │Logging │ │
│ │Targets │Config │Config │ │
│ └─────────────┴────────────┴─────────┘ │
└─────────────────────────────────────────┘
```
## Purpose
The Framework-Hooks system implements the SuperClaude framework through lifecycle hooks that run during Claude Code execution.
### Implementation Features
1. **Session Management** - Implements session lifecycle patterns from SESSION_LIFECYCLE.md
2. **Mode Detection** - Activates SuperClaude modes (brainstorming, task management, token efficiency, introspection) based on user input patterns
3. **MCP Server Routing** - Routes operations to appropriate MCP servers (Context7, Sequential, Magic, Playwright, Morphllm, Serena)
4. **Configuration Management** - Loads settings from YAML files to customize behavior
5. **Pattern Recognition** - Detects project types and operation patterns to apply appropriate configurations
### Design Goals
- **Framework Compliance**: Implement SuperClaude patterns and principles
- **Configuration Flexibility**: YAML-driven behavior customization
- **Performance Targets**: 50ms session_start, 200ms pre_tool_use, etc. (as defined in performance.yaml)
- **Pattern-Based Operation**: Use project type and operation detection for intelligent behavior
## Pattern-Based Operation
The system uses pattern files to configure behavior based on detected project characteristics:
### Pattern Detection
```
User Request → Project Type Detection → Load Pattern Files → Apply Configuration → Execute
```
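The flow above can be sketched as a small dispatcher. The function names and the inline pattern data here are illustrative stand-ins, not the actual hook API; real detection inspects dependencies and the real patterns live in `patterns/minimal/`:

```python
from pathlib import Path

def detect_project_type(project_root: Path) -> str:
    """Illustrative project-type detection from file-system markers."""
    if (project_root / "package.json").exists():
        return "react"  # simplified: real detection also checks dependencies
    if any(project_root.glob("*.py")) or (project_root / "pyproject.toml").exists():
        return "python"
    return "generic"

def load_pattern(project_type: str) -> dict:
    """Stand-in for loading patterns/minimal/{type}_project.yaml."""
    patterns = {
        "python": {"auto_flags": ["--serena", "--context7"], "primary": "serena"},
        "react": {"auto_flags": ["--magic", "--context7"], "primary": "magic"},
        "generic": {"auto_flags": [], "primary": None},
    }
    return patterns[project_type]
```

The applied configuration (auto-flags, primary MCP server) then drives the rest of the hook's execution.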
### Pattern System Components
1. **Minimal Patterns** - Essential patterns loaded during session initialization (e.g., python_project.yaml, react_project.yaml)
2. **Dynamic Patterns** - Runtime patterns for mode detection and MCP activation
3. **Learned Patterns** - User preference and project-specific optimizations (stored in learned/ directory)
### Core Modules
1. **Pattern Detection (pattern_detection.py)**
- 45KB module detecting project types and operation patterns
- Analyzes file structures, dependencies, and user input
2. **Learning Engine (learning_engine.py)**
- 40KB module for user preference tracking
- Records effectiveness of different configurations
3. **MCP Intelligence (mcp_intelligence.py)**
- 31KB module for MCP server routing decisions
- Maps operations to appropriate servers based on capabilities
4. **Framework Logic (framework_logic.py)**
- 12KB module implementing SuperClaude principles
- Handles complexity scoring and risk assessment
## Configuration System
The system is configured through YAML files and settings:
### Configuration Files (19 total)
Configuration is defined in /config/ directory:
- **performance.yaml** (345 lines) - Performance targets and thresholds
- **modes.yaml** - Mode detection patterns and settings
- **session.yaml** - Session lifecycle configuration
- **logging.yaml** - Logging configuration and levels
- **compression.yaml** - Token efficiency settings
- Other specialized configuration files
### Settings Integration
Claude Code hooks are configured through settings.json:
- **Hook Timeouts**: session_start (10s), pre_tool_use (15s), etc.
- **Hook Commands**: Python execution paths for each lifecycle hook
- **Hook Matching**: All hooks configured with "*" matcher (apply to all sessions)
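A registration for two of the hooks might look like the following excerpt. This is illustrative only: the event names, key names, and command paths are assumptions based on Claude Code's hooks schema, and the actual `settings.json` in this repository may differ.

```json
{
  "hooks": {
    "SessionStart": [{
      "matcher": "*",
      "hooks": [{
        "type": "command",
        "command": "python3 hooks/session_start.py",
        "timeout": 10
      }]
    }],
    "PreToolUse": [{
      "matcher": "*",
      "hooks": [{
        "type": "command",
        "command": "python3 hooks/pre_tool_use.py",
        "timeout": 15
      }]
    }]
  }
}
```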
### Performance Targets (from performance.yaml)
| Component | Target | Warning | Critical |
|---------------|--------|---------|----------|
| session_start | 50ms | 75ms | 100ms |
| pre_tool_use | 200ms | 300ms | 500ms |
| post_tool_use | 100ms | 150ms | 250ms |
| pre_compact | 150ms | 200ms | 300ms |
| notification | 100ms | 150ms | 200ms |
| stop | 200ms | 300ms | 500ms |
| subagent_stop | 150ms | 200ms | 300ms |
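The table's thresholds translate into a simple status classification per hook execution. The sketch below hard-codes the values from the table above; the real system reads them from `performance.yaml`:

```python
# (target, warning, critical) thresholds in milliseconds, from the table above.
PERFORMANCE_THRESHOLDS = {
    "session_start": (50, 75, 100),
    "pre_tool_use": (200, 300, 500),
    "post_tool_use": (100, 150, 250),
    "pre_compact": (150, 200, 300),
    "notification": (100, 150, 200),
    "stop": (200, 300, 500),
    "subagent_stop": (150, 200, 300),
}

def classify_duration(hook: str, elapsed_ms: float) -> str:
    """Classify a hook's execution time against its configured thresholds."""
    target, warning, critical = PERFORMANCE_THRESHOLDS[hook]
    if elapsed_ms <= target:
        return "ok"
    if elapsed_ms <= warning:
        return "above_target"
    if elapsed_ms <= critical:
        return "warning"
    return "critical"
```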
## Directory Structure
```
Framework-Hooks/
├── hooks/ # Lifecycle hook implementations (7 Python files)
│ ├── session_start.py # 703 lines - Session initialization
│ ├── pre_tool_use.py # MCP server selection and optimization
│ ├── post_tool_use.py # Validation and learning integration
│ ├── pre_compact.py # Token efficiency and compression
│ ├── notification.py # Pattern updates and notifications
│ ├── stop.py # Session analytics and persistence
│ ├── subagent_stop.py # Task management coordination
│ └── shared/ # Shared modules (9 Python files)
│ ├── framework_logic.py # 12KB - SuperClaude principles
│ ├── pattern_detection.py # 45KB - Pattern recognition
│ ├── mcp_intelligence.py # 31KB - MCP server routing
│ ├── learning_engine.py # 40KB - User preference learning
│ ├── compression_engine.py # 27KB - Token optimization
│ ├── intelligence_engine.py # 18KB - Core intelligence
│ ├── validate_system.py # 32KB - System validation
│ ├── yaml_loader.py # 16KB - Configuration loading
│ └── logger.py # 11KB - Logging utilities
├── config/ # Configuration files (19 YAML files)
│ ├── performance.yaml # 345 lines - Performance targets
│ ├── modes.yaml # Mode detection patterns
│ ├── session.yaml # Session management settings
│ ├── logging.yaml # Logging configuration
│ ├── compression.yaml # Token efficiency settings
│ └── ... # Additional configuration files
├── patterns/ # Pattern storage
│ ├── dynamic/ # Runtime pattern detection (mode_detection.yaml, mcp_activation.yaml)
│ ├── learned/ # User preferences (user_preferences.yaml, project_optimizations.yaml)
│ └── minimal/ # Project patterns (python_project.yaml, react_project.yaml)
├── docs/ # Documentation
└── settings.json # Claude Code hook configuration
```
## Key Components
### Lifecycle Hooks
1. **session_start.py** (703 lines)
- Runs at session start with 10-second timeout
- Detects project type and loads appropriate patterns
- Activates modes based on user input (brainstorming, task management, etc.)
- Routes to appropriate MCP servers
2. **pre_tool_use.py**
- Runs before each tool use with 15-second timeout
- Selects MCP servers based on operation type
- Applies performance optimizations
3. **post_tool_use.py**
- Runs after tool execution with 10-second timeout
- Validates results and logs learning data
- Updates effectiveness tracking
4. **pre_compact.py**
- Runs before token compression with 15-second timeout
- Applies compression strategies based on content type
- Preserves important content while optimizing tokens
5. **notification.py**
- Handles notifications with 10-second timeout
- Updates pattern caches and configurations
6. **stop.py**
- Runs at session end with 15-second timeout
- Generates session analytics and saves learning data
7. **subagent_stop.py**
- Handles subagent coordination with 15-second timeout
- Tracks delegation performance
### Shared Modules
Core functionality shared across hooks:
- **pattern_detection.py** (45KB) - Project and operation pattern recognition
- **learning_engine.py** (40KB) - User preference and effectiveness tracking
- **validate_system.py** (32KB) - System validation and health checks
- **mcp_intelligence.py** (31KB) - MCP server routing logic
- **compression_engine.py** (27KB) - Token optimization algorithms
- **intelligence_engine.py** (18KB) - Core intelligence coordination
- **yaml_loader.py** (16KB) - Configuration file loading
- **framework_logic.py** (12KB) - SuperClaude framework implementation
- **logger.py** (11KB) - Logging and debugging utilities
## Integration with SuperClaude
The Framework-Hooks system implements SuperClaude framework patterns through lifecycle hooks:
### Mode Detection and Activation
The session_start hook detects user intent and activates appropriate SuperClaude modes:
1. **Brainstorming Mode** - Activated for ambiguous requests ("not sure", "thinking about")
2. **Task Management Mode** - Activated for multi-step operations and complex builds
3. **Token Efficiency Mode** - Activated during resource constraints or when brevity requested
4. **Introspection Mode** - Activated for meta-analysis requests
### MCP Server Routing
The hooks route operations to appropriate MCP servers based on detected patterns:
- **Context7** - Library documentation and framework patterns
- **Sequential** - Multi-step reasoning and complex analysis
- **Magic** - UI component generation and design systems
- **Playwright** - Browser automation and testing
- **Morphllm** - File editing with pattern optimization
- **Serena** - Semantic analysis and memory management
### Framework Implementation
The hooks implement core SuperClaude concepts:
- **Rules Compliance** - File operation validation and security protocols
- **Principles Enforcement** - Evidence-based decisions and code quality standards
- **Performance Targets** - Sub-200ms operation targets with monitoring
- **Configuration Management** - YAML-driven behavior customization
The system provides SuperClaude framework functionality through Python hooks that run during Claude Code execution. This enables intelligent behavior based on project patterns and user preferences while maintaining the performance targets defined in the configuration files.

# Creating Patterns: Developer Guide
## Overview
This guide explains how to create new patterns for the Framework-Hooks pattern system. Patterns are YAML files that define automatic behavior, MCP server activation, and optimization rules for different contexts.
## Pattern Development Process
1. **Identify the Need**: Determine what behavior should be automated
2. **Choose Pattern Type**: Select minimal, dynamic, or learned pattern
3. **Define Structure**: Create YAML structure following the schema
4. **Test Pattern**: Verify the pattern works correctly
5. **Document Pattern**: Add appropriate documentation
## Creating Minimal Patterns
Minimal patterns detect project types and configure initial MCP server activation.
### Minimal Pattern Template
```yaml
# File: /patterns/minimal/{project_type}_project.yaml
project_type: "unique_identifier" # e.g., "python", "react", "vue"
detection_patterns: # File/directory existence patterns
- "*.{ext} files present" # File extension patterns
- "{manifest_file} dependency" # Dependency manifest detection
- "{directory}/ directories" # Directory structure patterns
auto_flags: # Automatic flag activation
- "--{primary_server}" # Primary server flag
- "--{secondary_server}" # Secondary server flag
mcp_servers:
primary: "{server_name}" # Primary MCP server
secondary: ["{server1}", "{server2}"] # Fallback servers
patterns:
file_structure: # Expected project structure
- "{directory}/" # Key directories
- "{file_pattern}" # Important files
common_tasks: # Typical operations
- "{task_description}" # Task patterns
intelligence:
mode_triggers: # Mode activation patterns
- "{mode_name}: {trigger_condition}"
validation_focus: # Quality validation priorities
- "{validation_type}"
performance_targets:
bootstrap_ms: {target_milliseconds} # Bootstrap time target
context_size: "{size}KB" # Context footprint
cache_duration: "{duration}min" # Cache retention time
```
### Example: Vue.js Project Pattern
```yaml
project_type: "vue"
detection_patterns:
- "package.json with vue dependency"
- "src/ directory with .vue files"
- "vue.config.js or vite.config.js"
auto_flags:
- "--magic" # Vue component generation
- "--context7" # Vue documentation
mcp_servers:
primary: "magic"
secondary: ["context7", "morphllm"]
patterns:
file_structure:
- "src/components/"
- "src/views/"
- "src/composables/"
- "src/stores/"
common_tasks:
- "component development"
- "composable creation"
- "store management"
- "routing configuration"
intelligence:
mode_triggers:
- "task_management: component|view|composable"
- "token_efficiency: context >75%"
validation_focus:
- "vue_syntax"
- "composition_api"
- "reactivity_patterns"
- "performance"
performance_targets:
bootstrap_ms: 32
context_size: "3.2KB"
cache_duration: "55min"
```
## Creating Dynamic Patterns
Dynamic patterns define runtime activation rules for modes and MCP servers.
### Dynamic Pattern Structure
```yaml
# For mode detection patterns
mode_detection:
{mode_name}:
triggers:
- "trigger description"
patterns:
- "keyword or phrase"
confidence_threshold: 0.7
activation_hooks: ["session_start", "pre_tool_use"]
coordination:
command: "/sc:command"
mcp_servers: ["server1", "server2"]
# For MCP activation patterns
activation_patterns:
{server_name}:
triggers:
- "activation trigger"
context_keywords:
- "keyword"
activation_confidence: 0.8
coordination_patterns:
hybrid_intelligence:
{server1}_{server2}:
condition: "coordination condition"
strategy: "coordination strategy"
confidence_threshold: 0.8
performance_optimization:
cache_activation_decisions: true
cache_duration_minutes: 15
batch_similar_requests: true
lazy_loading: true
```
## Creating Learned Patterns
Learned patterns track usage data and adapt behavior over time.
### Learned Pattern Structure
```yaml
# For user preferences
user_profile:
id: "user_identifier"
created: "2025-01-31"
last_updated: "2025-01-31"
sessions_analyzed: 0
learned_preferences:
communication_style:
verbosity_preference: "balanced"
technical_depth: "high"
symbol_usage_comfort: "high"
workflow_patterns:
preferred_thinking_mode: "--think-hard"
mcp_server_preferences:
- "serena"
- "sequential"
mode_activation_frequency:
task_management: 0.8
token_efficiency: 0.6
# For project optimizations
project_profile:
id: "project_identifier"
type: "project_type"
created: "2025-01-31"
optimization_cycles: 0
learned_optimizations:
workflow_optimizations:
effective_sequences:
- sequence: ["Read", "Edit", "Validate"]
success_rate: 0.95
context: "documentation updates"
mcp_server_effectiveness:
serena:
effectiveness: 0.9
optimal_contexts:
- "framework analysis"
performance_notes: "excellent for project context"
```
## Best Practices
### Pattern Design Guidelines
1. **Be Specific**: Use unique identifiers and clear detection criteria
2. **Set Appropriate Thresholds**: Match confidence thresholds to resource impact
3. **Include Fallbacks**: Define secondary servers and graceful degradation
4. **Document Rationale**: Explain why specific servers or settings were chosen
5. **Test Thoroughly**: Verify patterns work in different scenarios
### Performance Considerations
1. **Keep Minimal Patterns Small**: Aim for fast loading and low memory usage
2. **Set Realistic Targets**: Bootstrap times should be achievable
3. **Cache Wisely**: Set appropriate cache durations for stability
4. **Monitor Effectiveness**: Track pattern performance over time
### Integration Testing
1. **Hook Integration**: Verify patterns work with Framework-Hooks lifecycle
2. **MCP Coordination**: Test server activation and coordination rules
3. **Mode Transitions**: Ensure smooth transitions between modes
4. **Error Handling**: Test graceful degradation when servers unavailable
Pattern creation requires understanding the specific use case, careful design of activation rules, and thorough testing to ensure reliable operation within the Framework-Hooks system.

# Dynamic Patterns: Runtime Mode Detection and MCP Activation
## Overview
Dynamic patterns provide runtime mode detection and MCP server activation based on user context and requests. These patterns are stored in `/patterns/dynamic/` and use confidence thresholds to determine when to activate specific modes or MCP servers during operation.
## Purpose
Dynamic patterns handle:
- **Mode Detection**: Detect when to activate behavioral modes (brainstorming, task management, etc.)
- **MCP Server Activation**: Determine which MCP servers to activate based on context
- **Confidence Thresholds**: Use probability scores to make activation decisions
- **Coordination Rules**: Define how multiple servers or modes work together
## Pattern Structure
Dynamic patterns use confidence-based activation with trigger patterns and context analysis.
## Current Dynamic Patterns
### Mode Detection Pattern (`mode_detection.yaml`)
This pattern defines how different behavioral modes are detected and activated:
```yaml
mode_detection:
brainstorming:
triggers:
- "vague project requests"
- "exploration keywords"
- "uncertainty indicators"
- "new project discussions"
patterns:
- "I want to build"
- "thinking about"
- "not sure"
- "explore"
- "brainstorm"
- "figure out"
confidence_threshold: 0.7
activation_hooks: ["session_start", "pre_tool_use"]
coordination:
command: "/sc:brainstorm"
mcp_servers: ["sequential", "context7"]
task_management:
triggers:
- "multi-step operations"
- "build/implement keywords"
- "system-wide scope"
- "delegation indicators"
patterns:
- "build"
- "implement"
- "create"
- "system"
- "comprehensive"
- "multiple files"
confidence_threshold: 0.8
activation_hooks: ["pre_tool_use", "subagent_stop"]
coordination:
wave_orchestration: true
delegation_patterns: true
```
The pattern includes similar configurations for `token_efficiency` (threshold 0.75) and `introspection` (threshold 0.6) modes.
### MCP Activation Pattern (`mcp_activation.yaml`)
This pattern defines how MCP servers are activated based on context and user requests:
```yaml
activation_patterns:
context7:
triggers:
- "import statements from external libraries"
- "framework-specific questions"
- "documentation requests"
- "best practices queries"
context_keywords:
- "how to use"
- "documentation"
- "examples"
- "patterns"
activation_confidence: 0.8
sequential:
triggers:
- "complex debugging scenarios"
- "multi-step analysis requests"
- "--think flags detected"
- "system design questions"
context_keywords:
- "analyze"
- "debug"
- "complex"
- "system"
- "architecture"
activation_confidence: 0.85
magic:
triggers:
- "UI component requests"
- "design system queries"
- "frontend development"
- "component keywords"
context_keywords:
- "component"
- "UI"
- "frontend"
- "design"
- "interface"
activation_confidence: 0.9
serena:
triggers:
- "semantic analysis"
- "project-wide operations"
- "symbol navigation"
- "memory management"
context_keywords:
- "analyze"
- "project"
- "semantic"
- "memory"
- "context"
activation_confidence: 0.75
coordination_patterns:
hybrid_intelligence:
serena_morphllm:
condition: "complex editing with semantic understanding"
strategy: "serena analyzes, morphllm executes"
confidence_threshold: 0.8
multi_server_activation:
max_concurrent: 3
priority_order:
- "serena"
- "sequential"
- "context7"
- "magic"
- "morphllm"
- "playwright"
performance_optimization:
cache_activation_decisions: true
cache_duration_minutes: 15
batch_similar_requests: true
lazy_loading: true
```
## Confidence Thresholds
Dynamic patterns use confidence scores to determine activation:
- **Higher Thresholds (0.8-0.9)**: Used for resource-intensive operations (task management, magic)
- **Medium Thresholds (0.7-0.8)**: Used for standard operations (brainstorming, context7)
- **Lower Thresholds (0.6-0.75)**: Used for lightweight operations (introspection, serena)
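A confidence-gated activation check can be sketched as keyword scoring against a threshold. The scoring function below is a deliberate simplification (fraction of matched keywords), not the `pattern_detection.py` implementation:

```python
def keyword_confidence(user_input: str, keywords: list[str]) -> float:
    """Fraction of pattern keywords found in the input (simplified scoring)."""
    text = user_input.lower()
    hits = sum(1 for kw in keywords if kw in text)
    return hits / len(keywords) if keywords else 0.0

def should_activate(user_input: str, keywords: list[str], threshold: float) -> bool:
    """Gate activation on the confidence score meeting the pattern's threshold."""
    return keyword_confidence(user_input, keywords) >= threshold
```

With the `magic` keywords from `mcp_activation.yaml` and its 0.9 threshold, only strongly UI-flavored requests would activate the server, which matches the intent of reserving high thresholds for resource-intensive operations.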
## Coordination Patterns
The `mcp_activation.yaml` pattern includes coordination rules for:
- **Hybrid Intelligence**: Coordinated server usage (e.g., serena analyzes, morphllm executes)
- **Multi-Server Limits**: Maximum 3 concurrent servers to manage resources
- **Priority Ordering**: Server activation priority when multiple servers are relevant
- **Performance Optimization**: Caching, batching, and lazy loading strategies
## Hook Integration
Dynamic patterns integrate with Framework-Hooks at these points:
- **pre_tool_use**: Analyze user input for mode and server activation
- **session_start**: Apply initial context-based activations
- **post_tool_use**: Update activation patterns based on results
- **subagent_stop**: Re-evaluate activation patterns after sub-agent operations
## Creating Dynamic Patterns
To create new dynamic patterns:
1. **Define Triggers**: Identify the conditions that should activate the pattern
2. **Set Keywords**: Define specific words or phrases that indicate activation
3. **Choose Thresholds**: Set confidence thresholds appropriate for the operation's resource cost
4. **Specify Coordination**: Define how the pattern works with other systems
5. **Add Performance Rules**: Configure caching and optimization strategies
Dynamic patterns provide flexible, context-aware activation of Framework-Hooks features without requiring code changes.

# Learned Patterns: Adaptive Behavior Learning
## Overview
Learned patterns store adaptive behaviors that evolve based on project usage and user preferences. These patterns are stored in `/patterns/learned/` and track effectiveness, optimizations, and personalization data to improve Framework-Hooks behavior over time.
## Purpose
Learned patterns handle:
- **Project Optimizations**: Track effective workflows and performance improvements for specific projects
- **User Preferences**: Learn individual user behavior patterns and communication styles
- **Performance Metrics**: Monitor effectiveness of different MCP servers and coordination strategies
- **Error Prevention**: Learn from past issues to prevent recurring problems
## Current Learned Patterns
### User Preferences Pattern (`user_preferences.yaml`)
This pattern tracks individual user behavior and preferences:
```yaml
user_profile:
id: "example_user"
created: "2025-01-31"
last_updated: "2025-01-31"
sessions_analyzed: 0
learned_preferences:
communication_style:
verbosity_preference: "balanced" # minimal, balanced, detailed
technical_depth: "high" # low, medium, high
symbol_usage_comfort: "high" # low, medium, high
abbreviation_tolerance: "medium" # low, medium, high
workflow_patterns:
preferred_thinking_mode: "--think-hard"
mcp_server_preferences:
- "serena" # Most frequently beneficial
- "sequential" # High success rate
- "context7" # Frequently requested
mode_activation_frequency:
task_management: 0.8 # High usage
token_efficiency: 0.6 # Medium usage
brainstorming: 0.3 # Low usage
introspection: 0.4 # Medium usage
project_type_expertise:
python: 0.9 # High proficiency
react: 0.7 # Good proficiency
javascript: 0.8 # High proficiency
documentation: 0.6 # Medium proficiency
performance_preferences:
speed_vs_quality: "quality_focused" # speed_focused, balanced, quality_focused
compression_tolerance: 0.7 # How much compression user accepts
context_size_preference: "medium" # small, medium, large
learning_insights:
effective_patterns:
- pattern: "serena + morphllm hybrid"
success_rate: 0.92
context: "large refactoring tasks"
- pattern: "sequential + context7"
success_rate: 0.88
context: "complex debugging"
- pattern: "magic + context7"
success_rate: 0.85
context: "UI component creation"
adaptive_thresholds:
mode_activation:
brainstorming: 0.6 # Lowered from 0.7 due to user preference
task_management: 0.9 # Raised from 0.8 due to frequent use
token_efficiency: 0.65 # Adjusted based on tolerance
introspection: 0.5 # Lowered due to user comfort with meta-analysis
```
### Project Optimizations Pattern (`project_optimizations.yaml`)
This pattern tracks project-specific performance and optimization data:
```yaml
project_profile:
id: "superclaude_framework"
type: "python_framework"
created: "2025-01-31"
last_analyzed: "2025-01-31"
optimization_cycles: 0
learned_optimizations:
file_patterns:
high_frequency_files:
patterns:
- "commands/*.md"
- "Core/*.md"
- "Modes/*.md"
- "MCP/*.md"
frequency_weight: 0.9
cache_priority: "high"
structural_patterns:
patterns:
- "markdown documentation with YAML frontmatter"
- "python scripts with comprehensive docstrings"
- "modular architecture with clear separation"
optimization: "maintain full context for these patterns"
workflow_optimizations:
effective_sequences:
- sequence: ["Read", "Edit", "Validate"]
success_rate: 0.95
context: "documentation updates"
- sequence: ["Glob", "Read", "MultiEdit"]
success_rate: 0.88
context: "multi-file refactoring"
- sequence: ["Serena analyze", "Morphllm execute"]
success_rate: 0.92
context: "large codebase changes"
mcp_server_effectiveness:
serena:
effectiveness: 0.9
optimal_contexts:
- "framework documentation analysis"
- "cross-file relationship mapping"
- "memory-driven development"
performance_notes: "excellent for project context"
sequential:
effectiveness: 0.85
optimal_contexts:
- "complex architectural decisions"
- "multi-step problem solving"
- "systematic analysis"
performance_notes: "valuable for thinking-intensive tasks"
morphllm:
effectiveness: 0.8
optimal_contexts:
- "pattern-based editing"
- "documentation updates"
- "style consistency"
performance_notes: "efficient for text transformations"
performance_insights:
bottleneck_identification:
- area: "large markdown file processing"
impact: "medium"
optimization: "selective reading with targeted edits"
- area: "cross-file reference validation"
impact: "low"
optimization: "cached reference mapping"
acceleration_opportunities:
- opportunity: "pattern-based file detection"
potential_improvement: "40% faster file processing"
implementation: "regex pre-filtering"
- opportunity: "intelligent caching"
potential_improvement: "60% faster repeated operations"
implementation: "content-aware cache keys"
```
## Learning Process
Learned patterns evolve through:
1. **Data Collection**: Track user interactions, tool effectiveness, and performance metrics
2. **Pattern Analysis**: Identify successful workflows and optimization opportunities
3. **Threshold Adjustment**: Adapt confidence thresholds based on user behavior
4. **Performance Tracking**: Monitor the effectiveness of different strategies
5. **Cross-Session Persistence**: Maintain learning across multiple work sessions
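Step 3 (threshold adjustment) can be sketched as a small bounded update: lower a mode's threshold when its activations prove useful, raise it when they don't. The step size and bounds here are assumptions for illustration, not the `learning_engine.py` algorithm:

```python
def adjust_threshold(current: float, activation_was_useful: bool,
                     step: float = 0.05, lo: float = 0.5, hi: float = 0.95) -> float:
    """Nudge an activation threshold based on observed usefulness.

    Useful activations lower the threshold (activate more readily);
    unhelpful ones raise it. The result is clamped to [lo, hi] so a
    run of one-sided feedback cannot push the threshold to an extreme.
    """
    adjusted = current - step if activation_was_useful else current + step
    return max(lo, min(hi, adjusted))
```

This mirrors the `adaptive_thresholds` data above, where brainstorming was lowered from 0.7 to 0.6 after repeated useful activations.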
## Integration Notes
Learned patterns integrate with Framework-Hooks through:
- **Adaptive Thresholds**: Modify activation thresholds based on learned preferences
- **Server Selection**: Prioritize MCP servers based on measured effectiveness
- **Workflow Optimization**: Apply learned effective sequences to new tasks
- **Performance Monitoring**: Track and optimize based on measured performance
The learned patterns provide a feedback mechanism that allows Framework-Hooks to improve its behavior based on actual usage patterns and results.

# Minimal Patterns: Project Detection and Bootstrap
## Overview
Minimal patterns provide project type detection and initial Framework-Hooks configuration. These patterns are stored in `/patterns/minimal/` and automatically configure MCP server activation and auto-flags based on detected project characteristics.
## Purpose
Minimal patterns handle:
- **Project Detection**: Identify project type from file structure and dependencies
- **MCP Server Selection**: Configure primary and secondary MCP servers
- **Auto-Flag Configuration**: Set automatic flags for immediate activation
- **Performance Targets**: Define bootstrap timing and context size goals
## Pattern Structure
All minimal patterns follow this YAML structure:
```yaml
project_type: "string" # Unique project identifier
detection_patterns: [] # File/directory detection rules
auto_flags: [] # Automatic flag activation
mcp_servers:
primary: "string" # Primary MCP server
secondary: [] # Fallback servers
patterns:
file_structure: [] # Expected project files/dirs
common_tasks: [] # Typical operations
intelligence:
mode_triggers: [] # Mode activation conditions
validation_focus: [] # Quality validation priorities
performance_targets:
bootstrap_ms: number # Bootstrap time target
context_size: "string" # Context footprint target
cache_duration: "string" # Cache retention time
```
### Detection Rules
Detection patterns identify projects through:
- **File Extensions**: Look for specific file types (`.py`, `.jsx`, etc.)
- **Dependency Files**: Check for `package.json`, `requirements.txt`, `pyproject.toml`
- **Directory Structure**: Verify expected directories exist
- **Configuration Files**: Detect framework-specific config files
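Combining several of these signals reduces false positives. A hypothetical evaluator for the Python pattern's three detection rules might look like this (the two-signal requirement is an illustrative design choice, not documented behavior):

```python
from pathlib import Path

def matches_python_project(root: Path) -> bool:
    """Evaluate the three signal types: extensions, manifests, directories."""
    has_py_files = any(root.rglob("*.py"))
    has_manifest = (root / "requirements.txt").exists() or \
                   (root / "pyproject.toml").exists()
    has_cache_dirs = any(root.rglob("__pycache__"))
    # Require at least two independent signals to reduce false positives.
    return sum([has_py_files, has_manifest, has_cache_dirs]) >= 2
```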
## Current Minimal Patterns
### Python Project Pattern (`python_project.yaml`)
This is the actual pattern file for Python projects:
```yaml
project_type: "python"
detection_patterns:
- "*.py files present"
- "requirements.txt or pyproject.toml"
- "__pycache__/ directories"
auto_flags:
- "--serena" # Semantic analysis
- "--context7" # Python documentation
mcp_servers:
primary: "serena"
secondary: ["context7", "sequential", "morphllm"]
patterns:
file_structure:
- "src/ or lib/"
- "tests/"
- "docs/"
- "requirements.txt"
common_tasks:
- "function refactoring"
- "class extraction"
- "import optimization"
- "testing setup"
intelligence:
mode_triggers:
- "token_efficiency: context >75%"
- "task_management: refactor|test|analyze"
validation_focus:
- "python_syntax"
- "pep8_compliance"
- "type_hints"
- "testing_coverage"
performance_targets:
bootstrap_ms: 40
context_size: "4KB"
cache_duration: "45min"
```
This pattern automatically activates Serena (for semantic analysis) and Context7 (for Python documentation) when Python projects are detected.
### React Project Pattern (`react_project.yaml`)
```yaml
project_type: "react"
detection_patterns:
- "package.json with react dependency"
- "src/ directory with .jsx/.tsx files"
- "public/index.html"
auto_flags:
- "--magic" # UI component generation
- "--context7" # React documentation
mcp_servers:
primary: "magic"
secondary: ["context7", "morphllm"]
patterns:
file_structure:
- "src/components/"
- "src/hooks/"
- "src/pages/"
- "src/utils/"
common_tasks:
- "component creation"
- "state management"
- "routing setup"
- "performance optimization"
intelligence:
mode_triggers:
- "token_efficiency: context >75%"
- "task_management: build|implement|create"
validation_focus:
- "jsx_syntax"
- "react_patterns"
- "accessibility"
- "performance"
performance_targets:
bootstrap_ms: 30
context_size: "3KB"
cache_duration: "60min"
```
This pattern activates Magic (for UI component generation) and Context7 (for React documentation) when React projects are detected.
## Creating New Minimal Patterns
### Pattern Creation Process
1. **Identify Project Type**: Determine unique characteristics of the project type
2. **Define Detection Rules**: Create file/directory patterns for identification
3. **Select MCP Servers**: Choose primary and secondary servers for the project type
4. **Configure Auto-Flags**: Set flags that should activate automatically
5. **Define Intelligence**: Specify mode triggers and validation focus
6. **Set Performance Targets**: Define bootstrap time and context size goals
### Pattern Template
```yaml
project_type: "your_project_type"
detection_patterns:
- "unique file or directory patterns"
- "dependency or configuration files"
- "framework-specific indicators"
auto_flags:
- "--primary_server"
- "--supporting_server"
mcp_servers:
primary: "most_relevant_server"
secondary: ["fallback", "servers"]
patterns:
file_structure:
- "expected/directories/"
- "important files"
common_tasks:
- "typical operations"
- "common workflows"
intelligence:
mode_triggers:
- "mode_name: trigger_conditions"
validation_focus:
- "syntax_validation"
- "best_practices"
- "quality_checks"
performance_targets:
bootstrap_ms: target_milliseconds
context_size: "target_size"
cache_duration: "cache_time"
```
## Best Practices
### Detection Pattern Guidelines
1. **Use Specific Identifiers**: Look for unique files or dependency patterns
2. **Multiple Signals**: Combine file extensions, directories, and config files
3. **Avoid Generic Patterns**: Don't rely on common files like `README.md`
4. **Test Edge Cases**: Handle missing files or permission errors gracefully
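The guidelines above can be sketched as a small detection routine. This is an illustrative sketch only — the real detection logic lives in `pattern_detection.py`, and the helper name here is hypothetical:

```python
from pathlib import Path
from typing import Optional

def detect_project_type(root: str) -> Optional[str]:
    """Combine several independent signals instead of one generic file."""
    root_path = Path(root)
    try:
        has_py = any(root_path.glob("*.py"))
        has_py_config = ((root_path / "requirements.txt").exists()
                         or (root_path / "pyproject.toml").exists())
        pkg = root_path / "package.json"
        has_react = pkg.exists() and '"react"' in pkg.read_text(errors="ignore")
    except OSError:
        return None  # unreadable directories should not crash detection
    if has_react:
        return "react"
    if has_py and has_py_config:
        return "python"
    return None
```

Note how the `try/except OSError` handles missing files and permission errors gracefully, per guideline 4, and how each project type requires more than one signal, per guideline 2.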
### MCP Server Selection
1. **Primary Server**: Choose the most relevant MCP server for the project type
2. **Secondary Servers**: Add complementary servers as fallbacks
3. **Auto-Flags**: Set flags that provide immediate value for the project type
4. **Performance Targets**: Set realistic bootstrap and context size goals
## Integration Notes
Minimal patterns integrate with Framework-Hooks through:
- **session_start hook**: Loads and applies patterns during initialization
- **Project detection**: Scans files and directories to identify project type
- **MCP activation**: Automatically starts relevant MCP servers
- **Flag processing**: Sets auto-flags for immediate feature activation
The pattern system provides a declarative way to configure Framework-Hooks behavior for different project types without requiring code changes.

# SuperClaude Pattern System Overview
## Overview
The SuperClaude Pattern System provides a three-tier architecture for project detection, mode activation, and adaptive learning within the Framework-Hooks system. The system uses YAML-based patterns to configure automatic behavior, MCP server activation, and performance optimization.
## System Architecture
### Core Structure
The pattern system consists of three directories with distinct purposes:
```
patterns/
├── minimal/ # Project detection and bootstrap configuration
├── dynamic/ # Mode detection and MCP server activation
└── learned/ # Project-specific adaptations and user preferences
```
### Pattern Types
**Minimal Patterns**: Project type detection and initial MCP server selection
- File detection patterns for project types (Python, React, etc.)
- Auto-flag configuration for immediate MCP server activation
- Basic project structure recognition
**Dynamic Patterns**: Runtime activation based on context analysis
- Mode detection patterns (brainstorming, task management, etc.)
- MCP server activation based on user requests
- Cross-mode coordination rules
**Learned Patterns**: Adaptation based on usage patterns
- Project-specific optimizations that evolve over time
- User preference learning and adaptation
- Performance metrics and effectiveness tracking
## Pattern Structure
### 1. Minimal Patterns
**Purpose**: Project detection and bootstrap configuration
- **Location**: `/patterns/minimal/`
- **Files**: `python_project.yaml`, `react_project.yaml`
- **Content**: Detection patterns, auto-flags, MCP server configuration
### 2. Dynamic Patterns
**Purpose**: Runtime mode detection and MCP server activation
- **Location**: `/patterns/dynamic/`
- **Files**: `mcp_activation.yaml`, `mode_detection.yaml`
- **Content**: Activation patterns, confidence thresholds, coordination rules
### 3. Learned Patterns
**Purpose**: Adaptive behavior based on usage patterns
- **Location**: `/patterns/learned/`
- **Files**: `project_optimizations.yaml`, `user_preferences.yaml`
- **Content**: Performance metrics, user preferences, optimization tracking
## Pattern Schema
### Minimal Pattern Structure
Based on actual files like `python_project.yaml`:
```yaml
project_type: "python"
detection_patterns:
- "*.py files present"
- "requirements.txt or pyproject.toml"
- "__pycache__/ directories"
auto_flags:
- "--serena" # Semantic analysis
- "--context7" # Python documentation
mcp_servers:
primary: "serena"
secondary: ["context7", "sequential", "morphllm"]
patterns:
file_structure:
- "src/ or lib/"
- "tests/"
- "docs/"
- "requirements.txt"
common_tasks:
- "function refactoring"
- "class extraction"
- "import optimization"
- "testing setup"
intelligence:
mode_triggers:
- "token_efficiency: context >75%"
- "task_management: refactor|test|analyze"
validation_focus:
- "python_syntax"
- "pep8_compliance"
- "type_hints"
- "testing_coverage"
performance_targets:
bootstrap_ms: 40
context_size: "4KB"
cache_duration: "45min"
```
### Dynamic Pattern Structure
Based on `mcp_activation.yaml`:
```yaml
activation_patterns:
context7:
triggers:
- "import statements from external libraries"
- "framework-specific questions"
- "documentation requests"
context_keywords:
- "documentation"
- "examples"
- "patterns"
activation_confidence: 0.8
coordination_patterns:
hybrid_intelligence:
serena_morphllm:
condition: "complex editing with semantic understanding"
strategy: "serena analyzes, morphllm executes"
confidence_threshold: 0.8
performance_optimization:
cache_activation_decisions: true
cache_duration_minutes: 15
batch_similar_requests: true
lazy_loading: true
```
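A minimal sketch of how such an activation pattern could be evaluated — illustrative only, with assumed function and parameter names; the actual routing logic lives in `mcp_intelligence.py`:

```python
from typing import List

def should_activate(user_text: str, context_keywords: List[str],
                    activation_confidence: float,
                    threshold: float = 0.8) -> bool:
    """Activate a server when a context keyword matches and the
    pattern's configured confidence clears the threshold."""
    text = user_text.lower()
    keyword_match = any(kw in text for kw in context_keywords)
    return keyword_match and activation_confidence >= threshold
```

With the `context7` pattern above, a request containing "documentation" and an `activation_confidence` of 0.8 would clear the default 0.8 threshold and trigger activation.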
## Hook Integration
### Hook Points
The pattern system integrates with Framework-Hooks at these points:
**session_start**: Load minimal patterns for project detection
**pre_tool_use**: Apply dynamic patterns for mode detection
**post_tool_use**: Update learned patterns with usage data
**stop**: Persist learned optimizations and preferences
### MCP Server Activation
Patterns control MCP server activation through:
1. **Auto-flags**: Immediate activation based on project type
2. **Dynamic activation**: Context-based activation during operation
3. **Coordination patterns**: Rules for multi-server interactions
### Mode Detection
Mode activation is controlled by patterns in `mode_detection.yaml`:
- **Brainstorming**: Triggered by vague project requests, exploration keywords
- **Task Management**: Multi-step operations, system-wide scope
- **Token Efficiency**: Context usage >75%, resource constraints
- **Introspection**: Self-analysis requests, framework discussions
## Current Pattern Files
### Minimal Patterns
**python_project.yaml** (45 lines):
- Detects Python projects by `.py` files, `requirements.txt`, `pyproject.toml`
- Auto-activates `--serena` and `--context7` flags
- Targets 40ms bootstrap, 4KB context size
- Primary server: serena, with context7/sequential/morphllm fallback
**react_project.yaml**:
- Detects React projects by `package.json` with react dependency
- Auto-activates `--magic` and `--context7` flags
- Targets 30ms bootstrap, 3KB context size
- Primary server: magic, with context7/morphllm fallback
### Dynamic Patterns
**mcp_activation.yaml** (114 lines):
- Defines activation patterns for all 6 MCP servers
- Includes context keywords and confidence thresholds
- Hybrid intelligence coordination (serena + morphllm)
- Performance optimization settings (caching, lazy loading)
**mode_detection.yaml**:
- Mode detection for brainstorming, task management, token efficiency, introspection
- Confidence thresholds from 0.6-0.8 depending on mode
- Cross-mode coordination and transition rules
- Adaptive learning configuration
### Learned Patterns
**project_optimizations.yaml**:
- Project-specific learning for SuperClaude framework
- File pattern analysis and workflow optimization tracking
- MCP server effectiveness measurements
- Performance bottleneck identification and solutions
**user_preferences.yaml**:
- User behavior adaptation patterns
- Communication style preferences
- Workflow pattern effectiveness tracking
- Personalized thresholds and server preferences
## Usage
### Creating New Patterns
1. **Minimal Patterns**: Create project detection patterns in `/patterns/minimal/`
2. **Dynamic Patterns**: Define activation rules in `/patterns/dynamic/`
3. **Learned Patterns**: Configure adaptation tracking in `/patterns/learned/`
### Pattern Development
Patterns are YAML files that follow specific schema formats. They control:
- Project type detection based on file patterns
- Automatic MCP server activation
- Mode detection and activation thresholds
- Performance optimization preferences
- User behavior adaptation
The pattern system provides a declarative way to configure Framework-Hooks behavior without modifying code, enabling customization and optimization based on project types and usage patterns.

# Framework-Hooks Performance Documentation
## Performance Targets
The Framework-Hooks system defines performance targets in performance.yaml for each hook:
**Core Performance Requirements:**
- Hook execution should complete within configured timeouts to avoid blocking Claude Code
- Performance targets provide goals for optimization but do not guarantee actual execution times
- Resource usage should remain reasonable during normal operation
- Performance monitoring tracks actual execution against configured targets
**Performance Target Rationale:**
```
Fast Hook Performance → Reduced Session Latency → Better User Experience
```
**Configured Thresholds:**
- **Target times**: Optimal performance goals from performance.yaml
- **Warning thresholds**: ~1.5x target (monitoring threshold)
- **Critical thresholds**: ~2x target (performance alert threshold)
- **Timeout limits**: Configured in settings.json (10-15 seconds per hook)
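The ~1.5x and ~2x relationships can be expressed directly. This is only an approximation sketch: the thresholds actually shipped in performance.yaml are hand-tuned and deviate from these multipliers for some hooks:

```python
def derive_thresholds(target_ms: int) -> dict:
    """Derive approximate warning (~1.5x) and critical (~2x) levels
    from a configured target time in milliseconds."""
    return {
        "target_ms": target_ms,
        "warning_ms": round(target_ms * 1.5),
        "critical_ms": target_ms * 2,
    }
```

For session_start this reproduces the configured 50/75/100ms ladder exactly; pre_tool_use (200/300/500ms) uses a wider critical margin.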
## Hook Performance Targets
### Performance Targets from performance.yaml
The system defines performance targets for each lifecycle hook:
#### session_start: 50ms target
**Current Implementation**: 703 lines of Python code
**Timeout**: 10 seconds (from settings.json)
**Thresholds**: 50ms target, 75ms warning, 100ms critical
**Primary Tasks**:
- Project type detection (file analysis)
- Pattern loading from minimal/ directory
- Mode activation based on user input
- MCP server routing decisions
#### pre_tool_use: 200ms target
**Timeout**: 15 seconds
**Thresholds**: 200ms target, 300ms warning, 500ms critical
**Primary Tasks**:
- Operation pattern analysis
- MCP server selection logic
- Performance optimization planning
#### post_tool_use: 100ms target
**Timeout**: 10 seconds
**Thresholds**: 100ms target, 150ms warning, 250ms critical
**Primary Tasks**:
- Operation result validation
- Learning data recording
- Effectiveness tracking
#### pre_compact: 150ms target
**Timeout**: 15 seconds
**Thresholds**: 150ms target, 200ms warning, 300ms critical
**Primary Tasks**:
- Content type classification
- Compression strategy selection
- Token optimization application
#### notification: 100ms target
**Timeout**: 10 seconds
**Thresholds**: 100ms target, 150ms warning, 200ms critical
**Primary Tasks**:
- Pattern cache updates
- Configuration refreshes
- Notification processing
#### stop: 200ms target
**Timeout**: 15 seconds
**Thresholds**: 200ms target, 300ms warning, 500ms critical
**Primary Tasks**:
- Session analytics generation
- Learning data persistence
- Performance metrics collection
#### subagent_stop: 150ms target
**Timeout**: 15 seconds
**Thresholds**: 150ms target, 200ms warning, 300ms critical
**Primary Tasks**:
- Delegation performance tracking
- Coordination effectiveness measurement
- Task management analytics
## Implementation Architecture
### Core Implementation Components
The Framework-Hooks system consists of:
**Python Modules:**
- 7 main hook files (session_start.py: 703 lines, others vary)
- 9 shared modules totaling ~250KB of Python code
- pattern_detection.py (45KB) handles project type and pattern recognition
- learning_engine.py (40KB) manages user preferences and effectiveness tracking
**Configuration System:**
- 19 YAML configuration files
- performance.yaml (345 lines) defines all timing targets and thresholds
- Pattern files organized in minimal/, dynamic/, learned/ directories
- settings.json configures hook execution and timeouts
**Pattern Loading Strategy:**
```yaml
Minimal Patterns (loaded at startup):
- python_project.yaml: Python-specific configuration
- react_project.yaml: React project patterns
- Basic mode detection triggers
Dynamic Patterns (loaded as needed):
- mcp_activation.yaml: Server routing patterns
- mode_detection.yaml: SuperClaude mode triggers
Learned Patterns (updated during use):
- user_preferences.yaml: Personal configuration adaptations
- project_optimizations.yaml: Project-specific learned patterns
```
### Compression Implementation
The pre_compact hook implements token compression through compression_engine.py (27KB):
**Compression Strategies:**
```text
Compression Levels (from compression.yaml):
MINIMAL: Framework content excluded, user content preserved
EFFICIENT: Selective compression with quality validation
COMPRESSED: Symbol systems and abbreviations applied
CRITICAL: Aggressive optimization when needed
EMERGENCY: Maximum compression for resource constraints
```
**Symbol Systems Implementation:**
- Mathematical operators: 'leads to' → '→'
- Status indicators: 'completed' → '✅'
- Technical domains: 'performance' → '⚡'
- Applied selectively based on content type and compression level
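The symbol substitutions above amount to a simple replacement table. A minimal sketch — the full mapping and the selective-application rules live in `compression_engine.py`:

```python
# Subset of the symbol table for illustration.
SYMBOL_MAP = {
    "leads to": "→",
    "completed": "✅",
    "performance": "⚡",
}

def apply_symbols(text: str) -> str:
    """Replace verbose phrases with their symbol equivalents."""
    for phrase, symbol in SYMBOL_MAP.items():
        text = text.replace(phrase, symbol)
    return text
```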
**Content Classification:**
```text
Compression Strategy by Content Type:
FRAMEWORK_CONTENT: 0% compression (complete preservation)
SESSION_DATA: Variable compression (based on compression level)
USER_CONTENT: Minimal compression (quality preservation priority)
WORKING_ARTIFACTS: Higher compression allowed (temporary data)
```
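A sketch of how strategy selection by content type could be gated — illustrative only, with an assumed `compression_allowed` helper; the actual rules come from compression.yaml:

```python
from enum import Enum

class ContentType(Enum):
    FRAMEWORK_CONTENT = "framework"
    SESSION_DATA = "session"
    USER_CONTENT = "user"
    WORKING_ARTIFACTS = "artifacts"

def compression_allowed(content_type: ContentType, level: str) -> bool:
    """Framework content is always preserved; user content is only
    touched at the most aggressive levels (an assumed policy here);
    session data and working artifacts may be compressed."""
    if content_type is ContentType.FRAMEWORK_CONTENT:
        return False
    if content_type is ContentType.USER_CONTENT:
        return level in ("CRITICAL", "EMERGENCY")
    return True
```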
## Session Startup Implementation
### session_start Hook Operation
The session_start.py hook (703 lines) implements session initialization:
**Primary Operations:**
1. **Project Detection**: Analyzes file structure to determine project type (Python, React, etc.)
2. **Pattern Loading**: Loads appropriate minimal pattern files based on detected project type
3. **Mode Activation**: Parses user input to detect SuperClaude mode triggers
4. **MCP Routing**: Determines which MCP servers to activate based on patterns and project type
**Implementation Approach:**
```text
Session Startup Process:
1. Initialize shared modules (framework_logic, pattern_detection, etc.)
2. Load YAML configurations (performance.yaml, modes.yaml, etc.)
3. Analyze current directory for project type indicators
4. Load appropriate minimal pattern files
5. Process user input for mode detection
6. Generate MCP server activation recommendations
7. Record startup metrics for performance monitoring
```
**Performance Considerations:**
- Hook has 10-second timeout limit (configured in settings.json)
- Target execution time: 50ms (from performance.yaml)
- Actual performance depends on system resources and project complexity
- Pattern loading optimized by only loading relevant minimal patterns initially
## Performance Monitoring Implementation
### Performance Tracking System
The Framework-Hooks system includes performance monitoring through:
**Performance Configuration:**
- **performance.yaml** (345 lines): Defines targets, warning, and critical thresholds for each hook
- **logger.py** (11KB): Provides logging utilities for performance tracking and debugging
- **Hook timing**: Each hook execution is measured and compared against configured targets
**Monitoring Components:**
```yaml
Performance Tracking:
Target Times: Defined in performance.yaml for each hook
Warning Thresholds: ~1.5x target time (monitoring alerts)
Critical Thresholds: ~2x target time (performance degradation alerts)
Timeout Limits: Configured in settings.json (10-15 seconds per hook)
```
**Learning Integration:**
- **learning_engine.py** (40KB): Tracks operation effectiveness and user preferences
- **Learned patterns**: Performance data influences future pattern selection
- **User preferences**: Successful configurations saved for future sessions
- **Analytics**: stop.py hook generates session performance summaries
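Persisting per-hook effectiveness data could look like the following. The file layout and function name are hypothetical; actual persistence is handled by `learning_engine.py` under `cache/learning/`:

```python
import json
import time
from pathlib import Path

def record_effectiveness(cache_dir: str, hook: str,
                         duration_ms: float, target_ms: float) -> None:
    """Append one timing record so later sessions can adapt behavior."""
    record = {
        "hook": hook,
        "duration_ms": duration_ms,
        "within_target": duration_ms <= target_ms,
        "timestamp": time.time(),
    }
    path = Path(cache_dir) / "learning" / f"{hook}_metrics.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    history = json.loads(path.read_text()) if path.exists() else []
    history.append(record)
    path.write_text(json.dumps(history, indent=2))
```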
## Implementation Summary
### Framework-Hooks Performance Characteristics
The Framework-Hooks system implements performance monitoring and optimization through:
**Performance Configuration:**
- Performance targets defined in performance.yaml for each of 7 lifecycle hooks
- Timeout limits configured in settings.json (10-15 seconds per hook)
- Warning and critical thresholds for performance degradation detection
**Implementation Approach:**
- Python hooks execute during Claude Code lifecycle events
- Shared modules provide common functionality (pattern detection, learning, compression)
- YAML configuration files customize behavior without code changes
- Pattern system enables project-specific optimizations
**Performance Considerations:**
- Hook performance depends on system resources, project complexity, and configuration
- Targets provide optimization goals but do not guarantee actual execution times
- Learning system adapts behavior based on measured effectiveness
- Compression system balances token efficiency with content quality preservation
The system aims to provide SuperClaude framework functionality through efficient lifecycle hooks while maintaining reasonable resource usage and execution times.

# Quick Reference
Essential commands and information for Framework-Hooks developers and users.
## System Overview
- **7 hooks**: Execute at specific Claude Code lifecycle events
- **9 shared modules**: Common functionality across hooks
- **12+ config files**: YAML-based configuration system
- **3-tier patterns**: minimal/dynamic/learned pattern system
- **Performance targets**: <50ms to <200ms per hook
## Installation Quick Check
```bash
# Verify Python version
python3 --version # Need 3.8+
# Check directory structure
ls Framework-Hooks/
# Should see: hooks/ config/ patterns/ cache/ docs/
# Test hook execution
python3 Framework-Hooks/hooks/session_start.py
# Validate system
cd Framework-Hooks/hooks/shared
python3 validate_system.py --check-installation
```
## Configuration Files
| File | Purpose | Key Settings |
|------|---------|-------------|
| `logging.yaml` | System logging | `enabled: false` (default) |
| `performance.yaml` | Timing targets | session_start: 50ms, pre_tool_use: 200ms |
| `session.yaml` | Session lifecycle | Context management, cleanup behavior |
| `compression.yaml` | Content compression | Selective compression rules |
| `mcp_orchestration.yaml` | MCP server routing | Server activation patterns |
## Performance Targets
| Hook | Target Time | Timeout |
|------|-------------|---------|
| session_start.py | <50ms | 10s |
| pre_tool_use.py | <200ms | 15s |
| post_tool_use.py | <100ms | 10s |
| pre_compact.py | <150ms | 15s |
| notification.py | <50ms | 10s |
| stop.py | <100ms | 15s |
| subagent_stop.py | <100ms | 15s |
## Common Operations
### Enable Logging
```yaml
# Edit config/logging.yaml
logging:
enabled: true
level: "INFO" # or DEBUG
```
### Check System Health
```bash
cd Framework-Hooks/hooks/shared
python3 validate_system.py --health-check
```
### Test Individual Hook
```bash
cd Framework-Hooks/hooks
python3 session_start.py # Test session initialization
python3 pre_tool_use.py # Test tool preparation
```
### Clear Cache
```bash
# Reset learning data and cache
rm -rf Framework-Hooks/cache/*
# System will recreate on next run
```
### View Recent Logs
```bash
# Check latest logs (if logging enabled)
tail -f Framework-Hooks/cache/logs/hooks-$(date +%Y-%m-%d).log
```
## Hook Execution Flow
```
Session Start → Load Config → Detect Project → Apply Patterns →
Activate Features → [Work Session with Tool Use Hooks] →
Record Learning → Save State → Session End
```
## Directory Structure
```
Framework-Hooks/
├── hooks/ # 7 hook scripts
│ ├── session_start.py # <50ms - Session init
│ ├── pre_tool_use.py # <200ms - Tool prep
│ ├── post_tool_use.py # <100ms - Usage recording
│ ├── pre_compact.py # <150ms - Context compression
│ ├── notification.py # <50ms - Notifications
│ ├── stop.py # <100ms - Session cleanup
│ ├── subagent_stop.py # <100ms - Subagent coordination
│ └── shared/ # 9 shared modules
├── config/ # 12+ YAML config files
├── patterns/ # 3-tier pattern system
│ ├── minimal/ # Always loaded (3-5KB each)
│ ├── dynamic/ # On-demand (8-12KB each)
│ └── learned/ # User adaptations (10-20KB each)
├── cache/ # Runtime cache and logs
└── docs/ # Documentation
```
## Shared Modules
| Module | Purpose |
|--------|---------|
| `framework_logic.py` | SuperClaude framework integration |
| `compression_engine.py` | Context compression and optimization |
| `learning_engine.py` | Adaptive learning from usage |
| `mcp_intelligence.py` | MCP server coordination |
| `pattern_detection.py` | Project and usage pattern detection |
| `intelligence_engine.py` | Central intelligence coordination |
| `logger.py` | Structured logging system |
| `yaml_loader.py` | Configuration loading utilities |
| `validate_system.py` | System validation and health checks |
## Troubleshooting Quick Fixes
### Hook Timeout
- Check performance targets in `config/performance.yaml`
- Clear cache: `rm -rf cache/*`
- Reduce pattern loading
### Import Errors
```bash
# Verify shared modules path
ls Framework-Hooks/hooks/shared/
# Should show all 9 .py files
# Check permissions
chmod +x Framework-Hooks/hooks/*.py
```
### YAML Errors
```bash
# Validate YAML files
python3 -c "
import yaml, glob
for f in glob.glob('config/*.yaml'):
    yaml.safe_load(open(f))
    print(f'{f}: OK')
"
```
### No Log Output
```yaml
# Enable in config/logging.yaml
logging:
enabled: true
level: "INFO"
hook_logging:
log_lifecycle: true
```
## Configuration Shortcuts
### Development Mode
```yaml
# config/logging.yaml
logging:
enabled: true
level: "DEBUG"
development:
debug_mode: true
verbose_errors: true
```
### Production Mode
```yaml
# config/logging.yaml
logging:
enabled: false
level: "ERROR"
```
### Reset to Defaults
```bash
# Backup first, then:
git checkout config/*.yaml
rm -rf cache/
rm -rf patterns/learned/
```
## File Locations
- **Hook scripts**: `hooks/*.py`
- **Configuration**: `config/*.yaml`
- **Logs**: `cache/logs/hooks-YYYY-MM-DD.log`
- **Learning data**: `cache/learning/`
- **Pattern cache**: `cache/patterns/`
- **Installation config**: `settings.json`, `superclaude-config.json`
## Debug Commands
```bash
# Full system validation
python3 hooks/shared/validate_system.py --full-check
# Check configuration integrity
python3 hooks/shared/validate_system.py --check-config
# Test pattern loading
python3 hooks/shared/pattern_detection.py --test-patterns
# Verify learning engine
python3 hooks/shared/learning_engine.py --test-learning
```
## System Behavior
### Default State
- All hooks **enabled** via settings.json
- Logging **disabled** for performance
- Conservative timeouts (10-15 seconds)
- Selective compression preserves user content
- Learning engine adapts to usage patterns
### What Happens Automatically
1. **Project detection** - Identifies project type and loads patterns
2. **Mode activation** - Enables relevant SuperClaude modes
3. **MCP coordination** - Routes to appropriate servers
4. **Performance optimization** - Applies compression and caching
5. **Learning adaptation** - Records and learns from usage
Framework-Hooks operates transparently without requiring manual intervention.

# Framework-Hooks
Framework-Hooks is a hook system for Claude Code that provides intelligent session management and context adaptation. It runs Python hooks at different points in Claude Code's lifecycle to optimize performance and adapt behavior based on usage patterns.
## What it does
The system runs 7 hooks that execute at specific lifecycle events:
- `session_start.py` - Initializes session context and activates appropriate features
- `pre_tool_use.py` - Prepares for tool execution and applies optimizations
- `post_tool_use.py` - Records tool usage patterns and updates learning data
- `pre_compact.py` - Applies compression before context compaction
- `notification.py` - Handles system notifications and adaptive responses
- `stop.py` - Performs cleanup and saves session data at shutdown
- `subagent_stop.py` - Manages subagent cleanup and coordination
## Components
### Hooks
Each hook is a Python script that runs at a specific lifecycle point. Hooks share common functionality through shared modules and can access configuration through YAML files.
### Shared Modules
- `framework_logic.py` - Core logic for SuperClaude framework integration
- `compression_engine.py` - Context compression and optimization
- `learning_engine.py` - Adaptive learning from usage patterns
- `mcp_intelligence.py` - MCP server coordination and routing
- `pattern_detection.py` - Project and usage pattern detection
- `logger.py` - Structured logging for hook operations
- `yaml_loader.py` - Configuration loading utilities
- `validate_system.py` - System validation and health checks
### Configuration
12 YAML configuration files control different aspects:
- `session.yaml` - Session lifecycle settings
- `performance.yaml` - Performance targets and limits
- `compression.yaml` - Context compression settings
- `modes.yaml` - Mode activation thresholds
- `mcp_orchestration.yaml` - MCP server coordination
- `orchestrator.yaml` - General orchestration settings
- `logging.yaml` - Logging configuration
- `validation.yaml` - System validation rules
- Others for specialized features
### Patterns
3-tier pattern system for adaptability:
- `minimal/` - Basic project detection patterns (3-5KB each)
- `dynamic/` - Feature-specific patterns loaded on demand (8-12KB each)
- `learned/` - User-specific adaptations that evolve with usage (10-20KB each)
### Cache
The system maintains JSON cache files for:
- User preferences and adaptations
- Project-specific patterns
- Learning records and effectiveness data
- Session state and metrics
## Installation
1. Ensure Python 3.8+ is available
2. Place Framework-Hooks directory in your SuperClaude installation
3. Claude Code will automatically discover and use the hooks
## Usage
Framework-Hooks runs automatically when Claude Code starts a session. You don't need to invoke it manually.
The system will:
1. Detect your project type and load appropriate patterns
2. Activate relevant modes and MCP servers based on context
3. Apply learned preferences from previous sessions
4. Optimize performance based on resource constraints
5. Learn from your usage patterns to improve future sessions
## How it works
### Session Flow
```
Session Start → Load Config → Detect Project → Apply Patterns →
Activate Features → Work Session → Record Learning → Save State
```
### Hook Coordination
Hooks coordinate through shared state and configuration. Earlier hooks prepare context for later ones, and the system maintains consistency across the entire session lifecycle.
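Each hook follows a stdin/stdout JSON contract: Claude Code writes the event payload to the hook's stdin and reads its JSON response from stdout. A minimal skeleton — the response shape and the `hook_event_name` field are illustrative, not the documented schema:

```python
import json
import sys
from typing import Any, Dict

def handle(payload: Dict[str, Any]) -> Dict[str, Any]:
    """Build the hook's JSON response from the incoming event payload."""
    return {
        "status": "ok",
        # "hook_event_name" is an illustrative field name
        "event": payload.get("hook_event_name", "unknown"),
    }

if __name__ == "__main__":
    try:
        payload = json.load(sys.stdin)
    except json.JSONDecodeError:
        payload = {}  # tolerate empty or malformed input
    json.dump(handle(payload), sys.stdout)
```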
### Learning System
The system tracks what works well for your specific projects and usage patterns. Over time, it adapts thresholds, preferences, and feature activation to match your workflow.
### Performance Targets
- Session initialization: <50ms
- Pattern loading: <100ms per pattern
- Hook execution: <30ms per hook
- Cache operations: <10ms
## Architecture
Framework-Hooks operates as a lightweight layer between Claude Code and the SuperClaude framework. It provides just-in-time intelligence loading instead of loading comprehensive framework documentation upfront.
The hook system allows Claude Code sessions to:
- Start faster by loading only necessary context
- Adapt to project-specific needs automatically
- Learn from usage patterns over time
- Coordinate MCP servers intelligently
- Apply compression and optimization transparently
## Development
### Adding Hooks
Create new hooks by:
1. Adding a Python file in `hooks/` directory
2. Following existing hook patterns for initialization
3. Using shared modules for common functionality
4. Adding corresponding configuration if needed
### Modifying Configuration
YAML files in `config/` control hook behavior. Changes take effect on next session start.
### Pattern Development
Add new patterns in appropriate `patterns/` subdirectories following existing YAML structure.
## Troubleshooting
Logs are written to `cache/logs/` directory. Check these files if hooks aren't behaving as expected.
The system includes validation utilities in `validate_system.py` for checking configuration and installation integrity.

# Troubleshooting Guide
Common issues and solutions for Framework-Hooks based on actual implementation patterns.
## Installation Issues
### Python Import Errors
**Problem**: Hook fails with `ModuleNotFoundError` for shared modules
**Cause**: Python path not finding shared modules in `hooks/shared/`
**Solution**:
```python
import os
import sys

# Each hook script includes this path setup:
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "shared"))
```
Verify the `shared/` directory exists and contains all 9 modules:
- `framework_logic.py`
- `compression_engine.py`
- `learning_engine.py`
- `mcp_intelligence.py`
- `pattern_detection.py`
- `intelligence_engine.py`
- `logger.py`
- `yaml_loader.py`
- `validate_system.py`
### Hook Execution Permissions
**Problem**: Hooks fail to execute with permission errors
**Solution**:
```bash
cd Framework-Hooks/hooks
chmod +x *.py
chmod +x shared/*.py
```
The following hooks need execute permissions:
- `pre_compact.py`
- `stop.py`
- `subagent_stop.py`
### YAML Configuration Errors
**Problem**: Hook fails with YAML parsing errors
**Cause**: Invalid YAML syntax in config files
**Solution**:
```bash
# Test YAML validity
python3 -c "import yaml; yaml.safe_load(open('config/session.yaml'))"
```
Check these configuration files for syntax issues:
- `config/logging.yaml`
- `config/session.yaml`
- `config/performance.yaml`
- `config/compression.yaml`
- All other `.yaml` files in `config/`
## Performance Issues
### Hook Timeout Errors
**Problem**: Hooks timing out (default 10-15 seconds from settings.json)
**Cause**: Hook execution running far past its performance target:
- session_start.py: 50ms target
- pre_tool_use.py: 200ms target
- Other hooks: 100-150ms targets
**Diagnosis**:
```yaml
# Enable timing logs in config/logging.yaml
logging:
enabled: true
level: "INFO"
hook_logging:
log_timing: true
```
**Solutions**:
1. **Reduce pattern loading**: Remove unnecessary patterns from `patterns/` directories
2. **Check disk I/O**: Ensure `cache/` directory is writable and has space
3. **Disable verbose features**: Set `logging.level: "ERROR"`
4. **Check Python performance**: Use faster Python interpreter if available
### Memory Usage Issues
**Problem**: High memory usage during hook execution
**Cause**: Large pattern files or cache accumulation
**Solutions**:
1. **Clear cache**: Remove files from `cache/` directory
2. **Reduce pattern size**: Check for oversized files in `patterns/learned/`
3. **Limit learning data**: Review learning_engine.py cache size limits
### Pattern Loading Slow
**Problem**: Session start delays due to pattern loading
**Cause**: Pattern system loading large files from:
- `patterns/minimal/`: Should be 3-5KB each
- `patterns/dynamic/`: Should be 8-12KB each
- `patterns/learned/`: Should be 10-20KB each
**Solutions**:
1. **Check pattern sizes**: Identify oversized pattern files
2. **Remove unused patterns**: Delete patterns not relevant to your projects
3. **Reset learned patterns**: Clear `patterns/learned/` to start fresh
## Configuration Issues
### Logging Not Working
**Problem**: No log output despite enabling logging
**Cause**: Default logging configuration in `config/logging.yaml`:
```yaml
logging:
enabled: false # Default is disabled
level: "ERROR" # Only shows errors by default
```
**Solution**: Enable logging properly:
```yaml
logging:
enabled: true
level: "INFO" # or "DEBUG" for verbose output
hook_logging:
log_lifecycle: true
log_decisions: true
log_timing: true
```
### Cache Directory Issues
**Problem**: Hooks fail with cache write errors
**Cause**: Missing or permission issues with `cache/` directory
**Solution**:
```bash
mkdir -p Framework-Hooks/cache/logs
chmod 755 Framework-Hooks/cache
chmod 755 Framework-Hooks/cache/logs
```
Required cache structure:
```
cache/
├── logs/ # Log files (30-day retention)
├── patterns/ # Cached pattern data
├── learning/ # Learning engine data
└── session/ # Session state
```
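Equivalently, the full layout can be created from Python. `ensure_cache_dirs` is a hypothetical convenience helper; the default root path is an assumption based on the structure above:

```python
from pathlib import Path

CACHE_SUBDIRS = ["logs", "patterns", "learning", "session"]

def ensure_cache_dirs(root="Framework-Hooks/cache"):
    """Create the cache layout the hooks expect; returns the created paths."""
    created = []
    for sub in CACHE_SUBDIRS:
        d = Path(root) / sub
        d.mkdir(parents=True, exist_ok=True)
        created.append(d)
    return created
```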
### MCP Intelligence Failures
**Problem**: MCP server coordination not working
**Cause**: `mcp_intelligence.py` configuration issues
**Diagnosis**: Check `config/mcp_orchestration.yaml` for valid server configurations
**Solution**: Verify MCP server availability and configuration in:
- Context7, Sequential, Magic, Playwright, Morphllm, Serena
## Runtime Issues
### Hook Script Failures
**Problem**: Individual hook scripts crash or fail
**Diagnosis Steps**:
1. **Test hook directly**:
```bash
cd Framework-Hooks/hooks
echo '{}' | python3 session_start.py
```
(The hooks read a JSON payload from stdin, so pipe at least an empty object to avoid the script blocking on input.)
2. **Check imports**: Verify all shared modules import correctly:
```python
from framework_logic import FrameworkLogic
from pattern_detection import PatternDetector
from mcp_intelligence import MCPIntelligence
from compression_engine import CompressionEngine
from learning_engine import LearningEngine
```
3. **Check YAML loading**:
```python
from yaml_loader import config_loader
config = config_loader.load_config('session')
```
### Learning System Issues
**Problem**: Learning engine not adapting to usage patterns
**Cause**: Learning data not persisting or invalid
**Solutions**:
1. **Check cache permissions**: Ensure `cache/learning/` is writable
2. **Reset learning data**: Remove `cache/learning/*` files to start fresh
3. **Verify pattern detection**: Check that `pattern_detection.py` identifies your project type
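Resetting learning data by hand is error-prone if the directory is left missing afterwards. A hedged sketch — the helper name is illustrative, and the `cache/learning/` layout follows the structure documented above:

```python
import shutil
from pathlib import Path

def reset_learning_data(cache_root):
    """Clear cache/learning/ so the learning engine rebuilds from scratch.

    The directory is recreated so hooks that expect it to exist
    (and be writable) don't fail on the next run.
    """
    learning_dir = Path(cache_root) / "learning"
    if learning_dir.exists():
        shutil.rmtree(learning_dir)
    learning_dir.mkdir(parents=True, exist_ok=True)
    return learning_dir
```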
### Validation Failures
**Problem**: System validation reports errors
**Run validation manually**:
```bash
cd Framework-Hooks/hooks/shared
python3 validate_system.py --full-check
```
Common validation issues:
- Missing configuration files
- Invalid YAML syntax
- Permission problems
- Missing directories
## Debug Mode
### Enable Comprehensive Debugging
Edit `config/logging.yaml`:
```yaml
logging:
  enabled: true
  level: "DEBUG"
  development:
    verbose_errors: true
    include_stack_traces: true
    debug_mode: true
```
This provides detailed information about:
- Hook execution flow
- Pattern loading decisions
- MCP server coordination
- Learning system adaptations
- Performance timing data
### Manual Hook Testing
Test individual hooks outside Claude Code:
```bash
# Test session start (hooks read a JSON payload from stdin)
echo '{}' | python3 hooks/session_start.py

# Test tool use hooks
echo '{}' | python3 hooks/pre_tool_use.py
echo '{}' | python3 hooks/post_tool_use.py

# Test cleanup hooks
echo '{}' | python3 hooks/stop.py
```
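Because each hook reads a JSON payload from stdin and writes JSON to stdout, testing is easiest through a small driver. The `run_hook` helper below is illustrative; the 15-second default timeout mirrors the settings.json values mentioned above:

```python
import json
import subprocess
import sys

def run_hook(script_path, payload, timeout=15):
    """Feed a JSON payload to a hook script on stdin and parse its JSON reply."""
    proc = subprocess.run(
        [sys.executable, script_path],
        input=json.dumps(payload),
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    if proc.returncode != 0:
        raise RuntimeError(f"hook failed: {proc.stderr.strip()}")
    return json.loads(proc.stdout)
```

For example, `run_hook("hooks/session_start.py", {})` returns the hook's response dict, or raises with the script's stderr on failure.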
## System Health Checks
### Automated Validation
Run the comprehensive system check:
```bash
cd Framework-Hooks/hooks/shared
python3 validate_system.py --health-check
```
This checks:
- File permissions and structure
- YAML configuration validity
- Python module imports
- Cache directory accessibility
- Pattern file integrity
### Performance Monitoring
Enable performance logging:
```yaml
# In config/performance.yaml
performance_monitoring:
  enabled: true
  track_execution_time: true
  alert_on_slow_hooks: true
  target_times:
    session_start: 50   # ms
    pre_tool_use: 200   # ms
    post_tool_use: 100  # ms
    other_hooks: 100    # ms
```
## Common Error Messages
### "Hook timeout exceeded"
- **Cause**: Hook execution taking longer than 10-15 seconds (settings.json timeout)
- **Solution**: Check performance issues section above
### "YAML load failed"
- **Cause**: Invalid YAML syntax in configuration files
- **Solution**: Validate YAML files using Python or online validator
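A quick way to validate every config file at once (assumes PyYAML is installed, which the hooks' `yaml_loader` already depends on; `validate_yaml_files` is an illustrative helper):

```python
from pathlib import Path

import yaml  # PyYAML

def validate_yaml_files(config_dir):
    """Return {path: error} for every config file that fails to parse."""
    errors = {}
    for f in sorted(Path(config_dir).glob("*.yaml")):
        try:
            yaml.safe_load(f.read_text())
        except yaml.YAMLError as e:
            errors[str(f)] = str(e)
    return errors
```

An empty dict from `validate_yaml_files("config")` means all files parse cleanly.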
### "Pattern detection failed"
- **Cause**: Issues with pattern files or pattern_detection.py
- **Solution**: Check pattern file sizes and YAML validity
### "Learning engine initialization failed"
- **Cause**: Cache directory issues or learning data corruption
- **Solution**: Clear cache and reset learning data
### "MCP intelligence routing failed"
- **Cause**: MCP server configuration or availability issues
- **Solution**: Check MCP server status and configuration
## Getting Help
### Log Analysis
Logs are written to `cache/logs/` with daily rotation (30-day retention). Check recent logs for detailed error information.
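To see which log files are past the 30-day retention window (and therefore safe to prune), a small sketch; `stale_logs` is illustrative, and the `*.log` naming is an assumption:

```python
import time
from pathlib import Path

RETENTION_DAYS = 30  # matches the documented 30-day rotation

def stale_logs(log_dir, now=None):
    """Return log files older than the retention window."""
    now = now or time.time()
    cutoff = now - RETENTION_DAYS * 86400
    return [f for f in Path(log_dir).glob("*.log") if f.stat().st_mtime < cutoff]
```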
### Clean Installation
To reset to clean state:
```bash
# Backup any custom patterns first
rm -rf cache/
rm -rf patterns/learned/
# Restart Claude Code session
```
### Configuration Reset
To reset all configurations to defaults:
```bash
git checkout config/*.yaml
# Or restore from backup if modified
```
The system is designed to be resilient with conservative defaults. Most issues resolve with basic file permission fixes and configuration validation.


@@ -1,604 +0,0 @@
#!/usr/bin/env python3
"""
SuperClaude-Lite Notification Hook
Implements just-in-time MCP documentation loading and pattern updates.
Performance target: <100ms execution time.
This hook runs when Claude Code sends notifications and provides:
- Just-in-time loading of MCP server documentation
- Dynamic pattern updates based on operation context
- Framework intelligence updates and adaptations
- Real-time learning from notification patterns
- Performance optimization through intelligent caching
"""
import sys
import json
import time
import os
from pathlib import Path
from typing import Dict, Any, List, Optional
# Add shared modules to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "shared"))
from framework_logic import FrameworkLogic
from pattern_detection import PatternDetector
from mcp_intelligence import MCPIntelligence
from compression_engine import CompressionEngine
from learning_engine import LearningEngine, LearningType, AdaptationScope
from yaml_loader import config_loader
from logger import log_hook_start, log_hook_end, log_decision, log_error
class NotificationHook:
"""
Notification hook implementing just-in-time intelligence loading.
Responsibilities:
- Process Claude Code notifications for intelligence opportunities
- Load relevant MCP documentation on-demand
- Update pattern detection based on real-time context
- Provide framework intelligence updates
- Cache and optimize frequently accessed information
- Learn from notification patterns for future optimization
"""
def __init__(self):
start_time = time.time()
# Initialize core components
self.framework_logic = FrameworkLogic()
self.pattern_detector = PatternDetector()
self.mcp_intelligence = MCPIntelligence()
self.compression_engine = CompressionEngine()
# Initialize learning engine with installation directory cache
cache_dir = Path(os.path.expanduser("~/.claude/cache"))
cache_dir.mkdir(parents=True, exist_ok=True)
self.learning_engine = LearningEngine(cache_dir)
# Load notification configuration
self.notification_config = config_loader.get_section('session', 'notifications', {})
# Initialize notification cache
self.notification_cache = {}
self.pattern_cache = {}
# Load hook-specific configuration from SuperClaude config
self.hook_config = config_loader.get_hook_config('notification')
# Performance tracking using configuration
self.initialization_time = (time.time() - start_time) * 1000
self.performance_target_ms = config_loader.get_hook_config('notification', 'performance_target_ms', 100)
def process_notification(self, notification: dict) -> dict:
"""
Process notification with just-in-time intelligence loading.
Args:
notification: Notification from Claude Code
Returns:
Enhanced notification response with intelligence updates
"""
start_time = time.time()
# Log hook start
log_hook_start("notification", {
"notification_type": notification.get('type', 'unknown'),
"has_context": bool(notification.get('context')),
"priority": notification.get('priority', 'normal')
})
try:
# Extract notification context
context = self._extract_notification_context(notification)
# Analyze notification for intelligence opportunities
intelligence_analysis = self._analyze_intelligence_opportunities(context)
# Determine intelligence needs
intelligence_needs = self._analyze_intelligence_needs(context)
# Log intelligence loading decision
if intelligence_needs.get('mcp_docs_needed'):
log_decision(
"notification",
"mcp_docs_loading",
",".join(intelligence_needs.get('mcp_servers', [])),
f"Documentation needed for: {intelligence_needs.get('reason', 'notification context')}"
)
# Load just-in-time documentation if needed
documentation_updates = self._load_jit_documentation(context, intelligence_analysis)
# Update patterns if needed
pattern_updates = self._update_patterns_if_needed(context, intelligence_needs)
# Log pattern update decision
if pattern_updates.get('patterns_updated'):
log_decision(
"notification",
"pattern_update",
pattern_updates.get('pattern_type', 'unknown'),
f"Updated {pattern_updates.get('update_count', 0)} patterns"
)
# Generate framework intelligence updates
framework_updates = self._generate_framework_updates(context, intelligence_analysis)
# Record learning events
self._record_notification_learning(context, intelligence_analysis)
# Create intelligence response
intelligence_response = self._create_intelligence_response(
context, documentation_updates, pattern_updates, framework_updates
)
# Performance validation
execution_time = (time.time() - start_time) * 1000
intelligence_response['performance_metrics'] = {
'processing_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'cache_hit_rate': self._calculate_cache_hit_rate()
}
# Log successful completion
log_hook_end(
"notification",
int(execution_time),
True,
{
"notification_type": context['notification_type'],
"intelligence_loaded": bool(intelligence_needs.get('mcp_docs_needed')),
"patterns_updated": pattern_updates.get('patterns_updated', False)
}
)
return intelligence_response
except Exception as e:
# Log error
execution_time = (time.time() - start_time) * 1000
log_error(
"notification",
str(e),
{"notification_type": notification.get('type', 'unknown')}
)
log_hook_end("notification", int(execution_time), False)
# Graceful fallback on error
return self._create_fallback_response(notification, str(e))
def _extract_notification_context(self, notification: dict) -> dict:
"""Extract and enrich notification context."""
context = {
'notification_type': notification.get('type', 'unknown'),
'notification_data': notification.get('data', {}),
'session_context': notification.get('session_context', {}),
'user_context': notification.get('user_context', {}),
'operation_context': notification.get('operation_context', {}),
'trigger_event': notification.get('trigger', ''),
'timestamp': time.time()
}
# Analyze notification importance
context['priority'] = self._assess_notification_priority(context)
# Extract operation characteristics
context.update(self._extract_operation_characteristics(context))
return context
def _assess_notification_priority(self, context: dict) -> str:
"""Assess notification priority for processing."""
notification_type = context['notification_type']
# High priority notifications
if notification_type in ['error', 'failure', 'security_alert']:
return 'high'
elif notification_type in ['performance_issue', 'validation_failure']:
return 'high'
# Medium priority notifications
elif notification_type in ['tool_request', 'context_change', 'resource_constraint']:
return 'medium'
# Low priority notifications
elif notification_type in ['info', 'debug', 'status_update']:
return 'low'
return 'medium'
def _extract_operation_characteristics(self, context: dict) -> dict:
"""Extract operation characteristics from notification."""
operation_context = context.get('operation_context', {})
return {
'operation_type': operation_context.get('type', 'unknown'),
'complexity_indicators': operation_context.get('complexity', 0.0),
'tool_requests': operation_context.get('tools_requested', []),
'mcp_server_hints': operation_context.get('mcp_hints', []),
'performance_requirements': operation_context.get('performance', {}),
'intelligence_requirements': operation_context.get('intelligence_needed', False)
}
def _analyze_intelligence_opportunities(self, context: dict) -> dict:
"""Analyze notification for intelligence loading opportunities."""
analysis = {
'documentation_needed': [],
'pattern_updates_needed': [],
'framework_updates_needed': [],
'learning_opportunities': [],
'optimization_opportunities': []
}
notification_type = context['notification_type']
operation_type = context.get('operation_type', 'unknown')
# Documentation loading opportunities
if notification_type == 'tool_request':
requested_tools = context.get('tool_requests', [])
for tool in requested_tools:
if tool in ['ui_component', 'component_generation']:
analysis['documentation_needed'].append('magic_patterns')
elif tool in ['library_integration', 'framework_usage']:
analysis['documentation_needed'].append('context7_patterns')
elif tool in ['complex_analysis', 'debugging']:
analysis['documentation_needed'].append('sequential_patterns')
elif tool in ['testing', 'validation']:
analysis['documentation_needed'].append('playwright_patterns')
# Pattern update opportunities
if notification_type in ['context_change', 'operation_start']:
analysis['pattern_updates_needed'].extend([
'operation_patterns',
'context_patterns'
])
# Framework update opportunities
if notification_type in ['performance_issue', 'optimization_request']:
analysis['framework_updates_needed'].extend([
'performance_optimization',
'resource_management'
])
# Learning opportunities
if notification_type in ['error', 'failure']:
analysis['learning_opportunities'].append('error_pattern_learning')
elif notification_type in ['success', 'completion']:
analysis['learning_opportunities'].append('success_pattern_learning')
# Optimization opportunities
if context.get('performance_requirements'):
analysis['optimization_opportunities'].append('performance_optimization')
return analysis
def _analyze_intelligence_needs(self, context: dict) -> dict:
"""Determine intelligence needs based on context."""
needs = {
'mcp_docs_needed': False,
'mcp_servers': [],
'reason': ''
}
# Check for MCP server hints
mcp_hints = context.get('mcp_server_hints', [])
if mcp_hints:
needs['mcp_docs_needed'] = True
needs['mcp_servers'] = mcp_hints
needs['reason'] = 'MCP server hints'
# Check for tool requests
tool_requests = context.get('tool_requests', [])
if tool_requests:
needs['mcp_docs_needed'] = True
needs['mcp_servers'] = [tool for tool in tool_requests if tool in ['ui_component', 'component_generation', 'library_integration', 'framework_usage', 'complex_analysis', 'debugging', 'testing', 'validation']]
needs['reason'] = 'Tool requests'
# Check for performance requirements
performance_requirements = context.get('performance_requirements', {})
if performance_requirements:
needs['mcp_docs_needed'] = True
needs['mcp_servers'] = ['performance_optimization', 'resource_management']
needs['reason'] = 'Performance requirements'
return needs
def _load_jit_documentation(self, context: dict, intelligence_analysis: dict) -> dict:
"""Load just-in-time documentation based on analysis."""
documentation_updates = {
'loaded_patterns': [],
'cached_content': {},
'documentation_summaries': {}
}
needed_docs = intelligence_analysis.get('documentation_needed', [])
for doc_type in needed_docs:
# Check cache first
if doc_type in self.notification_cache:
documentation_updates['cached_content'][doc_type] = self.notification_cache[doc_type]
documentation_updates['loaded_patterns'].append(f"{doc_type}_cached")
continue
# Load documentation on-demand
doc_content = self._load_documentation_content(doc_type, context)
if doc_content:
# Cache for future use
self.notification_cache[doc_type] = doc_content
documentation_updates['cached_content'][doc_type] = doc_content
documentation_updates['loaded_patterns'].append(f"{doc_type}_loaded")
# Create summary for quick access
summary = self._create_documentation_summary(doc_content)
documentation_updates['documentation_summaries'][doc_type] = summary
return documentation_updates
def _load_documentation_content(self, doc_type: str, context: dict) -> Optional[dict]:
"""Load specific documentation content."""
# Simulated documentation loading - real implementation would fetch from MCP servers
documentation_patterns = {
'magic_patterns': {
'ui_components': ['button', 'form', 'modal', 'card'],
'design_systems': ['theme', 'tokens', 'spacing'],
'accessibility': ['aria-labels', 'keyboard-navigation', 'screen-readers']
},
'context7_patterns': {
'library_integration': ['import_patterns', 'configuration', 'best_practices'],
'framework_usage': ['react_patterns', 'vue_patterns', 'angular_patterns'],
'documentation_access': ['api_docs', 'examples', 'tutorials']
},
'sequential_patterns': {
'analysis_workflows': ['step_by_step', 'hypothesis_testing', 'validation'],
'debugging_strategies': ['systematic_approach', 'root_cause', 'verification'],
'complex_reasoning': ['decomposition', 'synthesis', 'optimization']
},
'playwright_patterns': {
'testing_strategies': ['e2e_tests', 'unit_tests', 'integration_tests'],
'automation_patterns': ['page_objects', 'test_data', 'assertions'],
'performance_testing': ['load_testing', 'stress_testing', 'monitoring']
}
}
return documentation_patterns.get(doc_type, {})
def _create_documentation_summary(self, doc_content: dict) -> dict:
"""Create summary of documentation content for quick access."""
summary = {
'categories': list(doc_content.keys()),
'total_patterns': sum(len(patterns) if isinstance(patterns, list) else 1
for patterns in doc_content.values()),
'quick_access_items': []
}
# Extract most commonly used patterns
for category, patterns in doc_content.items():
if isinstance(patterns, list) and patterns:
summary['quick_access_items'].append({
'category': category,
'top_pattern': patterns[0],
'pattern_count': len(patterns)
})
return summary
def _update_patterns_if_needed(self, context: dict, intelligence_needs: dict) -> dict:
"""Update pattern detection based on context."""
pattern_updates = {
'updated_patterns': [],
'new_patterns_detected': [],
'pattern_effectiveness': {}
}
if intelligence_needs.get('mcp_docs_needed'):
# Update operation-specific patterns
operation_type = context.get('operation_type', 'unknown')
self._update_operation_patterns(operation_type, pattern_updates)
# Update context-specific patterns
session_context = context.get('session_context', {})
self._update_context_patterns(session_context, pattern_updates)
return pattern_updates
def _update_operation_patterns(self, operation_type: str, pattern_updates: dict):
"""Update operation-specific patterns."""
if operation_type in ['build', 'implement']:
pattern_updates['updated_patterns'].append('build_operation_patterns')
# Update pattern detection for build operations
elif operation_type in ['analyze', 'debug']:
pattern_updates['updated_patterns'].append('analysis_operation_patterns')
# Update pattern detection for analysis operations
elif operation_type in ['test', 'validate']:
pattern_updates['updated_patterns'].append('testing_operation_patterns')
# Update pattern detection for testing operations
def _update_context_patterns(self, session_context: dict, pattern_updates: dict):
"""Update context-specific patterns."""
if session_context.get('project_type') == 'frontend':
pattern_updates['updated_patterns'].append('frontend_context_patterns')
elif session_context.get('project_type') == 'backend':
pattern_updates['updated_patterns'].append('backend_context_patterns')
elif session_context.get('project_type') == 'fullstack':
pattern_updates['updated_patterns'].append('fullstack_context_patterns')
def _generate_framework_updates(self, context: dict, intelligence_analysis: dict) -> dict:
"""Generate framework intelligence updates."""
framework_updates = {
'configuration_updates': {},
'optimization_recommendations': [],
'intelligence_enhancements': []
}
needed_updates = intelligence_analysis.get('framework_updates_needed', [])
for update_type in needed_updates:
if update_type == 'performance_optimization':
framework_updates['optimization_recommendations'].extend([
'Enable parallel processing for multi-file operations',
'Activate compression for resource-constrained scenarios',
'Use intelligent caching for repeated operations'
])
elif update_type == 'resource_management':
resource_usage = context.get('session_context', {}).get('resource_usage', 0)
if resource_usage > 75:
framework_updates['configuration_updates']['compression'] = 'enable_aggressive'
framework_updates['optimization_recommendations'].append(
'Resource usage high - enabling aggressive compression'
)
return framework_updates
def _record_notification_learning(self, context: dict, intelligence_analysis: dict):
"""Record notification learning for optimization."""
learning_opportunities = intelligence_analysis.get('learning_opportunities', [])
for opportunity in learning_opportunities:
if opportunity == 'error_pattern_learning':
self.learning_engine.record_learning_event(
LearningType.ERROR_RECOVERY,
AdaptationScope.USER,
context,
{
'notification_type': context['notification_type'],
'error_context': context.get('notification_data', {}),
'intelligence_loaded': len(intelligence_analysis.get('documentation_needed', []))
},
0.7, # Learning value from errors
0.8,
{'hook': 'notification', 'learning_type': 'error'}
)
elif opportunity == 'success_pattern_learning':
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
{
'notification_type': context['notification_type'],
'success_context': context.get('notification_data', {}),
'patterns_updated': len(intelligence_analysis.get('pattern_updates_needed', []))
},
0.9, # High learning value from success
0.9,
{'hook': 'notification', 'learning_type': 'success'}
)
def _calculate_cache_hit_rate(self) -> float:
"""Calculate cache hit ratio for performance metrics."""
if not hasattr(self, '_cache_requests'):
self._cache_requests = 0
self._cache_hits = 0
if self._cache_requests == 0:
return 0.0
return self._cache_hits / self._cache_requests
def _create_intelligence_response(self, context: dict, documentation_updates: dict,
pattern_updates: dict, framework_updates: dict) -> dict:
"""Create comprehensive intelligence response."""
return {
'notification_type': context['notification_type'],
'priority': context['priority'],
'timestamp': context['timestamp'],
'intelligence_updates': {
'documentation_loaded': len(documentation_updates.get('loaded_patterns', [])) > 0,
'patterns_updated': len(pattern_updates.get('updated_patterns', [])) > 0,
'framework_enhanced': len(framework_updates.get('optimization_recommendations', [])) > 0
},
'documentation': {
'patterns_loaded': documentation_updates.get('loaded_patterns', []),
'summaries': documentation_updates.get('documentation_summaries', {}),
'cache_status': 'active'
},
'patterns': {
'updated_patterns': pattern_updates.get('updated_patterns', []),
'new_patterns': pattern_updates.get('new_patterns_detected', []),
'effectiveness': pattern_updates.get('pattern_effectiveness', {})
},
'framework': {
'configuration_updates': framework_updates.get('configuration_updates', {}),
'optimization_recommendations': framework_updates.get('optimization_recommendations', []),
'intelligence_enhancements': framework_updates.get('intelligence_enhancements', [])
},
'optimization': {
'just_in_time_loading': True,
'intelligent_caching': True,
'performance_optimized': True,
'learning_enabled': True
},
'metadata': {
'hook_version': 'notification_1.0',
'processing_timestamp': time.time(),
'intelligence_level': 'adaptive'
}
}
def _create_fallback_response(self, notification: dict, error: str) -> dict:
"""Create fallback response on error."""
return {
'notification_type': notification.get('type', 'unknown'),
'priority': 'low',
'error': error,
'fallback_mode': True,
'intelligence_updates': {
'documentation_loaded': False,
'patterns_updated': False,
'framework_enhanced': False
},
'documentation': {
'patterns_loaded': [],
'summaries': {},
'cache_status': 'error'
},
'performance_metrics': {
'processing_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
def main():
"""Main hook execution function."""
try:
# Read notification from stdin
notification = json.loads(sys.stdin.read())
# Initialize and run hook
hook = NotificationHook()
result = hook.process_notification(notification)
# Output result as JSON
print(json.dumps(result, indent=2))
except Exception as e:
# Output error as JSON
error_result = {
'intelligence_updates_enabled': False,
'error': str(e),
'fallback_mode': True
}
print(json.dumps(error_result, indent=2))
sys.exit(1)
if __name__ == "__main__":
main()


@@ -1,794 +0,0 @@
#!/usr/bin/env python3
"""
SuperClaude-Lite Post-Tool-Use Hook
Implements RULES.md + PRINCIPLES.md validation and learning system.
Performance target: <100ms execution time.
This hook runs after every tool usage and provides:
- Quality validation against SuperClaude principles
- Effectiveness measurement and learning
- Error pattern detection and prevention
- Performance optimization feedback
- Adaptation and improvement recommendations
"""
import sys
import json
import time
import os
from pathlib import Path
from typing import Dict, Any, List, Optional, Tuple
# Add shared modules to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "shared"))
from framework_logic import FrameworkLogic, ValidationResult, OperationContext, OperationType, RiskLevel
from pattern_detection import PatternDetector
from mcp_intelligence import MCPIntelligence
from compression_engine import CompressionEngine
from learning_engine import LearningEngine, LearningType, AdaptationScope
from yaml_loader import config_loader
from logger import log_hook_start, log_hook_end, log_decision, log_error
class PostToolUseHook:
"""
Post-tool-use hook implementing SuperClaude validation and learning.
Responsibilities:
- Validate tool execution against RULES.md and PRINCIPLES.md
- Measure operation effectiveness and quality
- Learn from successful and failed patterns
- Detect error patterns and suggest improvements
- Record performance metrics for optimization
- Generate adaptation recommendations
"""
def __init__(self):
start_time = time.time()
# Initialize core components
self.framework_logic = FrameworkLogic()
self.pattern_detector = PatternDetector()
self.mcp_intelligence = MCPIntelligence()
self.compression_engine = CompressionEngine()
# Initialize learning engine with installation directory cache
cache_dir = Path(os.path.expanduser("~/.claude/cache"))
cache_dir.mkdir(parents=True, exist_ok=True)
self.learning_engine = LearningEngine(cache_dir)
# Load hook-specific configuration from SuperClaude config
self.hook_config = config_loader.get_hook_config('post_tool_use')
# Load validation configuration (from YAML if exists, otherwise use hook config)
try:
self.validation_config = config_loader.load_config('validation')
except FileNotFoundError:
# Fall back to hook configuration if YAML file not found
self.validation_config = self.hook_config.get('configuration', {})
# Load quality standards (from YAML if exists, otherwise use hook config)
try:
self.quality_standards = config_loader.load_config('performance')
except FileNotFoundError:
# Fall back to performance targets from global configuration
self.quality_standards = config_loader.get_performance_targets()
# Performance tracking using configuration
self.initialization_time = (time.time() - start_time) * 1000
self.performance_target_ms = config_loader.get_hook_config('post_tool_use', 'performance_target_ms', 100)
def process_tool_result(self, tool_result: dict) -> dict:
"""
Process tool execution result with validation and learning.
Args:
tool_result: Tool execution result from Claude Code
Returns:
Enhanced result with SuperClaude validation and insights
"""
start_time = time.time()
# Log hook start
log_hook_start("post_tool_use", {
"tool_name": tool_result.get('tool_name', 'unknown'),
"success": tool_result.get('success', False),
"has_error": bool(tool_result.get('error'))
})
try:
# Extract execution context
context = self._extract_execution_context(tool_result)
# Validate against SuperClaude principles
validation_result = self._validate_tool_result(context)
# Log validation decision
if not validation_result.is_valid:
log_decision(
"post_tool_use",
"validation_failure",
validation_result.failed_checks[0] if validation_result.failed_checks else "unknown",
f"Tool '{context['tool_name']}' failed validation: {validation_result.message}"
)
# Measure effectiveness and quality
effectiveness_metrics = self._measure_effectiveness(context, validation_result)
# Detect patterns and learning opportunities
learning_analysis = self._analyze_learning_opportunities(context, effectiveness_metrics)
# Record learning events
self._record_learning_events(context, effectiveness_metrics, learning_analysis)
# Generate recommendations
recommendations = self._generate_recommendations(context, validation_result, learning_analysis)
# Create validation report
validation_report = self._create_validation_report(
context, validation_result, effectiveness_metrics,
learning_analysis, recommendations
)
# Detect patterns in tool execution
pattern_analysis = self._analyze_execution_patterns(context, validation_result)
# Log pattern detection
if pattern_analysis.get('error_pattern_detected'):
log_decision(
"post_tool_use",
"error_pattern_detected",
pattern_analysis.get('pattern_type', 'unknown'),
pattern_analysis.get('description', 'Error pattern identified')
)
# Performance tracking
execution_time = (time.time() - start_time) * 1000
validation_report['performance_metrics'] = {
'processing_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'quality_score': self._calculate_quality_score(context, validation_result)
}
# Log successful completion
log_hook_end(
"post_tool_use",
int(execution_time),
True,
{
"tool_name": context['tool_name'],
"validation_passed": validation_result.is_valid,
"quality_score": validation_report['performance_metrics']['quality_score']
}
)
return validation_report
except Exception as e:
# Log error
execution_time = (time.time() - start_time) * 1000
log_error(
"post_tool_use",
str(e),
{"tool_name": tool_result.get('tool_name', 'unknown')}
)
log_hook_end("post_tool_use", int(execution_time), False)
# Graceful fallback on error
return self._create_fallback_result(tool_result, str(e))
def _extract_execution_context(self, tool_result: dict) -> dict:
"""Extract and enrich tool execution context."""
context = {
'tool_name': tool_result.get('tool_name', ''),
'execution_status': tool_result.get('status', 'unknown'),
'execution_time_ms': tool_result.get('execution_time_ms', 0),
'parameters_used': tool_result.get('parameters', {}),
'result_data': tool_result.get('result', {}),
'error_info': tool_result.get('error', {}),
'mcp_servers_used': tool_result.get('mcp_servers', []),
'performance_data': tool_result.get('performance', {}),
'user_intent': tool_result.get('user_intent', ''),
'session_context': tool_result.get('session_context', {}),
'timestamp': time.time()
}
# Analyze operation characteristics
context.update(self._analyze_operation_outcome(context))
# Extract quality indicators
context.update(self._extract_quality_indicators(context))
return context
def _analyze_operation_outcome(self, context: dict) -> dict:
"""Analyze the outcome of the tool operation."""
outcome_analysis = {
'success': context['execution_status'] == 'success',
'partial_success': False,
'error_occurred': context['execution_status'] == 'error',
'performance_acceptable': True,
'quality_indicators': [],
'risk_factors': []
}
# Analyze execution status
if context['execution_status'] in ['partial', 'warning']:
outcome_analysis['partial_success'] = True
# Performance analysis
execution_time = context.get('execution_time_ms', 0)
if execution_time > 5000: # 5 second threshold
outcome_analysis['performance_acceptable'] = False
outcome_analysis['risk_factors'].append('slow_execution')
# Error analysis
if context.get('error_info'):
error_type = context['error_info'].get('type', 'unknown')
outcome_analysis['error_type'] = error_type
outcome_analysis['error_recoverable'] = error_type not in ['fatal', 'security', 'corruption']
# Quality indicators from result data
result_data = context.get('result_data', {})
if result_data:
if result_data.get('validation_passed'):
outcome_analysis['quality_indicators'].append('validation_passed')
if result_data.get('tests_passed'):
outcome_analysis['quality_indicators'].append('tests_passed')
if result_data.get('linting_clean'):
outcome_analysis['quality_indicators'].append('linting_clean')
return outcome_analysis
def _extract_quality_indicators(self, context: dict) -> dict:
"""Extract quality indicators from execution context."""
quality_indicators = {
'code_quality_score': 0.0,
'security_compliance': True,
'performance_efficiency': 1.0,
'error_handling_present': False,
'documentation_adequate': False,
'test_coverage_acceptable': False
}
# Analyze tool output for quality indicators
tool_name = context['tool_name']
result_data = context.get('result_data', {})
# Code quality analysis
if tool_name in ['Write', 'Edit', 'Generate']:
# Check for quality indicators in the result
if 'quality_score' in result_data:
quality_indicators['code_quality_score'] = result_data['quality_score']
# Infer quality from operation success and performance
if context.get('success') and context.get('performance_acceptable'):
quality_indicators['code_quality_score'] = max(
quality_indicators['code_quality_score'], 0.7
)
# Security compliance
if context.get('error_type') in ['security', 'vulnerability']:
quality_indicators['security_compliance'] = False
# Performance efficiency
execution_time = context.get('execution_time_ms', 0)
expected_time = context.get('performance_data', {}).get('expected_time_ms', 1000)
if execution_time > 0 and expected_time > 0:
quality_indicators['performance_efficiency'] = min(expected_time / execution_time, 2.0)
# Error handling detection (heuristic: look for try/except in the output)
if tool_name in ['Write', 'Edit'] and ('try:' in str(result_data) or 'except' in str(result_data)):
quality_indicators['error_handling_present'] = True
# Documentation assessment
if tool_name in ['Document', 'Generate'] or 'doc' in context.get('user_intent', '').lower():
quality_indicators['documentation_adequate'] = context.get('success', False)
return quality_indicators
def _validate_tool_result(self, context: dict) -> ValidationResult:
"""Validate execution against SuperClaude principles."""
# Create operation data for validation
operation_data = {
'operation_type': context['tool_name'],
'has_error_handling': context.get('error_handling_present', False),
'affects_logic': context['tool_name'] in ['Write', 'Edit', 'Generate'],
'has_tests': context.get('test_coverage_acceptable', False),
'is_public_api': 'api' in context.get('user_intent', '').lower(),
'has_documentation': context.get('documentation_adequate', False),
'handles_user_input': 'input' in context.get('user_intent', '').lower(),
'has_input_validation': context.get('security_compliance', True),
'evidence': context.get('success', False)
}
# Run framework validation
validation_result = self.framework_logic.validate_operation(operation_data)
# Enhance with SuperClaude-specific validations
validation_result = self._enhance_validation_with_superclaude_rules(
validation_result, context
)
return validation_result
def _enhance_validation_with_superclaude_rules(self,
base_validation: ValidationResult,
context: dict) -> ValidationResult:
"""Enhance validation with SuperClaude-specific rules."""
enhanced_validation = ValidationResult(
is_valid=base_validation.is_valid,
issues=base_validation.issues.copy(),
warnings=base_validation.warnings.copy(),
suggestions=base_validation.suggestions.copy(),
quality_score=base_validation.quality_score
)
# RULES.md validation
# Rule: Always use Read tool before Write or Edit operations
if context['tool_name'] in ['Write', 'Edit']:
session_context = context.get('session_context', {})
recent_tools = session_context.get('recent_tools', [])
if not any('Read' in tool for tool in recent_tools[-3:]):
enhanced_validation.warnings.append(
"RULES violation: No Read operation detected before Write/Edit"
)
enhanced_validation.quality_score -= 0.1
# Rule: Use absolute paths only
params = context.get('parameters_used', {})
for param_name, param_value in params.items():
if 'path' in param_name.lower() and isinstance(param_value, str):
if not os.path.isabs(param_value) and not param_value.startswith(('http://', 'https://')):
enhanced_validation.issues.append(
f"RULES violation: Relative path used in {param_name}: {param_value}"
)
enhanced_validation.quality_score -= 0.2
# Rule: Validate before execution for high-risk operations
if context.get('risk_factors'):
if not context.get('validation_performed', False):
enhanced_validation.warnings.append(
"RULES recommendation: High-risk operation should include validation"
)
# PRINCIPLES.md validation
# Principle: Evidence > assumptions
if not context.get('evidence_provided', False) and context.get('assumptions_made', False):
enhanced_validation.suggestions.append(
"PRINCIPLES: Provide evidence to support assumptions"
)
# Principle: Code > documentation
if context['tool_name'] == 'Document' and not context.get('working_code_exists', True):
enhanced_validation.warnings.append(
"PRINCIPLES: Documentation should follow working code, not precede it"
)
# Principle: Efficiency > verbosity
result_size = len(str(context.get('result_data', '')))
if result_size > 5000 and not context.get('complexity_justifies_length', False):
enhanced_validation.suggestions.append(
"PRINCIPLES: Consider token efficiency techniques for large outputs"
)
# Recalculate overall validity
enhanced_validation.is_valid = (
len(enhanced_validation.issues) == 0 and
enhanced_validation.quality_score >= 0.7
)
return enhanced_validation
def _measure_effectiveness(self, context: dict, validation_result: ValidationResult) -> dict:
"""Measure operation effectiveness and quality."""
effectiveness_metrics = {
'overall_effectiveness': 0.0,
'quality_score': validation_result.quality_score,
'performance_score': 0.0,
'user_satisfaction_estimate': 0.0,
'learning_value': 0.0,
'improvement_potential': 0.0
}
# Performance scoring
execution_time = context.get('execution_time_ms', 0)
expected_time = context.get('performance_data', {}).get('expected_time_ms', 1000)
if execution_time > 0:
time_ratio = expected_time / max(execution_time, 1)
effectiveness_metrics['performance_score'] = min(time_ratio, 1.0)
else:
effectiveness_metrics['performance_score'] = 1.0
# User satisfaction estimation
if context.get('success'):
base_satisfaction = 0.8
if validation_result.quality_score > 0.8:
base_satisfaction += 0.15
if effectiveness_metrics['performance_score'] > 0.8:
base_satisfaction += 0.05
effectiveness_metrics['user_satisfaction_estimate'] = min(base_satisfaction, 1.0)
else:
# Reduce satisfaction based on error severity
error_severity = self._assess_error_severity(context)
effectiveness_metrics['user_satisfaction_estimate'] = max(0.3 - error_severity * 0.3, 0.0)
# Learning value assessment
if context.get('mcp_servers_used'):
effectiveness_metrics['learning_value'] += 0.2 # MCP usage provides learning
if context.get('error_occurred'):
effectiveness_metrics['learning_value'] += 0.3 # Errors provide valuable learning
if context.get('complexity_score', 0) > 0.6:
effectiveness_metrics['learning_value'] += 0.2 # Complex operations provide insights
effectiveness_metrics['learning_value'] = min(effectiveness_metrics['learning_value'], 1.0)
# Improvement potential
if len(validation_result.suggestions) > 0:
effectiveness_metrics['improvement_potential'] = min(len(validation_result.suggestions) * 0.2, 1.0)
# Overall effectiveness calculation
weights = {
'quality': 0.3,
'performance': 0.25,
'satisfaction': 0.35,
'learning': 0.1
}
effectiveness_metrics['overall_effectiveness'] = (
effectiveness_metrics['quality_score'] * weights['quality'] +
effectiveness_metrics['performance_score'] * weights['performance'] +
effectiveness_metrics['user_satisfaction_estimate'] * weights['satisfaction'] +
effectiveness_metrics['learning_value'] * weights['learning']
)
return effectiveness_metrics
def _assess_error_severity(self, context: dict) -> float:
"""Assess error severity on a scale of 0.0 to 1.0."""
if not context.get('error_occurred'):
return 0.0
error_type = context.get('error_type', 'unknown')
severity_map = {
'fatal': 1.0,
'security': 0.9,
'corruption': 0.8,
'timeout': 0.6,
'validation': 0.4,
'warning': 0.2,
'unknown': 0.5
}
return severity_map.get(error_type, 0.5)
def _analyze_learning_opportunities(self, context: dict, effectiveness_metrics: dict) -> dict:
"""Analyze learning opportunities from the execution."""
learning_analysis = {
'patterns_detected': [],
'success_factors': [],
'failure_factors': [],
'optimization_opportunities': [],
'adaptation_recommendations': []
}
# Pattern detection
if context.get('mcp_servers_used'):
for server in context['mcp_servers_used']:
if effectiveness_metrics['overall_effectiveness'] > 0.8:
learning_analysis['patterns_detected'].append(f"effective_{server}_usage")
elif effectiveness_metrics['overall_effectiveness'] < 0.5:
learning_analysis['patterns_detected'].append(f"ineffective_{server}_usage")
# Success factor analysis
if effectiveness_metrics['overall_effectiveness'] > 0.8:
if effectiveness_metrics['performance_score'] > 0.8:
learning_analysis['success_factors'].append('optimal_performance')
if effectiveness_metrics['quality_score'] > 0.8:
learning_analysis['success_factors'].append('high_quality_output')
if context.get('mcp_servers_used'):
learning_analysis['success_factors'].append('effective_mcp_coordination')
# Failure factor analysis
if effectiveness_metrics['overall_effectiveness'] < 0.5:
if effectiveness_metrics['performance_score'] < 0.5:
learning_analysis['failure_factors'].append('poor_performance')
if effectiveness_metrics['quality_score'] < 0.5:
learning_analysis['failure_factors'].append('quality_issues')
if context.get('error_occurred'):
learning_analysis['failure_factors'].append(f"error_{context.get('error_type', 'unknown')}")
# Optimization opportunities
if effectiveness_metrics['improvement_potential'] > 0.3:
learning_analysis['optimization_opportunities'].append('validation_improvements_available')
if context.get('execution_time_ms', 0) > 2000:
learning_analysis['optimization_opportunities'].append('performance_optimization_needed')
# Adaptation recommendations
if len(learning_analysis['success_factors']) > 0:
learning_analysis['adaptation_recommendations'].append(
f"Reinforce patterns: {', '.join(learning_analysis['success_factors'])}"
)
if len(learning_analysis['failure_factors']) > 0:
learning_analysis['adaptation_recommendations'].append(
f"Address failure patterns: {', '.join(learning_analysis['failure_factors'])}"
)
return learning_analysis
def _record_learning_events(self, context: dict, effectiveness_metrics: dict, learning_analysis: dict):
"""Record learning events for future adaptation."""
overall_effectiveness = effectiveness_metrics['overall_effectiveness']
# Record general operation learning
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
{
'tool_name': context['tool_name'],
'mcp_servers': context.get('mcp_servers_used', []),
'success_factors': learning_analysis['success_factors'],
'failure_factors': learning_analysis['failure_factors']
},
overall_effectiveness,
0.8, # High confidence in post-execution analysis
{'hook': 'post_tool_use', 'effectiveness': overall_effectiveness}
)
# Track tool preference if execution was successful
if context.get('success') and overall_effectiveness > 0.7:
operation_type = self._categorize_operation(context['tool_name'])
if operation_type:
self.learning_engine.update_last_preference(
f"tool_{operation_type}",
context['tool_name']
)
# Record MCP server effectiveness
for server in context.get('mcp_servers_used', []):
self.learning_engine.record_learning_event(
LearningType.EFFECTIVENESS_FEEDBACK,
AdaptationScope.USER,
context,
{'mcp_server': server},
overall_effectiveness,
0.9, # Very high confidence in direct feedback
{'server_performance': effectiveness_metrics['performance_score']}
)
# Record error patterns if applicable
if context.get('error_occurred'):
self.learning_engine.record_learning_event(
LearningType.ERROR_RECOVERY,
AdaptationScope.PROJECT,
context,
{
'error_type': context.get('error_type'),
'recovery_successful': context.get('error_recoverable', False),
'context_factors': learning_analysis['failure_factors']
},
1.0 - self._assess_error_severity(context), # Inverse of severity
1.0, # Full confidence in error data
{'error_learning': True}
)
def _generate_recommendations(self, context: dict, validation_result: ValidationResult,
learning_analysis: dict) -> dict:
"""Generate recommendations for improvement."""
recommendations = {
'immediate_actions': [],
'optimization_suggestions': [],
'learning_adaptations': [],
'prevention_measures': []
}
# Immediate actions from validation issues
for issue in validation_result.issues:
recommendations['immediate_actions'].append(f"Fix: {issue}")
for warning in validation_result.warnings:
recommendations['immediate_actions'].append(f"Address: {warning}")
# Optimization suggestions
for suggestion in validation_result.suggestions:
recommendations['optimization_suggestions'].append(suggestion)
for opportunity in learning_analysis['optimization_opportunities']:
recommendations['optimization_suggestions'].append(f"Optimize: {opportunity}")
# Learning adaptations
for adaptation in learning_analysis['adaptation_recommendations']:
recommendations['learning_adaptations'].append(adaptation)
# Prevention measures for errors
if context.get('error_occurred'):
error_type = context.get('error_type', 'unknown')
if error_type == 'timeout':
recommendations['prevention_measures'].append("Consider parallel execution for large operations")
elif error_type == 'validation':
recommendations['prevention_measures'].append("Enable pre-validation for similar operations")
elif error_type == 'security':
recommendations['prevention_measures'].append("Implement security validation checks")
return recommendations
def _calculate_quality_score(self, context: dict, validation_result: ValidationResult) -> float:
"""Calculate quality score based on validation and execution."""
base_score = validation_result.quality_score
# Adjust for execution time: full credit within the performance
# target, proportional penalty when slower than target
execution_time = context.get('execution_time_ms', 0)
if execution_time > self.performance_target_ms:
time_penalty = self.performance_target_ms / execution_time
else:
time_penalty = 1.0
# Initialize error penalty (no penalty when no error occurs)
error_penalty = 1.0
# Adjust for error occurrence
if context.get('error_occurred'):
error_severity = self._assess_error_severity(context)
error_penalty = 1.0 - error_severity
# Combine adjustments
quality_score = base_score * time_penalty * error_penalty
return quality_score
def _create_validation_report(self, context: dict, validation_result: ValidationResult,
effectiveness_metrics: dict, learning_analysis: dict,
recommendations: dict) -> dict:
"""Create comprehensive validation report."""
return {
'tool_name': context['tool_name'],
'execution_status': context['execution_status'],
'timestamp': context['timestamp'],
'validation': {
'is_valid': validation_result.is_valid,
'quality_score': validation_result.quality_score,
'issues': validation_result.issues,
'warnings': validation_result.warnings,
'suggestions': validation_result.suggestions
},
'effectiveness': effectiveness_metrics,
'learning': {
'patterns_detected': learning_analysis['patterns_detected'],
'success_factors': learning_analysis['success_factors'],
'failure_factors': learning_analysis['failure_factors'],
'learning_value': effectiveness_metrics['learning_value']
},
'recommendations': recommendations,
'compliance': {
'rules_compliance': len([i for i in validation_result.issues if 'RULES' in i]) == 0,
'principles_alignment': len([w for w in validation_result.warnings if 'PRINCIPLES' in w]) == 0,
'superclaude_score': self._calculate_superclaude_compliance_score(validation_result)
},
'metadata': {
'hook_version': 'post_tool_use_1.0',
'validation_timestamp': time.time(),
'learning_events_recorded': len(learning_analysis['patterns_detected']) + 1
}
}
def _calculate_superclaude_compliance_score(self, validation_result: ValidationResult) -> float:
"""Calculate overall SuperClaude compliance score."""
base_score = validation_result.quality_score
# Penalties for specific violations
rules_violations = len([i for i in validation_result.issues if 'RULES' in i])
principles_violations = len([w for w in validation_result.warnings if 'PRINCIPLES' in w])
penalty = (rules_violations * 0.2) + (principles_violations * 0.1)
return max(base_score - penalty, 0.0)
def _create_fallback_result(self, tool_result: dict, error: str) -> dict:
"""Create fallback validation report on error."""
return {
'tool_name': tool_result.get('tool_name', 'unknown'),
'execution_status': 'validation_error',
'timestamp': time.time(),
'error': error,
'fallback_mode': True,
'validation': {
'is_valid': False,
'quality_score': 0.0,
'issues': [f"Validation hook error: {error}"],
'warnings': [],
'suggestions': ['Fix validation hook error']
},
'effectiveness': {
'overall_effectiveness': 0.0,
'quality_score': 0.0,
'performance_score': 0.0,
'user_satisfaction_estimate': 0.0,
'learning_value': 0.0
},
'performance_metrics': {
'processing_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
def _analyze_execution_patterns(self, context: dict, validation_result: ValidationResult) -> dict:
"""Analyze patterns in tool execution."""
pattern_analysis = {
'error_pattern_detected': False,
'pattern_type': 'unknown',
'description': 'No error pattern detected'
}
# Check for error occurrence
if context.get('error_occurred'):
error_type = context.get('error_type', 'unknown')
# Check for specific error types
if error_type in ['fatal', 'security', 'corruption']:
pattern_analysis['error_pattern_detected'] = True
pattern_analysis['pattern_type'] = error_type
pattern_analysis['description'] = f"Error pattern detected: {error_type}"
return pattern_analysis
def _categorize_operation(self, tool_name: str) -> Optional[str]:
"""Categorize tool into operation type for preference tracking."""
operation_map = {
'read': ['Read', 'Get', 'List', 'Search', 'Find'],
'write': ['Write', 'Create', 'Generate'],
'edit': ['Edit', 'Update', 'Modify', 'Replace'],
'analyze': ['Analyze', 'Validate', 'Check', 'Test'],
'mcp': ['Context7', 'Sequential', 'Magic', 'Playwright', 'Morphllm', 'Serena']
}
for operation_type, tools in operation_map.items():
if any(tool in tool_name for tool in tools):
return operation_type
return None
def main():
"""Main hook execution function."""
try:
# Read tool result from stdin
tool_result = json.loads(sys.stdin.read())
# Initialize and run hook
hook = PostToolUseHook()
result = hook.process_tool_result(tool_result)
# Output result as JSON
print(json.dumps(result, indent=2))
except Exception as e:
# Output error as JSON
error_result = {
'validation_error': True,
'error': str(e),
'fallback_mode': True
}
print(json.dumps(error_result, indent=2))
sys.exit(1)
if __name__ == "__main__":
main()
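
The hook's stdin/stdout contract can be exercised directly. A minimal sketch, using the field names documented in `_extract_execution_context` above (the concrete values, the file path, and the `Sequential` server entry are illustrative assumptions, not part of the contract):

```python
import json

# Hypothetical tool result payload for post_tool_use.py, which reads
# one JSON object from stdin and prints a validation report as JSON.
# Key names follow _extract_execution_context; values are examples.
tool_result = {
    "tool_name": "Edit",
    "status": "success",
    "execution_time_ms": 240,
    "parameters": {"file_path": "/home/user/project/app.py"},
    "result": {"validation_passed": True, "tests_passed": True},
    "mcp_servers": ["Sequential"],
    "user_intent": "fix bug in parser",
    "session_context": {"recent_tools": ["Read", "Grep"]},
}
payload = json.dumps(tool_result)

# Round-trip check of the payload the hook would receive on stdin
report_fields = set(json.loads(payload))
print(sorted(report_fields))
```

Piped as `echo "$PAYLOAD" | python3 post_tool_use.py`, the hook answers with the validation report built by `_create_validation_report`, or with the fallback report from `_create_fallback_result` if processing fails.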


@@ -1,773 +0,0 @@
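The pre_compact hook below consumes a single JSON compaction request on stdin. A minimal sketch of the request shape, with key names taken from `_extract_compression_context` and `_extract_resource_state` (all values here are illustrative assumptions):

```python
import json

# Hypothetical compaction request; key names follow the hook's
# _extract_compression_context / _extract_resource_state accessors.
compact_request = {
    "session_id": "session-001",
    "content": "working notes and intermediate results ...",
    "metadata": {"content_type": "session_data"},
    "resource_state": {
        "token_usage": 82,            # percent of token budget consumed
        "conversation_length": 140,   # turns so far
        "pressure_level": "high",
        "user_compression_request": False,
    },
    "triggers": ["token_threshold"],
}
payload = json.dumps(compact_request)
print(json.loads(payload)["resource_state"]["pressure_level"])  # prints: high
```

A request like this drives `determine_compression_level` toward a more aggressive level as `token_usage` rises, while `metadata.content_type` decides whether the content is eligible for compression at all.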
#!/usr/bin/env python3
"""
SuperClaude-Lite Pre-Compact Hook
Implements MODE_Token_Efficiency.md compression algorithms for intelligent context optimization.
Performance target: <150ms execution time.
This hook runs before context compaction and provides:
- Intelligent compression strategy selection
- Selective content preservation with framework exclusion
- Symbol systems and abbreviation optimization
- Quality-gated compression with ≥95% information preservation
- Adaptive compression based on resource constraints
"""
import sys
import json
import time
import os
from pathlib import Path
from typing import Dict, Any, List, Optional, Tuple
# Add shared modules to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "shared"))
from framework_logic import FrameworkLogic
from pattern_detection import PatternDetector
from mcp_intelligence import MCPIntelligence
from compression_engine import (
CompressionEngine, CompressionLevel, ContentType, CompressionResult, CompressionStrategy
)
from learning_engine import LearningEngine, LearningType, AdaptationScope
from yaml_loader import config_loader
from logger import log_hook_start, log_hook_end, log_decision, log_error
class PreCompactHook:
"""
Pre-compact hook implementing SuperClaude token efficiency intelligence.
Responsibilities:
- Analyze context for compression opportunities
- Apply selective compression with framework protection
- Implement symbol systems and abbreviation optimization
- Maintain ≥95% information preservation quality
- Adapt compression strategy based on resource constraints
- Learn from compression effectiveness and user preferences
"""
def __init__(self):
start_time = time.time()
# Initialize core components
self.framework_logic = FrameworkLogic()
self.pattern_detector = PatternDetector()
self.mcp_intelligence = MCPIntelligence()
self.compression_engine = CompressionEngine()
# Initialize learning engine with installation directory cache
cache_dir = Path(os.path.expanduser("~/.claude/cache"))
cache_dir.mkdir(parents=True, exist_ok=True)
self.learning_engine = LearningEngine(cache_dir)
# Load hook-specific configuration from SuperClaude config
self.hook_config = config_loader.get_hook_config('pre_compact')
# Load compression configuration (from YAML if exists, otherwise use hook config)
try:
self.compression_config = config_loader.load_config('compression')
except FileNotFoundError:
# Fall back to hook configuration if YAML file not found
self.compression_config = self.hook_config.get('configuration', {})
# Performance tracking using configuration
self.initialization_time = (time.time() - start_time) * 1000
self.performance_target_ms = config_loader.get_hook_config('pre_compact', 'performance_target_ms', 150)
def process_pre_compact(self, compact_request: dict) -> dict:
"""
Process pre-compact request with intelligent compression.
Args:
compact_request: Context compaction request from Claude Code
Returns:
Compression configuration and optimized content strategy
"""
start_time = time.time()
# Log hook start
log_hook_start("pre_compact", {
"session_id": compact_request.get('session_id', ''),
"content_size": len(compact_request.get('content', '')),
"resource_state": compact_request.get('resource_state', {}),
"triggers": compact_request.get('triggers', [])
})
try:
# Extract compression context
context = self._extract_compression_context(compact_request)
# Analyze content for compression strategy
content_analysis = self._analyze_content_for_compression(context)
# Determine optimal compression strategy
compression_strategy = self._determine_compression_strategy(context, content_analysis)
# Log compression strategy decision
log_decision(
"pre_compact",
"compression_strategy",
compression_strategy.level.value,
f"Based on resource usage: {context.get('token_usage_percent', 0)}%, content type: {content_analysis['content_type'].value}"
)
# Apply selective compression with framework protection
compression_results = self._apply_selective_compression(
context, compression_strategy, content_analysis
)
# Validate compression quality
quality_validation = self._validate_compression_quality(
compression_results, compression_strategy
)
# Log quality validation results
if not quality_validation['overall_quality_met']:
log_decision(
"pre_compact",
"quality_validation",
"failed",
f"Preservation score: {quality_validation['preservation_score']:.2f}, Issues: {', '.join(quality_validation['quality_issues'])}"
)
# Record learning events
self._record_compression_learning(context, compression_results, quality_validation)
# Generate compression configuration
compression_config = self._generate_compression_config(
context, compression_strategy, compression_results, quality_validation
)
# Performance tracking
execution_time = (time.time() - start_time) * 1000
compression_config['performance_metrics'] = {
'compression_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'efficiency_score': self._calculate_compression_efficiency(context, execution_time)
}
# Log compression results
log_decision(
"pre_compact",
"compression_results",
f"{compression_config['results']['compression_ratio']:.1%}",
f"Saved {compression_config['optimization']['estimated_token_savings']} tokens"
)
# Log hook end
log_hook_end(
"pre_compact",
int(execution_time),
True,
{
"compression_ratio": compression_config['results']['compression_ratio'],
"preservation_score": compression_config['quality']['preservation_score'],
"token_savings": compression_config['optimization']['estimated_token_savings'],
"performance_target_met": execution_time < self.performance_target_ms
}
)
return compression_config
except Exception as e:
# Log error
log_error("pre_compact", str(e), {"request": compact_request})
# Log hook end with failure
log_hook_end("pre_compact", int((time.time() - start_time) * 1000), False)
# Graceful fallback on error
return self._create_fallback_compression_config(compact_request, str(e))
def _extract_compression_context(self, compact_request: dict) -> dict:
"""Extract and enrich compression context."""
context = {
'session_id': compact_request.get('session_id', ''),
'content_to_compress': compact_request.get('content', ''),
'content_metadata': compact_request.get('metadata', {}),
'resource_constraints': compact_request.get('resource_state', {}),
'user_preferences': compact_request.get('user_preferences', {}),
'compression_triggers': compact_request.get('triggers', []),
'previous_compressions': compact_request.get('compression_history', []),
'session_context': compact_request.get('session_context', {}),
'timestamp': time.time()
}
# Analyze content characteristics
context.update(self._analyze_content_characteristics(context))
# Extract resource state
context.update(self._extract_resource_state(context))
return context
def _analyze_content_characteristics(self, context: dict) -> dict:
"""Analyze content characteristics for compression decisions."""
content = context.get('content_to_compress', '')
metadata = context.get('content_metadata', {})
characteristics = {
'content_length': len(content),
'content_complexity': 0.0,
'repetition_factor': 0.0,
'technical_density': 0.0,
'framework_content_ratio': 0.0,
'user_content_ratio': 0.0,
'compressibility_score': 0.0
}
if not content:
return characteristics
# Content complexity analysis
lines = content.split('\n')
characteristics['content_complexity'] = self._calculate_content_complexity(content, lines)
# Repetition analysis
characteristics['repetition_factor'] = self._calculate_repetition_factor(content, lines)
# Technical density
characteristics['technical_density'] = self._calculate_technical_density(content)
# Framework vs user content ratio
framework_ratio, user_ratio = self._analyze_content_sources(content, metadata)
characteristics['framework_content_ratio'] = framework_ratio
characteristics['user_content_ratio'] = user_ratio
# Overall compressibility score
characteristics['compressibility_score'] = self._calculate_compressibility_score(characteristics)
return characteristics
def _calculate_content_complexity(self, content: str, lines: List[str]) -> float:
"""Calculate content complexity score (0.0 to 1.0)."""
complexity_indicators = [
len([line for line in lines if len(line) > 100]) / max(len(lines), 1), # Long lines
len([char for char in content if char in '{}[]()']) / max(len(content), 1), # Structural chars
len(set(content.split())) / max(len(content.split()), 1), # Vocabulary richness
]
return min(sum(complexity_indicators) / len(complexity_indicators), 1.0)
def _calculate_repetition_factor(self, content: str, lines: List[str]) -> float:
"""Calculate repetition factor for compression potential."""
if not lines:
return 0.0
# Line repetition
unique_lines = len(set(lines))
line_repetition = 1.0 - (unique_lines / len(lines))
# Word repetition
words = content.split()
if words:
unique_words = len(set(words))
word_repetition = 1.0 - (unique_words / len(words))
else:
word_repetition = 0.0
return (line_repetition + word_repetition) / 2
def _calculate_technical_density(self, content: str) -> float:
"""Calculate technical density for compression strategy."""
technical_patterns = [
r'\b[A-Z][a-zA-Z]*\b', # CamelCase
r'\b\w+\.\w+\b', # Dotted notation
r'\b\d+\.\d+\.\d+\b', # Version numbers
r'\b[a-z]+_[a-z]+\b', # Snake_case
r'\b[A-Z]{2,}\b', # CONSTANTS
]
import re
technical_matches = 0
for pattern in technical_patterns:
technical_matches += len(re.findall(pattern, content))
total_words = len(content.split())
return min(technical_matches / max(total_words, 1), 1.0)
def _analyze_content_sources(self, content: str, metadata: dict) -> Tuple[float, float]:
"""Analyze ratio of framework vs user content."""
# Framework content indicators
framework_indicators = [
'SuperClaude', 'CLAUDE.md', 'FLAGS.md', 'PRINCIPLES.md',
'ORCHESTRATOR.md', 'MCP_', 'MODE_', 'SESSION_LIFECYCLE'
]
# User content indicators
user_indicators = [
'project_files', 'user_documentation', 'source_code',
'configuration_files', 'custom_content'
]
framework_score = 0
user_score = 0
# Check content text
content_lower = content.lower()
for indicator in framework_indicators:
if indicator.lower() in content_lower:
framework_score += 1
for indicator in user_indicators:
if indicator.lower() in content_lower:
user_score += 1
# Check metadata
content_type = metadata.get('content_type', '')
file_path = metadata.get('file_path', '')
if any(pattern in file_path for pattern in ['/.claude/', 'framework']):
framework_score += 3
if any(pattern in content_type for pattern in user_indicators):
user_score += 3
total_score = framework_score + user_score
if total_score == 0:
return 0.5, 0.5 # Unknown, assume mixed
return framework_score / total_score, user_score / total_score
def _calculate_compressibility_score(self, characteristics: dict) -> float:
"""Calculate overall compressibility score."""
# Higher repetition = higher compressibility
repetition_contribution = characteristics['repetition_factor'] * 0.4
# Higher technical density = better compression with abbreviations
technical_contribution = characteristics['technical_density'] * 0.3
# Framework content is not compressed (exclusion)
framework_penalty = characteristics['framework_content_ratio'] * 0.5
# Content complexity affects compression effectiveness
complexity_factor = 1.0 - (characteristics['content_complexity'] * 0.2)
score = (repetition_contribution + technical_contribution) * complexity_factor - framework_penalty
return max(min(score, 1.0), 0.0)
def _extract_resource_state(self, context: dict) -> dict:
"""Extract resource state for compression decisions."""
resource_constraints = context.get('resource_constraints', {})
return {
'memory_usage_percent': resource_constraints.get('memory_usage', 0),
'token_usage_percent': resource_constraints.get('token_usage', 0),
'conversation_length': resource_constraints.get('conversation_length', 0),
'resource_pressure': resource_constraints.get('pressure_level', 'normal'),
'user_requests_compression': resource_constraints.get('user_compression_request', False)
}
def _analyze_content_for_compression(self, context: dict) -> dict:
"""Analyze content to determine compression approach."""
content = context.get('content_to_compress', '')
metadata = context.get('content_metadata', {})
# Classify content type
content_type = self.compression_engine.classify_content(content, metadata)
# Analyze compression opportunities
analysis = {
'content_type': content_type,
'compression_opportunities': [],
'preservation_requirements': [],
'optimization_techniques': []
}
# Framework content - complete exclusion
if content_type == ContentType.FRAMEWORK_CONTENT:
analysis['preservation_requirements'].append('complete_exclusion')
analysis['compression_opportunities'] = []
log_decision(
"pre_compact",
"content_classification",
"framework_content",
"Complete exclusion from compression - framework protection"
)
return analysis
# User content - minimal compression only
if content_type == ContentType.USER_CONTENT:
analysis['preservation_requirements'].extend([
'high_fidelity_preservation',
'minimal_compression_only'
])
analysis['compression_opportunities'].append('whitespace_optimization')
log_decision(
"pre_compact",
"content_classification",
"user_content",
"Minimal compression only - user content preservation"
)
return analysis
# Session/working data - full compression applicable
compressibility = context.get('compressibility_score', 0.0)
if compressibility > 0.7:
analysis['compression_opportunities'].extend([
'symbol_systems',
'abbreviation_systems',
'structural_optimization',
'redundancy_removal'
])
elif compressibility > 0.4:
analysis['compression_opportunities'].extend([
'symbol_systems',
'structural_optimization'
])
else:
analysis['compression_opportunities'].append('minimal_optimization')
# Technical content optimization
if context.get('technical_density', 0) > 0.6:
analysis['optimization_techniques'].append('technical_abbreviations')
# Repetitive content optimization
if context.get('repetition_factor', 0) > 0.5:
analysis['optimization_techniques'].append('pattern_compression')
return analysis
def _determine_compression_strategy(self, context: dict, content_analysis: dict) -> CompressionStrategy:
"""Determine optimal compression strategy."""
# Determine compression level based on resource state
compression_level = self.compression_engine.determine_compression_level({
'resource_usage_percent': context.get('token_usage_percent', 0),
'conversation_length': context.get('conversation_length', 0),
'user_requests_brevity': context.get('user_requests_compression', False),
'complexity_score': context.get('content_complexity', 0.0)
})
# Adjust for content type
content_type = content_analysis['content_type']
if content_type == ContentType.FRAMEWORK_CONTENT:
compression_level = CompressionLevel.MINIMAL # Actually no compression
elif content_type == ContentType.USER_CONTENT:
compression_level = CompressionLevel.MINIMAL
# Create strategy
strategy = self.compression_engine._create_compression_strategy(compression_level, content_type)
# Customize based on content analysis
opportunities = content_analysis.get('compression_opportunities', [])
if 'symbol_systems' not in opportunities:
strategy.symbol_systems_enabled = False
if 'abbreviation_systems' not in opportunities:
strategy.abbreviation_systems_enabled = False
if 'structural_optimization' not in opportunities:
strategy.structural_optimization = False
return strategy
def _apply_selective_compression(self, context: dict, strategy: CompressionStrategy,
content_analysis: dict) -> Dict[str, CompressionResult]:
"""Apply selective compression with framework protection."""
content = context.get('content_to_compress', '')
metadata = context.get('content_metadata', {})
# Split content into sections for selective processing
content_sections = self._split_content_into_sections(content, metadata)
compression_results = {}
for section_name, section_data in content_sections.items():
section_content = section_data['content']
section_metadata = section_data['metadata']
# Apply compression to each section
result = self.compression_engine.compress_content(
section_content,
context,
section_metadata
)
compression_results[section_name] = result
return compression_results
def _split_content_into_sections(self, content: str, metadata: dict) -> dict:
"""Split content into sections for selective compression."""
sections = {}
# Simple splitting strategy - can be enhanced
lines = content.split('\n')
# Detect different content types within the text
current_section = 'default'
current_content = []
for line in lines:
# Framework content detection
if any(indicator in line for indicator in ['SuperClaude', 'CLAUDE.md', 'FLAGS.md']):
if current_content and current_section != 'framework':
sections[current_section] = {
'content': '\n'.join(current_content),
'metadata': {**metadata, 'content_type': current_section}
}
current_content = []
current_section = 'framework'
# User code detection
elif any(indicator in line for indicator in ['def ', 'class ', 'function', 'import ']):
if current_content and current_section != 'user_code':
sections[current_section] = {
'content': '\n'.join(current_content),
'metadata': {**metadata, 'content_type': current_section}
}
current_content = []
current_section = 'user_code'
# Session data detection
elif any(indicator in line for indicator in ['session_', 'checkpoint_', 'cache_']):
if current_content and current_section != 'session_data':
sections[current_section] = {
'content': '\n'.join(current_content),
'metadata': {**metadata, 'content_type': current_section}
}
current_content = []
current_section = 'session_data'
current_content.append(line)
# Add final section
if current_content:
sections[current_section] = {
'content': '\n'.join(current_content),
'metadata': {**metadata, 'content_type': current_section}
}
# If no sections detected, treat as single section
if not sections:
sections['default'] = {
'content': content,
'metadata': metadata
}
return sections
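The splitting strategy above can be sketched as a self-contained function; the indicator lists mirror the heuristics in `_split_content_into_sections`, but the metadata handling is dropped for brevity:

```python
def split_into_sections(content: str) -> dict:
    # Scan line by line, switch the active section whenever an indicator
    # keyword appears, and flush accumulated lines into the outgoing section.
    indicators = {
        'framework': ('SuperClaude', 'CLAUDE.md', 'FLAGS.md'),
        'user_code': ('def ', 'class ', 'function', 'import '),
        'session_data': ('session_', 'checkpoint_', 'cache_'),
    }
    sections, current, buffer = {}, 'default', []
    for line in content.split('\n'):
        target = current
        for name, keys in indicators.items():
            if any(key in line for key in keys):
                target = name
                break
        if target != current:
            if buffer:
                sections[current] = '\n'.join(buffer)
            current, buffer = target, []
        buffer.append(line)
    if buffer:
        sections[current] = '\n'.join(buffer)
    return sections
```

Feeding it mixed content produces one entry per detected section, with untagged leading lines landing in `default`.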
def _validate_compression_quality(self, compression_results: Dict[str, CompressionResult],
strategy: CompressionStrategy) -> dict:
"""Validate compression quality against standards."""
validation = {
'overall_quality_met': True,
'preservation_score': 0.0,
'compression_efficiency': 0.0,
'quality_issues': [],
'quality_warnings': []
}
if not compression_results:
return validation
# Calculate overall metrics
total_original = sum(result.original_length for result in compression_results.values())
total_compressed = sum(result.compressed_length for result in compression_results.values())
total_preservation = sum(result.preservation_score for result in compression_results.values())
if total_original > 0:
validation['compression_efficiency'] = (total_original - total_compressed) / total_original
validation['preservation_score'] = total_preservation / len(compression_results)
# Quality threshold validation
if validation['preservation_score'] < strategy.quality_threshold:
validation['overall_quality_met'] = False
validation['quality_issues'].append(
f"Preservation score {validation['preservation_score']:.2f} below threshold {strategy.quality_threshold}"
)
# Individual section validation
for section_name, result in compression_results.items():
if result.quality_score < 0.8:
validation['quality_warnings'].append(
f"Section '{section_name}' quality score low: {result.quality_score:.2f}"
)
if result.compression_ratio > 0.9: # Over 90% compression might be too aggressive
validation['quality_warnings'].append(
f"Section '{section_name}' compression ratio very high: {result.compression_ratio:.2f}"
)
return validation
def _record_compression_learning(self, context: dict, compression_results: Dict[str, CompressionResult],
quality_validation: dict):
"""Record compression learning for future optimization."""
overall_effectiveness = quality_validation['preservation_score'] * quality_validation['compression_efficiency']
# Record compression effectiveness
self.learning_engine.record_learning_event(
LearningType.PERFORMANCE_OPTIMIZATION,
AdaptationScope.USER,
context,
{
'compression_level': self.compression_engine.determine_compression_level(context).value,
'techniques_used': list(set().union(*[result.techniques_used for result in compression_results.values()])),
'preservation_score': quality_validation['preservation_score'],
'compression_efficiency': quality_validation['compression_efficiency']
},
overall_effectiveness,
0.9, # High confidence in compression metrics
{'hook': 'pre_compact', 'compression_learning': True}
)
# Record user preference if compression was requested
if context.get('user_requests_compression'):
self.learning_engine.record_learning_event(
LearningType.USER_PREFERENCE,
AdaptationScope.USER,
context,
{'compression_preference': 'enabled', 'user_satisfaction': overall_effectiveness},
overall_effectiveness,
0.8,
{'user_initiated_compression': True}
)
def _calculate_compression_efficiency(self, context: dict, execution_time_ms: float) -> float:
"""Calculate compression processing efficiency."""
content_length = context.get('content_length', 1)
# Efficiency based on processing speed per character
chars_per_ms = content_length / max(execution_time_ms, 1)
# Target: 100 chars per ms for good efficiency
target_chars_per_ms = 100
efficiency = min(chars_per_ms / target_chars_per_ms, 1.0)
return efficiency
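The throughput calculation above reduces to a small pure function (a sketch using the same 100-chars-per-ms target):

```python
def compression_efficiency(content_length: int, execution_time_ms: float,
                           target_chars_per_ms: float = 100.0) -> float:
    # Processing throughput relative to the target, capped at 1.0; the
    # max() guard avoids division by near-zero execution times.
    chars_per_ms = content_length / max(execution_time_ms, 1)
    return min(chars_per_ms / target_chars_per_ms, 1.0)
```

For example, 5000 characters processed in 25ms (200 chars/ms) saturates at 1.0, while 1000 characters in 20ms scores 0.5.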
def _generate_compression_config(self, context: dict, strategy: CompressionStrategy,
compression_results: Dict[str, CompressionResult],
quality_validation: dict) -> dict:
"""Generate comprehensive compression configuration."""
total_original = sum(result.original_length for result in compression_results.values())
total_compressed = sum(result.compressed_length for result in compression_results.values())
config = {
'compression_enabled': True,
'compression_level': strategy.level.value,
'selective_compression': True,
'strategy': {
'symbol_systems_enabled': strategy.symbol_systems_enabled,
'abbreviation_systems_enabled': strategy.abbreviation_systems_enabled,
'structural_optimization': strategy.structural_optimization,
'quality_threshold': strategy.quality_threshold
},
'results': {
'original_length': total_original,
'compressed_length': total_compressed,
'compression_ratio': (total_original - total_compressed) / max(total_original, 1),
'sections_processed': len(compression_results),
'techniques_used': list(set().union(*[result.techniques_used for result in compression_results.values()]))
},
'quality': {
'preservation_score': quality_validation['preservation_score'],
'quality_met': quality_validation['overall_quality_met'],
'issues': quality_validation['quality_issues'],
'warnings': quality_validation['quality_warnings']
},
'framework_protection': {
'framework_content_excluded': True,
'user_content_preserved': True,
'selective_processing_enabled': True
},
'optimization': {
'estimated_token_savings': int((total_original - total_compressed) * 0.7), # Rough estimate
'processing_efficiency': quality_validation['compression_efficiency'],
'recommendation': self._get_compression_recommendation(context, quality_validation)
},
'metadata': {
'hook_version': 'pre_compact_1.0',
'compression_timestamp': context['timestamp'],
'content_classification': 'selective_compression_applied'
}
}
return config
def _get_compression_recommendation(self, context: dict, quality_validation: dict) -> str:
"""Get compression recommendation based on results."""
if not quality_validation['overall_quality_met']:
return "Reduce compression level to maintain quality"
elif quality_validation['compression_efficiency'] > 0.7:
return "Excellent compression efficiency achieved"
elif quality_validation['compression_efficiency'] > 0.4:
return "Good compression efficiency, consider slight optimization"
else:
return "Low compression efficiency, consider alternative strategies"
def _create_fallback_compression_config(self, compact_request: dict, error: str) -> dict:
"""Create fallback compression configuration on error."""
return {
'compression_enabled': False,
'fallback_mode': True,
'error': error,
'strategy': {
'symbol_systems_enabled': False,
'abbreviation_systems_enabled': False,
'structural_optimization': False,
'quality_threshold': 1.0
},
'results': {
'original_length': len(compact_request.get('content', '')),
'compressed_length': len(compact_request.get('content', '')),
'compression_ratio': 0.0,
'sections_processed': 0,
'techniques_used': []
},
'quality': {
'preservation_score': 1.0,
'quality_met': False,
'issues': [f"Compression hook error: {error}"],
'warnings': []
},
'performance_metrics': {
'compression_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
def main():
"""Main hook execution function."""
try:
# Read compact request from stdin
compact_request = json.loads(sys.stdin.read())
# Initialize and run hook
hook = PreCompactHook()
result = hook.process_pre_compact(compact_request)
# Output result as JSON
print(json.dumps(result, indent=2))
except Exception as e:
# Output error as JSON
error_result = {
'compression_enabled': False,
'error': str(e),
'fallback_mode': True
}
print(json.dumps(error_result, indent=2))
sys.exit(1)
if __name__ == "__main__":
main()
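The stdin/stdout contract in `main()` can be exercised without the hook's dependencies. A hedged sketch, substituting a hypothetical stub for `PreCompactHook.process_pre_compact`, that reproduces the read-JSON/emit-JSON-or-fallback protocol:

```python
import io
import json

def run_hook(process, stdin_stream) -> str:
    # Mirrors main(): read one JSON request, process it, and return either
    # the result or a fallback-error payload, serialized as indented JSON.
    try:
        request = json.loads(stdin_stream.read())
        return json.dumps(process(request), indent=2)
    except Exception as e:
        return json.dumps({'compression_enabled': False,
                           'error': str(e),
                           'fallback_mode': True}, indent=2)

def stub(request: dict) -> dict:
    # Hypothetical stand-in for PreCompactHook.process_pre_compact.
    return {'compression_enabled': True,
            'sections_processed': len(request.get('content', '').split('\n'))}

ok = run_hook(stub, io.StringIO('{"content": "a\\nb"}'))
bad = run_hook(stub, io.StringIO('not json'))
```

Malformed input never raises: the error path degrades to the same fallback JSON shape the hook emits on failure.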


@@ -1,648 +0,0 @@
#!/usr/bin/env python3
"""
SuperClaude-Lite Pre-Tool-Use Hook
Implements ORCHESTRATOR.md + MCP routing intelligence for optimal tool selection.
Performance target: <200ms execution time.
This hook runs before every tool usage and provides:
- Intelligent tool routing and MCP server selection
- Performance optimization and parallel execution planning
- Context-aware tool configuration
- Fallback strategy implementation
- Real-time adaptation based on effectiveness
"""
import sys
import json
import time
import os
from pathlib import Path
from typing import Dict, Any, List, Optional
# Add shared modules to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "shared"))
from framework_logic import FrameworkLogic, OperationContext, OperationType, RiskLevel
from pattern_detection import PatternDetector, PatternMatch
from mcp_intelligence import MCPIntelligence, MCPActivationPlan
from compression_engine import CompressionEngine
from learning_engine import LearningEngine, LearningType, AdaptationScope
from yaml_loader import config_loader
from logger import log_hook_start, log_hook_end, log_decision, log_error
class PreToolUseHook:
"""
Pre-tool-use hook implementing SuperClaude orchestration intelligence.
Responsibilities:
- Analyze tool usage context and requirements
- Route to optimal MCP servers based on capability matching
- Configure parallel execution and performance optimization
- Apply learned adaptations for tool selection
- Implement fallback strategies for server failures
- Track tool effectiveness and performance metrics
"""
def __init__(self):
start_time = time.time()
# Initialize core components
self.framework_logic = FrameworkLogic()
self.pattern_detector = PatternDetector()
self.mcp_intelligence = MCPIntelligence()
self.compression_engine = CompressionEngine()
# Initialize learning engine with installation directory cache
cache_dir = Path(os.path.expanduser("~/.claude/cache"))
cache_dir.mkdir(parents=True, exist_ok=True)
self.learning_engine = LearningEngine(cache_dir)
# Load hook-specific configuration from SuperClaude config
self.hook_config = config_loader.get_hook_config('pre_tool_use')
# Load orchestrator configuration (from YAML if exists, otherwise use hook config)
try:
self.orchestrator_config = config_loader.load_config('orchestrator')
except FileNotFoundError:
# Fall back to hook configuration if YAML file not found
self.orchestrator_config = self.hook_config.get('configuration', {})
# Load performance configuration (from YAML if exists, otherwise use hook config)
try:
self.performance_config = config_loader.load_config('performance')
except FileNotFoundError:
# Fall back to performance targets from global configuration
self.performance_config = config_loader.get_performance_targets()
# Performance tracking using configuration
self.initialization_time = (time.time() - start_time) * 1000
self.performance_target_ms = config_loader.get_hook_config('pre_tool_use', 'performance_target_ms', 200)
def process_tool_use(self, tool_request: dict) -> dict:
"""
Process tool use request with intelligent routing.
Args:
tool_request: Tool usage request from Claude Code
Returns:
Enhanced tool configuration with SuperClaude intelligence
"""
start_time = time.time()
# Log hook start
log_hook_start("pre_tool_use", {
"tool_name": tool_request.get('tool_name', 'unknown'),
"has_parameters": bool(tool_request.get('parameters'))
})
try:
# Extract tool context
context = self._extract_tool_context(tool_request)
# Analyze tool requirements and capabilities
requirements = self._analyze_tool_requirements(context)
# Log routing decision
if requirements.get('mcp_server_hints'):
log_decision(
"pre_tool_use",
"mcp_server_selection",
",".join(requirements['mcp_server_hints']),
f"Tool '{context['tool_name']}' requires capabilities: {', '.join(requirements.get('capabilities_needed', []))}"
)
# Detect patterns for intelligent routing
routing_analysis = self._analyze_routing_patterns(context, requirements)
# Apply learned adaptations
enhanced_routing = self._apply_routing_adaptations(context, routing_analysis)
# Create optimal execution plan
execution_plan = self._create_execution_plan(context, enhanced_routing)
# Log execution strategy decision
log_decision(
"pre_tool_use",
"execution_strategy",
execution_plan['execution_strategy'],
f"Complexity: {context.get('complexity_score', 0):.2f}, Files: {context.get('file_count', 1)}"
)
# Configure tool enhancement
tool_config = self._configure_tool_enhancement(context, execution_plan)
# Record learning event
self._record_tool_learning(context, tool_config)
# Performance validation
execution_time = (time.time() - start_time) * 1000
tool_config['performance_metrics'] = {
'routing_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'efficiency_score': self._calculate_efficiency_score(context, execution_time)
}
# Log successful completion
log_hook_end(
"pre_tool_use",
int(execution_time),
True,
{
"tool_name": context['tool_name'],
"mcp_servers": tool_config.get('mcp_integration', {}).get('servers', []),
"enhanced_mode": tool_config.get('enhanced_mode', False)
}
)
return tool_config
except Exception as e:
# Log error
execution_time = (time.time() - start_time) * 1000
log_error(
"pre_tool_use",
str(e),
{"tool_name": tool_request.get('tool_name', 'unknown')}
)
log_hook_end("pre_tool_use", int(execution_time), False)
# Graceful fallback on error
return self._create_fallback_tool_config(tool_request, str(e))
def _extract_tool_context(self, tool_request: dict) -> dict:
"""Extract and enrich tool usage context."""
context = {
'tool_name': tool_request.get('tool_name', ''),
'tool_parameters': tool_request.get('parameters', {}),
'user_intent': tool_request.get('user_intent', ''),
'session_context': tool_request.get('session_context', {}),
'previous_tools': tool_request.get('previous_tools', []),
'operation_sequence': tool_request.get('operation_sequence', []),
'resource_state': tool_request.get('resource_state', {}),
'timestamp': time.time()
}
# Extract operation characteristics
context.update(self._analyze_operation_characteristics(context))
# Analyze tool chain context
context.update(self._analyze_tool_chain_context(context))
return context
def _analyze_operation_characteristics(self, context: dict) -> dict:
"""Analyze operation characteristics for routing decisions."""
characteristics = {
'operation_type': OperationType.READ,
'complexity_score': 0.0,
'file_count': 1,
'directory_count': 1,
'parallelizable': False,
'resource_intensive': False,
'requires_intelligence': False
}
tool_name = context['tool_name']
tool_params = context['tool_parameters']
# Determine operation type from tool
if tool_name in ['Write', 'Edit', 'MultiEdit']:
characteristics['operation_type'] = OperationType.WRITE
characteristics['complexity_score'] += 0.2
elif tool_name in ['Build', 'Implement']:
characteristics['operation_type'] = OperationType.BUILD
characteristics['complexity_score'] += 0.4
elif tool_name in ['Test', 'Validate']:
characteristics['operation_type'] = OperationType.TEST
characteristics['complexity_score'] += 0.1
elif tool_name in ['Analyze', 'Debug']:
characteristics['operation_type'] = OperationType.ANALYZE
characteristics['complexity_score'] += 0.3
characteristics['requires_intelligence'] = True
# Analyze file/directory scope
if 'file_path' in tool_params:
characteristics['file_count'] = 1
elif 'files' in tool_params:
file_list = tool_params['files']
characteristics['file_count'] = len(file_list) if isinstance(file_list, list) else 1
if characteristics['file_count'] > 3:
characteristics['parallelizable'] = True
characteristics['complexity_score'] += 0.3
if 'directory' in tool_params or 'path' in tool_params:
path_param = tool_params.get('directory') or tool_params.get('path', '')
if '*' in str(path_param) or '**' in str(path_param):
characteristics['directory_count'] = 5 # Estimate for glob patterns
characteristics['complexity_score'] += 0.2
characteristics['parallelizable'] = True
# Resource intensity analysis
if characteristics['file_count'] > 10 or characteristics['complexity_score'] > 0.6:
characteristics['resource_intensive'] = True
# Intelligence requirements
intelligence_tools = ['Analyze', 'Debug', 'Optimize', 'Refactor', 'Generate']
if any(tool in tool_name for tool in intelligence_tools):
characteristics['requires_intelligence'] = True
return characteristics
def _analyze_tool_chain_context(self, context: dict) -> dict:
"""Analyze tool chain context for optimization opportunities."""
chain_analysis = {
'chain_length': len(context['previous_tools']),
'pattern_detected': None,
'optimization_opportunity': False,
'cache_opportunity': False
}
previous_tools = context['previous_tools']
if len(previous_tools) >= 2:
# Detect common patterns
tool_names = [tool.get('name', '') for tool in previous_tools[-3:]]
# Read-Edit pattern
if any('Read' in name for name in tool_names) and any('Edit' in name for name in tool_names):
chain_analysis['pattern_detected'] = 'read_edit_pattern'
chain_analysis['optimization_opportunity'] = True
# Multiple file operations
if sum(1 for name in tool_names if 'file' in name.lower()) >= 2:
chain_analysis['pattern_detected'] = 'multi_file_pattern'
chain_analysis['optimization_opportunity'] = True
# Analysis chain
if sum(1 for name in tool_names if any(word in name for word in ['Analyze', 'Search', 'Find'])) >= 2:
chain_analysis['pattern_detected'] = 'analysis_chain'
chain_analysis['cache_opportunity'] = True
return chain_analysis
def _analyze_tool_requirements(self, context: dict) -> dict:
"""Analyze tool requirements for capability matching."""
requirements = {
'capabilities_needed': [],
'performance_requirements': {},
'quality_requirements': {},
'mcp_server_hints': [],
'native_tool_sufficient': True
}
tool_name = context['tool_name']
characteristics = context
# Determine required capabilities
if characteristics.get('requires_intelligence'):
requirements['capabilities_needed'].extend(['analysis', 'reasoning', 'context_understanding'])
requirements['native_tool_sufficient'] = False
if characteristics.get('complexity_score', 0) > 0.6:
requirements['capabilities_needed'].extend(['complex_reasoning', 'systematic_analysis'])
requirements['mcp_server_hints'].append('sequential')
if characteristics.get('file_count', 1) > 5:
requirements['capabilities_needed'].extend(['multi_file_coordination', 'semantic_understanding'])
requirements['mcp_server_hints'].append('serena')
# UI/component operations
if any(word in context.get('user_intent', '').lower() for word in ['component', 'ui', 'frontend', 'design']):
requirements['capabilities_needed'].append('ui_generation')
requirements['mcp_server_hints'].append('magic')
# Documentation/library operations
if any(word in context.get('user_intent', '').lower() for word in ['library', 'documentation', 'framework', 'api']):
requirements['capabilities_needed'].append('documentation_access')
requirements['mcp_server_hints'].append('context7')
# Testing operations
if tool_name in ['Test'] or 'test' in context.get('user_intent', '').lower():
requirements['capabilities_needed'].append('testing_automation')
requirements['mcp_server_hints'].append('playwright')
# Performance requirements
if characteristics.get('resource_intensive'):
requirements['performance_requirements'] = {
'max_execution_time_ms': 5000,
'memory_efficiency_required': True,
'parallel_execution_preferred': True
}
else:
requirements['performance_requirements'] = {
'max_execution_time_ms': 2000,
'response_time_critical': True
}
# Quality requirements
if context.get('session_context', {}).get('is_production', False):
requirements['quality_requirements'] = {
'validation_required': True,
'error_handling_critical': True,
'rollback_capability_needed': True
}
return requirements
def _analyze_routing_patterns(self, context: dict, requirements: dict) -> dict:
"""Analyze patterns for intelligent routing decisions."""
# Create operation data for pattern detection
operation_data = {
'operation_type': context.get('operation_type', OperationType.READ).value,
'file_count': context.get('file_count', 1),
'complexity_score': context.get('complexity_score', 0.0),
'tool_name': context['tool_name']
}
# Run pattern detection
detection_result = self.pattern_detector.detect_patterns(
context.get('user_intent', ''),
context,
operation_data
)
# Create MCP activation plan
mcp_plan = self.mcp_intelligence.create_activation_plan(
context.get('user_intent', ''),
context,
operation_data
)
return {
'pattern_matches': detection_result.matches,
'recommended_mcp_servers': detection_result.recommended_mcp_servers,
'mcp_activation_plan': mcp_plan,
'routing_confidence': detection_result.confidence_score,
'optimization_opportunities': self._identify_optimization_opportunities(context, requirements)
}
def _identify_optimization_opportunities(self, context: dict, requirements: dict) -> list:
"""Identify optimization opportunities for tool execution."""
opportunities = []
# Parallel execution opportunity
if context.get('parallelizable') and context.get('file_count', 1) > 3:
opportunities.append({
'type': 'parallel_execution',
'description': 'Multi-file operation suitable for parallel processing',
'estimated_speedup': min(context.get('file_count', 1) * 0.3, 2.0)
})
# Caching opportunity
if context.get('cache_opportunity'):
opportunities.append({
'type': 'result_caching',
'description': 'Analysis results can be cached for reuse',
'estimated_speedup': 1.5
})
# MCP server coordination
if len(requirements.get('mcp_server_hints', [])) > 1:
opportunities.append({
'type': 'mcp_coordination',
'description': 'Multiple MCP servers can work together',
'quality_improvement': 0.2
})
# Intelligence routing
if not requirements.get('native_tool_sufficient'):
opportunities.append({
'type': 'intelligence_routing',
'description': 'Operation benefits from MCP server intelligence',
'quality_improvement': 0.3
})
return opportunities
def _apply_routing_adaptations(self, context: dict, routing_analysis: dict) -> dict:
"""Apply learned adaptations to routing decisions."""
base_routing = {
'recommended_mcp_servers': routing_analysis['recommended_mcp_servers'],
'mcp_activation_plan': routing_analysis['mcp_activation_plan'],
'optimization_opportunities': routing_analysis['optimization_opportunities']
}
# Apply learning engine adaptations
enhanced_routing = self.learning_engine.apply_adaptations(context, base_routing)
return enhanced_routing
def _create_execution_plan(self, context: dict, enhanced_routing: dict) -> dict:
"""Create optimal execution plan for tool usage."""
plan = {
'execution_strategy': 'direct',
'mcp_servers_required': enhanced_routing.get('recommended_mcp_servers', []),
'parallel_execution': False,
'caching_enabled': False,
'fallback_strategy': 'native_tools',
'performance_optimizations': [],
'estimated_execution_time_ms': 500
}
# Determine execution strategy
if context.get('complexity_score', 0) > 0.6:
plan['execution_strategy'] = 'intelligent_routing'
elif context.get('file_count', 1) > 5:
plan['execution_strategy'] = 'parallel_coordination'
# Configure parallel execution
if context.get('parallelizable') and context.get('file_count', 1) > 3:
plan['parallel_execution'] = True
plan['performance_optimizations'].append('parallel_file_processing')
plan['estimated_execution_time_ms'] = int(plan['estimated_execution_time_ms'] * 0.6)
# Configure caching
if context.get('cache_opportunity'):
plan['caching_enabled'] = True
plan['performance_optimizations'].append('result_caching')
# Configure MCP coordination
mcp_servers = plan['mcp_servers_required']
if len(mcp_servers) > 1:
plan['coordination_strategy'] = enhanced_routing.get('mcp_activation_plan', {}).get('coordination_strategy', 'collaborative')
# Estimate execution time based on complexity
base_time = 200
complexity_multiplier = 1 + context.get('complexity_score', 0.0)
file_multiplier = 1 + (context.get('file_count', 1) - 1) * 0.1
plan['estimated_execution_time_ms'] = int(base_time * complexity_multiplier * file_multiplier)
return plan
def _configure_tool_enhancement(self, context: dict, execution_plan: dict) -> dict:
"""Configure tool enhancement based on execution plan."""
tool_config = {
'tool_name': context['tool_name'],
'enhanced_mode': execution_plan['execution_strategy'] != 'direct',
'mcp_integration': {
'enabled': len(execution_plan['mcp_servers_required']) > 0,
'servers': execution_plan['mcp_servers_required'],
'coordination_strategy': execution_plan.get('coordination_strategy', 'single_server')
},
'performance_optimization': {
'parallel_execution': execution_plan['parallel_execution'],
'caching_enabled': execution_plan['caching_enabled'],
'optimizations': execution_plan['performance_optimizations']
},
'quality_enhancement': {
'validation_enabled': context.get('session_context', {}).get('is_production', False),
'error_recovery': True,
'context_preservation': True
},
'execution_metadata': {
'estimated_time_ms': execution_plan['estimated_execution_time_ms'],
'complexity_score': context.get('complexity_score', 0.0),
'intelligence_level': self._determine_intelligence_level(context)
}
}
# Add tool-specific enhancements
tool_config.update(self._get_tool_specific_enhancements(context, execution_plan))
return tool_config
def _determine_intelligence_level(self, context: dict) -> str:
"""Determine required intelligence level for operation."""
complexity = context.get('complexity_score', 0.0)
if complexity >= 0.8:
return 'high'
elif complexity >= 0.5:
return 'medium'
elif context.get('requires_intelligence'):
return 'medium'
else:
return 'low'
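The threshold mapping above, extracted into a standalone sketch:

```python
def intelligence_level(complexity: float, requires_intelligence: bool = False) -> str:
    # Complexity >= 0.8 -> 'high'; >= 0.5 -> 'medium'; below that, 'medium'
    # only when intelligence is explicitly required, otherwise 'low'.
    if complexity >= 0.8:
        return 'high'
    if complexity >= 0.5 or requires_intelligence:
        return 'medium'
    return 'low'
```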
def _get_tool_specific_enhancements(self, context: dict, execution_plan: dict) -> dict:
"""Get tool-specific enhancement configurations."""
tool_name = context['tool_name']
enhancements = {}
# File operation enhancements
if tool_name in ['Read', 'Write', 'Edit']:
enhancements['file_operations'] = {
'integrity_check': True,
'backup_on_write': context.get('session_context', {}).get('is_production', False),
'encoding_detection': True
}
# Multi-file operation enhancements
if tool_name in ['MultiEdit', 'Batch'] or context.get('file_count', 1) > 3:
enhancements['multi_file_operations'] = {
'transaction_mode': True,
'rollback_capability': True,
'progress_tracking': True
}
# Analysis operation enhancements
if tool_name in ['Analyze', 'Debug', 'Search']:
enhancements['analysis_operations'] = {
'deep_context_analysis': context.get('complexity_score', 0.0) > 0.5,
'semantic_understanding': 'serena' in execution_plan['mcp_servers_required'],
'pattern_recognition': True
}
# Build/Implementation enhancements
if tool_name in ['Build', 'Implement', 'Generate']:
enhancements['build_operations'] = {
'framework_integration': 'context7' in execution_plan['mcp_servers_required'],
'component_generation': 'magic' in execution_plan['mcp_servers_required'],
'quality_validation': True
}
return enhancements
def _calculate_efficiency_score(self, context: dict, execution_time_ms: float) -> float:
"""Calculate efficiency score for the routing decision."""
# Base efficiency is inverse of execution time relative to target
time_efficiency = min(self.performance_target_ms / max(execution_time_ms, 1), 1.0)
# Complexity handling efficiency
complexity = context.get('complexity_score', 0.0)
complexity_efficiency = 1.0 - (complexity * 0.3) # Some complexity is expected
# Resource utilization efficiency
resource_usage = context.get('resource_state', {}).get('usage_percent', 0)
resource_efficiency = 1.0 - max(resource_usage - 70, 0) / 100.0
# Weighted efficiency score
efficiency_score = (time_efficiency * 0.4 +
complexity_efficiency * 0.3 +
resource_efficiency * 0.3)
return max(min(efficiency_score, 1.0), 0.0)
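The weighted blend above can be written as a pure function (a sketch assuming the same 200ms target and 70% resource-usage threshold):

```python
def routing_efficiency(execution_time_ms: float, complexity: float,
                       resource_usage_percent: float,
                       target_ms: float = 200.0) -> float:
    # 40% timing (relative to the target), 30% complexity handling,
    # 30% resource headroom above a 70% usage threshold; clamped to [0, 1].
    time_eff = min(target_ms / max(execution_time_ms, 1), 1.0)
    complexity_eff = 1.0 - (complexity * 0.3)
    resource_eff = 1.0 - max(resource_usage_percent - 70, 0) / 100.0
    score = time_eff * 0.4 + complexity_eff * 0.3 + resource_eff * 0.3
    return max(min(score, 1.0), 0.0)
```

A routing decision twice over target (400ms) with zero complexity and idle resources still scores 0.8, since timing carries only 40% of the weight.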
def _record_tool_learning(self, context: dict, tool_config: dict):
"""Record tool usage for learning purposes."""
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
{
'tool_name': context['tool_name'],
'mcp_servers_used': tool_config.get('mcp_integration', {}).get('servers', []),
'execution_strategy': tool_config.get('execution_metadata', {}).get('intelligence_level', 'low'),
'optimizations_applied': tool_config.get('performance_optimization', {}).get('optimizations', [])
},
0.8, # Assume good effectiveness (will be updated later)
0.7, # Medium confidence until validated
{'hook': 'pre_tool_use', 'version': '1.0'}
)
def _create_fallback_tool_config(self, tool_request: dict, error: str) -> dict:
"""Create fallback tool configuration on error."""
return {
'tool_name': tool_request.get('tool_name', 'unknown'),
'enhanced_mode': False,
'fallback_mode': True,
'error': error,
'mcp_integration': {
'enabled': False,
'servers': [],
'coordination_strategy': 'none'
},
'performance_optimization': {
'parallel_execution': False,
'caching_enabled': False,
'optimizations': []
},
'performance_metrics': {
'routing_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
def main():
"""Main hook execution function."""
try:
# Read tool request from stdin
tool_request = json.loads(sys.stdin.read())
# Initialize and run hook
hook = PreToolUseHook()
result = hook.process_tool_use(tool_request)
# Output result as JSON
print(json.dumps(result, indent=2))
except Exception as e:
# Output error as JSON
error_result = {
'enhanced_mode': False,
'error': str(e),
'fallback_mode': True
}
print(json.dumps(error_result, indent=2))
sys.exit(1)
if __name__ == "__main__":
main()
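Every hook follows the same stdin-to-stdout JSON contract shown in `main()` above. The sketch below demonstrates that contract with a hypothetical inline stand-in hook; the real hook performs MCP routing and optimization before responding:

```python
import json
import subprocess
import sys

# Hypothetical stand-in that mimics the hooks' I/O contract:
# read a JSON request on stdin, print a JSON response on stdout.
stand_in_hook = (
    "import sys, json; "
    "req = json.loads(sys.stdin.read()); "
    "print(json.dumps({'tool_name': req['tool_name'], 'enhanced_mode': True}))"
)
result = subprocess.run(
    [sys.executable, "-c", stand_in_hook],
    input=json.dumps({"tool_name": "Read"}),
    capture_output=True,
    text=True,
)
config = json.loads(result.stdout)
print(config["tool_name"])  # Read
```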


@@ -1,704 +0,0 @@
#!/usr/bin/env python3
"""
SuperClaude-Lite Session Start Hook
Implements SESSION_LIFECYCLE.md + FLAGS.md logic for intelligent session bootstrap.
Performance target: <50ms execution time.
This hook runs at the start of every Claude Code session and provides:
- Smart project context loading with framework exclusion
- Automatic mode detection and activation
- MCP server intelligence routing
- User preference adaptation
- Performance-optimized initialization
"""
import sys
import json
import time
import os
from pathlib import Path
# Add shared modules to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "shared"))
from framework_logic import FrameworkLogic, OperationContext, OperationType, RiskLevel
from pattern_detection import PatternDetector, PatternType
from mcp_intelligence import MCPIntelligence
from compression_engine import CompressionEngine, CompressionLevel, ContentType
from learning_engine import LearningEngine, LearningType, AdaptationScope
from yaml_loader import config_loader
from logger import log_hook_start, log_hook_end, log_decision, log_error
class SessionStartHook:
"""
Session start hook implementing SuperClaude intelligence.
Responsibilities:
- Initialize session with project context
- Apply user preferences and learned adaptations
- Activate appropriate modes and MCP servers
- Set up compression and performance optimization
- Track session metrics and performance
"""
def __init__(self):
start_time = time.time()
# Initialize only essential components immediately
self.framework_logic = FrameworkLogic()
# Lazy-load other components to improve performance
self._pattern_detector = None
self._mcp_intelligence = None
self._compression_engine = None
self._learning_engine = None
# Use installation directory for cache
cache_dir = Path(os.path.expanduser("~/.claude/cache"))
cache_dir.mkdir(parents=True, exist_ok=True)
self._cache_dir = cache_dir
# Load hook-specific configuration from SuperClaude config
self.hook_config = config_loader.get_hook_config('session_start')
# Load session configuration (from YAML if exists, otherwise use hook config)
try:
self.session_config = config_loader.load_config('session')
except FileNotFoundError:
# Fall back to hook configuration if YAML file not found
self.session_config = self.hook_config.get('configuration', {})
# Performance tracking using configuration
self.initialization_time = (time.time() - start_time) * 1000
self.performance_target_ms = config_loader.get_hook_config('session_start', 'performance_target_ms', 50)
@property
def pattern_detector(self):
"""Lazy-load pattern detector to improve initialization performance."""
if self._pattern_detector is None:
self._pattern_detector = PatternDetector()
return self._pattern_detector
@property
def mcp_intelligence(self):
"""Lazy-load MCP intelligence to improve initialization performance."""
if self._mcp_intelligence is None:
self._mcp_intelligence = MCPIntelligence()
return self._mcp_intelligence
@property
def compression_engine(self):
"""Lazy-load compression engine to improve initialization performance."""
if self._compression_engine is None:
self._compression_engine = CompressionEngine()
return self._compression_engine
@property
def learning_engine(self):
"""Lazy-load learning engine to improve initialization performance."""
if self._learning_engine is None:
self._learning_engine = LearningEngine(self._cache_dir)
return self._learning_engine
def initialize_session(self, session_context: dict) -> dict:
"""
Initialize session with SuperClaude intelligence.
Args:
session_context: Session initialization context from Claude Code
Returns:
Enhanced session configuration
"""
start_time = time.time()
# Log hook start
log_hook_start("session_start", {
"project_path": session_context.get('project_path', 'unknown'),
"user_id": session_context.get('user_id', 'anonymous'),
"has_previous_session": bool(session_context.get('previous_session_id'))
})
try:
# Extract session context
context = self._extract_session_context(session_context)
# Detect patterns and operation intent
detection_result = self._detect_session_patterns(context)
# Apply learned adaptations
enhanced_recommendations = self._apply_learning_adaptations(
context, detection_result
)
# Create MCP activation plan
mcp_plan = self._create_mcp_activation_plan(
context, enhanced_recommendations
)
# Configure compression strategy
compression_config = self._configure_compression(context)
# Generate session configuration
session_config = self._generate_session_config(
context, enhanced_recommendations, mcp_plan, compression_config
)
# Record learning event
self._record_session_learning(context, session_config)
# Detect and activate modes
activated_modes = self._activate_intelligent_modes(context, enhanced_recommendations)
# Log mode activation decisions
for mode in activated_modes:
log_decision(
"session_start",
"mode_activation",
mode['name'],
f"Activated based on: {mode.get('trigger', 'automatic detection')}"
)
# Configure MCP server activation
mcp_configuration = self._configure_mcp_servers(context, activated_modes)
# Log MCP server decisions
if mcp_configuration.get('enabled_servers'):
log_decision(
"session_start",
"mcp_server_activation",
",".join(mcp_configuration['enabled_servers']),
f"Project type: {context.get('project_type', 'unknown')}"
)
# Performance validation
execution_time = (time.time() - start_time) * 1000
session_config['performance_metrics'] = {
'initialization_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'efficiency_score': self._calculate_initialization_efficiency(execution_time)
}
# Persist session context to cache for other hooks
session_id = context['session_id']
session_file_path = self._cache_dir / f"session_{session_id}.json"
try:
with open(session_file_path, 'w') as f:
json.dump(session_config, f, indent=2)
except Exception as e:
# Log error but don't fail session initialization
log_error("session_start", f"Failed to persist session context: {str(e)}", {"session_id": session_id})
# Log successful completion
log_hook_end(
"session_start",
int(execution_time),
True,
{
"project_type": context.get('project_type', 'unknown'),
"modes_activated": [m['name'] for m in activated_modes],
"mcp_servers": mcp_configuration.get('enabled_servers', [])
}
)
return session_config
except Exception as e:
# Log error
execution_time = (time.time() - start_time) * 1000
log_error(
"session_start",
str(e),
{"project_path": session_context.get('project_path', 'unknown')}
)
log_hook_end("session_start", int(execution_time), False)
# Graceful fallback on error
return self._create_fallback_session_config(session_context, str(e))
def _extract_session_context(self, session_data: dict) -> dict:
"""Extract and enrich session context."""
context = {
'session_id': session_data.get('session_id', 'unknown'),
'project_path': session_data.get('project_path', ''),
'user_input': session_data.get('user_input', ''),
'conversation_length': session_data.get('conversation_length', 0),
'resource_usage_percent': session_data.get('resource_usage_percent', 0),
'is_continuation': session_data.get('is_continuation', False),
'previous_session_id': session_data.get('previous_session_id'),
'timestamp': time.time()
}
# Detect project characteristics
if context['project_path']:
project_path = Path(context['project_path'])
context.update(self._analyze_project_structure(project_path))
# Analyze user input for intent
if context['user_input']:
context.update(self._analyze_user_intent(context['user_input']))
return context
def _analyze_project_structure(self, project_path: Path) -> dict:
"""Analyze project structure for intelligent configuration."""
analysis = {
'project_type': 'unknown',
'has_tests': False,
'has_frontend': False,
'has_backend': False,
'framework_detected': None,
'file_count_estimate': 0,
'directory_count_estimate': 0,
'is_production': False
}
try:
if not project_path.exists():
return analysis
# Quick file/directory count (limited for performance)
from itertools import islice  # local import keeps this change self-contained
files = list(islice(project_path.rglob('*'), 100))  # stop traversal after 100 entries for performance
analysis['file_count_estimate'] = len([f for f in files if f.is_file()])
analysis['directory_count_estimate'] = len([f for f in files if f.is_dir()])
# Detect project type
if (project_path / 'package.json').exists():
analysis['project_type'] = 'nodejs'
analysis['has_frontend'] = True
elif (project_path / 'pyproject.toml').exists() or (project_path / 'setup.py').exists():
analysis['project_type'] = 'python'
elif (project_path / 'Cargo.toml').exists():
analysis['project_type'] = 'rust'
elif (project_path / 'go.mod').exists():
analysis['project_type'] = 'go'
# Check for tests
test_patterns = ['test', 'tests', '__tests__', 'spec']
analysis['has_tests'] = any(
(project_path / pattern).exists() or
any(pattern in str(f) for f in files[:20])
for pattern in test_patterns
)
# Check for production indicators
prod_indicators = ['.env.production', 'docker-compose.yml', 'Dockerfile', '.github']
analysis['is_production'] = any(
(project_path / indicator).exists() for indicator in prod_indicators
)
# Framework detection (quick check)
if analysis['project_type'] == 'nodejs':
package_json = project_path / 'package.json'
if package_json.exists():
try:
with open(package_json) as f:
pkg_data = json.load(f)
deps = {**pkg_data.get('dependencies', {}), **pkg_data.get('devDependencies', {})}
if 'react' in deps:
analysis['framework_detected'] = 'react'
elif 'vue' in deps:
analysis['framework_detected'] = 'vue'
elif 'angular' in deps:
analysis['framework_detected'] = 'angular'
elif 'express' in deps:
analysis['has_backend'] = True
except (OSError, json.JSONDecodeError):
pass  # unreadable or malformed package.json; skip framework detection
except Exception:
# Return partial analysis on error
pass
return analysis
def _analyze_user_intent(self, user_input: str) -> dict:
"""Analyze user input for session intent and complexity."""
intent_analysis = {
'operation_type': OperationType.READ,
'complexity_score': 0.0,
'brainstorming_likely': False,
'user_expertise': 'intermediate',
'urgency': 'normal'
}
user_lower = user_input.lower()
# Detect operation type
if any(word in user_lower for word in ['build', 'create', 'implement', 'develop']):
intent_analysis['operation_type'] = OperationType.BUILD
intent_analysis['complexity_score'] += 0.3
elif any(word in user_lower for word in ['fix', 'debug', 'troubleshoot', 'solve']):
intent_analysis['operation_type'] = OperationType.ANALYZE
intent_analysis['complexity_score'] += 0.2
elif any(word in user_lower for word in ['refactor', 'restructure', 'reorganize']):
intent_analysis['operation_type'] = OperationType.REFACTOR
intent_analysis['complexity_score'] += 0.4
elif any(word in user_lower for word in ['test', 'validate', 'check']):
intent_analysis['operation_type'] = OperationType.TEST
intent_analysis['complexity_score'] += 0.1
# Detect brainstorming needs
brainstorm_indicators = [
'not sure', 'thinking about', 'maybe', 'possibly', 'could we',
'brainstorm', 'explore', 'figure out', 'new project', 'startup idea'
]
intent_analysis['brainstorming_likely'] = any(
indicator in user_lower for indicator in brainstorm_indicators
)
# Complexity indicators
complexity_indicators = [
'complex', 'complicated', 'comprehensive', 'entire', 'whole', 'system-wide',
'architecture', 'multiple', 'many', 'several'
]
for indicator in complexity_indicators:
if indicator in user_lower:
intent_analysis['complexity_score'] += 0.2
intent_analysis['complexity_score'] = min(intent_analysis['complexity_score'], 1.0)
# Detect urgency
if any(word in user_lower for word in ['urgent', 'asap', 'quickly', 'fast']):
intent_analysis['urgency'] = 'high'
elif any(word in user_lower for word in ['when you can', 'no rush', 'eventually']):
intent_analysis['urgency'] = 'low'
return intent_analysis
def _detect_session_patterns(self, context: dict) -> dict:
"""Detect patterns for intelligent session configuration."""
# Skip pattern detection if no user input provided
if not context.get('user_input', '').strip():
return {
'pattern_matches': [],
'recommended_modes': [],
'recommended_mcp_servers': [],
'suggested_flags': [],
'confidence_score': 0.0
}
# Create operation context for pattern detection
operation_data = {
'operation_type': context.get('operation_type', OperationType.READ).value,
'file_count': context.get('file_count_estimate', 1),
'directory_count': context.get('directory_count_estimate', 1),
'complexity_score': context.get('complexity_score', 0.0),
'has_external_dependencies': context.get('framework_detected') is not None,
'project_type': context.get('project_type', 'unknown')
}
# Run pattern detection
detection_result = self.pattern_detector.detect_patterns(
context.get('user_input', ''),
context,
operation_data
)
return {
'pattern_matches': detection_result.matches,
'recommended_modes': detection_result.recommended_modes,
'recommended_mcp_servers': detection_result.recommended_mcp_servers,
'suggested_flags': detection_result.suggested_flags,
'confidence_score': detection_result.confidence_score
}
def _apply_learning_adaptations(self, context: dict, detection_result: dict) -> dict:
"""Apply learned adaptations to enhance recommendations."""
base_recommendations = {
'recommended_modes': detection_result['recommended_modes'],
'recommended_mcp_servers': detection_result['recommended_mcp_servers'],
'suggested_flags': detection_result['suggested_flags']
}
# Apply learning engine adaptations
enhanced_recommendations = self.learning_engine.apply_adaptations(
context, base_recommendations
)
# Apply user preferences if available
self._apply_user_preferences(enhanced_recommendations, context)
return enhanced_recommendations
def _apply_user_preferences(self, recommendations: dict, context: dict):
"""Apply stored user preferences to recommendations."""
# Check for preferred tools for different operations
operation_types = ['read', 'write', 'edit', 'analyze', 'mcp']
for op_type in operation_types:
pref_key = f"tool_{op_type}"
preferred_tool = self.learning_engine.get_last_preference(pref_key)
if preferred_tool:
# Add a hint to the recommendations
if 'preference_hints' not in recommendations:
recommendations['preference_hints'] = {}
recommendations['preference_hints'][op_type] = preferred_tool
# Store project-specific information if we have a project path
if context.get('project_path'):
project_path = context['project_path']
# Store project type if detected
if context.get('project_type') and context['project_type'] != 'unknown':
self.learning_engine.update_project_info(
project_path,
'project_type',
context['project_type']
)
# Store framework if detected
if context.get('framework_detected'):
self.learning_engine.update_project_info(
project_path,
'framework',
context['framework_detected']
)
def _create_mcp_activation_plan(self, context: dict, recommendations: dict) -> dict:
"""Create MCP server activation plan."""
# Create operation data for MCP intelligence
operation_data = {
'file_count': context.get('file_count_estimate', 1),
'complexity_score': context.get('complexity_score', 0.0),
'operation_type': context.get('operation_type', OperationType.READ).value
}
# Create MCP activation plan
mcp_plan = self.mcp_intelligence.create_activation_plan(
context.get('user_input', ''),
context,
operation_data
)
return {
'servers_to_activate': mcp_plan.servers_to_activate,
'activation_order': mcp_plan.activation_order,
'estimated_cost_ms': mcp_plan.estimated_cost_ms,
'coordination_strategy': mcp_plan.coordination_strategy,
'fallback_strategy': mcp_plan.fallback_strategy
}
def _configure_compression(self, context: dict) -> dict:
"""Configure compression strategy for the session."""
compression_level = self.compression_engine.determine_compression_level(context)
return {
'compression_level': compression_level.value,
'estimated_savings': self.compression_engine._estimate_compression_savings(compression_level),
'quality_impact': self.compression_engine._estimate_quality_impact(compression_level),
'selective_compression_enabled': True
}
def _generate_session_config(self, context: dict, recommendations: dict,
mcp_plan: dict, compression_config: dict) -> dict:
"""Generate comprehensive session configuration."""
config = {
'session_id': context['session_id'],
'superclaude_enabled': True,
'initialization_timestamp': context['timestamp'],
# Mode configuration
'active_modes': recommendations.get('recommended_modes', []),
'mode_configurations': self._get_mode_configurations(recommendations),
# MCP server configuration
'mcp_servers': {
'enabled_servers': mcp_plan['servers_to_activate'],
'activation_order': mcp_plan['activation_order'],
'coordination_strategy': mcp_plan['coordination_strategy']
},
# Compression configuration
'compression': compression_config,
# Performance configuration
'performance': {
'resource_monitoring_enabled': True,
'optimization_targets': self.framework_logic.performance_targets,
'delegation_threshold': 0.4 if context.get('complexity_score', 0) > 0.4 else 0.6
},
# Learning configuration
'learning': {
'adaptation_enabled': True,
'effectiveness_tracking': True,
'applied_adaptations': recommendations.get('applied_adaptations', [])
},
# Context preservation
'context': {
'project_type': context.get('project_type', 'unknown'),
'complexity_score': context.get('complexity_score', 0.0),
'brainstorming_mode': context.get('brainstorming_likely', False),
'user_expertise': context.get('user_expertise', 'intermediate')
},
# Quality gates
'quality_gates': self._configure_quality_gates(context),
# Session metadata
'metadata': {
'framework_version': '1.0.0',
'hook_version': 'session_start_1.0',
'configuration_source': 'superclaude_intelligence'
}
}
return config
def _get_mode_configurations(self, recommendations: dict) -> dict:
"""Get specific configuration for activated modes."""
mode_configs = {}
for mode in recommendations.get('recommended_modes', []):
if mode == 'brainstorming':
mode_configs[mode] = {
'max_rounds': 15,
'convergence_threshold': 0.85,
'auto_handoff_enabled': True
}
elif mode == 'task_management':
mode_configs[mode] = {
'delegation_enabled': True,
'wave_orchestration': True,
'auto_checkpoints': True
}
elif mode == 'token_efficiency':
mode_configs[mode] = {
'compression_level': 'adaptive',
'symbol_systems_enabled': True,
'selective_preservation': True
}
return mode_configs
def _configure_quality_gates(self, context: dict) -> list:
"""Configure quality gates based on context."""
# Create operation context for quality gate determination
operation_context = OperationContext(
operation_type=context.get('operation_type', OperationType.READ),
file_count=context.get('file_count_estimate', 1),
directory_count=context.get('directory_count_estimate', 1),
has_tests=context.get('has_tests', False),
is_production=context.get('is_production', False),
user_expertise=context.get('user_expertise', 'intermediate'),
project_type=context.get('project_type', 'unknown'),
complexity_score=context.get('complexity_score', 0.0),
risk_level=RiskLevel.LOW
)
return self.framework_logic.get_quality_gates(operation_context)
def _record_session_learning(self, context: dict, session_config: dict):
"""Record session initialization for learning."""
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
{
'session_config': session_config,
'modes_activated': session_config.get('active_modes', []),
'mcp_servers': session_config.get('mcp_servers', {}).get('enabled_servers', [])
},
1.0, # Assume successful initialization
0.8, # High confidence in pattern
{'hook': 'session_start', 'version': '1.0'}
)
def _create_fallback_session_config(self, session_context: dict, error: str) -> dict:
"""Create fallback configuration on error."""
return {
'session_id': session_context.get('session_id', 'unknown'),
'superclaude_enabled': False,
'fallback_mode': True,
'error': error,
'basic_config': {
'compression_level': 'minimal',
'mcp_servers_enabled': False,
'learning_disabled': True
},
'performance_metrics': {
'execution_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
def _activate_intelligent_modes(self, context: dict, recommendations: dict) -> list:
"""Activate intelligent modes based on context and recommendations."""
activated_modes = []
# Add brainstorming mode if likely
if context.get('brainstorming_likely', False):
activated_modes.append({'name': 'brainstorming', 'trigger': 'user input'})
# Add task management mode if recommended
if 'task_management' in recommendations.get('recommended_modes', []):
activated_modes.append({'name': 'task_management', 'trigger': 'pattern detection'})
# Add token efficiency mode if recommended
if 'token_efficiency' in recommendations.get('recommended_modes', []):
activated_modes.append({'name': 'token_efficiency', 'trigger': 'pattern detection'})
return activated_modes
def _configure_mcp_servers(self, context: dict, activated_modes: list) -> dict:
"""Configure MCP servers based on context and activated modes."""
# Create operation data for MCP intelligence
operation_data = {
'file_count': context.get('file_count_estimate', 1),
'complexity_score': context.get('complexity_score', 0.0),
'operation_type': context.get('operation_type', OperationType.READ).value
}
# Create MCP activation plan
mcp_plan = self.mcp_intelligence.create_activation_plan(
context.get('user_input', ''),
context,
operation_data
)
return {
'enabled_servers': mcp_plan.servers_to_activate,
'activation_order': mcp_plan.activation_order,
'coordination_strategy': mcp_plan.coordination_strategy
}
def _calculate_initialization_efficiency(self, execution_time: float) -> float:
"""Calculate initialization efficiency score."""
return 1.0 - (execution_time / self.performance_target_ms) if execution_time < self.performance_target_ms else 0.0
def main():
"""Main hook execution function."""
try:
# Read session data from stdin
session_data = json.loads(sys.stdin.read())
# Initialize and run hook
hook = SessionStartHook()
result = hook.initialize_session(session_data)
# Output result as JSON
print(json.dumps(result, indent=2))
except Exception as e:
# Output error as JSON
error_result = {
'superclaude_enabled': False,
'error': str(e),
'fallback_mode': True
}
print(json.dumps(error_result, indent=2))
sys.exit(1)
if __name__ == "__main__":
main()
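The keyword scoring in `_analyze_user_intent` can be exercised in isolation. This is a minimal standalone sketch mirroring the complexity-indicator loop (same word list and 0.2 increment); the function name is illustrative and the sketch omits the operation-type base score:

```python
# Standalone sketch of the complexity-indicator scoring used by
# _analyze_user_intent: +0.2 per matched indicator, capped at 1.0.
def score_complexity(user_input: str) -> float:
    indicators = [
        "complex", "complicated", "comprehensive", "entire", "whole",
        "system-wide", "architecture", "multiple", "many", "several",
    ]
    lowered = user_input.lower()
    score = sum(0.2 for word in indicators if word in lowered)
    return min(score, 1.0)

print(score_complexity("refactor the entire architecture"))  # 0.4
```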


@@ -1,25 +0,0 @@
"""
SuperClaude-Lite Shared Infrastructure
Core components for the executable SuperClaude intelligence framework.
Provides shared functionality across all 7 Claude Code hooks.
"""
__version__ = "1.0.0"
__author__ = "SuperClaude Framework"
from .yaml_loader import UnifiedConfigLoader
from .framework_logic import FrameworkLogic
from .pattern_detection import PatternDetector
from .mcp_intelligence import MCPIntelligence
from .compression_engine import CompressionEngine
from .learning_engine import LearningEngine
__all__ = [
'UnifiedConfigLoader',
'FrameworkLogic',
'PatternDetector',
'MCPIntelligence',
'CompressionEngine',
'LearningEngine'
]


@@ -1,662 +0,0 @@
"""
Compression Engine for SuperClaude-Lite
Intelligent token optimization implementing MODE_Token_Efficiency.md algorithms
with adaptive compression, symbol systems, and quality-gated validation.
"""
import re
import json
import hashlib
from typing import Dict, Any, List, Optional, Tuple, Set
from dataclasses import dataclass
from enum import Enum
from yaml_loader import config_loader
class CompressionLevel(Enum):
"""Compression levels from MODE_Token_Efficiency.md."""
MINIMAL = "minimal" # 0-40% compression
EFFICIENT = "efficient" # 40-70% compression
COMPRESSED = "compressed" # 70-85% compression
CRITICAL = "critical" # 85-95% compression
EMERGENCY = "emergency" # 95%+ compression
class ContentType(Enum):
"""Types of content for selective compression."""
FRAMEWORK_CONTENT = "framework" # SuperClaude framework - EXCLUDE
SESSION_DATA = "session" # Session metadata - COMPRESS
USER_CONTENT = "user" # User project files - PRESERVE
WORKING_ARTIFACTS = "artifacts" # Analysis results - COMPRESS
@dataclass
class CompressionResult:
"""Result of compression operation."""
original_length: int
compressed_length: int
compression_ratio: float
quality_score: float # 0.0 to 1.0
techniques_used: List[str]
preservation_score: float # Information preservation
processing_time_ms: float
@dataclass
class CompressionStrategy:
"""Strategy configuration for compression."""
level: CompressionLevel
symbol_systems_enabled: bool
abbreviation_systems_enabled: bool
structural_optimization: bool
selective_preservation: Dict[str, bool]
quality_threshold: float
class CompressionEngine:
"""
Intelligent token optimization engine implementing MODE_Token_Efficiency.md.
Features:
- 5-level adaptive compression (minimal to emergency)
- Symbol systems for mathematical and logical relationships
- Abbreviation systems for technical domains
- Selective compression with framework/user content protection
- Quality-gated validation with ≥95% information preservation
- Real-time compression effectiveness monitoring
"""
def __init__(self):
try:
self.config = config_loader.load_config('compression')
except Exception as e:
# Fallback to default configuration if config loading fails
self.config = {'compression_levels': {}, 'selective_compression': {}}
self.symbol_mappings = self._load_symbol_mappings()
self.abbreviation_mappings = self._load_abbreviation_mappings()
self.compression_cache = {}
self.performance_metrics = {}
def _load_symbol_mappings(self) -> Dict[str, str]:
"""Load symbol system mappings from configuration."""
return {
# Core Logic & Flow
'leads to': '→',
'implies': '→',
'transforms to': '⇒',
'converts to': '⇒',
'rollback': '←',
'reverse': '←',
'bidirectional': '⇄',
'sync': '⇄',
'and': '&',
'combine': '&',
'separator': '|',
'or': '|',
'define': ':',
'specify': ':',
'sequence': '»',
'then': '»',
'therefore': '∴',
'because': '∵',
'equivalent': '≡',
'approximately': '≈',
'not equal': '≠',
# Status & Progress
'completed': '✅',
'passed': '✅',
'failed': '❌',
'error': '❌',
'warning': '⚠️',
'information': 'ℹ️',
'in progress': '🔄',
'processing': '🔄',
'waiting': '⏳',
'pending': '⏳',
'critical': '🚨',
'urgent': '🚨',
'target': '🎯',
'goal': '🎯',
'metrics': '📊',
'data': '📊',
'insight': '💡',
'learning': '💡',
# Technical Domains
'performance': '⚡',
'optimization': '⚡',
'analysis': '🔍',
'investigation': '🔍',
'configuration': '🔧',
'setup': '🔧',
'security': '🛡️',
'protection': '🛡️',
'deployment': '📦',
'package': '📦',
'design': '🎨',
'frontend': '🎨',
'network': '🌐',
'connectivity': '🌐',
'mobile': '📱',
'responsive': '📱',
'architecture': '🏗️',
'system structure': '🏗️',
'components': '🧩',
'modular': '🧩'
}
def _load_abbreviation_mappings(self) -> Dict[str, str]:
"""Load abbreviation system mappings from configuration."""
return {
# System & Architecture
'configuration': 'cfg',
'settings': 'cfg',
'implementation': 'impl',
'code structure': 'impl',
'architecture': 'arch',
'system design': 'arch',
'performance': 'perf',
'optimization': 'perf',
'operations': 'ops',
'deployment': 'ops',
'environment': 'env',
'runtime context': 'env',
# Development Process
'requirements': 'req',
'dependencies': 'deps',
'packages': 'deps',
'validation': 'val',
'verification': 'val',
'testing': 'test',
'quality assurance': 'test',
'documentation': 'docs',
'guides': 'docs',
'standards': 'std',
'conventions': 'std',
# Quality & Analysis
'quality': 'qual',
'maintainability': 'qual',
'security': 'sec',
'safety measures': 'sec',
'error': 'err',
'exception handling': 'err',
'recovery': 'rec',
'resilience': 'rec',
'severity': 'sev',
'priority level': 'sev',
'improvement': 'opt'  # note: 'optimization' already maps to 'perf' above
}
def determine_compression_level(self, context: Dict[str, Any]) -> CompressionLevel:
"""
Determine appropriate compression level based on context.
Args:
context: Session context including resource usage, conversation length, etc.
Returns:
Appropriate CompressionLevel for the situation
"""
resource_usage = context.get('resource_usage_percent', 0)
conversation_length = context.get('conversation_length', 0)
user_requests_brevity = context.get('user_requests_brevity', False)
complexity_score = context.get('complexity_score', 0.0)
# Emergency compression for critical resource constraints
if resource_usage >= 95:
return CompressionLevel.EMERGENCY
# Critical compression for high resource usage
if resource_usage >= 85 or conversation_length > 200:
return CompressionLevel.CRITICAL
# Compressed level for moderate constraints
if resource_usage >= 70 or conversation_length > 100 or user_requests_brevity:
return CompressionLevel.COMPRESSED
# Efficient level for mild constraints or complex operations
if resource_usage >= 40 or complexity_score > 0.6:
return CompressionLevel.EFFICIENT
# Minimal compression for normal operations
return CompressionLevel.MINIMAL
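The threshold cascade above can be sketched as a standalone function. This mirrors the same cutoffs with plain strings in place of the `CompressionLevel` enum; the function name is illustrative:

```python
# Self-contained sketch of the resource-driven level selection in
# determine_compression_level (same thresholds, plain strings).
def pick_level(resource_pct: int, conversation_len: int = 0,
               brevity: bool = False, complexity: float = 0.0) -> str:
    if resource_pct >= 95:
        return "emergency"
    if resource_pct >= 85 or conversation_len > 200:
        return "critical"
    if resource_pct >= 70 or conversation_len > 100 or brevity:
        return "compressed"
    if resource_pct >= 40 or complexity > 0.6:
        return "efficient"
    return "minimal"

print(pick_level(72))                   # compressed
print(pick_level(10, complexity=0.8))   # efficient
```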
def classify_content(self, content: str, metadata: Dict[str, Any]) -> ContentType:
"""
Classify content type for selective compression.
Args:
content: Content to classify
metadata: Metadata about the content (file paths, context, etc.)
Returns:
ContentType for compression decision making
"""
file_path = metadata.get('file_path', '')
context_type = metadata.get('context_type', '')
# Framework content - complete exclusion
framework_patterns = [
'~/.claude/',
'.claude/',
'SuperClaude/',
'CLAUDE.md',
'FLAGS.md',
'PRINCIPLES.md',
'ORCHESTRATOR.md',
'MCP_',
'MODE_',
'SESSION_LIFECYCLE.md'
]
for pattern in framework_patterns:
if pattern in file_path or pattern in content:
return ContentType.FRAMEWORK_CONTENT
# Session data - apply compression
if context_type in ['session_metadata', 'checkpoint_data', 'cache_content']:
return ContentType.SESSION_DATA
# Working artifacts - apply compression
if context_type in ['analysis_results', 'processing_data', 'working_artifacts']:
return ContentType.WORKING_ARTIFACTS
# User content - preserve with minimal compression only
user_patterns = [
'project_files',
'user_documentation',
'source_code',
'configuration_files',
'custom_content'
]
for pattern in user_patterns:
if pattern in context_type or pattern in file_path:
return ContentType.USER_CONTENT
# Default to user content preservation
return ContentType.USER_CONTENT
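The framework-exclusion check at the top of `classify_content` reduces to a substring match over known paths. A minimal sketch with a subset of the patterns above (plain strings instead of the enum; the function name is illustrative):

```python
# Minimal sketch of the framework-exclusion check in classify_content:
# any matching pattern marks the content as framework content.
FRAMEWORK_PATTERNS = ["~/.claude/", ".claude/", "SuperClaude/", "CLAUDE.md"]

def is_framework_content(file_path: str) -> bool:
    return any(pattern in file_path for pattern in FRAMEWORK_PATTERNS)

print(is_framework_content("~/.claude/hooks/session_start.py"))  # True
print(is_framework_content("src/app/main.py"))                   # False
```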
def compress_content(self,
content: str,
context: Dict[str, Any],
metadata: Dict[str, Any] = None) -> CompressionResult:
"""
Compress content with intelligent optimization.
Args:
content: Content to compress
context: Session context for compression level determination
metadata: Content metadata for selective compression
Returns:
CompressionResult with metrics and compressed content
"""
import time
start_time = time.time()
if metadata is None:
metadata = {}
# Classify content type
content_type = self.classify_content(content, metadata)
# Framework content - no compression
if content_type == ContentType.FRAMEWORK_CONTENT:
return CompressionResult(
original_length=len(content),
compressed_length=len(content),
compression_ratio=0.0,
quality_score=1.0,
techniques_used=['framework_exclusion'],
preservation_score=1.0,
processing_time_ms=(time.time() - start_time) * 1000
)
# User content - minimal compression only
if content_type == ContentType.USER_CONTENT:
compression_level = CompressionLevel.MINIMAL
else:
compression_level = self.determine_compression_level(context)
# Create compression strategy
strategy = self._create_compression_strategy(compression_level, content_type)
# Apply compression techniques
compressed_content = content
techniques_used = []
if strategy.symbol_systems_enabled:
compressed_content, symbol_techniques = self._apply_symbol_systems(compressed_content)
techniques_used.extend(symbol_techniques)
if strategy.abbreviation_systems_enabled:
compressed_content, abbrev_techniques = self._apply_abbreviation_systems(compressed_content)
techniques_used.extend(abbrev_techniques)
if strategy.structural_optimization:
compressed_content, struct_techniques = self._apply_structural_optimization(
compressed_content, compression_level
)
techniques_used.extend(struct_techniques)
# Calculate metrics
original_length = len(content)
compressed_length = len(compressed_content)
compression_ratio = (original_length - compressed_length) / original_length if original_length > 0 else 0.0
# Quality validation
quality_score = self._validate_compression_quality(content, compressed_content, strategy)
preservation_score = self._calculate_information_preservation(content, compressed_content)
processing_time = (time.time() - start_time) * 1000
# Cache result for performance
cache_key = hashlib.md5(content.encode()).hexdigest()
self.compression_cache[cache_key] = compressed_content
return CompressionResult(
original_length=original_length,
compressed_length=compressed_length,
compression_ratio=compression_ratio,
quality_score=quality_score,
techniques_used=techniques_used,
preservation_score=preservation_score,
processing_time_ms=processing_time
)
def _create_compression_strategy(self, level: CompressionLevel, content_type: ContentType) -> CompressionStrategy:
"""Create compression strategy based on level and content type."""
level_configs = {
CompressionLevel.MINIMAL: {
'symbol_systems': True, # Changed: Enable basic optimizations even for minimal
'abbreviations': False,
'structural': True, # Changed: Enable basic structural optimization
'quality_threshold': 0.98
},
CompressionLevel.EFFICIENT: {
'symbol_systems': True,
'abbreviations': False,
'structural': True,
'quality_threshold': 0.95
},
CompressionLevel.COMPRESSED: {
'symbol_systems': True,
'abbreviations': True,
'structural': True,
'quality_threshold': 0.90
},
CompressionLevel.CRITICAL: {
'symbol_systems': True,
'abbreviations': True,
'structural': True,
'quality_threshold': 0.85
},
CompressionLevel.EMERGENCY: {
'symbol_systems': True,
'abbreviations': True,
'structural': True,
'quality_threshold': 0.80
}
}
config = level_configs[level]
# Adjust for content type
if content_type == ContentType.USER_CONTENT:
# More conservative for user content
config['quality_threshold'] = min(config['quality_threshold'] + 0.1, 1.0)
return CompressionStrategy(
level=level,
symbol_systems_enabled=config['symbol_systems'],
abbreviation_systems_enabled=config['abbreviations'],
structural_optimization=config['structural'],
selective_preservation={},
quality_threshold=config['quality_threshold']
)
def _apply_symbol_systems(self, content: str) -> Tuple[str, List[str]]:
"""Apply symbol system replacements."""
if not content or not isinstance(content, str):
return content or "", []
compressed = content
techniques = []
try:
# Apply symbol mappings with word boundary protection
for phrase, symbol in self.symbol_mappings.items():
if not phrase or not symbol:
continue
pattern = r'\b' + re.escape(phrase) + r'\b'
if re.search(pattern, compressed, re.IGNORECASE):
compressed = re.sub(pattern, symbol, compressed, flags=re.IGNORECASE)
techniques.append(f"symbol_{phrase.replace(' ', '_')}")
except Exception:
# If regex fails, return original content
return content, []
return compressed, techniques
def _apply_abbreviation_systems(self, content: str) -> Tuple[str, List[str]]:
"""Apply abbreviation system replacements."""
if not content or not isinstance(content, str):
return content or "", []
compressed = content
techniques = []
try:
# Apply abbreviation mappings with context awareness
for phrase, abbrev in self.abbreviation_mappings.items():
if not phrase or not abbrev:
continue
pattern = r'\b' + re.escape(phrase) + r'\b'
if re.search(pattern, compressed, re.IGNORECASE):
compressed = re.sub(pattern, abbrev, compressed, flags=re.IGNORECASE)
techniques.append(f"abbrev_{phrase.replace(' ', '_')}")
except Exception:
# If regex fails, return original content
return content, []
return compressed, techniques
def _apply_structural_optimization(self, content: str, level: CompressionLevel) -> Tuple[str, List[str]]:
"""Apply structural optimizations for token efficiency."""
if not content or not isinstance(content, str):
return content or "", []
compressed = content
techniques = []
try:
# Always remove redundant whitespace for any level
if re.search(r'[ \t]{2,}|\n\s*\n', compressed):
# Collapse blank lines first, then runs of spaces/tabs; collapsing all
# \s+ first would remove every newline before the blank-line pattern could match
compressed = re.sub(r'\n\s*\n', '\n', compressed)
compressed = re.sub(r'[ \t]+', ' ', compressed)
techniques.append('whitespace_optimization')
# Phrase simplification for compressed levels and above
if level in [CompressionLevel.COMPRESSED, CompressionLevel.CRITICAL, CompressionLevel.EMERGENCY]:
# Simplify common phrases FIRST
phrase_simplifications = {
r'in order to': 'to',
r'it is important to note that': 'note:',
r'please be aware that': 'note:',
r'it should be noted that': 'note:',
r'for the purpose of': 'for',
r'with regard to': 'regarding',
r'in relation to': 'regarding'
}
for pattern, replacement in phrase_simplifications.items():
if re.search(pattern, compressed, re.IGNORECASE):
compressed = re.sub(pattern, replacement, compressed, flags=re.IGNORECASE)
techniques.append('phrase_simplification')
# Remove redundant words AFTER phrase simplification
if re.search(r'\b(the|a|an)\s+', compressed, re.IGNORECASE):
compressed = re.sub(r'\b(the|a|an)\s+', '', compressed, flags=re.IGNORECASE)
techniques.append('article_removal')
except Exception:
# If regex fails, return original content
return content, []
return compressed, techniques
def _validate_compression_quality(self, original: str, compressed: str, strategy: CompressionStrategy) -> float:
"""Validate compression quality against thresholds."""
# Simple quality heuristics (real implementation would be more sophisticated)
# Check if key information is preserved
original_words = set(re.findall(r'\b\w+\b', original.lower()))
compressed_words = set(re.findall(r'\b\w+\b', compressed.lower()))
# Word preservation ratio
word_preservation = len(compressed_words & original_words) / len(original_words) if original_words else 1.0
# Length efficiency (not too aggressive)
length_ratio = len(compressed) / len(original) if original else 1.0
# Penalize over-compression
if length_ratio < 0.3:
word_preservation *= 0.8
quality_score = (word_preservation * 0.7) + (min(length_ratio * 2, 1.0) * 0.3)
return min(quality_score, 1.0)
def _calculate_information_preservation(self, original: str, compressed: str) -> float:
"""Calculate information preservation score."""
# Enhanced preservation metric based on multiple factors
# Extract key concepts (capitalized words, technical terms, file extensions)
# Non-capturing group (?:...) is required here: with a capturing group,
# re.findall would return only the extension group instead of the full match
original_concepts = set(re.findall(r'\b[A-Z][a-z]+\b|\b\w+\.(?:js|py|md|yaml|json)\b|\b\w*[A-Z]\w*\b', original))
compressed_concepts = set(re.findall(r'\b[A-Z][a-z]+\b|\b\w+\.(?:js|py|md|yaml|json)\b|\b\w*[A-Z]\w*\b', compressed))
# Also check for symbols that represent preserved concepts
symbol_mappings = {
'→': ['leads', 'implies', 'transforms', 'converts'],
'⚡': ['performance', 'optimization', 'speed'],
'🛡️': ['security', 'protection', 'safety'],
'❌': ['error', 'failed', 'exception'],
'⚠️': ['warning', 'caution'],
'🔍': ['analysis', 'investigation', 'search'],
'🔧': ['configuration', 'setup', 'tools'],
'📦': ['deployment', 'package', 'bundle'],
'🎨': ['design', 'frontend', 'ui'],
'🌐': ['network', 'web', 'connectivity'],
'📱': ['mobile', 'responsive'],
'🏗️': ['architecture', 'structure'],
'🧩': ['components', 'modular']
}
# Count preserved concepts through symbols
symbol_preserved_concepts = set()
for symbol, related_words in symbol_mappings.items():
if symbol in compressed:
for word in related_words:
if word in original.lower():
symbol_preserved_concepts.add(word)
# Extract important words (longer than 4 characters, not common words)
common_words = {'this', 'that', 'with', 'have', 'will', 'been', 'from', 'they',
'know', 'want', 'good', 'much', 'some', 'time', 'very', 'when',
'come', 'here', 'just', 'like', 'long', 'make', 'many', 'over',
'such', 'take', 'than', 'them', 'well', 'were', 'through'}
original_words = set(word.lower() for word in re.findall(r'\b\w{4,}\b', original)
if word.lower() not in common_words)
compressed_words = set(word.lower() for word in re.findall(r'\b\w{4,}\b', compressed)
if word.lower() not in common_words)
# Add symbol-preserved concepts to compressed words
compressed_words.update(symbol_preserved_concepts)
# Calculate concept preservation
if original_concepts:
concept_preservation = len(compressed_concepts & original_concepts) / len(original_concepts)
else:
concept_preservation = 1.0
# Calculate important word preservation
if original_words:
word_preservation = len(compressed_words & original_words) / len(original_words)
else:
word_preservation = 1.0
# Weight concept preservation more heavily, but be more generous
total_preservation = (concept_preservation * 0.6) + (word_preservation * 0.4)
# Bonus for symbol usage that preserves meaning
symbol_bonus = min(len(symbol_preserved_concepts) * 0.05, 0.15)
total_preservation += symbol_bonus
# Apply length penalty for over-compression
length_ratio = len(compressed) / len(original) if len(original) > 0 else 1.0
if length_ratio < 0.2: # Heavily penalize extreme over-compression
total_preservation *= 0.6
elif length_ratio < 0.4: # Penalize significant over-compression
total_preservation *= 0.8
elif length_ratio < 0.5: # Moderate penalty for over-compression
total_preservation *= 0.9
return min(total_preservation, 1.0)
def get_compression_recommendations(self, context: Dict[str, Any]) -> Dict[str, Any]:
"""Get recommendations for optimizing compression."""
recommendations = []
current_level = self.determine_compression_level(context)
resource_usage = context.get('resource_usage_percent', 0)
# Resource-based recommendations
if resource_usage > 85:
recommendations.append("Enable emergency compression mode for critical resource constraints")
elif resource_usage > 70:
recommendations.append("Consider compressed mode for better resource efficiency")
elif resource_usage < 40:
recommendations.append("Resource usage low - minimal compression sufficient")
# Performance recommendations
if context.get('processing_time_ms', 0) > 500:
recommendations.append("Compression processing time high - consider caching strategies")
return {
'current_level': current_level.value,
'recommendations': recommendations,
'estimated_savings': self._estimate_compression_savings(current_level),
'quality_impact': self._estimate_quality_impact(current_level),
'performance_metrics': self.performance_metrics
}
def _estimate_compression_savings(self, level: CompressionLevel) -> Dict[str, float]:
"""Estimate compression savings for a given level."""
savings_map = {
CompressionLevel.MINIMAL: {'token_reduction': 0.15, 'time_savings': 0.05},
CompressionLevel.EFFICIENT: {'token_reduction': 0.40, 'time_savings': 0.15},
CompressionLevel.COMPRESSED: {'token_reduction': 0.60, 'time_savings': 0.25},
CompressionLevel.CRITICAL: {'token_reduction': 0.75, 'time_savings': 0.35},
CompressionLevel.EMERGENCY: {'token_reduction': 0.85, 'time_savings': 0.45}
}
return savings_map.get(level, {'token_reduction': 0.0, 'time_savings': 0.0})
def _estimate_quality_impact(self, level: CompressionLevel) -> float:
"""Estimate quality preservation for a given level."""
quality_map = {
CompressionLevel.MINIMAL: 0.98,
CompressionLevel.EFFICIENT: 0.95,
CompressionLevel.COMPRESSED: 0.90,
CompressionLevel.CRITICAL: 0.85,
CompressionLevel.EMERGENCY: 0.80
}
return quality_map.get(level, 0.95)
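The level thresholds and savings estimates above can be exercised in isolation. A minimal standalone sketch (the `CompressionLevel` names, the efficient/minimal thresholds, and the savings numbers are copied from the code above; the surrounding `CompressionEngine` wiring is assumed and the higher levels are omitted):

```python
from enum import Enum

class CompressionLevel(Enum):
    MINIMAL = "minimal"
    EFFICIENT = "efficient"
    COMPRESSED = "compressed"
    CRITICAL = "critical"
    EMERGENCY = "emergency"

def pick_level(resource_usage: int, complexity_score: float) -> CompressionLevel:
    # Mirrors the visible tail of determine_compression_level: efficient
    # for mild constraints or complex operations, minimal otherwise.
    if resource_usage >= 40 or complexity_score > 0.6:
        return CompressionLevel.EFFICIENT
    return CompressionLevel.MINIMAL

# Token-reduction estimates from _estimate_compression_savings
SAVINGS = {
    CompressionLevel.MINIMAL: 0.15,
    CompressionLevel.EFFICIENT: 0.40,
    CompressionLevel.COMPRESSED: 0.60,
    CompressionLevel.CRITICAL: 0.75,
    CompressionLevel.EMERGENCY: 0.85,
}

level = pick_level(resource_usage=55, complexity_score=0.3)
print(level.value, SAVINGS[level])
```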


@@ -1,343 +0,0 @@
"""
Core SuperClaude Framework Logic
Implements the core decision-making algorithms from the SuperClaude framework,
including RULES.md, PRINCIPLES.md, and ORCHESTRATOR.md patterns.
"""
import json
import time
from typing import Dict, Any, List, Optional, Tuple, Union
from dataclasses import dataclass
from enum import Enum
from yaml_loader import config_loader
class OperationType(Enum):
"""Types of operations SuperClaude can perform."""
READ = "read"
WRITE = "write"
EDIT = "edit"
ANALYZE = "analyze"
BUILD = "build"
TEST = "test"
DEPLOY = "deploy"
REFACTOR = "refactor"
class RiskLevel(Enum):
"""Risk levels for operations."""
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
@dataclass
class OperationContext:
"""Context information for an operation."""
operation_type: OperationType
file_count: int
directory_count: int
has_tests: bool
is_production: bool
user_expertise: str # beginner, intermediate, expert
project_type: str # web, api, cli, library, etc.
complexity_score: float # 0.0 to 1.0
risk_level: RiskLevel
@dataclass
class ValidationResult:
"""Result of validation checks."""
is_valid: bool
issues: List[str]
warnings: List[str]
suggestions: List[str]
quality_score: float # 0.0 to 1.0
class FrameworkLogic:
"""
Core SuperClaude framework logic implementation.
Encapsulates decision-making algorithms from:
- RULES.md: Operational rules and security patterns
- PRINCIPLES.md: Development principles and quality standards
- ORCHESTRATOR.md: Intelligent routing and coordination
"""
def __init__(self):
# Load performance targets from SuperClaude configuration
self.performance_targets = {}
# Get hook-specific performance targets
self.performance_targets['session_start_ms'] = config_loader.get_hook_config(
'session_start', 'performance_target_ms', 50
)
self.performance_targets['tool_routing_ms'] = config_loader.get_hook_config(
'pre_tool_use', 'performance_target_ms', 200
)
self.performance_targets['validation_ms'] = config_loader.get_hook_config(
'post_tool_use', 'performance_target_ms', 100
)
self.performance_targets['compression_ms'] = config_loader.get_hook_config(
'pre_compact', 'performance_target_ms', 150
)
# Load additional performance settings from global configuration
global_perf = config_loader.get_performance_targets()
if global_perf:
self.performance_targets.update(global_perf)
def should_use_read_before_write(self, context: OperationContext) -> bool:
"""
RULES.md: Always use Read tool before Write or Edit operations.
"""
return context.operation_type in [OperationType.WRITE, OperationType.EDIT]
def calculate_complexity_score(self, operation_data: Dict[str, Any]) -> float:
"""
Calculate operation complexity score (0.0 to 1.0).
Factors:
- File count and types
- Operation scope
- Dependencies
- Risk factors
"""
score = 0.0
# File count factor (0.0 to 0.3)
file_count = operation_data.get('file_count', 1)
if file_count <= 1:
score += 0.0
elif file_count <= 3:
score += 0.1
elif file_count <= 10:
score += 0.2
else:
score += 0.3
# Directory factor (0.0 to 0.2)
dir_count = operation_data.get('directory_count', 1)
if dir_count > 2:
score += 0.2
elif dir_count > 1:
score += 0.1
# Operation type factor (0.0 to 0.3)
op_type = operation_data.get('operation_type', '')
if op_type in ['refactor', 'architecture', 'system-wide']:
score += 0.3
elif op_type in ['build', 'implement', 'migrate']:
score += 0.2
elif op_type in ['fix', 'update', 'improve']:
score += 0.1
# Language/framework factor (0.0 to 0.2)
if operation_data.get('multi_language', False):
score += 0.2
elif operation_data.get('framework_changes', False):
score += 0.1
return min(score, 1.0)
def assess_risk_level(self, context: OperationContext) -> RiskLevel:
"""
Assess risk level based on operation context.
"""
if context.is_production:
return RiskLevel.HIGH
if context.complexity_score > 0.7:
return RiskLevel.HIGH
elif context.complexity_score > 0.4:
return RiskLevel.MEDIUM
elif context.file_count > 10:
return RiskLevel.MEDIUM
else:
return RiskLevel.LOW
def should_enable_validation(self, context: OperationContext) -> bool:
"""
ORCHESTRATOR.md: Enable validation for production code or high-risk operations.
"""
return (
context.is_production or
context.risk_level in [RiskLevel.HIGH, RiskLevel.CRITICAL] or
context.operation_type in [OperationType.DEPLOY, OperationType.REFACTOR]
)
def should_enable_delegation(self, context: OperationContext) -> Tuple[bool, str]:
"""
ORCHESTRATOR.md: Enable delegation for multi-file operations.
Returns:
(should_delegate, delegation_strategy)
"""
if context.file_count > 3:
return True, "files"
elif context.directory_count > 2:
return True, "folders"
elif context.complexity_score > 0.4:
return True, "auto"
else:
return False, "none"
def validate_operation(self, operation_data: Dict[str, Any]) -> ValidationResult:
"""
PRINCIPLES.md: Validate operation against core principles.
"""
issues = []
warnings = []
suggestions = []
quality_score = 1.0
# Check for evidence-based decision making
if 'evidence' not in operation_data:
warnings.append("No evidence provided for decision")
quality_score -= 0.1
# Check for proper error handling
if operation_data.get('operation_type') in ['write', 'edit', 'deploy']:
if not operation_data.get('has_error_handling', False):
issues.append("Error handling not implemented")
quality_score -= 0.2
# Check for test coverage
if operation_data.get('affects_logic', False):
if not operation_data.get('has_tests', False):
warnings.append("No tests found for logic changes")
quality_score -= 0.1
suggestions.append("Add unit tests for new logic")
# Check for documentation
if operation_data.get('is_public_api', False):
if not operation_data.get('has_documentation', False):
warnings.append("Public API lacks documentation")
quality_score -= 0.1
suggestions.append("Add API documentation")
# Security checks
if operation_data.get('handles_user_input', False):
if not operation_data.get('has_input_validation', False):
issues.append("User input handling without validation")
quality_score -= 0.3
is_valid = len(issues) == 0 and quality_score >= 0.7
return ValidationResult(
is_valid=is_valid,
issues=issues,
warnings=warnings,
suggestions=suggestions,
quality_score=max(quality_score, 0.0)
)
def determine_thinking_mode(self, context: OperationContext) -> Optional[str]:
"""
FLAGS.md: Determine appropriate thinking mode based on complexity.
"""
if context.complexity_score >= 0.8:
return "--ultrathink"
elif context.complexity_score >= 0.6:
return "--think-hard"
elif context.complexity_score >= 0.3:
return "--think"
else:
return None
def should_enable_efficiency_mode(self, session_data: Dict[str, Any]) -> bool:
"""
MODE_Token_Efficiency.md: Enable efficiency mode based on resource usage.
"""
resource_usage = session_data.get('resource_usage_percent', 0)
conversation_length = session_data.get('conversation_length', 0)
return (
resource_usage > 75 or
conversation_length > 100 or
session_data.get('user_requests_brevity', False)
)
def get_quality_gates(self, context: OperationContext) -> List[str]:
"""
ORCHESTRATOR.md: Get appropriate quality gates for operation.
"""
gates = ['syntax_validation']
if context.operation_type in [OperationType.WRITE, OperationType.EDIT]:
gates.extend(['type_analysis', 'code_quality'])
if self.should_enable_validation(context):
gates.extend(['security_assessment', 'performance_analysis'])
if context.has_tests:
gates.append('test_validation')
if context.operation_type == OperationType.DEPLOY:
gates.extend(['integration_testing', 'deployment_validation'])
return gates
def estimate_performance_impact(self, context: OperationContext) -> Dict[str, Any]:
"""
Estimate performance impact and suggested optimizations.
"""
base_time = 100 # ms
# Calculate estimated time based on complexity
estimated_time = base_time * (1 + context.complexity_score * 3)
# Factor in file count
if context.file_count > 5:
estimated_time *= 1.5
# Suggest optimizations
optimizations = []
if context.file_count > 3:
optimizations.append("Consider parallel processing")
if context.complexity_score > 0.6:
optimizations.append("Enable delegation mode")
if context.directory_count > 2:
optimizations.append("Use folder-based delegation")
return {
'estimated_time_ms': int(estimated_time),
'performance_risk': 'high' if estimated_time > 1000 else 'low',
'suggested_optimizations': optimizations,
'efficiency_gains_possible': len(optimizations) > 0
}
def apply_superclaude_principles(self, operation_data: Dict[str, Any]) -> Dict[str, Any]:
"""
Apply SuperClaude core principles to operation planning.
Returns enhanced operation data with principle-based recommendations.
"""
enhanced_data = operation_data.copy()
# Evidence > assumptions
if 'assumptions' in enhanced_data and not enhanced_data.get('evidence'):
enhanced_data['recommendations'] = enhanced_data.get('recommendations', [])
enhanced_data['recommendations'].append(
"Gather evidence to validate assumptions"
)
# Code > documentation
if enhanced_data.get('operation_type') == 'document' and not enhanced_data.get('has_working_code'):
enhanced_data['warnings'] = enhanced_data.get('warnings', [])
enhanced_data['warnings'].append(
"Ensure working code exists before extensive documentation"
)
# Efficiency > verbosity
if enhanced_data.get('output_length', 0) > 1000 and not enhanced_data.get('justification_for_length'):
enhanced_data['efficiency_suggestions'] = enhanced_data.get('efficiency_suggestions', [])
enhanced_data['efficiency_suggestions'].append(
"Consider token efficiency techniques for long outputs"
)
return enhanced_data
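The additive scoring in `calculate_complexity_score` can be reproduced standalone; this sketch copies the factor weights from the method above (the keyword-argument interface here is illustrative, not the actual `operation_data` dict):

```python
def complexity_score(file_count=1, directory_count=1, operation_type='',
                     multi_language=False, framework_changes=False):
    score = 0.0
    # File count factor (0.0 to 0.3)
    if file_count > 10:
        score += 0.3
    elif file_count > 3:
        score += 0.2
    elif file_count > 1:
        score += 0.1
    # Directory factor (0.0 to 0.2)
    if directory_count > 2:
        score += 0.2
    elif directory_count > 1:
        score += 0.1
    # Operation type factor (0.0 to 0.3)
    if operation_type in ('refactor', 'architecture', 'system-wide'):
        score += 0.3
    elif operation_type in ('build', 'implement', 'migrate'):
        score += 0.2
    elif operation_type in ('fix', 'update', 'improve'):
        score += 0.1
    # Language/framework factor (0.0 to 0.2)
    if multi_language:
        score += 0.2
    elif framework_changes:
        score += 0.1
    return min(score, 1.0)

print(complexity_score(file_count=12, directory_count=3,
                       operation_type='refactor', multi_language=True))
```

With the caps above, a single-file fix scores near 0.1 while a multi-language, system-wide refactor saturates at 1.0, which is what drives the `--ultrathink` threshold in `determine_thinking_mode`.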


@@ -1,411 +0,0 @@
"""
Intelligence Engine for SuperClaude Framework-Hooks
Generic YAML pattern interpreter that provides intelligent services by consuming
declarative YAML patterns. Enables hot-reloadable intelligence without code changes.
"""
import time
import hashlib
from typing import Dict, Any, List, Optional, Tuple, Union
from pathlib import Path
from yaml_loader import config_loader
class IntelligenceEngine:
"""
Generic YAML pattern interpreter for declarative intelligence.
Features:
- Hot-reload YAML intelligence patterns
- Context-aware pattern matching
- Decision tree execution
- Recommendation generation
- Performance optimization
- Multi-pattern coordination
"""
def __init__(self):
self.patterns: Dict[str, Dict[str, Any]] = {}
self.pattern_cache: Dict[str, Any] = {}
self.pattern_timestamps: Dict[str, float] = {}
self.evaluation_cache: Dict[str, Tuple[Any, float]] = {}
self.cache_duration = 300 # 5 minutes
self._load_all_patterns()
def _load_all_patterns(self):
"""Load all intelligence pattern configurations."""
pattern_files = [
'intelligence_patterns',
'mcp_orchestration',
'hook_coordination',
'performance_intelligence',
'validation_intelligence',
'user_experience'
]
for pattern_file in pattern_files:
try:
patterns = config_loader.load_config(pattern_file)
self.patterns[pattern_file] = patterns
self.pattern_timestamps[pattern_file] = time.time()
except Exception as e:
print(f"Warning: Could not load {pattern_file} patterns: {e}")
self.patterns[pattern_file] = {}
def reload_patterns(self, force: bool = False) -> bool:
"""
Reload patterns if they have changed.
Args:
force: Force reload even if no changes detected
Returns:
True if patterns were reloaded
"""
reloaded = False
for pattern_file in self.patterns.keys():
try:
# Force reload or check for changes
if force:
patterns = config_loader.load_config(pattern_file, force_reload=True)
self.patterns[pattern_file] = patterns
self.pattern_timestamps[pattern_file] = time.time()
reloaded = True
else:
# Check if pattern file has been updated
current_patterns = config_loader.load_config(pattern_file)
pattern_hash = self._compute_pattern_hash(current_patterns)
cached_hash = self.pattern_cache.get(f"{pattern_file}_hash")
if pattern_hash != cached_hash:
self.patterns[pattern_file] = current_patterns
self.pattern_cache[f"{pattern_file}_hash"] = pattern_hash
self.pattern_timestamps[pattern_file] = time.time()
reloaded = True
except Exception as e:
print(f"Warning: Could not reload {pattern_file} patterns: {e}")
if reloaded:
# Clear evaluation cache when patterns change
self.evaluation_cache.clear()
return reloaded
def _compute_pattern_hash(self, patterns: Dict[str, Any]) -> str:
"""Compute hash of pattern configuration for change detection."""
pattern_str = str(sorted(patterns.items()))
return hashlib.md5(pattern_str.encode()).hexdigest()
def evaluate_context(self, context: Dict[str, Any], pattern_type: str) -> Dict[str, Any]:
"""
Evaluate context against patterns to generate recommendations.
Args:
context: Current operation context
pattern_type: Type of patterns to evaluate (e.g., 'mcp_orchestration')
Returns:
Dictionary with recommendations and metadata
"""
# Check cache first
cache_key = f"{pattern_type}_{self._compute_context_hash(context)}"
if cache_key in self.evaluation_cache:
result, timestamp = self.evaluation_cache[cache_key]
if time.time() - timestamp < self.cache_duration:
return result
# Hot-reload patterns if needed
self.reload_patterns()
# Get patterns for this type
patterns = self.patterns.get(pattern_type, {})
if not patterns:
return {'recommendations': {}, 'confidence': 0.0, 'source': 'no_patterns'}
# Evaluate patterns
recommendations = {}
confidence_scores = []
if pattern_type == 'mcp_orchestration':
recommendations = self._evaluate_mcp_patterns(context, patterns)
elif pattern_type == 'hook_coordination':
recommendations = self._evaluate_hook_patterns(context, patterns)
elif pattern_type == 'performance_intelligence':
recommendations = self._evaluate_performance_patterns(context, patterns)
elif pattern_type == 'validation_intelligence':
recommendations = self._evaluate_validation_patterns(context, patterns)
elif pattern_type == 'user_experience':
recommendations = self._evaluate_ux_patterns(context, patterns)
elif pattern_type == 'intelligence_patterns':
recommendations = self._evaluate_learning_patterns(context, patterns)
# Calculate overall confidence
overall_confidence = max(confidence_scores) if confidence_scores else 0.0
result = {
'recommendations': recommendations,
'confidence': overall_confidence,
'source': pattern_type,
'timestamp': time.time()
}
# Cache result
self.evaluation_cache[cache_key] = (result, time.time())
return result
def _compute_context_hash(self, context: Dict[str, Any]) -> str:
"""Compute hash of context for caching."""
context_str = str(sorted(context.items()))
return hashlib.md5(context_str.encode()).hexdigest()[:8]
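The cache used by `evaluate_context` is a plain dict keyed by this hash, with a time-to-live check. A self-contained sketch of the same pattern (the helper names here are illustrative; the 300-second duration matches `cache_duration` set in `__init__`):

```python
import hashlib
import time

CACHE_DURATION = 300  # seconds, matching cache_duration above

def context_hash(context: dict) -> str:
    # Same scheme as _compute_context_hash: stable string of sorted items
    return hashlib.md5(str(sorted(context.items())).encode()).hexdigest()[:8]

cache = {}

def cached_evaluate(context, evaluate, now=None):
    # Return a cached result if it is younger than CACHE_DURATION,
    # otherwise recompute and refresh the cache entry
    now = time.time() if now is None else now
    key = context_hash(context)
    if key in cache:
        result, timestamp = cache[key]
        if now - timestamp < CACHE_DURATION:
            return result
    result = evaluate(context)
    cache[key] = (result, now)
    return result
```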
def _evaluate_mcp_patterns(self, context: Dict[str, Any], patterns: Dict[str, Any]) -> Dict[str, Any]:
"""Evaluate MCP orchestration patterns."""
server_selection = patterns.get('server_selection', {})
decision_tree = server_selection.get('decision_tree', [])
recommendations = {
'primary_server': None,
'support_servers': [],
'coordination_mode': 'sequential',
'confidence': 0.0
}
# Evaluate decision tree
for rule in decision_tree:
if self._matches_conditions(context, rule.get('conditions', {})):
recommendations['primary_server'] = rule.get('primary_server')
recommendations['support_servers'] = rule.get('support_servers', [])
recommendations['coordination_mode'] = rule.get('coordination_mode', 'sequential')
recommendations['confidence'] = rule.get('confidence', 0.5)
break
# Apply fallback if no match
if not recommendations['primary_server']:
fallback = server_selection.get('fallback_chain', {})
recommendations['primary_server'] = fallback.get('default_primary', 'sequential')
recommendations['confidence'] = 0.3
return recommendations
def _evaluate_hook_patterns(self, context: Dict[str, Any], patterns: Dict[str, Any]) -> Dict[str, Any]:
"""Evaluate hook coordination patterns."""
execution_patterns = patterns.get('execution_patterns', {})
recommendations = {
'execution_strategy': 'sequential',
'parallel_groups': [],
'conditional_hooks': [],
'performance_optimizations': []
}
# Check for parallel execution opportunities
parallel_groups = execution_patterns.get('parallel_execution', {}).get('groups', [])
for group in parallel_groups:
if self._should_enable_parallel_group(context, group):
recommendations['parallel_groups'].append(group)
# Check conditional execution rules
conditional_rules = execution_patterns.get('conditional_execution', {}).get('rules', [])
for rule in conditional_rules:
if self._matches_conditions(context, rule.get('conditions', [])):
recommendations['conditional_hooks'].append({
'hook': rule.get('hook'),
'priority': rule.get('priority', 'medium')
})
return recommendations
def _evaluate_performance_patterns(self, context: Dict[str, Any], patterns: Dict[str, Any]) -> Dict[str, Any]:
"""Evaluate performance intelligence patterns."""
auto_optimization = patterns.get('auto_optimization', {})
optimization_triggers = auto_optimization.get('optimization_triggers', [])
recommendations = {
'optimizations': [],
'resource_zone': 'green',
'performance_actions': []
}
# Check optimization triggers
for trigger in optimization_triggers:
if self._matches_conditions(context, trigger.get('condition', {})):
recommendations['optimizations'].extend(trigger.get('actions', []))
recommendations['performance_actions'].append({
'trigger': trigger.get('name'),
'urgency': trigger.get('urgency', 'medium')
})
# Determine resource zone
resource_usage = context.get('resource_usage', 0.5)
resource_zones = patterns.get('resource_management', {}).get('resource_zones', {})
for zone_name, zone_config in resource_zones.items():
threshold = zone_config.get('threshold', 1.0)
if resource_usage <= threshold:
recommendations['resource_zone'] = zone_name
break
return recommendations
def _evaluate_validation_patterns(self, context: Dict[str, Any], patterns: Dict[str, Any]) -> Dict[str, Any]:
"""Evaluate validation intelligence patterns."""
proactive_diagnostics = patterns.get('proactive_diagnostics', {})
early_warnings = proactive_diagnostics.get('early_warning_patterns', {})
recommendations = {
'health_score': 1.0,
'warnings': [],
'diagnostics': [],
'remediation_suggestions': []
}
# Check early warning patterns
for category, warnings in early_warnings.items():
for warning in warnings:
if self._matches_conditions(context, warning.get('pattern', {})):
recommendations['warnings'].append({
'name': warning.get('name'),
'severity': warning.get('severity', 'medium'),
'recommendation': warning.get('recommendation'),
'category': category
})
# Calculate health score (simplified)
base_health = 1.0
for warning in recommendations['warnings']:
severity_impact = {'low': 0.05, 'medium': 0.1, 'high': 0.2, 'critical': 0.4}
base_health -= severity_impact.get(warning['severity'], 0.1)
recommendations['health_score'] = max(0.0, base_health)
return recommendations
def _evaluate_ux_patterns(self, context: Dict[str, Any], patterns: Dict[str, Any]) -> Dict[str, Any]:
"""Evaluate user experience patterns."""
project_detection = patterns.get('project_detection', {})
detection_patterns = project_detection.get('detection_patterns', {})
recommendations = {
'project_type': 'unknown',
'suggested_servers': [],
'smart_defaults': {},
'user_suggestions': []
}
# Detect project type
file_indicators = context.get('file_indicators', [])
directory_indicators = context.get('directory_indicators', [])
for category, projects in detection_patterns.items():
for project_type, project_config in projects.items():
if self._matches_project_indicators(file_indicators, directory_indicators, project_config):
recommendations['project_type'] = project_type
project_recs = project_config.get('recommendations', {})
recommendations['suggested_servers'] = project_recs.get('mcp_servers', [])
recommendations['smart_defaults'] = project_recs
break
return recommendations
def _evaluate_learning_patterns(self, context: Dict[str, Any], patterns: Dict[str, Any]) -> Dict[str, Any]:
"""Evaluate learning intelligence patterns."""
learning_intelligence = patterns.get('learning_intelligence', {})
pattern_recognition = learning_intelligence.get('pattern_recognition', {})
recommendations = {
'pattern_dimensions': [],
'learning_strategy': 'standard',
'confidence_threshold': 0.7
}
# Get pattern dimensions
dimensions = pattern_recognition.get('dimensions', {})
recommendations['pattern_dimensions'] = dimensions.get('primary', []) + dimensions.get('secondary', [])
# Determine learning strategy based on context
complexity = context.get('complexity_score', 0.5)
if complexity > 0.8:
recommendations['learning_strategy'] = 'comprehensive'
elif complexity < 0.3:
recommendations['learning_strategy'] = 'lightweight'
return recommendations
def _matches_conditions(self, context: Dict[str, Any], conditions: Union[Dict, List]) -> bool:
"""Check if context matches pattern conditions."""
if isinstance(conditions, list):
# List of conditions (AND logic)
return all(self._matches_single_condition(context, cond) for cond in conditions)
elif isinstance(conditions, dict):
if 'AND' in conditions:
return all(self._matches_single_condition(context, cond) for cond in conditions['AND'])
elif 'OR' in conditions:
return any(self._matches_single_condition(context, cond) for cond in conditions['OR'])
else:
return self._matches_single_condition(context, conditions)
return False
def _matches_single_condition(self, context: Dict[str, Any], condition: Dict[str, Any]) -> bool:
"""Check if context matches a single condition."""
for key, expected_value in condition.items():
context_value = context.get(key)
if context_value is None:
return False
# Handle string operations
if isinstance(expected_value, str):
if expected_value.startswith('>'):
threshold = float(expected_value[1:])
return float(context_value) > threshold
elif expected_value.startswith('<'):
threshold = float(expected_value[1:])
return float(context_value) < threshold
elif isinstance(expected_value, list):
return context_value in expected_value
else:
return context_value == expected_value
elif isinstance(expected_value, list):
return context_value in expected_value
else:
return context_value == expected_value
return True
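The comparison-string convention used by `_matches_single_condition` (`'>x'` and `'<x'` prefixes, list membership, exact equality otherwise) can be exercised standalone. This re-implementation is a sketch, not the class method itself — it checks every key with AND semantics rather than returning on the first one:

```python
def matches_single_condition(context, condition):
    """Sketch of the condition matcher: all keys must match (AND logic)."""
    for key, expected in condition.items():
        value = context.get(key)
        if value is None:
            return False
        if isinstance(expected, str) and expected.startswith('>'):
            if not float(value) > float(expected[1:]):
                return False
        elif isinstance(expected, str) and expected.startswith('<'):
            if not float(value) < float(expected[1:]):
                return False
        elif isinstance(expected, list):
            if value not in expected:
                return False
        elif value != expected:
            return False
    return True
```

For example, `{'complexity_score': '>0.8'}` matches a context whose `complexity_score` is 0.9 but not one where the key is missing.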
def _should_enable_parallel_group(self, context: Dict[str, Any], group: Dict[str, Any]) -> bool:
"""Determine if a parallel group should be enabled."""
# Simple heuristic: enable if not in resource-constrained environment
resource_usage = context.get('resource_usage', 0.5)
return resource_usage < 0.8 and context.get('complexity_score', 0.5) > 0.3
def _matches_project_indicators(self, files: List[str], dirs: List[str],
project_config: Dict[str, Any]) -> bool:
"""Check if file/directory indicators match project pattern."""
file_indicators = project_config.get('file_indicators', [])
dir_indicators = project_config.get('directory_indicators', [])
file_matches = sum(1 for indicator in file_indicators if any(indicator in f for f in files))
dir_matches = sum(1 for indicator in dir_indicators if any(indicator in d for d in dirs))
confidence_threshold = project_config.get('confidence_threshold', 0.8)
total_indicators = len(file_indicators) + len(dir_indicators)
if total_indicators == 0:
return False
match_ratio = (file_matches + dir_matches) / total_indicators
return match_ratio >= confidence_threshold
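The indicator-match ratio in `_matches_project_indicators` can be worked through with a small hypothetical Node.js-style layout (the file, directory, and indicator names below are assumptions for illustration):

```python
files = ['package.json', 'src/index.ts']          # hypothetical project files
dirs = ['src', 'node_modules']                    # hypothetical directories
file_indicators = ['package.json', 'tsconfig.json']
dir_indicators = ['node_modules']

# Substring matching, as in the method above.
file_matches = sum(1 for ind in file_indicators if any(ind in f for f in files))
dir_matches = sum(1 for ind in dir_indicators if any(ind in d for d in dirs))

total_indicators = len(file_indicators) + len(dir_indicators)
match_ratio = (file_matches + dir_matches) / total_indicators
```

Two of three indicators match, so the ratio is about 0.67 — below a 0.8 confidence threshold, and the project type would not be claimed.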
def get_intelligence_summary(self) -> Dict[str, Any]:
"""Get summary of current intelligence state."""
return {
'loaded_patterns': list(self.patterns.keys()),
'cache_entries': len(self.evaluation_cache),
'last_reload': max(self.pattern_timestamps.values()) if self.pattern_timestamps else 0,
'pattern_status': {name: 'loaded' for name in self.patterns.keys()}
}


@@ -1,891 +0,0 @@
"""
Learning Engine for SuperClaude-Lite
Cross-hook adaptation system that learns from user patterns, operation effectiveness,
and system performance to continuously improve SuperClaude intelligence.
"""
import json
import time
import statistics
from typing import Dict, Any, List, Optional, Tuple, Set
from dataclasses import dataclass, asdict
from enum import Enum
from pathlib import Path
from yaml_loader import config_loader
from intelligence_engine import IntelligenceEngine
class LearningType(Enum):
"""Types of learning patterns."""
USER_PREFERENCE = "user_preference"
OPERATION_PATTERN = "operation_pattern"
PERFORMANCE_OPTIMIZATION = "performance_optimization"
ERROR_RECOVERY = "error_recovery"
EFFECTIVENESS_FEEDBACK = "effectiveness_feedback"
class AdaptationScope(Enum):
"""Scope of learning adaptations."""
SESSION = "session" # Apply only to current session
PROJECT = "project" # Apply to current project
USER = "user" # Apply across all user sessions
GLOBAL = "global" # Apply to all users (anonymized)
@dataclass
class LearningRecord:
"""Record of a learning event."""
timestamp: float
learning_type: LearningType
scope: AdaptationScope
context: Dict[str, Any]
pattern: Dict[str, Any]
effectiveness_score: float # 0.0 to 1.0
confidence: float # 0.0 to 1.0
metadata: Dict[str, Any]
@dataclass
class Adaptation:
"""An adaptation learned from patterns."""
adaptation_id: str
pattern_signature: str
trigger_conditions: Dict[str, Any]
modifications: Dict[str, Any]
effectiveness_history: List[float]
usage_count: int
last_used: float
confidence_score: float
@dataclass
class LearningInsight:
"""Insight derived from learning patterns."""
insight_type: str
description: str
evidence: List[str]
recommendations: List[str]
confidence: float
impact_score: float
class LearningEngine:
"""
Cross-hook adaptation system for continuous improvement.
Features:
- User preference learning and adaptation
- Operation pattern recognition and optimization
- Performance feedback integration
- Cross-hook coordination and knowledge sharing
- Effectiveness measurement and validation
- Personalization and project-specific adaptations
"""
def __init__(self, cache_dir: Path):
self.cache_dir = Path(cache_dir)
self.cache_dir.mkdir(exist_ok=True)
self.learning_records: List[LearningRecord] = []
self.adaptations: Dict[str, Adaptation] = {}
self.user_preferences: Dict[str, Any] = {}
self.project_patterns: Dict[str, Dict[str, Any]] = {}
# Initialize intelligence engine for YAML pattern integration
self.intelligence_engine = IntelligenceEngine()
self._load_learning_data()
def _load_learning_data(self):
"""Load existing learning data from cache with robust error handling."""
# Initialize empty data structures first
self.learning_records = []
self.adaptations = {}
self.user_preferences = {}
self.project_patterns = {}
try:
# Load learning records with corruption detection
records_file = self.cache_dir / "learning_records.json"
if records_file.exists():
try:
with open(records_file, 'r') as f:
content = f.read().strip()
if not content:
# Empty file, initialize with empty array
self._initialize_empty_records_file(records_file)
elif content == '[]':
# Valid empty array
self.learning_records = []
else:
# Try to parse JSON
data = json.loads(content)
if isinstance(data, list):
self.learning_records = [
LearningRecord(**record) for record in data
if self._validate_learning_record(record)
]
else:
# Invalid format, reinitialize
self._initialize_empty_records_file(records_file)
except (json.JSONDecodeError, TypeError, ValueError) as e:
# JSON corruption detected, reinitialize
print(f"Learning records corrupted, reinitializing: {e}")
self._initialize_empty_records_file(records_file)
else:
# File doesn't exist, create it
self._initialize_empty_records_file(records_file)
# Load adaptations with error handling
adaptations_file = self.cache_dir / "adaptations.json"
if adaptations_file.exists():
try:
with open(adaptations_file, 'r') as f:
data = json.load(f)
if isinstance(data, dict):
self.adaptations = {
k: Adaptation(**v) for k, v in data.items()
if self._validate_adaptation_data(v)
}
except (json.JSONDecodeError, TypeError, ValueError):
# Corrupted adaptations file, start fresh
self.adaptations = {}
# Load user preferences with error handling
preferences_file = self.cache_dir / "user_preferences.json"
if preferences_file.exists():
try:
with open(preferences_file, 'r') as f:
data = json.load(f)
if isinstance(data, dict):
self.user_preferences = data
except (json.JSONDecodeError, TypeError, ValueError):
self.user_preferences = {}
# Load project patterns with error handling
patterns_file = self.cache_dir / "project_patterns.json"
if patterns_file.exists():
try:
with open(patterns_file, 'r') as f:
data = json.load(f)
if isinstance(data, dict):
self.project_patterns = data
except (json.JSONDecodeError, TypeError, ValueError):
self.project_patterns = {}
except Exception as e:
# Final fallback - ensure all data structures are initialized
print(f"Error loading learning data, using defaults: {e}")
self.learning_records = []
self.adaptations = {}
self.user_preferences = {}
self.project_patterns = {}
def record_learning_event(self,
learning_type: LearningType,
scope: AdaptationScope,
context: Dict[str, Any],
pattern: Dict[str, Any],
effectiveness_score: float,
confidence: float = 1.0,
metadata: Dict[str, Any] = None) -> str:
"""
Record a learning event for future adaptation.
Args:
learning_type: Type of learning event
scope: Scope of the learning (session, project, user, global)
context: Context in which the learning occurred
pattern: Pattern or behavior that was observed
effectiveness_score: How effective the pattern was (0.0 to 1.0)
confidence: Confidence in the learning (0.0 to 1.0)
metadata: Additional metadata about the learning event
Returns:
Learning record ID
"""
if metadata is None:
metadata = {}
# Validate effectiveness score bounds
if not (0.0 <= effectiveness_score <= 1.0):
raise ValueError(f"Effectiveness score must be between 0.0 and 1.0, got: {effectiveness_score}")
# Validate confidence bounds
if not (0.0 <= confidence <= 1.0):
raise ValueError(f"Confidence must be between 0.0 and 1.0, got: {confidence}")
# Flag suspicious perfect score sequences (potential overfitting)
if effectiveness_score == 1.0:
metadata['perfect_score_flag'] = True
record = LearningRecord(
timestamp=time.time(),
learning_type=learning_type,
scope=scope,
context=context,
pattern=pattern,
effectiveness_score=effectiveness_score,
confidence=confidence,
metadata=metadata
)
self.learning_records.append(record)
# Trigger adaptation creation if pattern is significant
if effectiveness_score > 0.7 and confidence > 0.6:
self._create_adaptation_from_record(record)
# Save to cache
self._save_learning_data()
return f"learning_{int(record.timestamp)}"
def _create_adaptation_from_record(self, record: LearningRecord):
"""Create an adaptation from a significant learning record."""
pattern_signature = self._generate_pattern_signature(record.pattern, record.context)
# Check if adaptation already exists
if pattern_signature in self.adaptations:
adaptation = self.adaptations[pattern_signature]
adaptation.effectiveness_history.append(record.effectiveness_score)
adaptation.usage_count += 1
adaptation.last_used = record.timestamp
# Update confidence based on consistency
if len(adaptation.effectiveness_history) > 1:
recent = adaptation.effectiveness_history[-5:]
consistency = 1.0 - statistics.stdev(recent) / max(statistics.mean(recent), 0.1)
adaptation.confidence_score = min(consistency * record.confidence, 1.0)
else:
# Create new adaptation
adaptation_id = f"adapt_{int(record.timestamp)}_{len(self.adaptations)}"
adaptation = Adaptation(
adaptation_id=adaptation_id,
pattern_signature=pattern_signature,
trigger_conditions=self._extract_trigger_conditions(record.context),
modifications=self._extract_modifications(record.pattern),
effectiveness_history=[record.effectiveness_score],
usage_count=1,
last_used=record.timestamp,
confidence_score=record.confidence
)
self.adaptations[pattern_signature] = adaptation
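The consistency-based confidence update above is one minus the coefficient of variation over the last five effectiveness scores, scaled by the record's confidence and capped at 1.0. A sketch on made-up history values:

```python
import statistics

effectiveness_history = [0.8, 0.85, 0.9, 0.82, 0.88]  # hypothetical scores
record_confidence = 0.9                                # hypothetical

recent = effectiveness_history[-5:]
# Low spread relative to the mean -> high consistency; mean is floored at 0.1
# to avoid division by a near-zero average.
consistency = 1.0 - statistics.stdev(recent) / max(statistics.mean(recent), 0.1)
confidence_score = min(consistency * record_confidence, 1.0)
```

Tightly clustered scores keep the confidence close to the record's own confidence; a noisy history drags it down.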
def _generate_pattern_signature(self, pattern: Dict[str, Any], context: Dict[str, Any]) -> str:
"""Generate a unique signature for a pattern using YAML intelligence patterns."""
# Get pattern dimensions from YAML intelligence patterns
intelligence_patterns = self.intelligence_engine.evaluate_context(context, 'intelligence_patterns')
pattern_dimensions = intelligence_patterns.get('recommendations', {}).get('pattern_dimensions', [])
# If no YAML dimensions available, use fallback dimensions
if not pattern_dimensions:
pattern_dimensions = ['context_type', 'complexity_score', 'operation_type', 'performance_score']
key_elements = []
# Use YAML-defined dimensions for signature generation
for dimension in pattern_dimensions:
if dimension in context:
value = context[dimension]
# Bucket numeric values for better grouping
if isinstance(value, (int, float)) and dimension in ['complexity_score', 'performance_score']:
bucketed_value = int(value * 10) / 10 # Round to 0.1
key_elements.append(f"{dimension}:{bucketed_value}")
elif isinstance(value, (int, float)) and dimension in ['file_count', 'directory_count']:
bucketed_value = min(int(value), 10) # Cap at 10 for grouping
key_elements.append(f"{dimension}:{bucketed_value}")
else:
key_elements.append(f"{dimension}:{value}")
elif dimension in pattern:
key_elements.append(f"{dimension}:{pattern[dimension]}")
# Add pattern-specific elements
for key in ['mcp_server', 'mode', 'compression_level', 'delegation_strategy']:
if key in pattern and key not in [d.split(':')[0] for d in key_elements]:
key_elements.append(f"{key}:{pattern[key]}")
signature = "_".join(sorted(key_elements))
return signature if signature else "unknown_pattern"
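The bucketing in `_generate_pattern_signature` rounds score-like floats to one decimal so near-identical contexts collapse onto the same signature. A standalone sketch of that idea (dimension names are examples, not the full YAML-driven list):

```python
def pattern_signature(context, dimensions):
    """Sketch: bucket score dimensions to 0.1 and join sorted key:value pairs."""
    parts = []
    for dim in dimensions:
        if dim in context:
            value = context[dim]
            if isinstance(value, (int, float)) and dim in ('complexity_score',
                                                           'performance_score'):
                value = int(value * 10) / 10  # round down to one decimal place
            parts.append(f"{dim}:{value}")
    return "_".join(sorted(parts)) or "unknown_pattern"
```

A context with `complexity_score` 0.73 and 0.78 would both bucket to `0.7`, so both produce `complexity_score:0.7_operation_type:edit` and feed the same adaptation.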
def _extract_trigger_conditions(self, context: Dict[str, Any]) -> Dict[str, Any]:
"""Extract trigger conditions from context."""
conditions = {}
# Operational conditions
for key in ['operation_type', 'complexity_score', 'file_count', 'directory_count']:
if key in context:
conditions[key] = context[key]
# Environmental conditions
for key in ['resource_usage_percent', 'conversation_length', 'user_expertise']:
if key in context:
conditions[key] = context[key]
# Project conditions
for key in ['project_type', 'has_tests', 'is_production']:
if key in context:
conditions[key] = context[key]
return conditions
def _extract_modifications(self, pattern: Dict[str, Any]) -> Dict[str, Any]:
"""Extract modifications to apply from pattern."""
modifications = {}
# MCP server preferences
if 'mcp_server' in pattern:
modifications['preferred_mcp_server'] = pattern['mcp_server']
# Mode preferences
if 'mode' in pattern:
modifications['preferred_mode'] = pattern['mode']
# Flag preferences
if 'flags' in pattern:
modifications['suggested_flags'] = pattern['flags']
# Performance optimizations
if 'optimization' in pattern:
modifications['optimization'] = pattern['optimization']
return modifications
def get_adaptations_for_context(self, context: Dict[str, Any]) -> List[Adaptation]:
"""Get relevant adaptations for the current context."""
relevant_adaptations = []
for adaptation in self.adaptations.values():
if self._matches_trigger_conditions(adaptation.trigger_conditions, context):
# Check effectiveness threshold
if adaptation.confidence_score > 0.5 and len(adaptation.effectiveness_history) > 0:
avg_effectiveness = statistics.mean(adaptation.effectiveness_history)
if avg_effectiveness > 0.6:
relevant_adaptations.append(adaptation)
# Sort by effectiveness and confidence
relevant_adaptations.sort(
key=lambda a: statistics.mean(a.effectiveness_history) * a.confidence_score,
reverse=True
)
return relevant_adaptations
def _matches_trigger_conditions(self, conditions: Dict[str, Any], context: Dict[str, Any]) -> bool:
"""Check if context matches adaptation trigger conditions."""
for key, expected_value in conditions.items():
if key not in context:
continue
context_value = context[key]
# Exact match for strings and booleans
if isinstance(expected_value, (str, bool)):
if context_value != expected_value:
return False
# Range match for numbers
elif isinstance(expected_value, (int, float)):
tolerance = 0.1 if isinstance(expected_value, float) else 1
if abs(context_value - expected_value) > tolerance:
return False
return True
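The tolerance rule in `_matches_trigger_conditions` (±0.1 for floats, ±1 for ints, exact match for strings and booleans, missing context keys skipped) can be reproduced standalone:

```python
def matches_trigger(conditions, context):
    """Sketch of the trigger matcher above; permissive on missing keys."""
    for key, expected in conditions.items():
        if key not in context:
            continue  # absent keys do not disqualify a match
        value = context[key]
        if isinstance(expected, (str, bool)):
            if value != expected:
                return False
        elif isinstance(expected, (int, float)):
            tolerance = 0.1 if isinstance(expected, float) else 1
            if abs(value - expected) > tolerance:
                return False
    return True
```

So a stored condition of `complexity_score: 0.7` still fires for a context at 0.75, but not at 0.9.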
def apply_adaptations(self,
context: Dict[str, Any],
base_recommendations: Dict[str, Any]) -> Dict[str, Any]:
"""
Apply learned adaptations enhanced with YAML intelligence patterns.
Args:
context: Current operation context
base_recommendations: Base recommendations before adaptation
Returns:
Enhanced recommendations with learned adaptations and YAML intelligence applied
"""
# Get YAML intelligence recommendations first
mcp_intelligence = self.intelligence_engine.evaluate_context(context, 'mcp_orchestration')
ux_intelligence = self.intelligence_engine.evaluate_context(context, 'user_experience')
performance_intelligence = self.intelligence_engine.evaluate_context(context, 'performance_intelligence')
# Start with base recommendations and add YAML intelligence
enhanced_recommendations = base_recommendations.copy()
# Integrate YAML-based MCP recommendations
mcp_recs = mcp_intelligence.get('recommendations', {})
if mcp_recs.get('primary_server'):
if 'recommended_mcp_servers' not in enhanced_recommendations:
enhanced_recommendations['recommended_mcp_servers'] = []
servers = enhanced_recommendations['recommended_mcp_servers']
if mcp_recs['primary_server'] not in servers:
servers.insert(0, mcp_recs['primary_server'])
# Add support servers
for support_server in mcp_recs.get('support_servers', []):
if support_server not in servers:
servers.append(support_server)
# Integrate UX intelligence (project detection, smart defaults)
ux_recs = ux_intelligence.get('recommendations', {})
if ux_recs.get('suggested_servers'):
if 'recommended_mcp_servers' not in enhanced_recommendations:
enhanced_recommendations['recommended_mcp_servers'] = []
for server in ux_recs['suggested_servers']:
if server not in enhanced_recommendations['recommended_mcp_servers']:
enhanced_recommendations['recommended_mcp_servers'].append(server)
# Integrate performance optimizations
perf_recs = performance_intelligence.get('recommendations', {})
if perf_recs.get('optimizations'):
enhanced_recommendations['performance_optimizations'] = perf_recs['optimizations']
enhanced_recommendations['resource_zone'] = perf_recs.get('resource_zone', 'green')
# Apply learned adaptations on top of YAML intelligence
relevant_adaptations = self.get_adaptations_for_context(context)
for adaptation in relevant_adaptations:
# Apply modifications from adaptation
for modification_type, modification_value in adaptation.modifications.items():
if modification_type == 'preferred_mcp_server':
# Enhance MCP server selection
if 'recommended_mcp_servers' not in enhanced_recommendations:
enhanced_recommendations['recommended_mcp_servers'] = []
servers = enhanced_recommendations['recommended_mcp_servers']
if modification_value not in servers:
servers.insert(0, modification_value) # Prioritize learned preference
elif modification_type == 'preferred_mode':
# Enhance mode selection
if 'recommended_modes' not in enhanced_recommendations:
enhanced_recommendations['recommended_modes'] = []
modes = enhanced_recommendations['recommended_modes']
if modification_value not in modes:
modes.insert(0, modification_value)
elif modification_type == 'suggested_flags':
# Enhance flag suggestions
if 'suggested_flags' not in enhanced_recommendations:
enhanced_recommendations['suggested_flags'] = []
for flag in modification_value:
if flag not in enhanced_recommendations['suggested_flags']:
enhanced_recommendations['suggested_flags'].append(flag)
elif modification_type == 'optimization':
# Apply performance optimizations
if 'optimizations' not in enhanced_recommendations:
enhanced_recommendations['optimizations'] = []
enhanced_recommendations['optimizations'].append(modification_value)
# Update usage tracking
adaptation.usage_count += 1
adaptation.last_used = time.time()
# Add learning metadata
enhanced_recommendations['applied_adaptations'] = [
{
'id': adaptation.adaptation_id,
'confidence': adaptation.confidence_score,
'effectiveness': statistics.mean(adaptation.effectiveness_history)
}
for adaptation in relevant_adaptations
]
return enhanced_recommendations
def record_effectiveness_feedback(self,
adaptation_ids: List[str],
effectiveness_score: float,
context: Dict[str, Any]):
"""Record feedback on adaptation effectiveness."""
for adaptation_id in adaptation_ids:
# Find adaptation by ID
adaptation = None
for adapt in self.adaptations.values():
if adapt.adaptation_id == adaptation_id:
adaptation = adapt
break
if adaptation:
adaptation.effectiveness_history.append(effectiveness_score)
# Update confidence based on consistency
if len(adaptation.effectiveness_history) > 2:
recent_scores = adaptation.effectiveness_history[-5:]
consistency = 1.0 - statistics.stdev(recent_scores) / max(statistics.mean(recent_scores), 0.1)
adaptation.confidence_score = min(consistency, 1.0)
# Record learning event
self.record_learning_event(
LearningType.EFFECTIVENESS_FEEDBACK,
AdaptationScope.USER,
context,
{'adaptation_id': adaptation_id},
effectiveness_score,
adaptation.confidence_score
)
def generate_learning_insights(self) -> List[LearningInsight]:
"""Generate insights from learning patterns."""
insights = []
# User preference insights
insights.extend(self._analyze_user_preferences())
# Performance pattern insights
insights.extend(self._analyze_performance_patterns())
# Error pattern insights
insights.extend(self._analyze_error_patterns())
# Effectiveness insights
insights.extend(self._analyze_effectiveness_patterns())
return insights
def _analyze_user_preferences(self) -> List[LearningInsight]:
"""Analyze user preference patterns."""
insights = []
# Analyze MCP server preferences
mcp_usage = {}
for record in self.learning_records:
if record.learning_type == LearningType.USER_PREFERENCE:
server = record.pattern.get('mcp_server')
if server:
if server not in mcp_usage:
mcp_usage[server] = []
mcp_usage[server].append(record.effectiveness_score)
if mcp_usage:
# Find most effective server
server_effectiveness = {
server: statistics.mean(scores)
for server, scores in mcp_usage.items()
if len(scores) >= 3
}
if server_effectiveness:
best_server = max(server_effectiveness, key=server_effectiveness.get)
best_score = server_effectiveness[best_server]
if best_score > 0.8:
insights.append(LearningInsight(
insight_type="user_preference",
description=f"User consistently prefers {best_server} MCP server",
evidence=[f"Effectiveness score: {best_score:.2f}", f"Usage count: {len(mcp_usage[best_server])}"],
recommendations=[f"Auto-suggest {best_server} for similar operations"],
confidence=min(best_score, 1.0),
impact_score=0.7
))
return insights
def _analyze_performance_patterns(self) -> List[LearningInsight]:
"""Analyze performance optimization patterns."""
insights = []
# Analyze delegation effectiveness
delegation_records = [
r for r in self.learning_records
if r.learning_type == LearningType.PERFORMANCE_OPTIMIZATION
and 'delegation' in r.pattern
]
if len(delegation_records) >= 5:
avg_effectiveness = statistics.mean([r.effectiveness_score for r in delegation_records])
if avg_effectiveness > 0.75:
insights.append(LearningInsight(
insight_type="performance_optimization",
description="Delegation consistently improves performance",
evidence=[f"Average effectiveness: {avg_effectiveness:.2f}", f"Sample size: {len(delegation_records)}"],
recommendations=["Enable delegation for multi-file operations", "Lower delegation threshold"],
confidence=avg_effectiveness,
impact_score=0.8
))
return insights
def _analyze_error_patterns(self) -> List[LearningInsight]:
"""Analyze error recovery patterns."""
insights = []
error_records = [
r for r in self.learning_records
if r.learning_type == LearningType.ERROR_RECOVERY
]
if len(error_records) >= 3:
# Analyze common error contexts
error_contexts = {}
for record in error_records:
context_key = record.context.get('operation_type', 'unknown')
if context_key not in error_contexts:
error_contexts[context_key] = []
error_contexts[context_key].append(record)
for context, records in error_contexts.items():
if len(records) >= 2:
avg_recovery_effectiveness = statistics.mean([r.effectiveness_score for r in records])
insights.append(LearningInsight(
insight_type="error_recovery",
description=f"Error patterns identified for {context} operations",
evidence=[f"Occurrence count: {len(records)}", f"Recovery effectiveness: {avg_recovery_effectiveness:.2f}"],
recommendations=[f"Add proactive validation for {context} operations"],
confidence=min(len(records) / 5, 1.0),
impact_score=0.6
))
return insights
def _analyze_effectiveness_patterns(self) -> List[LearningInsight]:
"""Analyze overall effectiveness patterns."""
insights = []
if len(self.learning_records) >= 10:
recent_records = sorted(self.learning_records, key=lambda r: r.timestamp)[-10:]
avg_effectiveness = statistics.mean([r.effectiveness_score for r in recent_records])
if avg_effectiveness > 0.8:
insights.append(LearningInsight(
insight_type="effectiveness_trend",
description="SuperClaude effectiveness is high and improving",
evidence=[f"Recent average effectiveness: {avg_effectiveness:.2f}"],
recommendations=["Continue current learning patterns", "Consider expanding adaptation scope"],
confidence=avg_effectiveness,
impact_score=0.9
))
elif avg_effectiveness < 0.6:
insights.append(LearningInsight(
insight_type="effectiveness_concern",
description="SuperClaude effectiveness below optimal",
evidence=[f"Recent average effectiveness: {avg_effectiveness:.2f}"],
recommendations=["Review recent adaptations", "Gather more user feedback", "Adjust learning thresholds"],
confidence=1.0 - avg_effectiveness,
impact_score=0.8
))
return insights
def _save_learning_data(self):
"""Save learning data to cache files with validation and atomic writes."""
try:
# Save learning records with validation
records_file = self.cache_dir / "learning_records.json"
records_data = []
for record in self.learning_records:
try:
# Convert record to dict and handle enums
record_dict = asdict(record)
# Convert enum values to strings for JSON serialization
if isinstance(record_dict.get('learning_type'), LearningType):
record_dict['learning_type'] = record_dict['learning_type'].value
if isinstance(record_dict.get('scope'), AdaptationScope):
record_dict['scope'] = record_dict['scope'].value
# Validate the record
if self._validate_learning_record_dict(record_dict):
records_data.append(record_dict)
else:
print(f"Warning: Invalid record skipped: {record_dict}")
except Exception as e:
print(f"Warning: Error processing record: {e}")
continue # Skip invalid records
# Atomic write to prevent corruption during write
temp_file = records_file.with_suffix('.tmp')
with open(temp_file, 'w') as f:
json.dump(records_data, f, indent=2)
temp_file.replace(records_file)
# Save adaptations with validation
adaptations_file = self.cache_dir / "adaptations.json"
adaptations_data = {}
for k, v in self.adaptations.items():
try:
adapt_dict = asdict(v)
if self._validate_adaptation_data(adapt_dict):
adaptations_data[k] = adapt_dict
except Exception:
continue
temp_file = adaptations_file.with_suffix('.tmp')
with open(temp_file, 'w') as f:
json.dump(adaptations_data, f, indent=2)
temp_file.replace(adaptations_file)
# Save user preferences
preferences_file = self.cache_dir / "user_preferences.json"
if isinstance(self.user_preferences, dict):
temp_file = preferences_file.with_suffix('.tmp')
with open(temp_file, 'w') as f:
json.dump(self.user_preferences, f, indent=2)
temp_file.replace(preferences_file)
# Save project patterns
patterns_file = self.cache_dir / "project_patterns.json"
if isinstance(self.project_patterns, dict):
temp_file = patterns_file.with_suffix('.tmp')
with open(temp_file, 'w') as f:
json.dump(self.project_patterns, f, indent=2)
temp_file.replace(patterns_file)
except Exception as e:
print(f"Error saving learning data: {e}")
def _initialize_empty_records_file(self, records_file: Path):
"""Initialize learning records file with empty array."""
try:
with open(records_file, 'w') as f:
json.dump([], f)
except Exception as e:
print(f"Error initializing records file: {e}")
def _validate_learning_record(self, record_data: dict) -> bool:
"""Validate learning record data structure."""
required_fields = ['timestamp', 'learning_type', 'scope', 'context', 'pattern', 'effectiveness_score', 'confidence', 'metadata']
try:
return all(field in record_data for field in required_fields)
except (TypeError, AttributeError):
return False
def _validate_learning_record_dict(self, record_dict: dict) -> bool:
"""Validate learning record dictionary before saving."""
try:
# Check required fields exist and have valid types
if not isinstance(record_dict.get('timestamp'), (int, float)):
return False
# Handle both enum objects and string values for learning_type
learning_type = record_dict.get('learning_type')
if not isinstance(learning_type, (str, LearningType)):
return False
# Handle both enum objects and string values for scope
scope = record_dict.get('scope')
if not isinstance(scope, (str, AdaptationScope)):
return False
if not isinstance(record_dict.get('context'), dict):
return False
if not isinstance(record_dict.get('pattern'), dict):
return False
if not isinstance(record_dict.get('effectiveness_score'), (int, float)):
return False
if not isinstance(record_dict.get('confidence'), (int, float)):
return False
if not isinstance(record_dict.get('metadata'), dict):
return False
return True
except (TypeError, AttributeError):
return False
def _validate_adaptation_data(self, adapt_data: dict) -> bool:
"""Validate adaptation data structure."""
required_fields = ['adaptation_id', 'pattern_signature', 'trigger_conditions', 'modifications', 'effectiveness_history', 'usage_count', 'last_used', 'confidence_score']
try:
return all(field in adapt_data for field in required_fields)
except (TypeError, AttributeError):
return False
def get_intelligent_recommendations(self, context: Dict[str, Any]) -> Dict[str, Any]:
"""
Get comprehensive intelligent recommendations combining YAML patterns and learned adaptations.
Args:
context: Current operation context
Returns:
Comprehensive recommendations with intelligence from multiple sources
"""
# Get base recommendations from all YAML intelligence patterns
base_recommendations = {}
# Collect recommendations from all intelligence pattern types
pattern_types = ['mcp_orchestration', 'hook_coordination', 'performance_intelligence',
'validation_intelligence', 'user_experience', 'intelligence_patterns']
intelligence_results = {}
for pattern_type in pattern_types:
try:
result = self.intelligence_engine.evaluate_context(context, pattern_type)
intelligence_results[pattern_type] = result
# Merge recommendations
recommendations = result.get('recommendations', {})
for key, value in recommendations.items():
if key not in base_recommendations:
base_recommendations[key] = value
elif isinstance(base_recommendations[key], list) and isinstance(value, list):
# Merge lists without duplicates
base_recommendations[key] = list(set(base_recommendations[key] + value))
except Exception as e:
print(f"Warning: Could not evaluate {pattern_type} patterns: {e}")
# Apply learned adaptations on top of YAML intelligence
enhanced_recommendations = self.apply_adaptations(context, base_recommendations)
# Add intelligence metadata
enhanced_recommendations['intelligence_metadata'] = {
'yaml_patterns_used': list(intelligence_results.keys()),
'adaptations_applied': len(self.get_adaptations_for_context(context)),
'confidence_scores': {k: v.get('confidence', 0.0) for k, v in intelligence_results.items()},
'recommendations_source': 'yaml_intelligence_plus_learned_adaptations'
}
return enhanced_recommendations
def cleanup_old_data(self, days_to_keep: int = 30):
"""Clean up old learning data to prevent cache bloat."""
cutoff_time = time.time() - (days_to_keep * 24 * 60 * 60)
# Remove old learning records
self.learning_records = [
record for record in self.learning_records
if record.timestamp > cutoff_time
]
# Remove unused adaptations
self.adaptations = {
k: v for k, v in self.adaptations.items()
if v.last_used > cutoff_time or v.usage_count > 5
}
self._save_learning_data()
def update_last_preference(self, preference_key: str, value: Any):
"""Simply store the last successful choice - no complex learning."""
if not self.user_preferences:
self.user_preferences = {}
self.user_preferences[preference_key] = {
"value": value,
"timestamp": time.time()
}
self._save_learning_data()
def get_last_preference(self, preference_key: str, default=None):
"""Get the last successful choice if available."""
if not self.user_preferences:
return default
pref = self.user_preferences.get(preference_key, {})
return pref.get("value", default)
def update_project_info(self, project_path: str, info_type: str, value: Any):
"""Store basic project information."""
if not self.project_patterns:
self.project_patterns = {}
if project_path not in self.project_patterns:
self.project_patterns[project_path] = {}
self.project_patterns[project_path][info_type] = value
self.project_patterns[project_path]["last_updated"] = time.time()
self._save_learning_data()
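The last-preference methods above implement a deliberately simple store: keep only the most recent successful choice per key, timestamped, with a default on lookup miss. A minimal standalone sketch of that pattern (the `PreferenceStore` name and the in-memory dict are illustrative assumptions; the real class persists via `_save_learning_data`):

```python
import time

class PreferenceStore:
    """Minimal sketch of the last-successful-choice pattern above (no persistence)."""

    def __init__(self):
        self.user_preferences = {}

    def update_last_preference(self, key, value):
        # Store only the most recent successful choice, with a timestamp.
        self.user_preferences[key] = {"value": value, "timestamp": time.time()}

    def get_last_preference(self, key, default=None):
        # Return the stored value, or the default when nothing was recorded.
        return self.user_preferences.get(key, {}).get("value", default)

store = PreferenceStore()
store.update_last_preference("mcp_server", "morphllm")
print(store.get_last_preference("mcp_server"))         # morphllm
print(store.get_last_preference("missing", "native"))  # native
```

The timestamp is stored but never consulted on read; it exists so `cleanup_old_data` style maintenance can age entries out later.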


@@ -1,319 +0,0 @@
"""
Simple logger for SuperClaude-Lite hooks.
Provides structured logging of hook events for later analysis.
Focuses on capturing hook lifecycle, decisions, and errors in a
structured format without any analysis or complex features.
"""
import json
import logging
import os
import time
from datetime import datetime, timezone, timedelta
from pathlib import Path
from typing import Optional, Dict, Any
import uuid
import glob
# Import configuration loader
try:
from .yaml_loader import UnifiedConfigLoader
except ImportError:
# Fallback if yaml_loader is not available
UnifiedConfigLoader = None
class HookLogger:
"""Simple logger for SuperClaude-Lite hooks."""
def __init__(self, log_dir: str = None, retention_days: int = None):
"""
Initialize the logger.
Args:
log_dir: Directory to store log files. Defaults to cache/logs/
retention_days: Number of days to keep log files. Defaults to 30.
"""
# Load configuration
self.config = self._load_config()
# Check if logging is enabled
if not self.config.get('logging', {}).get('enabled', True):
self.enabled = False
return
self.enabled = True
# Set up log directory
if log_dir is None:
# Get SuperClaude-Lite root directory (2 levels up from shared/)
root_dir = Path(__file__).parent.parent.parent
log_dir_config = self.config.get('logging', {}).get('file_settings', {}).get('log_directory', 'cache/logs')
log_dir = root_dir / log_dir_config
self.log_dir = Path(log_dir)
self.log_dir.mkdir(parents=True, exist_ok=True)
# Log retention settings
if retention_days is None:
retention_days = self.config.get('logging', {}).get('file_settings', {}).get('retention_days', 30)
self.retention_days = retention_days
# Session ID for correlating events - shared across all hooks in the same Claude Code session
self.session_id = self._get_or_create_session_id()
# Set up Python logger
self._setup_logger()
# Clean up old logs on initialization
self._cleanup_old_logs()
def _load_config(self) -> Dict[str, Any]:
"""Load logging configuration from YAML file."""
if UnifiedConfigLoader is None:
# Return default configuration if loader not available
return {
'logging': {
'enabled': True,
'level': 'INFO',
'file_settings': {
'log_directory': 'cache/logs',
'retention_days': 30
}
}
}
try:
# Get project root
root_dir = Path(__file__).parent.parent.parent
loader = UnifiedConfigLoader(root_dir)
# Load logging configuration
config = loader.load_yaml('logging')
return config or {}
except Exception:
# Return default configuration on error
return {
'logging': {
'enabled': True,
'level': 'INFO',
'file_settings': {
'log_directory': 'cache/logs',
'retention_days': 30
}
}
}
def _get_or_create_session_id(self) -> str:
"""
Get or create a shared session ID for correlation across all hooks.
Checks in order:
1. Environment variable CLAUDE_SESSION_ID
2. Session file in cache directory
3. Generate new UUID and save to session file
Returns:
8-character session ID string
"""
# Check environment variable first
env_session_id = os.environ.get('CLAUDE_SESSION_ID')
if env_session_id:
return env_session_id[:8] # Truncate to 8 characters for consistency
# Check for session file in cache directory
cache_dir = self.log_dir.parent # logs are in cache/logs, so parent is cache/
session_file = cache_dir / "session_id"
try:
if session_file.exists():
session_id = session_file.read_text(encoding='utf-8').strip()
# Validate it's a reasonable session ID (8 chars, alphanumeric)
if len(session_id) == 8 and session_id.replace('-', '').isalnum():
return session_id
except (IOError, OSError):
# If we can't read the file, generate a new one
pass
# Generate new session ID and save it
new_session_id = str(uuid.uuid4())[:8]
try:
# Ensure cache directory exists
cache_dir.mkdir(parents=True, exist_ok=True)
session_file.write_text(new_session_id, encoding='utf-8')
except (IOError, OSError):
# If we can't write the file, just return the ID
# The session won't be shared, but at least this instance will work
pass
return new_session_id
def _setup_logger(self):
"""Set up the Python logger with JSON formatting."""
self.logger = logging.getLogger("superclaude_lite_hooks")
# Set log level from configuration
log_level_str = self.config.get('logging', {}).get('level', 'INFO').upper()
log_level = getattr(logging, log_level_str, logging.INFO)
self.logger.setLevel(log_level)
# Remove existing handlers to avoid duplicates
self.logger.handlers.clear()
# Create daily log file
today = datetime.now().strftime("%Y-%m-%d")
log_file = self.log_dir / f"superclaude-lite-{today}.log"
# File handler
handler = logging.FileHandler(log_file, mode='a', encoding='utf-8')
handler.setLevel(log_level)
# Simple formatter - just output the message (which is already JSON)
formatter = logging.Formatter('%(message)s')
handler.setFormatter(formatter)
self.logger.addHandler(handler)
def _create_event(self, event_type: str, hook_name: str, data: Dict[str, Any] = None) -> Dict[str, Any]:
"""Create a structured event."""
event = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"session": self.session_id,
"hook": hook_name,
"event": event_type
}
if data:
event["data"] = data
return event
def _should_log_event(self, hook_name: str, event_type: str) -> bool:
"""Check if this event should be logged based on configuration."""
if not self.enabled:
return False
# Check hook-specific configuration
hook_config = self.config.get('hook_configuration', {}).get(hook_name, {})
if not hook_config.get('enabled', True):
return False
# Check event type configuration
hook_logging = self.config.get('logging', {}).get('hook_logging', {})
event_mapping = {
'start': 'log_lifecycle',
'end': 'log_lifecycle',
'decision': 'log_decisions',
'error': 'log_errors'
}
config_key = event_mapping.get(event_type, 'log_lifecycle')
return hook_logging.get(config_key, True)
def log_hook_start(self, hook_name: str, context: Optional[Dict[str, Any]] = None):
"""Log the start of a hook execution."""
if not self._should_log_event(hook_name, 'start'):
return
event = self._create_event("start", hook_name, context)
self.logger.info(json.dumps(event))
def log_hook_end(self, hook_name: str, duration_ms: int, success: bool, result: Optional[Dict[str, Any]] = None):
"""Log the end of a hook execution."""
if not self._should_log_event(hook_name, 'end'):
return
data = {
"duration_ms": duration_ms,
"success": success
}
if result:
data["result"] = result
event = self._create_event("end", hook_name, data)
self.logger.info(json.dumps(event))
def log_decision(self, hook_name: str, decision_type: str, choice: str, reason: str):
"""Log a decision made by a hook."""
if not self._should_log_event(hook_name, 'decision'):
return
data = {
"type": decision_type,
"choice": choice,
"reason": reason
}
event = self._create_event("decision", hook_name, data)
self.logger.info(json.dumps(event))
def log_error(self, hook_name: str, error: str, context: Optional[Dict[str, Any]] = None):
"""Log an error that occurred in a hook."""
if not self._should_log_event(hook_name, 'error'):
return
data = {
"error": error
}
if context:
data["context"] = context
event = self._create_event("error", hook_name, data)
self.logger.info(json.dumps(event))
def _cleanup_old_logs(self):
"""Remove log files older than retention_days."""
if self.retention_days <= 0:
return
cutoff_date = datetime.now() - timedelta(days=self.retention_days)
# Find all log files
log_pattern = self.log_dir / "superclaude-lite-*.log"
for log_file in glob.glob(str(log_pattern)):
try:
# Extract date from filename
filename = os.path.basename(log_file)
date_str = filename.replace("superclaude-lite-", "").replace(".log", "")
file_date = datetime.strptime(date_str, "%Y-%m-%d")
# Remove if older than cutoff
if file_date < cutoff_date:
os.remove(log_file)
except (ValueError, OSError):
# Skip files that don't match expected format or can't be removed
continue
# Global logger instance
_logger = None
def get_logger() -> HookLogger:
"""Get the global logger instance."""
global _logger
if _logger is None:
_logger = HookLogger()
return _logger
# Convenience functions for easy hook integration
def log_hook_start(hook_name: str, context: Optional[Dict[str, Any]] = None):
"""Log the start of a hook execution."""
get_logger().log_hook_start(hook_name, context)
def log_hook_end(hook_name: str, duration_ms: int, success: bool, result: Optional[Dict[str, Any]] = None):
"""Log the end of a hook execution."""
get_logger().log_hook_end(hook_name, duration_ms, success, result)
def log_decision(hook_name: str, decision_type: str, choice: str, reason: str):
"""Log a decision made by a hook."""
get_logger().log_decision(hook_name, decision_type, choice, reason)
def log_error(hook_name: str, error: str, context: Optional[Dict[str, Any]] = None):
"""Log an error that occurred in a hook."""
get_logger().log_error(hook_name, error, context)
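Each logger method ultimately emits one JSON object per line via `_create_event`, which is what makes the logs machine-parseable for later analysis. A self-contained sketch of that event shape (the `create_event` function and the literal session ID are illustrative, not part of the module):

```python
import json
from datetime import datetime, timezone

def create_event(session_id, hook_name, event_type, data=None):
    # Mirrors _create_event above: one flat JSON object per log line.
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "session": session_id,
        "hook": hook_name,
        "event": event_type,
    }
    if data:
        event["data"] = data
    return event

line = json.dumps(create_event("a1b2c3d4", "session_start", "end",
                               {"duration_ms": 42, "success": True}))
parsed = json.loads(line)
print(parsed["hook"], parsed["data"]["duration_ms"])  # session_start 42
```

Because every line is independently valid JSON, a daily log file can be consumed with a plain line-by-line `json.loads` loop, and the shared `session` field correlates events across hooks.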


@@ -1,762 +0,0 @@
"""
MCP Intelligence Engine for SuperClaude-Lite
Intelligent MCP server activation, coordination, and optimization based on
ORCHESTRATOR.md patterns and real-time context analysis.
"""
import json
import time
from typing import Dict, Any, List, Optional, Set, Tuple
from dataclasses import dataclass
from enum import Enum
from yaml_loader import config_loader
from pattern_detection import PatternDetector, PatternMatch
class MCPServerState(Enum):
"""States of MCP server availability."""
AVAILABLE = "available"
UNAVAILABLE = "unavailable"
LOADING = "loading"
ERROR = "error"
@dataclass
class MCPServerCapability:
"""Capability definition for an MCP server."""
server_name: str
primary_functions: List[str]
performance_profile: str # lightweight, standard, intensive
activation_cost_ms: int
token_efficiency: float # 0.0 to 1.0
quality_impact: float # 0.0 to 1.0
@dataclass
class MCPActivationPlan:
"""Plan for MCP server activation."""
servers_to_activate: List[str]
activation_order: List[str]
estimated_cost_ms: int
efficiency_gains: Dict[str, float]
fallback_strategy: Dict[str, str]
coordination_strategy: str
class MCPIntelligence:
"""
Intelligent MCP server management and coordination.
Implements ORCHESTRATOR.md patterns for:
- Smart server selection based on context
- Performance-optimized activation sequences
- Fallback strategies for server failures
- Cross-server coordination and caching
- Real-time adaptation based on effectiveness
"""
def __init__(self):
self.pattern_detector = PatternDetector()
self.server_capabilities = self._load_server_capabilities()
self.server_states = self._initialize_server_states()
self.activation_history = []
self.performance_metrics = {}
def _load_server_capabilities(self) -> Dict[str, MCPServerCapability]:
"""Load MCP server capabilities from configuration."""
config = config_loader.load_config('orchestrator')
capabilities = {}
servers_config = config.get('mcp_servers', {})
capabilities['context7'] = MCPServerCapability(
server_name='context7',
primary_functions=['library_docs', 'framework_patterns', 'best_practices'],
performance_profile='standard',
activation_cost_ms=150,
token_efficiency=0.8,
quality_impact=0.9
)
capabilities['sequential'] = MCPServerCapability(
server_name='sequential',
primary_functions=['complex_analysis', 'multi_step_reasoning', 'debugging'],
performance_profile='intensive',
activation_cost_ms=200,
token_efficiency=0.6,
quality_impact=0.95
)
capabilities['magic'] = MCPServerCapability(
server_name='magic',
primary_functions=['ui_components', 'design_systems', 'frontend_generation'],
performance_profile='standard',
activation_cost_ms=120,
token_efficiency=0.85,
quality_impact=0.9
)
capabilities['playwright'] = MCPServerCapability(
server_name='playwright',
primary_functions=['e2e_testing', 'browser_automation', 'performance_testing'],
performance_profile='intensive',
activation_cost_ms=300,
token_efficiency=0.7,
quality_impact=0.85
)
capabilities['morphllm'] = MCPServerCapability(
server_name='morphllm',
primary_functions=['intelligent_editing', 'pattern_application', 'fast_apply'],
performance_profile='lightweight',
activation_cost_ms=80,
token_efficiency=0.9,
quality_impact=0.8
)
capabilities['serena'] = MCPServerCapability(
server_name='serena',
primary_functions=['semantic_analysis', 'project_context', 'memory_management'],
performance_profile='standard',
activation_cost_ms=100,
token_efficiency=0.75,
quality_impact=0.95
)
return capabilities
def _initialize_server_states(self) -> Dict[str, MCPServerState]:
"""Initialize server state tracking."""
return {
server: MCPServerState.AVAILABLE
for server in self.server_capabilities.keys()
}
def create_activation_plan(self,
user_input: str,
context: Dict[str, Any],
operation_data: Dict[str, Any]) -> MCPActivationPlan:
"""
Create intelligent MCP server activation plan.
Args:
user_input: User's request or command
context: Session and environment context
operation_data: Information about the planned operation
Returns:
MCPActivationPlan with optimized server selection and coordination
"""
# Detect patterns to determine server needs
detection_result = self.pattern_detector.detect_patterns(
user_input, context, operation_data
)
# Extract recommended servers from pattern detection
recommended_servers = detection_result.recommended_mcp_servers
# Apply intelligent selection based on context
optimized_servers = self._optimize_server_selection(
recommended_servers, context, operation_data
)
# Determine activation order for optimal performance
activation_order = self._calculate_activation_order(optimized_servers, context)
# Calculate estimated costs and gains
estimated_cost = self._calculate_activation_cost(optimized_servers)
efficiency_gains = self._calculate_efficiency_gains(optimized_servers, operation_data)
# Create fallback strategy
fallback_strategy = self._create_fallback_strategy(optimized_servers)
# Determine coordination strategy
coordination_strategy = self._determine_coordination_strategy(
optimized_servers, operation_data
)
return MCPActivationPlan(
servers_to_activate=optimized_servers,
activation_order=activation_order,
estimated_cost_ms=estimated_cost,
efficiency_gains=efficiency_gains,
fallback_strategy=fallback_strategy,
coordination_strategy=coordination_strategy
)
def _optimize_server_selection(self,
recommended_servers: List[str],
context: Dict[str, Any],
operation_data: Dict[str, Any]) -> List[str]:
"""Apply intelligent optimization to server selection."""
optimized = set(recommended_servers)
# Morphllm vs Serena intelligence selection
file_count = operation_data.get('file_count', 1)
complexity_score = operation_data.get('complexity_score', 0.0)
if 'morphllm' in optimized and 'serena' in optimized:
# Choose the more appropriate server based on complexity
if file_count > 10 or complexity_score > 0.6:
optimized.remove('morphllm') # Use Serena for complex operations
else:
optimized.remove('serena') # Use Morphllm for efficient operations
elif file_count > 10 or complexity_score > 0.6:
# Auto-add Serena for complex operations
optimized.add('serena')
optimized.discard('morphllm')
elif file_count <= 10 and complexity_score <= 0.6:
# Auto-add Morphllm for simple operations
optimized.add('morphllm')
optimized.discard('serena')
# Resource constraint optimization
resource_usage = context.get('resource_usage_percent', 0)
if resource_usage > 85:
# Remove intensive servers under resource constraints
intensive_servers = {
name for name, cap in self.server_capabilities.items()
if cap.performance_profile == 'intensive'
}
optimized -= intensive_servers
# Performance optimization based on operation type
operation_type = operation_data.get('operation_type', '')
if operation_type in ['read', 'analyze'] and 'sequential' not in optimized:
# Add Sequential for analysis operations
optimized.add('sequential')
# Auto-add Context7 if external libraries detected
if operation_data.get('has_external_dependencies', False):
optimized.add('context7')
return list(optimized)
def _calculate_activation_order(self, servers: List[str], context: Dict[str, Any]) -> List[str]:
"""Calculate optimal activation order for performance."""
if not servers:
return []
# Sort by activation cost (lightweight first)
server_costs = [
(server, self.server_capabilities[server].activation_cost_ms)
for server in servers
]
server_costs.sort(key=lambda x: x[1])
# Special ordering rules
ordered = []
# 1. Serena first if present (provides context for others)
if 'serena' in servers:
ordered.append('serena')
servers = [s for s in servers if s != 'serena']
# 2. Context7 early for documentation context
if 'context7' in servers:
ordered.append('context7')
servers = [s for s in servers if s != 'context7']
# 3. Remaining servers by cost
remaining_costs = [
(server, self.server_capabilities[server].activation_cost_ms)
for server in servers
]
remaining_costs.sort(key=lambda x: x[1])
ordered.extend([server for server, _ in remaining_costs])
return ordered
def _calculate_activation_cost(self, servers: List[str]) -> int:
"""Calculate total activation cost in milliseconds."""
return sum(
self.server_capabilities[server].activation_cost_ms
for server in servers
if server in self.server_capabilities
)
def _calculate_efficiency_gains(self, servers: List[str], operation_data: Dict[str, Any]) -> Dict[str, float]:
"""Calculate expected efficiency gains from server activation."""
gains = {}
for server in servers:
if server not in self.server_capabilities:
continue
capability = self.server_capabilities[server]
# Base efficiency gain
base_gain = capability.token_efficiency * capability.quality_impact
# Context-specific adjustments
if server == 'morphllm' and operation_data.get('file_count', 1) <= 5:
gains[server] = base_gain * 1.2 # Extra efficient for small operations
elif server == 'serena' and operation_data.get('complexity_score', 0) > 0.6:
gains[server] = base_gain * 1.3 # Extra valuable for complex operations
elif server == 'sequential' and 'debug' in operation_data.get('operation_type', ''):
gains[server] = base_gain * 1.4 # Extra valuable for debugging
else:
gains[server] = base_gain
return gains
def _create_fallback_strategy(self, servers: List[str]) -> Dict[str, str]:
"""Create fallback strategy for server failures."""
fallbacks = {}
# Define fallback mappings
fallback_map = {
'morphllm': 'serena', # Serena can handle editing
'serena': 'morphllm', # Morphllm can handle simple edits
'sequential': 'context7', # Context7 for documentation-based analysis
'context7': 'sequential', # Sequential for complex analysis
'magic': 'morphllm', # Morphllm for component generation
'playwright': 'sequential' # Sequential for test planning
}
for server in servers:
fallback = fallback_map.get(server)
if fallback and fallback not in servers:
fallbacks[server] = fallback
else:
fallbacks[server] = 'native_tools' # Fall back to native Claude tools
return fallbacks
def _determine_coordination_strategy(self, servers: List[str], operation_data: Dict[str, Any]) -> str:
"""Determine how servers should coordinate."""
if len(servers) <= 1:
return 'single_server'
# Sequential coordination for complex analysis
if 'sequential' in servers and operation_data.get('complexity_score', 0) > 0.6:
return 'sequential_lead'
# Serena coordination for multi-file operations
if 'serena' in servers and operation_data.get('file_count', 1) > 5:
return 'serena_lead'
# Parallel coordination for independent operations
if len(servers) >= 3:
return 'parallel_with_sync'
return 'collaborative'
def execute_activation_plan(self, plan: MCPActivationPlan, context: Dict[str, Any]) -> Dict[str, Any]:
"""
Execute MCP server activation plan with error handling and performance tracking.
Args:
plan: MCPActivationPlan to execute
context: Current session context
Returns:
Execution results with performance metrics and activated servers
"""
start_time = time.time()
activated_servers = []
failed_servers = []
fallback_activations = []
for server in plan.activation_order:
try:
# Check server availability
if self.server_states.get(server) == MCPServerState.UNAVAILABLE:
failed_servers.append(server)
self._handle_server_fallback(server, plan, fallback_activations)
continue
# Activate server (simulated - real implementation would call MCP)
self.server_states[server] = MCPServerState.LOADING
activation_start = time.time()
# Simulate activation time
expected_cost = self.server_capabilities[server].activation_cost_ms
actual_cost = expected_cost * (0.8 + 0.4 * (hash(server) % 1000) / 1000)  # Simulated variance in 0.8x-1.2x
self.server_states[server] = MCPServerState.AVAILABLE
activated_servers.append(server)
# Track performance
activation_time = (time.time() - activation_start) * 1000
self.performance_metrics[server] = {
'last_activation_ms': activation_time,
'expected_ms': expected_cost,
'efficiency_ratio': expected_cost / max(activation_time, 1)
}
except Exception as e:
failed_servers.append(server)
self.server_states[server] = MCPServerState.ERROR
self._handle_server_fallback(server, plan, fallback_activations)
total_time = (time.time() - start_time) * 1000
# Update activation history
self.activation_history.append({
'timestamp': time.time(),
'plan': plan,
'activated': activated_servers,
'failed': failed_servers,
'fallbacks': fallback_activations,
'total_time_ms': total_time
})
return {
'activated_servers': activated_servers,
'failed_servers': failed_servers,
'fallback_activations': fallback_activations,
'total_activation_time_ms': total_time,
'coordination_strategy': plan.coordination_strategy,
'performance_metrics': self.performance_metrics
}
def _handle_server_fallback(self, failed_server: str, plan: MCPActivationPlan, fallback_activations: List[str]):
"""Handle server activation failure with fallback strategy."""
fallback = plan.fallback_strategy.get(failed_server)
if fallback and fallback != 'native_tools' and fallback not in plan.servers_to_activate:
# Try to activate fallback server
if self.server_states.get(fallback) == MCPServerState.AVAILABLE:
fallback_activations.append(f"{failed_server}->{fallback}")
# In real implementation, would activate fallback server
def get_optimization_recommendations(self, context: Dict[str, Any]) -> Dict[str, Any]:
"""Get recommendations for optimizing MCP server usage."""
recommendations = []
# Analyze activation history for patterns
if len(self.activation_history) >= 5:
recent_activations = self.activation_history[-5:]
# Check for frequently failing servers
failed_counts = {}
for activation in recent_activations:
for failed in activation['failed']:
failed_counts[failed] = failed_counts.get(failed, 0) + 1
for server, count in failed_counts.items():
if count >= 3:
recommendations.append(f"Server {server} failing frequently - consider fallback strategy")
# Check for performance issues
avg_times = {}
for activation in recent_activations:
total_time = activation['total_time_ms']
server_count = len(activation['activated'])
if server_count > 0:
avg_time_per_server = total_time / server_count
avg_times[len(activation['activated'])] = avg_time_per_server
if avg_times and max(avg_times.values()) > 500:
recommendations.append("Consider reducing concurrent server activations for better performance")
# Resource usage recommendations
resource_usage = context.get('resource_usage_percent', 0)
if resource_usage > 80:
recommendations.append("High resource usage - consider lightweight servers only")
return {
'recommendations': recommendations,
'performance_metrics': self.performance_metrics,
'server_states': {k: v.value for k, v in self.server_states.items()},
'efficiency_score': self._calculate_overall_efficiency()
}
def _calculate_overall_efficiency(self) -> float:
"""Calculate overall MCP system efficiency."""
if not self.performance_metrics:
return 1.0
efficiency_scores = []
for server, metrics in self.performance_metrics.items():
efficiency_ratio = metrics.get('efficiency_ratio', 1.0)
efficiency_scores.append(min(efficiency_ratio, 2.0)) # Cap at 200% efficiency
return sum(efficiency_scores) / len(efficiency_scores) if efficiency_scores else 1.0
def select_optimal_server(self, tool_name: str, context: Dict[str, Any]) -> str:
"""
Select the most appropriate MCP server for a given tool and context.
Enhanced with intelligent analysis of:
- User intent keywords and patterns
- Operation type classification
- Test type specific routing
- Multi-factor context analysis
- Smart fallback logic
Args:
tool_name: Name of the tool to be executed
context: Context information for intelligent selection
Returns:
Name of the optimal server for the tool
"""
# Extract context information
user_intent = context.get('user_intent', '').lower()
operation_type = context.get('operation_type', '').lower()
test_type = context.get('test_type', '').lower()
file_count = context.get('file_count', 1)
complexity_score = context.get('complexity_score', 0.0)
has_external_deps = context.get('has_external_dependencies', False)
# 1. KEYWORD-BASED INTENT ANALYSIS
# UI/Frontend keywords → Magic
ui_keywords = [
'component', 'ui', 'frontend', 'react', 'vue', 'angular', 'button', 'form',
'modal', 'layout', 'design', 'responsive', 'css', 'styling', 'theme',
'navigation', 'menu', 'sidebar', 'dashboard', 'card', 'table', 'chart'
]
# Testing keywords → Playwright
test_keywords = [
'test', 'testing', 'e2e', 'end-to-end', 'browser', 'automation',
'selenium', 'cypress', 'performance', 'load test', 'visual test',
'regression', 'cross-browser', 'integration test'
]
# Documentation keywords → Context7
doc_keywords = [
'documentation', 'docs', 'library', 'framework', 'api', 'reference',
'best practice', 'pattern', 'tutorial', 'guide', 'example', 'usage',
'install', 'setup', 'configuration', 'migration'
]
# Analysis/Debug keywords → Sequential
analysis_keywords = [
'analyze', 'debug', 'troubleshoot', 'investigate', 'complex', 'architecture',
'system', 'performance', 'bottleneck', 'optimization', 'refactor',
'review', 'audit', 'security', 'vulnerability'
]
# Memory/Context keywords → Serena
context_keywords = [
'memory', 'context', 'semantic', 'symbol', 'reference', 'definition',
'search', 'find', 'locate', 'navigate', 'project', 'codebase', 'workspace'
]
# Editing keywords → Morphllm
edit_keywords = [
'edit', 'modify', 'change', 'update', 'fix', 'replace', 'rewrite',
'format', 'style', 'cleanup', 'transform', 'apply', 'batch'
]
# Check user intent against keyword categories
intent_scores = {}
for keyword in ui_keywords:
if keyword in user_intent:
intent_scores['magic'] = intent_scores.get('magic', 0) + 1
for keyword in test_keywords:
if keyword in user_intent:
intent_scores['playwright'] = intent_scores.get('playwright', 0) + 1
for keyword in doc_keywords:
if keyword in user_intent:
intent_scores['context7'] = intent_scores.get('context7', 0) + 1
for keyword in analysis_keywords:
if keyword in user_intent:
intent_scores['sequential'] = intent_scores.get('sequential', 0) + 1
for keyword in context_keywords:
if keyword in user_intent:
intent_scores['serena'] = intent_scores.get('serena', 0) + 1
for keyword in edit_keywords:
if keyword in user_intent:
intent_scores['morphllm'] = intent_scores.get('morphllm', 0) + 1
# 2. OPERATION TYPE ANALYSIS
operation_server_map = {
'create': 'magic', # UI creation
'build': 'magic', # Component building
'implement': 'magic', # Feature implementation
'test': 'playwright', # Testing operations
'validate': 'playwright', # Validation testing
'analyze': 'sequential', # Analysis operations
'debug': 'sequential', # Debugging
'troubleshoot': 'sequential', # Problem solving
'document': 'context7', # Documentation
'research': 'context7', # Research operations
'edit': 'morphllm', # File editing
'modify': 'morphllm', # Content modification
'search': 'serena', # Code search
'find': 'serena', # Finding operations
'navigate': 'serena' # Navigation
}
if operation_type in operation_server_map:
server = operation_server_map[operation_type]
intent_scores[server] = intent_scores.get(server, 0) + 2 # Higher weight
# 3. TEST TYPE SPECIFIC ROUTING
test_type_map = {
'e2e': 'playwright',
'end-to-end': 'playwright',
'integration': 'playwright',
'browser': 'playwright',
'visual': 'playwright',
'performance': 'playwright',
'load': 'playwright',
'ui': 'playwright',
'functional': 'playwright',
'regression': 'playwright',
'cross-browser': 'playwright',
'unit': 'sequential', # Complex unit test analysis
'security': 'sequential', # Security test analysis
'api': 'sequential' # API test analysis
}
if test_type and test_type in test_type_map:
server = test_type_map[test_type]
intent_scores[server] = intent_scores.get(server, 0) + 3 # Highest weight
# 4. TOOL-BASED MAPPING (Original logic enhanced)
tool_server_mapping = {
# File operations - context dependent
'read_file': None, # Will be determined by context
'write_file': None, # Will be determined by context
'edit_file': None, # Will be determined by context
# Analysis operations
'analyze_architecture': 'sequential',
'complex_reasoning': 'sequential',
'debug_analysis': 'sequential',
'system_analysis': 'sequential',
'performance_analysis': 'sequential',
# UI operations
'create_component': 'magic',
'ui_component': 'magic',
'design_system': 'magic',
'build_ui': 'magic',
'frontend_generation': 'magic',
# Testing operations
'browser_test': 'playwright',
'e2e_test': 'playwright',
'performance_test': 'playwright',
'visual_test': 'playwright',
'cross_browser_test': 'playwright',
# Documentation operations
'get_documentation': 'context7',
'library_docs': 'context7',
'framework_patterns': 'context7',
'api_reference': 'context7',
'best_practices': 'context7',
# Semantic operations
'semantic_analysis': 'serena',
'project_context': 'serena',
'memory_management': 'serena',
'symbol_search': 'serena',
'code_navigation': 'serena',
# Fast editing operations
'fast_edit': 'morphllm',
'pattern_application': 'morphllm',
'batch_edit': 'morphllm',
'text_transformation': 'morphllm'
}
# Primary server selection based on tool
primary_server = tool_server_mapping.get(tool_name)
if primary_server:
intent_scores[primary_server] = intent_scores.get(primary_server, 0) + 2
# 5. COMPLEXITY AND SCALE ANALYSIS
# High complexity → Sequential for analysis
if complexity_score > 0.6:
intent_scores['sequential'] = intent_scores.get('sequential', 0) + 2
# Large file count → Serena for project context
if file_count > 10:
intent_scores['serena'] = intent_scores.get('serena', 0) + 2
elif file_count > 5:
intent_scores['serena'] = intent_scores.get('serena', 0) + 1
# Small operations → Morphllm for efficiency
if file_count <= 3 and complexity_score <= 0.4:
intent_scores['morphllm'] = intent_scores.get('morphllm', 0) + 1
# External dependencies → Context7 for documentation
if has_external_deps:
intent_scores['context7'] = intent_scores.get('context7', 0) + 1
# 6. CONTEXTUAL FALLBACK LOGIC
# Check for file operation context-dependent routing
if tool_name in ['read_file', 'write_file', 'edit_file']:
# Route based on context
if any(keyword in user_intent for keyword in ui_keywords):
intent_scores['magic'] = intent_scores.get('magic', 0) + 2
elif any(keyword in user_intent for keyword in test_keywords):
intent_scores['playwright'] = intent_scores.get('playwright', 0) + 2
elif complexity_score > 0.5 or file_count > 5:
intent_scores['serena'] = intent_scores.get('serena', 0) + 2
else:
intent_scores['morphllm'] = intent_scores.get('morphllm', 0) + 2
# 7. SERVER SELECTION DECISION
# Return server with highest score
if intent_scores:
best_server = max(intent_scores.items(), key=lambda x: x[1])[0]
# Validate server availability
if self.server_states.get(best_server) == MCPServerState.AVAILABLE:
return best_server
# 8. INTELLIGENT FALLBACK CHAIN
# Fallback based on context characteristics
if complexity_score > 0.7 or 'complex' in user_intent or 'analyze' in user_intent:
return 'sequential'
elif any(keyword in user_intent for keyword in ui_keywords) or operation_type in ['create', 'build']:
return 'magic'
elif any(keyword in user_intent for keyword in test_keywords) or 'test' in operation_type:
return 'playwright'
elif has_external_deps or any(keyword in user_intent for keyword in doc_keywords):
return 'context7'
elif file_count > 10 or any(keyword in user_intent for keyword in context_keywords):
return 'serena'
else:
return 'morphllm' # Efficient default for simple operations
def get_fallback_server(self, tool_name: str, context: Dict[str, Any]) -> str:
"""
Get fallback server when primary server fails.
Args:
tool_name: Name of the tool
context: Context information
Returns:
Name of the fallback server
"""
primary_server = self.select_optimal_server(tool_name, context)
# Define fallback chains
fallback_chains = {
'sequential': 'serena',
'serena': 'morphllm',
'morphllm': 'context7',
'magic': 'morphllm',
'playwright': 'sequential',
'context7': 'morphllm'
}
fallback = fallback_chains.get(primary_server, 'morphllm')
# Avoid circular fallback
if fallback == primary_server:
return 'morphllm'
return fallback
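The fallback resolution above can be exercised in isolation. A minimal standalone sketch, using the same chain table (the real method first re-derives the primary server via `select_optimal_server` and this sketch skips that step):

```python
# Fallback chains copied from get_fallback_server above; 'morphllm' is
# the safe default, and a circular fallback collapses to it as well.
FALLBACK_CHAINS = {
    'sequential': 'serena',
    'serena': 'morphllm',
    'morphllm': 'context7',
    'magic': 'morphllm',
    'playwright': 'sequential',
    'context7': 'morphllm',
}

def resolve_fallback(primary_server: str) -> str:
    """Return the fallback server for a failed primary, avoiding cycles."""
    fallback = FALLBACK_CHAINS.get(primary_server, 'morphllm')
    return 'morphllm' if fallback == primary_server else fallback
```

Note that every chain entry terminates at `morphllm` within two hops, so a single-level fallback never loops.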


@@ -1,985 +0,0 @@
"""
Pattern Detection Engine for SuperClaude-Lite
Intelligent pattern detection for automatic mode activation,
MCP server selection, and operational optimization.
"""
import re
import json
from typing import Dict, Any, List, Set, Optional, Tuple
from dataclasses import dataclass
from enum import Enum
from yaml_loader import config_loader
class PatternType(Enum):
"""Types of patterns we can detect."""
MODE_TRIGGER = "mode_trigger"
MCP_SERVER = "mcp_server"
OPERATION_TYPE = "operation_type"
COMPLEXITY_INDICATOR = "complexity_indicator"
PERSONA_HINT = "persona_hint"
PERFORMANCE_HINT = "performance_hint"
@dataclass
class PatternMatch:
"""A detected pattern match."""
pattern_type: PatternType
pattern_name: str
confidence: float # 0.0 to 1.0
matched_text: str
suggestions: List[str]
metadata: Dict[str, Any]
@dataclass
class DetectionResult:
"""Result of pattern detection analysis."""
matches: List[PatternMatch]
recommended_modes: List[str]
recommended_mcp_servers: List[str]
suggested_flags: List[str]
complexity_score: float
confidence_score: float
class PatternDetector:
"""
Intelligent pattern detection system.
Analyzes user input, context, and operation patterns to determine:
- Which SuperClaude modes should be activated
- Which MCP servers are needed
- What optimization flags to apply
- Complexity and performance considerations
"""
def __init__(self):
"""Initialize pattern detector with configuration loading and error handling."""
try:
self.patterns = config_loader.load_config('modes') or {}
self.mcp_patterns = config_loader.load_config('orchestrator') or {}
except Exception as e:
print(f"Warning: Failed to load configuration: {e}")
self.patterns = {}
self.mcp_patterns = {}
self._compile_patterns()
def _compile_patterns(self):
"""Compile regex patterns for efficient matching with proper error handling."""
self.compiled_patterns = {}
self.mode_configs = {}
self.mcp_configs = {}
# Load mode detection patterns from the correct YAML structure
mode_detection = self.patterns.get('mode_detection', {})
for mode_name, mode_config in mode_detection.items():
try:
# Store mode configuration for threshold access
self.mode_configs[mode_name] = mode_config
# Compile all trigger patterns from different categories
all_patterns = []
trigger_patterns = mode_config.get('trigger_patterns', {})
# Handle different YAML structures
if isinstance(trigger_patterns, list):
# Simple list of patterns
all_patterns.extend(trigger_patterns)
elif isinstance(trigger_patterns, dict):
# Nested categories of patterns
for category, patterns in trigger_patterns.items():
if isinstance(patterns, list):
all_patterns.extend(patterns)
elif isinstance(patterns, str):
all_patterns.append(patterns)
elif isinstance(trigger_patterns, str):
# Single pattern string
all_patterns.append(trigger_patterns)
# Compile patterns with error handling
compiled = []
for pattern in all_patterns:
try:
compiled.append(re.compile(pattern, re.IGNORECASE))
except re.error as e:
print(f"Warning: Invalid regex pattern '{pattern}' for mode {mode_name}: {e}")
self.compiled_patterns[f"mode_{mode_name}"] = compiled
except Exception as e:
print(f"Warning: Error compiling patterns for mode {mode_name}: {e}")
self.compiled_patterns[f"mode_{mode_name}"] = []
# Load MCP server patterns from routing_patterns
routing_patterns = self.mcp_patterns.get('routing_patterns', {})
for server_name, server_config in routing_patterns.items():
try:
# Store server configuration
self.mcp_configs[server_name] = server_config
triggers = server_config.get('triggers', [])
compiled = []
for trigger in triggers:
try:
compiled.append(re.compile(trigger, re.IGNORECASE))
except re.error as e:
print(f"Warning: Invalid regex pattern '{trigger}' for server {server_name}: {e}")
self.compiled_patterns[f"mcp_{server_name}"] = compiled
except Exception as e:
print(f"Warning: Error compiling patterns for MCP server {server_name}: {e}")
self.compiled_patterns[f"mcp_{server_name}"] = []
def detect_patterns(self,
user_input: str,
context: Dict[str, Any],
operation_data: Dict[str, Any]) -> DetectionResult:
"""
Perform comprehensive pattern detection with input validation and error handling.
Args:
user_input: User's request or command
context: Session and environment context
operation_data: Information about the planned operation
Returns:
DetectionResult with all detected patterns and recommendations
"""
# Validate inputs
is_valid, validation_message = self.validate_input(user_input, context, operation_data)
if not is_valid:
# Return empty result for invalid inputs
return DetectionResult(
matches=[],
recommended_modes=[],
recommended_mcp_servers=[],
suggested_flags=[],
complexity_score=0.0,
confidence_score=0.0
)
matches = []
try:
# Detect mode triggers
mode_matches = self._detect_mode_patterns(user_input, context)
matches.extend(mode_matches)
# Detect MCP server needs
mcp_matches = self._detect_mcp_patterns(user_input, context, operation_data)
matches.extend(mcp_matches)
# Detect complexity indicators
complexity_matches = self._detect_complexity_patterns(user_input, operation_data)
matches.extend(complexity_matches)
# Detect persona hints
persona_matches = self._detect_persona_patterns(user_input, context)
matches.extend(persona_matches)
# Calculate overall scores
complexity_score = self._calculate_complexity_score(matches, operation_data)
confidence_score = self._calculate_confidence_score(matches)
# Generate recommendations
recommended_modes = self._get_recommended_modes(matches, complexity_score)
recommended_mcp_servers = self._get_recommended_mcp_servers(matches, context)
suggested_flags = self._get_suggested_flags(matches, complexity_score, context)
return DetectionResult(
matches=matches,
recommended_modes=recommended_modes,
recommended_mcp_servers=recommended_mcp_servers,
suggested_flags=suggested_flags,
complexity_score=complexity_score,
confidence_score=confidence_score
)
except Exception as e:
print(f"Error during pattern detection: {e}")
# Return partial results if available
return DetectionResult(
matches=matches, # Include any matches found before error
recommended_modes=[],
recommended_mcp_servers=[],
suggested_flags=[],
complexity_score=operation_data.get('complexity_score', 0.0),
confidence_score=0.0
)
def _detect_mode_patterns(self, user_input: str, context: Dict[str, Any]) -> List[PatternMatch]:
"""Detect which SuperClaude modes should be activated using compiled patterns."""
matches = []
# Iterate through all compiled mode patterns
for pattern_key, compiled_patterns in self.compiled_patterns.items():
if not pattern_key.startswith("mode_"):
continue
mode_name = pattern_key[5:] # Remove "mode_" prefix
mode_config = self.mode_configs.get(mode_name, {})
confidence_threshold = mode_config.get('confidence_threshold', 0.7)
# Check if any pattern matches
for pattern in compiled_patterns:
try:
match = pattern.search(user_input)
if match:
# Calculate confidence based on pattern type and context
confidence = self._calculate_mode_confidence(mode_name, match, context)
# Only include if above threshold
if confidence >= confidence_threshold:
matches.append(PatternMatch(
pattern_type=PatternType.MODE_TRIGGER,
pattern_name=mode_name,
confidence=confidence,
matched_text=match.group(),
suggestions=[f"Enable {mode_name} mode based on detected patterns"],
metadata={
"mode": mode_name,
"auto_activate": mode_config.get('activation_type') == 'automatic',
"threshold_met": confidence >= confidence_threshold
}
))
break # Stop after first match for this mode
except Exception as e:
print(f"Warning: Error matching pattern for mode {mode_name}: {e}")
# Check context-based triggers (resource usage, complexity, etc.)
matches.extend(self._detect_context_mode_triggers(context))
return matches
def _calculate_mode_confidence(self, mode_name: str, match: re.Match, context: Dict[str, Any]) -> float:
"""Calculate confidence score for mode activation based on match and context."""
base_confidence = 0.7
# Mode-specific confidence adjustments
mode_adjustments = {
'brainstorming': {
'uncertainty_words': ['maybe', 'not sure', 'thinking about', 'wondering'],
'project_words': ['new project', 'startup', 'build something'],
'exploration_words': ['brainstorm', 'explore', 'figure out']
},
'task_management': {
'scope_words': ['multiple', 'many', 'complex', 'comprehensive'],
'build_words': ['build', 'implement', 'create', 'develop']
},
'token_efficiency': {
'efficiency_words': ['brief', 'concise', 'compressed', 'short'],
'resource_words': ['token', 'resource', 'memory', 'optimization']
}
}
adjustments = mode_adjustments.get(mode_name, {})
matched_text = match.group().lower()
# Boost confidence based on specific word categories
confidence_boost = 0.0
for category, words in adjustments.items():
if any(word in matched_text for word in words):
confidence_boost += 0.1
# Context-based adjustments
resource_usage = context.get('resource_usage_percent', 0)
if mode_name == 'token_efficiency' and resource_usage > 75:
confidence_boost += 0.2
file_count = context.get('file_count', 1)
complexity_score = context.get('complexity_score', 0.0)
if mode_name == 'task_management' and (file_count > 3 or complexity_score > 0.4):
confidence_boost += 0.15
return min(base_confidence + confidence_boost, 1.0)
def _detect_context_mode_triggers(self, context: Dict[str, Any]) -> List[PatternMatch]:
"""Detect mode triggers based on context alone (not user input)."""
matches = []
# Resource-based token efficiency trigger
resource_usage = context.get('resource_usage_percent', 0)
if resource_usage > 75:
token_efficiency_config = self.mode_configs.get('token_efficiency', {})
confidence_threshold = token_efficiency_config.get('confidence_threshold', 0.75)
matches.append(PatternMatch(
pattern_type=PatternType.MODE_TRIGGER,
pattern_name="token_efficiency",
confidence=0.85,
matched_text="high_resource_usage",
suggestions=["Auto-enable token efficiency due to resource constraints"],
metadata={
"mode": "token_efficiency",
"trigger": "resource_constraint",
"resource_usage": resource_usage
}
))
# Complexity-based task management trigger
file_count = context.get('file_count', 1)
complexity_score = context.get('complexity_score', 0.0)
task_mgmt_config = self.mode_configs.get('task_management', {})
auto_thresholds = task_mgmt_config.get('auto_activation_thresholds', {})
file_threshold = auto_thresholds.get('file_count', 3)
complexity_threshold = auto_thresholds.get('complexity_score', 0.4)
if file_count >= file_threshold or complexity_score >= complexity_threshold:
matches.append(PatternMatch(
pattern_type=PatternType.MODE_TRIGGER,
pattern_name="task_management",
confidence=0.8,
matched_text="complexity_threshold_met",
suggestions=["Auto-enable task management for complex operations"],
metadata={
"mode": "task_management",
"trigger": "complexity_threshold",
"file_count": file_count,
"complexity_score": complexity_score
}
))
return matches
def _detect_mcp_patterns(self, user_input: str, context: Dict[str, Any], operation_data: Dict[str, Any]) -> List[PatternMatch]:
"""Detect which MCP servers should be activated using compiled patterns."""
matches = []
# Iterate through all compiled MCP server patterns
for pattern_key, compiled_patterns in self.compiled_patterns.items():
if not pattern_key.startswith("mcp_"):
continue
server_name = pattern_key[4:] # Remove "mcp_" prefix
server_config = self.mcp_configs.get(server_name, {})
confidence_threshold = server_config.get('confidence_threshold', 0.7)
# Check if any pattern matches
for pattern in compiled_patterns:
try:
match = pattern.search(user_input)
if match:
# Calculate confidence based on server type and context
confidence = self._calculate_mcp_confidence(server_name, match, context, operation_data)
# Only include if above threshold
if confidence >= confidence_threshold:
matches.append(PatternMatch(
pattern_type=PatternType.MCP_SERVER,
pattern_name=server_name,
confidence=confidence,
matched_text=match.group(),
suggestions=[f"Enable {server_name} server for {(server_config.get('capabilities') or ['general'])[0]} capabilities"],
metadata={
"mcp_server": server_name,
"confidence_threshold": confidence_threshold,
"server_config": server_config,
"capabilities": server_config.get('capabilities', [])
}
))
break # Stop after first match for this server
except Exception as e:
print(f"Warning: Error matching pattern for MCP server {server_name}: {e}")
# Add hybrid intelligence selection (Morphllm vs Serena)
matches.extend(self._detect_hybrid_intelligence_selection(operation_data))
return matches
def _calculate_mcp_confidence(self, server_name: str, match: re.Match, context: Dict[str, Any], operation_data: Dict[str, Any]) -> float:
"""Calculate confidence score for MCP server activation."""
server_config = self.mcp_configs.get(server_name, {})
base_confidence = server_config.get('confidence_threshold', 0.7)
# Server-specific confidence adjustments
matched_text = match.group().lower()
confidence_boost = 0.0
# Boost confidence based on operation context
file_count = operation_data.get('file_count', 1)
complexity_score = operation_data.get('complexity_score', 0.0)
# Context-specific boosts
if server_name == 'sequential' and (complexity_score > 0.6 or 'complex' in matched_text):
confidence_boost += 0.15
elif server_name == 'magic' and ('ui' in matched_text or 'component' in matched_text):
confidence_boost += 0.1
elif server_name == 'context7' and ('library' in matched_text or 'framework' in matched_text):
confidence_boost += 0.1
elif server_name == 'playwright' and ('test' in matched_text or 'automation' in matched_text):
confidence_boost += 0.1
# Performance profile adjustments
performance_profile = server_config.get('performance_profile', 'standard')
if performance_profile == 'intensive' and complexity_score > 0.5:
confidence_boost += 0.05
elif performance_profile == 'lightweight' and file_count <= 3:
confidence_boost += 0.05
return min(base_confidence + confidence_boost, 1.0)
def _detect_hybrid_intelligence_selection(self, operation_data: Dict[str, Any]) -> List[PatternMatch]:
"""Detect whether to use Morphllm or Serena based on operation characteristics."""
matches = []
file_count = operation_data.get('file_count', 1)
complexity_score = operation_data.get('complexity_score', 0.0)
operation_types = operation_data.get('operation_types', [])
# Get hybrid intelligence configuration
hybrid_config = self.mcp_patterns.get('hybrid_intelligence', {}).get('morphllm_vs_serena', {})
# Morphllm criteria
morphllm_criteria = hybrid_config.get('morphllm_criteria', {})
morphllm_file_max = morphllm_criteria.get('file_count_max', 10)
morphllm_complexity_max = morphllm_criteria.get('complexity_max', 0.6)
morphllm_ops = morphllm_criteria.get('preferred_operations', [])
# Serena criteria
serena_criteria = hybrid_config.get('serena_criteria', {})
serena_file_min = serena_criteria.get('file_count_min', 5)
serena_complexity_min = serena_criteria.get('complexity_min', 0.4)
serena_ops = serena_criteria.get('preferred_operations', [])
# Determine which system to use
morphllm_score = 0
serena_score = 0
# File count scoring
if file_count <= morphllm_file_max:
morphllm_score += 1
if file_count >= serena_file_min:
serena_score += 1
# Complexity scoring
if complexity_score <= morphllm_complexity_max:
morphllm_score += 1
if complexity_score >= serena_complexity_min:
serena_score += 1
# Operation type scoring
for op_type in operation_types:
if op_type in morphllm_ops:
morphllm_score += 1
if op_type in serena_ops:
serena_score += 1
# Make selection based on scores
if serena_score > morphllm_score:
matches.append(PatternMatch(
pattern_type=PatternType.MCP_SERVER,
pattern_name="serena",
confidence=0.8 + (serena_score * 0.05),
matched_text="hybrid_intelligence_selection",
suggestions=["Use Serena for complex multi-file operations with semantic understanding"],
metadata={
"mcp_server": "serena",
"selection_reason": "hybrid_intelligence",
"file_count": file_count,
"complexity_score": complexity_score,
"score": serena_score
}
))
elif morphllm_score > 0: # Only suggest Morphllm if it has some score
matches.append(PatternMatch(
pattern_type=PatternType.MCP_SERVER,
pattern_name="morphllm",
confidence=0.7 + (morphllm_score * 0.05),
matched_text="hybrid_intelligence_selection",
suggestions=["Use Morphllm for efficient editing operations with pattern optimization"],
metadata={
"mcp_server": "morphllm",
"selection_reason": "hybrid_intelligence",
"file_count": file_count,
"complexity_score": complexity_score,
"score": morphllm_score
}
))
return matches
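The Morphllm-vs-Serena decision above reduces to a small scoring duel. A hedged sketch using the documented default thresholds (`file_count_max=10`, `complexity_max=0.6`, `file_count_min=5`, `complexity_min=0.4`); the real method also scores preferred operation types loaded from the hybrid-intelligence configuration, which this sketch omits:

```python
def pick_hybrid_server(file_count: int, complexity_score: float):
    """Score both servers against default thresholds; higher score wins.

    Returns 'serena', 'morphllm', or None when neither criterion matches.
    """
    # Booleans add as 0/1, mirroring the incremental scoring above.
    morphllm_score = (file_count <= 10) + (complexity_score <= 0.6)
    serena_score = (file_count >= 5) + (complexity_score >= 0.4)
    if serena_score > morphllm_score:
        return 'serena'
    return 'morphllm' if morphllm_score > 0 else None
```

Small, simple operations fall to Morphllm; large, complex ones to Serena; the overlapping middle band (e.g. 5-10 files at moderate complexity) ties and defaults to Morphllm, matching the `elif morphllm_score > 0` branch above.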
def _detect_complexity_patterns(self, user_input: str, operation_data: Dict[str, Any]) -> List[PatternMatch]:
"""Detect complexity indicators in the request with configurable thresholds."""
matches = []
# Get complexity thresholds from configuration
auto_activation = self.mcp_patterns.get('auto_activation', {})
complexity_thresholds = auto_activation.get('complexity_thresholds', {})
# High complexity indicators from text patterns
high_complexity_patterns = [
(r"(?:entire|whole|complete)\s+(?:codebase|system|application)", 0.4),
(r"(?:refactor|migrate|restructure)\s+(?:all|everything|entire)", 0.35),
(r"(?:architecture|system-wide|comprehensive)\s+(?:change|update|redesign)", 0.3),
(r"(?:complex|complicated|sophisticated)\s+(?:logic|algorithm|system)", 0.25)
]
for pattern, score_boost in high_complexity_patterns:
try:
match = re.search(pattern, user_input, re.IGNORECASE)
if match:
matches.append(PatternMatch(
pattern_type=PatternType.COMPLEXITY_INDICATOR,
pattern_name="high_complexity",
confidence=0.8,
matched_text=match.group(),
suggestions=["Consider delegation and thinking modes"],
metadata={"complexity_level": "high", "score_boost": score_boost}
))
break
except re.error as e:
print(f"Warning: Invalid complexity pattern: {e}")
# File count and operation complexity indicators
file_count = operation_data.get('file_count', 1)
complexity_score = operation_data.get('complexity_score', 0.0)
directory_count = operation_data.get('directory_count', 1)
# Multi-file operation detection with configurable thresholds
delegation_threshold = complexity_thresholds.get('enable_delegation', {})
file_threshold = delegation_threshold.get('file_count', 3)
dir_threshold = delegation_threshold.get('directory_count', 2)
complexity_threshold = delegation_threshold.get('complexity_score', 0.4)
if file_count >= file_threshold:
matches.append(PatternMatch(
pattern_type=PatternType.COMPLEXITY_INDICATOR,
pattern_name="multi_file_operation",
confidence=0.9,
matched_text=f"{file_count}_files",
suggestions=[f"Enable delegation for {file_count}-file operations"],
metadata={
"file_count": file_count,
"delegation_recommended": True,
"threshold_met": "file_count"
}
))
if directory_count >= dir_threshold:
matches.append(PatternMatch(
pattern_type=PatternType.COMPLEXITY_INDICATOR,
pattern_name="multi_directory_operation",
confidence=0.85,
matched_text=f"{directory_count}_directories",
suggestions=[f"Enable delegation for {directory_count}-directory operations"],
metadata={
"directory_count": directory_count,
"delegation_recommended": True,
"threshold_met": "directory_count"
}
))
if complexity_score >= complexity_threshold:
matches.append(PatternMatch(
pattern_type=PatternType.COMPLEXITY_INDICATOR,
pattern_name="high_complexity_score",
confidence=0.8 + min(complexity_score * 0.2, 0.2), # Cap boost at 0.2
matched_text=f"complexity_{complexity_score:.2f}",
suggestions=[f"Enable advanced processing for complexity score {complexity_score:.2f}"],
metadata={
"complexity_score": complexity_score,
"sequential_recommended": complexity_score > 0.6,
"threshold_met": "complexity_score"
}
))
return matches
def _detect_persona_patterns(self, user_input: str, context: Dict[str, Any]) -> List[PatternMatch]:
"""Detect hints about which persona should be active with improved confidence calculation."""
matches = []
# Enhanced persona patterns with confidence weighting
persona_patterns = {
"architect": {
"patterns": [r"(?:architecture|design|structure|system)\s+(?:review|analysis|planning)"],
"base_confidence": 0.75,
"boost_words": ["architecture", "design", "structure", "system", "planning"]
},
"performance": {
"patterns": [r"(?:performance|optimization|speed|efficiency|bottleneck)"],
"base_confidence": 0.8,
"boost_words": ["performance", "optimization", "speed", "efficiency", "bottleneck"]
},
"security": {
"patterns": [r"(?:security|vulnerability|audit|secure|safety)"],
"base_confidence": 0.85,
"boost_words": ["security", "vulnerability", "audit", "secure", "safety"]
},
"frontend": {
"patterns": [r"(?:ui|frontend|interface|component|design|responsive)"],
"base_confidence": 0.75,
"boost_words": ["ui", "frontend", "interface", "component", "responsive"]
},
"backend": {
"patterns": [r"(?:api|server|database|backend|service)"],
"base_confidence": 0.75,
"boost_words": ["api", "server", "database", "backend", "service"]
},
"devops": {
"patterns": [r"(?:deploy|deployment|ci|cd|infrastructure|docker|kubernetes)"],
"base_confidence": 0.8,
"boost_words": ["deploy", "deployment", "infrastructure", "docker", "kubernetes"]
},
"testing": {
"patterns": [r"(?:test|testing|qa|quality|coverage|validation)"],
"base_confidence": 0.75,
"boost_words": ["test", "testing", "qa", "quality", "coverage", "validation"]
}
}
for persona, persona_config in persona_patterns.items():
patterns = persona_config["patterns"]
base_confidence = persona_config["base_confidence"]
boost_words = persona_config["boost_words"]
for pattern in patterns:
try:
match = re.search(pattern, user_input, re.IGNORECASE)
if match:
# Calculate confidence with word-based boosting
confidence = self._calculate_persona_confidence(
base_confidence, match, boost_words, user_input
)
matches.append(PatternMatch(
pattern_type=PatternType.PERSONA_HINT,
pattern_name=persona,
confidence=confidence,
matched_text=match.group(),
suggestions=[f"Consider {persona} persona for specialized expertise"],
metadata={
"persona": persona,
"domain_specific": True,
"base_confidence": base_confidence,
"calculated_confidence": confidence
}
))
break # Stop after first match for this persona
except re.error as e:
print(f"Warning: Invalid persona pattern for {persona}: {e}")
return matches
def _calculate_persona_confidence(self, base_confidence: float, match: re.Match, boost_words: List[str], full_text: str) -> float:
"""Calculate persona confidence with word-based boosting."""
confidence = base_confidence
matched_text = match.group().lower()
full_text_lower = full_text.lower()
# Boost confidence based on additional domain words in the full text
word_count = 0
for word in boost_words:
if word in full_text_lower:
word_count += 1
# Add confidence boost based on domain word density
confidence_boost = min(word_count * 0.05, 0.15) # Cap at 0.15
return min(confidence + confidence_boost, 1.0)
def _calculate_complexity_score(self, matches: List[PatternMatch], operation_data: Dict[str, Any]) -> float:
"""Calculate overall complexity score from detected patterns with proper weighting."""
try:
base_score = operation_data.get('complexity_score', 0.0)
# Add complexity from pattern matches with proper validation
complexity_boost = 0.0
for match in matches:
if match.pattern_type == PatternType.COMPLEXITY_INDICATOR:
score_boost = match.metadata.get('score_boost', 0.1)
if isinstance(score_boost, (int, float)) and 0 <= score_boost <= 1:
complexity_boost += score_boost
# Weight the pattern-based boost to prevent over-scoring
weighted_boost = complexity_boost * 0.7 # Reduce impact of pattern matches
final_score = base_score + weighted_boost
return min(max(final_score, 0.0), 1.0) # Ensure score is between 0 and 1
except Exception as e:
print(f"Warning: Error calculating complexity score: {e}")
return operation_data.get('complexity_score', 0.0)
def _calculate_confidence_score(self, matches: List[PatternMatch]) -> float:
"""Calculate overall confidence in pattern detection with improved weighting."""
try:
if not matches:
return 0.0
# Weight different match types differently
type_weights = {
PatternType.MODE_TRIGGER: 0.3,
PatternType.MCP_SERVER: 0.25,
PatternType.COMPLEXITY_INDICATOR: 0.2,
PatternType.PERSONA_HINT: 0.15,
PatternType.PERFORMANCE_HINT: 0.1
}
weighted_confidence = 0.0
total_weight = 0.0
for match in matches:
weight = type_weights.get(match.pattern_type, 0.1)
weighted_confidence += match.confidence * weight
total_weight += weight
if total_weight == 0:
return 0.0
return min(weighted_confidence / total_weight, 1.0)
except Exception as e:
print(f"Warning: Error calculating confidence score: {e}")
return 0.0
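The aggregation in `_calculate_confidence_score` is a type-weighted mean. An illustrative standalone sketch with the same weights, taking `(pattern_type, confidence)` pairs instead of full `PatternMatch` objects:

```python
# Weights copied from _calculate_confidence_score above; unknown types
# fall back to 0.1, and an empty match list yields 0.0.
TYPE_WEIGHTS = {
    'mode_trigger': 0.3,
    'mcp_server': 0.25,
    'complexity_indicator': 0.2,
    'persona_hint': 0.15,
    'performance_hint': 0.1,
}

def weighted_confidence(matches):
    """matches: iterable of (pattern_type, confidence) pairs."""
    weighted = 0.0
    total_weight = 0.0
    for pattern_type, confidence in matches:
        weight = TYPE_WEIGHTS.get(pattern_type, 0.1)
        weighted += confidence * weight
        total_weight += weight
    # Normalize by the weight actually applied, capped at 1.0.
    return min(weighted / total_weight, 1.0) if total_weight else 0.0
```

Because the result is normalized by the total weight applied, a single high-confidence mode trigger is not diluted by the absence of other pattern types.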
def _get_recommended_modes(self, matches: List[PatternMatch], complexity_score: float) -> List[str]:
"""Get recommended modes based on detected patterns with configuration support."""
modes = set()
try:
# Add modes from pattern matches
for match in matches:
if match.pattern_type == PatternType.MODE_TRIGGER:
# Only add if confidence threshold is met
mode_config = self.mode_configs.get(match.pattern_name, {})
threshold = mode_config.get('confidence_threshold', 0.7)
if match.confidence >= threshold:
modes.add(match.pattern_name)
# Auto-activate modes based on complexity with configurable thresholds
auto_activation = self.mcp_patterns.get('auto_activation', {})
complexity_thresholds = auto_activation.get('complexity_thresholds', {})
# Task management auto-activation
task_mgmt_threshold = complexity_thresholds.get('enable_delegation', {}).get('complexity_score', 0.4)
if complexity_score >= task_mgmt_threshold:
modes.add("task_management")
# Sequential analysis auto-activation
sequential_threshold = complexity_thresholds.get('enable_sequential', {}).get('complexity_score', 0.6)
if complexity_score >= sequential_threshold:
# Don't add sequential as a mode, but note it for MCP server selection
pass
except Exception as e:
print(f"Warning: Error getting recommended modes: {e}")
return list(modes)
def _get_recommended_mcp_servers(self, matches: List[PatternMatch], context: Dict[str, Any]) -> List[str]:
"""Get recommended MCP servers based on detected patterns with priority handling."""
servers = {} # Use dict to track server priorities
try:
for match in matches:
if match.pattern_type == PatternType.MCP_SERVER:
server_config = match.metadata.get('server_config', {})
priority = server_config.get('priority', 'medium')
# Assign numeric priority for sorting
priority_value = {'high': 3, 'medium': 2, 'low': 1}.get(priority, 2)
if match.pattern_name not in servers or servers[match.pattern_name]['priority'] < priority_value:
servers[match.pattern_name] = {
'priority': priority_value,
'confidence': match.confidence
}
# Sort servers by priority, then by confidence
sorted_servers = sorted(
servers.items(),
key=lambda x: (x[1]['priority'], x[1]['confidence']),
reverse=True
)
return [server[0] for server in sorted_servers]
except Exception as e:
print(f"Warning: Error getting recommended MCP servers: {e}")
return []
def _get_suggested_flags(self, matches: List[PatternMatch], complexity_score: float, context: Dict[str, Any]) -> List[str]:
"""Get suggested flags based on patterns and complexity with configuration support."""
flags = []
try:
# Get auto-activation configuration
auto_activation = self.mcp_patterns.get('auto_activation', {})
complexity_thresholds = auto_activation.get('complexity_thresholds', {})
# Mode-specific flags
mode_flags = {
'brainstorming': ['--brainstorm'],
'task_management': ['--delegate', '--wave-mode'],
'token_efficiency': ['--uc'],
'introspection': ['--introspect']
}
# Add flags based on detected modes
for match in matches:
if match.pattern_type == PatternType.MODE_TRIGGER:
mode_name = match.pattern_name
if mode_name in mode_flags:
flags.extend(mode_flags[mode_name])
# MCP server-specific flags (thinking flags based on server matches)
mcp_thinking_flags = {
'sequential': '--think',
'serena': '--think-hard'
}
# Add thinking flags based on MCP server matches
for match in matches:
if match.pattern_type == PatternType.MCP_SERVER:
server_name = match.pattern_name
if server_name in mcp_thinking_flags:
thinking_flag = mcp_thinking_flags[server_name]
if thinking_flag not in flags:
flags.append(thinking_flag)
# Check for performance analysis patterns (special case for think-hard)
for match in matches:
if match.pattern_type == PatternType.MCP_SERVER and match.pattern_name == 'sequential':
matched_text = match.matched_text.lower()
if 'performance' in matched_text or 'bottleneck' in matched_text or 'bundle' in matched_text:
# Replace --think with --think-hard for performance analysis
if '--think' in flags:
flags.remove('--think')
if '--think-hard' not in flags:
flags.append('--think-hard')
# Thinking flags based on complexity with configurable thresholds
sequential_config = complexity_thresholds.get('enable_sequential', {})
sequential_threshold = sequential_config.get('complexity_score', 0.6)
# Only add complexity-based thinking flags if no MCP-based flags were added
if not any(flag.startswith('--think') for flag in flags):
if complexity_score >= 0.8:
flags.append("--ultrathink")
elif complexity_score >= sequential_threshold:
flags.append("--think-hard")
elif complexity_score >= 0.3:
flags.append("--think")
# Delegation flags from pattern matches
delegation_recommended = False
for match in matches:
if match.metadata.get("delegation_recommended"):
delegation_recommended = True
break
if delegation_recommended and "--delegate" not in flags:
flags.append("--delegate")
# Efficiency flags based on patterns and context
efficiency_needed = False
resource_usage = context.get('resource_usage_percent', 0)
for match in matches:
if match.metadata.get("compression_needed"):
efficiency_needed = True
break
# Check resource thresholds from configuration
resource_mgmt = self.mcp_patterns.get('performance_optimization', {}).get('resource_management', {})
token_threshold = resource_mgmt.get('token_threshold_percent', 75)
if (efficiency_needed or resource_usage > token_threshold) and "--uc" not in flags:
flags.append("--uc")
# Validation flags for high-risk operations
validation_config = complexity_thresholds.get('enable_validation', {})
is_production = context.get('is_production', False)
risk_level = context.get('risk_level', 'low')
validation_needed = (
complexity_score > 0.7 or
is_production or
risk_level in validation_config.get('risk_level', ['high', 'critical'])
)
if validation_needed:
flags.append("--validate")
except Exception as e:
print(f"Warning: Error getting suggested flags: {e}")
return flags
def validate_input(self, user_input: str, context: Dict[str, Any], operation_data: Dict[str, Any]) -> Tuple[bool, str]:
"""Validate input parameters for pattern detection."""
try:
# Validate user_input
if not isinstance(user_input, str):
return False, "user_input must be a string"
if len(user_input.strip()) == 0:
return False, "user_input cannot be empty"
# Validate context
if not isinstance(context, dict):
return False, "context must be a dictionary"
# Validate operation_data
if not isinstance(operation_data, dict):
return False, "operation_data must be a dictionary"
# Validate numeric values in context
resource_usage = context.get('resource_usage_percent', 0)
if not isinstance(resource_usage, (int, float)) or not (0 <= resource_usage <= 100):
context['resource_usage_percent'] = 0
# Validate numeric values in operation_data
file_count = operation_data.get('file_count', 1)
if not isinstance(file_count, int) or file_count < 0:
operation_data['file_count'] = 1
complexity_score = operation_data.get('complexity_score', 0.0)
if not isinstance(complexity_score, (int, float)) or not (0 <= complexity_score <= 1):
operation_data['complexity_score'] = 0.0
return True, "Input validation passed"
except Exception as e:
return False, f"Input validation error: {e}"
def get_pattern_statistics(self) -> Dict[str, Any]:
"""Get statistics about compiled patterns for debugging."""
try:
stats = {
'total_patterns': len(self.compiled_patterns),
'mode_patterns': len([k for k in self.compiled_patterns.keys() if k.startswith('mode_')]),
'mcp_patterns': len([k for k in self.compiled_patterns.keys() if k.startswith('mcp_')]),
'mode_configs_loaded': len(self.mode_configs),
'mcp_configs_loaded': len(self.mcp_configs),
'pattern_details': {}
}
# Detailed pattern statistics
for pattern_key, compiled_patterns in self.compiled_patterns.items():
stats['pattern_details'][pattern_key] = {
'pattern_count': len(compiled_patterns),
'patterns': [p.pattern for p in compiled_patterns]
}
return stats
except Exception as e:
return {'error': f"Failed to generate pattern statistics: {e}"}
def reset_patterns(self):
"""Reset and reload all patterns from configuration."""
try:
self.__init__()
return True
except Exception as e:
print(f"Warning: Failed to reset patterns: {e}")
return False
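The flag-selection logic above maps a 0.0-1.0 complexity score onto thinking flags via thresholds. A minimal standalone sketch of that mapping; the 0.8 and 0.3 cut-offs are the hard-coded defaults from the code above, and the middle threshold comes from the `enable_sequential` config (0.6 by default) — these values are taken from this diff, not a public API:

```python
from typing import Optional

def suggest_thinking_flag(complexity_score: float,
                          sequential_threshold: float = 0.6) -> Optional[str]:
    """Map a 0.0-1.0 complexity score to a thinking flag, or None."""
    if complexity_score >= 0.8:
        return "--ultrathink"
    if complexity_score >= sequential_threshold:
        return "--think-hard"
    if complexity_score >= 0.3:
        return "--think"
    return None
```

Note the checks run highest-first, so a score of 0.9 never falls through to the weaker flags.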
@@ -1,763 +0,0 @@
#!/usr/bin/env python3
"""
YAML-Driven System Validation Engine for SuperClaude Framework-Hooks
Intelligent validation system that consumes declarative YAML patterns from
validation_intelligence.yaml for health scoring, proactive diagnostics, and
predictive analysis.
Features:
- YAML-driven validation patterns (hot-reloadable)
- Health scoring with weighted components
- Proactive diagnostic pattern matching
- Predictive health analysis
- Automated remediation suggestions
- Continuous validation cycles
"""
import os
import json
import time
import statistics
import sys
import argparse
from pathlib import Path
from typing import Dict, Any, List, Tuple, Optional
from dataclasses import dataclass, asdict
from enum import Enum
# Import our YAML intelligence infrastructure
from yaml_loader import config_loader
from intelligence_engine import IntelligenceEngine
class ValidationSeverity(Enum):
"""Validation issue severity levels."""
INFO = "info"
LOW = "low"
MEDIUM = "medium"
HIGH = "high"
CRITICAL = "critical"
class HealthStatus(Enum):
"""System health status levels."""
HEALTHY = "healthy"
WARNING = "warning"
CRITICAL = "critical"
UNKNOWN = "unknown"
@dataclass
class ValidationIssue:
"""Represents a validation issue found by the system."""
component: str
issue_type: str
severity: ValidationSeverity
description: str
evidence: List[str]
recommendations: List[str]
remediation_action: Optional[str] = None
auto_fixable: bool = False
timestamp: float = 0.0
def __post_init__(self):
if self.timestamp == 0.0:
self.timestamp = time.time()
@dataclass
class HealthScore:
"""Health score for a system component."""
component: str
score: float # 0.0 to 1.0
status: HealthStatus
contributing_factors: List[str]
trend: str # improving, stable, degrading
last_updated: float = 0.0
def __post_init__(self):
if self.last_updated == 0.0:
self.last_updated = time.time()
@dataclass
class DiagnosticResult:
"""Result of diagnostic analysis."""
component: str
diagnosis: str
confidence: float
symptoms: List[str]
root_cause: Optional[str]
recommendations: List[str]
predicted_impact: str
timeline: str
class YAMLValidationEngine:
"""
YAML-driven validation engine that consumes intelligence patterns.
Features:
- Hot-reloadable YAML validation patterns
- Component-based health scoring
- Proactive diagnostic pattern matching
- Predictive health analysis
- Intelligent remediation suggestions
"""
def __init__(self, framework_root: Path, fix_issues: bool = False):
self.framework_root = Path(framework_root)
self.fix_issues = fix_issues
self.cache_dir = self.framework_root / "cache"
self.config_dir = self.framework_root / "config"
# Initialize intelligence engine for YAML patterns
self.intelligence_engine = IntelligenceEngine()
# Validation state
self.issues: List[ValidationIssue] = []
self.fixes_applied: List[str] = []
self.health_scores: Dict[str, HealthScore] = {}
self.diagnostic_results: List[DiagnosticResult] = []
# Load validation intelligence patterns
self.validation_patterns = self._load_validation_patterns()
def _load_validation_patterns(self) -> Dict[str, Any]:
"""Load validation patterns from YAML intelligence configuration."""
try:
patterns = config_loader.get_validation_health_config()
return patterns if patterns else {}
except Exception as e:
print(f"Warning: Could not load validation patterns: {e}")
return {}
def validate_all(self) -> Tuple[List[ValidationIssue], List[str], Dict[str, HealthScore]]:
"""
Run comprehensive YAML-driven validation.
Returns:
Tuple of (issues, fixes_applied, health_scores)
"""
print("🔍 Starting YAML-driven framework validation...")
# Clear previous state
self.issues.clear()
self.fixes_applied.clear()
self.health_scores.clear()
self.diagnostic_results.clear()
# Get current system context
context = self._gather_system_context()
# Run validation intelligence analysis
validation_intelligence = self.intelligence_engine.evaluate_context(
context, 'validation_intelligence'
)
# Core component validations using YAML patterns
self._validate_learning_system(context, validation_intelligence)
self._validate_performance_system(context, validation_intelligence)
self._validate_mcp_coordination(context, validation_intelligence)
self._validate_hook_system(context, validation_intelligence)
self._validate_configuration_system(context, validation_intelligence)
self._validate_cache_system(context, validation_intelligence)
# Run proactive diagnostics
self._run_proactive_diagnostics(context)
# Calculate overall health score
self._calculate_overall_health_score()
# Generate remediation recommendations
self._generate_remediation_suggestions()
return self.issues, self.fixes_applied, self.health_scores
def _gather_system_context(self) -> Dict[str, Any]:
"""Gather current system context for validation analysis."""
context = {
'timestamp': time.time(),
'framework_root': str(self.framework_root),
'cache_directory_exists': self.cache_dir.exists(),
'config_directory_exists': self.config_dir.exists(),
}
# Learning system context
learning_records_path = self.cache_dir / "learning_records.json"
if learning_records_path.exists():
try:
with open(learning_records_path, 'r') as f:
records = json.load(f)
context['learning_records_count'] = len(records)
if records:
context['recent_learning_activity'] = len([
r for r in records
if r.get('timestamp', 0) > time.time() - 86400 # Last 24h
])
except Exception:
context['learning_records_count'] = 0
context['recent_learning_activity'] = 0
# Adaptations context
adaptations_path = self.cache_dir / "adaptations.json"
if adaptations_path.exists():
try:
with open(adaptations_path, 'r') as f:
adaptations = json.load(f)
context['adaptations_count'] = len(adaptations)
# Calculate effectiveness statistics
all_effectiveness = []
for adaptation in adaptations.values():
history = adaptation.get('effectiveness_history', [])
all_effectiveness.extend(history)
if all_effectiveness:
context['average_effectiveness'] = statistics.mean(all_effectiveness)
context['effectiveness_variance'] = statistics.variance(all_effectiveness) if len(all_effectiveness) > 1 else 0
context['perfect_score_count'] = sum(1 for score in all_effectiveness if score == 1.0)
except Exception:
context['adaptations_count'] = 0
# Configuration files context
yaml_files = list(self.config_dir.glob("*.yaml")) if self.config_dir.exists() else []
context['yaml_config_count'] = len(yaml_files)
context['intelligence_patterns_available'] = len([
f for f in yaml_files
if f.name in ['intelligence_patterns.yaml', 'mcp_orchestration.yaml',
'hook_coordination.yaml', 'performance_intelligence.yaml',
'validation_intelligence.yaml', 'user_experience.yaml']
])
return context
def _validate_learning_system(self, context: Dict[str, Any], intelligence: Dict[str, Any]):
"""Validate learning system using YAML patterns."""
print("📊 Validating learning system...")
component_weight = self.validation_patterns.get('component_weights', {}).get('learning_system', 0.25)
scoring_metrics = self.validation_patterns.get('scoring_metrics', {}).get('learning_system', {})
issues = []
score_factors = []
# Pattern diversity validation
adaptations_count = context.get('adaptations_count', 0)
if adaptations_count > 0:
# Simplified diversity calculation
diversity_score = min(adaptations_count / 50.0, 0.95) # Cap at 0.95
pattern_diversity_config = scoring_metrics.get('pattern_diversity', {})
healthy_range = pattern_diversity_config.get('healthy_range', [0.6, 0.95])
if diversity_score < healthy_range[0]:
issues.append(ValidationIssue(
component="learning_system",
issue_type="pattern_diversity",
severity=ValidationSeverity.MEDIUM,
description=f"Pattern diversity low: {diversity_score:.2f}",
evidence=[f"Only {adaptations_count} unique patterns learned"],
recommendations=["Expose system to more diverse operational patterns"]
))
score_factors.append(diversity_score)
# Effectiveness consistency validation
effectiveness_variance = context.get('effectiveness_variance', 0)
if 'effectiveness_variance' in context:
consistency_score = max(0, 1.0 - effectiveness_variance)
effectiveness_config = scoring_metrics.get('effectiveness_consistency', {})
healthy_range = effectiveness_config.get('healthy_range', [0.7, 0.9])
if consistency_score < healthy_range[0]:
issues.append(ValidationIssue(
component="learning_system",
issue_type="effectiveness_consistency",
severity=ValidationSeverity.LOW,
description=f"Effectiveness variance high: {effectiveness_variance:.3f}",
evidence=[f"Effectiveness consistency score: {consistency_score:.2f}"],
recommendations=["Review learning patterns for instability"]
))
score_factors.append(consistency_score)
# Perfect score detection (overfitting indicator)
perfect_scores = context.get('perfect_score_count', 0)
total_effectiveness_records = context.get('adaptations_count', 0) * 3 # Rough estimate
if total_effectiveness_records > 0 and perfect_scores / total_effectiveness_records > 0.3:
issues.append(ValidationIssue(
component="learning_system",
issue_type="potential_overfitting",
severity=ValidationSeverity.MEDIUM,
description=f"High proportion of perfect scores: {perfect_scores}/{total_effectiveness_records}",
evidence=[f"Perfect score ratio: {perfect_scores/total_effectiveness_records:.1%}"],
recommendations=[
"Review learning patterns for overfitting",
"Add noise to prevent overconfident patterns"
],
remediation_action="automatic_pattern_diversification"
))
# Calculate health score
component_health = statistics.mean(score_factors) if score_factors else 0.5
health_status = (
HealthStatus.HEALTHY if component_health >= 0.8 else
HealthStatus.WARNING if component_health >= 0.6 else
HealthStatus.CRITICAL
)
self.health_scores['learning_system'] = HealthScore(
component='learning_system',
score=component_health,
status=health_status,
contributing_factors=["pattern_diversity", "effectiveness_consistency"],
trend="stable" # Would need historical data to determine trend
)
self.issues.extend(issues)
def _validate_performance_system(self, context: Dict[str, Any], intelligence: Dict[str, Any]):
"""Validate performance system using YAML patterns."""
print("⚡ Validating performance system...")
# This would integrate with actual performance metrics
# For now, provide basic validation based on available data
issues = []
score_factors = []
# Check for performance-related files and configurations
perf_score = 0.8 # Default assuming healthy
# Cache size validation (proxy for memory efficiency)
if self.cache_dir.exists():
cache_size = sum(f.stat().st_size for f in self.cache_dir.rglob('*') if f.is_file())
cache_size_mb = cache_size / (1024 * 1024)
if cache_size_mb > 10: # > 10MB cache
issues.append(ValidationIssue(
component="performance_system",
issue_type="cache_size_large",
severity=ValidationSeverity.LOW,
description=f"Cache size is large: {cache_size_mb:.1f}MB",
evidence=[f"Total cache size: {cache_size_mb:.1f}MB"],
recommendations=["Consider cache cleanup policies"],
remediation_action="aggressive_cache_cleanup"
))
perf_score -= 0.1
score_factors.append(perf_score)
self.health_scores['performance_system'] = HealthScore(
component='performance_system',
score=statistics.mean(score_factors) if score_factors else 0.8,
status=HealthStatus.HEALTHY,
contributing_factors=["cache_efficiency", "resource_utilization"],
trend="stable"
)
self.issues.extend(issues)
def _validate_mcp_coordination(self, context: Dict[str, Any], intelligence: Dict[str, Any]):
"""Validate MCP coordination system using YAML patterns."""
print("🔗 Validating MCP coordination...")
issues = []
score = 0.8 # Default healthy score
# Check MCP orchestration patterns availability
mcp_patterns_available = 'mcp_orchestration.yaml' in [
f.name for f in self.config_dir.glob("*.yaml")
] if self.config_dir.exists() else False
if not mcp_patterns_available:
issues.append(ValidationIssue(
component="mcp_coordination",
issue_type="missing_orchestration_patterns",
severity=ValidationSeverity.MEDIUM,
description="MCP orchestration patterns not available",
evidence=["mcp_orchestration.yaml not found"],
recommendations=["Ensure MCP orchestration patterns are configured"]
))
score -= 0.2
self.health_scores['mcp_coordination'] = HealthScore(
component='mcp_coordination',
score=score,
status=HealthStatus.HEALTHY if score >= 0.8 else HealthStatus.WARNING,
contributing_factors=["pattern_availability", "server_selection_accuracy"],
trend="stable"
)
self.issues.extend(issues)
def _validate_hook_system(self, context: Dict[str, Any], intelligence: Dict[str, Any]):
"""Validate hook system using YAML patterns."""
print("🎣 Validating hook system...")
issues = []
score = 0.8
# Check hook coordination patterns
hook_patterns_available = 'hook_coordination.yaml' in [
f.name for f in self.config_dir.glob("*.yaml")
] if self.config_dir.exists() else False
if not hook_patterns_available:
issues.append(ValidationIssue(
component="hook_system",
issue_type="missing_coordination_patterns",
severity=ValidationSeverity.MEDIUM,
description="Hook coordination patterns not available",
evidence=["hook_coordination.yaml not found"],
recommendations=["Ensure hook coordination patterns are configured"]
))
score -= 0.2
self.health_scores['hook_system'] = HealthScore(
component='hook_system',
score=score,
status=HealthStatus.HEALTHY if score >= 0.8 else HealthStatus.WARNING,
contributing_factors=["coordination_patterns", "execution_efficiency"],
trend="stable"
)
self.issues.extend(issues)
def _validate_configuration_system(self, context: Dict[str, Any], intelligence: Dict[str, Any]):
"""Validate configuration system using YAML patterns."""
print("📝 Validating configuration system...")
issues = []
score_factors = []
# Check YAML configuration files
expected_intelligence_files = [
'intelligence_patterns.yaml',
'mcp_orchestration.yaml',
'hook_coordination.yaml',
'performance_intelligence.yaml',
'validation_intelligence.yaml',
'user_experience.yaml'
]
available_files = [f.name for f in self.config_dir.glob("*.yaml")] if self.config_dir.exists() else []
missing_files = [f for f in expected_intelligence_files if f not in available_files]
if missing_files:
issues.append(ValidationIssue(
component="configuration_system",
issue_type="missing_intelligence_configs",
severity=ValidationSeverity.HIGH,
description=f"Missing {len(missing_files)} intelligence configuration files",
evidence=[f"Missing files: {', '.join(missing_files)}"],
recommendations=["Ensure all intelligence pattern files are available"]
))
score_factors.append(0.5)
else:
score_factors.append(0.9)
# Validate YAML syntax
yaml_issues = 0
if self.config_dir.exists():
for yaml_file in self.config_dir.glob("*.yaml"):
try:
config_loader.load_config(yaml_file.stem)
except Exception as e:
yaml_issues += 1
issues.append(ValidationIssue(
component="configuration_system",
issue_type="yaml_syntax_error",
severity=ValidationSeverity.HIGH,
description=f"YAML syntax error in {yaml_file.name}",
evidence=[f"Error: {str(e)}"],
recommendations=[f"Fix YAML syntax in {yaml_file.name}"]
))
syntax_score = max(0, 1.0 - yaml_issues * 0.2)
score_factors.append(syntax_score)
overall_score = statistics.mean(score_factors) if score_factors else 0.5
self.health_scores['configuration_system'] = HealthScore(
component='configuration_system',
score=overall_score,
status=HealthStatus.HEALTHY if overall_score >= 0.8 else
HealthStatus.WARNING if overall_score >= 0.6 else
HealthStatus.CRITICAL,
contributing_factors=["file_availability", "yaml_syntax", "intelligence_patterns"],
trend="stable"
)
self.issues.extend(issues)
def _validate_cache_system(self, context: Dict[str, Any], intelligence: Dict[str, Any]):
"""Validate cache system using YAML patterns."""
print("💾 Validating cache system...")
issues = []
score = 0.8
if not self.cache_dir.exists():
issues.append(ValidationIssue(
component="cache_system",
issue_type="cache_directory_missing",
severity=ValidationSeverity.HIGH,
description="Cache directory does not exist",
evidence=[f"Path not found: {self.cache_dir}"],
recommendations=["Initialize cache directory"],
auto_fixable=True,
remediation_action="create_cache_directory"
))
score = 0.3
else:
# Validate essential cache files
essential_files = ['learning_records.json', 'adaptations.json']
missing_essential = []
for essential_file in essential_files:
file_path = self.cache_dir / essential_file
if not file_path.exists():
missing_essential.append(essential_file)
if missing_essential:
issues.append(ValidationIssue(
component="cache_system",
issue_type="missing_essential_cache_files",
severity=ValidationSeverity.MEDIUM,
description=f"Missing essential cache files: {', '.join(missing_essential)}",
evidence=[f"Missing files in {self.cache_dir}"],
recommendations=["Initialize missing cache files"],
auto_fixable=True
))
score -= 0.1 * len(missing_essential)
self.health_scores['cache_system'] = HealthScore(
component='cache_system',
score=score,
status=HealthStatus.HEALTHY if score >= 0.8 else
HealthStatus.WARNING if score >= 0.6 else
HealthStatus.CRITICAL,
contributing_factors=["directory_existence", "essential_files"],
trend="stable"
)
self.issues.extend(issues)
def _run_proactive_diagnostics(self, context: Dict[str, Any]):
"""Run proactive diagnostic pattern matching from YAML."""
print("🔮 Running proactive diagnostics...")
# Get early warning patterns from YAML
early_warning_patterns = self.validation_patterns.get(
'proactive_diagnostics', {}
).get('early_warning_patterns', {})
# Check learning system warnings
learning_warnings = early_warning_patterns.get('learning_system_warnings', [])
for warning_pattern in learning_warnings:
if self._matches_warning_pattern(context, warning_pattern):
severity_map = {
'low': ValidationSeverity.LOW,
'medium': ValidationSeverity.MEDIUM,
'high': ValidationSeverity.HIGH,
'critical': ValidationSeverity.CRITICAL
}
self.issues.append(ValidationIssue(
component="learning_system",
issue_type=warning_pattern.get('name', 'unknown_warning'),
severity=severity_map.get(warning_pattern.get('severity', 'medium'), ValidationSeverity.MEDIUM),
description=f"Proactive warning: {warning_pattern.get('name')}",
evidence=[f"Pattern matched: {warning_pattern.get('pattern', {})}"],
recommendations=[warning_pattern.get('recommendation', 'Review system state')],
remediation_action=warning_pattern.get('remediation')
))
# Similar checks for performance and coordination warnings would go here
def _matches_warning_pattern(self, context: Dict[str, Any], warning_pattern: Dict[str, Any]) -> bool:
"""Check if current context matches a warning pattern."""
pattern_conditions = warning_pattern.get('pattern', {})
for key, expected_value in pattern_conditions.items():
if key not in context:
continue
context_value = context[key]
# Handle string comparisons with operators
if isinstance(expected_value, str):
if expected_value.startswith('>'):
threshold = float(expected_value[1:])
if not (isinstance(context_value, (int, float)) and context_value > threshold):
return False
elif expected_value.startswith('<'):
threshold = float(expected_value[1:])
if not (isinstance(context_value, (int, float)) and context_value < threshold):
return False
else:
if context_value != expected_value:
return False
else:
if context_value != expected_value:
return False
return True
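The matcher above treats string values beginning with `>` or `<` as numeric thresholds and everything else as an equality check, skipping keys absent from the context. A self-contained sketch of the same rule set:

```python
def matches_pattern(context: dict, conditions: dict) -> bool:
    """True if every present condition key matches; absent keys are skipped."""
    for key, expected in conditions.items():
        if key not in context:
            continue
        value = context[key]
        if isinstance(expected, str) and expected[:1] in ("<", ">"):
            try:
                threshold = float(expected[1:])
            except ValueError:
                return False
            if not isinstance(value, (int, float)):
                return False
            if expected[0] == ">" and not value > threshold:
                return False
            if expected[0] == "<" and not value < threshold:
                return False
        elif value != expected:
            return False
    return True
```

Skipping missing keys means an empty context matches any pattern — the same (permissive) behavior as the method above.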
def _calculate_overall_health_score(self):
"""Calculate overall system health score using YAML component weights."""
component_weights = self.validation_patterns.get('component_weights', {
'learning_system': 0.25,
'performance_system': 0.20,
'mcp_coordination': 0.20,
'hook_system': 0.15,
'configuration_system': 0.10,
'cache_system': 0.10
})
weighted_score = 0.0
total_weight = 0.0
for component, weight in component_weights.items():
if component in self.health_scores:
weighted_score += self.health_scores[component].score * weight
total_weight += weight
overall_score = weighted_score / total_weight if total_weight > 0 else 0.0
overall_status = (
HealthStatus.HEALTHY if overall_score >= 0.8 else
HealthStatus.WARNING if overall_score >= 0.6 else
HealthStatus.CRITICAL
)
self.health_scores['overall'] = HealthScore(
component='overall_system',
score=overall_score,
status=overall_status,
contributing_factors=list(component_weights.keys()),
trend="stable"
)
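The weighted overall score above renormalizes over whichever components actually reported a health score, so a missing component does not drag the average down. The arithmetic, extracted as a sketch:

```python
def overall_health(scores: dict, weights: dict) -> float:
    """Weighted mean of per-component scores, renormalized over present components."""
    weighted = sum(scores[c] * w for c, w in weights.items() if c in scores)
    total = sum(w for c, w in weights.items() if c in scores)
    return weighted / total if total else 0.0
```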
def _generate_remediation_suggestions(self):
"""Generate intelligent remediation suggestions based on issues found."""
auto_fixable_issues = [issue for issue in self.issues if issue.auto_fixable]
if auto_fixable_issues and self.fix_issues:
for issue in auto_fixable_issues:
if issue.remediation_action == "create_cache_directory":
try:
self.cache_dir.mkdir(parents=True, exist_ok=True)
self.fixes_applied.append(f"✅ Created cache directory: {self.cache_dir}")
except Exception as e:
print(f"Failed to create cache directory: {e}")
def print_results(self, verbose: bool = False):
"""Print comprehensive validation results."""
print("\n" + "="*70)
print("🎯 YAML-DRIVEN VALIDATION RESULTS")
print("="*70)
# Overall health score
overall_health = self.health_scores.get('overall')
if overall_health:
status_emoji = {
HealthStatus.HEALTHY: "🟢",
HealthStatus.WARNING: "🟡",
HealthStatus.CRITICAL: "🔴",
HealthStatus.UNKNOWN: ""
}
print(f"\n{status_emoji.get(overall_health.status, '')} Overall Health Score: {overall_health.score:.2f}/1.0 ({overall_health.status.value})")
# Component health scores
if verbose and len(self.health_scores) > 1:
print(f"\n📊 Component Health Scores:")
for component, health in self.health_scores.items():
if component != 'overall':
status_emoji = {
HealthStatus.HEALTHY: "🟢",
HealthStatus.WARNING: "🟡",
HealthStatus.CRITICAL: "🔴"
}
print(f" {status_emoji.get(health.status, '')} {component}: {health.score:.2f}")
# Issues found
if not self.issues:
print("\n✅ All validations passed! System appears healthy.")
else:
severity_counts = {}
for issue in self.issues:
severity_counts[issue.severity] = severity_counts.get(issue.severity, 0) + 1
print(f"\n🔍 Found {len(self.issues)} issues:")
for severity in [ValidationSeverity.CRITICAL, ValidationSeverity.HIGH,
ValidationSeverity.MEDIUM, ValidationSeverity.LOW, ValidationSeverity.INFO]:
if severity in severity_counts:
severity_emoji = {
ValidationSeverity.CRITICAL: "🚨",
ValidationSeverity.HIGH: "⚠️ ",
ValidationSeverity.MEDIUM: "🟡",
ValidationSeverity.LOW: " ",
ValidationSeverity.INFO: "💡"
}
print(f" {severity_emoji.get(severity, '')} {severity.value.title()}: {severity_counts[severity]}")
if verbose:
print(f"\n📋 Detailed Issues:")
severity_rank = {ValidationSeverity.CRITICAL: 0, ValidationSeverity.HIGH: 1,
ValidationSeverity.MEDIUM: 2, ValidationSeverity.LOW: 3, ValidationSeverity.INFO: 4}
for issue in sorted(self.issues, key=lambda x: severity_rank.get(x.severity, 5)):
print(f"\n{issue.component}/{issue.issue_type} ({issue.severity.value})")
print(f" {issue.description}")
if issue.evidence:
print(f" Evidence: {'; '.join(issue.evidence)}")
if issue.recommendations:
print(f" Recommendations: {'; '.join(issue.recommendations)}")
# Fixes applied
if self.fixes_applied:
print(f"\n🔧 Applied {len(self.fixes_applied)} fixes:")
for fix in self.fixes_applied:
print(f" {fix}")
print("\n" + "="*70)
def main():
"""Main entry point for YAML-driven validation."""
parser = argparse.ArgumentParser(
description="YAML-driven Framework-Hooks validation engine"
)
parser.add_argument("--fix", action="store_true",
help="Attempt to fix auto-fixable issues")
parser.add_argument("--verbose", action="store_true",
help="Verbose output with detailed results")
parser.add_argument("--framework-root",
default=".",
help="Path to Framework-Hooks directory")
args = parser.parse_args()
framework_root = Path(args.framework_root).resolve()
if not framework_root.exists():
print(f"❌ Framework root directory not found: {framework_root}")
sys.exit(1)
# Initialize YAML-driven validation engine
validator = YAMLValidationEngine(framework_root, args.fix)
# Run comprehensive validation
issues, fixes, health_scores = validator.validate_all()
# Print results
validator.print_results(args.verbose)
# Exit with health score as return code (0 = perfect, higher = issues)
overall_health = health_scores.get('overall')
health_score = overall_health.score if overall_health else 0.0
exit_code = max(0, min(10, int((1.0 - health_score) * 10))) # 0-10 range
sys.exit(exit_code)
if __name__ == "__main__":
main()
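`main()` encodes the health score into the process exit status: a perfect 1.0 exits 0, total failure exits 10, clamped to that range. Pulled out as a one-liner for clarity:

```python
def health_exit_code(health_score: float) -> int:
    """Map a 0.0-1.0 health score onto a 0-10 exit code (0 = perfectly healthy)."""
    return max(0, min(10, int((1.0 - health_score) * 10)))
```

Clamping first against 10 and then against 0 means out-of-range scores still produce a valid exit code.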
@@ -1,422 +0,0 @@
"""
Unified Configuration Loader for SuperClaude-Lite
High-performance configuration loading with support for both JSON and YAML formats,
caching, hot-reload capabilities, and comprehensive error handling.
Supports:
- Claude Code settings.json (JSON format)
- SuperClaude superclaude-config.json (JSON format)
- YAML configuration files
- Unified configuration interface for hooks
"""
import os
import json
import yaml
import time
import hashlib
from typing import Dict, Any, Optional, Union
from pathlib import Path
class UnifiedConfigLoader:
"""
Intelligent configuration loader with support for JSON and YAML formats.
Features:
- Dual-configuration support (Claude Code + SuperClaude)
- File modification detection for hot-reload
- In-memory caching for performance (<10ms access)
- Comprehensive error handling and validation
- Environment variable interpolation
- Include/merge support for modular configs
- Unified configuration interface
"""
def __init__(self, project_root: Union[str, Path]):
self.project_root = Path(project_root)
self.config_dir = self.project_root / "config"
# Configuration file paths
self.claude_settings_path = self.project_root / "settings.json"
self.superclaude_config_path = self.project_root / "superclaude-config.json"
# Cache for all configuration sources
self._cache: Dict[str, Dict[str, Any]] = {}
self._file_hashes: Dict[str, str] = {}
self._last_check: Dict[str, float] = {}
self.check_interval = 1.0 # Check files every 1 second max
# Configuration source registry
self._config_sources = {
'claude_settings': self.claude_settings_path,
'superclaude_config': self.superclaude_config_path
}
def load_config(self, config_name: str, force_reload: bool = False) -> Dict[str, Any]:
"""
Load configuration with intelligent caching (supports JSON and YAML).
Args:
config_name: Name of config file or special config identifier
- For YAML: config file name without .yaml extension
- For JSON: 'claude_settings' or 'superclaude_config'
force_reload: Force reload even if cached
Returns:
Parsed configuration dictionary
Raises:
FileNotFoundError: If config file doesn't exist
ValueError: If config parsing fails
"""
# Handle special configuration sources
if config_name in self._config_sources:
return self._load_json_config(config_name, force_reload)
# Handle YAML configuration files
config_path = self.config_dir / f"{config_name}.yaml"
if not config_path.exists():
raise FileNotFoundError(f"Configuration file not found: {config_path}")
# Check if we need to reload
if not force_reload and self._should_use_cache(config_name, config_path):
return self._cache[config_name]
# Load and parse the YAML configuration
try:
with open(config_path, 'r', encoding='utf-8') as f:
content = f.read()
# Environment variable interpolation
content = self._interpolate_env_vars(content)
# Parse YAML
config = yaml.safe_load(content)
# Handle includes/merges
config = self._process_includes(config, config_path.parent)
# Update cache
self._cache[config_name] = config
self._file_hashes[config_name] = self._compute_hash(config_path)
self._last_check[config_name] = time.time()
return config
except yaml.YAMLError as e:
raise ValueError(f"YAML parsing error in {config_path}: {e}")
except Exception as e:
raise RuntimeError(f"Error loading config {config_name}: {e}")
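`_interpolate_env_vars` is called above but its body falls outside this hunk. A typical implementation substitutes `${VAR}` references from the environment and leaves unknown variables untouched — sketched here as an assumption about what the missing method does:

```python
import os
import re

_ENV_VAR = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def interpolate_env_vars(content: str) -> str:
    """Replace ${VAR} with os.environ[VAR]; unknown variables are left as-is."""
    return _ENV_VAR.sub(lambda m: os.environ.get(m.group(1), m.group(0)), content)
```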
def _load_json_config(self, config_name: str, force_reload: bool = False) -> Dict[str, Any]:
"""Load JSON configuration file."""
config_path = self._config_sources[config_name]
if not config_path.exists():
raise FileNotFoundError(f"Configuration file not found: {config_path}")
# Check if we need to reload
if not force_reload and self._should_use_cache(config_name, config_path):
return self._cache[config_name]
# Load and parse the JSON configuration
try:
with open(config_path, 'r', encoding='utf-8') as f:
content = f.read()
# Environment variable interpolation
content = self._interpolate_env_vars(content)
# Parse JSON
config = json.loads(content)
# Update cache
self._cache[config_name] = config
self._file_hashes[config_name] = self._compute_hash(config_path)
self._last_check[config_name] = time.time()
return config
except json.JSONDecodeError as e:
raise ValueError(f"JSON parsing error in {config_path}: {e}")
except Exception as e:
raise RuntimeError(f"Error loading JSON config {config_name}: {e}")
def get_section(self, config_name: str, section_path: str, default: Any = None) -> Any:
"""
Get specific section from configuration using dot notation.
Args:
config_name: Configuration file name or identifier
section_path: Dot-separated path (e.g., 'routing.ui_components')
default: Default value if section not found
Returns:
Configuration section value or default
"""
config = self.load_config(config_name)
try:
result = config
for key in section_path.split('.'):
result = result[key]
return result
except (KeyError, TypeError):
return default
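The dot-notation lookup above walks the parsed config one key at a time and falls back to a default on any miss, including type mismatches partway down the path. As a standalone function (named `lookup_section` here to distinguish it from the method):

```python
def lookup_section(config: dict, section_path: str, default=None):
    """Resolve a dot-separated path like 'routing.ui_components' in a nested dict."""
    result = config
    try:
        for key in section_path.split("."):
            result = result[key]
        return result
    except (KeyError, TypeError):
        return default
```

Catching `TypeError` as well as `KeyError` covers the case where an intermediate value is a list or scalar rather than a dict.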
def get_hook_config(self, hook_name: str, section_path: str = None, default: Any = None) -> Any:
"""
Get hook-specific configuration from SuperClaude config.
Args:
hook_name: Hook name (e.g., 'session_start', 'pre_tool_use')
section_path: Optional dot-separated path within hook config
default: Default value if not found
Returns:
Hook configuration or specific section
"""
base_path = f"hook_configurations.{hook_name}"
if section_path:
full_path = f"{base_path}.{section_path}"
else:
full_path = base_path
return self.get_section('superclaude_config', full_path, default)
def get_claude_hooks(self) -> Dict[str, Any]:
"""Get Claude Code hook definitions from settings.json."""
return self.get_section('claude_settings', 'hooks', {})
def get_superclaude_config(self, section_path: str = None, default: Any = None) -> Any:
"""
Get SuperClaude framework configuration.
Args:
section_path: Optional dot-separated path (e.g., 'global_configuration.performance_monitoring')
default: Default value if not found
Returns:
Configuration section or full config if no path specified
"""
if section_path:
return self.get_section('superclaude_config', section_path, default)
else:
return self.load_config('superclaude_config')
def get_mcp_server_config(self, server_name: str = None) -> Dict[str, Any]:
"""
Get MCP server configuration.
Args:
server_name: Optional specific server name
Returns:
MCP server configuration
"""
if server_name:
return self.get_section('superclaude_config', f'mcp_server_integration.servers.{server_name}', {})
else:
return self.get_section('superclaude_config', 'mcp_server_integration', {})
def get_performance_targets(self) -> Dict[str, Any]:
"""Get performance targets for all components."""
return self.get_section('superclaude_config', 'global_configuration.performance_monitoring', {})
def is_hook_enabled(self, hook_name: str) -> bool:
"""Check if a specific hook is enabled."""
return self.get_hook_config(hook_name, 'enabled', False)
def reload_all(self) -> None:
"""Force reload of all cached configurations."""
for config_name in list(self._cache.keys()):
self.load_config(config_name, force_reload=True)
def _should_use_cache(self, config_name: str, config_path: Path) -> bool:
"""Check if cached version is still valid."""
if config_name not in self._cache:
return False
# Rate limit file checks
now = time.time()
if now - self._last_check.get(config_name, 0) < self.check_interval:
return True
# Check if file changed
current_hash = self._compute_hash(config_path)
return current_hash == self._file_hashes.get(config_name)
def _compute_hash(self, file_path: Path) -> str:
"""Compute file hash for change detection."""
stat = file_path.stat()
return hashlib.md5(f"{stat.st_mtime}:{stat.st_size}".encode()).hexdigest()
def _interpolate_env_vars(self, content: str) -> str:
"""Replace environment variables in YAML content."""
import re
def replace_env_var(match):
var_name = match.group(1)
default_value = match.group(2) if match.group(2) else ""
return os.getenv(var_name, default_value)
# Support ${VAR} and ${VAR:default} syntax
pattern = r'\$\{([^}:]+)(?::([^}]*))?\}'
return re.sub(pattern, replace_env_var, content)
def _process_includes(self, config: Dict[str, Any], base_dir: Path) -> Dict[str, Any]:
"""Process include directives in configuration."""
if not isinstance(config, dict):
return config
# Handle special include key
if '__include__' in config:
includes = config.pop('__include__')
if isinstance(includes, str):
includes = [includes]
for include_file in includes:
include_path = base_dir / include_file
if include_path.exists():
with open(include_path, 'r', encoding='utf-8') as f:
included_config = yaml.safe_load(f.read())
if isinstance(included_config, dict):
# Merge included config (current config takes precedence)
included_config.update(config)
config = included_config
return config
def get_intelligence_config(self, intelligence_type: str, section_path: str = None, default: Any = None) -> Any:
"""
Get intelligence configuration from YAML patterns.
Args:
intelligence_type: Type of intelligence config (e.g., 'intelligence_patterns', 'mcp_orchestration')
section_path: Optional dot-separated path within intelligence config
default: Default value if not found
Returns:
Intelligence configuration or specific section
"""
try:
config = self.load_config(intelligence_type)
if section_path:
result = config
for key in section_path.split('.'):
result = result[key]
return result
else:
return config
except (FileNotFoundError, KeyError, TypeError):
return default
def get_pattern_dimensions(self) -> Dict[str, Any]:
"""Get pattern recognition dimensions from intelligence patterns."""
return self.get_intelligence_config(
'intelligence_patterns',
'learning_intelligence.pattern_recognition.dimensions',
{'primary': ['context_type', 'complexity_score', 'operation_type'], 'secondary': []}
)
def get_mcp_orchestration_rules(self) -> Dict[str, Any]:
"""Get MCP server orchestration rules."""
return self.get_intelligence_config(
'mcp_orchestration',
'server_selection.decision_tree',
[]
)
def get_hook_coordination_patterns(self) -> Dict[str, Any]:
"""Get hook coordination execution patterns."""
return self.get_intelligence_config(
'hook_coordination',
'execution_patterns',
{}
)
def get_performance_zones(self) -> Dict[str, Any]:
"""Get performance management resource zones."""
return self.get_intelligence_config(
'performance_intelligence',
'resource_management.resource_zones',
{}
)
def get_validation_health_config(self) -> Dict[str, Any]:
"""Get validation and health scoring configuration."""
return self.get_intelligence_config(
'validation_intelligence',
'health_scoring',
{}
)
def get_ux_project_patterns(self) -> Dict[str, Any]:
"""Get user experience project detection patterns."""
return self.get_intelligence_config(
'user_experience',
'project_detection.detection_patterns',
{}
)
def get_intelligence_summary(self) -> Dict[str, Any]:
"""Get summary of all available intelligence configurations."""
intelligence_types = [
'intelligence_patterns',
'mcp_orchestration',
'hook_coordination',
'performance_intelligence',
'validation_intelligence',
'user_experience'
]
summary = {}
for intelligence_type in intelligence_types:
try:
config = self.load_config(intelligence_type)
summary[intelligence_type] = {
'loaded': True,
'version': config.get('version', 'unknown'),
'last_updated': config.get('last_updated', 'unknown'),
'sections': list(config.keys()) if isinstance(config, dict) else []
}
except Exception:
summary[intelligence_type] = {
'loaded': False,
'error': 'Failed to load configuration'
}
return summary
def reload_intelligence_configs(self) -> Dict[str, bool]:
"""Force reload all intelligence configurations and return status."""
intelligence_types = [
'intelligence_patterns',
'mcp_orchestration',
'hook_coordination',
'performance_intelligence',
'validation_intelligence',
'user_experience'
]
reload_status = {}
for intelligence_type in intelligence_types:
try:
self.load_config(intelligence_type, force_reload=True)
reload_status[intelligence_type] = True
except Exception as e:
reload_status[intelligence_type] = False
print(f"Warning: Could not reload {intelligence_type}: {e}")
return reload_status
# Global instance for shared use across hooks.
# Use the Claude installation directory instead of the current working directory.
config_loader = UnifiedConfigLoader(os.path.expanduser("~/.claude"))
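The loader's two core mechanics — dot-notation section lookup and `${VAR:default}` interpolation — can be exercised standalone. This is a minimal sketch mirroring the logic of `get_section` and `_interpolate_env_vars` (function names and sample data are illustrative, not the class itself):

```python
import os
import re

def get_section(config: dict, section_path: str, default=None):
    """Walk a nested dict using dot notation, e.g. 'routing.ui_components'."""
    result = config
    try:
        for key in section_path.split('.'):
            result = result[key]
        return result
    except (KeyError, TypeError):
        return default

def interpolate_env_vars(content: str) -> str:
    """Expand ${VAR} and ${VAR:default} placeholders from the environment."""
    pattern = r'\$\{([^}:]+)(?::([^}]*))?\}'
    return re.sub(pattern, lambda m: os.getenv(m.group(1), m.group(2) or ""), content)

config = {"routing": {"ui_components": ["magic"]}}
print(get_section(config, "routing.ui_components"))   # ['magic']
print(get_section(config, "routing.missing", "n/a"))  # n/a

os.environ["SC_HOME"] = "/tmp/claude"
print(interpolate_env_vars("home=${SC_HOME} port=${SC_PORT:8080}"))  # home=/tmp/claude port=8080
```

Missing sections fall back to the default rather than raising, which is why callers like `get_hook_config` can safely probe for optional configuration.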


@@ -1,804 +0,0 @@
#!/usr/bin/env python3
"""
SuperClaude-Lite Stop Hook
Implements session analytics + /sc:save logic with performance tracking.
Performance target: <200ms execution time.
This hook runs at session end and provides:
- Comprehensive session analytics and performance metrics
- Learning consolidation and adaptation updates
- Session persistence with intelligent compression
- Performance optimization recommendations
- Quality assessment and improvement suggestions
"""
import sys
import json
import time
import os
from pathlib import Path
from typing import Dict, Any, List, Optional
import statistics
# Add shared modules to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "shared"))
from framework_logic import FrameworkLogic
from pattern_detection import PatternDetector
from mcp_intelligence import MCPIntelligence
from compression_engine import CompressionEngine
from learning_engine import LearningEngine, LearningType, AdaptationScope
from yaml_loader import config_loader
from logger import log_hook_start, log_hook_end, log_decision, log_error
class StopHook:
"""
Stop hook implementing session analytics and persistence.
Responsibilities:
- Analyze session performance and effectiveness
- Consolidate learning events and adaptations
- Generate comprehensive session analytics
- Implement intelligent session persistence
- Provide optimization recommendations for future sessions
- Track SuperClaude framework effectiveness metrics
"""
def __init__(self):
start_time = time.time()
# Initialize core components
self.framework_logic = FrameworkLogic()
self.pattern_detector = PatternDetector()
self.mcp_intelligence = MCPIntelligence()
self.compression_engine = CompressionEngine()
# Initialize learning engine with installation-directory cache
cache_dir = Path(os.path.expanduser("~/.claude/cache"))
cache_dir.mkdir(parents=True, exist_ok=True)
self.learning_engine = LearningEngine(cache_dir)
# Load hook-specific configuration from SuperClaude config
self.hook_config = config_loader.get_hook_config('stop')
# Load session configuration (from YAML if exists, otherwise use hook config)
try:
self.session_config = config_loader.load_config('session')
except FileNotFoundError:
# Fall back to hook configuration if YAML file not found
self.session_config = self.hook_config.get('configuration', {})
# Performance tracking using configuration
self.initialization_time = (time.time() - start_time) * 1000
self.performance_target_ms = config_loader.get_hook_config('stop', 'performance_target_ms', 200)
# Store cache directory reference
self._cache_dir = cache_dir
def process_session_stop(self, session_data: dict) -> dict:
"""
Process session stop with analytics and persistence.
Args:
session_data: Session termination data from Claude Code
Returns:
Session analytics report with learning insights and persistence status
"""
start_time = time.time()
# Log hook start
log_hook_start("stop", {
"session_id": session_data.get('session_id', ''),
"session_duration_ms": session_data.get('duration_ms', 0),
"operations_count": len(session_data.get('operations', [])),
"errors_count": len(session_data.get('errors', [])),
"superclaude_enabled": session_data.get('superclaude_enabled', False)
})
try:
# Extract session context
context = self._extract_session_context(session_data)
# Analyze session performance
performance_analysis = self._analyze_session_performance(context)
# Log performance analysis results
log_decision(
"stop",
"performance_analysis",
f"{performance_analysis['overall_score']:.2f}",
f"Productivity: {context.get('session_productivity', 0):.2f}, Errors: {context.get('error_rate', 0):.2f}, Bottlenecks: {', '.join(performance_analysis['bottlenecks_identified'])}"
)
# Consolidate learning events
learning_consolidation = self._consolidate_learning_events(context)
# Generate session analytics
session_analytics = self._generate_session_analytics(
context, performance_analysis, learning_consolidation
)
# Perform session persistence
persistence_result = self._perform_session_persistence(context, session_analytics)
# Log persistence results
if persistence_result['persistence_enabled']:
log_decision(
"stop",
"session_persistence",
"saved",
f"Analytics saved: {persistence_result['analytics_saved']}, Compression: {persistence_result['compression_applied']}"
)
# Generate recommendations
recommendations = self._generate_recommendations(
context, performance_analysis, learning_consolidation
)
# Log recommendations generated
total_recommendations = sum(len(recs) for recs in recommendations.values())
if total_recommendations > 0:
log_decision(
"stop",
"recommendations_generated",
str(total_recommendations),
f"Categories: {', '.join(k for k, v in recommendations.items() if v)}"
)
# Create final learning events
self._create_final_learning_events(context, session_analytics)
# Generate session report
session_report = self._generate_session_report(
context, session_analytics, persistence_result, recommendations
)
# Performance tracking
execution_time = (time.time() - start_time) * 1000
session_report['performance_metrics'] = {
'stop_processing_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'total_session_efficiency': self._calculate_session_efficiency(session_analytics)
}
# Log hook end with success
log_hook_end(
"stop",
int(execution_time),
True,
{
"session_score": session_analytics['performance_metrics']['overall_score'],
"superclaude_effectiveness": session_analytics['superclaude_effectiveness']['effectiveness_score'],
"learning_insights": session_analytics['learning_summary']['insights_generated'],
"recommendations": total_recommendations,
"performance_target_met": execution_time < self.performance_target_ms
}
)
return session_report
except Exception as e:
# Log error
log_error("stop", str(e), {"session_data": session_data})
# Log hook end with failure
log_hook_end("stop", int((time.time() - start_time) * 1000), False)
# Graceful fallback on error
return self._create_fallback_report(session_data, str(e))
def _get_current_session_id(self) -> str:
"""Get current session ID from cache."""
try:
session_id_file = self._cache_dir / "session_id"
if session_id_file.exists():
return session_id_file.read_text().strip()
except Exception:
pass
return ""
def _parse_session_activity(self, session_id: str) -> dict:
"""Parse session activity from log files."""
tools_used = set()
operations_count = 0
mcp_tools_used = set()
if not session_id:
return {
"operations_completed": 0,
"tools_utilized": [],
"unique_tools_count": 0,
"mcp_tools_used": []
}
# Parse current log file
log_file_path = self._cache_dir / "logs" / f"superclaude-lite-{time.strftime('%Y-%m-%d')}.log"
try:
if log_file_path.exists():
with open(log_file_path, 'r') as f:
for line in f:
try:
log_entry = json.loads(line.strip())
# Only process entries for current session
if log_entry.get('session') != session_id:
continue
# Count operations from pre_tool_use hook
if (log_entry.get('hook') == 'pre_tool_use' and
log_entry.get('event') == 'start'):
operations_count += 1
tool_name = log_entry.get('data', {}).get('tool_name', '')
if tool_name:
tools_used.add(tool_name)
# Track MCP tools separately
if tool_name.startswith('mcp__'):
mcp_tools_used.add(tool_name)
except (json.JSONDecodeError, KeyError):
# Skip malformed log entries
continue
except Exception as e:
# Log error but don't fail
log_error("stop", f"Failed to parse session activity: {str(e)}", {"session_id": session_id})
return {
"operations_completed": operations_count,
"tools_utilized": list(tools_used),
"unique_tools_count": len(tools_used),
"mcp_tools_used": list(mcp_tools_used)
}
def _extract_session_context(self, session_data: dict) -> dict:
"""Extract and enrich session context."""
# Get current session ID
current_session_id = self._get_current_session_id()
# Try to load session context from current session file
session_context = {}
if current_session_id:
session_file_path = self._cache_dir / f"session_{current_session_id}.json"
try:
if session_file_path.exists():
with open(session_file_path, 'r') as f:
session_context = json.load(f)
except Exception as e:
# Log error but continue with fallback
log_error("stop", f"Failed to load session context: {str(e)}", {"session_id": current_session_id})
# Parse session activity from logs
activity_data = self._parse_session_activity(current_session_id)
# Create context with session file data (if available) or fallback to session_data
context = {
'session_id': session_context.get('session_id', current_session_id or session_data.get('session_id', '')),
'session_duration_ms': session_data.get('duration_ms', 0), # This comes from hook system
'session_start_time': session_context.get('initialization_timestamp', session_data.get('start_time', 0)),
'session_end_time': time.time(),
'operations_performed': activity_data.get('tools_utilized', []), # Actual tools used from logs
'tools_used': activity_data.get('tools_utilized', []), # Actual tools used from logs
'mcp_servers_activated': session_context.get('mcp_servers', {}).get('enabled_servers', []),
'errors_encountered': session_data.get('errors', []),
'user_interactions': session_data.get('user_interactions', []),
'resource_usage': session_data.get('resource_usage', {}),
'quality_metrics': session_data.get('quality_metrics', {}),
'superclaude_enabled': session_context.get('superclaude_enabled', False),
# Add parsed activity metrics
'operation_count': activity_data.get('operations_completed', 0),
'unique_tools_count': activity_data.get('unique_tools_count', 0),
'mcp_tools_used': activity_data.get('mcp_tools_used', [])
}
# Calculate derived metrics
context.update(self._calculate_derived_metrics(context))
return context
def _calculate_derived_metrics(self, context: dict) -> dict:
"""Calculate derived session metrics."""
operations = context.get('operations_performed', [])
tools = context.get('tools_used', [])
return {
'operation_count': len(operations),
'unique_tools_count': len(set(tools)),
'error_rate': len(context.get('errors_encountered', [])) / max(len(operations), 1),
'mcp_usage_ratio': len(context.get('mcp_servers_activated', [])) / max(len(operations), 1),
'session_productivity': self._calculate_productivity_score(context),
'superclaude_effectiveness': self._calculate_superclaude_effectiveness(context)
}
def _calculate_productivity_score(self, context: dict) -> float:
"""Calculate session productivity score (0.0 to 1.0)."""
operations = context.get('operations_performed', [])
errors = context.get('errors_encountered', [])
duration_ms = context.get('session_duration_ms', 1)
if not operations:
return 0.0
# Base productivity from operation completion
completion_rate = (len(operations) - len(errors)) / len(operations)
# Time efficiency (operations per minute)
duration_minutes = duration_ms / (1000 * 60)
operations_per_minute = len(operations) / max(duration_minutes, 0.1)
# Normalize operations per minute (assume 5 ops/min is very productive)
time_efficiency = min(operations_per_minute / 5.0, 1.0)
# Combined productivity score
productivity = (completion_rate * 0.7) + (time_efficiency * 0.3)
return min(productivity, 1.0)
def _calculate_superclaude_effectiveness(self, context: dict) -> float:
"""Calculate SuperClaude framework effectiveness score."""
if not context.get('superclaude_enabled'):
return 0.0
# Factors that indicate SuperClaude effectiveness
factors = []
# MCP server utilization
mcp_ratio = context.get('mcp_usage_ratio', 0)
factors.append(min(mcp_ratio * 2, 1.0)) # More MCP usage = better intelligence
# Error reduction (assume SuperClaude reduces errors)
error_rate = context.get('error_rate', 0)
error_effectiveness = max(1.0 - (error_rate * 2), 0.0)
factors.append(error_effectiveness)
# Productivity enhancement
productivity = context.get('session_productivity', 0)
factors.append(productivity)
# Quality metrics if available
quality_metrics = context.get('quality_metrics', {})
if quality_metrics:
avg_quality = statistics.mean(quality_metrics.values())
factors.append(avg_quality)
return statistics.mean(factors) if factors else 0.5
def _analyze_session_performance(self, context: dict) -> dict:
"""Analyze overall session performance."""
performance_analysis = {
'overall_score': 0.0,
'performance_categories': {},
'bottlenecks_identified': [],
'optimization_opportunities': [],
'performance_trends': {}
}
# Overall performance scoring
productivity = context.get('session_productivity', 0)
effectiveness = context.get('superclaude_effectiveness', 0)
error_rate = context.get('error_rate', 0)
performance_analysis['overall_score'] = (
productivity * 0.4 +
effectiveness * 0.4 +
(1.0 - error_rate) * 0.2
)
# Category-specific performance
performance_analysis['performance_categories'] = {
'productivity': productivity,
'quality': 1.0 - error_rate,
'intelligence_utilization': context.get('mcp_usage_ratio', 0),
'resource_efficiency': self._calculate_resource_efficiency(context),
'user_satisfaction_estimate': self._estimate_user_satisfaction(context)
}
# Identify bottlenecks
if error_rate > 0.2:
performance_analysis['bottlenecks_identified'].append('high_error_rate')
if productivity < 0.5:
performance_analysis['bottlenecks_identified'].append('low_productivity')
if context.get('mcp_usage_ratio', 0) < 0.3 and context.get('superclaude_enabled'):
performance_analysis['bottlenecks_identified'].append('underutilized_intelligence')
log_decision(
"stop",
"intelligence_utilization",
"low",
f"MCP usage ratio: {context.get('mcp_usage_ratio', 0):.2f}, SuperClaude enabled but underutilized"
)
# Optimization opportunities
if context.get('unique_tools_count', 0) > 10:
performance_analysis['optimization_opportunities'].append('tool_usage_optimization')
if len(context.get('mcp_servers_activated', [])) < 2 and context.get('operation_count', 0) > 5:
performance_analysis['optimization_opportunities'].append('mcp_server_coordination')
return performance_analysis
def _calculate_resource_efficiency(self, context: dict) -> float:
"""Calculate resource usage efficiency."""
resource_usage = context.get('resource_usage', {})
if not resource_usage:
return 0.8 # Assume good efficiency if no data
# Extract resource metrics
memory_usage = resource_usage.get('memory_percent', 50)
cpu_usage = resource_usage.get('cpu_percent', 50)
token_usage = resource_usage.get('token_percent', 50)
# Efficiency is inversely related to usage (but some usage is good)
memory_efficiency = 1.0 - max((memory_usage - 60) / 40, 0) # Penalty above 60%
cpu_efficiency = 1.0 - max((cpu_usage - 70) / 30, 0) # Penalty above 70%
token_efficiency = 1.0 - max((token_usage - 75) / 25, 0) # Penalty above 75%
return (memory_efficiency + cpu_efficiency + token_efficiency) / 3
def _estimate_user_satisfaction(self, context: dict) -> float:
"""Estimate user satisfaction based on session metrics."""
satisfaction_factors = []
# Low error rate increases satisfaction
error_rate = context.get('error_rate', 0)
satisfaction_factors.append(1.0 - error_rate)
# High productivity increases satisfaction
productivity = context.get('session_productivity', 0)
satisfaction_factors.append(productivity)
# SuperClaude effectiveness increases satisfaction
if context.get('superclaude_enabled'):
effectiveness = context.get('superclaude_effectiveness', 0)
satisfaction_factors.append(effectiveness)
# Session duration factor (not too short, not too long)
duration_minutes = context.get('session_duration_ms', 0) / (1000 * 60)
if duration_minutes > 0:
# Optimal session length is 15-60 minutes
if 15 <= duration_minutes <= 60:
duration_satisfaction = 1.0
elif duration_minutes < 15:
duration_satisfaction = duration_minutes / 15
else:
duration_satisfaction = max(1.0 - (duration_minutes - 60) / 120, 0.3)
satisfaction_factors.append(duration_satisfaction)
return statistics.mean(satisfaction_factors) if satisfaction_factors else 0.5
def _consolidate_learning_events(self, context: dict) -> dict:
"""Consolidate learning events from the session."""
learning_consolidation = {
'total_learning_events': 0,
'learning_categories': {},
'adaptations_created': 0,
'effectiveness_feedback': [],
'learning_insights': []
}
# Generate learning insights from session
insights = self.learning_engine.generate_learning_insights()
learning_consolidation['learning_insights'] = [
{
'insight_type': insight.insight_type,
'description': insight.description,
'confidence': insight.confidence,
'impact_score': insight.impact_score
}
for insight in insights
]
# Session-specific learning
session_learning = {
'session_effectiveness': context.get('superclaude_effectiveness', 0),
'performance_score': context.get('session_productivity', 0),
'mcp_coordination_effectiveness': min(context.get('mcp_usage_ratio', 0) * 2, 1.0),
'error_recovery_success': 1.0 - context.get('error_rate', 0)
}
# Record session learning
self.learning_engine.record_learning_event(
LearningType.EFFECTIVENESS_FEEDBACK,
AdaptationScope.SESSION,
context,
session_learning,
context.get('superclaude_effectiveness', 0),
0.9,
{'hook': 'stop', 'session_end': True}
)
learning_consolidation['total_learning_events'] = 1 + len(insights)
return learning_consolidation
def _generate_session_analytics(self, context: dict, performance_analysis: dict,
learning_consolidation: dict) -> dict:
"""Generate comprehensive session analytics."""
analytics = {
'session_summary': {
'session_id': context['session_id'],
'duration_minutes': context.get('session_duration_ms', 0) / (1000 * 60),
'operations_completed': context.get('operation_count', 0),
'tools_utilized': context.get('unique_tools_count', 0),
'mcp_servers_used': len(context.get('mcp_servers_activated', [])),
'superclaude_enabled': context.get('superclaude_enabled', False)
},
'performance_metrics': {
'overall_score': performance_analysis['overall_score'],
'productivity_score': context.get('session_productivity', 0),
'quality_score': 1.0 - context.get('error_rate', 0),
'efficiency_score': performance_analysis['performance_categories'].get('resource_efficiency', 0),
'satisfaction_estimate': performance_analysis['performance_categories'].get('user_satisfaction_estimate', 0)
},
'superclaude_effectiveness': {
'framework_enabled': context.get('superclaude_enabled', False),
'effectiveness_score': context.get('superclaude_effectiveness', 0),
'intelligence_utilization': context.get('mcp_usage_ratio', 0),
'learning_events_generated': learning_consolidation['total_learning_events'],
'adaptations_created': learning_consolidation['adaptations_created']
},
'quality_analysis': {
'error_rate': context.get('error_rate', 0),
'operation_success_rate': 1.0 - context.get('error_rate', 0),
'bottlenecks': performance_analysis['bottlenecks_identified'],
'optimization_opportunities': performance_analysis['optimization_opportunities']
},
'learning_summary': {
'insights_generated': len(learning_consolidation['learning_insights']),
'key_insights': learning_consolidation['learning_insights'][:3], # Top 3 insights
'learning_effectiveness': statistics.mean([
insight['confidence'] * insight['impact_score']
for insight in learning_consolidation['learning_insights']
]) if learning_consolidation['learning_insights'] else 0.0
},
'resource_utilization': context.get('resource_usage', {}),
'session_metadata': {
'start_time': context.get('session_start_time', 0),
'end_time': context.get('session_end_time', 0),
'framework_version': '1.0.0',
'analytics_version': 'stop_1.0'
}
}
return analytics
def _perform_session_persistence(self, context: dict, session_analytics: dict) -> dict:
"""Perform intelligent session persistence."""
persistence_result = {
'persistence_enabled': True,
'session_data_saved': False,
'analytics_saved': False,
'learning_data_saved': False,
'compression_applied': False,
'storage_optimized': False
}
try:
# Save session analytics
analytics_data = json.dumps(session_analytics, indent=2)
# Measure compression for large session data (ratio is recorded; the uncompressed JSON is persisted below)
if len(analytics_data) > 10000: # 10KB threshold
compression_result = self.compression_engine.compress_content(
analytics_data,
context,
{'content_type': 'session_data'}
)
persistence_result['compression_applied'] = True
persistence_result['compression_ratio'] = compression_result.compression_ratio
# Persist analytics to the session cache file
cache_dir = Path(os.path.expanduser("~/.claude/cache"))
cache_dir.mkdir(parents=True, exist_ok=True)
session_file = cache_dir / f"session_{context['session_id']}.json"
with open(session_file, 'w') as f:
f.write(analytics_data)
persistence_result['session_data_saved'] = True
persistence_result['analytics_saved'] = True
# Learning data is automatically saved by learning engine
persistence_result['learning_data_saved'] = True
# Optimize storage by cleaning old sessions
self._cleanup_old_sessions(cache_dir)
persistence_result['storage_optimized'] = True
except Exception as e:
persistence_result['error'] = str(e)
persistence_result['persistence_enabled'] = False
return persistence_result
def _cleanup_old_sessions(self, cache_dir: Path):
"""Clean up old session files to optimize storage."""
session_files = list(cache_dir.glob("session_*.json"))
# Keep only the most recent 50 sessions
if len(session_files) > 50:
session_files.sort(key=lambda f: f.stat().st_mtime, reverse=True)
for old_file in session_files[50:]:
try:
old_file.unlink()
except OSError:
pass  # Ignore cleanup errors
def _generate_recommendations(self, context: dict, performance_analysis: dict,
learning_consolidation: dict) -> dict:
"""Generate recommendations for future sessions."""
recommendations = {
'performance_improvements': [],
'superclaude_optimizations': [],
'learning_suggestions': [],
'workflow_enhancements': []
}
# Performance recommendations
if performance_analysis['overall_score'] < 0.7:
recommendations['performance_improvements'].extend([
'Focus on reducing error rate through validation',
'Consider enabling more SuperClaude intelligence features',
'Optimize tool selection and usage patterns'
])
# SuperClaude optimization recommendations
if context.get('superclaude_enabled') and context.get('superclaude_effectiveness', 0) < 0.6:
recommendations['superclaude_optimizations'].extend([
'Enable more MCP servers for better intelligence',
'Use delegation features for complex operations',
'Activate compression for resource optimization'
])
elif not context.get('superclaude_enabled'):
recommendations['superclaude_optimizations'].append(
'Consider enabling SuperClaude framework for enhanced productivity'
)
# Learning suggestions
if learning_consolidation['total_learning_events'] < 3:
recommendations['learning_suggestions'].append(
'Engage with more complex operations to improve system learning'
)
# Workflow enhancements
if context.get('error_rate', 0) > 0.1:
recommendations['workflow_enhancements'].extend([
'Use validation hooks to catch errors early',
'Enable pre-tool-use intelligence for better routing'
])
return recommendations
def _create_final_learning_events(self, context: dict, session_analytics: dict):
"""Create final learning events for the session."""
# Record overall session effectiveness
self.learning_engine.record_learning_event(
LearningType.USER_PREFERENCE,
AdaptationScope.USER,
context,
{
'session_pattern': 'completion',
'satisfaction_score': session_analytics['performance_metrics']['satisfaction_estimate'],
'productivity_achieved': session_analytics['performance_metrics']['productivity_score'],
'superclaude_usage': context.get('superclaude_enabled', False)
},
session_analytics['performance_metrics']['overall_score'],
1.0, # High confidence in final session metrics
{'hook': 'stop', 'final_learning': True}
)
def _calculate_session_efficiency(self, session_analytics: dict) -> float:
"""Calculate overall session efficiency score."""
performance_metrics = session_analytics.get('performance_metrics', {})
efficiency_components = [
performance_metrics.get('productivity_score', 0),
performance_metrics.get('quality_score', 0),
performance_metrics.get('efficiency_score', 0),
session_analytics.get('superclaude_effectiveness', {}).get('effectiveness_score', 0)
]
positive_components = [comp for comp in efficiency_components if comp > 0]
return statistics.mean(positive_components) if positive_components else 0.0
def _generate_session_report(self, context: dict, session_analytics: dict,
persistence_result: dict, recommendations: dict) -> dict:
"""Generate final session report."""
return {
'session_id': context['session_id'],
'session_completed': True,
'completion_timestamp': context.get('session_end_time', time.time()),
'analytics': session_analytics,
'persistence': persistence_result,
'recommendations': recommendations,
'summary': {
'session_success': session_analytics['performance_metrics']['overall_score'] > 0.6,
'superclaude_effective': session_analytics['superclaude_effectiveness']['effectiveness_score'] > 0.6,
'learning_achieved': session_analytics['learning_summary']['insights_generated'] > 0,
'recommendations_generated': sum(len(recs) for recs in recommendations.values()) > 0
},
'next_session_preparation': {
'enable_superclaude': True,
'suggested_optimizations': recommendations.get('superclaude_optimizations', [])[:2],
'learning_focus_areas': [insight['insight_type'] for insight in
session_analytics['learning_summary']['key_insights']]
},
'metadata': {
'hook_version': 'stop_1.0',
'report_timestamp': time.time(),
'analytics_comprehensive': True
}
}
def _create_fallback_report(self, session_data: dict, error: str) -> dict:
"""Create fallback session report on error."""
return {
'session_id': session_data.get('session_id', 'unknown'),
'session_completed': False,
'error': error,
'fallback_mode': True,
'analytics': {
'session_summary': {
'session_id': session_data.get('session_id', 'unknown'),
'error_occurred': True
},
'performance_metrics': {
'overall_score': 0.0
}
},
'persistence': {
'persistence_enabled': False,
'error': error
},
'performance_metrics': {
'stop_processing_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
def main():
"""Main hook execution function."""
try:
# Read session data from stdin
session_data = json.loads(sys.stdin.read())
# Initialize and run hook
hook = StopHook()
result = hook.process_session_stop(session_data)
# Output result as JSON
print(json.dumps(result, indent=2))
except Exception as e:
# Output error as JSON
error_result = {
'session_completed': False,
'error': str(e),
'fallback_mode': True
}
print(json.dumps(error_result, indent=2))
sys.exit(1)
if __name__ == "__main__":
main()
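Every hook in this directory follows the same contract: Claude Code pipes a JSON payload into the process on stdin and reads a JSON report back from stdout, with errors also serialized as JSON rather than raised. A minimal sketch of that round trip, using a stand-in handler that mirrors the fallback path (the payload fields are illustrative, not the full schema):

```python
import json

def run_hook(handler, payload: dict) -> dict:
    """Simulate the stdin -> handler -> stdout JSON round trip used by every hook."""
    raw_in = json.dumps(payload)                      # what Claude Code writes to the hook's stdin
    result = handler(json.loads(raw_in))              # the hook's main(): parse, then process
    return json.loads(json.dumps(result, indent=2))   # what the caller reads back from stdout

def fallback_handler(session_data: dict) -> dict:
    # Mirrors the hook's error path: always emit valid JSON, never raise.
    return {
        'session_id': session_data.get('session_id', 'unknown'),
        'session_completed': False,
        'fallback_mode': True,
    }

report = run_hook(fallback_handler, {'session_id': 'abc123'})
```

Because the output is plain JSON on stdout, the real hooks can be exercised from a shell with `echo '{...}' | python stop.py` without any Claude Code involvement.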


@@ -1,771 +0,0 @@
#!/usr/bin/env python3
"""
SuperClaude-Lite Subagent Stop Hook
Implements MODE_Task_Management delegation coordination and analytics.
Performance target: <150ms execution time.
This hook runs when subagents complete tasks and provides:
- Subagent performance analytics and coordination metrics
- Task delegation effectiveness measurement
- Cross-agent learning and adaptation
- Wave orchestration optimization
- Parallel execution performance tracking
"""
import sys
import json
import time
import os
from pathlib import Path
from typing import Dict, Any, List, Optional
import statistics
# Add shared modules to path
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "shared"))
from framework_logic import FrameworkLogic
from pattern_detection import PatternDetector
from mcp_intelligence import MCPIntelligence
from compression_engine import CompressionEngine
from learning_engine import LearningEngine, LearningType, AdaptationScope
from yaml_loader import config_loader
from logger import log_hook_start, log_hook_end, log_decision, log_error
class SubagentStopHook:
"""
Subagent stop hook implementing task management coordination.
Responsibilities:
- Analyze subagent task completion and performance
- Measure delegation effectiveness and coordination success
- Learn from parallel execution patterns
- Optimize wave orchestration strategies
- Coordinate cross-agent knowledge sharing
- Track task management framework effectiveness
"""
def __init__(self):
start_time = time.time()
# Initialize core components
self.framework_logic = FrameworkLogic()
self.pattern_detector = PatternDetector()
self.mcp_intelligence = MCPIntelligence()
self.compression_engine = CompressionEngine()
# Initialize learning engine with installation directory cache
cache_dir = Path(os.path.expanduser("~/.claude/cache"))
cache_dir.mkdir(parents=True, exist_ok=True)
self.learning_engine = LearningEngine(cache_dir)
# Load task management configuration
self.task_config = config_loader.get_section('session', 'task_management', {})
# Load hook-specific configuration from SuperClaude config
self.hook_config = config_loader.get_hook_config('subagent_stop')
# Performance tracking using configuration
self.initialization_time = (time.time() - start_time) * 1000
self.performance_target_ms = config_loader.get_hook_config('subagent_stop', 'performance_target_ms', 150)
def process_subagent_stop(self, subagent_data: dict) -> dict:
"""
Process subagent completion with coordination analytics.
Args:
subagent_data: Subagent completion data from Claude Code
Returns:
Coordination analytics with delegation effectiveness and optimization insights
"""
start_time = time.time()
# Log hook start
log_hook_start("subagent_stop", {
"subagent_id": subagent_data.get('subagent_id', ''),
"task_id": subagent_data.get('task_id', ''),
"task_type": subagent_data.get('task_type', 'unknown'),
"delegation_strategy": subagent_data.get('delegation_strategy', 'unknown'),
"parallel_tasks": len(subagent_data.get('parallel_tasks', [])),
"wave_context": subagent_data.get('wave_context', {})
})
try:
# Extract subagent context
context = self._extract_subagent_context(subagent_data)
# Analyze task completion performance
task_analysis = self._analyze_task_completion(context)
# Log task completion analysis
log_decision(
"subagent_stop",
"task_completion",
"completed" if task_analysis['completion_success'] else "failed",
f"Quality: {task_analysis['completion_quality']:.2f}, Efficiency: {task_analysis['completion_efficiency']:.2f}"
)
# Measure delegation effectiveness
delegation_analysis = self._analyze_delegation_effectiveness(context, task_analysis)
# Log delegation effectiveness
log_decision(
"subagent_stop",
"delegation_effectiveness",
f"{delegation_analysis['delegation_value']:.2f}",
f"Strategy: {delegation_analysis['delegation_strategy']}, Overhead: {delegation_analysis['coordination_overhead']:.1%}"
)
# Analyze coordination patterns
coordination_analysis = self._analyze_coordination_patterns(context, delegation_analysis)
# Generate optimization recommendations
optimization_insights = self._generate_optimization_insights(
context, task_analysis, delegation_analysis, coordination_analysis
)
# Record coordination learning
self._record_coordination_learning(context, delegation_analysis, optimization_insights)
# Update wave orchestration metrics
wave_metrics = self._update_wave_orchestration_metrics(context, coordination_analysis)
# Log wave orchestration if applicable
if context.get('wave_total', 1) > 1:
log_decision(
"subagent_stop",
"wave_orchestration",
f"wave_{context.get('wave_position', 0) + 1}_of_{context.get('wave_total', 1)}",
f"Performance: {wave_metrics['wave_performance']:.2f}, Efficiency: {wave_metrics['orchestration_efficiency']:.2f}"
)
# Generate coordination report
coordination_report = self._generate_coordination_report(
context, task_analysis, delegation_analysis, coordination_analysis,
optimization_insights, wave_metrics
)
# Performance tracking
execution_time = (time.time() - start_time) * 1000
coordination_report['performance_metrics'] = {
'coordination_analysis_time_ms': execution_time,
'target_met': execution_time < self.performance_target_ms,
'coordination_efficiency': self._calculate_coordination_efficiency(context, execution_time)
}
# Log hook end with success
log_hook_end(
"subagent_stop",
int(execution_time),
True,
{
"task_success": task_analysis['completion_success'],
"delegation_value": delegation_analysis['delegation_value'],
"coordination_strategy": coordination_analysis['coordination_strategy'],
"wave_enabled": context.get('wave_total', 1) > 1,
"performance_target_met": execution_time < self.performance_target_ms
}
)
return coordination_report
except Exception as e:
# Log error
log_error("subagent_stop", str(e), {"subagent_data": subagent_data})
# Log hook end with failure
log_hook_end("subagent_stop", int((time.time() - start_time) * 1000), False)
# Graceful fallback on error
return self._create_fallback_report(subagent_data, str(e))
def _extract_subagent_context(self, subagent_data: dict) -> dict:
"""Extract and enrich subagent context."""
context = {
'subagent_id': subagent_data.get('subagent_id', ''),
'parent_session_id': subagent_data.get('parent_session_id', ''),
'task_id': subagent_data.get('task_id', ''),
'task_type': subagent_data.get('task_type', 'unknown'),
'delegation_strategy': subagent_data.get('delegation_strategy', 'unknown'),
'execution_time_ms': subagent_data.get('execution_time_ms', 0),
'task_result': subagent_data.get('result', {}),
'task_status': subagent_data.get('status', 'unknown'),
'resources_used': subagent_data.get('resources', {}),
'coordination_data': subagent_data.get('coordination', {}),
'parallel_tasks': subagent_data.get('parallel_tasks', []),
'wave_context': subagent_data.get('wave_context', {}),
'completion_timestamp': time.time()
}
# Analyze task characteristics
context.update(self._analyze_task_characteristics(context))
# Extract coordination metrics
context.update(self._extract_coordination_metrics(context))
return context
def _analyze_task_characteristics(self, context: dict) -> dict:
"""Analyze characteristics of the completed task."""
task_result = context.get('task_result', {})
characteristics = {
'task_complexity': self._calculate_task_complexity(context),
'task_success': context.get('task_status') == 'completed',
'partial_success': context.get('task_status') == 'partial',
'task_error': context.get('task_status') == 'error',
'output_quality': self._assess_output_quality(task_result),
'resource_efficiency': self._calculate_resource_efficiency(context),
'coordination_required': len(context.get('parallel_tasks', [])) > 0
}
return characteristics
def _calculate_task_complexity(self, context: dict) -> float:
"""Calculate task complexity score (0.0 to 1.0)."""
complexity_factors = []
# Task type complexity
task_type = context.get('task_type', 'unknown')
type_complexity = {
'file_analysis': 0.3,
'code_generation': 0.6,
'multi_file_edit': 0.7,
'architecture_analysis': 0.9,
'system_refactor': 1.0
}
complexity_factors.append(type_complexity.get(task_type, 0.5))
# Execution time complexity
execution_time = context.get('execution_time_ms', 0)
if execution_time > 0:
# Normalize to 5 seconds as high complexity
time_complexity = min(execution_time / 5000, 1.0)
complexity_factors.append(time_complexity)
# Resource usage complexity
resources = context.get('resources_used', {})
if resources:
resource_complexity = max(
resources.get('memory_mb', 0) / 1000, # 1GB = high
resources.get('cpu_percent', 0) / 100
)
complexity_factors.append(min(resource_complexity, 1.0))
# Coordination complexity
if context.get('coordination_required'):
complexity_factors.append(0.4) # Coordination adds complexity
return statistics.mean(complexity_factors) if complexity_factors else 0.5
def _assess_output_quality(self, task_result: dict) -> float:
"""Assess quality of task output (0.0 to 1.0)."""
if not task_result:
return 0.0
quality_indicators = []
# Check for quality metrics in result
if 'quality_score' in task_result:
quality_indicators.append(task_result['quality_score'])
# Check for validation results
if task_result.get('validation_passed'):
quality_indicators.append(0.8)
elif task_result.get('validation_failed'):
quality_indicators.append(0.3)
# Check for error indicators
if task_result.get('errors'):
error_penalty = min(len(task_result['errors']) * 0.2, 0.6)
quality_indicators.append(1.0 - error_penalty)
# Check for completeness
if task_result.get('completeness_ratio'):
quality_indicators.append(task_result['completeness_ratio'])
# Default quality estimation
if not quality_indicators:
# Estimate quality from task status
status = task_result.get('status', 'unknown')
if status == 'success':
quality_indicators.append(0.8)
elif status == 'partial':
quality_indicators.append(0.6)
else:
quality_indicators.append(0.4)
return statistics.mean(quality_indicators)
def _calculate_resource_efficiency(self, context: dict) -> float:
"""Calculate resource usage efficiency."""
resources = context.get('resources_used', {})
execution_time = context.get('execution_time_ms', 1)
if not resources:
return 0.7 # Assume moderate efficiency
# Memory efficiency (lower usage = higher efficiency)
memory_mb = resources.get('memory_mb', 100)
memory_efficiency = max(1.0 - (memory_mb / 1000), 0.1) # Penalty above 1GB
# CPU efficiency (moderate usage is optimal)
cpu_percent = resources.get('cpu_percent', 50)
if cpu_percent < 30:
cpu_efficiency = cpu_percent / 30 # Underutilization penalty
elif cpu_percent > 80:
cpu_efficiency = (100 - cpu_percent) / 20 # Overutilization penalty
else:
cpu_efficiency = 1.0 # Optimal range
# Time efficiency (faster is better, but not at quality cost)
expected_time = resources.get('expected_time_ms', execution_time)
if expected_time > 0 and execution_time > 0:
time_efficiency = min(expected_time / execution_time, 1.0)
else:
time_efficiency = 0.8
return (memory_efficiency + cpu_efficiency + time_efficiency) / 3
def _extract_coordination_metrics(self, context: dict) -> dict:
"""Extract coordination-specific metrics."""
coordination_data = context.get('coordination_data', {})
return {
'coordination_overhead_ms': coordination_data.get('overhead_ms', 0),
'synchronization_points': coordination_data.get('sync_points', 0),
'data_exchange_size': coordination_data.get('data_exchange_bytes', 0),
'coordination_success': coordination_data.get('success', True),
'parallel_efficiency': coordination_data.get('parallel_efficiency', 1.0),
'wave_position': context.get('wave_context', {}).get('position', 0),
'wave_total': context.get('wave_context', {}).get('total_waves', 1)
}
def _analyze_task_completion(self, context: dict) -> dict:
"""Analyze task completion performance."""
task_analysis = {
'completion_success': context.get('task_success', False),
'completion_quality': context.get('output_quality', 0.0),
'completion_efficiency': context.get('resource_efficiency', 0.0),
'completion_time_performance': 0.0,
'error_analysis': {},
'success_factors': [],
'improvement_areas': []
}
# Time performance analysis
execution_time = context.get('execution_time_ms', 0)
task_type = context.get('task_type', 'unknown')
# Expected times by task type (rough estimates)
expected_times = {
'file_analysis': 500,
'code_generation': 2000,
'multi_file_edit': 1500,
'architecture_analysis': 3000,
'system_refactor': 5000
}
expected_time = expected_times.get(task_type, 1000)
if execution_time > 0:
task_analysis['completion_time_performance'] = min(expected_time / execution_time, 1.0)
# Success factor identification
if task_analysis['completion_success']:
if task_analysis['completion_quality'] > 0.8:
task_analysis['success_factors'].append('high_output_quality')
if task_analysis['completion_efficiency'] > 0.8:
task_analysis['success_factors'].append('efficient_resource_usage')
if task_analysis['completion_time_performance'] > 0.8:
task_analysis['success_factors'].append('fast_execution')
# Improvement area identification
if task_analysis['completion_quality'] < 0.6:
task_analysis['improvement_areas'].append('output_quality')
if task_analysis['completion_efficiency'] < 0.6:
task_analysis['improvement_areas'].append('resource_efficiency')
if task_analysis['completion_time_performance'] < 0.6:
task_analysis['improvement_areas'].append('execution_speed')
return task_analysis
def _analyze_delegation_effectiveness(self, context: dict, task_analysis: dict) -> dict:
"""Analyze effectiveness of task delegation."""
delegation_analysis = {
'delegation_strategy': context.get('delegation_strategy', 'unknown'),
'delegation_success': context.get('task_success', False),
'delegation_efficiency': 0.0,
'coordination_overhead': 0.0,
'parallel_benefit': 0.0,
'delegation_value': 0.0
}
# Calculate delegation efficiency
coordination_overhead = context.get('coordination_overhead_ms', 0)
execution_time = context.get('execution_time_ms', 1)
if execution_time > 0:
delegation_analysis['coordination_overhead'] = coordination_overhead / execution_time
delegation_analysis['delegation_efficiency'] = max(
1.0 - delegation_analysis['coordination_overhead'], 0.0
)
# Calculate parallel benefit
parallel_tasks = context.get('parallel_tasks', [])
if len(parallel_tasks) > 1:
# Estimate parallel benefit based on task coordination
parallel_efficiency = context.get('parallel_efficiency', 1.0)
theoretical_speedup = len(parallel_tasks)
actual_speedup = theoretical_speedup * parallel_efficiency
delegation_analysis['parallel_benefit'] = actual_speedup / theoretical_speedup
# Overall delegation value
quality_factor = task_analysis['completion_quality']
efficiency_factor = delegation_analysis['delegation_efficiency']
parallel_factor = delegation_analysis['parallel_benefit'] if parallel_tasks else 1.0
delegation_analysis['delegation_value'] = (
quality_factor * 0.4 +
efficiency_factor * 0.3 +
parallel_factor * 0.3
)
return delegation_analysis
def _analyze_coordination_patterns(self, context: dict, delegation_analysis: dict) -> dict:
"""Analyze coordination patterns and effectiveness."""
coordination_analysis = {
'coordination_strategy': 'unknown',
'synchronization_effectiveness': 0.0,
'data_flow_efficiency': 0.0,
'wave_coordination_success': 0.0,
'cross_agent_learning': 0.0,
'coordination_patterns_detected': []
}
# Determine coordination strategy
if context.get('wave_total', 1) > 1:
coordination_analysis['coordination_strategy'] = 'wave_orchestration'
elif len(context.get('parallel_tasks', [])) > 1:
coordination_analysis['coordination_strategy'] = 'parallel_coordination'
else:
coordination_analysis['coordination_strategy'] = 'single_agent'
# Synchronization effectiveness
sync_points = context.get('synchronization_points', 0)
coordination_success = context.get('coordination_success', True)
if sync_points > 0 and coordination_success:
coordination_analysis['synchronization_effectiveness'] = 1.0
elif sync_points > 0:
coordination_analysis['synchronization_effectiveness'] = 0.5
else:
coordination_analysis['synchronization_effectiveness'] = 0.8 # No sync needed
# Data flow efficiency
data_exchange = context.get('data_exchange_size', 0)
if data_exchange > 0:
# Efficiency based on data size (smaller is more efficient)
coordination_analysis['data_flow_efficiency'] = max(1.0 - (data_exchange / 1000000), 0.1) # 1MB threshold
else:
coordination_analysis['data_flow_efficiency'] = 1.0 # No data exchange needed
# Wave coordination success
wave_position = context.get('wave_position', 0)
wave_total = context.get('wave_total', 1)
if wave_total > 1:
# Success based on position completion and delegation value
wave_progress = (wave_position + 1) / wave_total
delegation_value = delegation_analysis.get('delegation_value', 0)
coordination_analysis['wave_coordination_success'] = (wave_progress + delegation_value) / 2
else:
coordination_analysis['wave_coordination_success'] = 1.0
# Detect coordination patterns
if delegation_analysis['delegation_value'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('effective_delegation')
if coordination_analysis['synchronization_effectiveness'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('efficient_synchronization')
if coordination_analysis['wave_coordination_success'] > 0.8:
coordination_analysis['coordination_patterns_detected'].append('successful_wave_orchestration')
# Log detected patterns if any
if coordination_analysis['coordination_patterns_detected']:
log_decision(
"subagent_stop",
"coordination_patterns",
str(len(coordination_analysis['coordination_patterns_detected'])),
f"Patterns: {', '.join(coordination_analysis['coordination_patterns_detected'])}"
)
return coordination_analysis
def _generate_optimization_insights(self, context: dict, task_analysis: dict,
delegation_analysis: dict, coordination_analysis: dict) -> dict:
"""Generate optimization insights for future delegations."""
insights = {
'delegation_optimizations': [],
'coordination_improvements': [],
'wave_strategy_recommendations': [],
'performance_enhancements': [],
'learning_opportunities': []
}
# Delegation optimizations
if delegation_analysis['delegation_value'] < 0.6:
insights['delegation_optimizations'].extend([
'Consider alternative delegation strategies',
'Reduce coordination overhead',
'Improve task partitioning'
])
if delegation_analysis['coordination_overhead'] > 0.3:
insights['delegation_optimizations'].append('Minimize coordination overhead')
# Coordination improvements
if coordination_analysis['synchronization_effectiveness'] < 0.7:
insights['coordination_improvements'].append('Improve synchronization mechanisms')
if coordination_analysis['data_flow_efficiency'] < 0.7:
insights['coordination_improvements'].append('Optimize data exchange patterns')
# Wave strategy recommendations
wave_success = coordination_analysis['wave_coordination_success']
if wave_success < 0.6 and context.get('wave_total', 1) > 1:
insights['wave_strategy_recommendations'].extend([
'Adjust wave orchestration strategy',
'Consider different task distribution',
'Improve wave synchronization'
])
elif wave_success > 0.8:
insights['wave_strategy_recommendations'].append('Wave orchestration working well - maintain strategy')
# Performance enhancements
if task_analysis['completion_time_performance'] < 0.6:
insights['performance_enhancements'].append('Optimize task execution speed')
if task_analysis['completion_efficiency'] < 0.6:
insights['performance_enhancements'].append('Improve resource utilization')
return insights
def _record_coordination_learning(self, context: dict, delegation_analysis: dict,
optimization_insights: dict):
"""Record coordination learning for future optimization."""
# Record delegation effectiveness
self.learning_engine.record_learning_event(
LearningType.PERFORMANCE_OPTIMIZATION,
AdaptationScope.PROJECT,
context,
{
'delegation_strategy': context.get('delegation_strategy'),
'task_type': context.get('task_type'),
'delegation_value': delegation_analysis['delegation_value'],
'coordination_overhead': delegation_analysis['coordination_overhead'],
'parallel_benefit': delegation_analysis['parallel_benefit']
},
delegation_analysis['delegation_value'],
0.8,
{'hook': 'subagent_stop', 'coordination_learning': True}
)
# Record task pattern learning
if context.get('task_success'):
self.learning_engine.record_learning_event(
LearningType.OPERATION_PATTERN,
AdaptationScope.USER,
context,
{
'successful_task_pattern': context.get('task_type'),
'success_factors': optimization_insights.get('performance_enhancements', []),
'delegation_effective': delegation_analysis['delegation_value'] > 0.7
},
delegation_analysis['delegation_value'],
0.9,
{'task_success_pattern': True}
)
def _update_wave_orchestration_metrics(self, context: dict, coordination_analysis: dict) -> dict:
"""Update wave orchestration performance metrics."""
wave_metrics = {
'wave_performance': 0.0,
'orchestration_efficiency': 0.0,
'wave_learning_value': 0.0,
'next_wave_recommendations': []
}
if context.get('wave_total', 1) > 1:
wave_success = coordination_analysis['wave_coordination_success']
wave_metrics['wave_performance'] = wave_success
# Calculate orchestration efficiency
coordination_overhead = context.get('coordination_overhead_ms', 0)
execution_time = context.get('execution_time_ms', 1)
if execution_time > 0:
wave_metrics['orchestration_efficiency'] = max(
1.0 - (coordination_overhead / execution_time), 0.0
)
# Learning value from wave coordination
wave_metrics['wave_learning_value'] = wave_success * 0.8 # Waves provide valuable learning
# Next wave recommendations
if wave_success > 0.8:
wave_metrics['next_wave_recommendations'].append('Continue current wave strategy')
else:
wave_metrics['next_wave_recommendations'].extend([
'Adjust wave coordination strategy',
'Improve inter-wave communication'
])
return wave_metrics
def _calculate_coordination_efficiency(self, context: dict, execution_time_ms: float) -> float:
"""Calculate coordination processing efficiency."""
# Efficiency based on coordination overhead vs processing time
coordination_overhead = context.get('coordination_overhead_ms', 0)
task_execution_time = context.get('execution_time_ms', 1)
if task_execution_time > 0:
coordination_ratio = coordination_overhead / task_execution_time
coordination_efficiency = max(1.0 - coordination_ratio, 0.0)
else:
coordination_efficiency = 0.8
# Processing time efficiency
processing_efficiency = min(100 / max(execution_time_ms, 1), 1.0) # Target: 100ms
return (coordination_efficiency + processing_efficiency) / 2
def _generate_coordination_report(self, context: dict, task_analysis: dict,
delegation_analysis: dict, coordination_analysis: dict,
optimization_insights: dict, wave_metrics: dict) -> dict:
"""Generate comprehensive coordination report."""
return {
'subagent_id': context['subagent_id'],
'task_id': context['task_id'],
'completion_timestamp': context['completion_timestamp'],
'task_completion': {
'success': task_analysis['completion_success'],
'quality_score': task_analysis['completion_quality'],
'efficiency_score': task_analysis['completion_efficiency'],
'time_performance': task_analysis['completion_time_performance'],
'success_factors': task_analysis['success_factors'],
'improvement_areas': task_analysis['improvement_areas']
},
'delegation_analysis': {
'strategy': delegation_analysis['delegation_strategy'],
'effectiveness': delegation_analysis['delegation_value'],
'efficiency': delegation_analysis['delegation_efficiency'],
'coordination_overhead': delegation_analysis['coordination_overhead'],
'parallel_benefit': delegation_analysis['parallel_benefit']
},
'coordination_metrics': {
'strategy': coordination_analysis['coordination_strategy'],
'synchronization_effectiveness': coordination_analysis['synchronization_effectiveness'],
'data_flow_efficiency': coordination_analysis['data_flow_efficiency'],
'patterns_detected': coordination_analysis['coordination_patterns_detected']
},
'wave_orchestration': {
'enabled': context.get('wave_total', 1) > 1,
'wave_position': context.get('wave_position', 0),
'total_waves': context.get('wave_total', 1),
'wave_performance': wave_metrics['wave_performance'],
'orchestration_efficiency': wave_metrics['orchestration_efficiency'],
'learning_value': wave_metrics['wave_learning_value']
},
'optimization_insights': optimization_insights,
'performance_summary': {
'overall_effectiveness': (
task_analysis['completion_quality'] * 0.4 +
delegation_analysis['delegation_value'] * 0.3 +
coordination_analysis['synchronization_effectiveness'] * 0.3
),
'delegation_success': delegation_analysis['delegation_value'] > 0.6,
'coordination_success': coordination_analysis['synchronization_effectiveness'] > 0.7,
'learning_value': wave_metrics.get('wave_learning_value', 0.5)
},
'next_task_recommendations': {
'continue_delegation': delegation_analysis['delegation_value'] > 0.6,
'optimize_coordination': coordination_analysis['synchronization_effectiveness'] < 0.7,
'adjust_wave_strategy': wave_metrics['wave_performance'] < 0.6,
'suggested_improvements': optimization_insights.get('delegation_optimizations', [])[:2]
},
'metadata': {
'hook_version': 'subagent_stop_1.0',
'analysis_timestamp': time.time(),
'coordination_framework': 'task_management_mode'
}
}
def _create_fallback_report(self, subagent_data: dict, error: str) -> dict:
"""Create fallback coordination report on error."""
return {
'subagent_id': subagent_data.get('subagent_id', 'unknown'),
'task_id': subagent_data.get('task_id', 'unknown'),
'completion_timestamp': time.time(),
'error': error,
'fallback_mode': True,
'task_completion': {
'success': False,
'quality_score': 0.0,
'efficiency_score': 0.0,
'error_occurred': True
},
'delegation_analysis': {
'strategy': 'unknown',
'effectiveness': 0.0,
'error': error
},
'performance_metrics': {
'coordination_analysis_time_ms': 0,
'target_met': False,
'error_occurred': True
}
}
def main():
"""Main hook execution function."""
try:
# Read subagent data from stdin
subagent_data = json.loads(sys.stdin.read())
# Initialize and run hook
hook = SubagentStopHook()
result = hook.process_subagent_stop(subagent_data)
# Output result as JSON
print(json.dumps(result, indent=2))
except Exception as e:
# Output error as JSON
error_result = {
'coordination_analysis_enabled': False,
'error': str(e),
'fallback_mode': True
}
print(json.dumps(error_result, indent=2))
sys.exit(1)
if __name__ == "__main__":
main()
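The delegation score computed in `_analyze_delegation_effectiveness` is a fixed weighted sum of quality, coordination efficiency, and parallel benefit. The arithmetic in isolation, with the same weights:

```python
def delegation_value(quality: float, efficiency: float, parallel: float) -> float:
    # Weights match _analyze_delegation_effectiveness:
    # quality 0.4, delegation efficiency 0.3, parallel benefit 0.3
    return quality * 0.4 + efficiency * 0.3 + parallel * 0.3

# A task with high output quality (0.9), low coordination overhead
# (efficiency 0.8), and near-linear parallel speedup (0.9):
score = delegation_value(0.9, 0.8, 0.9)
delegation_success = score > 0.6  # threshold used in the performance summary
```

Quality dominates by design: a fast, well-parallelized task with poor output still scores below the 0.6 success threshold.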


@@ -1,114 +0,0 @@
# Dynamic MCP Server Activation Pattern
# Just-in-time activation patterns for MCP servers
activation_patterns:
context7:
triggers:
- "import statements from external libraries"
- "framework-specific questions"
- "documentation requests"
- "best practices queries"
context_keywords:
- "how to use"
- "documentation"
- "examples"
- "patterns"
activation_confidence: 0.8
sequential:
triggers:
- "complex debugging scenarios"
- "multi-step analysis requests"
- "--think flags detected"
- "system design questions"
context_keywords:
- "analyze"
- "debug"
- "complex"
- "system"
- "architecture"
activation_confidence: 0.85
magic:
triggers:
- "UI component requests"
- "design system queries"
- "frontend development"
- "component keywords"
context_keywords:
- "component"
- "UI"
- "frontend"
- "design"
- "interface"
activation_confidence: 0.9
playwright:
triggers:
- "testing workflows"
- "browser automation"
- "e2e testing"
- "performance monitoring"
context_keywords:
- "test"
- "browser"
- "automation"
- "e2e"
- "performance"
activation_confidence: 0.85
morphllm:
triggers:
- "multi-file editing"
- "pattern application"
- "fast apply scenarios"
- "code transformation"
context_keywords:
- "edit"
- "modify"
- "refactor"
- "transform"
- "apply"
activation_confidence: 0.8
serena:
triggers:
- "semantic analysis"
- "project-wide operations"
- "symbol navigation"
- "memory management"
context_keywords:
- "analyze"
- "project"
- "semantic"
- "memory"
- "context"
activation_confidence: 0.75
coordination_patterns:
hybrid_intelligence:
serena_morphllm:
condition: "complex editing with semantic understanding"
strategy: "serena analyzes, morphllm executes"
confidence_threshold: 0.8
multi_server_activation:
max_concurrent: 3
priority_order:
- "serena"
- "sequential"
- "context7"
- "magic"
- "morphllm"
- "playwright"
fallback_strategies:
server_unavailable: "graceful_degradation"
timeout_handling: "partial_results"
error_recovery: "alternative_server"
performance_optimization:
cache_activation_decisions: true
cache_duration_minutes: 15
batch_similar_requests: true
lazy_loading: true


@@ -1,107 +0,0 @@
# Dynamic Mode Detection Pattern
# Real-time mode activation based on context analysis
mode_detection:
brainstorming:
triggers:
- "vague project requests"
- "exploration keywords"
- "uncertainty indicators"
- "new project discussions"
patterns:
- "I want to build"
- "thinking about"
- "not sure"
- "explore"
- "brainstorm"
- "figure out"
confidence_threshold: 0.7
activation_hooks: ["session_start", "pre_tool_use"]
coordination:
command: "/sc:brainstorm"
mcp_servers: ["sequential", "context7"]
task_management:
triggers:
- "multi-step operations"
- "build/implement keywords"
- "system-wide scope"
- "delegation indicators"
patterns:
- "build"
- "implement"
- "create"
- "system"
- "comprehensive"
- "multiple files"
confidence_threshold: 0.8
activation_hooks: ["pre_tool_use", "subagent_stop"]
coordination:
wave_orchestration: true
delegation_patterns: true
token_efficiency:
triggers:
- "context usage >75%"
- "large-scale operations"
- "resource constraints"
- "brevity requests"
patterns:
- "compressed"
- "brief"
- "optimize"
- "efficient"
- "reduce"
confidence_threshold: 0.75
activation_hooks: ["pre_compact", "session_start"]
coordination:
compression_algorithms: true
selective_preservation: true
introspection:
triggers:
- "self-analysis requests"
- "framework discussions"
- "meta-cognitive needs"
- "error analysis"
patterns:
- "analyze reasoning"
- "framework"
- "meta"
- "introspect"
- "self-analysis"
confidence_threshold: 0.6
activation_hooks: ["post_tool_use"]
coordination:
meta_cognitive_analysis: true
reasoning_validation: true
adaptive_learning:
pattern_refinement:
enabled: true
learning_rate: 0.1
feedback_integration: true
user_adaptation:
track_preferences: true
adapt_thresholds: true
personalization: true
effectiveness_tracking:
mode_success_rate: true
user_satisfaction: true
performance_impact: true
cross_mode_coordination:
simultaneous_modes:
- ["task_management", "token_efficiency"]
- ["brainstorming", "introspection"]
mode_transitions:
brainstorming_to_task_management:
trigger: "requirements clarified"
confidence: 0.8
task_management_to_introspection:
trigger: "complex issues encountered"
confidence: 0.7


@@ -1,177 +0,0 @@
# Learned Project Optimizations Pattern
# Project-specific adaptations that improve over time
project_profile:
id: "superclaude_framework"
type: "python_framework"
created: "2025-01-31"
last_analyzed: "2025-01-31"
optimization_cycles: 0
learned_optimizations:
file_patterns:
high_frequency_files:
patterns:
- "commands/*.md"
- "Core/*.md"
- "Modes/*.md"
- "MCP/*.md"
frequency_weight: 0.9
cache_priority: "high"
structural_patterns:
patterns:
- "markdown documentation with YAML frontmatter"
- "python scripts with comprehensive docstrings"
- "modular architecture with clear separation"
optimization: "maintain full context for these patterns"
workflow_optimizations:
effective_sequences:
- sequence: ["Read", "Edit", "Validate"]
success_rate: 0.95
context: "documentation updates"
- sequence: ["Glob", "Read", "MultiEdit"]
success_rate: 0.88
context: "multi-file refactoring"
- sequence: ["Serena analyze", "Morphllm execute"]
success_rate: 0.92
context: "large codebase changes"
mcp_server_effectiveness:
serena:
effectiveness: 0.9
optimal_contexts:
- "framework documentation analysis"
- "cross-file relationship mapping"
- "memory-driven development"
performance_notes: "excellent for project context"
sequential:
effectiveness: 0.85
optimal_contexts:
- "complex architectural decisions"
- "multi-step problem solving"
- "systematic analysis"
performance_notes: "valuable for thinking-intensive tasks"
morphllm:
effectiveness: 0.8
optimal_contexts:
- "pattern-based editing"
- "documentation updates"
- "style consistency"
performance_notes: "efficient for text transformations"
compression_learnings:
effective_strategies:
framework_content:
strategy: "complete_preservation"
reason: "high information density, frequent reference"
effectiveness: 0.95
session_metadata:
strategy: "aggressive_compression"
ratio: 0.7
effectiveness: 0.88
quality_preservation: 0.96
symbol_system_adoption:
technical_symbols: 0.9 # High adoption rate
status_symbols: 0.85 # Good adoption rate
flow_symbols: 0.8 # Good adoption rate
effectiveness: "significantly improved readability"
quality_gate_refinements:
validation_priorities:
- "markdown syntax validation"
- "YAML frontmatter validation"
- "cross-reference consistency"
- "documentation completeness"
custom_rules:
- rule: "SuperClaude framework paths preserved"
enforcement: "strict"
violation_action: "immediate_alert"
- rule: "session lifecycle compliance"
enforcement: "standard"
violation_action: "warning_with_suggestion"
performance_insights:
bottleneck_identification:
- area: "large markdown file processing"
impact: "medium"
optimization: "selective reading with targeted edits"
- area: "cross-file reference validation"
impact: "low"
optimization: "cached reference mapping"
acceleration_opportunities:
- opportunity: "pattern-based file detection"
potential_improvement: "40% faster file processing"
implementation: "regex pre-filtering"
- opportunity: "intelligent caching"
potential_improvement: "60% faster repeated operations"
implementation: "content-aware cache keys"
error_pattern_learning:
common_issues:
- issue: "path traversal in framework files"
frequency: 0.15
resolution: "automatic path validation"
prevention: "framework exclusion patterns"
- issue: "markdown syntax in code blocks"
frequency: 0.08
resolution: "improved syntax detection"
prevention: "context-aware parsing"
recovery_strategies:
- strategy: "graceful fallback to standard tools"
effectiveness: 0.9
context: "MCP server unavailability"
- strategy: "partial result delivery"
effectiveness: 0.85
context: "timeout scenarios"
adaptive_rules:
mode_activation_refinements:
task_management:
threshold: 0.85 # Raised due to project complexity
reason: "framework development benefits from structured approach"
token_efficiency:
threshold: 0.7 # Standard due to balanced content types
reason: "mixed documentation and code content"
mcp_coordination_rules:
- rule: "always activate serena for framework operations"
confidence: 0.95
effectiveness: 0.92
- rule: "use morphllm for documentation pattern updates"
confidence: 0.88
effectiveness: 0.87
continuous_improvement:
learning_velocity: "high" # Framework actively evolving
pattern_stability: "medium" # Architecture still developing
optimization_frequency: "per_session"
success_metrics:
operation_speed: "+25% improvement target"
quality_preservation: "98% minimum"
user_satisfaction: "90% target"
next_optimization_cycle:
focus_areas:
- "cross-file relationship mapping"
- "intelligent pattern detection"
- "performance monitoring integration"
target_date: "next_major_session"
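
The `mcp_server_effectiveness` table above implies a selection rule: prefer a server whose `optimal_contexts` match the task, and break ties by effectiveness score. A hedged sketch of that rule (data copied from the YAML; the matching logic itself is an assumption):

```python
# Illustrative MCP-server selection from the learned effectiveness table.
# Context matching is a plain substring test for brevity.

SERVER_EFFECTIVENESS = {
    "serena":     {"effectiveness": 0.90,
                   "optimal_contexts": ["framework documentation analysis",
                                        "cross-file relationship mapping",
                                        "memory-driven development"]},
    "sequential": {"effectiveness": 0.85,
                   "optimal_contexts": ["complex architectural decisions",
                                        "multi-step problem solving",
                                        "systematic analysis"]},
    "morphllm":   {"effectiveness": 0.80,
                   "optimal_contexts": ["pattern-based editing",
                                        "documentation updates",
                                        "style consistency"]},
}

def best_server(task_context: str) -> str:
    """Context match wins first; effectiveness breaks ties."""
    scored = []
    for name, info in SERVER_EFFECTIVENESS.items():
        match = any(ctx in task_context for ctx in info["optimal_contexts"])
        scored.append((match, info["effectiveness"], name))
    return max(scored)[2]
```

With no context match, the highest-effectiveness server (`serena`) wins by default, which matches the adaptive rule "always activate serena for framework operations" later in this file.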

@@ -1,119 +0,0 @@
# Learned User Preferences Pattern
# Adaptive patterns that evolve based on user behavior
user_profile:
id: "example_user"
created: "2025-01-31"
last_updated: "2025-01-31"
sessions_analyzed: 0
learned_preferences:
communication_style:
verbosity_preference: "balanced" # minimal, balanced, detailed
technical_depth: "high" # low, medium, high
symbol_usage_comfort: "high" # low, medium, high
abbreviation_tolerance: "medium" # low, medium, high
workflow_patterns:
preferred_thinking_mode: "--think-hard"
mcp_server_preferences:
- "serena" # Most frequently beneficial
- "sequential" # High success rate
- "context7" # Frequently requested
mode_activation_frequency:
task_management: 0.8 # High usage
token_efficiency: 0.6 # Medium usage
brainstorming: 0.3 # Low usage
introspection: 0.4 # Medium usage
project_type_expertise:
python: 0.9 # High proficiency
react: 0.7 # Good proficiency
javascript: 0.8 # High proficiency
documentation: 0.6 # Medium proficiency
performance_preferences:
speed_vs_quality: "quality_focused" # speed_focused, balanced, quality_focused
compression_tolerance: 0.7 # How much compression user accepts
context_size_preference: "medium" # small, medium, large
learning_insights:
effective_patterns:
- pattern: "serena + morphllm hybrid"
success_rate: 0.92
context: "large refactoring tasks"
- pattern: "sequential + context7"
success_rate: 0.88
context: "complex debugging"
- pattern: "magic + context7"
success_rate: 0.85
context: "UI component creation"
ineffective_patterns:
- pattern: "playwright without setup"
success_rate: 0.3
context: "testing without proper configuration"
improvement: "always check test environment first"
optimization_opportunities:
- area: "context compression"
current_efficiency: 0.6
target_efficiency: 0.8
strategy: "increase abbreviation usage"
- area: "mcp coordination"
current_efficiency: 0.7
target_efficiency: 0.85
strategy: "better server selection logic"
adaptive_thresholds:
mode_activation:
brainstorming: 0.6 # Lowered from 0.7 due to user preference
task_management: 0.9 # Raised from 0.8 due to frequent use
token_efficiency: 0.65 # Adjusted based on tolerance
introspection: 0.5 # Lowered due to user comfort with meta-analysis
mcp_server_confidence:
serena: 0.65 # Lowered due to high success rate
sequential: 0.75 # Standard
context7: 0.7 # Slightly lowered due to frequent success
magic: 0.85 # Standard
morphllm: 0.7 # Lowered due to hybrid usage success
playwright: 0.9 # Raised due to setup issues
personalization_rules:
communication:
- "Use technical terminology freely"
- "Provide implementation details"
- "Include performance considerations"
- "Balance symbol usage with clarity"
workflow:
- "Prefer serena for analysis tasks"
- "Use sequential for complex problems"
- "Always validate with quality gates"
- "Optimize for long-term maintainability"
error_handling:
- "Provide detailed error context"
- "Suggest multiple solutions"
- "Include learning opportunities"
- "Track error patterns for prevention"
continuous_learning:
feedback_integration:
explicit_feedback: true
implicit_feedback: true # Based on user actions
outcome_tracking: true
pattern_evolution:
refinement_frequency: "weekly"
adaptation_rate: 0.1
stability_threshold: 0.95
quality_metrics:
user_satisfaction_score: 0.0 # To be measured
task_completion_rate: 0.0 # To be measured
efficiency_improvement: 0.0 # To be measured
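
The `adaptive_thresholds` block records thresholds being raised or lowered per mode, and `pattern_evolution` sets an `adaptation_rate` of 0.1. One way to connect the two, sketched here as a guess at the intent (the target-selection rule and clamp bounds are assumptions, not documented behavior):

```python
# Hypothetical threshold adaptation: nudge a mode's activation threshold
# toward a target at the configured adaptation_rate, clamped to a safe band.

ADAPTATION_RATE = 0.1  # from pattern_evolution.adaptation_rate

def adapt_threshold(current: float, success_rate: float,
                    low: float = 0.5, high: float = 0.95) -> float:
    """High observed success -> lower threshold (activate more readily);
    low success -> raise it. Result clamped to [low, high]."""
    target = low if success_rate >= 0.8 else high
    updated = current + ADAPTATION_RATE * (target - current)
    return max(low, min(high, updated))
```

Each call moves the threshold only 10% of the remaining distance, which keeps adaptation gradual, consistent with the `stability_threshold: 0.95` setting.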

@@ -1,45 +0,0 @@
# Minimal Python Project Pattern
# Lightweight bootstrap pattern for Python projects
project_type: "python"
detection_patterns:
- "*.py files present"
- "requirements.txt or pyproject.toml"
- "__pycache__/ directories"
auto_flags:
- "--serena" # Semantic analysis
- "--context7" # Python documentation
mcp_servers:
primary: "serena"
secondary: ["context7", "sequential", "morphllm"]
patterns:
file_structure:
- "src/ or lib/"
- "tests/"
- "docs/"
- "requirements.txt"
common_tasks:
- "function refactoring"
- "class extraction"
- "import optimization"
- "testing setup"
intelligence:
mode_triggers:
- "token_efficiency: context >75%"
- "task_management: refactor|test|analyze"
validation_focus:
- "python_syntax"
- "pep8_compliance"
- "type_hints"
- "testing_coverage"
performance_targets:
bootstrap_ms: 40
context_size: "4KB"
cache_duration: "45min"
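
The `detection_patterns` lists here and in the React pattern below suggest a simple root-directory probe. A minimal sketch of that detection (function name and fallback value are illustrative; the real bootstrap logic is not shown in these files):

```python
# Illustrative project-type detection from the bootstrap patterns:
# Python = .py files plus a requirements/pyproject manifest;
# React = package.json (refined further by checking its dependencies).

import os

def detect_project_type(root: str) -> str:
    entries = set(os.listdir(root))
    has_py = any(e.endswith(".py") for e in entries)
    if has_py and ({"requirements.txt", "pyproject.toml"} & entries):
        return "python"
    if "package.json" in entries:
        return "react"
    return "generic"
```

The detected type then drives the `auto_flags` and `mcp_servers` choices above (e.g. `--serena` and `--context7` for Python projects).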

@@ -1,45 +0,0 @@
# Minimal React Project Pattern
# Lightweight bootstrap pattern for React projects
project_type: "react"
detection_patterns:
- "package.json with react dependency"
- "src/ directory with .jsx/.tsx files"
- "public/index.html"
auto_flags:
- "--magic" # UI component generation
- "--context7" # React documentation
mcp_servers:
primary: "magic"
secondary: ["context7", "morphllm"]
patterns:
file_structure:
- "src/components/"
- "src/hooks/"
- "src/pages/"
- "src/utils/"
common_tasks:
- "component creation"
- "state management"
- "routing setup"
- "performance optimization"
intelligence:
mode_triggers:
- "token_efficiency: context >75%"
- "task_management: build|implement|create"
validation_focus:
- "jsx_syntax"
- "react_patterns"
- "accessibility"
- "performance"
performance_targets:
bootstrap_ms: 30
context_size: "3KB"
cache_duration: "60min"

@@ -1,88 +0,0 @@
{
"hooks": {
"SessionStart": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/session_start.py",
"timeout": 10
}
]
}
],
"PreToolUse": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/pre_tool_use.py",
"timeout": 15
}
]
}
],
"PostToolUse": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/post_tool_use.py",
"timeout": 10
}
]
}
],
"PreCompact": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/pre_compact.py",
"timeout": 15
}
]
}
],
"Notification": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/notification.py",
"timeout": 10
}
]
}
],
"Stop": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/stop.py",
"timeout": 15
}
]
}
],
"SubagentStop": [
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "python3 ~/.claude/hooks/subagent_stop.py",
"timeout": 15
}
]
}
]
}
}
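
Each entry in this `settings.json` runs a Python script with the event payload as JSON on stdin and expects a JSON response on stdout, bounded by the `timeout` value. A minimal sketch of that contract (the response fields shown here are illustrative; the real hooks return richer structures):

```python
# Skeleton of the hook stdin/stdout JSON contract implied by settings.json.
# Field names in the response dict are illustrative assumptions.

import json
import sys

def handle(event: dict) -> dict:
    """Build a hook response; 'continue': True means do not block the tool."""
    return {
        "continue": True,
        "received_keys": sorted(event),
    }

def run() -> None:
    """Entry point: read event JSON from stdin, write response to stdout."""
    try:
        event = json.load(sys.stdin)
    except json.JSONDecodeError:
        event = {}  # degrade gracefully on malformed input
    json.dump(handle(event), sys.stdout)
```

In the real scripts, `run()` executes at module scope when invoked as `python3 ~/.claude/hooks/<hook>.py`; exceeding the configured timeout (10-15 s here) causes the hook result to be discarded rather than blocking the session.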

@@ -1,345 +0,0 @@
{
"superclaude": {
"description": "SuperClaude-Lite Framework Configuration",
"version": "1.0.0",
"framework": "superclaude-lite",
"enabled": true
},
"hook_configurations": {
"session_start": {
"enabled": true,
"description": "SESSION_LIFECYCLE + FLAGS logic with intelligent bootstrap",
"performance_target_ms": 50,
"features": [
"smart_project_context_loading",
"automatic_mode_detection",
"mcp_server_intelligence_routing",
"user_preference_adaptation",
"performance_optimized_initialization"
],
"configuration": {
"auto_project_detection": true,
"framework_exclusion_enabled": true,
"intelligence_activation": true,
"learning_integration": true,
"performance_monitoring": true
},
"error_handling": {
"graceful_fallback": true,
"preserve_user_context": true,
"error_learning": true
}
},
"pre_tool_use": {
"enabled": true,
"description": "ORCHESTRATOR + MCP routing intelligence for optimal tool selection",
"performance_target_ms": 200,
"features": [
"intelligent_tool_routing",
"mcp_server_selection",
"performance_optimization",
"context_aware_configuration",
"fallback_strategy_implementation",
"real_time_adaptation"
],
"configuration": {
"mcp_intelligence": true,
"pattern_detection": true,
"learning_adaptations": true,
"performance_optimization": true,
"fallback_strategies": true
},
"integration": {
"mcp_servers": ["context7", "sequential", "magic", "playwright", "morphllm", "serena"],
"quality_gates": true,
"learning_engine": true
}
},
"post_tool_use": {
"enabled": true,
"description": "RULES + PRINCIPLES validation and learning system",
"performance_target_ms": 100,
"features": [
"quality_validation",
"rules_compliance_checking",
"principles_alignment_verification",
"effectiveness_measurement",
"error_pattern_detection",
"learning_opportunity_identification"
],
"configuration": {
"rules_validation": true,
"principles_validation": true,
"quality_standards_enforcement": true,
"effectiveness_tracking": true,
"learning_integration": true
},
"validation_levels": {
"basic": ["syntax_validation"],
"standard": ["syntax_validation", "type_analysis", "code_quality"],
"comprehensive": ["syntax_validation", "type_analysis", "code_quality", "security_assessment", "performance_analysis"],
"production": ["syntax_validation", "type_analysis", "code_quality", "security_assessment", "performance_analysis", "integration_testing", "deployment_validation"]
}
},
"pre_compact": {
"enabled": true,
"description": "MODE_Token_Efficiency compression algorithms with intelligent optimization",
"performance_target_ms": 150,
"features": [
"intelligent_compression_strategy_selection",
"selective_content_preservation",
"framework_exclusion",
"symbol_systems_optimization",
"abbreviation_systems",
"quality_gated_compression"
],
"configuration": {
"selective_compression": true,
"framework_protection": true,
"quality_preservation_target": 0.95,
"compression_efficiency_target": 0.50,
"adaptive_compression": true
},
"compression_levels": {
"minimal": "0-40%",
"efficient": "40-70%",
"compressed": "70-85%",
"critical": "85-95%",
"emergency": "95%+"
}
},
"notification": {
"enabled": true,
"description": "Just-in-time MCP documentation loading and pattern updates",
"performance_target_ms": 100,
"features": [
"just_in_time_documentation_loading",
"dynamic_pattern_updates",
"framework_intelligence_updates",
"real_time_learning",
"performance_optimization_through_caching"
],
"configuration": {
"jit_documentation_loading": true,
"pattern_updates": true,
"intelligence_caching": true,
"learning_integration": true,
"performance_optimization": true
},
"caching": {
"documentation_cache_minutes": 30,
"pattern_cache_minutes": 60,
"intelligence_cache_minutes": 15
}
},
"stop": {
"enabled": true,
"description": "Session analytics + /sc:save logic with performance tracking",
"performance_target_ms": 200,
"features": [
"comprehensive_session_analytics",
"learning_consolidation",
"session_persistence",
"performance_optimization_recommendations",
"quality_assessment_and_improvement_suggestions"
],
"configuration": {
"session_analytics": true,
"learning_consolidation": true,
"session_persistence": true,
"performance_tracking": true,
"recommendation_generation": true
},
"analytics": {
"performance_metrics": true,
"effectiveness_measurement": true,
"learning_insights": true,
"optimization_recommendations": true
}
},
"subagent_stop": {
"enabled": true,
"description": "MODE_Task_Management delegation coordination and analytics",
"performance_target_ms": 150,
"features": [
"subagent_performance_analytics",
"delegation_effectiveness_measurement",
"cross_agent_learning",
"wave_orchestration_optimization",
"parallel_execution_performance_tracking"
],
"configuration": {
"delegation_analytics": true,
"coordination_measurement": true,
"wave_orchestration": true,
"performance_tracking": true,
"learning_integration": true
},
"task_management": {
"delegation_strategies": ["files", "folders", "auto"],
"wave_orchestration": true,
"parallel_coordination": true,
"performance_optimization": true
}
}
},
"global_configuration": {
"framework_integration": {
"superclaude_compliance": true,
"yaml_driven_logic": true,
"hot_reload_configuration": true,
"cross_hook_coordination": true
},
"performance_monitoring": {
"enabled": true,
"real_time_tracking": true,
"target_enforcement": true,
"optimization_suggestions": true,
"performance_analytics": true
},
"learning_system": {
"enabled": true,
"cross_hook_learning": true,
"adaptation_application": true,
"effectiveness_tracking": true,
"pattern_recognition": true
},
"error_handling": {
"graceful_degradation": true,
"fallback_strategies": true,
"error_learning": true,
"recovery_optimization": true
},
"security": {
"input_validation": true,
"path_traversal_protection": true,
"timeout_protection": true,
"resource_limits": true
}
},
"mcp_server_integration": {
"enabled": true,
"servers": {
"context7": {
"description": "Library documentation and framework patterns",
"capabilities": ["documentation_access", "framework_patterns", "best_practices"],
"performance_profile": "standard"
},
"sequential": {
"description": "Multi-step reasoning and complex analysis",
"capabilities": ["complex_reasoning", "systematic_analysis", "hypothesis_testing"],
"performance_profile": "intensive"
},
"magic": {
"description": "UI component generation and design systems",
"capabilities": ["ui_generation", "design_systems", "component_patterns"],
"performance_profile": "standard"
},
"playwright": {
"description": "Browser automation and testing",
"capabilities": ["browser_automation", "testing_frameworks", "performance_testing"],
"performance_profile": "intensive"
},
"morphllm": {
"description": "Intelligent editing with fast apply",
"capabilities": ["pattern_application", "fast_apply", "intelligent_editing"],
"performance_profile": "lightweight"
},
"serena": {
"description": "Semantic analysis and memory management",
"capabilities": ["semantic_understanding", "project_context", "memory_management"],
"performance_profile": "standard"
}
},
"coordination": {
"intelligent_routing": true,
"fallback_strategies": true,
"performance_optimization": true,
"learning_adaptation": true
}
},
"mode_integration": {
"enabled": true,
"modes": {
"brainstorming": {
"description": "Interactive requirements discovery",
"hooks": ["session_start", "notification"],
"mcp_servers": ["sequential", "context7"]
},
"task_management": {
"description": "Multi-layer task orchestration",
"hooks": ["session_start", "pre_tool_use", "subagent_stop", "stop"],
"mcp_servers": ["serena", "morphllm"]
},
"token_efficiency": {
"description": "Intelligent token optimization",
"hooks": ["pre_compact", "session_start"],
"mcp_servers": ["morphllm"]
},
"introspection": {
"description": "Meta-cognitive analysis",
"hooks": ["post_tool_use", "stop"],
"mcp_servers": ["sequential"]
}
}
},
"quality_gates": {
"enabled": true,
"8_step_validation": {
"step_1": "syntax_validation",
"step_2": "type_analysis",
"step_3": "code_quality",
"step_4": "security_assessment",
"step_5": "testing_validation",
"step_6": "performance_analysis",
"step_7": "documentation_verification",
"step_8": "integration_testing"
},
"hook_integration": {
"pre_tool_use": ["step_1", "step_2"],
"post_tool_use": ["step_3", "step_4", "step_5"],
"stop": ["step_6", "step_7", "step_8"]
}
},
"cache_configuration": {
"enabled": true,
"cache_directory": "./cache",
"learning_data_retention_days": 90,
"session_data_retention_days": 30,
"performance_data_retention_days": 365,
"automatic_cleanup": true
},
"logging_configuration": {
"enabled": true,
"log_level": "INFO",
"performance_logging": true,
"error_logging": true,
"learning_logging": true,
"hook_execution_logging": true
},
"development_support": {
"debugging_enabled": false,
"performance_profiling": false,
"verbose_logging": false,
"test_mode": false
}
}
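
The `compression_levels` table in the `pre_compact` section maps context-usage bands to named levels. Reading the bands as half-open intervals, the lookup is a short cascade (band boundaries come straight from the JSON; the function itself is an illustrative sketch):

```python
# Context-usage -> compression level, per the pre_compact compression_levels
# table: minimal 0-40%, efficient 40-70%, compressed 70-85%,
# critical 85-95%, emergency 95%+.

def compression_level(context_usage: float) -> str:
    """context_usage is a fraction in [0.0, 1.0]."""
    if context_usage >= 0.95:
        return "emergency"
    if context_usage >= 0.85:
        return "critical"
    if context_usage >= 0.70:
        return "compressed"
    if context_usage >= 0.40:
        return "efficient"
    return "minimal"
```

This is consistent with the mode triggers elsewhere in these files ("token_efficiency: context >75%" falls in the `compressed` band), while `quality_preservation_target: 0.95` caps how aggressively any band may compress.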