SuperClaude/.claude/commands/shared/performance-tracker.yml
NomenAK bce31d52a8 Initial commit: SuperClaude v4.0.0 configuration framework
- Core configuration files (CLAUDE.md, RULES.md, PERSONAS.md, MCP.md)
- 17 slash commands for specialized workflows
- 25 shared YAML resources for advanced configurations
- Installation script for global deployment
- 9 cognitive personas for specialized thinking modes
- MCP integration patterns for intelligent tool usage
- Token economy and ultracompressed mode support

🤖 Generated with [Claude Code](https://claude.ai/code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-06-22 14:02:49 +02:00


# Performance Tracking System
## Legend
| Symbol | Meaning | | Abbrev | Meaning |
|--------|---------|---|--------|---------|
| ⚡ | fast/optimized | | perf | performance |
| 📊 | metrics/data | | exec | execution |
| ⏱ | timing/duration | | tok | token |
| 🔄 | continuous | | opt | optimization |
## Metrics Collection
```yaml
Command_Performance:
  Timing_Metrics:
    Start_Time: "Record command initiation timestamp"
    End_Time: "Record command completion timestamp"
    Duration: "end_time - start_time"
    Phases: "Breakdown by major operations"
  Token_Metrics:
    Input_Tokens: "Tokens in user command + context"
    Output_Tokens: "Tokens in response + tool calls"
    MCP_Tokens: "Tokens consumed by MCP servers"
    Efficiency_Ratio: "output_value / total_tokens"
  Operation_Metrics:
    Tools_Used: "List of tools called (Read, Edit, Bash, etc.)"
    Files_Accessed: "Number of files read/written"
    MCP_Calls: "Which MCP servers used + frequency"
    Error_Count: "Number of errors encountered"
  Success_Metrics:
    Completion_Status: "success|partial|failure"
    User_Satisfaction: "Interruptions, corrections, positive signals"
    Retry_Count: "Number of retry attempts needed"
    Quality_Score: "Estimated output quality (1-10)"
```
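The timing and token metrics above can be sketched as one per-command record. This is a minimal illustration, not SuperClaude's actual implementation; the `output_value` argument of the efficiency ratio is a hypothetical quality estimate supplied by the caller.

```python
import time
from dataclasses import dataclass, field


@dataclass
class CommandMetrics:
    """Per-command record mirroring the Timing and Token metric groups."""
    start_time: float = field(default_factory=time.monotonic)
    end_time: float = 0.0
    tokens_input: int = 0
    tokens_output: int = 0
    tokens_mcp: int = 0
    error_count: int = 0

    def finish(self) -> None:
        """Record the command completion timestamp."""
        self.end_time = time.monotonic()

    @property
    def duration(self) -> float:
        """Duration = end_time - start_time, in seconds."""
        return self.end_time - self.start_time

    def efficiency_ratio(self, output_value: float) -> float:
        """output_value / total_tokens, guarding against zero tokens."""
        total = self.tokens_input + self.tokens_output + self.tokens_mcp
        return output_value / total if total else 0.0


m = CommandMetrics()
m.tokens_input, m.tokens_output, m.tokens_mcp = 400, 500, 100
m.finish()
ratio = m.efficiency_ratio(output_value=8.0)  # 8.0 / 1000 tokens = 0.008
```

Using `time.monotonic()` rather than wall-clock time keeps the duration immune to system clock adjustments mid-command.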
## Performance Baselines
```yaml
Command_Benchmarks:
  Simple_Commands:
    analyze_file: "<5s, <500 tokens"
    read_file: "<2s, <200 tokens"
    edit_file: "<3s, <300 tokens"
  Medium_Commands:
    build_component: "<30s, <2000 tokens"
    test_execution: "<45s, <1500 tokens"
    security_scan: "<60s, <3000 tokens"
  Complex_Commands:
    full_analysis: "<120s, <5000 tokens"
    architecture_design: "<180s, <8000 tokens"
    comprehensive_scan: "<300s, <10000 tokens"

MCP_Server_Performance:
  Context7: "<5s response, 100-2000 tokens typical"
  Sequential: "<30s analysis, 500-10000 tokens typical"
  Magic: "<10s generation, 500-2000 tokens typical"
  Puppeteer: "<15s operation, minimal tokens"
```
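A measurement can be checked against these baselines by parsing the `"<Ns, <M tokens"` spec strings. The sketch below is an assumption about how such a check might look; the baseline dictionary simply copies two entries from the table above.

```python
import re

# Two sample baselines copied from the benchmark table above.
BASELINES = {
    "analyze_file": "<5s, <500 tokens",
    "full_analysis": "<120s, <5000 tokens",
}


def parse_baseline(spec: str) -> tuple[float, int]:
    """Extract (max_seconds, max_tokens) from a '<Ns, <M tokens' spec."""
    secs, toks = re.fullmatch(r"<(\d+)s, <(\d+) tokens", spec).groups()
    return float(secs), int(toks)


def within_baseline(command: str, seconds: float, tokens: int) -> bool:
    """True when both the time and the token budget were respected."""
    max_s, max_t = parse_baseline(BASELINES[command])
    return seconds <= max_s and tokens <= max_t


print(within_baseline("analyze_file", 3.2, 480))    # True
print(within_baseline("full_analysis", 150, 4000))  # False: 150s exceeds 120s
```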
## Adaptive Optimization
```yaml
Real_Time_Optimization:
  Slow_Operations:
    Threshold: ">30s execution time"
    Actions:
      - Switch to faster tools (rg vs grep)
      - Reduce scope (specific files vs full scan)
      - Enable parallel processing
      - Suggest --uc mode for token efficiency
  High_Token_Usage:
    Threshold: ">70% context or >5K tokens"
    Actions:
      - Auto-suggest UltraCompressed mode
      - Cache repeated content
      - Use cross-references vs repetition
      - Summarize large outputs
  Error_Patterns:
    Repeated_Failures:
      Threshold: "3+ failures on same operation"
      Actions:
        - Switch to alternative tool
        - Adjust approach/strategy
        - Request user guidance
        - Document known issue

Success_Pattern_Learning:
  Track_Effective_Combinations:
    - Which persona + command + flag combinations work best
    - Which MCP combinations are most efficient
    - Which file patterns lead to success
    - User preference patterns over time
```
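The three thresholds above map naturally onto a rule check that runs after each operation. This is a minimal sketch of that mapping; the action strings are illustrative summaries, not a real SuperClaude API.

```python
def suggest_optimizations(duration_s: float, context_pct: float,
                          tokens: int, same_op_failures: int) -> list[str]:
    """Apply the real-time optimization thresholds and collect actions."""
    actions = []
    if duration_s > 30:  # Slow_Operations threshold: >30s execution time
        actions.append("switch to faster tools, reduce scope, or enable --uc")
    if context_pct > 70 or tokens > 5000:  # High_Token_Usage threshold
        actions.append("suggest UltraCompressed mode and cache repeated content")
    if same_op_failures >= 3:  # Repeated_Failures threshold: 3+ failures
        actions.append("switch to an alternative tool or request user guidance")
    return actions


# Only the slow-operation rule fires: 45s > 30s, everything else under budget.
suggestions = suggest_optimizations(45.0, 40.0, 1200, 0)
print(suggestions)
```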
## Monitoring Implementation
```yaml
Data_Collection:
  Lightweight_Tracking:
    - Minimal overhead (<1% performance impact)
    - Background collection (no user interruption)
    - Local storage only (privacy-preserving)
    - Configurable (can be disabled)
  Storage_Format:
    Location: ".claudedocs/metrics/performance-YYYY-MM-DD.jsonl"
    Format: "JSON Lines (one record per command)"
    Retention: "30 days rolling, aggregated monthly"
    Size_Limit: "10MB max per day"
  Data_Structure:
    timestamp: "ISO 8601 format"
    command: "Full command w/ flags"
    persona: "Active persona (if any)"
    duration_ms: "Execution time in milliseconds"
    tokens_input: "Input token count"
    tokens_output: "Output token count"
    tools_used: "Array of tool names"
    mcp_servers: "Array of MCP servers used"
    success: "Boolean completion status"
    error_count: "Number of errors encountered"
    user_corrections: "Number of user interruptions/corrections"
```
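A writer for this storage format is small: one JSON object per line, a dated filename, and an appended write that stops at the daily size cap. The sketch below assumes the layout described above; `record_command` is a hypothetical helper name.

```python
import json
import os
from datetime import datetime, timezone

METRICS_DIR = ".claudedocs/metrics"
MAX_BYTES = 10 * 1024 * 1024  # Size_Limit: 10MB max per day


def record_command(record: dict, base_dir: str = METRICS_DIR) -> bool:
    """Append one JSON Lines record to performance-YYYY-MM-DD.jsonl.

    Returns False (and writes nothing) once the daily file reaches the
    size cap, keeping the tracker lightweight and bounded.
    """
    now = datetime.now(timezone.utc)
    record.setdefault("timestamp", now.isoformat())  # ISO 8601 format
    path = os.path.join(base_dir, f"performance-{now:%Y-%m-%d}.jsonl")
    os.makedirs(base_dir, exist_ok=True)
    if os.path.exists(path) and os.path.getsize(path) >= MAX_BYTES:
        return False
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return True
```

Appending with `"a"` makes each command a single atomic-enough write; rotation falls out of the dated filename, so no background process is needed for the daily split.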
## Reporting & Analysis
```yaml
Performance_Reports:
  Daily_Summary:
    Location: ".claudedocs/metrics/daily-summary-YYYY-MM-DD.md"
    Content:
      - Command execution statistics
      - Token efficiency metrics
      - Error frequency analysis
      - Optimization recommendations
  Weekly_Trends:
    Location: ".claudedocs/metrics/weekly-trends-YYYY-WW.md"
    Content:
      - Performance trend analysis
      - Usage pattern identification
      - Efficiency improvements over time
      - Bottleneck identification

Optimization_Insights:
  Identify_Patterns:
    - Most efficient command combinations
    - Highest-impact optimizations
    - User workflow improvements
    - System resource utilization
  Alerts:
    Performance_Degradation: "If avg response time >2x baseline"
    High_Error_Rate: "If error rate >10% over 24h"
    Token_Inefficiency: "If efficiency ratio drops >50%"
```
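The daily summary reduces to a fold over the day's JSONL records. A minimal sketch, assuming the record fields defined in the Monitoring Implementation section; the `High_Error_Rate` alert (>10%) is evaluated inline.

```python
import json


def summarize(jsonl_lines: list[str]) -> dict:
    """Aggregate one day of JSONL records into daily-summary statistics."""
    records = [json.loads(line) for line in jsonl_lines if line.strip()]
    n = len(records)
    failures = sum(1 for r in records if not r.get("success", True))
    return {
        "commands": n,
        "avg_duration_ms": sum(r.get("duration_ms", 0) for r in records) / n if n else 0.0,
        "total_errors": sum(r.get("error_count", 0) for r in records),
        "error_rate": failures / n if n else 0.0,
        # High_Error_Rate alert: error rate >10% over the period
        "alert_high_error_rate": n > 0 and failures / n > 0.10,
    }


day = [
    json.dumps({"command": "/analyze", "duration_ms": 100, "success": True}),
    json.dumps({"command": "/build", "duration_ms": 300, "success": False,
                "error_count": 2}),
]
summary = summarize(day)
print(summary["error_rate"])  # 0.5 -> triggers the >10% alert
```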
## Integration Points
```yaml
Command_Wrapper:
  Pre_Execution:
    - Record start timestamp
    - Capture input context size
    - Note active persona & flags
  During_Execution:
    - Track tool usage
    - Monitor MCP server calls
    - Count errors & retries
  Post_Execution:
    - Record completion time
    - Calculate token usage
    - Assess success metrics
    - Store performance record

Auto_Optimization:
  Context_Size_Management:
    - Suggest /compact when context >70%
    - Auto-enable --uc for large responses
    - Cache expensive operations
  Tool_Selection:
    - Prefer faster tools for routine operations
    - Use parallel execution when possible
    - Skip redundant operations
  User_Experience:
    - Proactive performance feedback
    - Optimization suggestions
    - Alternative approach recommendations
```
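The pre/post hooks of the command wrapper fit a decorator: timestamp on entry, duration and success recorded on exit, the record stored either way. A sketch under the assumption that commands are plain callables; the `track_performance` name is illustrative.

```python
import functools
import time


def track_performance(store: list):
    """Wrap a command: pre-execution timestamp, post-execution record."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.monotonic()  # Pre_Execution: record start timestamp
            record = {"command": fn.__name__, "success": False,
                      "error_count": 0}
            try:
                result = fn(*args, **kwargs)
                record["success"] = True
                return result
            except Exception:
                record["error_count"] += 1  # During_Execution: count errors
                raise
            finally:
                # Post_Execution: completion time + store performance record
                record["duration_ms"] = int((time.monotonic() - start) * 1000)
                store.append(record)
        return inner
    return wrap


log = []

@track_performance(log)
def analyze_file(path):
    return f"analyzed {path}"

analyze_file("main.py")
```

The `finally` clause guarantees a record is stored even when the command raises, so error patterns show up in the metrics rather than vanishing with the exception.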
## Usage Examples
```yaml
Basic_Monitoring:
  Automatic: "Built into all SuperClaude commands"
  Manual_Report: "/user:analyze --performance"
  Custom_Query: "Show metrics for last 7 days"

Performance_Tuning:
  Identify_Bottlenecks: "Which commands are consistently slow?"
  Token_Optimization: "Which operations use the most tokens?"
  Success_Patterns: "What combinations work best?"

Continuous_Improvement:
  Weekly_Review: "Automated performance trend analysis"
  Optimization_Alerts: "Proactive performance degradation warnings"
  Usage_Insights: "Learn user patterns for better defaults"
```
---
*Performance Tracker v1.0 - Intelligent monitoring & optimization for SuperClaude operations*