SuperClaude User Guide 🚀
🎯 The Simple Truth
Behind the apparent complexity, SuperClaude is actually simple to use.
You don't need to learn all the commands, flags, and personas. Just start using it! 🎈
SuperClaude has an intelligent routing system that tries to figure out what you need:
- Type /analyze some-code/ → It picks the right analysis tools
- Ask about security → Security expert auto-activates
- Work on frontend → UI specialist takes over
- Debug something → Investigation mode kicks in
Learning emerges during use - you'll naturally discover what works without studying manuals first.
The detailed guides below? They're here when you want to understand what just happened or dive deeper. But honestly? Most of the time you can just wing it. 😊
TL;DR: Install it, try /analyze or /build on your code, watch the magic happen.
A comprehensive guide to understanding and using SuperClaude v3.0 effectively. But remember - you can skip straight to trying it out!
Table of Contents 📖
- Welcome & Overview
- Core Components
- The Three Operational Modes
- The Orchestrator System
- Rules & Principles
- Getting Started Workflows
- Integration & Coordination
- Practical Examples
- Tips & Best Practices
- Troubleshooting
- What's Next
🚀 Just Start Here
Want to skip the reading and jump right in? Here's your 2-minute getting started:
# Try these commands in Claude Code:
/sc:help # See what's available
/sc:analyze README.md # SuperClaude analyzes your project
/sc:implement user-auth # Create features and components (NEW in v3!)
/sc:build # Smart build with auto-optimization
/sc:improve messy-file.js # Clean up code automatically
What just happened? SuperClaude automatically:
- Picked the right tools for each task 🛠️
- Activated appropriate experts (security, performance, etc.) 🎭
- Applied intelligent flags and optimizations ⚡
- Provided evidence-based suggestions 📊
See how easy that was? No studying required - SuperClaude figures out the complexity so you don't have to.
Want to understand how it works? Keep reading. Want to just keep experimenting? Go for it! 🎯
Welcome & Overview 👋
What is SuperClaude Really? 🤔
SuperClaude makes Claude Code smarter for development work. Instead of generic responses, you get specialized help from different experts (security, performance, frontend, etc.) who know their stuff.
The honest truth: We just released v3.0 and it's fresh out of beta. It works pretty well for what it does, but you should expect some rough edges as we continue improving things. We built this because we wanted Claude Code to be more helpful for real software development workflows.
The neat part? You don't need to manage any of this complexity. Just use normal commands like /analyze or /build and SuperClaude usually figures out which experts to involve and what tools to use. 🪄
What SuperClaude Adds ✨
🛠️ 16 Specialized Commands
- Development tools: /implement, /build, /design
- Analysis tools: /analyze, /troubleshoot, /explain
- Quality tools: /improve, /cleanup, /test
- Plus utilities for documentation, git, deployment, and more
- You just use them - SuperClaude handles the complexity automatically
- NEW: /implement command for feature creation (restores v2 functionality)
🎭 11 Smart Personas (that know when to jump in)
- AI specialists that adapt behavior for different domains
- Auto-activate based on your requests (security expert for security tasks, etc.)
- Manual control available, but usually not needed
- Think of it as having a whole dev team that knows when to help
🔧 MCP Server Integration (smart external tools)
- Context7: Official library documentation lookup
- Sequential: Complex multi-step analysis
- Magic: Modern UI component generation
- Playwright: Browser automation and testing
- Auto-connects when needed - you don't manage this stuff
📋 Enhanced Task Management (happens behind the scenes)
- Progress tracking with TodoRead/TodoWrite
- Multi-session project management with /task
- Complex orchestration with /spawn
- Iterative improvement with /loop
- Mostly automatic - SuperClaude tracks what you're doing
⚡ Token Optimization (smart efficiency)
- Smart compression when context gets full
- Symbol system for efficient communication
- Performance optimization for large operations
- Usually activates when needed for large projects
Current Status (v3.0) 📊
✅ What's Working Well:
- Installation system (completely rewritten, much more reliable)
- Core framework with 16 commands and 11 personas
- MCP server integration (mostly working)
- Basic task management and workflow automation
- Documentation and user guides
⚠️ What's Still Rough:
- This is an initial release - bugs are expected
- Some MCP integrations could be smoother
- Performance isn't optimized yet for all operations
- Some advanced features are experimental
❌ What We Removed:
- Hooks system (got too complex, coming back in v4)
We're pretty happy with v3 as a foundation, but there's definitely room for improvement.
How It Works 🔄
The simple version: You type something like /analyze auth.js and SuperClaude figures out the rest.
The slightly more detailed version:
- Smart routing - Analyzes what you're asking for
- Auto-expert selection - Picks the right specialist (security, performance, etc.)
- Tool coordination - Connects to external systems when helpful
- Quality assurance - Makes sure suggestions are solid
You don't see any of this complexity - it just feels like Claude got way smarter about development stuff.
The nice thing is that most of this usually happens automatically. You make a request, SuperClaude tries to figure out a good approach, and executes with appropriate tools and expertise. Usually no configuration or setup needed - just hopefully better results. ✨
Quick Feature Overview 🎯
| Component | What It Does | Learn More (optional!) |
|---|---|---|
| Commands | 16 specialized tools that auto-activate | Commands Guide |
| Flags | Modifiers that mostly activate automatically | Flags Guide |
| Personas | 11 AI specialists that know when to help | Personas Guide |
| MCP Servers | External integrations that connect when useful | This guide |
| Modes | 3 operational modes for different workflows | This guide |
| Orchestrator | The smart routing that makes it all work | This guide |
Remember: You can use SuperClaude effectively without reading any of these guides. They're here when you get curious about how it works! 🎪
Core Components 🧩
SuperClaude is built from several interconnected systems that work together. Here's how each component fits into the bigger picture.
Commands: Your Toolkit 🛠️
Commands are specialized tools that handle specific types of development work. Instead of generic "help me with this," you get purpose-built tools for different scenarios.
16 Commands Organized by Purpose:
Development 🔨
- /implement - Feature and component implementation
- /build - Project building, compilation, bundling
- /design - System architecture and component design
Analysis 🔍
- /analyze - Comprehensive code and system analysis
- /troubleshoot - Problem investigation and debugging
- /explain - Educational explanations and learning
Quality ✨
- /improve - Code enhancement and optimization
- /cleanup - Technical debt reduction
- /test - Testing and coverage analysis
Utilities 🔧
- /document - Documentation creation
- /git - Enhanced git workflows
- /load - Project context loading
- /estimate - Project estimation
- /task - Long-term project management
- /spawn - Complex operation orchestration
- /index - Command navigation and help
Each command has its own flags, auto-activates appropriate personas, and integrates with relevant MCP servers. For detailed examples and usage patterns, see the Commands Guide.
Flags: Behavior Modifiers 🏁
Flags change how SuperClaude processes your requests. They're like command-line options that modify behavior, add capabilities, or change output style.
Key Flag Categories:
Planning & Analysis 🧠
- --think / --think-hard / --ultrathink - Control thinking depth
- --plan - Show execution plan before running
Efficiency & Control ⚡
- --uc - Ultra-compressed output for large operations
- --safe-mode - Conservative execution with validation
- --validate - Pre-operation risk assessment
MCP Server Control 🔧
- --c7 - Enable Context7 for documentation
- --seq - Enable Sequential for complex analysis
- --magic - Enable Magic for UI components
- --play - Enable Playwright for testing
Advanced Orchestration 🎭
- --delegate - Enable sub-agent delegation for parallel processing
- --wave-mode - Multi-stage execution with compound intelligence
- --loop - Iterative improvement mode
Focus & Scope 🎯
- --focus security - Focus on specific domains
- --scope project - Set analysis scope
- --persona-[name] - Activate specific personas
Flags often auto-activate based on context. For example, security-related requests usually get --persona-security and --focus security. See the Flags Guide for comprehensive details and patterns.
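To see how these categories combine in practice, here's a hypothetical manual invocation (the directory name is made up, and exact auto-activation can vary by command, so treat this as a sketch):
# Deep, security-focused analysis with a pre-operation risk check
/sc:analyze payment-module/ --focus security --think --validate
# Conservative improvement pass, previewed before applying, with compressed output
/sc:improve payment-module/ --safe-mode --preview --uc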
Personas: AI Specialists 🎭
Personas are like having a team of specialists available on demand. Each brings different expertise, priorities, and approaches to problems.
11 Personas Organized by Domain:
Technical Specialists 🔧
- 🏗️ architect - Systems design, long-term architecture
- 🎨 frontend - UI/UX, accessibility, frontend performance
- ⚙️ backend - APIs, databases, reliability
- 🛡️ security - Threat modeling, vulnerabilities
- ⚡ performance - Optimization, bottleneck elimination
Process & Quality ✨
- 🔍 analyzer - Root cause analysis, investigation
- 🧪 qa - Testing, quality assurance
- 🔄 refactorer - Code quality, technical debt
- 🚀 devops - Infrastructure, deployment
Knowledge & Communication 📚
- 👨🏫 mentor - Education, knowledge transfer
- ✍️ scribe - Documentation, technical writing
Personas usually auto-activate based on request patterns but you can override with --persona-[name] flags. Each has different priorities (e.g., security persona prioritizes security over speed). See the Personas Guide for detailed descriptions and examples.
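For example (the path here is illustrative), you can force a different specialist onto the same code to get a second opinion:
/sc:analyze checkout-flow/ # → likely auto-activates 🎨 frontend or ⚙️ backend
/sc:analyze checkout-flow/ --persona-security # → same code, reviewed through a threat-modeling lens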
MCP Servers: External Capabilities 🔧
MCP (Model Context Protocol) servers provide specialized capabilities beyond Claude's native abilities.
4 Integrated Servers:
Context7 📚
- Purpose: Official library documentation and best practices
- When it activates: Framework questions, external library usage
- What it provides: Up-to-date docs, code examples, patterns
- Example: /build react-app --c7 gets React best practices
Sequential 🧠
- Purpose: Complex multi-step analysis and systematic thinking
- When it activates: Debugging, system design, --think flags
- What it provides: Structured problem-solving, hypothesis testing
- Example: /troubleshoot "auth randomly fails" --seq
Magic ✨
- Purpose: Modern UI component generation and design systems
- When it activates: UI component requests, frontend work
- What it provides: React/Vue/Angular components, design patterns
- Example: /build dashboard --magic creates modern UI components
Playwright 🎭
- Purpose: Browser automation, E2E testing, performance monitoring
- When it activates: Testing workflows, performance analysis
- What it provides: Cross-browser testing, visual validation, metrics
- Example: /test e2e --play runs comprehensive browser tests
MCP servers usually coordinate automatically but you can control them with --all-mcp, --no-mcp, or specific flags like --c7.
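A quick sketch of manual MCP control (directory names are invented):
/sc:analyze utils/ --no-mcp # fastest: native tools only
/sc:analyze platform/ --all-mcp # heaviest: every server available
/sc:build ui/ --magic --c7 # targeted: UI generation + framework docs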
How Components Work Together 🤝
The neat part is when components coordinate:
Example: Security Analysis Request
/sc:analyze auth-system/ --focus security
What usually happens:
- Command: /analyze handles code analysis
- Flag: --focus security directs attention
- Persona: 🛡️ security specialist auto-activates
- MCP: Sequential provides systematic analysis
- Orchestrator: Routes everything for optimal execution
Result: Security-focused analysis with threat modeling perspective, systematic methodology, and comprehensive coverage.
This coordination usually happens for most requests - SuperClaude tries to figure out a good combination of tools and expertise for your specific need.
The Three Operational Modes 🎭
SuperClaude operates in three distinct modes that optimize different aspects of the development workflow. Understanding these modes helps you get the most out of the framework.
Task Management Mode 📋
What it is: Structured workflow execution with progress tracking and validation.
When it's used: Any multi-step operation that needs tracking and coordination.
How it works: SuperClaude breaks work into manageable tasks, tracks progress, and ensures quality through validation gates.
Four Layers of Task Management
Layer 1: Session Tasks (TodoRead/TodoWrite)
- Scope: Current Claude Code session
- Capacity: 3-20 tasks per session
- States: pending 📋, in_progress 🔄, completed ✅, blocked 🚧
- Usage: Real-time progress tracking for immediate work
# SuperClaude usually creates and manages session tasks
/sc:build large-project/
# → Creates: "Analyze project structure", "Run build process", "Validate output"
Layer 2: Project Tasks (/task command)
- Scope: Multi-session features (days to weeks)
- Structure: Hierarchical (Epic → Story → Task)
- Persistence: Cross-session state management
- Usage: Long-term feature development
/sc:task create "implement user dashboard" --priority high
/sc:task breakdown "payment integration"
/sc:task status # Check current project tasks
Layer 3: Complex Orchestration (/spawn command)
- Scope: Complex multi-domain operations
- Features: Parallel/sequential coordination, tool management
- Usage: Operations involving multiple tools/systems
/sc:spawn deploy-pipeline --parallel
/sc:spawn setup-dev-environment --monitor
Layer 4: Iterative Enhancement (/loop command)
- Scope: Progressive refinement workflows
- Features: Iteration cycles with validation
- Usage: Quality improvement and refinement
/sc:improve messy-code.js --loop --iterations 3
# → Iteratively improves code with validation between cycles
Task State Management
Core Principles:
- Evidence-Based Progress: Measurable outcomes, not just activity
- Single Focus Protocol: Only one task in_progress at a time
- Real-Time Updates: Immediate status changes as work progresses
- Quality Gates: Validation before marking tasks complete
Task Detection:
- Multi-step operations (3+ steps) → Creates task breakdown
- Keywords: build, implement, create, fix, optimize → Activates task tracking
- Scope indicators: system, feature, comprehensive → Adds progress monitoring
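As a rough illustration (the exact breakdown SuperClaude produces will differ), a request matching these triggers might yield a session task list like this:
/sc:implement user-profile-page
# → 📋 Design profile component structure
# → 🔄 Implement profile component (single task in_progress)
# → 📋 Wire up API data fetching
# → 📋 Validate with tests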
Introspection Mode 🧠
What it is: Meta-cognitive analysis that lets SuperClaude examine its own reasoning and decision-making processes.
When it's used: Complex problem-solving, framework troubleshooting, learning moments, or when you explicitly request it with --introspect.
How it works: SuperClaude steps outside normal operation to analyze its thinking patterns, decision logic, and action sequences.
Core Capabilities
Reasoning Analysis 🧠
- Examines logical flow and decision rationale
- Evaluates chain of thought coherence
- Identifies assumptions and potential biases
- Validates reasoning against evidence
Action Sequence Review 🔄
- Analyzes tool selection effectiveness
- Reviews workflow patterns and efficiency
- Considers alternative approaches
- Identifies optimization opportunities
Framework Compliance Check 🔍
- Validates actions against SuperClaude rules and principles
- Identifies deviations from standard patterns
- Provides corrective guidance when needed
- Ensures quality standards are met
Learning Recognition 💡
- Extracts insights from outcomes
- Identifies successful patterns for reuse
- Recognizes knowledge gaps for improvement
- Suggests future optimization strategies
Analysis Markers
When introspection mode is active, you'll see these markers:
- 🧠 Reasoning Analysis - Examining logical flow and decisions
- 🔄 Action Sequence Review - Analyzing workflow effectiveness
- 🎯 Self-Assessment - Meta-cognitive evaluation
- 📊 Pattern Recognition - Identifying behavioral patterns
- 🔍 Framework Compliance - Checking rule adherence
- 💡 Retrospective Insight - Learning from outcomes
When Introspection Activates
Usually activates for:
- Complex multi-step problems requiring meta-cognitive oversight
- Error recovery when outcomes don't match expectations
- Framework discussions or SuperClaude troubleshooting
- Pattern recognition needs for recurring behaviors
Manual activation:
/sc:analyze complex-system/ --introspect
/sc:troubleshoot "framework confusion" --introspection
Token Efficiency Mode ⚡
What it is: Intelligent optimization system that maximizes information density while preserving quality.
When it's used: Large operations, when context approaches limits, or when you need faster execution.
How it works: Adaptive compression using symbols, abbreviations, and structural optimization based on context and persona awareness.
Compression Strategies
5-Level Adaptive Compression:
- Minimal (0-40% usage): Full detail with persona-optimized clarity
- Efficient (40-70% usage): Balanced compression with domain awareness
- Compressed (70-85% usage): Aggressive optimization with quality gates
- Critical (85-95% usage): Maximum compression preserving essential context
- Emergency (95%+ usage): Ultra-compression with information validation
Symbol System
Core Logic & Flow:
- → leads to, implies (auth.js:45 → security risk)
- ⇒ transforms to (input ⇒ validated_output)
- & and, combine (security & performance)
- » sequence, then (build » test » deploy)
- ∴ therefore (tests fail ∴ code broken)
Status & Progress:
- ✅ completed, passed
- ❌ failed, error
- ⚠️ warning
- 🔄 in progress
- 🎯 target, goal
Technical Domains:
- ⚡ Performance
- 🔍 Analysis
- 🛡️ Security
- 📦 Deployment
- 🎨 Design
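Put together, a compressed finding might read something like this (purely illustrative, not actual output):
# 🛡️ auth.js:45 → token leak ∴ rotate secret & add validation » retest ✅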
Activation Strategy
Usually activates when:
- Context usage >75% → Enables compression
- Large-scale operations → Prevents token overflow
- Complex orchestration → Optimizes communication
Manual activation:
/sc:analyze huge-codebase/ --uc # Ultra-compressed mode
/sc:improve legacy-system/ --uc --delegate auto # Efficient large operations
Performance Goals (still improving!):
- Target: ~30-50% token reduction
- Quality: Tries to preserve ~95% of information
- Speed: Usually <100ms compression decisions
- Integration: Works with framework components
Mode Integration
The three modes often work together:
/sc:improve large-legacy-system/ --wave-mode auto --uc --introspect
What happens:
- Task Management: Creates structured improvement plan with progress tracking
- Token Efficiency: Compresses output for large-scale operation
- Introspection: Analyzes improvement strategy and validates approach
The Orchestrator System 🎯
The orchestrator is SuperClaude's intelligent routing system that tries to analyze your requests and coordinate a good combination of tools, personas, and integrations. It's what hopefully makes SuperClaude feel smart and responsive rather than just a collection of separate tools.
How the Orchestrator Works 🔄
Think of it as a smart dispatcher that:
- Analyzes your request to understand intent and complexity
- Routes to the best combination of commands, flags, personas, and MCP servers
- Coordinates execution for optimal results
- Validates through quality gates to ensure good outcomes
- Optimizes performance and resource usage
Detection Engine 🧠
The detection engine analyzes every request through multiple lenses:
Pattern Recognition
Complexity Detection:
- Simple: Single file operations, basic tasks (<3 steps) → Direct execution
- Moderate: Multi-file operations, analysis tasks (3-10 steps) → Standard routing
- Complex: System-wide changes, architectural decisions (>10 steps) → Advanced orchestration
Domain Identification:
- Frontend: Keywords like "UI", "component", "responsive" → 🎨 frontend persona + Magic MCP
- Backend: Keywords like "API", "database", "service" → ⚙️ backend persona + Context7 MCP
- Security: Keywords like "vulnerability", "auth", "compliance" → 🛡️ security persona + Sequential MCP
- Performance: Keywords like "slow", "optimize", "bottleneck" → ⚡ performance persona + Playwright MCP
Operation Type Classification:
- Analysis: "analyze", "review", "understand" → Sequential MCP + analyzer persona
- Creation: "create", "build", "implement" → Magic MCP (if UI) or Context7 (patterns)
- Modification: "improve", "refactor", "optimize" → Appropriate specialist persona
- Debugging: "troubleshoot", "fix", "debug" → Sequential MCP + analyzer persona
Auto-Activation Logic
High-Confidence Triggers (90%+ activation):
/sc:analyze auth-system/ --focus security
# → 🛡️ security persona + Sequential MCP + --validate flag
Context-Based Activation:
/sc:build react-components/
# → 🎨 frontend persona + Magic MCP + --c7 flag (React docs)
Performance-Based Activation:
# When context usage >75%
/sc:analyze large-project/
# → Auto-adds --uc flag for compression
Routing Intelligence 🚦
The routing system uses dynamic decision trees to map detected patterns to optimal tool combinations.
Master Routing Table
| Request Pattern | Usually Auto-Activates | How Often | Why |
|---|---|---|---|
| "analyze architecture" | 🏗️ architect + --ultrathink + Sequential | Most times | Complex system analysis |
| "create UI component" | 🎨 frontend + Magic + --uc | Pretty often | Frontend domain with generation |
| "security audit" | 🛡️ security + --ultrathink + Sequential | Most times | Security expertise needed |
| "debug complex issue" | 🔍 analyzer + --think + Sequential | Often | Investigation methodology |
| "improve performance" | ⚡ performance + --think-hard + Playwright | Pretty often | Performance expertise + testing |
Intelligent Coordination
Multi-Server Operations:
/sc:design user-dashboard --type api
Orchestrator usually coordinates:
- 🏗️ architect persona (system design)
- 🎨 frontend persona (UI design)
- Context7 MCP (framework patterns)
- Sequential MCP (design methodology)
Fallback Strategies:
- Context7 unavailable → WebSearch for documentation → Manual implementation
- Sequential timeout → Native Claude analysis → Note limitations
- Magic failure → Basic component generation → Suggest manual enhancement
Quality Gates & Validation Framework ✅
SuperClaude tries to implement an 8-step validation cycle for operations:
8-Step Quality Process
- Syntax Validation - Language parsers + Context7 standards
- Type Checking - Sequential analysis + compatibility verification
- Linting - Context7 rules + quality analysis
- Security Review - Sequential analysis + OWASP compliance
- Testing - Playwright E2E + coverage analysis (aiming for good coverage)
- Performance - Sequential analysis + benchmarking
- Documentation - Context7 patterns + completeness validation
- Integration - Playwright testing + deployment validation
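You normally don't invoke these steps yourself; flags like --validate and --safe-mode simply lean on them harder. A sketch (the service name is invented):
/sc:improve billing-service/ --validate --safe-mode
# → risk assessment up front, then the validation cycle gates each change before it's marked complete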
Validation Automation
Continuous Integration:
- CI/CD pipeline integration
- Progressive validation with early failure detection
- Evidence generation with comprehensive metrics
Intelligent Monitoring:
- Success rate tracking with ML prediction
- Adaptive validation based on historical patterns
- Automatic optimization of validation strategies
Performance Optimization ⚡
The orchestrator tries to optimize for good performance through several strategies:
Resource Management
Token Allocation:
- Detection Engine: 1-2K tokens for pattern analysis
- Decision Trees: 500-1K tokens for routing logic
- MCP Coordination: Variable based on activated servers
- Reserve: 10% buffer for unexpected complexity
Operation Batching:
- Parallel execution when no dependencies exist
- Context sharing across related operations
- Cache strategies for successful routing patterns
- Smart queuing to prevent resource exhaustion
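For instance (the repository name is hypothetical), you can pair these strategies explicitly when you know an operation is large:
/sc:analyze services-monorepo/ --delegate folders --concurrency 3 --uc
# → parallel sub-agents per folder, capped at 3 at a time, with compressed output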
Advanced Orchestration
Sub-Agent Delegation:
# Auto-activates when >7 directories or >50 files detected
/sc:analyze monorepo/
# → --delegate auto flag + parallel processing
Wave Orchestration:
# Auto-activates when complexity >0.7 + files >20 + operation types >2
/sc:improve legacy-system/
# → --wave-mode auto + multi-stage execution
Real-World Orchestration Examples 💡
Example 1: Security Analysis Request
/sc:analyze user-auth/ --focus security
Orchestrator Analysis:
- Domain: Security (high confidence)
- Complexity: Moderate (authentication system)
- Operation: Analysis + scanning
Usually coordinates:
- 🛡️ security persona (threat modeling perspective)
- Sequential MCP (systematic analysis)
- --validate flag (pre-operation safety check)
- --think flag (complex security patterns)
Quality Gates: All 8 steps with emphasis on security validation
Example 2: Frontend Performance Optimization
/sc:improve slow-dashboard/ --focus performance
Orchestrator Analysis:
- Domain: Frontend + Performance (dual expertise needed)
- Complexity: High (performance optimization)
- Operation: Improvement + validation
Usually coordinates:
- ⚡ performance persona (primary)
- 🎨 frontend persona (secondary, if UI detected)
- Playwright MCP (performance testing)
- --think-hard flag (complex optimization)
Quality Gates: Performance-focused validation with benchmarking
Example 3: Large Codebase Analysis
/sc:analyze enterprise-monorepo/
Orchestrator Analysis:
- Scope: Large (>50 files detected)
- Complexity: High (enterprise-scale)
- Resources: High token usage predicted
Usually coordinates:
- --delegate auto flag (parallel processing)
- --uc flag (token optimization)
- 🏗️ architect persona (system-level analysis)
- Sequential MCP (structured analysis)
Quality Gates: Distributed validation across sub-agents
Orchestrator Configuration ⚙️
Performance Settings:
orchestrator_config:
enable_caching: true
parallel_operations: true
max_parallel: 3
token_reserve: 10%
emergency_threshold: 90%
Intelligence Settings:
learning_enabled: true
confidence_threshold: 0.7
pattern_detection: aggressive
wave_score_threshold: 0.7
The orchestrator tries to learn from successful patterns and improve future routing decisions based on outcomes.
Rules & Principles 📏
SuperClaude operates according to core rules and principles that ensure consistent, reliable, and helpful behavior. Understanding these helps you predict how SuperClaude will approach problems and why it makes certain decisions.
Core Operational Rules ⚖️
These are the core rules that SuperClaude tries to follow:
File Operation Security 🔐
- Always Read before Write/Edit - SuperClaude never modifies files without understanding current content
- Use absolute paths only - Prevents path traversal attacks and ensures reliable file operations
- Never auto-commit - SuperClaude won't commit changes to git unless explicitly requested
- Prefer batch operations - Multiple related changes are grouped for consistency
Why this matters: These rules prevent data loss, security vulnerabilities, and unintended modifications to your codebase.
Task Management Rules 📋
- Evidence-based progress - Tasks are only marked complete when there's measurable evidence
- Single focus protocol - Only one task is "in_progress" at a time for clarity
- Quality gates - All operations include validation steps before completion
- Context retention - Tries to preserve context well across operations
Why this matters: Ensures reliable progress tracking and prevents work from being lost or forgotten.
Framework Compliance Rules 🎯
- Check dependencies first - Always verify package.json/requirements.txt before using libraries
- Follow existing patterns - Respect project conventions, import styles, and architecture
- Systematic codebase changes - Complete discovery before making project-wide modifications
- Validate completion - Verify changes work and don't break existing functionality
Why this matters: Maintains code quality and consistency with your existing project structure.
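In practice (the feature name is invented), the intent is that a request like this starts with discovery rather than code:
/sc:implement add-charting --plan
# → checks package.json for an existing chart library, follows your import style, and shows the plan before writing anything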
Development Principles 🛠️
These principles guide how SuperClaude approaches development problems:
Evidence-Based Decision Making 📊
Primary Directive: "Evidence > assumptions | Code > documentation | Efficiency > verbosity"
- Measure before optimizing - Performance improvements based on actual metrics
- Test hypotheses systematically - Claims supported by verifiable data
- Document decision rationale - Clear reasoning for architectural choices
- Learn from outcomes - Continuous improvement based on results
In practice:
/sc:improve slow-api/ --focus performance
# → Measures current performance, identifies bottlenecks, optimizes based on data
SOLID Design Principles 🏗️
- Single Responsibility - Each component has one reason to change
- Open/Closed - Open for extension, closed for modification
- Liskov Substitution - Derived classes substitutable for base classes
- Interface Segregation - No forced dependencies on unused interfaces
- Dependency Inversion - Depend on abstractions, not concretions
Why SuperClaude follows these: Leads to maintainable, scalable, and flexible code that's easier to understand and modify.
Quality Philosophy ✨
- Prevention over detection - Build quality in rather than test it in
- Simplicity over complexity - Choose the simplest solution that works
- Maintainability over cleverness - Code should be easy to understand and modify
- Security by default - Implement secure patterns from the start
Senior Developer Mindset 🎓
SuperClaude approaches problems like an experienced developer:
- Systems thinking - Consider impacts across the entire system
- Long-term perspective - Decisions evaluated against multiple time horizons
- Risk calibration - Distinguish between acceptable and unacceptable risks
- Stakeholder awareness - Balance technical perfection with practical constraints
How Rules & Principles Affect You 💡
Predictable Behavior
Because SuperClaude follows consistent rules, you can predict how it will approach problems:
/sc:improve legacy-authentication/
You can expect:
- Reading existing code before suggesting changes
- Following your project's existing patterns
- Security-first approach (security persona likely activates)
- Evidence-based recommendations with reasoning
- Quality gates before marking improvements complete
Quality Assurance
The principles ensure high-quality outcomes:
- Tries to avoid magic changes - SuperClaude usually explains its reasoning
- Aims for no breaking changes - Tries to preserve existing functionality
- Security-conscious - Security principles are important
- Debt-aware - Tries to maintain or reduce complexity
Transparency
You should usually understand what SuperClaude is doing and why:
/sc:analyze --introspect complex-system/
Shows you:
- Decision-making process
- Rule application
- Principle adherence
- Alternative approaches considered
Examples of Rules & Principles in Action 🎯
Example 1: Systematic Refactoring
Request: "Clean up this messy codebase"
Rules Applied:
- Complete discovery before changes (searches entire codebase)
- Read all files before modifications
- Follow existing project patterns
- Validate completion with evidence
Principles Applied:
- Simplicity over complexity (reduces unnecessary complexity)
- Evidence-based decisions (measures complexity before/after)
- Quality assurance (comprehensive testing)
- Long-term maintainability (considers future modifications)
Example 2: Security Implementation
Request: "Add authentication to our API"
Rules Applied:
- Security persona usually auto-activates
- Never compromise on security fundamentals
- Check existing patterns first
- Quality gates include security validation
Principles Applied:
- Security by default (implements secure patterns)
- Defense in depth (multiple security layers)
- Evidence-based approach (follows established security patterns)
- Systems thinking (considers impact on entire application)
Example 3: Performance Optimization
Request: "This page loads slowly"
Rules Applied:
- Measure before optimizing
- Evidence-based progress tracking
- Validate improvements with metrics
- Maintain existing functionality
Principles Applied:
- Measurement-driven optimization
- User experience focus
- Systematic methodology
- Prevention over detection (identifies root causes)
Rule Enforcement & Quality Gates 🚨
SuperClaude enforces rules through its quality gate system:
Enforcement Approach
- Pre-operation validation - Checks risks before starting
- Real-time monitoring - Tracks rule compliance during execution
- Post-operation verification - Confirms rules were followed
- Evidence collection - Documents compliance for transparency
When Rules Are Challenged
Sometimes rules might seem to conflict with immediate needs:
Example: "Just make this work quickly, don't worry about quality"
SuperClaude's response:
- Acknowledges the urgency
- Explains why quality rules matter for long-term success
- Offers compromise solutions that maintain essential rules
- Documents risks if quality standards are relaxed
Principles That Guide Persona Behavior 🎭
Each persona follows the core principles but emphasizes different aspects:
- 🛡️ Security persona: Security > compliance > reliability > performance
- ⚡ Performance persona: Measure first > optimize critical path > user experience
- 🏗️ Architect persona: Long-term maintainability > scalability > performance
- 🎨 Frontend persona: User needs > accessibility > performance > technical elegance
Why this matters: You can predict how different personas will prioritize trade-offs based on their core principles.
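A concrete way to see those priority differences (the file name is made up) is to run the same analysis under two personas and compare what gets flagged first:
/sc:analyze checkout-api/ --persona-security # tends to lead with auth and data-exposure findings
/sc:analyze checkout-api/ --persona-performance # tends to lead with latency and query findings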
Living Principles 🌱
These rules and principles aren't set in stone. They evolve based on:
- Community feedback - Real-world usage patterns inform improvements
- Outcome analysis - Successful patterns are reinforced
- Technology changes - Principles adapt to new development practices
- User needs - Rules balance flexibility with consistency
The goal is to maintain helpful, predictable behavior while adapting to the changing landscape of software development.
Getting Started Workflows 🛣️
Now that you understand SuperClaude's components, let's look at practical workflows for different development scenarios. These patterns will help you get productive quickly.
First-Time Setup 🎬
If you haven't installed SuperClaude yet, see the Installation Guide. Once installed, here's how to get started:
Quick Verification
# Test basic functionality
/sc:help # Should show SuperClaude commands
/sc:analyze README.md # Try analyzing a simple file
/sc:build --help # Check command options
Understanding Auto-Activation
Try these commands to see how SuperClaude automatically chooses the right tools:
# Frontend work → frontend persona + Magic MCP
/sc:build src/components/
# Security analysis → security persona + Sequential MCP
/sc:analyze auth/ --focus security
# Performance investigation → performance persona + Playwright MCP
/sc:analyze --focus performance slow-endpoints/
Watch for auto-activated flags and personas in the output. This shows SuperClaude's intelligent routing in action.
Development Workflow Patterns 🔄
New Project Onboarding
When starting work on an unfamiliar project:
# 1. Load project context
/sc:load --deep --summary
# → Gives overview of structure, dependencies, patterns
# 2. Analyze architecture
/sc:analyze --focus architecture
# → 🏗️ architect persona provides system understanding
# 3. Check code quality
/sc:analyze --focus quality
# → 🧪 qa persona identifies potential issues
# 4. Review documentation
/sc:document README --type guide
# → ✍️ scribe persona improves project documentation
Feature Development Cycle
For developing new features:
# 1. Design phase
/sc:design user-dashboard --type component
# → 🏗️ architect + 🎨 frontend personas coordinate
# 2. Implementation
/sc:build dashboard-components/
# → 🎨 frontend persona + Magic MCP for UI generation
# 3. Testing
/sc:test --type e2e dashboard/
# → 🧪 qa persona + Playwright MCP for testing
# 4. Documentation
/sc:document dashboard/ --type api
# → ✍️ scribe persona creates comprehensive docs
Bug Investigation & Resolution
For systematic debugging:
# 1. Problem investigation
/sc:troubleshoot "login randomly fails" --think
# → 🔍 analyzer persona + Sequential MCP for methodology
# 2. Root cause analysis
/sc:analyze auth-flow/ --focus debugging
# → Systematic investigation with evidence collection
# 3. Fix implementation
/sc:improve auth/ --safe-mode --validate
# → Safe improvements with validation
# 4. Verification testing
/sc:test auth-flow/ --coverage
# → Comprehensive testing to ensure fix works
Code Quality Improvement
For improving existing code:
# 1. Quality assessment
/sc:analyze legacy-code/ --focus quality
# → 🔄 refactorer persona identifies improvement opportunities
# 2. Safe improvements
/sc:improve --preview legacy-code/
# → See what would change before applying
# 3. Apply improvements
/sc:improve --safe legacy-code/
# → Apply only low-risk improvements
# 4. Validate changes
/sc:test --coverage improved-code/
# → Ensure improvements don't break functionality
Common Workflow Combinations 🤝
Security-First Development
# Development with security focus
/sc:analyze --persona-security --focus security
/sc:build --validate --safe-mode
/sc:test --type security
/sc:git --persona-security --validate
Performance-Optimized Workflow
# Performance-focused development
/sc:analyze --focus performance --persona-performance
/sc:improve --type performance --benchmark
/sc:test --focus performance --play
Team Collaboration Workflow
# Collaborative development patterns
/sc:analyze team-code/ --persona-qa --focus quality
/sc:document features/ --persona-scribe --type guide
/sc:git --smart-commit --branch-strategy
/sc:task status # Check team progress
Advanced Workflow Patterns 🚀
Large Codebase Management
For working with enterprise-scale projects:
# Efficient large-scale analysis
/sc:analyze monorepo/ --delegate auto --uc --focus architecture
# → Parallel processing + compression + architectural focus
# Systematic improvements
/sc:improve legacy-system/ --wave-mode auto --safe-mode
# → Multi-stage improvements with safety checks
# Comprehensive quality review
/sc:analyze enterprise-app/ --delegate folders --focus quality
# → Distributed quality analysis
Legacy System Modernization
For updating old codebases:
# Assessment phase
/sc:analyze legacy/ --persona-architect --ultrathink
# → Deep architectural analysis
# Planning phase
/sc:design modernization-strategy --type architecture
# → Comprehensive modernization plan
# Implementation phase
/sc:improve legacy/ --wave-mode systematic --safe-mode --loop
# → Iterative, safe improvements with validation
# Migration support
/sc:migrate --type framework legacy-to-modern/
# → Framework migration assistance
Multi-Domain Projects
For projects spanning multiple technical domains:
# Coordinate across domains
/sc:analyze fullstack-app/ --all-mcp --delegate auto
# → All MCP servers + parallel processing
# Domain-specific improvements
/sc:improve frontend/ --persona-frontend --magic
/sc:improve backend/ --persona-backend --c7
/sc:improve infrastructure/ --persona-devops --seq
# Integration validation
/sc:test --type integration --play
# → Comprehensive integration testing
Workflow Optimization Tips 💡
Start Small, Scale Up
# Begin with focused scope
/sc:analyze single-component.js --focus quality
# Expand as needed
/sc:analyze entire-module/ --focus quality --delegate files
# Scale to full system
/sc:analyze whole-project/ --delegate auto --uc
Use Progressive Enhancement
# Basic command
/sc:build project/
# Add intelligence
/sc:build project/ --think --c7
# Full orchestration
/sc:build project/ --wave-mode auto --all-mcp --delegate auto
Combine Complementary Personas
# Security + Performance analysis
/sc:analyze api/ --persona-security
/sc:analyze api/ --persona-performance
# Architecture + Quality review
/sc:review system/ --persona-architect --focus architecture
/sc:review system/ --persona-qa --focus quality
Troubleshooting Workflows 🚨
When Commands Don't Work as Expected
# Debug with introspection
/sc:troubleshoot "command issues" --introspect
# → Meta-cognitive analysis of what went wrong
# Try different approaches
/sc:analyze problem/ --persona-analyzer --seq
# → Systematic investigation methodology
# Check framework status
/sc:load framework-status/ --summary
# → Understand current SuperClaude state
When Performance is Slow
# Optimize for speed
/sc:analyze large-project/ --no-mcp --uc --scope module
# → Disable extra features, compress output, limit scope
# Use delegation for large tasks
/sc:improve huge-codebase/ --delegate auto --concurrency 5
# → Parallel processing with controlled concurrency
When Results Aren't Focused Enough
# Use specific focus flags
/sc:analyze code/ --focus security --scope file
# Activate appropriate personas manually
/sc:analyze frontend-code/ --persona-security # Security view of frontend
# Combine multiple approaches
/sc:analyze --focus performance --persona-performance --play
Building Your Own Workflows 🛠️
Identify Your Common Patterns
Track what combinations work well for your specific needs:
# Security-focused API development
alias secure-api="/build api/ --persona-security --validate --c7"
# Performance-optimized frontend work
alias perf-frontend="/build ui/ --persona-performance --magic --benchmark"
# Quality improvement workflow
alias quality-check="/scan --focus quality && /improve --safe-mode && /test --coverage"
Experiment with Flag Combinations
Try different combinations to find what works best:
# For learning: verbose explanations with docs
/sc:explain concept --persona-mentor --verbose --c7
# For safety: maximum validation and checking
/sc:improve critical-code/ --safe-mode --validate --preview
# For efficiency: compressed output with parallel processing
/sc:analyze big-project/ --uc --delegate auto --concurrency 3
Remember: SuperClaude learns from successful patterns, so the more you use effective combinations, the better it gets at auto-activating the right approach for your needs.
Integration & Coordination 🤝
Understanding how SuperClaude's components work together is key to using the framework effectively. This section shows you how commands, flags, personas, and MCP servers coordinate automatically - and how to control that coordination when needed.
Auto-Coordination Examples 🤖
SuperClaude automatically coordinates components based on context. Here's how it works in practice:
Frontend Development Request
/sc:build react-dashboard/
Automatic coordination:
- Command: /build handles compilation and bundling
- Persona: 🎨 frontend auto-activates (React detected)
- MCP: Magic provides modern UI components
- MCP: Context7 provides React best practices
- Flags: --c7 auto-activates for framework docs
Result: React-optimized build with modern components, accessibility checks, and performance optimization.
Security Analysis Request
/sc:scan user-authentication/ --focus security
Automatic coordination:
- Command: /scan handles security scanning
- Persona: 🛡️ security auto-activates (security focus)
- MCP: Sequential provides systematic analysis
- Flags: --validate auto-activates (high-risk operation)
- Flags: --think auto-activates (complex security patterns)
Result: Comprehensive security analysis with threat modeling, vulnerability detection, and compliance checking.
Performance Investigation
/sc:troubleshoot "API responses are slow"
Automatic coordination:
- Command: /troubleshoot handles investigation
- Persona: ⚡ performance auto-activates (performance keywords)
- Persona: 🔍 analyzer provides investigation methodology
- MCP: Sequential structures the debugging process
- MCP: Playwright provides performance testing
- Flags: --think auto-activates (complex debugging)
Result: Systematic performance investigation with metrics, bottleneck identification, and optimization recommendations.
Manual Coordination Control 🎛️
Sometimes you want to override auto-coordination for specific needs:
Override Persona Selection
# View frontend code from security perspective
/sc:analyze react-components/ --persona-security
# → Security analysis of UI components (XSS, data exposure, etc.)
# Apply architectural thinking to small utility
/sc:improve utility-function.js --persona-architect
# → Design patterns and extensibility for simple code
Control MCP Server Usage
# Disable all MCP servers for speed
/sc:analyze large-codebase/ --no-mcp
# → 40-60% faster execution, native tools only
# Enable all MCP servers for comprehensive analysis
/sc:analyze complex-system/ --all-mcp
# → Maximum capabilities, higher token usage
# Use specific MCP combinations
/sc:build ui-components/ --magic --c7 --no-seq
# → UI generation + docs, skip complex analysis
Combine Multiple Perspectives
# Sequential analysis with different personas
/sc:analyze payment-system/ --persona-security # Security view
/sc:analyze payment-system/ --persona-performance # Performance view
/sc:analyze payment-system/ --persona-architect # Architecture view
# Or coordinate automatically
/sc:review payment-system/ --focus quality
# → Auto-coordinates security + performance + architecture insights
Flag Coordination Patterns 🏁
Flags work together to create powerful combinations:
Safety-First Patterns
# Maximum safety for critical code
/sc:improve production-auth/ --safe-mode --validate --preview
# → Conservative changes + risk assessment + preview before applying
# Safe exploration of large changes
/sc:improve legacy-system/ --wave-mode auto --safe-mode --validate
# → Multi-stage improvements + safety checks + validation gates
Performance-Optimized Patterns
# Fast execution for large operations
/sc:analyze huge-project/ --uc --no-mcp --scope module
# → Compressed output + native tools + limited scope
# Efficient parallel processing
/sc:improve monorepo/ --delegate auto --uc --concurrency 5
# → Parallel processing + compression + controlled resource usage
Learning-Focused Patterns
# Educational explanations with full context
/sc:explain complex-concept --persona-mentor --verbose --c7
# → Educational approach + detailed explanations + official docs
# Deep understanding with transparency
/sc:analyze mysterious-code/ --persona-analyzer --think-hard --introspect
# → Investigation methodology + deep analysis + thinking transparency
MCP Server Coordination 🔧
MCP servers often work together automatically:
Documentation + Analysis
/sc:improve old-react-code/
MCP coordination:
- Context7: Gets current React best practices
- Sequential: Analyzes code against modern patterns
- Magic: Suggests modern component patterns
- Result: Modernization with current standards
Testing + Performance
/sc:test dashboard/ --focus performance
MCP coordination:
- Sequential: Plans comprehensive test strategy
- Playwright: Executes performance testing
- Context7: Provides testing best practices
- Result: Performance testing with industry standards
Complex Problem Solving
/sc:troubleshoot "complex multi-service issue" --ultrathink
MCP coordination:
- Sequential: Structures systematic investigation
- Context7: Provides service architecture patterns
- Playwright: Tests service interactions
- Result: Comprehensive multi-domain debugging
Persona Collaboration Patterns 🎭
Personas automatically collaborate on complex requests:
Architecture + Security
/sc:design payment-api --type secure
Persona collaboration:
- 🏗️ architect: System design and scalability
- 🛡️ security: Threat modeling and secure patterns
- ⚙️ backend: API implementation patterns
- Result: Secure, scalable API design
Frontend + Performance
/sc:build dashboard --focus performance
Persona collaboration:
- 🎨 frontend: UI/UX and accessibility
- ⚡ performance: Optimization and metrics
- 🏗️ architect: Component architecture
- Result: Fast, accessible, well-structured dashboard
Quality + Refactoring
/sc:improve legacy-code/ --focus quality
Persona collaboration:
- 🔄 refactorer: Code quality and patterns
- 🧪 qa: Testing and validation
- 🏗️ architect: Structural improvements
- Result: Clean, tested, well-architected code
Advanced Coordination Strategies 🚀
Wave Orchestration
For complex multi-stage operations:
/sc:improve enterprise-system/ --wave-mode systematic
Wave coordination:
- Analysis Wave: 🔍 analyzer + Sequential assess current state
- Planning Wave: 🏗️ architect + Context7 design improvements
- Implementation Wave: Appropriate specialists + tools implement changes
- Validation Wave: 🧪 qa + Playwright verify improvements
- Optimization Wave: ⚡ performance + metrics optimize results
Sub-Agent Delegation
For parallel processing:
/sc:analyze large-monorepo/ --delegate folders
Delegation coordination:
- Main Agent: Orchestrates and synthesizes results
- Sub-Agents: Specialized analysis of individual folders
- Coordination: Results combined with domain expertise
- MCP Integration: Shared across all agents
Adaptive Intelligence
SuperClaude adapts coordination based on context:
Development Phase Detection:
- Planning phase → 🏗️ architect + ✍️ scribe emphasis
- Implementation phase → Domain specialists + Magic/Context7
- Testing phase → 🧪 qa + Playwright emphasis
- Deployment phase → 🚀 devops + validation emphasis
Complexity-Based Scaling:
- Simple tasks → Direct execution
- Moderate complexity → Persona + MCP coordination
- High complexity → Wave orchestration + delegation
Coordination Troubleshooting 🔧
When Auto-Coordination Goes Wrong
# Too many tools activated (slow/expensive)
/sc:analyze simple-file.js --no-mcp --answer-only
# → Minimal tooling for simple tasks
# Wrong persona activated
/sc:analyze backend-api/ --persona-security
# → Override with explicit persona choice
# Not enough analysis depth
/sc:troubleshoot complex-issue --ultrathink --all-mcp
# → Force maximum capabilities
Optimizing Coordination
# Start simple, add complexity as needed
/sc:analyze code.js # Basic analysis
/sc:analyze code.js --think # Add thinking
/sc:analyze code.js --think --c7 # Add documentation
/sc:analyze code.js --think --c7 --seq # Add systematic analysis
Understanding Coordination Decisions
# See why certain tools were chosen
/sc:analyze complex-system/ --introspect
# → Shows decision-making process and tool selection reasoning
Best Practices for Integration 💡
Let Auto-Coordination Work First
- Trust SuperClaude's automatic tool selection
- Override only when you need specific perspectives
- Start with simple commands and add flags as needed
Understand Flag Interactions
- Some flags override others (--no-mcp overrides --c7, --seq)
- Safety flags take precedence over optimization flags
- Persona flags can be overridden by more specific persona requests
Use Appropriate Scope
- File-level: Single persona + minimal MCP
- Module-level: Domain personas + relevant MCP
- System-level: Multiple personas + full MCP coordination
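As a sketch of scope control (paths invented):
/sc:analyze login.js --scope file # single persona, minimal MCP
/sc:analyze auth-module/ --scope module # domain persona + relevant MCP
/sc:analyze whole-app/ --scope project --delegate auto # multiple personas + full MCP coordination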
Monitor Resource Usage
- Large operations → Use --uc and --delegate
- Simple tasks → Use --no-mcp and --answer-only
- Critical operations → Use --safe-mode and --validate
The key is understanding that SuperClaude's intelligence comes from the coordination between its components. The automatic coordination works well most of the time, but knowing how to control it gives you the flexibility to handle any situation.
Practical Examples 💡
Real-world scenarios showing SuperClaude in action. These examples demonstrate how different components work together to solve common development problems.
Scenario 1: New Team Member Onboarding 👋
Situation: You're starting work on an unfamiliar React/Node.js e-commerce project.
Step 1: Project Understanding
/sc:load --deep --summary
What happens:
- 🔍 analyzer persona activates (investigation needed)
- Sequential MCP structures the analysis
- Context7 MCP identifies framework patterns
- Creates comprehensive project overview
Output: Project structure, tech stack, dependencies, and architecture summary.
Step 2: Code Quality Assessment
/sc:analyze --focus quality
Auto-coordination:
- 🧪 qa persona activates (quality focus)
- Sequential MCP provides systematic analysis
- Scans for code quality, security, and performance issues
- Generates actionable improvement recommendations
Output: Quality report with specific issues and improvement suggestions.
Step 3: Architecture Understanding
/sc:analyze --focus architecture --persona-architect
What happens:
- 🏗️ architect persona provides system design perspective
- Context7 MCP brings in React/Node.js architectural patterns
- Sequential MCP structures the architectural analysis
- Identifies design patterns, data flow, and component relationships
Output: Architectural overview with design patterns and system relationships.
Step 4: Getting Started Guide
/sc:document onboarding --type guide --persona-scribe
What happens:
- ✍️ scribe persona creates professional documentation
- Context7 MCP provides documentation standards
- Synthesizes previous analysis into newcomer-friendly guide
- Includes setup instructions and key concepts
Output: Comprehensive onboarding guide for future team members.
Time saved: What normally takes 2-3 days of exploration is condensed into a comprehensive understanding in about 30 minutes.
Scenario 2: Security Vulnerability Investigation 🛡️
Situation: Security scanner flagged potential issues in user authentication system.
Step 1: Security-Focused Analysis
/sc:scan auth-system/ --persona-security --focus security
Auto-coordination:
- 🛡️ security persona activates (security expertise)
- Sequential MCP provides systematic threat modeling
- Context7 MCP brings in OWASP and security best practices
- --validate flag auto-activates (high-risk operation)
Output: Detailed security analysis with threat assessment and vulnerability prioritization.
Step 2: Root Cause Investigation
/sc:troubleshoot "JWT token exposure in logs" --think --seq
What happens:
- 🔍 analyzer persona provides investigation methodology
- --think flag enables deep analysis
- Sequential MCP structures the debugging process
- Traces data flow and identifies exposure points
Output: Root cause analysis with evidence trail and impact assessment.
Step 3: Secure Implementation
/sc:improve auth-system/ --focus security --safe-mode --validate
Auto-coordination:
- 🛡️ security persona maintains security focus
- --safe-mode ensures conservative changes
- --validate confirms changes before applying
- Context7 MCP provides secure coding patterns
Output: Security improvements with minimal risk and comprehensive validation.
Step 4: Security Testing
/sc:test auth-system/ --type security --play
What happens:
- 🧪 qa persona provides testing expertise
- Playwright MCP executes security testing scenarios
- Tests authentication flows, session management, and access controls
- Validates security improvements are working
Output: Comprehensive security test results with evidence of improvements.
Risk reduction: Systematic approach reduces chance of missing security issues and ensures comprehensive coverage.
Scenario 3: Performance Optimization Sprint ⚡
Situation: E-commerce dashboard is loading slowly, affecting user experience.
Step 1: Performance Analysis
/sc:analyze dashboard/ --focus performance --persona-performance
Auto-coordination:
- ⚡ performance persona activates (performance expertise)
- Playwright MCP provides performance metrics and testing
- Context7 MCP brings in React performance best practices
- --think-hard auto-activates (complex performance analysis)
Output: Performance bottleneck identification with metrics and prioritized optimization opportunities.
Step 2: Frontend Performance Deep Dive
/sc:analyze frontend/ --persona-frontend --focus performance --play
What happens:
- 🎨 frontend persona provides UI/UX perspective
- ⚡ performance persona coordinates (dual expertise)
- Playwright MCP measures Core Web Vitals, bundle sizes, render times
- Magic MCP suggests modern optimization patterns
Output: Frontend-specific performance analysis with accessibility and user experience considerations.
Step 3: Backend API Performance
/sc:analyze api/ --persona-backend --focus performance
Auto-coordination:
- ⚙️ backend persona provides server-side expertise
- Sequential MCP analyzes database queries and API patterns
- Context7 MCP provides Node.js/Express optimization patterns
- Identifies slow queries, inefficient endpoints, and caching opportunities
Output: Backend performance analysis with database and API optimization recommendations.
Step 4: Systematic Optimization
/sc:improve dashboard/ --focus performance --loop --iterations 3
What happens:
- ⚡ performance persona leads optimization
- --loop enables iterative improvement
- Each iteration: optimize → measure → validate → improve
- Progressive enhancement with metrics validation
Output: Iterative performance improvements with measurable results after each cycle.
Step 5: Performance Testing Validation
/sc:test dashboard/ --focus performance --play --benchmark
What happens:
- Playwright MCP executes comprehensive performance testing
- Tests on multiple devices, network conditions, and browsers
- Measures Core Web Vitals, load times, and user interaction metrics
- Validates improvements meet performance budgets
Output: Performance test results proving optimization effectiveness.
Performance gain: Systematic approach typically achieves 40-70% performance improvements with measurable validation.
Scenario 4: Legacy Code Modernization 🔄
Situation: 5-year-old React application needs modernization to current standards.
Step 1: Legacy Assessment
/sc:analyze legacy-app/ --persona-architect --ultrathink
Auto-coordination:
- 🏗️ architect persona provides structural analysis
- --ultrathink enables maximum analysis depth
- Context7 MCP compares against current React patterns
- Sequential MCP provides systematic modernization assessment
Output: Comprehensive legacy analysis with modernization roadmap and risk assessment.
Step 2: Modernization Planning
/sc:design modernization-strategy --type architecture --persona-architect
What happens:
- 🏗️ architect persona designs migration strategy
- Context7 MCP provides current React ecosystem patterns
- Sequential MCP structures the modernization plan
- Identifies migration phases, dependencies, and risks
Output: Detailed modernization plan with phased approach and risk mitigation.
Step 3: Safe Incremental Improvements
/sc:improve legacy-components/ --safe-mode --wave-mode systematic --loop
Auto-coordination:
- 🔄 refactorer persona leads code improvements
- --safe-mode ensures minimal risk
- --wave-mode systematic enables multi-stage improvements
- --loop allows iterative refinement
- Multiple personas coordinate: architect, frontend, qa
Output: Systematic modernization with safety checks and progressive enhancement.
Step 4: Testing Modernization
/sc:test modernized-app/ --type integration --coverage --play
What happens:
- 🧪 qa persona ensures quality throughout modernization
- Playwright MCP provides comprehensive testing
- Tests legacy compatibility and new functionality
- Validates modernization doesn't break existing features
Output: Comprehensive test results proving modernization success.
Modernization success: Systematic approach reduces modernization risk by 80% and ensures compatibility.
Scenario 5: Multi-Team API Design 🌐
Situation: Designing a new microservice API that multiple teams will consume.
Step 1: Requirements Analysis
/sc:design user-service-api --type api --persona-backend
Auto-coordination:
- ⚙️ backend persona provides API design expertise
- 🏗️ architect persona coordinates for system integration
- Context7 MCP provides API design best practices
- Sequential MCP structures requirement analysis
Output: Comprehensive API design with endpoints, data models, and integration patterns.
Step 2: Security Review
/sc:review api-design/ --persona-security --focus security
What happens:
- 🛡️ security persona evaluates API security
- Reviews authentication, authorization, and data protection
- Context7 MCP provides OWASP API security guidelines
- Identifies security requirements and threat vectors
Output: Security assessment with hardening recommendations and compliance requirements.
Step 3: Performance Considerations
/sc:analyze api-design/ --persona-performance --focus performance
Auto-coordination:
- ⚡ performance persona evaluates scalability
- Analyzes endpoint performance, caching strategies, rate limiting
- Context7 MCP provides high-performance API patterns
- Projects performance under load
Output: Performance analysis with scalability recommendations and optimization strategies.
Step 4: Documentation for Multiple Teams
/sc:document api/ --type api --persona-scribe --detailed
What happens:
- ✍️ scribe persona creates professional API documentation
- Context7 MCP provides API documentation standards
- Creates examples, integration guides, and troubleshooting
- Tailored for multiple consuming teams
Output: Comprehensive API documentation with examples, integration guides, and best practices.
Step 5: Implementation Validation
/sc:build api-implementation/ --validate --test-coverage
Auto-coordination:
- ⚙️ backend persona implements API patterns
- 🧪 qa persona ensures quality and testing
- Sequential MCP validates implementation against design
- Comprehensive testing and validation
Output: Production-ready API implementation with comprehensive testing and validation.
Collaboration efficiency: Multi-persona coordination reduces design iteration cycles by 60% and improves cross-team alignment.
Common Pattern Recognition 🔍
These examples show recurring patterns in how SuperClaude components coordinate:
Investigation → Analysis → Implementation → Validation
Most complex workflows follow this pattern with appropriate personas and tools for each phase.
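A rough sketch of that pattern in practice (the target paths and issue description are just placeholders - adapt them to your project):
# Investigation: understand the problem first
/sc:troubleshoot "intermittent login failures" --think --seq
# Analysis: inspect the affected code
/sc:analyze auth-module/ --focus quality
# Implementation: apply changes conservatively
/sc:improve auth-module/ --safe-mode --validate
# Validation: prove the fix with tests
/sc:test auth-module/ --coverage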
Multi-Persona Coordination
Complex problems benefit from multiple perspectives (security + performance, architecture + frontend, etc.).
Progressive Enhancement
Starting simple and adding complexity as needed (--think → --think-hard → --ultrathink).
Safety-First Approach
Critical operations automatically include validation and safety checks (--safe-mode, --validate).
Context-Aware Tool Selection
SuperClaude automatically chooses appropriate MCP servers and flags based on detected context.
These examples demonstrate that SuperClaude's value comes from intelligent coordination of its components rather than any single capability. The framework adapts to your needs while maintaining consistent quality and safety standards.
Tips & Best Practices 🎯
Based on real-world usage patterns and successful workflows, here are practical tips for getting the most out of SuperClaude.
Starting Out Successfully 🚀
Begin with Simple Commands
# Start here - basic functionality
/sc:help
/sc:analyze README.md
/sc:build --help
# Not here - complex orchestration
/sc:improve entire-codebase/ --wave-mode force --all-mcp --delegate auto
Why: Understanding basic behavior before adding complexity prevents confusion and helps you learn the framework gradually.
Trust Auto-Activation First
# Let SuperClaude choose tools
/sc:analyze auth-system/
# → Watch what auto-activates (likely security persona + validation)
# Then experiment with manual control
/sc:analyze auth-system/ --persona-performance
# → See different perspective on same code
Why: Auto-activation usually gets it right and shows you optimal tool combinations for different scenarios.
Use Preview and Safe Modes
# See what would happen first
/sc:improve messy-code.js --preview
# Apply changes safely
/sc:improve messy-code.js --safe-mode
# For critical code, use both
/sc:improve production-auth/ --preview --safe-mode --validate
Why: Prevents unintended changes and helps you understand what SuperClaude will do before it does it.
Flag Usage Patterns 🏁
Start Simple, Add Complexity
# Basic command
/sc:analyze complex-system/
# Add thinking if needed
/sc:analyze complex-system/ --think
# Add documentation if external libraries involved
/sc:analyze complex-system/ --think --c7
# Full analysis for critical systems
/sc:analyze complex-system/ --think-hard --c7 --seq --validate
Why: Incremental complexity helps you understand what each flag adds and avoids over-engineering simple problems.
Common Flag Combinations That Work
# Safe improvement workflow
/sc:improve --preview → /sc:improve --safe-mode → /sc:test --coverage
# Deep investigation workflow
/sc:troubleshoot issue --think --seq → /sc:analyze affected-code/ --focus quality
# Learning and documentation workflow
/sc:explain concept --persona-mentor --verbose --c7
# Performance optimization workflow
/sc:analyze --focus performance --persona-performance --play
Why: These combinations are proven patterns that work well together and don't conflict.
Avoid Flag Conflicts
# ❌ Conflicting flags
/sc:analyze code/ --no-mcp --c7 # --no-mcp overrides --c7
# ❌ Counterproductive combinations
/sc:analyze small-file.js --ultrathink --all-mcp # Overkill for simple tasks
# ✅ Sensible combinations
/sc:analyze large-system/ --think --delegate auto # Appropriate for complexity
/sc:analyze simple-utility.js --answer-only # Appropriate for simplicity
Why: Understanding flag precedence and interactions prevents unexpected behavior and wasted resources.
Persona Optimization 🎭
Let Domain Auto-Activation Work
# These will automatically get the right persona
/sc:build react-components/ # → frontend persona
/sc:scan auth/ --focus security # → security persona
/sc:troubleshoot slow-api/ # → performance + analyzer personas
Why: Auto-activation is based on proven patterns and usually selects the most appropriate expertise.
Manual Override for Different Perspectives
# Get different viewpoints on same code
/sc:analyze payment-flow/ --persona-security # Security perspective
/sc:analyze payment-flow/ --persona-performance # Performance perspective
/sc:analyze payment-flow/ --persona-architect # Architecture perspective
Why: Different personas provide unique insights that can reveal issues or opportunities others might miss.
Use Appropriate Personas for Project Phases
# Planning phase
/sc:design new-feature --persona-architect
# Implementation phase
/sc:build feature/ --persona-frontend # or backend, etc.
# Testing phase
/sc:test feature/ --persona-qa
# Documentation phase
/sc:document feature/ --persona-scribe
Why: Each project phase benefits from different types of expertise and perspectives.
MCP Server Strategy 🔧
Understand When Each Server Helps
- Context7: When working with frameworks, libraries, or need official documentation
- Sequential: For complex debugging, systematic analysis, or architectural decisions
- Magic: For UI component generation, design systems, or frontend development
- Playwright: For testing, performance measurement, or browser automation
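For example, these flag choices (with illustrative targets) map to the servers above:
# Context7: framework/library documentation lookup
/sc:explain react-hooks --c7
# Sequential: systematic root-cause analysis
/sc:troubleshoot "memory leak after deploy" --seq
# Magic: UI component generation
/sc:build login-form/ --magic
# Playwright: browser testing and performance measurement
/sc:test checkout-flow/ --play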
Optimize for Performance vs. Capabilities
# Fast execution for simple tasks
/sc:analyze simple-script.js --no-mcp
# Comprehensive analysis for complex problems
/sc:analyze complex-system/ --all-mcp --think-hard
# Balanced approach for most work
/sc:analyze typical-component/ --c7 # Just documentation lookup
Why: Matching MCP usage to task complexity optimizes both speed and quality of results.
Workflow Optimization 📈
Use Progressive Enhancement
# Level 1: Basic analysis
/sc:analyze component.js
# Level 2: Add thinking if complex
/sc:analyze component.js --think
# Level 3: Add documentation for frameworks
/sc:analyze component.js --think --c7
# Level 4: Full analysis for critical code
/sc:analyze component.js --think-hard --c7 --seq --validate
Why: Start with what you need and add complexity only when necessary. Prevents over-engineering and saves time.
Batch Related Operations
# ✅ Efficient: Related operations together
/sc:analyze auth-system/ --focus security
/sc:improve auth-system/ --focus security --safe-mode
/sc:test auth-system/ --type security
# ❌ Inefficient: Scattered operations
/sc:analyze auth-system/
/sc:review different-system/
/sc:improve auth-system/ # Context lost between operations
Why: Batching related work maintains context and allows SuperClaude to build on previous analysis.
Use Appropriate Scope
# File-level for specific issues
/sc:improve single-component.js --focus performance
# Module-level for related functionality
/sc:analyze user-auth/ --scope module
# Project-level for architectural concerns
/sc:analyze --scope project --focus architecture
# System-level only when necessary
/sc:analyze --scope system --delegate auto --uc
Why: Matching scope to problem prevents both under-analysis and resource waste.
Performance and Efficiency 🏃‍♂️
Manage Context and Token Usage
# For large operations, use compression
/sc:analyze huge-codebase/ --uc --delegate auto
# For repeated analysis, cache results
/sc:load project-context/ # Cache project understanding
/sc:analyze specific-issue/ # Build on cached context
# For simple questions, minimize overhead
/sc:explain quick-concept --answer-only --no-mcp
Why: Token efficiency keeps operations fast and prevents context overflow in large projects.
Use Delegation for Large Projects
# Automatically delegate when appropriate
/sc:analyze monorepo/ --delegate auto
# Manual delegation for specific needs
/sc:analyze large-project/ --delegate folders --concurrency 3
# Skip delegation for small projects
/sc:analyze small-app/ --no-delegate
Why: Delegation provides significant speedup (40-70%) for large-scale operations while maintaining quality.
Optimize Command Sequences
# ✅ Efficient sequence
/sc:load project/ # Understand context once
/sc:analyze --focus quality # Build on understanding
/sc:improve --safe-mode # Apply improvements
/sc:test --coverage # Validate changes
# ❌ Inefficient sequence
/sc:analyze file1.js
/sc:analyze file2.js # Duplicate setup
/sc:analyze file3.js # Lost optimization opportunities
Why: Sequential commands can build on each other's context and analysis for better results.
Quality and Safety 🛡️
Always Validate Important Changes
# For production code
/sc:improve production-auth/ --safe-mode --validate --preview
# For experimental features
/sc:improve experimental-feature/ --validate
# For learning/exploration
/sc:improve test-code/ --preview # See what it would do
Why: Validation prevents breaking changes and helps you understand the impact of modifications.
Use Quality Gates Effectively
# Let quality gates run automatically
/sc:build production-app/ # 8-step validation process runs
# Add extra validation for critical systems
/sc:build payment-system/ --validate --safe-mode
# Skip validation only for experimental work
/sc:build prototype/ --no-validate # Use sparingly
Why: Quality gates catch issues early when they're cheaper and easier to fix.
Maintain Evidence Trail
# Commands that provide evidence
/sc:analyze --focus performance # → Performance metrics
/sc:test --coverage # → Coverage reports
/sc:scan --focus security # → Security assessment
# Use introspection for complex decisions
/sc:analyze complex-system/ --introspect # → Decision reasoning
Why: Evidence-based development leads to better decisions and easier debugging when issues arise.
Learning and Growth 📚
Use Mentor Persona for Learning
# Learn new concepts
/sc:explain GraphQL --persona-mentor --verbose
# Understand complex code
/sc:analyze complex-algorithm.js --persona-mentor
# Get step-by-step guidance
/sc:build new-feature/ --persona-mentor --plan
Why: Mentor persona optimizes for understanding and knowledge transfer rather than just task completion.
Experiment with Different Approaches
# Try different personas on same problem
/sc:analyze api-design/ --persona-architect
/sc:analyze api-design/ --persona-security
/sc:analyze api-design/ --persona-performance
# Compare tool combinations
/sc:build app/ --magic --c7
/sc:build app/ --no-mcp --uc # Faster but simpler
Why: Understanding different approaches helps you choose the best tools for different situations.
Build Your Own Patterns
# Identify what works for your workflow
# Security-focused API development
/sc:design api --persona-security --validate
/sc:build api --persona-backend --c7
/sc:test api --type security --play
# Create your own efficient combinations
/sc:analyze code/ --think --c7 --safe-mode # Your personal "thorough analysis"
Why: Developing your own proven patterns increases productivity and ensures consistent quality.
Common Pitfalls to Avoid ⚠️
Don't Over-Engineer Simple Tasks
# ❌ Overkill for simple tasks
/sc:analyze simple-utility.js --ultrathink --all-mcp --wave-mode force
# ✅ Appropriate for simple tasks
/sc:analyze simple-utility.js --focus quality
Don't Ignore Auto-Activation Wisdom
# ❌ Fighting the system
/sc:build react-app/ --persona-backend --no-magic # Wrong tools for the job
# ✅ Working with the system
/sc:build react-app/ # Let frontend persona and Magic activate automatically
Don't Skip Safety for Speed
# ❌ Risky for important code
/sc:improve production-auth/ --force --no-validate
# ✅ Balanced approach
/sc:improve production-auth/ --safe-mode --validate # Safer but still efficient
Don't Use Flags You Don't Understand
# ❌ Cargo cult flag usage
/sc:command --random-flags-that-look-important
# ✅ Understand what each flag does
/sc:command --think # Because I need deeper analysis
/sc:command --c7 # Because I'm working with external libraries
Measuring Success 📊
Track what works well for your specific needs:
- Speed: How quickly do different flag combinations complete?
- Quality: Which approaches produce better results for your type of work?
- Learning: Which combinations help you understand problems better?
- Safety: Which patterns prevent issues in your environment?
Remember: SuperClaude learns from successful patterns, so using effective combinations consistently helps the framework get better at auto-activation for your specific workflow.
Troubleshooting & Common Issues 🚨
When SuperClaude doesn't work as expected, here's how to diagnose and fix common problems.
Command Issues 🛠️
Commands Not Working as Expected
Problem: Command produces unexpected results or seems to ignore your request.
Diagnosis:
# Check what auto-activated
/sc:analyze code.js --introspect
# → Shows decision-making process
# Try with explicit control
/sc:analyze code.js --persona-analyzer --think --seq
# → Override auto-activation
Solutions:
# Be more specific about what you want
/sc:improve code.js --focus performance --safe-mode
# Use preview to understand what will happen
/sc:improve code.js --preview
# Start simple and add complexity
/sc:analyze code.js # Basic
/sc:analyze code.js --think # Add depth
/sc:analyze code.js --think --c7 # Add documentation
Common Causes:
- Auto-activation chose different tools than you expected
- Request was too vague for SuperClaude to understand intent
- Complexity mismatch (simple request with complex flags or vice versa)
Commands Running Too Slowly
Problem: Operations take much longer than expected.
Diagnosis:
# Check what's activated
/sc:analyze large-project/ --introspect
# → See what tools and servers are being used
# Monitor resource usage
/sc:analyze large-project/ --verbose
# → Shows detailed execution steps
Solutions:
# Optimize for speed
/sc:analyze large-project/ --uc --no-mcp --scope module
# Use delegation for large operations
/sc:analyze huge-codebase/ --delegate auto --concurrency 3
# Reduce scope
/sc:analyze specific-component.js # Instead of entire project
# Disable expensive features
/sc:analyze code/ --no-mcp --answer-only
Performance Optimization Priority:
- Reduce scope (--scope file vs --scope project)
- Use compression (--uc)
- Disable MCP servers (--no-mcp)
- Use delegation (--delegate auto)
- Use answer-only mode (--answer-only)
Commands Producing Too Much Output
Problem: Information overload, hard to find relevant information.
Solutions:
# Use compression
/sc:analyze large-system/ --uc
# Be more specific about focus
/sc:analyze system/ --focus security # Instead of general analysis
# Use answer-only for simple questions
/sc:explain concept --answer-only
# Limit scope
/sc:analyze --scope file specific-issue.js
Flag Issues 🏁
Flag Conflicts and Unexpected Behavior
Problem: Flags don't seem to work or produce unexpected results.
Common Conflicts:
# ❌ These conflict
/sc:command --no-mcp --c7 # --no-mcp overrides --c7
/sc:command --answer-only --plan # --answer-only skips planning
/sc:command --uc --verbose # --uc overrides --verbose
# ✅ These work together
/sc:command --think --c7 --seq # Complementary capabilities
/sc:command --safe-mode --validate --preview # Layered safety
Flag Precedence Order:
- Safety flags (--safe-mode) > optimization flags
- Explicit flags > auto-activation
- --no-mcp overrides all individual MCP flags
- Last specified persona wins
- Scope: system > project > module > file
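A couple of illustrative resolutions (file and folder names are placeholders):
# --no-mcp wins, so --c7 is ignored here
/sc:analyze utils.js --no-mcp --c7
# Last persona wins: this analysis runs with the performance persona
/sc:analyze api/ --persona-backend --persona-performance
# An explicit persona overrides whatever would have auto-activated
/sc:analyze react-ui/ --persona-security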
Diagnosis:
# Check what flags are actually active
/sc:command args --introspect
# → Shows final flag configuration after precedence resolution
Auto-Activation Issues
Problem: Wrong flags or personas auto-activate.
Solutions:
# Override auto-activation explicitly
/sc:analyze frontend-code/ --persona-security # Force security view
/sc:build project/ --no-mcp # Force native tools only
# Use more specific language
/sc:analyze "security vulnerabilities in auth system" # Clear intent
# vs
/sc:analyze auth system # Ambiguous
# Check what keywords trigger auto-activation
/sc:help analyze # Shows auto-activation patterns
Auto-Activation Debugging:
# See why certain flags activated
/sc:troubleshoot "why did --think-hard activate?" --introspect
Persona Issues 🎭
Wrong Persona Activated
Problem: SuperClaude uses the wrong specialist for your needs.
Diagnosis:
# Check what triggered persona activation
/sc:analyze code/ --introspect
# → Shows persona selection reasoning
Solutions:
# Override with explicit persona
/sc:analyze backend-api/ --persona-security # Security view of backend code
/sc:analyze ui-component/ --persona-performance # Performance view of frontend
# Use more specific language
/sc:analyze "security issues in payment processing" # Triggers security persona
/sc:analyze "slow database queries" # Triggers performance persona
# Try different personas for different perspectives
/sc:analyze payment-system/ --persona-security # Security view
/sc:analyze payment-system/ --persona-architect # Architecture view
Persona Doesn't Seem Active
Problem: Expected persona behavior but getting generic responses.
Check Persona Activation:
# Verify persona is active
/sc:analyze auth/ --persona-security --introspect
# → Should show security-focused reasoning
# Check if domain keywords are clear
/sc:scan authentication --focus security # Should auto-activate security persona
Solutions:
# Be explicit about persona and focus
/sc:analyze code/ --persona-security --focus security
# Use appropriate commands for personas
/sc:scan --persona-security # Security scanning
/sc:test --persona-qa # Quality-focused testing
/sc:document --persona-scribe # Professional documentation
MCP Server Issues 🔧
MCP Servers Not Activating
Problem: Expected MCP capabilities but they don't seem to work.
Diagnosis:
# Check MCP server status
/sc:troubleshoot "MCP servers not working" --introspect
# Verify MCP installation
/sc:load --summary # Should show available MCP servers
# Test specific servers
/sc:analyze react-app/ --c7 # Should use Context7
/sc:troubleshoot issue --seq # Should use Sequential
/sc:build ui/ --magic # Should use Magic
/sc:test app/ --play # Should use Playwright
Common Solutions:
# Force MCP activation
/sc:analyze code/ --all-mcp
# Check if servers are disabled
/sc:analyze code/ --c7 # If this doesn't work, Context7 may be unavailable
# Use fallback approaches
/sc:analyze react-app/ --no-mcp # Use native tools if MCP unavailable
MCP Servers Too Slow
Problem: MCP server integration causes slow performance.
Solutions:
# Disable MCP for speed
/sc:analyze large-project/ --no-mcp
# Use selective MCP activation
/sc:analyze react-code/ --magic --no-seq # Only UI generation, skip analysis
# Optimize MCP usage
/sc:analyze code/ --uc --c7 # Compression + documentation only
Performance Issues ⚡
Operations Using Too Many Tokens
Problem: Hitting context limits or expensive operations.
Solutions:
# Enable compression automatically
/sc:analyze huge-project/ --uc
# Reduce scope
/sc:analyze --scope module specific-area/
/sc:analyze --scope file specific-file.js
# Use delegation
/sc:analyze large-codebase/ --delegate auto --uc
# Disable expensive features
/sc:analyze code/ --no-mcp --answer-only
Memory or Resource Issues
Problem: Operations failing or very slow due to resource constraints.
Solutions:
# Reduce concurrency
/sc:analyze large-project/ --delegate auto --concurrency 1
# Use safe mode
/sc:improve large-system/ --safe-mode # More conservative resource usage
# Break work into smaller chunks
/sc:analyze module1/
/sc:analyze module2/
/sc:analyze module3/
# Instead of /analyze entire-project/
Quality and Safety Issues 🛡️
Unsafe or Risky Suggestions
Problem: SuperClaude suggests changes that seem risky.
Always Use Safety Features:
# Preview before applying
/sc:improve important-code/ --preview
# Use safe mode for critical code
/sc:improve production-auth/ --safe-mode
# Add validation
/sc:improve system/ --validate --safe-mode
# Use iterative approach
/sc:improve complex-system/ --loop --safe-mode
Changes Breaking Functionality
Problem: Applied improvements cause issues.
Prevention:
# Always use preview first
/sc:improve code/ --preview
# Use safe mode
/sc:improve code/ --safe-mode
# Test after changes
/sc:improve code/ --safe-mode && /sc:test code/
Recovery:
- Use git to revert changes
- Apply improvements incrementally with --safe-mode
- Use --validate to check before applying changes
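A typical recovery sequence might look something like this, assuming the project is under git (the src/ paths are placeholders):
# Inspect what actually changed
git diff
# Roll back if functionality broke
git restore src/
# Re-apply in smaller, validated steps
/sc:improve src/auth/ --preview
/sc:improve src/auth/ --safe-mode --validate
/sc:test src/auth/ --coverage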
Framework and Integration Issues 🔗
SuperClaude Doesn't Understand Project Context
Problem: Recommendations don't fit your project's patterns or constraints.
Solutions:
# Load project context first
/sc:load --deep --summary
# Be explicit about project type
/sc:analyze react-typescript-app/ --c7 # Include tech stack in description
# Use appropriate personas
/sc:analyze node-api/ --persona-backend
/sc:analyze react-ui/ --persona-frontend
Inconsistent Results
Problem: Same command produces different results at different times.
Diagnosis:
# Check what's auto-activating differently
/sc:command args --introspect
# Use explicit flags for consistency
/sc:analyze code/ --persona-analyzer --think --c7 # Explicit configuration
Solutions:
# Be more explicit about requirements
/sc:improve code/ --focus performance --persona-performance --safe-mode
# Use consistent flag patterns
/sc:analyze --think --c7 # Your standard thorough analysis
/sc:improve --safe-mode # Your standard safe improvement
Getting Help 🆘
When You're Stuck
Self-Diagnosis Steps:
- Use --introspect to understand what SuperClaude is thinking
- Try simpler versions of your command
- Check auto-activation with explicit flags
- Use --help on commands to see options
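Put together, a self-diagnosis pass might look like this (the target path is a placeholder):
# 1. See SuperClaude's reasoning
/sc:analyze payment-flow/ --introspect
# 2. Retry a simpler version of the command
/sc:analyze payment-flow/
# 3. Pin down the behavior you expected with explicit flags
/sc:analyze payment-flow/ --persona-security --focus security
# 4. Review the available options
/sc:analyze --help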
Escalation Path:
# Get framework help
/sc:troubleshoot "SuperClaude framework issues" --introspect
# Check documentation
/sc:help # Command overview
/sc:analyze --help # Specific command help
# Test basic functionality
/sc:analyze README.md # Simple test
/sc:build --help # Check if commands work
Reporting Issues
When reporting problems, include:
- Exact command used: /analyze code/ --think --c7
- Expected behavior: "Should provide security analysis"
- Actual behavior: "Only provided basic code review"
- Context: "Working on Node.js authentication system"
- SuperClaude version: Check with /help
Useful Debug Information:
# Get diagnostic information
/sc:troubleshoot "describe your issue" --introspect --verbose
# → Provides detailed context for bug reports
Quick Reference for Common Problems 📋
| Problem | Quick Fix | Command |
|---|---|---|
| Too slow | Reduce scope + compression | --scope file --uc |
| Wrong persona | Override explicitly | --persona-security |
| Too much output | Use compression | --uc |
| Risky changes | Use safety features | --safe-mode --preview |
| MCP not working | Force activation or disable | --all-mcp or --no-mcp |
| Inconsistent results | Use explicit flags | --persona-x --think --c7 |
| Context issues | Load project context | /load --deep |
| Token limits | Enable compression + delegation | --uc --delegate auto |
Remember: When in doubt, start simple and add complexity gradually. Use --introspect to understand what SuperClaude is thinking, and don't hesitate to override auto-activation when you need specific behavior.
What's Next 🔮
SuperClaude v3.0 is fresh out of beta, and we're honest about what that means: it works pretty well for what it does, but there are rough edges and room for improvement. Here's what you can expect as the framework evolves.
Current Limitations (Let's Be Honest) ⚠️
Known Issues We're Working On
Performance Optimization
- Some operations are slower than we'd like, especially with all MCP servers active
- Token usage could be more efficient for large-scale operations
- Memory usage spikes on very large codebases (>1000 files)
MCP Server Integration
- Server connections occasionally timeout or become unresponsive
- Error handling between MCP servers could be smoother
- Some advanced MCP features are experimental and may not work reliably
Quality Gates
- The 8-step validation process sometimes misses edge cases
- Quality metrics could be more granular and actionable
- Integration testing validation needs improvement
Auto-Activation Intelligence
- Persona selection occasionally misses context clues
- Flag auto-activation can be overly aggressive for simple tasks
- Pattern recognition works well for common scenarios but struggles with edge cases
What We Removed (And Why)
Hooks System (Coming Back in v4)
- The v2 hooks system became too complex and buggy
- Caused performance issues and unpredictable behavior
- Being redesigned from scratch with better architecture
- Will return in v4 with improved reliability and simpler configuration
Some Advanced Commands
- Consolidated 20+ commands down to 16 essential ones
- Removed experimental commands that weren't stable enough
- Focus on making core commands excellent rather than having many mediocre ones
Short-Term Improvements (v3.x) 🔧
Our immediate focus is making v3 stable and polished:
Performance Optimization (v3.1)
- MCP Connection Pooling: Reuse connections to reduce startup overhead
- Intelligent Caching: Cache MCP results and analysis outcomes
- Token Optimization: Better compression algorithms and smarter batching
- Resource Management: Better memory usage for large projects
Expected Impact: 30-50% performance improvement for common operations.
MCP Server Reliability (v3.2)
- Connection Resilience: Better handling of MCP server timeouts and failures
- Graceful Degradation: Fallback strategies when servers are unavailable
- Health Monitoring: Real-time monitoring of MCP server status
- Error Recovery: Automatic retry and recovery mechanisms
Expected Impact: 80% reduction in MCP-related failures and timeouts.
Quality Gate Enhancement (v3.3)
- Granular Metrics: More specific and actionable quality measurements
- Custom Validation: User-configurable quality checks
- Evidence Tracking: Better documentation of validation outcomes
- Integration Testing: Improved validation of system-wide changes
Expected Impact: Higher confidence in automated improvements and better quality metrics.
Medium-Term Evolution (v4.0) 🚀
The next major version will focus on intelligence and user experience:
Redesigned Hooks System
- Event-Driven Architecture: Clean separation between framework and hooks
- Performance Optimized: No impact on core operations when hooks aren't used
- Simple Configuration: Easy setup and debugging
- Extensibility: Community hooks and custom integrations
Enhanced AI Coordination
- Smarter Auto-Activation: Better context understanding and tool selection
- Learning Patterns: Framework learns from your successful workflows
- Predictive Assistance: Suggests next steps based on current context
- Personalization: Adapts to your coding style and preferences
Advanced Orchestration
- Dynamic Resource Allocation: Intelligent scaling based on operation complexity
- Parallel Processing: True parallelization for independent operations
- Context Preservation: Better memory of previous work within sessions
- Workflow Templates: Reusable patterns for common development scenarios
Extended MCP Ecosystem
- More Servers: Additional specialized capabilities (database, cloud, monitoring)
- Community Servers: Framework for community-contributed MCP servers
- Server Marketplace: Easy discovery and installation of new capabilities
- Local Development: Run MCP servers locally for better performance
Long-Term Vision (v5.0+) 🌟
Looking further ahead, we're exploring more ambitious improvements:
Intelligence and Automation
- Contextual Understanding: Deep comprehension of project goals and constraints
- Proactive Assistance: Suggestions based on code analysis and project patterns
- Automated Workflows: End-to-end automation for common development tasks
- Code Evolution Tracking: Understanding how your codebase changes over time
Team and Enterprise Features
- Multi-Developer Coordination: Team-aware analysis and recommendations
- Project Memory: Persistent understanding of project context across sessions
- Policy Enforcement: Automated enforcement of team coding standards
- Analytics Dashboard: Insights into development patterns and productivity
Platform Integration
- IDE Deep Integration: Native integration with popular development environments
- CI/CD Pipeline Integration: Automated quality checks and improvements in build processes
- Cloud Development: Integration with cloud development platforms
- API Ecosystem: Rich APIs for custom integrations and tooling
How You Can Influence Development 📝
Feedback and Usage Patterns
We actively monitor:
- Command usage patterns: Which commands are most/least useful
- Flag combinations: What combinations work well in practice
- Error patterns: Common failure modes and user confusion points
- Performance bottlenecks: Where users experience slowdowns
Community Involvement
- GitHub Issues: Bug reports and feature requests help prioritize development
- Usage Examples: Real-world usage examples inform our testing and optimization
- Documentation Feedback: Gaps in documentation highlight areas for improvement
- Integration Requests: Requests for specific tool/framework integrations guide MCP development
Beta Testing Program
- Early Access: Test new features before public release
- Feedback Loop: Direct input on experimental features
- Performance Testing: Help validate optimizations across different environments
- Use Case Validation: Ensure new features work for real development scenarios
Staying Updated 📡
How to Keep Current
# Check for updates regularly
/sc:help # Shows current version and update availability
# Monitor development progress
# - GitHub releases: Feature announcements and changelogs
# - Documentation updates: New patterns and best practices
# - Community discussions: Tips and advanced usage patterns
Migration and Compatibility
- Backwards Compatibility: v3.x updates maintain command compatibility
- Configuration Migration: Automatic migration of settings between versions
- Deprecation Warnings: Advance notice of changing features
- Migration Guides: Step-by-step guides for major version upgrades
Realistic Expectations 📊
What to Expect from Updates
- v3.x updates: Bug fixes, performance improvements, stability enhancements
- Major versions: New features, architectural improvements, expanded capabilities
- Community contributions: Additional MCP servers, workflow patterns, integrations
What Not to Expect
- Perfect AI: SuperClaude will continue to have limitations and edge cases
- One-Size-Fits-All: Different projects and teams will need different approaches
- Zero Learning Curve: New features will require learning and experimentation
- Magical Solutions: Complex problems still require human expertise and judgment
Contributing to SuperClaude 🤝
Ways to Help
- Bug Reports: Detailed reports help improve stability and reliability
- Feature Requests: Real-world needs drive development priorities
- Documentation: Examples, guides, and clarifications help the community
- Community Support: Helping other users builds a stronger ecosystem
What We Value Most
- Honest Feedback: Both positive experiences and frustrations help improve the framework
- Real-World Usage: How SuperClaude works (or doesn't work) in actual development workflows
- Specific Examples: Concrete scenarios are more valuable than abstract feature requests
- Patience: Remember that v3.0 is fresh out of beta - improvement takes time
The Bottom Line 🎯
SuperClaude v3.0 is a solid foundation with room to grow. We're committed to:
- Honest Communication: No overpromising, clear about limitations and timelines
- User-Driven Development: Prioritizing features that solve real problems
- Quality Over Features: Making existing capabilities excellent before adding new ones
- Community Focus: Building a framework that serves the development community
We believe SuperClaude can become significantly more helpful for software development workflows, but it will take time, feedback, and iteration to get there. We appreciate your patience, feedback, and continued use as we improve the framework together.
Want to stay involved? Watch the GitHub repository, try new features when they're released, and let us know what works (and what doesn't) in your development workflows. Your real-world usage and feedback are what will make SuperClaude truly valuable for the development community.
Conclusion 🎉
You've now got a comprehensive understanding of SuperClaude v3.0 - its components, capabilities, and how to use them effectively. Let's wrap up with the key takeaways that will help you get the most out of the framework.
Key Takeaways 🎯
SuperClaude's Core Value
SuperClaude transforms Claude Code from a general-purpose AI assistant into a specialized development partner through:
- 16 specialized commands that understand development workflows
- 11 expert personas that bring domain-specific knowledge
- Intelligent orchestration that coordinates tools automatically
- Quality-first approach that maintains safety and reliability
The Power is in the Coordination
SuperClaude's power comes not from any single feature, but from how components work together:
- Commands usually activate appropriate personas and MCP servers
- Personas coordinate with each other for multi-domain problems
- The orchestrator optimizes tool selection and resource usage
- Quality gates ensure consistent, reliable outcomes
Start Simple, Scale Intelligently
The best approach to SuperClaude is progressive:
- Begin with basic commands to understand core functionality
- Trust auto-activation to learn optimal tool combinations
- Add manual control when you need specific perspectives
- Experiment with advanced features as your confidence grows
What Makes SuperClaude Different 🌟
Honest About Limitations
- We acknowledge v3.0 is fresh out of beta with rough edges
- We clearly document what works well vs. what's still experimental
- We prioritize reliability over flashy features
- We provide realistic timelines and expectations
Evidence-Based Development
- All recommendations backed by verifiable data
- Quality gates ensure changes don't break existing functionality
- Performance optimizations based on real usage patterns
- Continuous improvement driven by user feedback
Respectful of Your Workflow
- Enhances existing tools rather than replacing them
- Maintains compatibility with standard development practices
- Provides manual override for all automatic decisions
- Scales from simple tasks to complex enterprise scenarios
Practical Next Steps 🛣️
For New Users
- Start with installation: Follow the Installation Guide
- Try basic commands: /help, /analyze README.md, /build --help
- Explore domain guides: Commands, Flags, Personas
- Build confidence gradually: Simple tasks → complex workflows → advanced features
For Experienced Users
- Optimize your workflows: Identify flag combinations that work well for your needs
- Experiment with coordination: Try different persona combinations on complex problems
- Contribute feedback: Share what works (and what doesn't) in your environment
- Explore advanced features: Wave orchestration, sub-agent delegation, introspection mode
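If you want a starting point for those advanced features, something like the following is a reasonable first experiment (targets are illustrative):
# Wave orchestration for large, multi-stage improvements
/sc:improve legacy-modules/ --wave-mode systematic --safe-mode
# Sub-agent delegation for big codebases
/sc:analyze monorepo/ --delegate auto --uc
# Introspection mode to see the framework's reasoning
/sc:analyze tricky-component/ --introspect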
When to Use SuperClaude 🤔
SuperClaude Excels At
- Development workflows: Building, testing, deploying, documenting
- Code analysis: Quality assessment, security scanning, performance optimization
- Learning and understanding: Explaining complex systems, onboarding to new projects
- Quality improvement: Systematic refactoring, technical debt reduction
- Multi-domain problems: Issues requiring multiple types of expertise
When to Use Standard Claude Code
- Simple questions: Quick explanations that don't need specialized tools
- Creative writing: Non-technical content creation
- General research: Topics outside software development
- Brainstorming: Open-ended ideation without specific implementation needs
The SuperClaude Philosophy 💭
Human-AI Collaboration
SuperClaude is designed to augment human expertise, not replace it:
- You provide context and goals - SuperClaude provides execution and expertise
- You make decisions - SuperClaude provides evidence and recommendations
- You understand your constraints - SuperClaude respects and works within them
- You own the outcomes - SuperClaude helps you achieve better results
Continuous Improvement
The framework gets better through:
- Usage patterns: Learning what combinations work well in practice
- User feedback: Real-world experiences drive development priorities
- Evidence-based optimization: Data-driven improvements to tools and workflows
- Community contributions: Shared knowledge and best practices
Looking Forward 🔮
Short-Term (Next 6 Months)
- Performance optimizations making operations 30-50% faster
- Improved MCP server reliability reducing failures by 80%
- Enhanced quality gates providing more actionable feedback
- Better documentation based on user questions and feedback
Medium-Term (6-18 Months)
- Redesigned hooks system with better architecture and performance
- Smarter auto-activation based on learning from usage patterns
- Extended MCP ecosystem with community-contributed servers
- Advanced orchestration with true parallel processing
Long-Term Vision
- Deep contextual understanding of projects and team workflows
- Proactive assistance based on code analysis and project patterns
- Team-aware features for collaborative development
- Rich integration ecosystem with IDEs, CI/CD, and cloud platforms
Final Thoughts 🎉
SuperClaude v3.0 represents a solid foundation for enhanced software development workflows. While it's not perfect and still has room for improvement, it demonstrates how AI can be thoughtfully integrated into development practices without disrupting existing workflows or replacing human expertise.
The framework succeeds when it makes you more productive, helps you learn new things, or catches issues you might have missed. It's designed to be a helpful colleague rather than a replacement for understanding your craft.
Thank You 🙏
Thanks for taking the time to understand SuperClaude thoroughly. Your thoughtful usage, honest feedback, and patience with rough edges are what will make this framework truly valuable for the development community.
Whether you use SuperClaude occasionally for specific tasks or integrate it deeply into your daily workflow, we hope it makes your development experience a bit better. And when it doesn't work as expected, please let us know - that feedback is invaluable for making improvements.
Happy coding! 🚀 We're excited to see what you build with SuperClaude as your development partner.
Last updated: July 2024
SuperClaude v3.0 User Guide
For questions, feedback, or contributions, visit our GitHub repository or join the community discussions. We're always happy to hear from users and learn about your experiences with the framework.