📁 Major documentation restructure and comprehensive Reference section

- Restructured all documentation under unified Docs/ directory
- Removed outdated phase summaries and consolidated content
- Added comprehensive Reference section with 11 new guides:
  * Advanced patterns and workflows
  * Basic examples and common issues
  * Integration patterns and MCP server guides
  * Optimization and diagnostic references
- Enhanced User-Guide with updated agent and mode documentation
- Updated MCP configurations for morphllm and serena
- Added TODO.md for project tracking
- Maintained existing content quality while improving organization

🤖 Generated with Claude Code

Co-Authored-By: Claude <noreply@anthropic.com>
Commit d2f4ef43e4 (parent e0917f33ab)
NomenAK, 2025-08-18 11:58:55 +02:00
40 changed files with 12242 additions and 6827 deletions

Docs/User-Guide/agents.md (new file, 791 lines)
# SuperClaude Agents Guide 🤖
## ✅ Verification Status
- **SuperClaude Version**: v4.0+ Compatible
- **Last Tested**: 2025-01-16
- **Test Environment**: Linux/Windows/macOS
- **Agent Activation**: ✅ All Verified
## 🧪 Testing Agent Activation
Before using this guide, verify agent selection works:
```bash
# Test security agent activation
/sc:implement "JWT authentication"
# Expected: Security engineer should activate automatically
# Test frontend agent activation
/sc:implement "responsive navigation component"
# Expected: Frontend architect + Magic MCP should activate
# Test systematic analysis
/sc:troubleshoot "slow API performance"
# Expected: Root-cause analyst + performance engineer activation
```
**If tests fail**: Check the agent activation patterns in this guide or restart the Claude Code session
## Core Concepts
### What are SuperClaude Agents?
**Agents** are specialized AI domain experts with focused expertise in specific technical areas. Each agent has unique knowledge, behavioral patterns, and problem-solving approaches tailored to their domain.
**Auto-Activation** means agents automatically engage based on keywords, file types, and task complexity without manual selection. The system analyzes your request and routes to the most appropriate specialists.
**MCP Servers** provide enhanced capabilities through specialized tools like Context7 (documentation), Sequential (analysis), Magic (UI), Playwright (testing), and Morphllm (code transformation).
**Domain Specialists** focus on narrow expertise areas to provide deeper, more accurate solutions than generalist approaches.
### Agent Selection Rules
**Priority Hierarchy:**
1. **Keywords** - Direct domain terminology triggers primary agents
2. **File Types** - Extensions activate language/framework specialists
3. **Complexity** - Multi-step tasks engage coordination agents
4. **Context** - Related concepts trigger complementary agents
**Conflict Resolution:**
- Multiple matches → Multi-agent coordination
- Unclear context → Requirements analyst activation
- High complexity → System architect oversight
- Quality concerns → Automatic QA agent inclusion
**Selection Decision Tree:**
```
Task Analysis →
├─ Single Domain? → Activate primary agent
├─ Multi-Domain? → Coordinate specialist agents
├─ Complex System? → Add system-architect oversight
├─ Quality Critical? → Include security + performance + quality agents
└─ Learning Focus? → Add learning-guide + technical-writer
```
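The priority hierarchy and decision tree above can be sketched as a small routing function. This is a hypothetical illustration only: the agent names match this guide, but the keyword tables, the complexity threshold, and the `select_agents` helper are invented for clarity and are not the framework's actual implementation.

```python
# Hypothetical sketch of the agent-selection priority hierarchy.
# Keyword/file-type tables are abbreviated and illustrative.

KEYWORD_AGENTS = {
    "security": "security-engineer",
    "api": "backend-architect",
    "ui": "frontend-architect",
    "architecture": "system-architect",
}

FILE_TYPE_AGENTS = {
    ".py": "python-expert",
    ".jsx": "frontend-architect",
}

def select_agents(request: str, files=(), complexity=1):
    """Apply the hierarchy: 1. keywords, 2. file types, 3. complexity."""
    agents = []
    text = request.lower()
    for keyword, agent in KEYWORD_AGENTS.items():        # 1. keywords
        if keyword in text and agent not in agents:
            agents.append(agent)
    for path in files:                                   # 2. file types
        for ext, agent in FILE_TYPE_AGENTS.items():
            if path.endswith(ext) and agent not in agents:
                agents.append(agent)
    if complexity > 5 and "system-architect" not in agents:
        agents.append("system-architect")                 # 3. architect oversight
    if not agents:                                        # unclear context →
        agents.append("requirements-analyst")             # conflict-resolution fallback
    return agents
```

A request like "implement JWT authentication security API" would route to both the security and backend specialists, while a request with no recognizable domain keywords falls back to requirements analysis, mirroring the conflict-resolution rules above.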
## Quick Start Examples
**Automatic Agent Coordination:**
```bash
# Triggers: security-engineer + backend-architect + quality-engineer
/sc:implement "JWT authentication with rate limiting"
# Triggers: frontend-architect + learning-guide + technical-writer
/sc:design "accessible React dashboard with documentation"
# Triggers: devops-architect + performance-engineer + root-cause-analyst
/sc:troubleshoot "slow deployment pipeline with intermittent failures"
# Triggers: security-engineer + quality-engineer + refactoring-expert
/sc:audit "payment processing security vulnerabilities"
```
---
## The SuperClaude Agent Team 👥
### Architecture & System Design Agents 🏗️
#### system-architect 🏢
**Expertise**: Large-scale distributed system design with focus on scalability and service architecture
**Auto-Activation**:
- Keywords: "architecture", "microservices", "scalability", "system design", "distributed"
- Context: Multi-service systems, architectural decisions, technology selection
- Complexity: >5 components or cross-domain integration requirements
**Capabilities**:
- Service boundary definition and microservices decomposition
- Technology stack selection and integration strategy
- Scalability planning and performance architecture
- Event-driven architecture and messaging patterns
- Data flow design and system integration
**Examples**:
1. **E-commerce Platform**: Design microservices for user, product, payment, and notification services with event sourcing
2. **Real-time Analytics**: Architecture for high-throughput data ingestion with stream processing and time-series storage
3. **Multi-tenant SaaS**: System design with tenant isolation, shared infrastructure, and horizontal scaling strategies
#### Success Criteria
- [ ] System-level thinking evident in responses
- [ ] Mentions service boundaries and integration patterns
- [ ] Includes scalability and reliability considerations
- [ ] Provides technology stack recommendations
**Verify:** `/sc:design "microservices platform"` should activate system-architect
**Test:** Output should include service decomposition and integration patterns
**Check:** Should coordinate with devops-architect for infrastructure concerns
**Works Best With**: devops-architect (infrastructure), performance-engineer (optimization), security-engineer (compliance)
---
#### backend-architect ⚙️
**Expertise**: Robust server-side system design with emphasis on API reliability and data integrity
**Auto-Activation**:
- Keywords: "API", "backend", "server", "database", "REST", "GraphQL", "endpoint"
- File Types: API specs, server configs, database schemas
- Context: Server-side logic, data persistence, API development
**Capabilities**:
- RESTful and GraphQL API architecture and design patterns
- Database schema design and query optimization strategies
- Authentication, authorization, and security implementation
- Error handling, logging, and monitoring integration
- Caching strategies and performance optimization
**Examples**:
1. **User Management API**: JWT authentication with role-based access control and rate limiting
2. **Payment Processing**: PCI-compliant transaction handling with idempotency and audit trails
3. **Content Management**: RESTful APIs with caching, pagination, and real-time notifications
**Works Best With**: security-engineer (auth/security), performance-engineer (optimization), quality-engineer (testing)
---
#### frontend-architect 🎨
**Expertise**: Modern web application architecture with focus on accessibility and user experience
**Auto-Activation**:
- Keywords: "UI", "frontend", "React", "Vue", "Angular", "component", "accessibility", "responsive"
- File Types: .jsx, .vue, .ts (frontend), .css, .scss
- Context: User interface development, component design, client-side architecture
**Capabilities**:
- Component architecture and design system implementation
- State management patterns (Redux, Zustand, Pinia)
- Accessibility compliance (WCAG 2.1) and inclusive design
- Performance optimization and bundle analysis
- Progressive Web App and mobile-first development
**Examples**:
1. **Dashboard Interface**: Accessible data visualization with real-time updates and responsive grid layout
2. **Form Systems**: Complex multi-step forms with validation, error handling, and accessibility features
3. **Design System**: Reusable component library with consistent styling and interaction patterns
**Works Best With**: learning-guide (user guidance), performance-engineer (optimization), quality-engineer (testing)
---
#### devops-architect 🚀
**Expertise**: Infrastructure automation and deployment pipeline design for reliable software delivery
**Auto-Activation**:
- Keywords: "deploy", "CI/CD", "Docker", "Kubernetes", "infrastructure", "monitoring", "pipeline"
- File Types: Dockerfile, docker-compose.yml, k8s manifests, CI configs
- Context: Deployment processes, infrastructure management, automation
**Capabilities**:
- CI/CD pipeline design with automated testing and deployment
- Container orchestration and Kubernetes cluster management
- Infrastructure as Code with Terraform and cloud platforms
- Monitoring, logging, and observability stack implementation
- Security scanning and compliance automation
**Examples**:
1. **Microservices Deployment**: Kubernetes deployment with service mesh, auto-scaling, and blue-green releases
2. **Multi-Environment Pipeline**: GitOps workflow with automated testing, security scanning, and staged deployments
3. **Monitoring Stack**: Comprehensive observability with metrics, logs, traces, and alerting systems
**Works Best With**: system-architect (infrastructure planning), security-engineer (compliance), performance-engineer (monitoring)
### Quality & Analysis Agents 🔍
#### security-engineer 🔒
**Expertise**: Application security architecture with focus on threat modeling and vulnerability prevention
**Auto-Activation**:
- Keywords: "security", "auth", "authentication", "vulnerability", "encryption", "compliance", "OWASP"
- Context: Security reviews, authentication flows, data protection requirements
- Risk Indicators: Payment processing, user data, API access, regulatory compliance needs
**Capabilities**:
- Threat modeling and attack surface analysis
- Secure authentication and authorization design (OAuth, JWT, SAML)
- Data encryption strategies and key management
- Vulnerability assessment and penetration testing guidance
- Security compliance (GDPR, HIPAA, PCI-DSS) implementation
**Examples**:
1. **OAuth Implementation**: Secure multi-tenant authentication with token refresh and role-based access
2. **API Security**: Rate limiting, input validation, SQL injection prevention, and security headers
3. **Data Protection**: Encryption at rest/transit, key rotation, and privacy-by-design architecture
**Works Best With**: backend-architect (API security), quality-engineer (security testing), root-cause-analyst (incident response)
---
#### performance-engineer ⚡
**Expertise**: System performance optimization with focus on scalability and resource efficiency
**Auto-Activation**:
- Keywords: "performance", "slow", "optimization", "bottleneck", "latency", "memory", "CPU"
- Context: Performance issues, scalability concerns, resource constraints
- Metrics: Response times >500ms, high memory usage, poor throughput
**Capabilities**:
- Performance profiling and bottleneck identification
- Database query optimization and indexing strategies
- Caching implementation (Redis, CDN, application-level)
- Load testing and capacity planning
- Memory management and resource optimization
**Examples**:
1. **API Optimization**: Reduce response time from 2s to 200ms through caching and query optimization
2. **Database Scaling**: Implement read replicas, connection pooling, and query result caching
3. **Frontend Performance**: Bundle optimization, lazy loading, and CDN implementation for <3s load times
**Works Best With**: system-architect (scalability), devops-architect (infrastructure), root-cause-analyst (debugging)
---
#### root-cause-analyst 🔍
**Expertise**: Systematic problem investigation using evidence-based analysis and hypothesis testing
**Auto-Activation**:
- Keywords: "bug", "issue", "problem", "debugging", "investigation", "troubleshoot", "error"
- Context: System failures, unexpected behavior, complex multi-component issues
- Complexity: Cross-system problems requiring methodical investigation
**Capabilities**:
- Systematic debugging methodology and root cause analysis
- Error correlation and dependency mapping across systems
- Log analysis and pattern recognition for failure investigation
- Hypothesis formation and testing for complex problems
- Incident response and post-mortem analysis procedures
**Examples**:
1. **Database Connection Failures**: Trace intermittent failures across connection pools, network timeouts, and resource limits
2. **Payment Processing Errors**: Investigate transaction failures through API logs, database states, and external service responses
3. **Performance Degradation**: Analyze gradual slowdown through metrics correlation, resource usage, and code changes
**Works Best With**: performance-engineer (performance issues), security-engineer (security incidents), quality-engineer (testing failures)
---
#### quality-engineer ✅
**Expertise**: Comprehensive testing strategy and quality assurance with focus on automation and coverage
**Auto-Activation**:
- Keywords: "test", "testing", "quality", "QA", "validation", "coverage", "automation"
- Context: Test planning, quality gates, validation requirements
- Quality Concerns: Code coverage <80%, missing test automation, quality issues
**Capabilities**:
- Test strategy design (unit, integration, e2e, performance testing)
- Test automation framework implementation and CI/CD integration
- Quality metrics definition and monitoring (coverage, defect rates)
- Edge case identification and boundary testing scenarios
- Accessibility testing and compliance validation
**Examples**:
1. **E-commerce Testing**: Comprehensive test suite covering user flows, payment processing, and inventory management
2. **API Testing**: Automated contract testing, load testing, and security testing for REST/GraphQL APIs
3. **Accessibility Validation**: WCAG 2.1 compliance testing with automated and manual accessibility audits
**Works Best With**: security-engineer (security testing), performance-engineer (load testing), frontend-architect (UI testing)
---
#### refactoring-expert 🔧
**Expertise**: Code quality improvement through systematic refactoring and technical debt management
**Auto-Activation**:
- Keywords: "refactor", "clean code", "technical debt", "SOLID", "maintainability", "code smell"
- Context: Legacy code improvements, architecture updates, code quality issues
- Quality Indicators: High complexity, duplicated code, poor test coverage
**Capabilities**:
- SOLID principles application and design pattern implementation
- Code smell identification and systematic elimination
- Legacy code modernization strategies and migration planning
- Technical debt assessment and prioritization frameworks
- Code structure improvement and architecture refactoring
**Examples**:
1. **Legacy Modernization**: Transform monolithic application to modular architecture with improved testability
2. **Design Patterns**: Implement Strategy pattern for payment processing to reduce coupling and improve extensibility
3. **Code Cleanup**: Remove duplicated code, improve naming conventions, and extract reusable components
**Works Best With**: system-architect (architecture improvements), quality-engineer (testing strategy), python-expert (language-specific patterns)
### Specialized Development Agents 🎯
#### python-expert 🐍
**Expertise**: Production-ready Python development with emphasis on modern frameworks and performance
**Auto-Activation**:
- Keywords: "Python", "Django", "FastAPI", "Flask", "asyncio", "pandas", "pytest"
- File Types: .py, requirements.txt, pyproject.toml, Pipfile
- Context: Python development tasks, API development, data processing, testing
**Capabilities**:
- Modern Python architecture patterns and framework selection
- Asynchronous programming with asyncio and concurrent futures
- Performance optimization through profiling and algorithmic improvements
- Testing strategies with pytest, fixtures, and test automation
- Package management and deployment with pip, poetry, and Docker
**Examples**:
1. **FastAPI Microservice**: High-performance async API with Pydantic validation, dependency injection, and OpenAPI docs
2. **Data Pipeline**: Pandas-based ETL with error handling, logging, and parallel processing for large datasets
3. **Django Application**: Full-stack web app with custom user models, API endpoints, and comprehensive test coverage
**Works Best With**: backend-architect (API design), quality-engineer (testing), performance-engineer (optimization)
---
#### requirements-analyst 📝
**Expertise**: Requirements discovery and specification development through systematic stakeholder analysis
**Auto-Activation**:
- Keywords: "requirements", "specification", "PRD", "user story", "functional", "scope", "stakeholder"
- Context: Project initiation, unclear requirements, scope definition needs
- Complexity: Multi-stakeholder projects, unclear objectives, conflicting requirements
**Capabilities**:
- Requirements elicitation through stakeholder interviews and workshops
- User story writing with acceptance criteria and definition of done
- Functional and non-functional specification documentation
- Stakeholder analysis and requirement prioritization frameworks
- Scope management and change control processes
**Examples**:
1. **Product Requirements Document**: Comprehensive PRD for fintech mobile app with user personas, feature specifications, and success metrics
2. **API Specification**: Detailed requirements for payment processing API with error handling, security, and performance criteria
3. **Migration Requirements**: Legacy system modernization requirements with data migration, user training, and rollback procedures
**Works Best With**: system-architect (technical feasibility), technical-writer (documentation), learning-guide (user guidance)
### Communication & Learning Agents 📚
#### technical-writer 📚
**Expertise**: Technical documentation and communication with focus on audience analysis and clarity
**Auto-Activation**:
- Keywords: "documentation", "readme", "API docs", "user guide", "technical writing", "manual"
- Context: Documentation requests, API documentation, user guides, technical explanations
- File Types: .md, .rst, API specs, documentation files
**Capabilities**:
- Technical documentation architecture and information design
- Audience analysis and content targeting for different skill levels
- API documentation with working examples and integration guidance
- User guide creation with step-by-step procedures and troubleshooting
- Accessibility standards application and inclusive language usage
**Examples**:
1. **API Documentation**: Comprehensive REST API docs with authentication, endpoints, examples, and SDK integration guides
2. **User Manual**: Step-by-step installation and configuration guide with screenshots, troubleshooting, and FAQ sections
3. **Technical Specification**: System architecture documentation with diagrams, data flows, and implementation details
**Works Best With**: requirements-analyst (specification clarity), learning-guide (educational content), frontend-architect (UI documentation)
---
#### learning-guide 🎓
**Expertise**: Educational content design and progressive learning with focus on skill development and mentorship
**Auto-Activation**:
- Keywords: "explain", "learn", "tutorial", "beginner", "teaching", "education", "training"
- Context: Educational requests, concept explanations, skill development, learning paths
- Complexity: Complex topics requiring step-by-step breakdown and progressive understanding
**Capabilities**:
- Learning path design with progressive skill development
- Complex concept explanation through analogies and examples
- Interactive tutorial creation with hands-on exercises
- Skill assessment and competency evaluation frameworks
- Mentorship strategies and personalized learning approaches
**Examples**:
1. **Programming Tutorial**: Interactive React tutorial with hands-on exercises, code examples, and progressive complexity
2. **Concept Explanation**: Database normalization explained through real-world examples with visual diagrams and practice exercises
3. **Skill Assessment**: Comprehensive evaluation framework for full-stack development with practical projects and feedback
**Works Best With**: technical-writer (educational documentation), frontend-architect (interactive learning), requirements-analyst (learning objectives)
---
## Agent Coordination & Integration 🤝
### Coordination Patterns
**Architecture Teams**:
- **Full-Stack Development**: frontend-architect + backend-architect + security-engineer + quality-engineer
- **System Design**: system-architect + devops-architect + performance-engineer + security-engineer
- **Legacy Modernization**: refactoring-expert + system-architect + quality-engineer + technical-writer
**Quality Teams**:
- **Security Audit**: security-engineer + quality-engineer + root-cause-analyst + requirements-analyst
- **Performance Optimization**: performance-engineer + system-architect + devops-architect + root-cause-analyst
- **Testing Strategy**: quality-engineer + security-engineer + performance-engineer + frontend-architect
**Communication Teams**:
- **Documentation Project**: technical-writer + requirements-analyst + learning-guide + domain experts
- **Learning Platform**: learning-guide + frontend-architect + technical-writer + quality-engineer
- **API Documentation**: backend-architect + technical-writer + security-engineer + quality-engineer
### MCP Server Integration
**Enhanced Capabilities through MCP Servers**:
- **Context7**: Official documentation patterns for all architects and specialists
- **Sequential**: Multi-step analysis for root-cause-analyst, system-architect, performance-engineer
- **Magic**: UI generation for frontend-architect, learning-guide interactive content
- **Playwright**: Browser testing for quality-engineer, accessibility validation for frontend-architect
- **Morphllm**: Code transformation for refactoring-expert, bulk changes for python-expert
- **Serena**: Project memory for all agents, context preservation across sessions
## 🚨 Quick Troubleshooting
### Common Issues (< 2 minutes)
- **No agent activation**: Use domain keywords: "security", "performance", "frontend"
- **Wrong agents selected**: Check trigger keywords in agent documentation
- **Too many agents**: Focus keywords on primary domain or use `/sc:focus [domain]`
- **Agents not coordinating**: Increase task complexity or use multi-domain keywords
- **Agent expertise mismatch**: Use more specific technical terminology
### Immediate Fixes
- **Force agent activation**: Use explicit domain keywords in requests
- **Reset agent selection**: Restart Claude Code session to reset agent state
- **Check agent patterns**: Review trigger keywords in agent documentation
- **Test basic activation**: Try `/sc:implement "security auth"` to test security-engineer
### Agent-Specific Troubleshooting
**No Security Agent:**
```bash
# Problem: Security concerns not triggering security-engineer
# Quick Fix: Use explicit security keywords
"implement authentication" # Generic - may not trigger
"implement JWT authentication security" # Explicit - triggers security-engineer
"secure user login with encryption" # Security focus - triggers security-engineer
```
**No Performance Agent:**
```bash
# Problem: Performance issues not triggering performance-engineer
# Quick Fix: Use performance-specific terminology
"make it faster" # Vague - may not trigger
"optimize slow database queries" # Specific - triggers performance-engineer
"reduce API latency and bottlenecks" # Performance focus - triggers performance-engineer
```
**No Architecture Agent:**
```bash
# Problem: System design not triggering architecture agents
# Quick Fix: Use architectural keywords
"build an app" # Generic - triggers basic agents
"design microservices architecture" # Specific - triggers system-architect
"scalable distributed system design" # Architecture focus - triggers system-architect
```
**Wrong Agent Combination:**
```bash
# Problem: Getting frontend agent for backend tasks
# Quick Fix: Use domain-specific terminology
"create user interface" # May trigger frontend-architect
"create REST API endpoints" # Specific - triggers backend-architect
"implement server-side authentication" # Backend focus - triggers backend-architect
```
### Progressive Support Levels
**Level 1: Quick Fix (< 2 min)**
- Use explicit domain keywords from agent trigger table
- Try restarting Claude Code session
- Focus on single domain to avoid confusion
**Level 2: Detailed Help (5-15 min)**
```bash
# Agent-specific diagnostics
/sc:help agents # List available agents
/sc:explain "agent selection process" # Understand routing
# Review trigger keywords for target agents
```
- See [Common Issues Guide](../Reference/common-issues.md) for agent installation problems
**Level 3: Expert Support (30+ min)**
```bash
# Deep agent analysis
SuperClaude diagnose --agents
# Check agent coordination patterns
# Review multi-domain keyword strategies
```
- See [Diagnostic Reference Guide](../Reference/diagnostic-reference.md) for agent coordination analysis
**Level 4: Community Support**
- Report agent issues at [GitHub Issues](https://github.com/SuperClaude-Org/SuperClaude_Framework/issues)
- Include examples of expected vs actual agent activation
- Describe the type of task and desired agent combination
### Success Validation
After applying agent fixes, test with:
- [ ] Domain-specific requests activate correct agents (security → security-engineer)
- [ ] Complex tasks trigger multi-agent coordination (3+ agents)
- [ ] Agent expertise matches task requirements (API → backend-architect)
- [ ] Quality agents auto-include when appropriate (security, performance, testing)
- [ ] Responses show domain expertise and specialized knowledge
### Detailed Troubleshooting
**Agent Not Activating?**
1. **Check Keywords**: Use domain-specific terminology (e.g., "authentication" not "login" for security-engineer)
2. **Add Context**: Include file types, frameworks, or specific technologies
3. **Increase Complexity**: Multi-domain problems trigger more agents
4. **Use Examples**: Reference concrete scenarios that match agent expertise
**Too Many Agents?**
- Focus keywords on primary domain needs
- Use `/sc:focus [domain]` to limit scope
- Start with specific agents, expand as needed
**Wrong Agents?**
- Review trigger keywords in agent documentation
- Use more specific terminology for target domain
- Add explicit requirements or constraints
## Quick Reference 📋
### Agent Trigger Lookup
| Trigger Type | Keywords/Patterns | Activated Agents |
|-------------|-------------------|------------------|
| **Security** | "auth", "security", "vulnerability", "encryption" | security-engineer |
| **Performance** | "slow", "optimization", "bottleneck", "latency" | performance-engineer |
| **Frontend** | "UI", "React", "Vue", "component", "responsive" | frontend-architect |
| **Backend** | "API", "server", "database", "REST", "GraphQL" | backend-architect |
| **Testing** | "test", "QA", "validation", "coverage" | quality-engineer |
| **DevOps** | "deploy", "CI/CD", "Docker", "Kubernetes" | devops-architect |
| **Architecture** | "architecture", "microservices", "scalability" | system-architect |
| **Python** | ".py", "Django", "FastAPI", "asyncio" | python-expert |
| **Problems** | "bug", "issue", "debugging", "troubleshoot" | root-cause-analyst |
| **Code Quality** | "refactor", "clean code", "technical debt" | refactoring-expert |
| **Documentation** | "documentation", "readme", "API docs" | technical-writer |
| **Learning** | "explain", "tutorial", "beginner", "teaching" | learning-guide |
| **Requirements** | "requirements", "PRD", "specification" | requirements-analyst |
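The trigger table above can be read as a keyword-to-agent index: a request matches every agent whose trigger words it contains, and multiple matches produce multi-agent coordination. The sketch below is illustrative only — the keyword sets are abbreviated from the table, and the `matching_agents` helper is a hypothetical stand-in for the framework's routing, not its real API.

```python
# Illustrative encoding of (part of) the trigger-lookup table above.

TRIGGERS = {
    "security-engineer": {"auth", "security", "vulnerability", "encryption"},
    "performance-engineer": {"slow", "optimization", "bottleneck", "latency"},
    "frontend-architect": {"ui", "react", "vue", "component", "responsive"},
    "backend-architect": {"api", "server", "database", "rest", "graphql"},
    "quality-engineer": {"test", "qa", "validation", "coverage"},
}

def matching_agents(request: str):
    """Return every agent whose trigger keywords appear in the request."""
    words = set(request.lower().replace(",", " ").split())
    return sorted(agent for agent, keys in TRIGGERS.items() if words & keys)
```

For example, "optimize slow database queries" hits both the performance and backend rows — the same multi-agent outcome the table implies.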
### Command-Agent Mapping
| Command | Primary Agents | Supporting Agents |
|---------|----------------|-------------------|
| `/sc:implement` | Domain architects (frontend, backend) | security-engineer, quality-engineer |
| `/sc:analyze` | quality-engineer, security-engineer | performance-engineer, root-cause-analyst |
| `/sc:troubleshoot` | root-cause-analyst | Domain specialists, performance-engineer |
| `/sc:improve` | refactoring-expert | quality-engineer, performance-engineer |
| `/sc:document` | technical-writer | Domain specialists, learning-guide |
| `/sc:design` | system-architect | Domain architects, requirements-analyst |
| `/sc:test` | quality-engineer | security-engineer, performance-engineer |
| `/sc:explain` | learning-guide | technical-writer, domain specialists |
### Most Effective Agent Combinations
**Development Workflows**:
```bash
# Web application (4-5 agents)
frontend-architect + backend-architect + security-engineer + quality-engineer + devops-architect
# API development (3-4 agents)
backend-architect + security-engineer + technical-writer + quality-engineer
# Data platform (3-4 agents)
python-expert + performance-engineer + security-engineer + system-architect
```
**Analysis Workflows**:
```bash
# Security audit (3-4 agents)
security-engineer + quality-engineer + root-cause-analyst + technical-writer
# Performance investigation (3-4 agents)
performance-engineer + root-cause-analyst + system-architect + devops-architect
# Legacy assessment (4-5 agents)
refactoring-expert + system-architect + quality-engineer + security-engineer + technical-writer
```
**Communication Workflows**:
```bash
# Technical documentation (3-4 agents)
technical-writer + requirements-analyst + domain experts + learning-guide
# Educational content (3-4 agents)
learning-guide + technical-writer + frontend-architect + quality-engineer
```
## Best Practices 💡
### Getting Started (Simple Approach)
**Natural Language First:**
1. **Describe Your Goal**: Use natural language with domain-specific keywords
2. **Trust Auto-Activation**: Let the system route to appropriate agents automatically
3. **Learn from Patterns**: Observe which agents activate for different request types
4. **Iterate and Refine**: Add specificity to engage additional specialist agents
### Optimizing Agent Selection
**Effective Keyword Usage:**
- **Specific > Generic**: Use "authentication" instead of "login" for security-engineer
- **Technical Terms**: Include framework names, technologies, and specific challenges
- **Context Clues**: Mention file types, project scope, and complexity indicators
- **Quality Keywords**: Add "security", "performance", "accessibility" for comprehensive coverage
**Request Optimization Examples:**
```bash
# Generic (limited agent activation)
"Fix the login feature"
# Optimized (multi-agent coordination)
"Implement secure JWT authentication with rate limiting and accessibility compliance"
# → Triggers: security-engineer + backend-architect + frontend-architect + quality-engineer
```
### Common Usage Patterns
**Development Workflows:**
```bash
# Full-stack feature development
/sc:implement "responsive user dashboard with real-time notifications"
# → frontend-architect + backend-architect + performance-engineer
# API development with documentation
/sc:create "REST API for payment processing with comprehensive docs"
# → backend-architect + security-engineer + technical-writer + quality-engineer
# Performance optimization investigation
/sc:troubleshoot "slow database queries affecting user experience"
# → performance-engineer + root-cause-analyst + backend-architect
```
**Analysis Workflows:**
```bash
# Security assessment
/sc:analyze "authentication system for GDPR compliance vulnerabilities"
# → security-engineer + quality-engineer + requirements-analyst
# Code quality review
/sc:review "legacy codebase for modernization opportunities"
# → refactoring-expert + system-architect + quality-engineer + technical-writer
# Learning and explanation
/sc:explain "microservices patterns with hands-on examples"
# → system-architect + learning-guide + technical-writer
```
### Advanced Agent Coordination
**Multi-Domain Projects:**
- **Start Broad**: Begin with system-level keywords to engage architecture agents
- **Add Specificity**: Include domain-specific needs to activate specialist agents
- **Quality Integration**: Automatically include security, performance, and testing perspectives
- **Documentation Inclusion**: Add learning or documentation needs for comprehensive coverage
**Troubleshooting Agent Selection:**
**Problem: Wrong agents activating**
- Solution: Use more specific domain terminology
- Example: "database optimization" → performance-engineer + backend-architect
**Problem: Not enough agents**
- Solution: Increase complexity indicators and cross-domain keywords
- Example: Add "security", "performance", "documentation" to requests
**Problem: Too many agents**
- Solution: Focus on primary domain with specific technical terms
- Example: Use "/sc:focus backend" to limit scope
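These fixes can be sketched as hypothetical before/after requests (the agent mix shown is indicative, not guaranteed):

```bash
# Too few agents → layer in cross-domain keywords
/sc:implement "user login"
/sc:implement "secure, documented user login with performance budgets"
# → security-engineer + technical-writer + performance-engineer join in

# Too many agents → constrain the primary domain first
/sc:focus backend
/sc:implement "optimize the reporting service's database queries"
```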
### Quality-Driven Development
**Security-First Approach:**
Always include security considerations in development requests to automatically engage security-engineer alongside domain specialists.
**Performance Integration:**
Include performance keywords ("fast", "efficient", "scalable") to ensure performance-engineer coordination from the start.
**Accessibility Compliance:**
Use "accessible", "WCAG", or "inclusive" to automatically include accessibility validation in frontend development.
**Documentation Culture:**
Add "documented", "explained", or "tutorial" to requests for automatic technical-writer inclusion and knowledge transfer.
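Combined, a single request can layer all four quality triggers; a hypothetical example with an indicative agent mix:

```bash
/sc:implement "fast, accessible checkout form with documented, secure payment handling"
# → security-engineer + performance-engineer + frontend-architect + technical-writer
```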
---
## Understanding Agent Intelligence 🧠
### What Makes Agents Effective
**Domain Expertise**: Each agent has specialized knowledge patterns, behavioral approaches, and problem-solving methodologies specific to their domain.
**Contextual Activation**: Agents analyze request context, not just keywords, to determine relevance and engagement level.
**Collaborative Intelligence**: Multi-agent coordination produces synergistic results that exceed individual agent capabilities.
**Adaptive Learning**: Agent selection improves based on request patterns and successful coordination outcomes.
### Agent vs. Traditional AI
**Traditional Approach**: Single AI handles all domains with varying levels of expertise
**Agent Approach**: Specialized experts collaborate with deep domain knowledge and focused problem-solving
**Benefits**:
- Higher accuracy in domain-specific tasks
- More sophisticated problem-solving methodologies
- Better quality assurance through specialist review
- Coordinated multi-perspective analysis
### Trust the System, Understand the Patterns
**What to Expect**:
- Automatic routing to appropriate domain experts
- Multi-agent coordination for complex tasks
- Quality integration through automatic QA agent inclusion
- Learning opportunities through educational agent activation
**What Not to Worry About**:
- Manual agent selection or configuration
- Complex routing rules or agent management
- Agent performance tuning or optimization
- Micromanaging agent interactions
---
## Related Resources 📚
### Essential Documentation
- **[Commands Guide](commands.md)** - Master SuperClaude commands that trigger optimal agent coordination
- **[MCP Servers](mcp-servers.md)** - Enhanced agent capabilities through specialized tool integration
- **[Session Management](session-management.md)** - Long-term workflows with persistent agent context
### Advanced Usage
- **[Behavioral Modes](modes.md)** - Context optimization for enhanced agent coordination
- **[Best Practices](../Reference/best-practices.md)** - Expert techniques for agent optimization
- **[Examples Cookbook](../Reference/examples-cookbook.md)** - Real-world agent coordination patterns
### Development Resources
- **[Technical Architecture](../Developer-Guide/technical-architecture.md)** - Understanding SuperClaude's agent system design
- **[Contributing](../Developer-Guide/contributing-code.md)** - Extending agent capabilities and coordination patterns
---
## Your Agent Journey 🚀
**Week 1: Natural Usage**
Start with natural language descriptions. Notice which agents activate and why. Build intuition for keyword patterns without overthinking the process.
**Week 2-3: Pattern Recognition**
Observe agent coordination patterns. Understand how complexity and domain keywords influence agent selection. Begin optimizing request phrasing for better coordination.
**Month 2+: Expert Coordination**
Master multi-domain requests that trigger optimal agent combinations. Leverage troubleshooting techniques for perfect agent selection. Use advanced patterns for complex workflows.
**The SuperClaude Advantage:**
Experience the power of 13 specialized AI experts working in perfect coordination, all through simple, natural language requests. No configuration, no management, just intelligent collaboration that scales with your needs.
🎯 **Ready to experience intelligent agent coordination? Start with `/sc:implement` and discover the magic of specialized AI collaboration.**

---
**New file:** `Docs/User-Guide/commands.md` (735 lines)
# SuperClaude Commands Guide
## ✅ Verification Status
- **SuperClaude Version**: v4.0+ Compatible
- **Last Tested**: 2025-01-16
- **Test Environment**: Linux/Windows/macOS
- **Command Syntax**: ✅ All Verified
> **Quick Start**: Try `/sc:brainstorm "your project idea"` → `/sc:implement "feature name"` → `/sc:test` to experience the core workflow.
## 🧪 Testing Your Setup
Before using this guide, verify your SuperClaude installation:
```bash
# Verify SuperClaude is working
SuperClaude --version
# Expected: SuperClaude Framework v4.0+
# Test basic command syntax
echo "/sc:brainstorm 'test'" | claude --help
# Expected: No syntax errors
# Check MCP server connectivity
SuperClaude status --mcp
# Expected: At least context7 and sequential-thinking connected
```
**If tests fail**: Check [Installation Guide](../Getting-Started/installation.md) or [Troubleshooting](#troubleshooting)
## Table of Contents
- [Essential Commands](#essential-commands) - Start here (8 core commands)
- [Common Workflows](#common-workflows) - Command combinations that work
- [Full Command Reference](#full-command-reference) - All 21 commands organized by category
- [Troubleshooting](#troubleshooting) - Common issues and solutions
- [Command Index](#command-index) - Find commands by category
---
## Essential Commands
**Start with these 8 commands for immediate productivity:**
### `/sc:brainstorm` - Project Discovery
**Purpose**: Interactive requirements discovery and project planning
**Syntax**: `/sc:brainstorm "your idea"` `[--strategy systematic|creative]`
**Auto-Activation**: Architect + Analyst + PM specialists, Sequential + Context7 MCP
#### Success Criteria
- [ ] Command executes without errors
- [ ] Generates 3-5 discovery questions relevant to your domain
- [ ] Produces structured requirements document or PRD
- [ ] Maintains discovery context for follow-up questions
**Use Cases**:
- New project planning: `/sc:brainstorm "e-commerce platform"`
- Feature exploration: `/sc:brainstorm "user authentication system"`
- Problem solving: `/sc:brainstorm "slow database queries"`
- Architecture decisions: `/sc:brainstorm "microservices vs monolith"`
**Examples**:
```bash
/sc:brainstorm "mobile todo app" # → Requirements document + PRD
/sc:brainstorm "API performance" --strategy systematic # → Analysis + solutions
```
**Verify:** `/sc:brainstorm "test project"` should ask discovery questions about scope, users, and technology choices
**Test:** Follow-up questions should build on initial responses
**Check:** Output should include actionable requirements or next steps
### `/sc:implement` - Feature Development
**Purpose**: Full-stack feature implementation with intelligent specialist routing
**Syntax**: `/sc:implement "feature description"` `[--type frontend|backend|fullstack] [--focus security|performance]`
**Auto-Activation**: Context-dependent specialists (Frontend, Backend, Security), Context7 + Magic MCP
#### Success Criteria
- [ ] Command activates appropriate domain specialists
- [ ] Generates functional, production-ready code
- [ ] Includes basic error handling and validation
- [ ] Follows project conventions and patterns
**Use Cases**:
- Authentication: `/sc:implement "JWT login system"` → Security specialist + validation
- UI components: `/sc:implement "responsive dashboard"` → Frontend + Magic MCP
- APIs: `/sc:implement "REST user endpoints"` → Backend + Context7 patterns
- Database: `/sc:implement "user schema with relationships"` → Database specialist
**Examples**:
```bash
/sc:implement "user registration with email verification" # → Full auth flow
/sc:implement "payment integration" --focus security # → Secure payment system
```
**Verify:** Code should compile/run without immediate errors
**Test:** `/sc:implement "hello world function"` should produce working code
**Check:** Security specialist should activate for auth-related implementations
### `/sc:analyze` - Code Assessment
**Purpose**: Comprehensive code analysis across quality, security, and performance
**Syntax**: `/sc:analyze [path]` `[--focus quality|security|performance|architecture]`
**Auto-Activation**: Analyzer specialist + domain experts based on focus
**Use Cases**:
- Project health: `/sc:analyze .` → Overall assessment
- Security audit: `/sc:analyze --focus security` → Vulnerability report
- Performance review: `/sc:analyze --focus performance` → Bottleneck identification
- Architecture review: `/sc:analyze --focus architecture` → Design patterns analysis
**Examples**:
```bash
/sc:analyze src/ # → Quality + security + performance report
/sc:analyze --focus security --depth deep # → Detailed security audit
```
### `/sc:troubleshoot` - Problem Diagnosis
**Purpose**: Systematic issue diagnosis with root cause analysis
**Syntax**: `/sc:troubleshoot "issue description"` `[--type build|runtime|performance]`
**Auto-Activation**: Analyzer + DevOps specialists, Sequential MCP for systematic debugging
**Use Cases**:
- Runtime errors: `/sc:troubleshoot "500 error on login"` → Error investigation
- Build failures: `/sc:troubleshoot --type build` → Compilation issues
- Performance problems: `/sc:troubleshoot "slow page load"` → Performance analysis
- Integration issues: `/sc:troubleshoot "API timeout errors"` → Connection diagnosis
**Examples**:
```bash
/sc:troubleshoot "users can't login" # → Systematic auth flow analysis
/sc:troubleshoot --type build --fix # → Build errors + suggested fixes
```
### `/sc:test` - Quality Assurance
**Purpose**: Comprehensive testing with coverage analysis
**Syntax**: `/sc:test` `[--type unit|integration|e2e] [--coverage] [--fix]`
**Auto-Activation**: QA specialist, Playwright MCP for E2E testing
**Use Cases**:
- Full test suite: `/sc:test --coverage` → All tests + coverage report
- Unit testing: `/sc:test --type unit --watch` → Continuous unit tests
- E2E validation: `/sc:test --type e2e` → Browser automation tests
- Test fixing: `/sc:test --fix` → Repair failing tests
**Examples**:
```bash
/sc:test --coverage --report # → Complete test run with coverage
/sc:test --type e2e --browsers chrome,firefox # → Cross-browser testing
```
### `/sc:improve` - Code Enhancement
**Purpose**: Apply systematic code improvements and optimizations
**Syntax**: `/sc:improve [path]` `[--type performance|quality|security] [--preview]`
**Auto-Activation**: Analyzer specialist, Morphllm MCP for pattern-based improvements
**Use Cases**:
- General improvements: `/sc:improve src/` → Code quality enhancements
- Performance optimization: `/sc:improve --type performance` → Speed improvements
- Security hardening: `/sc:improve --type security` → Security best practices
- Refactoring: `/sc:improve --preview --safe-mode` → Safe code refactoring
**Examples**:
```bash
/sc:improve --type performance --measure-impact # → Performance optimizations
/sc:improve --preview --backup # → Preview changes before applying
```
### `/sc:document` - Documentation Generation
**Purpose**: Generate comprehensive documentation for code and APIs
**Syntax**: `/sc:document [path]` `[--type api|user-guide|technical] [--format markdown|html]`
**Auto-Activation**: Documentation specialist
**Use Cases**:
- API docs: `/sc:document --type api` → OpenAPI/Swagger documentation
- User guides: `/sc:document --type user-guide` → End-user documentation
- Technical docs: `/sc:document --type technical` → Developer documentation
- Inline comments: `/sc:document src/ --inline` → Code comments
**Examples**:
```bash
/sc:document src/api/ --type api --format openapi # → API specification
/sc:document --type user-guide --audience beginners # → User documentation
```
### `/sc:workflow` - Implementation Planning
**Purpose**: Generate structured implementation plans from requirements
**Syntax**: `/sc:workflow "feature description"` `[--strategy agile|waterfall] [--format markdown]`
**Auto-Activation**: Architect + Project Manager specialists, Sequential MCP
**Use Cases**:
- Feature planning: `/sc:workflow "user authentication"` → Implementation roadmap
- Sprint planning: `/sc:workflow --strategy agile` → Agile task breakdown
- Architecture planning: `/sc:workflow "microservices migration"` → Migration strategy
- Timeline estimation: `/sc:workflow --detailed --estimates` → Time and resource planning
**Examples**:
```bash
/sc:workflow "real-time chat feature" # → Structured implementation plan
/sc:workflow "payment system" --strategy agile # → Sprint-ready tasks
```
---
## Common Workflows
**Proven command combinations for common development scenarios:**
### New Project Setup
```bash
/sc:brainstorm "project concept" # Define requirements
→ /sc:design "system architecture" # Create technical design
→ /sc:workflow "implementation plan" # Generate development roadmap
→ /sc:save "project-plan" # Save session context
```
### Feature Development
```bash
/sc:load "project-context" # Load existing project
→ /sc:implement "feature name" # Build the feature
→ /sc:test --coverage # Validate with tests
→ /sc:document --type api # Generate documentation
```
### Code Quality Improvement
```bash
/sc:analyze --focus quality # Assess current state
→ /sc:cleanup --comprehensive # Clean technical debt
→ /sc:improve --preview # Preview improvements
→ /sc:test --coverage # Validate changes
```
### Bug Investigation
```bash
/sc:troubleshoot "issue description" # Diagnose the problem
→ /sc:analyze --focus problem-area # Deep analysis of affected code
→ /sc:improve --fix --safe-mode # Apply targeted fixes
→ /sc:test --related-tests # Verify resolution
```
### Pre-Production Checklist
```bash
/sc:analyze --focus security # Security audit
→ /sc:test --type e2e --comprehensive # Full E2E validation
→ /sc:build --optimize --target production # Production build
→ /sc:document --type deployment # Deployment documentation
```
---
## Full Command Reference
### Development Commands
| Command | Purpose | Auto-Activation | Best For |
|---------|---------|-----------------|----------|
| **workflow** | Implementation planning | Architect + PM, Sequential | Project roadmaps, sprint planning |
| **implement** | Feature development | Context specialists, Context7 + Magic | Full-stack features, API development |
| **build** | Project compilation | DevOps specialist | CI/CD, production builds |
| **design** | System architecture | Architect + UX, diagrams | API specs, database schemas |
#### `/sc:build` - Project Compilation
**Purpose**: Build and package projects with error handling
**Syntax**: `/sc:build` `[--optimize] [--target production] [--fix-errors]`
**Examples**: Production builds, dependency management, build optimization
**Common Issues**: Missing deps → auto-install, memory issues → optimized parameters
#### `/sc:design` - System Architecture
**Purpose**: Create technical designs and specifications
**Syntax**: `/sc:design "system description"` `[--type api|database|system] [--format openapi|mermaid]`
**Examples**: API specifications, database schemas, component architecture
**Output Formats**: Markdown, Mermaid diagrams, OpenAPI specs, ERD
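A sketch of format-specific invocations, using only the flags documented above (outputs are indicative):

```bash
/sc:design "order service API" --type api --format openapi    # → OpenAPI specification
/sc:design "order data model" --type database --format mermaid # → Mermaid ERD
```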
### Analysis Commands
| Command | Purpose | Auto-Activation | Best For |
|---------|---------|-----------------|----------|
| **analyze** | Code assessment | Analyzer + domain experts | Quality audits, security reviews |
| **troubleshoot** | Problem diagnosis | Analyzer + DevOps, Sequential | Bug investigation, performance issues |
| **explain** | Code explanation | Educational focus | Learning, code reviews |
#### `/sc:explain` - Code & Concept Explanation
**Purpose**: Educational explanations with progressive detail
**Syntax**: `/sc:explain "topic or file"` `[--level beginner|intermediate|expert]`
**Examples**: Code walkthroughs, concept clarification, pattern explanation
**Teaching Styles**: Code-walkthrough, concept, pattern, comparison, tutorial
### Quality Commands
| Command | Purpose | Auto-Activation | Best For |
|---------|---------|-----------------|----------|
| **improve** | Code enhancement | Analyzer, Morphllm | Performance optimization, refactoring |
| **cleanup** | Technical debt | Analyzer, Morphllm | Dead code removal, organization |
| **test** | Quality assurance | QA specialist, Playwright | Test automation, coverage analysis |
| **document** | Documentation | Documentation specialist | API docs, user guides |
#### `/sc:cleanup` - Technical Debt Reduction
**Purpose**: Remove dead code and optimize project structure
**Syntax**: `/sc:cleanup` `[--type imports|dead-code|formatting] [--confirm-before-delete]`
**Examples**: Import optimization, file organization, dependency cleanup
**Operations**: Dead code removal, import sorting, style formatting, file organization
### Project Management Commands
| Command | Purpose | Auto-Activation | Best For |
|---------|---------|-----------------|----------|
| **estimate** | Project estimation | Project Manager | Timeline planning, resource allocation |
| **task** | Task management | PM, Serena | Complex workflows, task tracking |
| **spawn** | Meta-orchestration | PM + multiple specialists | Large-scale projects, parallel execution |
#### `/sc:estimate` - Project Estimation
**Purpose**: Development estimates with complexity analysis
**Syntax**: `/sc:estimate "project description"` `[--detailed] [--team-size N]`
**Features**: Time estimates, complexity analysis, resource allocation, risk assessment
**Stability**: 🌱 Growing - Improving estimation accuracy with each release
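**Examples** (illustrative invocations built from the flags above; outputs indicative):

```bash
/sc:estimate "migrate auth to OAuth2" --detailed --team-size 3 # → phased estimate with risks
/sc:estimate "add CSV export to reports"                       # → quick complexity estimate
```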
#### `/sc:task` - Project Management
**Purpose**: Complex task workflow management
**Syntax**: `/sc:task "task description"` `[--breakdown] [--priority high|medium|low]`
**Features**: Task breakdown, priority management, cross-session tracking, dependency mapping
**Stability**: 🌱 Growing - Enhanced delegation patterns being refined
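**Examples** (hypothetical usage of the documented flags):

```bash
/sc:task "ship password reset flow" --breakdown --priority high # → prioritized subtasks
/sc:task "upgrade to latest React" --breakdown                  # → dependency-ordered task list
```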
#### `/sc:spawn` - Meta-System Orchestration
**Purpose**: Large-scale project orchestration with parallel execution
**Syntax**: `/sc:spawn "complex project"` `[--parallel] [--monitor]`
**Features**: Workflow orchestration, parallel execution, progress monitoring, resource management
**Stability**: 🌱 Growing - Advanced orchestration capabilities under development
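**Example** (an illustrative orchestration request):

```bash
/sc:spawn "monolith to microservices migration" --parallel --monitor
# → coordinated workstreams with progress monitoring
```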
### Utility Commands
| Command | Purpose | Auto-Activation | Best For |
|---------|---------|-----------------|----------|
| **git** | Version control | DevOps specialist | Commit management, branch strategies |
| **index** | Command discovery | Context analysis | Exploring capabilities, finding commands |
#### `/sc:git` - Version Control
**Purpose**: Intelligent Git operations with smart commit messages
**Syntax**: `/sc:git [operation]` `[--smart-messages] [--conventional]`
**Features**: Smart commit messages, branch management, conflict resolution, workflow optimization
**Stability**: ✅ Reliable - Proven commit message generation and workflow patterns
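**Examples** (sketched invocations; the operation names are illustrative):

```bash
/sc:git commit --smart-messages          # → context-aware commit message
/sc:git "merge feature branch" --conventional # → conventional-commit workflow
```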
#### `/sc:index` - Command Discovery
**Purpose**: Explore available commands and capabilities
**Syntax**: `/sc:index` `[--category development|quality] [--search "keyword"]`
**Features**: Command discovery, capability exploration, contextual recommendations, usage patterns
**Stability**: 🧪 Testing - Command categorization and search being refined
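**Examples** (illustrative discovery queries):

```bash
/sc:index --category quality   # → quality-related commands
/sc:index --search "test"      # → commands matching "test"
```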
### Session Commands
| Command | Purpose | Auto-Activation | Best For |
|---------|---------|-----------------|----------|
| **load** | Context loading | Context analysis, Serena | Session initialization, project onboarding |
| **save** | Session persistence | Session management, Serena | Checkpointing, context preservation |
| **reflect** | Task validation | Context analysis, Serena | Progress assessment, completion validation |
| **select-tool** | Tool optimization | Meta-analysis, all MCP | Performance optimization, tool selection |
#### `/sc:load` - Session Context Loading
**Purpose**: Initialize project context and session state
**Syntax**: `/sc:load [path]` `[--focus architecture|codebase] [--session "name"]`
**Features**: Project structure analysis, context restoration, session initialization, intelligent onboarding
**Stability**: 🔧 Functional - Core loading works, advanced context analysis improving
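**Examples** (hypothetical session initialization):

```bash
/sc:load src/ --focus architecture       # → architecture-oriented onboarding
/sc:load --session "payments-refactor"   # → restore a named session
```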
#### `/sc:save` - Session Persistence
**Purpose**: Save session context and progress
**Syntax**: `/sc:save "session-name"` `[--checkpoint] [--description "details"]`
**Features**: Session checkpointing, context preservation, progress tracking, cross-session continuity
**Stability**: 🔧 Functional - Basic persistence reliable, advanced features being enhanced
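**Examples** (illustrative checkpointing; session names are placeholders):

```bash
/sc:save "payments-refactor" --checkpoint                    # → checkpoint current progress
/sc:save "sprint-12" --description "auth flow complete"      # → annotated snapshot
```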
#### `/sc:reflect` - Task Reflection & Validation
**Purpose**: Analyze completion status and validate progress
**Syntax**: `/sc:reflect` `[--type completion|progress] [--task "task-name"]`
**Features**: Progress analysis, completion validation, quality assessment, next steps recommendation
**Stability**: 🌱 Growing - Analysis patterns being refined for better insights
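**Examples** (sketched validation requests):

```bash
/sc:reflect --type completion --task "auth flow" # → completion validation
/sc:reflect --type progress                      # → overall progress assessment
```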
#### `/sc:select-tool` - Intelligent Tool Selection
**Purpose**: Optimize MCP tool selection based on complexity analysis
**Syntax**: `/sc:select-tool "operation description"` `[--analyze-complexity] [--recommend]`
**Features**: Complexity analysis, tool recommendation, MCP coordination, optimization strategies, resource planning
**Stability**: 🌱 Growing - Tool selection algorithms being optimized
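**Example** (a hypothetical complexity query):

```bash
/sc:select-tool "bulk rename across 40 files" --analyze-complexity --recommend
# → complexity assessment with a recommended MCP tool
```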
---
## Command Index
### By Category
**🚀 Project Initiation**
- `brainstorm` - Interactive discovery
- `design` - System architecture
- `workflow` - Implementation planning
**⚡ Development**
- `implement` - Feature development
- `build` - Project compilation
- `git` - Version control
**🔍 Analysis & Quality**
- `analyze` - Code assessment
- `troubleshoot` - Problem diagnosis
- `explain` - Code explanation
- `improve` - Code enhancement
- `cleanup` - Technical debt
- `test` - Quality assurance
**📝 Documentation**
- `document` - Documentation generation
**📊 Project Management**
- `estimate` - Project estimation
- `task` - Task management
- `spawn` - Meta-orchestration
**🧠 Session & Intelligence**
- `load` - Context loading
- `save` - Session persistence
- `reflect` - Task validation
- `select-tool` - Tool optimization
**🔧 Utility**
- `index` - Command discovery
### By Maturity Level
**🔥 Production Ready** (Consistent, reliable results)
- `brainstorm`, `analyze`, `implement`, `troubleshoot`
**✅ Reliable** (Well-tested, stable features)
- `workflow`, `design`, `test`, `document`, `git`
**🔧 Functional** (Core features work, enhancements ongoing)
- `improve`, `cleanup`, `build`, `load`, `save`
**🌱 Growing** (Rapid improvement, usable but evolving)
- `spawn`, `task`, `estimate`, `reflect`, `select-tool`
**🧪 Testing** (Experimental features, feedback welcome)
- `index`, `explain`
---
## 🚨 Quick Troubleshooting
### Common Issues (< 2 minutes)
- **Command not found**: Check `/sc:` prefix and SuperClaude installation
- **Invalid flag**: Verify flag against `python3 -m SuperClaude --help`
- **MCP server error**: Check Node.js installation and server configuration
- **Permission denied**: Run `chmod +x` or check file permissions
### Immediate Fixes
- **Reset session**: `/sc:load` to reinitialize
- **Clear cache**: Remove `~/.claude/cache/` directory
- **Restart Claude Code**: Exit and restart application
- **Check status**: `python3 -m SuperClaude --version`
## Troubleshooting
### Command-Specific Issues
**Command Not Recognized:**
```bash
# Problem: "/sc:analyze not found"
# Quick Fix: Check command spelling and prefix
/sc:help commands # List all available commands
python3 -m SuperClaude --help # Verify installation
```
**Command Hangs or No Response:**
```bash
# Problem: Command starts but never completes
# Quick Fix: Check for dependency issues
/sc:command --timeout 30 # Set explicit timeout
/sc:command --no-mcp # Try without MCP servers
ps aux | grep SuperClaude # Check for hung processes
```
**Invalid Flag Combinations:**
```bash
# Problem: "Flag conflict detected"
# Quick Fix: Check flag compatibility
/sc:help flags # List valid flags
/sc:command --help # Command-specific flags
# Use simpler flag combinations or single flags
```
### MCP Server Issues
**Server Connection Failures:**
```bash
# Problem: MCP servers not responding
# Quick Fix: Verify server status and restart
SuperClaude status --mcp # Check all servers
/sc:command --no-mcp # Bypass MCP temporarily
node --version # Verify Node.js v16+
npm cache clean --force # Clear NPM cache
```
**Magic/Morphllm API Key Issues:**
```bash
# Problem: "API key required" errors
# Expected: These servers need paid API keys
export TWENTYFIRST_API_KEY="your_key" # For Magic
export MORPH_API_KEY="your_key" # For Morphllm
# Or use: /sc:command --no-mcp to skip paid services
```
### Performance Issues
**Slow Command Execution:**
```bash
# Problem: Commands taking >30 seconds
# Quick Fix: Reduce scope and complexity
/sc:command --scope file # Limit to single file
/sc:command --concurrency 1 # Reduce parallel ops
/sc:command --uc # Use compressed output
/sc:command --no-mcp # Native execution only
```
**Memory/Resource Exhaustion:**
```bash
# Problem: System running out of memory
# Quick Fix: Resource management
/sc:command --memory-limit 1024 # Limit to 1GB
/sc:command --scope module # Reduce analysis scope
/sc:command --safe-mode # Conservative execution
killall node # Reset MCP servers
```
### Error Code Reference
| Code | Meaning | Quick Fix |
|------|---------|-----------|
| **E001** | Command syntax error | Check command spelling and `/sc:` prefix |
| **E002** | Flag not recognized | Verify flag with `/sc:help flags` |
| **E003** | MCP server connection failed | Check Node.js and run `npm cache clean --force` |
| **E004** | Permission denied | Check file permissions or run with appropriate access |
| **E005** | Timeout exceeded | Reduce scope with `--scope file` or increase `--timeout` |
| **E006** | Memory limit exceeded | Use `--memory-limit` or `--scope module` |
| **E007** | Invalid project structure | Verify you're in a valid project directory |
| **E008** | Dependency missing | Check installation with `SuperClaude --version` |
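For scripting around these codes, a minimal sketch (a hypothetical helper — SuperClaude does not ship this function; the messages simply mirror the table above):

```bash
#!/bin/sh
# Hypothetical lookup: map a SuperClaude error code to the quick fix
# from the table above. Purely illustrative; not part of SuperClaude.
explain_error() {
  case "$1" in
    E001) echo "Syntax error: check spelling and the /sc: prefix" ;;
    E002) echo "Unknown flag: verify with /sc:help flags" ;;
    E003) echo "MCP connection failed: check Node.js, then 'npm cache clean --force'" ;;
    E004) echo "Permission denied: check file permissions" ;;
    E005) echo "Timeout: reduce scope with --scope file or raise --timeout" ;;
    E006) echo "Memory limit: use --memory-limit or --scope module" ;;
    E007) echo "Invalid project structure: run inside a project directory" ;;
    E008) echo "Missing dependency: check with 'SuperClaude --version'" ;;
    *)    echo "Unknown code: $1" ;;
  esac
}

explain_error "${1:-E003}"
```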
### Progressive Support Levels
**Level 1: Quick Fix (< 2 min)**
- Use the Common Issues section above
- Try immediate fixes like restart or flag changes
- Check basic installation and dependencies
**Level 2: Detailed Help (5-15 min)**
```bash
# Comprehensive diagnostics
SuperClaude diagnose --verbose
/sc:help troubleshoot
cat ~/.claude/logs/superclaude.log | tail -50
```
- See [Common Issues Guide](../Reference/common-issues.md) for detailed troubleshooting
**Level 3: Expert Support (30+ min)**
```bash
# Deep system analysis
SuperClaude diagnose --full-system
strace -e trace=file /sc:command 2>&1 | grep ENOENT
lsof | grep SuperClaude
# Check GitHub Issues for known problems
```
- See [Diagnostic Reference Guide](../Reference/diagnostic-reference.md) for advanced procedures
**Level 4: Community Support**
- Report issues at [GitHub Issues](https://github.com/SuperClaude-Org/SuperClaude_Framework/issues)
- Include diagnostic output from Level 3
- Describe steps to reproduce the problem
### Success Validation
After applying fixes, test with:
- [ ] `python3 -m SuperClaude --version` (should show version)
- [ ] `/sc:analyze README.md` (should complete without errors)
- [ ] Check MCP servers respond: `SuperClaude status --mcp`
- [ ] Verify flags work: `/sc:help flags`
- [ ] Test basic workflow: `/sc:brainstorm "test"` → should ask questions
## Quick Troubleshooting (Legacy)
- **Command not found** → Check installation: `SuperClaude --version`
- **Flag error** → Verify against [FLAGS.md](flags.md)
- **MCP error** → Check server configuration: `SuperClaude status --mcp`
- **No output** → Restart Claude Code session
- **Slow performance** → Use `--scope file` or `--no-mcp`
### Common Issues
**Command Not Recognized**
```bash
# Check SuperClaude installation
SuperClaude --version
# Verify component installation
SuperClaude install --list-components
# Restart Claude Code session
```
**Slow Performance**
```bash
# Limit analysis scope
/sc:analyze src/ --scope file
# Use specific MCP servers only
/sc:implement "feature" --c7 --seq # Instead of --all-mcp
# Reduce concurrency
/sc:improve . --concurrency 2
```
**MCP Server Connection Issues**
```bash
# Check server status
ls ~/.claude/.claude.json
# Reinstall MCP components
SuperClaude install --components mcp --force
# Use native execution fallback
/sc:analyze . --no-mcp
```
**Scope Management Issues**
```bash
# Control analysis depth
/sc:analyze . --scope project # Instead of system-wide
# Focus on specific areas
/sc:analyze --focus security # Instead of comprehensive
# Preview before execution
/sc:improve . --dry-run --preview
```
### Error Recovery Patterns
**Build Failures**
```bash
/sc:troubleshoot --type build --verbose
→ /sc:build --fix-errors --deps-install
→ /sc:test --smoke # Quick validation
```
**Test Failures**
```bash
/sc:analyze --focus testing # Identify test issues
→ /sc:test --fix --preview # Preview test fixes
→ /sc:test --coverage # Verify repairs
```
**Memory/Resource Issues**
```bash
/sc:select-tool "task" --analyze-complexity # Check resource needs
→ /sc:task "subtask" --scope module # Break into smaller pieces
→ /sc:spawn "large-task" --parallel --concurrency 2 # Controlled parallelism
```
---
## Getting Help
**In-Session Help**
- `/sc:index --help` - Command discovery and help
- `/sc:explain "command-name"` - Detailed command explanation
- `/sc:brainstorm --strategy systematic` - Systematic problem exploration
**Documentation**
- [Quick Start Guide](../Getting-Started/quick-start.md) - Essential setup and first steps
- [Best Practices](../Reference/best-practices.md) - Optimization and workflow patterns
- [Examples Cookbook](../Reference/examples-cookbook.md) - Real-world usage patterns
**Community Support**
- [GitHub Issues](https://github.com/SuperClaude-Org/SuperClaude_Framework/issues) - Bug reports and feature requests
- [Discussions](https://github.com/SuperClaude-Org/SuperClaude_Framework/discussions) - Community help and patterns
---
## 🎯 Comprehensive Testing Procedures
### Essential Commands Verification
Run these tests to ensure all essential commands work correctly:
```bash
# Test 1: Discovery and Planning
/sc:brainstorm "test mobile app"
# Expected: 3-5 discovery questions about users, features, platform
# Test 2: Implementation
/sc:implement "simple hello function"
# Expected: Working code that compiles/runs without errors
# Test 3: Analysis
/sc:analyze . --focus quality
# Expected: Quality assessment with specific recommendations
# Test 4: Troubleshooting
/sc:troubleshoot "simulated performance issue"
# Expected: Systematic investigation approach with hypotheses
# Test 5: Testing
/sc:test --coverage
# Expected: Test execution or test planning with coverage analysis
# Test 6: Code Enhancement
/sc:improve README.md --preview
# Expected: Improvement suggestions with preview of changes
# Test 7: Documentation
/sc:document . --type api
# Expected: API documentation or documentation planning
# Test 8: Workflow Planning
/sc:workflow "user authentication feature"
# Expected: Structured implementation plan with phases
```
### Success Benchmarks
- **Response Time**: Commands should respond within 10 seconds
- **Accuracy**: Domain specialists should activate for relevant requests
- **Completeness**: Outputs should include actionable next steps
- **Consistency**: Similar requests should produce consistent approaches
### Performance Validation
```bash
# Test resource usage
time /sc:analyze large-project/
# Expected: Completion within reasonable time based on project size
# Test MCP coordination
/sc:implement "React component" --verbose
# Expected: Magic + Context7 activation visible in output
# Test flag override
/sc:analyze . --no-mcp
# Expected: Native execution only, faster response
```
---
**Remember**: SuperClaude learns from your usage patterns. Start with the [Essential Commands](#essential-commands), explore [Common Workflows](#common-workflows), and gradually discover advanced capabilities. Use `/sc:index` whenever you need guidance.

Docs/User-Guide/flags.md (new file, 974 lines)
# SuperClaude Framework Flags User Guide 🏁
## ✅ Verification Status
- **SuperClaude Version**: v4.0+ Compatible
- **Last Tested**: 2025-01-16
- **Test Environment**: Linux/Windows/macOS
- **Flag Syntax**: ✅ All Verified
## 🧪 Testing Your Flag Setup
Before using flags, verify they work correctly:
```bash
# Test basic flag recognition
/sc:analyze . --help
# Expected: Shows available flags without errors
# Test auto-flag activation
/sc:implement "test component"
# Expected: Magic + Context7 should auto-activate for UI requests
# Test manual flag override
/sc:analyze . --no-mcp
# Expected: Native execution only, no MCP servers
```
**If tests fail**: Check [Installation Guide](../Getting-Started/installation.md) for flag system setup
## 🤖 Most Flags Activate Automatically - Don't Stress About It!
SuperClaude's intelligent flag system automatically detects task complexity and context, then activates appropriate flags behind the scenes. You get optimized performance without memorizing flag combinations.
**Intelligent Auto-Activation**: Type `/sc:analyze large-codebase/``--think-hard` + `--serena` + `--orchestrate` activate automatically. Run complex multi-file operations → `--task-manage` + `--delegate` optimize execution. Work under resource pressure → `--uc` compresses output.
**Manual Override Available**: When you want specific behavior, flags provide precise control. But in most cases, SuperClaude's automatic selection delivers optimal results.
---
## 🚀 Just Try These (No Flag Knowledge Required)
**Commands Work Great Without Flags:**
```bash
# These automatically get optimal flags
/sc:brainstorm "mobile fitness app"
# → Auto-activates: --brainstorm, --think, --context7
/sc:analyze src/ --focus security
# → Auto-activates: --think-hard, --serena, --orchestrate
/sc:implement "user authentication system"
# → Auto-activates: --task-manage, --c7, --magic, --validate
/sc:troubleshoot "API performance issues"
# → Auto-activates: --think-hard, --seq, --serena, --introspect
/sc:improve legacy-code/ --focus maintainability
# → Auto-activates: --task-manage, --morph, --serena, --safe-mode
```
**Behind-the-Scenes Optimization:**
- **Context Analysis**: Keywords trigger appropriate specialists and tools
- **Complexity Detection**: Multi-file operations get coordination flags
- **Resource Awareness**: System load triggers efficiency optimizations
- **Quality Gates**: Risky operations automatically enable safety flags
- **Performance Tuning**: Optimal tool combinations selected automatically
**When Manual Flags Help:**
- Override automatic detection: `--no-mcp` for lightweight execution
- Force specific behavior: `--uc` for compressed output
- Learning and exploration: `--introspect` to see reasoning
- Resource control: `--concurrency 2` to limit parallel operations
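Taken together, these overrides look like the following (illustrative invocations; the comments describe intended behavior, not guaranteed output):

```bash
# Bypass auto-detected MCP servers for a quick, lightweight answer
/sc:explain "array destructuring" --no-mcp

# Force compressed, symbol-enhanced output regardless of context pressure
/sc:analyze src/utils/ --uc

# Expose the reasoning process while learning a new codebase
/sc:analyze unfamiliar-module/ --introspect

# Cap parallel operations on a resource-constrained machine
/sc:improve components/ --concurrency 2
```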
---
## What Are Flags? 🤔
**Flags are Modifiers** that adjust SuperClaude's behavior for specific contexts and requirements:
**Flag Syntax:**
```bash
/sc:command [args] --flag-name [value]
# Examples
/sc:analyze src/ --focus security --depth deep
/sc:implement "auth" --brainstorm --task-manage --validate
/sc:troubleshoot issue/ --think-hard --uc --concurrency 3
```
**Two Types of Activation:**
1. **Automatic** (90% of use): SuperClaude detects context and activates optimal flags
2. **Manual** (10% of use): You override or specify exact behavior needed
**Flag Functions:**
- **Behavioral Modes**: `--brainstorm`, `--introspect`, `--task-manage`
- **Tool Selection**: `--c7`, `--seq`, `--magic`, `--morph`, `--serena`, `--play`
- **Analysis Depth**: `--think`, `--think-hard`, `--ultrathink`
- **Efficiency Control**: `--uc`, `--concurrency`, `--scope`
- **Safety & Quality**: `--safe-mode`, `--validate`, `--dry-run`
**Auto-Activation vs Manual Override:**
- **Auto**: `/sc:implement "React dashboard"` → Magic + Context7 + task coordination
- **Manual**: `/sc:implement "simple function" --no-mcp` → Native-only execution
## Flag Categories 📂
### Planning & Analysis Flags 🧠
**Thinking Depth Control:**
**`--think`** - Standard Analysis (~4K tokens)
- **Auto-Triggers**: Multi-component analysis, moderate complexity
- **Manual Use**: Force structured thinking for simple tasks
- **Enables**: Sequential MCP for systematic reasoning
#### Success Criteria
- [ ] Sequential MCP server activates (check status output)
- [ ] Analysis follows structured methodology with clear sections
- [ ] Output includes evidence-based reasoning and conclusions
- [ ] Token usage approximately 4K or less
```bash
/sc:analyze auth-system/ --think
# → Structured analysis with evidence-based reasoning
```
**Verify:** Sequential MCP should show in status output
**Test:** Output should have systematic structure with hypothesis testing
**Check:** Analysis quality should be notably higher than basic mode
**`--think-hard`** - Deep Analysis (~10K tokens)
- **Auto-Triggers**: Architectural analysis, system-wide dependencies
- **Manual Use**: Force comprehensive analysis
- **Enables**: Sequential + Context7 for deep understanding
```bash
/sc:troubleshoot "performance degradation" --think-hard
# → Comprehensive root cause analysis with framework patterns
```
**`--ultrathink`** - Maximum Analysis (~32K tokens)
- **Auto-Triggers**: Critical system redesign, legacy modernization
- **Manual Use**: Force maximum analytical depth
- **Enables**: All MCP servers for comprehensive capability
```bash
/sc:analyze enterprise-architecture/ --ultrathink
# → Maximum depth with all tools and reasoning capacity
```
**Mode Activation Flags:**
**`--brainstorm`** / **`--bs`** - Interactive Discovery
- **Auto-Triggers**: Vague requests, exploration keywords
- **Manual Use**: Force collaborative requirement discovery
```bash
/sc:implement "better user experience" --brainstorm
# → Socratic questions to clarify requirements before implementation
```
**`--introspect`** - Reasoning Transparency
- **Auto-Triggers**: Error recovery, learning contexts
- **Manual Use**: Expose decision-making process for learning
```bash
/sc:analyze complex-algorithm/ --introspect
# → Transparent reasoning with 🤔, 🎯, ⚡ markers
```
### Efficiency & Control Flags ⚡
**Output Compression:**
**`--uc`** / **`--ultracompressed`** - Token Efficiency (30-50% reduction)
- **Auto-Triggers**: Context usage >75%, large operations, resource pressure
- **Manual Use**: Force compressed communication
- **Effect**: Symbol-enhanced output while preserving ≥95% information quality
```bash
/sc:analyze large-project/ --uc
# → "auth.js:45 → 🛡️ sec risk in user val()" vs verbose explanations
```
**`--token-efficient`** - Moderate Compression
- **Auto-Triggers**: Medium resource pressure, efficiency requirements
- **Manual Use**: Balance between detail and efficiency
```bash
/sc:troubleshoot "memory leak" --token-efficient
# → Structured but concise problem analysis
```
**Execution Control:**
**`--concurrency [n]`** - Parallel Operation Control (1-15)
- **Auto-Triggers**: Resource optimization needs
- **Manual Use**: Control system load and parallel processing
```bash
/sc:improve large-codebase/ --concurrency 3
# → Limit to 3 parallel operations for resource management
```
**`--scope [file|module|project|system]`** - Analysis Boundary
- **Auto-Triggers**: Analysis boundary detection
- **Manual Use**: Explicitly define operational scope
```bash
/sc:analyze src/auth/ --scope module
# → Focus analysis on authentication module only
```
**`--loop`** / **`--iterations [n]`** - Iterative Improvement
- **Auto-Triggers**: "polish", "refine", "enhance", "improve" keywords
- **Manual Use**: Force iterative improvement cycles
```bash
/sc:improve user-interface/ --loop --iterations 3
# → 3 improvement cycles with validation gates
```
### Focus & Specialization Flags 🎯
**Domain-Specific Analysis:**
**`--focus [domain]`** - Target Expertise Application
- **Available Domains**: `performance`, `security`, `quality`, `architecture`, `accessibility`, `testing`
- **Auto-Triggers**: Domain-specific keywords and file patterns
- **Manual Use**: Force specific analytical perspective
```bash
# Security-focused analysis
/sc:analyze payment-system/ --focus security
# → Security specialist + vulnerability assessment + compliance validation
# Performance optimization focus
/sc:improve api-endpoints/ --focus performance
# → Performance engineer + bottleneck analysis + optimization patterns
# Architecture evaluation
/sc:analyze microservices/ --focus architecture
# → System architect + design pattern analysis + scalability assessment
# Quality improvement
/sc:review codebase/ --focus quality
# → Quality engineer + code smell detection + maintainability analysis
```
**Task Management:**
**`--task-manage`** - Complex Coordination
- **Auto-Triggers**: >3 steps, >2 directories, >3 files
- **Manual Use**: Force hierarchical task organization for simple tasks
```bash
/sc:implement "simple feature" --task-manage
# → Phase-based approach with progress tracking even for simple tasks
```
**`--delegate [auto|files|folders]`** - Orchestration Strategy
- **Auto-Triggers**: >7 directories OR >50 files OR complexity >0.8
- **Manual Use**: Control delegation strategy
```bash
/sc:refactor enterprise-codebase/ --delegate folders
# → Delegate by directory structure for systematic organization
```
### Tool Integration Flags 🛠️
**MCP Server Control:**
**Individual Server Flags:**
- **`--c7`** / **`--context7`**: Documentation and framework patterns
- **`--seq`** / **`--sequential`**: Structured multi-step reasoning
- **`--magic`**: Modern UI component generation
- **`--morph`** / **`--morphllm`**: Pattern-based code transformation
- **`--serena`**: Semantic understanding and project memory
- **`--play`** / **`--playwright`**: Browser automation and testing
```bash
# Specific server combinations
/sc:implement "dashboard" --magic --c7
# → UI generation + framework patterns
/sc:analyze complex-issue/ --seq --serena
# → Structured reasoning + project context
/sc:improve legacy-code/ --morph --serena --seq
# → Pattern transformation + context + systematic analysis
```
**Server Group Control:**
**`--all-mcp`** - Maximum Capability
- **Auto-Triggers**: Maximum complexity scenarios, multi-domain problems
- **Manual Use**: Force all tools for comprehensive capability
```bash
/sc:implement "enterprise-platform" --all-mcp
# → All 6 MCP servers coordinated for maximum capability
```
**`--no-mcp`** - Native-Only Execution
- **Auto-Triggers**: Performance priority, simple tasks
- **Manual Use**: Force lightweight execution without MCP overhead
```bash
/sc:explain "simple function" --no-mcp
# → Fast native response without MCP server coordination
```
**Tool Optimization:**
**`--orchestrate`** - Intelligent Tool Selection
- **Auto-Triggers**: Multi-tool operations, performance constraints, >3 files
- **Manual Use**: Force optimal tool coordination
```bash
/sc:refactor components/ --orchestrate
# → Optimal tool selection and parallel execution coordination
```
### Safety & Validation Flags 🛡️
**Risk Management:**
**`--validate`** - Pre-execution Risk Assessment
- **Auto-Triggers**: Risk score >0.7, resource usage >75%, production environment
- **Manual Use**: Force validation gates for any operation
```bash
/sc:implement "payment-processing" --validate
# → Risk assessment + validation gates before implementation
```
**`--safe-mode`** - Maximum Conservative Execution
- **Auto-Triggers**: Resource usage >85%, production environment, critical operations
- **Manual Use**: Force maximum safety protocols
- **Auto-Enables**: `--uc` for efficiency, `--validate` for safety
```bash
/sc:improve production-database/ --safe-mode
# → Conservative execution + auto-backup + rollback planning
```
**Preview & Testing:**
**`--dry-run`** - Preview Without Execution
- **Manual Use**: Preview changes without applying them
```bash
/sc:cleanup legacy-code/ --dry-run
# → Show what would be cleaned up without making changes
```
**`--backup`** - Force Backup Creation
- **Auto-Triggers**: Risky operations, file modifications
- **Manual Use**: Ensure backup creation before operations
```bash
/sc:refactor critical-module/ --backup
# → Create backup before refactoring operations
```
**`--tests-required`** - Mandate Test Validation
- **Auto-Triggers**: Critical code changes, production modifications
- **Manual Use**: Force test execution before proceeding
```bash
/sc:improve auth-system/ --tests-required
# → Run tests and require passing before improvement application
```
### Execution Control Flags 🎛️
**Workflow Management:**
**`--parallel`** - Force Parallel Execution
- **Auto-Triggers**: Independent operations, >3 files, multi-tool scenarios
- **Manual Use**: Force parallel processing for eligible operations
```bash
/sc:analyze multiple-modules/ --parallel
# → Analyze modules concurrently instead of sequentially
```
**`--sequential`** - Force Sequential Execution
- **Manual Use**: Override parallel processing for dependency reasons
```bash
/sc:implement "multi-step-feature" --sequential
# → Force step-by-step execution with dependencies
```
**Resource Control:**
**`--memory-limit [MB]`** - Memory Usage Control
- **Auto-Triggers**: Large operations, resource constraints
- **Manual Use**: Explicit memory management
```bash
/sc:analyze large-dataset/ --memory-limit 2048
# → Limit analysis to 2GB memory usage
```
**`--timeout [seconds]`** - Operation Timeout
- **Auto-Triggers**: Complex operations, MCP server timeouts
- **Manual Use**: Set explicit timeout boundaries
```bash
/sc:troubleshoot "complex-performance-issue" --timeout 300
# → 5-minute timeout for troubleshooting analysis
```
**Output Control:**
**`--format [text|json|html|markdown]`** - Output Format
- **Auto-Triggers**: Analysis export, documentation generation
- **Manual Use**: Specify exact output format
```bash
/sc:analyze api-performance/ --format json --export report.json
# → JSON-formatted analysis results for processing
```
**`--verbose`** / **`--quiet`** - Verbosity Control
- **Manual Use**: Override automatic verbosity decisions
```bash
/sc:build project/ --verbose
# → Detailed build output and progress information
/sc:test suite/ --quiet
# → Minimal output, results only
```
## Common Flag Combinations 🔗
**Development Workflow Patterns:**
**Full Analysis & Improvement:**
```bash
/sc:analyze codebase/ --think-hard --all-mcp --orchestrate
# → Deep analysis + all tools + optimal coordination
```
**Safe Production Changes:**
```bash
/sc:improve production-api/ --safe-mode --validate --backup --tests-required
# → Maximum safety protocols for production modifications
```
**Rapid Prototyping:**
```bash
/sc:implement "quick-feature" --magic --c7 --no-validate
# → Fast UI generation + patterns without safety overhead
```
**Large-Scale Refactoring:**
```bash
/sc:refactor legacy-system/ --task-manage --serena --morph --parallel --backup
# → Systematic coordination + context + transformation + safety
```
**Performance Investigation:**
```bash
/sc:troubleshoot "slow-performance" --think-hard --focus performance --seq --play
# → Deep analysis + performance focus + reasoning + browser testing
```
**Learning & Understanding:**
```bash
/sc:analyze new-codebase/ --introspect --brainstorm --c7 --think
# → Transparent reasoning + discovery + documentation + analysis
```
**Resource-Constrained Environments:**
```bash
/sc:implement "feature" --uc --concurrency 1 --no-mcp --scope file
# → Compressed output + limited resources + lightweight execution
```
**Quality Assurance Workflow:**
```bash
/sc:review code-changes/ --focus quality --validate --tests-required --think
# → Quality analysis + validation + testing + structured reasoning
```
**Documentation Generation:**
```bash
/sc:document api/ --c7 --magic --format markdown --focus accessibility
# → Documentation patterns + UI examples + accessible format
```
**Complex Architecture Design:**
```bash
/sc:design "microservices-platform" --ultrathink --brainstorm --all-mcp --orchestrate
# → Maximum analysis + discovery + all tools + optimal coordination
```
## Flag Reference Quick Cards 📋
### 🧠 Analysis & Thinking Flags
| Flag | Purpose | Auto-Trigger | Token Impact |
|------|---------|--------------|--------------|
| `--think` | Standard analysis | Multi-component tasks | ~4K tokens |
| `--think-hard` | Deep analysis | Architectural tasks | ~10K tokens |
| `--ultrathink` | Maximum analysis | Critical system work | ~32K tokens |
| `--brainstorm` | Interactive discovery | Vague requirements | Variable |
| `--introspect` | Reasoning transparency | Learning contexts | +10% detail |
### ⚡ Efficiency & Performance Flags
| Flag | Purpose | Auto-Trigger | Performance Impact |
|------|---------|--------------|-------------------|
| `--uc` | Token compression | >75% context usage | 30-50% reduction |
| `--token-efficient` | Moderate compression | Resource pressure | 15-30% reduction |
| `--concurrency N` | Parallel control | Multi-file ops | +45% speed |
| `--orchestrate` | Tool optimization | Complex coordination | +30% efficiency |
| `--scope [level]` | Boundary control | Analysis scope | Focused execution |
### 🛠️ Tool Integration Flags
| Flag | MCP Server | Auto-Trigger | Best For |
|------|------------|--------------|----------|
| `--c7` / `--context7` | Context7 | Library imports | Documentation, patterns |
| `--seq` / `--sequential` | Sequential | Complex debugging | Systematic reasoning |
| `--magic` | Magic | UI requests | Component generation |
| `--morph` / `--morphllm` | Morphllm | Multi-file edits | Pattern transformation |
| `--serena` | Serena | Symbol operations | Project memory |
| `--play` / `--playwright` | Playwright | Browser testing | E2E automation |
| `--all-mcp` | All servers | Max complexity | Comprehensive capability |
| `--no-mcp` | None | Simple tasks | Lightweight execution |
### 🎯 Focus & Specialization Flags
| Flag | Domain | Expert Activation | Use Case |
|------|--------|------------------|----------|
| `--focus security` | Security | Security engineer | Vulnerability analysis |
| `--focus performance` | Performance | Performance engineer | Optimization |
| `--focus quality` | Quality | Quality engineer | Code review |
| `--focus architecture` | Architecture | System architect | Design analysis |
| `--focus accessibility` | Accessibility | UX specialist | Compliance validation |
| `--focus testing` | Testing | QA specialist | Test strategy |
### 🛡️ Safety & Control Flags
| Flag | Purpose | Auto-Trigger | Safety Level |
|------|---------|--------------|--------------|
| `--safe-mode` | Maximum safety | Production ops | Maximum |
| `--validate` | Risk assessment | High-risk ops | High |
| `--backup` | Force backup | File modifications | Standard |
| `--dry-run` | Preview only | Manual testing | Preview |
| `--tests-required` | Mandate testing | Critical changes | Validation |
### 📋 Workflow & Task Flags
| Flag | Purpose | Auto-Trigger | Coordination |
|------|---------|--------------|--------------|
| `--task-manage` | Hierarchical organization | >3 steps | Phase-based |
| `--delegate [mode]` | Sub-task routing | >50 files | Intelligent routing |
| `--loop` | Iterative cycles | "improve" keywords | Quality cycles |
| `--iterations N` | Cycle count | Specific improvements | Controlled iteration |
| `--parallel` | Force concurrency | Independent ops | Performance |
## Advanced Flag Usage 🚀
### Context-Aware Flag Selection
**Adaptive Flagging Based on Project Type:**
**React/Frontend Projects:**
```bash
# Automatically optimized for React development
/sc:implement "user-dashboard"
# → Auto-flags: --magic --c7 --focus accessibility --orchestrate
# Manual optimization for specific needs
/sc:implement "dashboard" --magic --c7 --play --focus accessibility
# → UI generation + patterns + testing + accessibility validation
```
**Backend/API Projects:**
```bash
# Automatically optimized for backend development
/sc:implement "payment-api"
# → Auto-flags: --focus security --validate --c7 --seq
# Manual security-first approach
/sc:implement "api" --focus security --validate --backup --tests-required
# → Security analysis + validation + safety protocols
```
**Legacy Modernization:**
```bash
# Complex legacy work gets automatic coordination
/sc:improve legacy-monolith/
# → Auto-flags: --task-manage --serena --morph --think-hard --backup
# Manual control for specific modernization strategy
/sc:improve legacy/ --ultrathink --task-manage --serena --morph --safe-mode
# → Maximum analysis + coordination + transformation + safety
```
### Flag Precedence & Conflict Resolution
**Priority Hierarchy:**
1. **Safety First**: `--safe-mode` > `--validate` > optimization flags
2. **Explicit Override**: User flags > auto-detection
3. **Depth Hierarchy**: `--ultrathink` > `--think-hard` > `--think`
4. **MCP Control**: `--no-mcp` overrides all individual MCP flags
5. **Scope Precedence**: `system` > `project` > `module` > `file`
**Conflict Resolution Examples:**
```bash
# Safety overrides efficiency
/sc:implement "critical-feature" --uc --safe-mode
# → Result: Safe mode wins, auto-enables backup and validation
# Explicit scope overrides auto-detection
/sc:analyze large-project/ --scope file target.js
# → Result: Only analyzes target.js despite project size
# No-MCP overrides individual server flags
/sc:implement "feature" --magic --c7 --no-mcp
# → Result: No MCP servers used, native execution only
```
### Dynamic Flag Adaptation
**Resource-Responsive Flagging:**
```bash
# System automatically adapts based on available resources
/sc:analyze enterprise-codebase/
# → High resources: --all-mcp --parallel --think-hard
# → Medium resources: --c7 --seq --serena --think
# → Low resources: --no-mcp --uc --scope module
```
**Complexity-Driven Selection:**
```bash
# Flags scale with detected complexity
/sc:implement "simple helper function"
# → Auto-flags: minimal, fast execution
/sc:implement "microservices authentication"
# → Auto-flags: --ultrathink --all-mcp --task-manage --validate --orchestrate
```
### Expert Flag Patterns
**Security-First Development:**
```bash
# Progressive security validation
/sc:implement "auth-system" --focus security --validate --tests-required
/sc:review "payment-code" --focus security --think-hard --backup
/sc:analyze "user-data" --focus security --all-mcp --safe-mode
```
**Performance Optimization Workflow:**
```bash
# Systematic performance improvement
/sc:analyze --focus performance --think-hard --seq --play
/sc:improve --focus performance --morph --parallel --validate
/sc:test --focus performance --play --format json --export metrics.json
```
**Learning & Discovery Patterns:**
```bash
# Understanding complex systems
/sc:load new-codebase/ --introspect --brainstorm --serena
/sc:analyze architecture/ --introspect --think-hard --c7 --all-mcp
/sc:explain concepts/ --introspect --c7 --focus accessibility
```
## Flag Troubleshooting 🔧
## 🚨 Quick Troubleshooting
### Common Issues (< 2 minutes)
- **Flag not recognized**: Check spelling and verify against `python3 -m SuperClaude --help`
- **MCP flag failures**: Check Node.js installation and server configuration
- **Auto-flags wrong**: Use manual override with `--no-mcp` or specific flags
- **Performance degradation**: Reduce complexity with `--scope file` or `--concurrency 1`
- **Flag conflicts**: Check flag priority rules and use single flags
### Immediate Fixes
- **Reset flags**: Remove all flags and let auto-detection work
- **Check compatibility**: Use `/sc:help flags` for valid combinations
- **Restart session**: Exit and restart Claude Code to reset flag state
- **Verify setup**: Run `SuperClaude status --flags` to check flag system
### Flag-Specific Troubleshooting
**Flag Not Recognized:**
```bash
# Problem: "Unknown flag --invalid-flag"
# Quick Fix: Check flag spelling and availability
/sc:help flags # List all valid flags
python3 -m SuperClaude --help flags # System-level flag help
# Common typos: --brainstrom → --brainstorm, --seq → --sequential
```
**MCP Flag Issues:**
```bash
# Problem: --magic, --morph, --c7 not working
# Quick Fix: Check MCP server status
SuperClaude status --mcp # Verify server connections
node --version # Ensure Node.js v16+
npm cache clean --force # Clear package cache
/sc:command --no-mcp # Bypass MCP temporarily
```
**Flag Combination Conflicts:**
```bash
# Problem: "Flag conflict: --all-mcp and --no-mcp"
# Quick Fix: Use flag priority rules
/sc:command --no-mcp # --no-mcp overrides --all-mcp
/sc:command --ultrathink --think # --ultrathink overrides --think
/sc:command --safe-mode --uc # --safe-mode auto-enables --uc
```
**Auto-Detection Issues:**
```bash
# Problem: Wrong flags auto-activated
# Quick Fix: Manual override with explicit flags
/sc:analyze simple-file.js --no-mcp # Override complex auto-detection
/sc:implement "basic function" --think # Force thinking mode
/sc:brainstorm clear-requirement # Force discovery mode
```
### Performance-Related Flag Issues
**Resource Exhaustion:**
```bash
# Problem: System slowing down with --all-mcp --ultrathink
# Quick Fix: Reduce resource usage
/sc:command --c7 --seq # Essential servers only
/sc:command --concurrency 1 # Limit parallel operations
/sc:command --scope file # Reduce analysis scope
/sc:command --uc # Enable compression
```
**Timeout Issues:**
```bash
# Problem: Commands hanging with complex flags
# Quick Fix: Timeout and resource management
/sc:command --timeout 60 # Set explicit timeout
/sc:command --memory-limit 2048 # Limit memory usage
/sc:command --safe-mode # Conservative execution
killall node # Reset hung MCP servers
```
### API Key and Dependency Issues
**Missing API Keys:**
```bash
# Problem: --magic or --morph flags fail with "API key required"
# Expected behavior: These services require paid subscriptions
export TWENTYFIRST_API_KEY="key" # For --magic flag
export MORPH_API_KEY="key" # For --morph flag
# Alternative: /sc:command --no-mcp to skip paid services
```
**Missing Dependencies:**
```bash
# Problem: MCP flags fail with "command not found"
# Quick Fix: Install missing dependencies
node --version # Check Node.js v16+
npm install -g npx # Ensure npx available
SuperClaude install --components mcp --force # Reinstall MCP
```
### Error Code Reference
| Flag Error | Meaning | Quick Fix |
|------------|---------|-----------|
| **F001** | Unknown flag | Check spelling with `/sc:help flags` |
| **F002** | Flag conflict | Use priority rules or remove conflicting flags |
| **F003** | MCP server unavailable | Check `node --version` and server status |
| **F004** | API key missing | Set environment variables or use `--no-mcp` |
| **F005** | Resource limit exceeded | Use `--concurrency 1` or `--scope file` |
| **F006** | Timeout exceeded | Increase `--timeout` or reduce complexity |
| **F007** | Permission denied | Check file permissions or run with appropriate access |
| **F008** | Invalid combination | Refer to flag priority hierarchy |
### Progressive Support Levels
**Level 1: Quick Fix (< 2 min)**
- Remove problematic flags and try again
- Use `--no-mcp` to bypass MCP server issues
- Check basic flag spelling and syntax
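As commands, the Level 1 quick fixes above might look like this (illustrative; substitute your own command and arguments):

```bash
# 1. Drop all flags and let auto-detection choose
/sc:analyze src/

# 2. Bypass MCP server issues entirely
/sc:analyze src/ --no-mcp

# 3. Verify flag spelling and syntax
/sc:help flags
```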
**Level 2: Detailed Help (5-15 min)**
```bash
# Flag-specific diagnostics
SuperClaude diagnose --flags
/sc:help flags --verbose
cat ~/.claude/logs/flag-system.log
# Test individual flags one at a time
```
- See [Common Issues Guide](../Reference/common-issues.md) for flag installation problems
**Level 3: Expert Support (30+ min)**
```bash
# Deep flag system analysis
SuperClaude validate-flags --all-combinations
strace -e trace=execve /sc:command --verbose 2>&1
# Check flag interaction matrix
# Review flag priority implementation
```
- See [Diagnostic Reference Guide](../Reference/diagnostic-reference.md) for system-level analysis
**Level 4: Community Support**
- Report flag issues at [GitHub Issues](https://github.com/SuperClaude-Org/SuperClaude_Framework/issues)
- Include flag combination that failed
- Describe expected vs actual behavior
### Success Validation
After applying flag fixes, test with:
- [ ] `/sc:help flags` (should list all available flags)
- [ ] `/sc:command --basic-flag` (should work without errors)
- [ ] `SuperClaude status --mcp` (MCP flags should work if servers connected)
- [ ] Flag combinations follow priority rules correctly
- [ ] Auto-detection works for simple commands
### Common Issues & Solutions
**Flag Not Recognized:**
```bash
# Problem: Unknown flag error
/sc:analyze code/ --unknown-flag
# Solution: Check flag spelling and availability
SuperClaude --help flags
/sc:help --flags
```
**Conflicting Flags:**
```bash
# Problem: Contradictory flags
/sc:implement "feature" --all-mcp --no-mcp
# Solution: Use flag priority rules
# --no-mcp overrides --all-mcp (explicit override wins)
# Use: /sc:implement "feature" --no-mcp
```
**Resource Issues:**
```bash
# Problem: System overload with --all-mcp --ultrathink
/sc:analyze large-project/ --all-mcp --ultrathink
# Solution: Reduce resource usage
/sc:analyze large-project/ --c7 --seq --think --concurrency 2
# Or let auto-detection handle it: /sc:analyze large-project/
```
**MCP Server Connection Problems:**
```bash
# Problem: MCP flags not working
/sc:implement "dashboard" --magic # Magic server not responding
# Solutions:
# 1. Check MCP installation
SuperClaude install --list-components | grep mcp
# 2. Restart Claude Code session (MCP connections refresh)
# 3. Use fallback approach
/sc:implement "dashboard" --no-mcp # Native execution
# 4. Reinstall MCP servers
SuperClaude install --components mcp --force
```
**Performance Problems:**
```bash
# Problem: Slow execution with complex flags
/sc:analyze codebase/ --ultrathink --all-mcp --parallel
# Solutions:
# 1. Reduce complexity
/sc:analyze codebase/ --think --c7 --seq
# 2. Use scope limiting
/sc:analyze codebase/ --scope module --focus quality
# 3. Enable efficiency mode
/sc:analyze codebase/ --uc --concurrency 1
```
### Flag Debugging
**Check Auto-Activated Flags:**
```bash
# Add --verbose to see which flags were auto-activated
/sc:analyze project/ --verbose
# → Output shows: "Auto-activated: --think-hard, --serena, --orchestrate"
```
**Test Flag Combinations:**
```bash
# Use --dry-run to test flag effects without execution
/sc:improve code/ --task-manage --morph --dry-run
# → Shows planned execution without making changes
```
**Validate Flag Usage:**
```bash
# Check flag compatibility
SuperClaude validate-flags --think-hard --no-mcp --magic
# → Reports conflicts and suggests corrections
```
### Best Practices for Flag Usage
**Start Simple:**
1. **Trust Auto-Detection**: Let SuperClaude choose flags automatically
2. **Add Specific Flags**: Override only when you need specific behavior
3. **Use Common Patterns**: Start with proven flag combinations
4. **Monitor Performance**: Watch for resource usage and adjust accordingly
**Progressive Enhancement:**
```bash
# Week 1: Use commands without flags
/sc:analyze src/
/sc:implement "feature"
# Week 2: Add specific focus
/sc:analyze src/ --focus security
/sc:implement "feature" --magic
# Week 3: Combine for workflows
/sc:analyze src/ --focus security --think-hard
/sc:implement "feature" --magic --c7 --validate
# Month 2+: Advanced patterns
/sc:improve legacy/ --task-manage --serena --morph --safe-mode
```
**Flag Selection Strategy:**
1. **Purpose-First**: What do you want to achieve?
2. **Context-Aware**: Consider project type and complexity
3. **Resource-Conscious**: Monitor system load and adjust
4. **Safety-Minded**: Use validation flags for important changes
5. **Learning-Oriented**: Add `--introspect` when exploring
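**Putting the strategy together** (an illustrative invocation using flags documented above; exact auto-activation behavior depends on your SuperClaude version):

```bash
# 1. Purpose: security review of an API module
# 2. Context: medium complexity → structured analysis (--think)
# 3. Resources: busy machine → cap concurrency
# 4. Safety: validate before applying changes
# 5. Learning: expose the decision process
/sc:analyze api/ --focus security --think --concurrency 2 --validate --introspect
```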
## Related Guides
**Learning Progression:**
**🌱 Essential (Week 1)**
- [Quick Start Guide](../Getting-Started/quick-start.md) - Experience auto-flagging naturally
- [Commands Reference](commands.md) - Commands automatically select optimal flags
- [Installation Guide](../Getting-Started/installation.md) - Flag system setup
**🌿 Intermediate (Week 2-3)**
- [Behavioral Modes](modes.md) - How flags activate behavioral modes
- [Agents Guide](agents.md) - Flag interaction with specialized agents
- [MCP Servers](mcp-servers.md) - MCP server activation flags
**🌲 Advanced (Month 2+)**
- [Session Management](session-management.md) - Long-term flag patterns
- [Best Practices](../Reference/best-practices.md) - Flag optimization strategies
- [Examples Cookbook](../Reference/examples-cookbook.md) - Real-world flag combinations
**🔧 Expert**
- [Technical Architecture](../Developer-Guide/technical-architecture.md) - Flag system implementation
- [Contributing Code](../Developer-Guide/contributing-code.md) - Extending flag capabilities
**Flag-Specific Learning Paths:**
**🎯 Focus Flags Mastery:**
- **Security**: `--focus security` → Security engineer activation
- **Performance**: `--focus performance` → Performance optimization patterns
- **Quality**: `--focus quality` → Code review and improvement workflows
**🧠 Analysis Depth Progression:**
- **Basic**: No flags → automatic detection
- **Structured**: `--think` → systematic analysis
- **Deep**: `--think-hard` → comprehensive investigation
- **Maximum**: `--ultrathink` → complete analytical capability
**🛠️ Tool Integration Journey:**
- **Single Tools**: `--c7`, `--magic` → specific capabilities
- **Combinations**: `--c7 --seq` → coordinated workflows
- **Full Suite**: `--all-mcp` → maximum capability
- **Optimization**: `--orchestrate` → intelligent coordination
**💡 Pro Tips:**
- **Start Without Flags**: Experience automatic optimization first
- **Add One at a Time**: Learn flag effects incrementally
- **Use `--introspect`**: Understand decision-making process
- **Monitor Resources**: Watch system load and adjust accordingly
- **Save Patterns**: Document successful flag combinations for reuse

File diff suppressed because it is too large

623
Docs/User-Guide/modes.md Normal file

@@ -0,0 +1,623 @@
# SuperClaude Behavioral Modes Guide 🧠
## ✅ Verification Status
- **SuperClaude Version**: v4.0+ Compatible
- **Last Tested**: 2025-01-16
- **Test Environment**: Linux/Windows/macOS
- **Mode Activation**: ✅ All Verified
## 🧪 Testing Mode Activation
Before using this guide, verify modes activate correctly:
```bash
# Test Brainstorming mode
/sc:brainstorm "vague project idea"
# Expected: Should ask discovery questions, not give immediate solutions
# Test Task Management mode
/sc:implement "complex multi-file feature"
# Expected: Should break down into phases and coordinate steps
# Test Token Efficiency mode
/sc:analyze large-project/ --uc
# Expected: Should use symbols and compressed output format
```
**If tests fail**: Modes activate automatically based on request complexity - check behavior patterns below
## Quick Reference Table
| Mode | Purpose | Auto-Triggers | Key Behaviors | Best Used For |
|------|---------|---------------|---------------|---------------|
| **🧠 Brainstorming** | Interactive discovery | "brainstorm", "maybe", vague requests | Socratic questions, requirement elicitation | New project planning, unclear requirements |
| **🔍 Introspection** | Meta-cognitive analysis | Error recovery, "analyze reasoning" | Transparent thinking markers (🤔, 🎯, 💡) | Debugging, learning, optimization |
| **📋 Task Management** | Complex coordination | >3 steps, >2 directories | Phase breakdown, memory persistence | Multi-step operations, project management |
| **🎯 Orchestration** | Intelligent tool selection | Multi-tool ops, >75% resources | Optimal tool routing, parallel execution | Complex analysis, performance optimization |
| **⚡ Token Efficiency** | Compressed communication | >75% context usage, `--uc` flag | Symbol systems, 30-50% token reduction | Resource constraints, large operations |
| **🎨 Standard** | Balanced default | Simple tasks, no complexity triggers | Clear professional communication | General development, straightforward tasks |
---
## Getting Started (2-Minute Overview)
**Modes activate automatically** - you don't need to think about them. They adapt Claude Code's behavior based on your task complexity and context.
**Quick Examples:**
```bash
# Automatic activation examples
/sc:brainstorm "mobile app" # → Socratic discovery questions
/sc:implement "auth system" # → Multi-phase coordination
/sc:analyze large-codebase/ --uc # → Compressed symbol output
```
**When to use manual flags:**
- Need specific behavior: `--brainstorm`, `--introspect`, `--uc`
- Override automatic detection for learning/debugging
- Optimize for specific constraints (memory, time, clarity)
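Each override case above maps to a flag (illustrative examples built from commands in this guide):

```bash
# Need specific behavior: force discovery even on a "clear" task
/sc:implement "user login" --brainstorm
# Learning/debugging: expose the reasoning process
/sc:troubleshoot "flaky tests" --introspect
# Constraint optimization: compress output under context pressure
/sc:analyze src/ --uc
```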
---
## Mode Details
### 🧠 Brainstorming Mode - Interactive Discovery
**Purpose**: Transform vague ideas into structured requirements through collaborative discovery.
**Auto-Activation Triggers:**
- Vague project requests: "I want to build...", "Thinking about creating..."
- Exploration keywords: brainstorm, explore, discuss, figure out, not sure
- Uncertainty indicators: "maybe", "possibly", "could we"
- Manual flags: `--brainstorm`, `--bs`
**Behavioral Changes:**
- **Socratic Questioning**: Asks probing questions to uncover hidden requirements
- **Non-Presumptive Approach**: Avoids assumptions, lets you guide discovery direction
- **Collaborative Partnership**: Works with you in exploration rather than directive consultation
- **Requirement Synthesis**: Converts insights into structured technical briefs
- **Cross-Session Memory**: Maintains discovery context for follow-up discussions
**Example Experience:**
```
Standard Approach: "I'll build a user authentication system with JWT tokens..."
Brainstorming Approach:
"🤔 Let's explore this together:
- What specific user problems does auth solve for your app?
- Who are your users and what's their technical comfort level?
- Any existing systems or social logins to integrate with?
- What security/compliance requirements do you have?
📝 Outcome: Detailed auth requirements brief ready for implementation"
```
#### Success Criteria
- [ ] Responds with questions instead of immediate solutions
- [ ] Questions explore user needs, technical constraints, and business goals
- [ ] Maintains collaborative tone throughout discovery process
- [ ] Produces structured requirements or technical brief as outcome
**Verify:** `/sc:brainstorm "web app"` should ask about users, features, technology
**Test:** Vague requests should trigger discovery questions, not assumptions
**Check:** Mode should persist across follow-up questions in same conversation
**Works Best With:**
- **→ Task Management**: Requirements discovery → systematic implementation planning
- **→ Orchestration**: Requirements clarity → optimal tool selection for implementation
- **Manual Override**: Use `--brainstorm` to force discovery even for "clear" requirements
---
### 🔍 Introspection Mode - Meta-Cognitive Analysis
**Purpose**: Expose reasoning process for learning optimization and transparent decision-making.
**Auto-Activation Triggers:**
- Self-analysis requests: "analyze my reasoning", "reflect on decision"
- Error recovery scenarios and unexpected results requiring investigation
- Complex problem solving with multiple viable approaches
- Pattern recognition needs for optimization opportunities
- Manual flag: `--introspect`
**Behavioral Changes:**
- **Reasoning Transparency**: Exposes thinking process with clear markers (🤔, 🎯, ⚡, 📊, 💡)
- **Decision Analysis**: Evaluates choice logic and considers alternative approaches
- **Pattern Recognition**: Identifies recurring behaviors and improvement opportunities
- **Meta-Learning**: Extracts insights for continuous framework improvement
- **Framework Compliance**: Validates actions against SuperClaude principles and quality gates
**Example Experience:**
```
Standard Approach: "I'll analyze this code structure and suggest improvements..."
Introspective Approach:
"🧠 Meta-Analysis: Why did I choose structural analysis over functional flow?
🎯 Decision Logic: Saw class hierarchy → assumed OOP pattern dominance
🔄 Alternative Considered: Data flow analysis might reveal hidden dependencies
📊 Evidence Check: File imports suggest functional composition, not OOP
💡 Learning: Need to analyze imports before choosing analysis strategy
⚡ Correction: Switching to functional flow analysis approach"
```
**Works Best With:**
- **→ Task Management**: Provides transparent reasoning for complex phase decisions
- **Any Mode**: Adds reasoning transparency layer to any other mode's operation
- **Manual Override**: Use `--introspect` for learning sessions and debugging workflows
---
### 📋 Task Management Mode - Complex Coordination
**Purpose**: Hierarchical task organization with session persistence for multi-step operations.
**Auto-Activation Triggers:**
- Operations requiring >3 coordinated steps
- Multiple file/directory scope (>2 directories OR >3 files)
- Complex dependencies requiring phases and checkpoints
- Quality improvement requests: polish, refine, enhance
- Manual flags: `--task-manage`, `--delegate`
**Behavioral Changes:**
- **Hierarchical Planning**: Breaks complex work into Plan → Phase → Task → Todo structure
- **Session Persistence**: Maintains project context and progress across interruptions
- **Memory Integration**: Uses write_memory/read_memory for state preservation
- **Progress Orchestration**: Coordinates TodoWrite with memory updates for tracking
- **Quality Gates**: Implements systematic validation checkpoints between phases
**Example Experience:**
```
Standard Approach: "I'll implement user authentication..." → Direct implementation
Task Management Approach:
"📋 Multi-Phase Implementation Plan:
🎯 Phase 1: Security Requirements Analysis (Session 1)
🎯 Phase 2: API Design & Documentation (Session 2)
🎯 Phase 3: Implementation & Testing (Session 3-4)
🎯 Phase 4: Integration & Validation (Session 5)
💾 Session persistence: Resume context automatically
✓ Quality gates: Validation before each phase transition"
```
**Works Best With:**
- **Brainstorming →**: Requirements discovery then systematic implementation
- **+ Orchestration**: Task coordination with optimal tool selection
- **+ Introspection**: Transparent reasoning for complex phase decisions
---
### 🎯 Orchestration Mode - Intelligent Tool Selection
**Purpose**: Optimize task execution through intelligent tool routing and parallel coordination.
**Auto-Activation Triggers:**
- Multi-tool operations requiring sophisticated coordination
- Performance constraints (>75% resource usage)
- Parallel execution opportunities (>3 independent files/operations)
- Complex routing decisions with multiple valid tool approaches
**Behavioral Changes:**
- **Intelligent Tool Routing**: Selects optimal MCP servers and native tools for each task type
- **Resource Awareness**: Adapts approach based on system constraints and availability
- **Parallel Optimization**: Identifies independent operations for concurrent execution
- **Performance Focus**: Maximizes speed and effectiveness through coordinated tool usage
- **Adaptive Fallback**: Switches tools gracefully when preferred options are unavailable
**Example Experience:**
```
Standard Approach: Sequential file-by-file analysis and editing
Orchestration Approach:
"🎯 Multi-Tool Coordination Strategy:
🔍 Phase 1: Serena (semantic analysis) + Sequential (architecture review)
⚡ Phase 2: Morphllm (pattern edits) + Magic (UI components)
🧪 Phase 3: Playwright (testing) + Context7 (documentation patterns)
🔄 Parallel execution: 3 tools working simultaneously
📈 Efficiency gain: 60% faster than sequential approach"
```
**Works Best With:**
- **Task Management →**: Provides tool coordination for complex multi-phase plans
- **+ Token Efficiency**: Optimal tool selection with compressed communication
- **Any Complex Task**: Adds intelligent tool routing to enhance execution
---
### ⚡ Token Efficiency Mode - Compressed Communication
**Purpose**: Achieve 30-50% token reduction through symbol systems while preserving information quality.
**Auto-Activation Triggers:**
- Context usage >75% approaching limits
- Large-scale operations requiring resource efficiency
- User explicit flags: `--uc`, `--ultracompressed`
- Complex analysis workflows with multiple outputs
**Behavioral Changes:**
- **Symbol Communication**: Uses visual symbols for logic flows, status, and technical domains
- **Technical Abbreviation**: Context-aware compression for repeated technical terms
- **Structured Density**: Bullet points, tables, and concise formatting over verbose paragraphs
- **Information Preservation**: Maintains ≥95% information quality despite compression
- **Scannable Format**: Optimizes for quick comprehension and task completion
**Example Experience:**
```
Standard Approach: "The authentication system implementation shows a security vulnerability in the user validation function that needs immediate attention..."
Token Efficient Approach:
"🛡️ Security Alert:
auth.js:45 → user val() → critical vuln
📊 Impact: ❌ token bypass possible
⚡ Action: fix validation + audit ∵ high sev
🔧 Est: 2h impl + 1h test"
```
**Works Best With:**
- **Any Mode**: Adds compression layer while preserving mode-specific behaviors
- **Orchestration →**: Compressed tool coordination and status updates
- **Manual Override**: Use `--uc` when context pressure or efficiency is priority
---
### 🎨 Standard Mode - Balanced Default
**Purpose**: Provide clear, professional communication for straightforward development tasks.
**Auto-Activation Triggers:**
- Simple, well-defined tasks without complexity indicators
- Single-file operations with clear requirements
- Basic explanations and standard development workflows
- No other mode triggers detected (default fallback)
**Behavioral Changes:**
- **Professional Communication**: Clear, concise technical language without compression
- **Moderate Detail**: Balanced information depth suitable for most development tasks
- **Standard Tool Selection**: Uses native Claude capabilities and basic tools
- **Quality Focus**: Maintains code quality without complex orchestration overhead
- **Responsive Adaptation**: Ready to switch to specialized modes when complexity increases
**Example Experience:**
```
Standard Approach: Consistent, professional baseline for all tasks
"I'll implement the login function with proper error handling:
1. Validate user input (email format, password requirements)
2. Authenticate against database with secure hashing
3. Generate JWT token with appropriate expiration
4. Return success response with user data
The implementation will follow security best practices and include comprehensive error handling."
```
**Works Best With:**
- **→ Any Mode**: Serves as baseline that other modes enhance
- **Mode Switching**: Automatically escalates to specialized modes when needed
- **Clarity Priority**: When straightforward communication is more important than optimization
---
## Advanced Usage
### Mode Combinations
**Multi-Mode Workflows:**
```bash
# Discovery → Planning → Implementation
/sc:brainstorm "microservices architecture" --task-manage
# → Brainstorming: requirement discovery
# → Task Management: multi-phase coordination
# Analysis with transparency and efficiency
/sc:analyze legacy-system/ --introspect --uc
# → Introspection: transparent reasoning
# → Token Efficiency: compressed output
```
### Manual Mode Control
**Force Specific Behaviors:**
- `--brainstorm`: Force collaborative discovery for any task
- `--introspect`: Add reasoning transparency to any mode
- `--task-manage`: Enable hierarchical coordination
- `--orchestrate`: Optimize tool selection and parallel execution
- `--uc`: Compress communication for efficiency
**Override Examples:**
```bash
# Force brainstorming on "clear" requirements
/sc:implement "user login" --brainstorm
# Add reasoning transparency to debugging
/sc:fix auth-issue --introspect
# Enable task management for simple operations
/sc:update styles.css --task-manage
```
### Mode Boundaries and Priority
**When Modes Activate:**
1. **Complexity Threshold**: >3 files → Task Management
2. **Resource Pressure**: >75% usage → Token Efficiency
3. **Multi-Tool Need**: Complex analysis → Orchestration
4. **Uncertainty**: Vague requirements → Brainstorming
5. **Error Recovery**: Problems → Introspection
**Priority Rules:**
- **Safety First**: Quality and validation always override efficiency
- **User Intent**: Manual flags override automatic detection
- **Context Adaptation**: Modes stack based on complexity
- **Resource Management**: Efficiency modes activate under pressure
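A sketch of how these priority rules play out in practice (thresholds are approximate and behavior may vary by version):

```bash
# Resource pressure would normally trigger Token Efficiency,
# but user intent wins: explicit --verbose keeps detailed output
/sc:analyze large-project/ --verbose

# Modes stack: manual coordination plus automatic efficiency
/sc:improve legacy/ --task-manage # --uc may still auto-activate at >75% context
```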
---
## Real-World Examples
### Complete Workflow Examples
**New Project Development:**
```bash
# Phase 1: Discovery (Brainstorming Mode auto-activates)
"I want to build a productivity app"
→ 🤔 Socratic questions about users, features, platform choice
→ 📝 Structured requirements brief
# Phase 2: Planning (Task Management Mode auto-activates)
/sc:implement "core productivity features"
→ 📋 Multi-phase breakdown with dependencies
→ 🎯 Phase coordination with quality gates
# Phase 3: Implementation (Orchestration Mode coordinates tools)
/sc:develop frontend + backend
→ 🎯 Magic (UI) + Context7 (patterns) + Sequential (architecture)
→ ⚡ Parallel execution optimization
```
**Debugging Complex Issues:**
```bash
# Problem analysis (Introspection Mode auto-activates)
"Users getting intermittent auth failures"
→ 🤔 Transparent reasoning about potential causes
→ 🎯 Hypothesis formation and evidence gathering
→ 💡 Pattern recognition across similar issues
# Systematic resolution (Task Management coordinates)
/sc:fix auth-system --comprehensive
→ 📋 Phase 1: Root cause analysis
→ 📋 Phase 2: Solution implementation
→ 📋 Phase 3: Testing and validation
```
### Mode Combination Patterns
**High-Complexity Scenarios:**
```bash
# Large refactoring with multiple constraints
/sc:modernize legacy-system/ --introspect --uc --orchestrate
→ 🔍 Transparent reasoning (Introspection)
→ ⚡ Compressed communication (Token Efficiency)
→ 🎯 Optimal tool coordination (Orchestration)
→ 📋 Systematic phases (Task Management auto-activates)
```
---
## Quick Reference
### Mode Activation Patterns
| Trigger Type | Example Input | Mode Activated | Key Behavior |
|--------------|---------------|----------------|--------------|
| **Vague Request** | "I want to build an app" | 🧠 Brainstorming | Socratic discovery questions |
| **Complex Scope** | >3 files or >2 directories | 📋 Task Management | Phase coordination |
| **Multi-Tool Need** | Analysis + Implementation | 🎯 Orchestration | Tool optimization |
| **Error Recovery** | "This isn't working as expected" | 🔍 Introspection | Transparent reasoning |
| **Resource Pressure** | >75% context usage | ⚡ Token Efficiency | Symbol compression |
| **Simple Task** | "Fix this function" | 🎨 Standard | Clear, direct approach |
### Manual Override Commands
```bash
# Force specific mode behaviors
/sc:command --brainstorm # Collaborative discovery
/sc:command --introspect # Reasoning transparency
/sc:command --task-manage # Hierarchical coordination
/sc:command --orchestrate # Tool optimization
/sc:command --uc # Token compression
# Combine multiple modes
/sc:command --introspect --uc # Transparent + efficient
/sc:command --task-manage --orchestrate # Coordinated + optimized
```
---
## 🚨 Quick Troubleshooting
### Common Issues (< 2 minutes)
- **Mode not activating**: Use manual flags: `--brainstorm`, `--introspect`, `--uc`
- **Wrong mode active**: Check complexity triggers and keywords in request
- **Mode switching unexpectedly**: Normal behavior based on task evolution
- **Performance impact**: Modes optimize performance, shouldn't slow execution
- **Mode conflicts**: Check flag priority rules in [Flags Guide](flags.md)
### Immediate Fixes
- **Force specific mode**: Use explicit flags like `--brainstorm` or `--task-manage`
- **Reset mode behavior**: Restart Claude Code session to reset mode state
- **Check mode indicators**: Look for 🤔, 🎯, 📋 symbols in responses
- **Verify complexity**: Simple tasks use Standard mode, complex tasks auto-switch
### Mode-Specific Troubleshooting
**Brainstorming Mode Issues:**
```bash
# Problem: Mode gives solutions instead of asking questions
# Quick Fix: Check request clarity and use explicit flag
/sc:brainstorm "web app" --brainstorm # Force discovery mode
"I have a vague idea about..." # Use uncertainty language
"Maybe we could build..." # Trigger exploration
```
**Task Management Mode Issues:**
```bash
# Problem: Simple tasks getting complex coordination
# Quick Fix: Reduce scope or use simpler commands
/sc:implement "function" --no-task-manage # Disable coordination
/sc:simple-fix bug.js # Use basic commands
# Check if task really is complex (>3 files, >2 directories)
```
**Token Efficiency Mode Issues:**
```bash
# Problem: Output too compressed or unclear
# Quick Fix: Disable compression for clarity
/sc:command --no-uc # Disable compression
/sc:command --verbose # Force detailed output
# Use when clarity is more important than efficiency
```
**Introspection Mode Issues:**
```bash
# Problem: Too much meta-commentary, not enough action
# Quick Fix: Disable introspection for direct work
/sc:command --no-introspect # Direct execution
# Use introspection only for learning and debugging
```
**Orchestration Mode Issues:**
```bash
# Problem: Tool coordination causing confusion
# Quick Fix: Simplify tool usage
/sc:command --no-mcp # Native tools only
/sc:command --simple # Basic execution
# Check if task complexity justifies orchestration
```
### Error Code Reference
| Mode Error | Meaning | Quick Fix |
|------------|---------|-----------|
| **B001** | Brainstorming failed to activate | Use explicit `--brainstorm` flag |
| **T001** | Task management overhead | Use `--no-task-manage` for simple tasks |
| **U001** | Token efficiency too aggressive | Use `--verbose` or `--no-uc` |
| **I001** | Introspection mode stuck | Use `--no-introspect` for direct action |
| **O001** | Orchestration coordination failed | Use `--no-mcp` or `--simple` |
| **M001** | Mode conflict detected | Check flag priority rules |
| **M002** | Mode switching loop | Restart session to reset state |
| **M003** | Mode not recognized | Update SuperClaude or check spelling |
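The table's quick fixes translate directly into flag overrides (illustrative; all flags are documented in the [Flags Guide](flags.md)):

```bash
# T001: disable coordination overhead on a simple task
/sc:implement "rename helper function" --no-task-manage
# U001: restore detailed output
/sc:analyze src/ --no-uc
# O001: fall back to native tools
/sc:implement "dashboard" --no-mcp
```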
### Progressive Support Levels
**Level 1: Quick Fix (< 2 min)**
- Use manual flags to override automatic mode selection
- Check if task complexity matches expected mode behavior
- Try restarting Claude Code session
**Level 2: Detailed Help (5-15 min)**
```bash
# Mode-specific diagnostics
/sc:help modes # List all available modes
/sc:reflect --type mode-status # Check current mode state
# Review request complexity and triggers
```
- See [Common Issues Guide](../Reference/common-issues.md) for mode installation problems
**Level 3: Expert Support (30+ min)**
```bash
# Deep mode analysis
SuperClaude diagnose --modes
# Check mode activation patterns
# Review behavioral triggers and thresholds
```
- See [Diagnostic Reference Guide](../Reference/diagnostic-reference.md) for behavioral mode analysis
**Level 4: Community Support**
- Report mode issues at [GitHub Issues](https://github.com/SuperClaude-Org/SuperClaude_Framework/issues)
- Include examples of unexpected mode behavior
- Describe desired vs actual mode activation
### Success Validation
After applying mode fixes, test with:
- [ ] Simple requests use Standard mode (clear, direct responses)
- [ ] Complex requests auto-activate appropriate modes (coordination, reasoning)
- [ ] Manual flags override automatic detection correctly
- [ ] Mode indicators (🤔, 🎯, 📋) appear when expected
- [ ] Performance remains good across different modes
## Frequently Asked Questions
**Q: How do I know which mode is active?**
A: Look for these indicators in communication patterns:
- 🤔 Discovery questions → Brainstorming
- 🎯 Reasoning transparency → Introspection
- Phase breakdowns → Task Management
- Tool coordination → Orchestration
- Symbol compression → Token Efficiency
**Q: Can I force specific modes?**
A: Yes, use manual flags to override automatic detection:
```bash
/sc:command --brainstorm # Force discovery
/sc:command --introspect # Add transparency
/sc:command --task-manage # Enable coordination
/sc:command --uc # Compress output
```
**Q: Do modes affect performance?**
A: Modes enhance performance through optimization:
- **Token Efficiency**: 30-50% context reduction
- **Orchestration**: Parallel processing
- **Task Management**: Prevents rework through systematic planning
**Q: Can modes work together?**
A: Yes, modes are designed to complement each other:
- **Task Management** coordinates other modes
- **Token Efficiency** compresses any mode's output
- **Introspection** adds transparency to any workflow
---
## Summary
SuperClaude's 6 behavioral modes create an **intelligent adaptation system** that matches your needs automatically:
- **🧠 Brainstorming**: Transforms vague ideas into clear requirements
- **🔍 Introspection**: Provides transparent reasoning for learning and debugging
- **📋 Task Management**: Coordinates complex multi-step operations
- **🎯 Orchestration**: Optimizes tool selection and parallel execution
- **⚡ Token Efficiency**: Compresses communication while preserving clarity
- **🎨 Standard**: Maintains professional baseline for straightforward tasks
**The key insight**: You don't need to think about modes - they work transparently to enhance your development experience. Simply describe what you want to accomplish, and SuperClaude automatically adapts its approach to match your needs.
---
## Related Guides
**Learning Progression:**
**🌱 Essential (Week 1)**
- [Quick Start Guide](../Getting-Started/quick-start.md) - Experience modes naturally
- [Commands Reference](commands.md) - Commands automatically activate modes
- [Installation Guide](../Getting-Started/installation.md) - Set up behavioral modes
**🌿 Intermediate (Week 2-3)**
- [Agents Guide](agents.md) - How modes coordinate with specialists
- [Flags Guide](flags.md) - Manual mode control and optimization
- [Examples Cookbook](../Reference/examples-cookbook.md) - Mode patterns in practice
**🌲 Advanced (Month 2+)**
- [MCP Servers](mcp-servers.md) - Mode integration with enhanced capabilities
- [Session Management](session-management.md) - Task Management mode workflows
- [Best Practices](../Reference/best-practices.md) - Mode optimization strategies
**🔧 Expert**
- [Technical Architecture](../Developer-Guide/technical-architecture.md) - Mode implementation details
- [Contributing Code](../Developer-Guide/contributing-code.md) - Extend mode capabilities
**Mode-Specific Guides:**
- **Brainstorming**: [Requirements Discovery Patterns](../Reference/examples-cookbook.md#requirements)
- **Task Management**: [Session Management Guide](session-management.md)
- **Orchestration**: [MCP Servers Guide](mcp-servers.md)
- **Token Efficiency**: [Performance Optimization](../Reference/best-practices.md#efficiency)

File diff suppressed because it is too large