kazuki nakai 882a0d8356
refactor: PM Agent complete independence from external MCP servers (#439)
* refactor: PM Agent complete independence from external MCP servers

## Summary
Implement graceful degradation to ensure PM Agent operates fully without
any MCP server dependencies. MCP servers now serve as optional enhancements
rather than required components.

## Changes

### Responsibility Separation (NEW)
- **PM Agent**: Development workflow orchestration (PDCA cycle, task management)
- **mindbase**: Memory management (long-term, freshness, error learning)
- **Built-in memory**: Session-internal context (volatile)

### 3-Layer Memory Architecture with Fallbacks
1. **Built-in Memory** [OPTIONAL]: Session context via MCP memory server
2. **mindbase** [OPTIONAL]: Long-term semantic search via airis-mcp-gateway
3. **Local Files** [ALWAYS]: Core functionality in docs/memory/

### Graceful Degradation Implementation
- All MCP operations marked with [ALWAYS] or [OPTIONAL]
- Explicit IF/ELSE fallback logic for every MCP call
- Dual storage: Always write to local files + optionally to mindbase
- Smart lookup: Semantic search (if available) → Text search (always works)
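
As a rough illustration of that IF/ELSE pattern, a minimal Python sketch of the smart lookup might look like this; `mindbase_search` is a hypothetical stand-in for the optional MCP call, and the real behavior is defined as agent instructions rather than code:

```python
from pathlib import Path

def lookup_context(query: str, mindbase_search=None) -> list[str]:
    """Semantic search when mindbase is available; plain text search otherwise."""
    if mindbase_search is not None:
        try:
            return mindbase_search(query)              # [OPTIONAL] semantic search
        except Exception:
            pass                                       # degrade silently, no error surfaced
    hits = []
    for jsonl in Path("docs/memory").glob("*.jsonl"):  # [ALWAYS] local text lookup
        for line in jsonl.read_text(encoding="utf-8").splitlines():
            if query.lower() in line.lower():
                hits.append(line)
    return hits
```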

### Key Fallback Strategies

**Session Start**:
- mindbase available: search_conversations() for semantic context
- mindbase unavailable: Grep docs/memory/*.jsonl for text-based lookup

**Error Detection**:
- mindbase available: Semantic search for similar past errors
- mindbase unavailable: Grep docs/mistakes/ + solutions_learned.jsonl

**Knowledge Capture**:
- Always: echo >> docs/memory/patterns_learned.jsonl (persistent)
- Optional: mindbase.store() for semantic search enhancement
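
A minimal sketch of the dual-storage rule, assuming a hypothetical `store_fn` wrapper around `mindbase.store()`; the local JSONL append is the part that always runs:

```python
import json
import time
from pathlib import Path

def capture_pattern(pattern: dict, store_fn=None) -> None:
    record = {"ts": time.strftime("%Y-%m-%dT%H:%M:%S"), **pattern}
    path = Path("docs/memory/patterns_learned.jsonl")
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("a", encoding="utf-8") as f:    # Always: persistent local record
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    if store_fn is not None:                       # Optional: semantic search enhancement
        try:
            store_fn(record)
        except Exception:
            pass                                   # degradation stays transparent
```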

## Benefits
- Zero external dependencies (100% functionality without MCP)
- Enhanced capabilities when MCPs are available (semantic search, freshness)
- No functionality is lost without them; only search intelligence is reduced
- Transparent degradation (no error messages, automatic fallback)

## Related Research
- Serena MCP investigation: Exposes tools (not resources), memory = markdown files
- mindbase superiority: PostgreSQL + pgvector > Serena memory features
- Best practices alignment: /Users/kazuki/github/airis-mcp-gateway/docs/mcp-best-practices.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: add PR template and pre-commit config

- Add structured PR template with Git workflow checklist
- Add pre-commit hooks for secret detection and Conventional Commits
- Enforce code quality gates (YAML/JSON/Markdown lint, shellcheck)

NOTE: Execute pre-commit inside Docker container to avoid host pollution:
  docker compose exec workspace uv tool install pre-commit
  docker compose exec workspace pre-commit run --all-files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: update PM Agent context with token efficiency architecture

- Add Layer 0 Bootstrap (150 tokens, 95% reduction)
- Document Intent Classification System (5 complexity levels)
- Add Progressive Loading strategy (5-layer)
- Document mindbase integration incentive (38% savings)
- Update with 2025-10-17 redesign details

* refactor: PM Agent command with progressive loading

- Replace auto-loading with User Request First philosophy
- Add 5-layer progressive context loading
- Implement intent classification system
- Add workflow metrics collection (.jsonl)
- Document graceful degradation strategy
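
For illustration, a workflow-metrics record might be appended like this; the field names and file path are assumptions, not the actual schema:

```python
import datetime
import json
from pathlib import Path

def log_workflow_metric(intent: str, complexity: int, tokens_used: int,
                        path: str = "docs/memory/workflow_metrics.jsonl") -> None:
    record = {
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "intent": intent,            # result of intent classification
        "complexity": complexity,    # one of the 5 complexity levels
        "tokens_used": tokens_used,
    }
    with Path(path).open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```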

* fix: installer improvements

Update installer logic for better reliability

* docs: add comprehensive development documentation

- Add architecture overview
- Add PM Agent improvements analysis
- Add parallel execution architecture
- Add CLI install improvements
- Add code style guide
- Add project overview
- Add install process analysis

* docs: add research documentation

Add LLM agent token efficiency research and analysis

* docs: add suggested commands reference

* docs: add session logs and testing documentation

- Add session analysis logs
- Add testing documentation

* feat: migrate CLI to typer + rich for modern UX

## What Changed

### New CLI Architecture (typer + rich)
- Created `superclaude/cli/` module with modern typer-based CLI
- Replaced custom UI utilities with rich native features
- Added type-safe command structure with automatic validation

### Commands Implemented
- **install**: Interactive installation with rich UI (progress, panels)
- **doctor**: System diagnostics with rich table output
- **config**: API key management with format validation

### Technical Improvements
- Dependencies: Added typer>=0.9.0, rich>=13.0.0, click>=8.0.0
- Entry Point: Updated pyproject.toml to use `superclaude.cli.app:cli_main`
- Tests: Added comprehensive smoke tests (11 passed)

### User Experience Enhancements
- Rich formatted help messages with panels and tables
- Automatic input validation with retry loops
- Clear error messages with actionable suggestions
- Non-interactive mode support for CI/CD
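
A minimal sketch of the typer + rich structure; the `doctor` command and `cli_main` entry point are named in this commit, but the body below is illustrative rather than the actual `superclaude/cli` code:

```python
import typer
from rich.console import Console
from rich.table import Table

app = typer.Typer(help="SuperClaude CLI (illustrative sketch)")
console = Console()

@app.command()
def doctor(verbose: bool = typer.Option(False, "--verbose", help="Show extra detail")):
    """System diagnostics rendered as a rich table."""
    table = Table(title="SuperClaude Doctor")
    table.add_column("Check")
    table.add_column("Status")
    table.add_row("claude CLI on PATH", "ok")      # sample rows only
    table.add_row("uv installed", "ok")
    if verbose:
        table.add_row("airis-mcp-gateway reachable", "ok")
    console.print(table)

def cli_main() -> None:                            # entry point wired in pyproject.toml
    app()

if __name__ == "__main__":
    cli_main()
```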

## Testing

```bash
uv run superclaude --help     # ✓ Works
uv run superclaude doctor     # ✓ Rich table output
uv run superclaude config show # ✓ API key management
pytest tests/test_cli_smoke.py # ✓ 11 passed, 1 skipped
```

## Migration Path

- P0: Foundation complete (typer + rich + smoke tests)
- 🔜 P1: Pydantic validation models (next sprint)
- 🔜 P2: Enhanced error messages (next sprint)
- 🔜 P3: API key retry loops (next sprint)

## Performance Impact

- **Code Reduction**: Prepared for a ~300-line reduction (custom UI → rich)
- **Type Safety**: Automatic validation from type hints
- **Maintainability**: Framework primitives vs custom code

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: consolidate documentation directories

Merged claudedocs/ into docs/research/ for consistent documentation structure.

Changes:
- Moved all claudedocs/*.md files to docs/research/
- Updated all path references in documentation (EN/KR)
- Updated RULES.md and research.md command templates
- Removed claudedocs/ directory
- Removed ClaudeDocs/ from .gitignore

Benefits:
- Single source of truth for all research reports
- PEP8-compliant lowercase directory naming
- Clearer documentation organization
- Prevents future claudedocs/ directory creation

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* perf: reduce /sc:pm command output from 1652 to 15 lines

- Remove 1637 lines of documentation from command file
- Keep only minimal bootstrap message
- 99% token reduction on command execution
- Detailed specs remain in superclaude/agents/pm-agent.md

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* perf: split PM Agent into execution workflows and guide

- Reduce pm-agent.md from 735 to 429 lines (42% reduction)
- Move philosophy/examples to docs/agents/pm-agent-guide.md
- Execution workflows (PDCA, file ops) stay in pm-agent.md
- Guide (examples, quality standards) read once when needed

Token savings:
- Agent loading: ~6K → ~3.5K tokens (42% reduction)
- Total with pm.md: 71% overall reduction

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: consolidate PM Agent optimization and pending changes

PM Agent optimization (already committed separately):
- superclaude/commands/pm.md: 1652→14 lines
- superclaude/agents/pm-agent.md: 735→429 lines
- docs/agents/pm-agent-guide.md: new guide file

Other pending changes:
- setup: framework_docs, mcp, logger, remove ui.py
- superclaude: __main__, cli/app, cli/commands/install
- tests: test_ui updates
- scripts: workflow metrics analysis tools
- docs/memory: session state updates

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: simplify MCP installer to unified gateway with legacy mode

## Changes

### MCP Component (setup/components/mcp.py)
- Simplified to single airis-mcp-gateway by default
- Added legacy mode for individual official servers (sequential-thinking, context7, magic, playwright)
- Dynamic prerequisites based on mode:
  - Default: uv + claude CLI only
  - Legacy: node (18+) + npm + claude CLI
- Removed redundant server definitions
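
The mode-dependent prerequisite check could look roughly like this; the helper name is an assumption, and the actual logic lives in `setup/components/mcp.py`:

```python
def required_tools(legacy_mode: bool) -> list[str]:
    if legacy_mode:
        # individual official servers are npm packages, so Node 18+ is required
        return ["node>=18", "npm", "claude"]
    # default mode: single airis-mcp-gateway, no Node.js toolchain needed
    return ["uv", "claude"]
```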

### CLI Integration
- Added --legacy flag to setup/cli/commands/install.py
- Added --legacy flag to superclaude/cli/commands/install.py
- Config passes legacy_mode to component installer

## Benefits
- Simpler: 1 gateway vs 9+ individual servers
- Lighter: No Node.js/npm required (default mode)
- Unified: All tools in one gateway (sequential-thinking, context7, magic, playwright, serena, morphllm, tavily, chrome-devtools, git, puppeteer)
- Flexible: --legacy flag for official servers if needed

## Usage
```bash
superclaude install              # Default: airis-mcp-gateway (recommended)
superclaude install --legacy     # Legacy: individual official servers
```

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: rename CoreComponent to FrameworkDocsComponent and add PM token tracking

## Changes

### Component Renaming (setup/components/)
- Renamed CoreComponent → FrameworkDocsComponent for clarity
- Updated all imports in __init__.py, agents.py, commands.py, mcp_docs.py, modes.py
- Better reflects the actual purpose (framework documentation files)

### PM Agent Enhancement (superclaude/commands/pm.md)
- Added token usage tracking instructions
- PM Agent now reports:
  1. Current token usage from system warnings
  2. Percentage used (e.g., "27% used" for 54K/200K)
  3. Status zone: 🟢 <75% | 🟡 75-85% | 🔴 >85%
- Helps prevent token exhaustion during long sessions
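
A small sketch of the zone calculation using the thresholds above (the 200K budget matches the example, but the function itself is illustrative):

```python
def token_status(used: int, budget: int = 200_000) -> str:
    pct = used / budget * 100
    zone = "🟢" if pct < 75 else ("🟡" if pct <= 85 else "🔴")
    return f"{zone} {pct:.0f}% used ({used // 1000}K/{budget // 1000}K)"

# token_status(54_000) -> "🟢 27% used (54K/200K)"
```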

### UI Utilities (setup/utils/ui.py)
- Added new UI utility module for installer
- Provides consistent user interface components

## Benefits
- Clearer component naming (FrameworkDocs vs Core)
- PM Agent token awareness for efficiency
- Better visual feedback with status zones

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor(pm-agent): minimize output verbosity (471→284 lines, 40% reduction)

**Problem**: PM Agent generated excessive output with redundant explanations
- "System Status Report" with decorative formatting
- Repeated "Common Tasks" lists user already knows
- Verbose session start/end protocols
- Duplicate file operations documentation

**Solution**: Compress without losing functionality
- Session Start: Reduced to symbol-only status (🟢 branch | nM nD | token%)
- Session End: Compressed to essential actions only
- File Operations: Consolidated from 2 sections to 1 line reference
- Self-Improvement: 5 phases → 1 unified workflow
- Output Rules: Explicit constraints to prevent Claude over-explanation

**Quality Preservation**:
- All core functions retained (PDCA, memory, patterns, mistakes)
- PARALLEL Read/Write preserved (performance critical)
- Workflow unchanged (session lifecycle intact)
- Added output constraints (prevents verbose generation)

**Reduction Method**:
- Deleted: Explanatory text, examples, redundant sections
- Retained: Action definitions, file paths, core workflows
- Added: Explicit output constraints to enforce minimalism

**Token Impact**: 40% reduction in agent documentation size
**Before**: Verbose multi-section report with task lists
**After**: Single line status: 🟢 integration | 15M 17D | 36%
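
A sketch of how the single-line status could be assembled; it assumes M/D count modified and deleted files from `git status --porcelain`, which may not match the exact definition in pm-agent.md:

```python
import subprocess

def session_status(token_pct: int) -> str:
    branch = subprocess.run(["git", "rev-parse", "--abbrev-ref", "HEAD"],
                            capture_output=True, text=True).stdout.strip()
    changes = subprocess.run(["git", "status", "--porcelain"],
                             capture_output=True, text=True).stdout.splitlines()
    modified = sum(1 for c in changes if c[:2].strip().startswith("M"))
    deleted = sum(1 for c in changes if c[:2].strip().startswith("D"))
    zone = "🟢" if token_pct < 75 else ("🟡" if token_pct <= 85 else "🔴")
    return f"{zone} {branch} | {modified}M {deleted}D | {token_pct}%"

# Example output: "🟢 integration | 15M 17D | 36%"
```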

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: consolidate MCP integration to unified gateway

**Changes**:
- Remove individual MCP server docs (superclaude/mcp/*.md)
- Remove MCP server configs (superclaude/mcp/configs/*.json)
- Delete MCP docs component (setup/components/mcp_docs.py)
- Simplify installer (setup/core/installer.py)
- Update components for unified gateway approach

**Rationale**:
- Unified gateway (airis-mcp-gateway) provides all MCP servers
- Individual docs/configs no longer needed (managed centrally)
- Reduces maintenance burden and file count
- Simplifies installation process

**Files Removed**: 17 MCP files (docs + configs)
**Installer Changes**: Removed legacy MCP installation logic

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: update version and component metadata

- Bump version (pyproject.toml, setup/__init__.py)
- Update CLAUDE.md import service references
- Reflect component structure changes

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: kazuki <kazuki@kazukinoMacBook-Air.local>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-17 05:43:06 +05:30


SuperClaude Behavioral Modes Guide 🧠

Quick Verification

Test modes by using /sc: commands - they activate automatically based on task complexity. For full command reference, see Commands Guide.

Quick Reference Table

| Mode | Purpose | Auto-Triggers | Key Behaviors | Best Used For |
|------|---------|---------------|---------------|---------------|
| 🧠 Brainstorming | Interactive discovery | "brainstorm", "maybe", vague requests | Socratic questions, requirement elicitation | New project planning, unclear requirements |
| 🔍 Introspection | Meta-cognitive analysis | Error recovery, "analyze reasoning" | Transparent thinking markers (🤔, 🎯, 💡) | Debugging, learning, optimization |
| 🔬 Deep Research | Systematic investigation mindset | /sc:research, investigation keywords | 6-phase workflow, evidence-based reasoning | Technical research, current events, market analysis |
| 📋 Task Management | Complex coordination | >3 steps, >2 directories | Phase breakdown, memory persistence | Multi-step operations, project management |
| 🎯 Orchestration | Intelligent tool selection | Multi-tool ops, high resource usage | Optimal tool routing, parallel execution | Complex analysis, performance optimization |
| Token Efficiency | Compressed communication | High context usage, --uc flag | Symbol systems, estimated 30-50% token reduction | Resource constraints, large operations |

Getting Started (2-Minute Overview)

Modes activate through behavioral instructions - Claude Code reads context files to determine which mode behaviors to adopt based on your task patterns and complexity.

Quick Examples:

# Automatic activation examples
/sc:brainstorm "mobile app"        # → Socratic discovery questions
/sc:implement "auth system"        # → Multi-phase coordination  
"--uc analyze large-codebase/"     # → Compressed symbol output

When to use manual flags:

  • Need specific behavior: --brainstorm, --introspect, --uc
  • Override automatic detection for learning/debugging
  • Optimize for specific constraints (memory, time, clarity)

Mode Details

🧠 Brainstorming Mode - Interactive Discovery

Purpose: Transform vague ideas into structured requirements through collaborative discovery.

Auto-Activation Triggers:

  • Vague project requests: "I want to build...", "Thinking about creating..."
  • Exploration keywords: brainstorm, explore, discuss, figure out, not sure
  • Uncertainty indicators: "maybe", "possibly", "could we"
  • Manual flags: --brainstorm, --bs

Behavioral Changes:

  • Socratic Questioning: Asks probing questions to uncover hidden requirements
  • Non-Presumptive Approach: Avoids assumptions, lets you guide discovery direction
  • Collaborative Partnership: Works with you in exploration rather than directive consultation
  • Requirement Synthesis: Converts insights into structured technical briefs
  • Cross-Session Memory: Maintains discovery context for follow-up discussions

Example Experience:

Standard Approach: "I'll build a user authentication system with JWT tokens..."
Brainstorming Approach: 
"🤔 Let's explore this together:
 - What specific user problems does auth solve for your app?
 - Who are your users and what's their technical comfort level?  
 - Any existing systems or social logins to integrate with?
 - What security/compliance requirements do you have?
 📝 Outcome: Detailed auth requirements brief ready for implementation"

Success Criteria

  • Responds with questions instead of immediate solutions
  • Questions explore user needs, technical constraints, and business goals
  • Maintains collaborative tone throughout discovery process
  • Produces structured requirements or technical brief as outcome

Verify: /sc:brainstorm "web app" should ask about users, features, technology
Test: Vague requests should trigger discovery questions, not assumptions
Check: Mode should persist across follow-up questions in same conversation

Works Best With:

  • → Task Management: Requirements discovery → systematic implementation planning
  • → Orchestration: Requirements clarity → optimal tool selection for implementation
  • Manual Override: Use --brainstorm to force discovery even for "clear" requirements

🔍 Introspection Mode - Meta-Cognitive Analysis

Purpose: Expose reasoning process for learning optimization and transparent decision-making.

Auto-Activation Triggers:

  • Self-analysis requests: "analyze my reasoning", "reflect on decision"
  • Error recovery scenarios and unexpected results requiring investigation
  • Complex problem solving with multiple viable approaches
  • Pattern recognition needs for optimization opportunities
  • Manual flag: --introspect

Behavioral Changes:

  • Reasoning Transparency: Exposes thinking process with clear markers (🤔, 🎯, 📊, 💡)
  • Decision Analysis: Evaluates choice logic and considers alternative approaches
  • Pattern Recognition: Identifies recurring behaviors and improvement opportunities
  • Meta-Learning: Extracts insights for continuous framework improvement
  • Framework Compliance: Validates actions against SuperClaude principles and quality gates

Example Experience:

Standard Approach: "I'll analyze this code structure and suggest improvements..."
Introspective Approach:
"🧠 Meta-Analysis: Why did I choose structural analysis over functional flow?
 🎯 Decision Logic: Saw class hierarchy → assumed OOP pattern dominance
 🔄 Alternative Considered: Data flow analysis might reveal hidden dependencies
 📊 Evidence Check: File imports suggest functional composition, not OOP
 💡 Learning: Need to analyze imports before choosing analysis strategy
 ⚡ Correction: Switching to functional flow analysis approach"

Works Best With:

  • → Task Management: Provides transparent reasoning for complex phase decisions
  • Any Mode: Adds reasoning transparency layer to any other mode's operation
  • Manual Override: Use --introspect for learning sessions and debugging workflows

🔬 Deep Research Mode - Systematic Investigation Mindset

Purpose: Research mindset for systematic investigation and evidence-based reasoning.

Auto-Activation Triggers:

  • /sc:research command invocation
  • Research-related keywords: investigate, explore, discover, analyze
  • Questions requiring current information beyond knowledge cutoff
  • Complex research requirements
  • Manual flag: --research

Behavioral Modifications:

  • Thinking Style: Systematic over casual, evidence over assumption, progressive depth exploration
  • Communication: Lead with confidence levels, provide inline citations, acknowledge uncertainties
  • Priority Shifts: Completeness over speed, accuracy over speculation, verification over assumption
  • Process Adaptations: Always create investigation plans, default to parallel operations, maintain evidence chains

6-Phase Research Workflow:

  • 📋 Understand (5-10%): Assess query complexity and requirements
  • 📝 Plan (10-15%): Select strategy (planning/intent/unified) and identify parallelization
  • TodoWrite (5%): Create adaptive task hierarchy (3-15 tasks based on complexity)
  • 🔄 Execute (50-60%): Parallel-first searches and smart extraction routing
  • 📊 Track (Continuous): Monitor progress and update confidence scores
  • Validate (10-15%): Verify evidence chains and ensure completeness

Example Experience:

Standard Mode: "Here are some search results about quantum computing..."
Deep Research Mode: 
"📊 Research Plan: Quantum computing breakthroughs
 ✓ TodoWrite: Created 8 research tasks
 🔄 Executing parallel searches across domains
 📈 Confidence: 0.82 across 15 verified sources
 📝 Report saved: docs/research/research_quantum_[timestamp].md"

Quality Standards

  • Minimum 2 sources per claim with inline citations
  • Confidence scoring (0.0-1.0) for all findings
  • Parallel execution by default for independent operations
  • Reports saved to docs/research/ with proper structure
  • Clear methodology and evidence presentation

Verify: /sc:research "test topic" should create TodoWrite and execute systematically
Test: All research should include confidence scores and citations
Check: Reports should be saved to docs/research/ automatically

Works Best With:

  • → Task Management: Research planning with TodoWrite integration
  • → Orchestration: Parallel Tavily/Playwright coordination
  • Manual Override: Use --depth and --strategy for fine control

📋 Task Management Mode - Complex Coordination

Purpose: Hierarchical task organization with session persistence for multi-step operations.

Auto-Activation Triggers:

  • Operations requiring >3 coordinated steps
  • Multiple file/directory scope (>2 directories OR >3 files)
  • Complex dependencies requiring phases and checkpoints
  • Quality improvement requests: polish, refine, enhance
  • Manual flags: --task-manage, --delegate

Behavioral Changes:

  • Hierarchical Planning: Breaks complex work into Plan → Phase → Task → Todo structure
  • Session Persistence: Maintains project context and progress across interruptions
  • Memory Integration: Uses write_memory/read_memory for state preservation
  • Progress Orchestration: Coordinates TodoWrite with memory updates for tracking
  • Quality Gates: Implements systematic validation checkpoints between phases

Example Experience:

Standard Approach: "I'll implement user authentication..." → Direct implementation
Task Management Approach:
"📋 Multi-Phase Implementation Plan:
 🎯 Phase 1: Security Requirements Analysis (Session 1)
 🎯 Phase 2: API Design & Documentation (Session 2)  
 🎯 Phase 3: Implementation & Testing (Session 3-4)
 🎯 Phase 4: Integration & Validation (Session 5)
 💾 Session persistence: Resume context automatically
 ✓ Quality gates: Validation before each phase transition"

Works Best With:

  • Brainstorming →: Requirements discovery then systematic implementation
  • + Orchestration: Task coordination with optimal tool selection
  • + Introspection: Transparent reasoning for complex phase decisions

🎯 Orchestration Mode - Intelligent Tool Selection

Purpose: Optimize task execution through intelligent tool routing and parallel coordination.

Auto-Activation Triggers:

  • Multi-tool operations requiring sophisticated coordination
  • Performance constraints (high resource usage)
  • Parallel execution opportunities (>3 independent files/operations)
  • Complex routing decisions with multiple valid tool approaches

Behavioral Changes:

  • Intelligent Tool Routing: Selects optimal MCP servers and native tools for each task type
  • Resource Awareness: Adapts approach based on system constraints and availability
  • Parallel Optimization: Identifies independent operations for concurrent execution
  • Coordination Focus: Optimizes tool selection and usage through coordinated execution
  • Adaptive Fallback: Switches tools gracefully when preferred options are unavailable

Example Experience:

Standard Approach: Sequential file-by-file analysis and editing
Orchestration Approach:
"🎯 Multi-Tool Coordination Strategy:
 🔍 Phase 1: Serena (semantic analysis) + Sequential (architecture review)
 ⚡ Phase 2: Morphllm (pattern edits) + Magic (UI components) 
 🧪 Phase 3: Playwright (testing) + Context7 (documentation patterns)
 🔄 Parallel execution: 3 tools working simultaneously"

Works Best With:

  • Task Management →: Provides tool coordination for complex multi-phase plans
  • + Token Efficiency: Optimal tool selection with compressed communication
  • Any Complex Task: Adds intelligent tool routing to enhance execution

Token Efficiency Mode - Compressed Communication

Purpose: Achieve estimated 30-50% token reduction through symbol systems while preserving information quality.

Auto-Activation Triggers:

  • High context usage approaching limits
  • Large-scale operations requiring resource efficiency
  • User explicit flags: --uc, --ultracompressed
  • Complex analysis workflows with multiple outputs

Behavioral Changes:

  • Symbol Communication: Uses visual symbols for logic flows, status, and technical domains
  • Technical Abbreviation: Context-aware compression for repeated technical terms
  • Structured Density: Bullet points, tables, and concise formatting over verbose paragraphs
  • Information Preservation: Maintains ≥95% information quality despite compression
  • Structured Format: Organized for clarity and task completion

Example Experience:

Standard Approach: "The authentication system implementation shows a security vulnerability in the user validation function that needs immediate attention..."
Token Efficient Approach:
"🛡️ Security Alert:
 auth.js:45 → user val() → critical vuln
 📊 Impact: ❌ token bypass possible
 ⚡ Action: fix validation + audit ∵ high sev
 🔧 Est: 2h impl + 1h test"

Works Best With:

  • Any Mode: Adds compression layer while preserving mode-specific behaviors
  • Orchestration →: Compressed tool coordination and status updates
  • Manual Override: Use --uc when context pressure or efficiency is priority

🎨 Standard Mode - Balanced Default

Purpose: Provide clear, professional communication for straightforward development tasks.

Auto-Activation Triggers:

  • Simple, well-defined tasks without complexity indicators
  • Single-file operations with clear requirements
  • Basic explanations and standard development workflows
  • No other mode triggers detected (default fallback)

Behavioral Changes:

  • Professional Communication: Clear, concise technical language without compression
  • Moderate Detail: Balanced information depth suitable for most development tasks
  • Standard Tool Selection: Uses native Claude capabilities and basic tools
  • Quality Focus: Maintains code quality without complex orchestration overhead
  • Responsive Adaptation: Ready to switch to specialized modes when complexity increases

Example Experience:

Standard Approach: Consistent, professional baseline for all tasks
"I'll implement the login function with proper error handling:
 
 1. Validate user input (email format, password requirements)
 2. Authenticate against database with secure hashing
 3. Generate JWT token with appropriate expiration
 4. Return success response with user data
 
 The implementation will follow security best practices and include comprehensive error handling."

Works Best With:

  • → Any Mode: Serves as baseline that other modes enhance
  • Mode Switching: Automatically escalates to specialized modes when needed
  • Clarity Priority: When straightforward communication is more important than optimization

Advanced Usage

Mode Combinations

Multi-Mode Workflows:

# Discovery → Planning → Implementation
/sc:brainstorm "microservices architecture" --task-manage
# → Brainstorming: requirement discovery
# → Task Management: multi-phase coordination

# Analysis with transparency and efficiency
/sc:analyze legacy-system/ --introspect --uc
# → Introspection: transparent reasoning
# → Token Efficiency: compressed output

Manual Mode Control

Force Specific Behaviors:

  • --brainstorm: Force collaborative discovery for any task
  • --introspect: Add reasoning transparency to any mode
  • --task-manage: Enable hierarchical coordination
  • --orchestrate: Optimize tool selection and parallel execution
  • --uc: Compress communication for efficiency

Override Examples:

# Force brainstorming on "clear" requirements
/sc:implement "user login" --brainstorm

# Add reasoning transparency to debugging
# Debug authentication issue with transparent reasoning

# Enable task management for simple operations
# Update styles.css with systematic task management

Mode Boundaries and Priority

When Modes Activate:

  1. Complexity Threshold: >3 files → Task Management
  2. Resource Pressure: High context usage → Token Efficiency
  3. Multi-Tool Need: Complex analysis → Orchestration
  4. Uncertainty: Vague requirements → Brainstorming
  5. Error Recovery: Problems → Introspection
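
As a purely illustrative reading of those thresholds (mode selection actually happens through behavioral instructions, not code, and the 75% figure below is an assumption for "high context usage"):

```python
def pick_mode(files_touched: int, context_pct: float,
              multi_tool: bool, vague_request: bool, error_recovery: bool) -> str:
    if files_touched > 3:          # 1. complexity threshold
        return "task-management"
    if context_pct >= 75:          # 2. resource pressure (assumed threshold)
        return "token-efficiency"
    if multi_tool:                 # 3. multi-tool need
        return "orchestration"
    if vague_request:              # 4. uncertainty
        return "brainstorming"
    if error_recovery:             # 5. error recovery
        return "introspection"
    return "standard"              # default fallback
```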

Priority Rules:

  • Safety First: Quality and validation always override efficiency
  • User Intent: Manual flags override automatic detection
  • Context Adaptation: Modes stack based on complexity
  • Resource Management: Efficiency modes activate under pressure

Real-World Examples

Complete Workflow Examples

New Project Development:

# Phase 1: Discovery (Brainstorming Mode auto-activates)
"I want to build a productivity app"
→ 🤔 Socratic questions about users, features, platform choice
→ 📝 Structured requirements brief

# Phase 2: Planning (Task Management Mode auto-activates)  
/sc:implement "core productivity features"
→ 📋 Multi-phase breakdown with dependencies
→ 🎯 Phase coordination with quality gates

# Phase 3: Implementation (Orchestration Mode coordinates tools)
/sc:implement "frontend and backend systems"
→ 🎯 Magic (UI) + Context7 (patterns) + Sequential (architecture)
→ ⚡ Parallel execution optimization

Debugging Complex Issues:

# Problem analysis (Introspection Mode auto-activates)
"Users getting intermittent auth failures"
→ 🤔 Transparent reasoning about potential causes
→ 🎯 Hypothesis formation and evidence gathering
→ 💡 Pattern recognition across similar issues

# Systematic resolution (Task Management coordinates)
# Fix authentication system comprehensively
→ 📋 Phase 1: Root cause analysis
→ 📋 Phase 2: Solution implementation  
→ 📋 Phase 3: Testing and validation

Mode Combination Patterns

High-Complexity Scenarios:

# Large refactoring with multiple constraints
/sc:improve legacy-system/ --introspect --uc --orchestrate
→ 🔍 Transparent reasoning (Introspection)
→ ⚡ Compressed communication (Token Efficiency)  
→ 🎯 Optimal tool coordination (Orchestration)
→ 📋 Systematic phases (Task Management auto-activates)

Quick Reference

Mode Activation Patterns

| Trigger Type | Example Input | Mode Activated | Key Behavior |
|--------------|---------------|----------------|--------------|
| Vague Request | "I want to build an app" | 🧠 Brainstorming | Socratic discovery questions |
| Complex Scope | >3 files or >2 directories | 📋 Task Management | Phase coordination |
| Multi-Tool Need | Analysis + Implementation | 🎯 Orchestration | Tool optimization |
| Error Recovery | "This isn't working as expected" | 🔍 Introspection | Transparent reasoning |
| Resource Pressure | High context usage | Token Efficiency | Symbol compression |
| Simple Task | "Fix this function" | 🎨 Standard | Clear, direct approach |

Manual Override Commands

# Force specific mode behaviors
/sc:command --brainstorm    # Collaborative discovery
/sc:command --introspect    # Reasoning transparency
/sc:command --task-manage   # Hierarchical coordination
/sc:command --orchestrate   # Tool optimization
/sc:command --uc           # Token compression

# Combine multiple modes
/sc:command --introspect --uc    # Transparent + efficient
/sc:command --task-manage --orchestrate  # Coordinated + optimized

Troubleshooting

For troubleshooting help, start with the common issues and quick fixes below.

Common Issues

  • Mode not activating: Use manual flags: --brainstorm, --introspect, --uc
  • Wrong mode active: Check complexity triggers and keywords in request
  • Mode switching unexpectedly: Normal behavior based on task evolution
  • Execution impact: Modes optimize tool usage, shouldn't affect execution
  • Mode conflicts: Check flag priority rules in Flags Guide

Immediate Fixes

  • Force specific mode: Use explicit flags like --brainstorm or --task-manage
  • Reset mode behavior: Restart Claude Code session to reset mode state
  • Check mode indicators: Look for 🤔, 🎯, 📋 symbols in responses
  • Verify complexity: Simple tasks use Standard mode, complex tasks auto-switch

Mode-Specific Troubleshooting

Brainstorming Mode Issues:

# Problem: Mode gives solutions instead of asking questions
# Quick Fix: Check request clarity and use explicit flag
/sc:brainstorm "web app" --brainstorm         # Force discovery mode
"I have a vague idea about..."                # Use uncertainty language
"Maybe we could build..."                     # Trigger exploration

Task Management Mode Issues:

# Problem: Simple tasks getting complex coordination
# Quick Fix: Reduce scope or use simpler commands
/sc:implement "function" --no-task-manage     # Disable coordination
/sc:troubleshoot bug.js                       # Use basic commands
# Check if task really is complex (>3 files, >2 directories)

Token Efficiency Mode Issues:

# Problem: Output too compressed or unclear
# Quick Fix: Disable compression for clarity
/sc:command --no-uc                           # Disable compression
/sc:command --verbose                         # Force detailed output
# Use when clarity is more important than efficiency

Introspection Mode Issues:

# Problem: Too much meta-commentary, not enough action
# Quick Fix: Disable introspection for direct work
/sc:command --no-introspect                   # Direct execution
# Use introspection only for learning and debugging

Orchestration Mode Issues:

# Problem: Tool coordination causing confusion
# Quick Fix: Simplify tool usage
/sc:command --no-mcp                          # Native tools only
/sc:command --simple                          # Basic execution
# Check if task complexity justifies orchestration

Error Code Reference

| Mode Error | Meaning | Quick Fix |
|------------|---------|-----------|
| B001 | Brainstorming failed to activate | Use explicit --brainstorm flag |
| T001 | Task management overhead | Use --no-task-manage for simple tasks |
| U001 | Token efficiency too aggressive | Use --verbose or --no-uc |
| I001 | Introspection mode stuck | Use --no-introspect for direct action |
| O001 | Orchestration coordination failed | Use --no-mcp or --simple |
| M001 | Mode conflict detected | Check flag priority rules |
| M002 | Mode switching loop | Restart session to reset state |
| M003 | Mode not recognized | Update SuperClaude or check spelling |

Progressive Support Levels

Level 1: Quick Fix (< 2 min)

  • Use manual flags to override automatic mode selection
  • Check if task complexity matches expected mode behavior
  • Try restarting Claude Code session

Level 2: Detailed Help (5-15 min)

# Mode-specific diagnostics
/sc:help modes                            # List all available modes
/sc:reflect --type mode-status            # Check current mode state
# Review request complexity and triggers

Level 3: Expert Support (30+ min)

# Deep mode analysis
SuperClaude install --diagnose
# Check mode activation patterns
# Review behavioral triggers and thresholds

Level 4: Community Support

  • Report mode issues at GitHub Issues
  • Include examples of unexpected mode behavior
  • Describe desired vs actual mode activation

Success Validation

After applying mode fixes, test with:

  • Simple requests use Standard mode (clear, direct responses)
  • Complex requests auto-activate appropriate modes (coordination, reasoning)
  • Manual flags override automatic detection correctly
  • Mode indicators (🤔, 🎯, 📋) appear when expected
  • Performance remains good across different modes

Frequently Asked Questions

Q: How do I know which mode is active? A: Look for these indicators in communication patterns:

  • 🤔 Discovery questions → Brainstorming
  • 🎯 Reasoning transparency → Introspection
  • Phase breakdowns → Task Management
  • Tool coordination → Orchestration
  • Symbol compression → Token Efficiency

Q: Can I force specific modes? A: Yes, use manual flags to override automatic detection:

/sc:command --brainstorm     # Force discovery
/sc:command --introspect     # Add transparency
/sc:command --task-manage    # Enable coordination
/sc:command --uc            # Compress output

Q: Do modes affect execution? A: Modes optimize tool usage through coordination:

  • Token Efficiency: 30-50% context reduction
  • Orchestration: Parallel processing
  • Task Management: Prevents rework through systematic planning

Q: Can modes work together? A: Yes, modes are designed to complement each other:

  • Task Management coordinates other modes
  • Token Efficiency compresses any mode's output
  • Introspection adds transparency to any workflow

Summary

SuperClaude's behavioral modes create an intelligent adaptation system that matches your needs automatically:

  • 🧠 Brainstorming: Transforms vague ideas into clear requirements
  • 🔍 Introspection: Provides transparent reasoning for learning and debugging
  • 📋 Task Management: Coordinates complex multi-step operations
  • 🎯 Orchestration: Optimizes tool selection and parallel execution
  • Token Efficiency: Compresses communication while preserving clarity
  • 🎨 Standard: Maintains professional baseline for straightforward tasks

The key insight: You don't need to think about modes - they work transparently to enhance your development experience. Simply describe what you want to accomplish, and SuperClaude automatically adapts its approach to match your needs.


Learning Progression: 🌱 Essential (Week 1) → 🌿 Intermediate (Week 2-3) → 🌲 Advanced (Month 2+) → 🔧 Expert
