# SuperClaude Installation Process Analysis

**Date**: 2025-10-17
**Analyzer**: PM Agent + User Feedback
**Status**: Critical Issues Identified

## 🚨 Critical Issues

### Issue 1: Misleading "Core is recommended" Message

**Location**: `setup/cli/commands/install.py:343`

**Problem**:
```yaml
Stage 2 Message: "Select components (Core is recommended):"

User Behavior:
- Sees "Core is recommended"
- Selects only "core"
- Expects complete working installation

Actual Result:
- mcp_docs NOT installed (unless user selects 'all')
- airis-mcp-gateway documentation missing
- Potentially broken MCP server functionality

Root Cause:
- auto_selected_mcp_docs logic exists (L362-368)
- BUT only triggers if MCP servers were selected in Stage 1
- If user skips Stage 1 → no mcp_docs auto-selection
```

**Evidence**:
```python
# setup/cli/commands/install.py:362-368
if auto_selected_mcp_docs and "mcp_docs" not in selected_components:
    mcp_docs_index = len(framework_components)
    if mcp_docs_index not in selections:
        # User didn't select it, but we auto-select it
        selected_components.append("mcp_docs")
        logger.info("Auto-selected MCP documentation for configured servers")
```
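
To make the gap concrete, here is a minimal trace of the "Core only, Stage 1 skipped" path. The flag assignment is an assumption about how `auto_selected_mcp_docs` is derived upstream:

```python
# Hypothetical trace of the failure path described above
selected_mcp_servers = []                            # user skipped Stage 1
auto_selected_mcp_docs = bool(selected_mcp_servers)  # assumed upstream derivation → False
selected_components = ["core"]                       # user followed "Core is recommended"

# The guarded branch from install.py:362-368 never runs:
if auto_selected_mcp_docs and "mcp_docs" not in selected_components:
    selected_components.append("mcp_docs")

assert "mcp_docs" not in selected_components  # incomplete installation
```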

**Impact**:
- 🔴 **High**: Users following "Core is recommended" get incomplete installation
- 🔴 **High**: No warning about missing MCP documentation
- 🟡 **Medium**: User confusion about "why doesn't airis-mcp-gateway work?"

### Issue 2: Redundant Interactive Installation

**Problem**:
```yaml
Current Flow:
Stage 1: MCP Server Selection (interactive menu)
Stage 2: Framework Component Selection (interactive menu)

Inefficiency:
- Two separate interactive prompts
- User must manually select each time
- No quick install option

Better Approach:
CLI flags: --recommended, --minimal, --all, --components core mcp
```

**Evidence**:
```python
# setup/cli/commands/install.py:64-66
parser.add_argument(
    "--components", type=str, nargs="+", help="Specific components to install"
)
```

CLI support already exists but is neither promoted nor well documented.
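
For example, the existing flag can already drive a non-interactive install. Values are space-separated because of `nargs="+"`; the component names follow the ones used elsewhere in this document:

```bash
# Non-interactive install using the existing --components flag
uv run superclaude install --components core commands agents
```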

**Impact**:
- 🟡 **Medium**: Poor developer experience (slow, repetitive)
- 🟡 **Medium**: Discourages experimentation (too many clicks)
- 🟢 **Low**: Advanced users can use `--components`, but most don't know it exists

### Issue 3: No Performance Validation

**Problem**:
```yaml
Assumption: "Install all components = best experience"

Unverified Questions:
1. Does full install increase Claude Code context pressure?
2. Does full install slow down session initialization?
3. Are all components actually needed for most users?
4. What's the token usage difference: minimal vs full?

No Benchmark Data:
- No before/after performance tests
- No token usage comparisons
- No load time measurements
- No context pressure analysis
```

**Impact**:
- 🟡 **Medium**: Potential performance regression unknown
- 🟡 **Medium**: Users may install unnecessary components
- 🟢 **Low**: May increase context usage unnecessarily

## 📊 Proposed Solutions

### Solution 1: Installation Profiles (Quick Win)

**Add CLI shortcuts**:
```bash
# Current (verbose)
uv run superclaude install
→ Interactive Stage 1 (MCP selection)
→ Interactive Stage 2 (Component selection)

# Proposed (efficient)
uv run superclaude install --recommended
→ Installs: core + modes + commands + agents + mcp_docs + airis-mcp-gateway
→ One command, fully working installation

uv run superclaude install --minimal
→ Installs: core only (for testing/development)

uv run superclaude install --all
→ Installs: everything (current 'all' behavior)

uv run superclaude install --components core mcp --mcp-servers airis-mcp-gateway
→ Explicit component selection (current functionality, clearer)
```

**Implementation**:
```python
# Add to setup/cli/commands/install.py

parser.add_argument(
    "--recommended",
    action="store_true",
    help="Install recommended components (core + modes + commands + agents + mcp_docs + airis-mcp-gateway)"
)

parser.add_argument(
    "--minimal",
    action="store_true",
    help="Minimal installation (core only)"
)

parser.add_argument(
    "--all",
    action="store_true",
    help="Install all components"
)

parser.add_argument(
    "--mcp-servers",
    type=str,
    nargs="+",
    help="Specific MCP servers to install"
)
```
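
To show how these flags might tie together, here is a minimal resolution sketch. `PROFILES`, `resolve_components`, and the exact component names are illustrative assumptions, not the project's actual API:

```python
# Hypothetical mapping from profile flags to component lists (names are assumptions)
PROFILES = {
    "minimal": ["core"],
    "recommended": ["core", "modes", "commands", "agents", "mcp_docs"],
}

def resolve_components(args, all_components):
    """Resolve profile flags to a concrete component list (sketch)."""
    if getattr(args, "all", False):      # --all: install everything
        return list(all_components)
    if args.recommended:                 # curated default profile
        return list(PROFILES["recommended"])
    if args.minimal:                     # core only
        return list(PROFILES["minimal"])
    return list(args.components or [])   # explicit list; empty falls back to interactive flow
```

The `--recommended` profile would additionally pass `airis-mcp-gateway` through the MCP server installation path, since it is a server rather than a framework component.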
### Solution 2: Fix Auto-Selection Logic

**Problem**: `mcp_docs` is not included when the user selects "Core" only.

**Fix**:
```python
# setup/cli/commands/install.py:select_framework_components

# After line 360, add:
# ALWAYS include mcp_docs if ANY MCP server will be used
if selected_mcp_servers:
    if "mcp_docs" not in selected_components:
        selected_components.append("mcp_docs")
        logger.info(f"Auto-included mcp_docs for {len(selected_mcp_servers)} MCP servers")

# Additionally: if airis-mcp-gateway is detected in an existing installation,
# auto-include mcp_docs even if not explicitly selected
```
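
A pure helper would make the rule easy to unit-test; the function name and call sites here are hypothetical:

```python
def ensure_mcp_docs(selected_components, selected_mcp_servers):
    """Enforce the invariant: any selected MCP server implies mcp_docs."""
    if selected_mcp_servers and "mcp_docs" not in selected_components:
        selected_components.append("mcp_docs")
    return selected_components

# Regression check for the "Core only + MCP server" case from Issue 1
assert ensure_mcp_docs(["core"], ["airis-mcp-gateway"]) == ["core", "mcp_docs"]
assert ensure_mcp_docs(["core"], []) == ["core"]
```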
### Solution 3: Performance Benchmark Suite

**Create**: `tests/performance/test_installation_performance.py`

**Test Scenarios**:
```python
import pytest
import time
from pathlib import Path


class TestInstallationPerformance:
    """Benchmark installation profiles"""

    def test_minimal_install_size(self):
        """Measure minimal installation footprint"""
        # Install core only
        # Measure: directory size, file count, token usage

    def test_recommended_install_size(self):
        """Measure recommended installation footprint"""
        # Install recommended profile
        # Compare to minimal baseline

    def test_full_install_size(self):
        """Measure full installation footprint"""
        # Install all components
        # Compare to recommended baseline

    def test_context_pressure_minimal(self):
        """Measure context usage with minimal install"""
        # Simulate Claude Code session
        # Track token usage for common operations

    def test_context_pressure_full(self):
        """Measure context usage with full install"""
        # Compare to minimal baseline
        # Acceptable threshold: < 20% increase

    def test_load_time_comparison(self):
        """Measure Claude Code initialization time"""
        # Minimal vs Full install
        # Load CLAUDE.md + all imported files
        # Measure parsing + processing time
```
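
One measurement that can be filled in immediately is the on-disk footprint. This helper is a sketch; the `~/.claude` install path is an assumption:

```python
from pathlib import Path

def measure_footprint(install_dir: Path) -> dict:
    """Return file count and total size in bytes for an install tree."""
    files = [p for p in install_dir.rglob("*") if p.is_file()]
    return {
        "files": len(files),
        "bytes": sum(p.stat().st_size for p in files),
    }

# Example: install each profile into a fresh directory, then compare:
# baseline = measure_footprint(Path.home() / ".claude")
```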
**Expected Metrics** (estimates, to be validated by the benchmarks):
```yaml
Minimal Install:
  Size: ~5 MB
  Files: ~10 files
  Token Usage: ~50K tokens
  Load Time: < 1 second

Recommended Install:
  Size: ~30 MB
  Files: ~50 files
  Token Usage: ~150K tokens (3x minimal)
  Load Time: < 3 seconds

Full Install:
  Size: ~50 MB
  Files: ~80 files
  Token Usage: ~250K tokens (5x minimal)
  Load Time: < 5 seconds

Acceptance Criteria:
- Recommended should be ≤ 3x minimal overhead
- Full should be ≤ 5x minimal overhead
- Load time should be < 5 seconds for any profile
```
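
Token usage can be approximated before real measurements exist. The 4-characters-per-token heuristic and the Markdown-only scope below are assumptions:

```python
from pathlib import Path

def estimate_tokens(install_dir: Path) -> int:
    """Rough token estimate for framework docs (~4 chars/token heuristic)."""
    total_chars = sum(
        len(p.read_text(errors="ignore"))
        for p in install_dir.rglob("*.md")  # framework content is Markdown
    )
    return total_chars // 4
```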

## 🎯 PM Agent Parallel Architecture Proposal

**Current PM Agent Design**:
- Sequential sub-agent delegation
- One agent at a time execution
- Manual coordination required

**Proposed: Deep Research-Style Parallel Execution**:
```yaml
PM Agent as Meta-Layer Commander:

Request Analysis:
- Parse user intent
- Identify required domains (backend, frontend, security, etc.)
- Classify dependencies (parallel vs sequential)

Parallel Execution Strategy:
Phase 1 - Independent Analysis (Parallel):
→ [backend-architect] analyzes API requirements
→ [frontend-architect] analyzes UI requirements
→ [security-engineer] analyzes threat model
→ All run simultaneously, no blocking

Phase 2 - Design Integration (Sequential):
→ PM Agent synthesizes Phase 1 results
→ Creates unified architecture plan
→ Identifies conflicts or gaps

Phase 3 - Parallel Implementation (Parallel):
→ [backend-architect] implements APIs
→ [frontend-architect] implements UI components
→ [quality-engineer] writes tests
→ All run simultaneously with coordination

Phase 4 - Validation (Sequential):
→ Integration testing
→ Performance validation
→ Security audit

Example Timeline:
Traditional Sequential: 40 minutes
- backend: 10 min
- frontend: 10 min
- security: 10 min
- quality: 10 min

PM Agent Parallel: 25 minutes (37.5% faster), ~15 minutes (62.5% faster) with tool optimization
- Phase 1 (parallel): 10 min (longest single task)
- Phase 2 (synthesis): 2 min
- Phase 3 (parallel): 10 min
- Phase 4 (validation): 3 min
- Total: 25 min → 15 min with tool optimization
```

**Implementation Sketch**:
```python
# superclaude/commands/pm.md (enhanced)
import asyncio
from typing import Dict, List


class PMAgentParallelOrchestrator:
    """
    PM Agent with Deep Research-style parallel execution
    """

    async def execute_parallel_phase(self, agents: List[str], context: Dict) -> Dict:
        """Execute multiple sub-agents in parallel"""
        tasks = []
        for agent_name in agents:
            task = self.delegate_to_agent(agent_name, context)
            tasks.append(task)

        # Run all agents concurrently
        results = await asyncio.gather(*tasks)

        # Synthesize results
        return self.synthesize_results(results)

    async def execute_request(self, user_request: str):
        """Main orchestration flow"""

        # Phase 0: Analysis
        analysis = await self.analyze_request(user_request)

        # Phase 1: Parallel Investigation
        if analysis.requires_multiple_domains:
            domain_agents = analysis.identify_required_agents()
            results_phase1 = await self.execute_parallel_phase(
                agents=domain_agents,
                context={"task": "analyze", "request": user_request}
            )

        # Phase 2: Synthesis
        unified_plan = await self.synthesize_plan(results_phase1)

        # Phase 3: Parallel Implementation
        if unified_plan.has_independent_tasks:
            impl_agents = unified_plan.identify_implementation_agents()
            results_phase3 = await self.execute_parallel_phase(
                agents=impl_agents,
                context={"task": "implement", "plan": unified_plan}
            )

        # Phase 4: Validation
        validation_result = await self.validate_implementation(results_phase3)

        return validation_result
```
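
One practical refinement worth considering: calling `asyncio.gather(*tasks, return_exceptions=True)` lets a failing sub-agent surface as a result object instead of cancelling the whole phase, which matters when Phase 1 agents are fully independent.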

## 🔄 Dependency Analysis

**Current Dependency Chain**:
```
core → (foundation)
modes → depends on core
commands → depends on core, modes
agents → depends on core, commands
mcp → depends on core (optional)
mcp_docs → depends on mcp (should always be included if mcp selected)
```

**Proposed Dependency Fix**:
```yaml
Strict Dependencies:
  mcp_docs → MUST include if ANY mcp server selected
  agents → SHOULD include for optimal PM Agent operation
  commands → SHOULD include for slash command functionality

Optional Dependencies:
  modes → OPTIONAL (behavior enhancements)
  specific_mcp_servers → OPTIONAL (feature enhancements)

Recommended Profile:
  - core (required)
  - commands (optimal experience)
  - agents (PM Agent sub-agent delegation)
  - mcp_docs (if using any MCP servers)
  - airis-mcp-gateway (zero-token baseline + on-demand loading)
```
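
A sketch of how strict dependencies could be enforced at install time; the dependency map and function are illustrative assumptions derived from the current chain above:

```python
# Hypothetical hard-dependency map (from the "Current Dependency Chain")
HARD_DEPS = {
    "modes": {"core"},
    "commands": {"core", "modes"},
    "agents": {"core", "commands"},
    "mcp_docs": {"mcp"},
}

def expand_with_deps(selected):
    """Transitively add required dependencies to a component selection."""
    resolved = set(selected)
    changed = True
    while changed:
        changed = False
        for comp in list(resolved):
            missing = HARD_DEPS.get(comp, set()) - resolved
            if missing:
                resolved |= missing
                changed = True
    return sorted(resolved)

# expand_with_deps(["agents"]) → ['agents', 'commands', 'core', 'modes']
```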

## 📋 Action Items

### Immediate (Critical)
1. ✅ Document current issues (this file)
2. ⏳ Fix `mcp_docs` auto-selection logic
3. ⏳ Add `--recommended` CLI flag

### Short-term (Important)
4. ⏳ Design performance benchmark suite
5. ⏳ Run baseline performance tests
6. ⏳ Add `--minimal` and `--mcp-servers` CLI flags

### Medium-term (Enhancement)
7. ⏳ Implement PM Agent parallel orchestration
8. ⏳ Run performance tests (before/after parallel)
9. ⏳ Prepare Pull Request with evidence

### Long-term (Strategic)
10. ⏳ Community feedback on installation profiles
11. ⏳ A/B testing: interactive vs CLI default
12. ⏳ Documentation updates

## 🧪 Testing Strategy

**Before Pull Request**:
```bash
# 1. Baseline Performance Test
uv run superclaude install --minimal
→ Measure: size, token usage, load time

uv run superclaude install --recommended
→ Compare to baseline

uv run superclaude install --all
→ Compare to recommended

# 2. Functional Tests
pytest tests/test_install_command.py -v
pytest tests/performance/ -v

# 3. User Acceptance
- Install with --recommended
- Verify airis-mcp-gateway works
- Verify PM Agent can delegate to sub-agents
- Verify no warnings or errors

# 4. Documentation
- Update README.md with new flags
- Update CONTRIBUTING.md with benchmark requirements
- Create docs/installation-guide.md
```
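
If this is wired into CI, a minimal job could run the functional and performance suites on each PR. The workflow file, action versions, and dependency setup below are assumptions:

```yaml
# .github/workflows/install-benchmarks.yml (hypothetical)
name: install-benchmarks
on: [pull_request]
jobs:
  benchmarks:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv run superclaude install --minimal
      # assumes pytest is declared as a dev dependency
      - run: uv run pytest tests/test_install_command.py tests/performance/ -v
```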

## 💡 Expected Outcomes

**After Implementing Fixes**:
```yaml
User Experience:
  Before: "Core is recommended" → Incomplete install → Confusion
  After: "--recommended" → Complete working install → Clear expectations

Performance:
  Before: Unknown (no benchmarks)
  After: Measured, optimized, validated

PM Agent:
  Before: Sequential sub-agent execution (slow)
  After: Parallel sub-agent execution (60%+ faster)

Developer Experience:
  Before: Interactive only (slow for repeated installs)
  After: CLI flags (fast, scriptable, CI-friendly)
```

## 🎯 Pull Request Checklist

Before sending PR to SuperClaude-Org/SuperClaude_Framework:

- [ ] Performance benchmark suite implemented
- [ ] Baseline tests executed (minimal, recommended, full)
- [ ] Before/After data collected and analyzed
- [ ] CLI flags (`--recommended`, `--minimal`) implemented
- [ ] `mcp_docs` auto-selection logic fixed
- [ ] All tests passing (`pytest tests/ -v`)
- [ ] Documentation updated (README, CONTRIBUTING, installation guide)
- [ ] User feedback gathered (if possible)
- [ ] PM Agent parallel architecture proposal documented
- [ ] No breaking changes introduced
- [ ] Backward compatibility maintained

**Evidence Required**:
- Performance comparison table (minimal vs recommended vs full)
- Token usage analysis report
- Load time measurements
- Before/After installation flow screenshots
- Test coverage report (>80%)

---

**Conclusion**: The installation process has clear improvement opportunities. With CLI flags, fixed auto-selection logic, and performance benchmarks, we can provide a much better user experience. The PM Agent parallel architecture proposal offers an estimated 60%+ speedup for complex multi-domain tasks.

**Next Step**: Implement the performance benchmark suite to gather evidence before making changes.