Mirror of https://github.com/SuperClaude-Org/SuperClaude_Framework.git (synced 2025-12-17 17:56:46 +00:00)
* refactor: PM Agent complete independence from external MCP servers

  ## Summary
  Implement graceful degradation so the PM Agent operates fully without any MCP server dependencies. MCP servers now serve as optional enhancements rather than required components.

  ## Changes

  ### Responsibility Separation (NEW)
  - **PM Agent**: Development workflow orchestration (PDCA cycle, task management)
  - **mindbase**: Memory management (long-term, freshness, error learning)
  - **Built-in memory**: Session-internal context (volatile)

  ### 3-Layer Memory Architecture with Fallbacks
  1. **Built-in Memory** [OPTIONAL]: Session context via MCP memory server
  2. **mindbase** [OPTIONAL]: Long-term semantic search via airis-mcp-gateway
  3. **Local Files** [ALWAYS]: Core functionality in docs/memory/

  ### Graceful Degradation Implementation
  - All MCP operations marked with [ALWAYS] or [OPTIONAL]
  - Explicit IF/ELSE fallback logic for every MCP call (a minimal Python sketch appears after this commit list)
  - Dual storage: always write to local files, optionally to mindbase
  - Smart lookup: semantic search (if available) → text search (always works)

  ### Key Fallback Strategies
  - **Session Start**: mindbase available → search_conversations() for semantic context; unavailable → grep docs/memory/*.jsonl for text-based lookup
  - **Error Detection**: mindbase available → semantic search for similar past errors; unavailable → grep docs/mistakes/ + solutions_learned.jsonl
  - **Knowledge Capture**: always echo >> docs/memory/patterns_learned.jsonl (persistent); optionally mindbase.store() for semantic search enhancement

  ## Benefits
  - ✅ Zero external dependencies (100% functionality without MCP)
  - ✅ Enhanced capabilities when MCPs are available (semantic search, freshness)
  - ✅ No functionality loss, only reduced search intelligence
  - ✅ Transparent degradation (no error messages, automatic fallback)

  ## Related Research
  - Serena MCP investigation: exposes tools (not resources); memory = markdown files
  - mindbase superiority: PostgreSQL + pgvector > Serena memory features
  - Best practices alignment: /Users/kazuki/github/airis-mcp-gateway/docs/mcp-best-practices.md

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* chore: add PR template and pre-commit config

  - Add structured PR template with Git workflow checklist
  - Add pre-commit hooks for secret detection and Conventional Commits
  - Enforce code quality gates (YAML/JSON/Markdown lint, shellcheck)

  NOTE: Execute pre-commit inside the Docker container to avoid host pollution:

      docker compose exec workspace uv tool install pre-commit
      docker compose exec workspace pre-commit run --all-files

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* docs: update PM Agent context with token efficiency architecture

  - Add Layer 0 Bootstrap (150 tokens, 95% reduction)
  - Document Intent Classification System (5 complexity levels)
  - Add Progressive Loading strategy (5-layer)
  - Document mindbase integration incentive (38% savings)
  - Update with 2025-10-17 redesign details

* refactor: PM Agent command with progressive loading

  - Replace auto-loading with User Request First philosophy
  - Add 5-layer progressive context loading
  - Implement intent classification system
  - Add workflow metrics collection (.jsonl)
  - Document graceful degradation strategy

* fix: installer improvements

  Update installer logic for better reliability
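A minimal sketch of the [OPTIONAL]-enhancement-with-[ALWAYS]-fallback pattern described in the first commit above. The `mindbase` client object and its `search_conversations()` method are assumptions for illustration, not the framework's actual API; only the docs/memory/*.jsonl fallback path reflects the documented layout:

```python
from pathlib import Path

def recall_context(query: str, mindbase=None) -> list[str]:
    """Semantic search when mindbase is available; plain text scan otherwise."""
    if mindbase is not None:  # [OPTIONAL] enhancement path
        return mindbase.search_conversations(query)
    # [ALWAYS] fallback: text search across local JSONL memory files
    hits: list[str] = []
    for jsonl in Path("docs/memory").glob("*.jsonl"):
        for line in jsonl.read_text().splitlines():
            if query in line:
                hits.append(line)
    return hits
```

Degradation stays transparent because both branches return the same shape; only the search intelligence differs.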
* docs: add comprehensive development documentation

  - Add architecture overview
  - Add PM Agent improvements analysis
  - Add parallel execution architecture
  - Add CLI install improvements
  - Add code style guide
  - Add project overview
  - Add install process analysis

* docs: add research documentation

  Add LLM agent token efficiency research and analysis

* docs: add suggested commands reference

* docs: add session logs and testing documentation

  - Add session analysis logs
  - Add testing documentation

* feat: migrate CLI to typer + rich for modern UX

  ## What Changed

  ### New CLI Architecture (typer + rich)
  - Created `superclaude/cli/` module with modern typer-based CLI
  - Replaced custom UI utilities with rich native features
  - Added type-safe command structure with automatic validation

  ### Commands Implemented
  - **install**: Interactive installation with rich UI (progress, panels)
  - **doctor**: System diagnostics with rich table output
  - **config**: API key management with format validation

  ### Technical Improvements
  - Dependencies: Added typer>=0.9.0, rich>=13.0.0, click>=8.0.0
  - Entry Point: Updated pyproject.toml to use `superclaude.cli.app:cli_main`
  - Tests: Added comprehensive smoke tests (11 passed)

  ### User Experience Enhancements
  - Rich formatted help messages with panels and tables
  - Automatic input validation with retry loops
  - Clear error messages with actionable suggestions
  - Non-interactive mode support for CI/CD

  ## Testing
  ```bash
  uv run superclaude --help        # ✓ Works
  uv run superclaude doctor        # ✓ Rich table output
  uv run superclaude config show   # ✓ API key management
  pytest tests/test_cli_smoke.py   # ✓ 11 passed, 1 skipped
  ```

  ## Migration Path
  - ✅ P0: Foundation complete (typer + rich + smoke tests)
  - 🔜 P1: Pydantic validation models (next sprint)
  - 🔜 P2: Enhanced error messages (next sprint)
  - 🔜 P3: API key retry loops (next sprint)

  ## Performance Impact
  - **Code Reduction**: Prepared for -300 lines (custom UI → rich)
  - **Type Safety**: Automatic validation from type hints
  - **Maintainability**: Framework primitives vs custom code

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>
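The typer + rich migration above replaces custom UI code with framework primitives. A minimal sketch of the pattern, assuming hypothetical command bodies and table rows (the real implementation lives in `superclaude/cli/`):

```python
import typer
from rich.console import Console
from rich.table import Table

app = typer.Typer(help="SuperClaude CLI")
console = Console()

@app.command()
def doctor() -> None:
    """Render system diagnostics as a rich table."""
    table = Table(title="Diagnostics")
    table.add_column("Check")
    table.add_column("Status")
    table.add_row("claude CLI", "ok")  # illustrative row only
    console.print(table)

def cli_main() -> None:
    app()  # typer builds the parser from type hints and docstrings

if __name__ == "__main__":
    cli_main()
```

Because typer derives argument parsing and validation from type hints, the hand-written validation code becomes unnecessary, which is where the quoted -300 line reduction comes from.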
* refactor: consolidate documentation directories

  Merged claudedocs/ into docs/research/ for a consistent documentation structure.

  Changes:
  - Moved all claudedocs/*.md files to docs/research/
  - Updated all path references in documentation (EN/KR)
  - Updated RULES.md and research.md command templates
  - Removed claudedocs/ directory
  - Removed ClaudeDocs/ from .gitignore

  Benefits:
  - Single source of truth for all research reports
  - PEP8-compliant lowercase directory naming
  - Clearer documentation organization
  - Prevents future claudedocs/ directory creation

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* perf: reduce /sc:pm command output from 1652 to 15 lines

  - Remove 1637 lines of documentation from the command file
  - Keep only a minimal bootstrap message
  - 99% token reduction on command execution
  - Detailed specs remain in superclaude/agents/pm-agent.md

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* perf: split PM Agent into execution workflows and guide

  - Reduce pm-agent.md from 735 to 429 lines (42% reduction)
  - Move philosophy/examples to docs/agents/pm-agent-guide.md
  - Execution workflows (PDCA, file ops) stay in pm-agent.md
  - Guide (examples, quality standards) read once when needed

  Token savings:
  - Agent loading: ~6K → ~3.5K tokens (42% reduction)
  - Total with pm.md: 71% overall reduction

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: consolidate PM Agent optimization and pending changes

  PM Agent optimization (already committed separately):
  - superclaude/commands/pm.md: 1652 → 14 lines
  - superclaude/agents/pm-agent.md: 735 → 429 lines
  - docs/agents/pm-agent-guide.md: new guide file

  Other pending changes:
  - setup: framework_docs, mcp, logger, remove ui.py
  - superclaude: __main__, cli/app, cli/commands/install
  - tests: test_ui updates
  - scripts: workflow metrics analysis tools
  - docs/memory: session state updates

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: simplify MCP installer to unified gateway with legacy mode

  ## Changes

  ### MCP Component (setup/components/mcp.py)
  - Simplified to a single airis-mcp-gateway by default
  - Added legacy mode for individual official servers (sequential-thinking, context7, magic, playwright)
  - Dynamic prerequisites based on mode:
    - Default: uv + claude CLI only
    - Legacy: node (18+) + npm + claude CLI
  - Removed redundant server definitions

  ### CLI Integration
  - Added --legacy flag to setup/cli/commands/install.py
  - Added --legacy flag to superclaude/cli/commands/install.py
  - Config passes legacy_mode to the component installer

  ## Benefits
  - ✅ Simpler: 1 gateway vs 9+ individual servers
  - ✅ Lighter: No Node.js/npm required (default mode)
  - ✅ Unified: All tools in one gateway (sequential-thinking, context7, magic, playwright, serena, morphllm, tavily, chrome-devtools, git, puppeteer)
  - ✅ Flexible: --legacy flag for official servers if needed

  ## Usage
  ```bash
  superclaude install           # Default: airis-mcp-gateway (recommended)
  superclaude install --legacy  # Legacy: individual official servers
  ```

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>
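The mode-dependent prerequisites described above reduce to selecting a tool list from the install mode. A minimal sketch under that reading; the helper name and version strings are illustrative, not the installer's actual API:

```python
def required_tools(legacy_mode: bool) -> list[str]:
    """Default mode needs only uv + claude CLI; legacy adds the Node toolchain."""
    if legacy_mode:
        return ["node>=18", "npm", "claude"]
    return ["uv", "claude"]
```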
* refactor: rename CoreComponent to FrameworkDocsComponent and add PM token tracking

  ## Changes

  ### Component Renaming (setup/components/)
  - Renamed CoreComponent → FrameworkDocsComponent for clarity
  - Updated all imports in __init__.py, agents.py, commands.py, mcp_docs.py, modes.py
  - Better reflects the actual purpose (framework documentation files)

  ### PM Agent Enhancement (superclaude/commands/pm.md)
  - Added token usage tracking instructions
  - PM Agent now reports:
    1. Current token usage from system warnings
    2. Percentage used (e.g., "27% used" for 54K/200K)
    3. Status zone: 🟢 <75% | 🟡 75-85% | 🔴 >85%
  - Helps prevent token exhaustion during long sessions

  ### UI Utilities (setup/utils/ui.py)
  - Added new UI utility module for the installer
  - Provides consistent user interface components

  ## Benefits
  - ✅ Clearer component naming (FrameworkDocs vs Core)
  - ✅ PM Agent token awareness for efficiency
  - ✅ Better visual feedback with status zones

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* refactor(pm-agent): minimize output verbosity (471 → 284 lines, 40% reduction)

  **Problem**: PM Agent generated excessive output with redundant explanations
  - "System Status Report" with decorative formatting
  - Repeated "Common Tasks" lists the user already knows
  - Verbose session start/end protocols
  - Duplicate file operations documentation

  **Solution**: Compress without losing functionality
  - Session Start: Reduced to symbol-only status (🟢 branch | nM nD | token%)
  - Session End: Compressed to essential actions only
  - File Operations: Consolidated from 2 sections to 1 line reference
  - Self-Improvement: 5 phases → 1 unified workflow
  - Output Rules: Explicit constraints to prevent Claude over-explanation

  **Quality Preservation**:
  - ✅ All core functions retained (PDCA, memory, patterns, mistakes)
  - ✅ PARALLEL Read/Write preserved (performance critical)
  - ✅ Workflow unchanged (session lifecycle intact)
  - ✅ Added output constraints (prevents verbose generation)

  **Reduction Method**:
  - Deleted: Explanatory text, examples, redundant sections
  - Retained: Action definitions, file paths, core workflows
  - Added: Explicit output constraints to enforce minimalism

  **Token Impact**: 40% reduction in agent documentation size

  **Before**: Verbose multi-section report with task lists
  **After**: Single line status: 🟢 integration | 15M 17D | 36%

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: consolidate MCP integration to unified gateway

  **Changes**:
  - Remove individual MCP server docs (superclaude/mcp/*.md)
  - Remove MCP server configs (superclaude/mcp/configs/*.json)
  - Delete MCP docs component (setup/components/mcp_docs.py)
  - Simplify installer (setup/core/installer.py)
  - Update components for the unified gateway approach

  **Rationale**:
  - Unified gateway (airis-mcp-gateway) provides all MCP servers
  - Individual docs/configs no longer needed (managed centrally)
  - Reduces maintenance burden and file count
  - Simplifies installation process

  **Files Removed**: 17 MCP files (docs + configs)
  **Installer Changes**: Removed legacy MCP installation logic

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* chore: update version and component metadata

  - Bump version (pyproject.toml, setup/__init__.py)
  - Update CLAUDE.md import service references
  - Reflect component structure changes

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: kazuki <kazuki@kazukinoMacBook-Air.local>
Co-authored-by: Claude <noreply@anthropic.com>
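The token status zones quoted above map directly onto a threshold check. A minimal sketch, assuming the 200K budget used in the commit's own example ("27% used" for 54K/200K); the function name is illustrative:

```python
def token_zone(used: int, budget: int = 200_000) -> str:
    """Classify token usage into the PM Agent's three status zones."""
    pct = used / budget * 100
    if pct < 75:
        return f"🟢 {pct:.0f}% used"
    if pct <= 85:
        return f"🟡 {pct:.0f}% used"
    return f"🔴 {pct:.0f}% used"

# token_zone(54_000) -> "🟢 27% used", matching the commit's example
```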
310 lines
10 KiB
Python
Executable File
#!/usr/bin/env python3
"""
A/B Testing Framework for Workflow Variants

Compares two workflow variants with statistical significance testing.

Usage:
    python scripts/ab_test_workflows.py \\
        --variant-a progressive_v3_layer2 \\
        --variant-b experimental_eager_layer3 \\
        --metric tokens_used
"""

import json
import argparse
from pathlib import Path
from typing import Dict, List, Tuple
import statistics
from scipy import stats
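
# Each line of workflow_metrics.jsonl is one JSON object. An illustrative record
# (field set inferred from how this script reads it, not an authoritative schema):
#   {"workflow_id": "progressive_v3_layer2", "tokens_used": 5400, "success": true}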


class ABTestAnalyzer:
    """A/B testing framework for workflow optimization"""

    def __init__(self, metrics_file: Path):
        self.metrics_file = metrics_file
        self.metrics: List[Dict] = []
        self._load_metrics()

    def _load_metrics(self):
        """Load metrics from JSONL file"""
        if not self.metrics_file.exists():
            print(f"Error: {self.metrics_file} not found")
            return

        with open(self.metrics_file, 'r') as f:
            for line in f:
                if line.strip():
                    self.metrics.append(json.loads(line))

    def get_variant_metrics(self, workflow_id: str) -> List[Dict]:
        """Get all metrics for a specific workflow variant"""
        # .get() tolerates records without a workflow_id field
        return [m for m in self.metrics if m.get('workflow_id') == workflow_id]

    def extract_metric_values(self, metrics: List[Dict], metric: str) -> List[float]:
        """Extract specific metric values from metrics list"""
        values = []
        for m in metrics:
            if metric in m:
                value = m[metric]
                # Handle boolean metrics
                if isinstance(value, bool):
                    value = 1.0 if value else 0.0
                values.append(float(value))
        return values

    def calculate_statistics(self, values: List[float]) -> Dict:
        """Calculate statistical measures"""
        if not values:
            return {
                'count': 0,
                'mean': 0,
                'median': 0,
                'stdev': 0,
                'min': 0,
                'max': 0
            }

        return {
            'count': len(values),
            'mean': statistics.mean(values),
            'median': statistics.median(values),
            'stdev': statistics.stdev(values) if len(values) > 1 else 0,
            'min': min(values),
            'max': max(values)
        }

    def perform_ttest(
        self,
        variant_a_values: List[float],
        variant_b_values: List[float]
    ) -> Tuple[float, float]:
        """
        Perform independent t-test between two variants.

        Returns:
            (t_statistic, p_value)
        """
        if len(variant_a_values) < 2 or len(variant_b_values) < 2:
            return 0.0, 1.0  # Not enough data

        t_stat, p_value = stats.ttest_ind(variant_a_values, variant_b_values)
        return t_stat, p_value
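
    # Note on perform_ttest: stats.ttest_ind defaults to a pooled-variance
    # (Student's) t-test; pass equal_var=False for Welch's t-test when the
    # variants' variances may differ.
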
    def determine_winner(
        self,
        variant_a_stats: Dict,
        variant_b_stats: Dict,
        p_value: float,
        metric: str,
        lower_is_better: bool = True
    ) -> str:
        """
        Determine winning variant based on statistics.

        Args:
            variant_a_stats: Statistics for variant A
            variant_b_stats: Statistics for variant B
            p_value: Statistical significance (p-value)
            metric: Metric being compared
            lower_is_better: True if lower values are better (e.g., tokens_used)

        Returns:
            Winner description
        """
        # Require statistical significance (p < 0.05)
        if p_value >= 0.05:
            return "No significant difference (p ≥ 0.05)"

        # Require minimum sample size (20 trials per variant)
        if variant_a_stats['count'] < 20 or variant_b_stats['count'] < 20:
            return f"Insufficient data (need 20 trials, have {variant_a_stats['count']}/{variant_b_stats['count']})"

        # Compare means
        a_mean = variant_a_stats['mean']
        b_mean = variant_b_stats['mean']

        if lower_is_better:
            if a_mean < b_mean:
                improvement = ((b_mean - a_mean) / b_mean) * 100
                return f"Variant A wins ({improvement:.1f}% better)"
            else:
                improvement = ((a_mean - b_mean) / a_mean) * 100
                return f"Variant B wins ({improvement:.1f}% better)"
        else:
            if a_mean > b_mean:
                improvement = ((a_mean - b_mean) / b_mean) * 100
                return f"Variant A wins ({improvement:.1f}% better)"
            else:
                improvement = ((b_mean - a_mean) / a_mean) * 100
                return f"Variant B wins ({improvement:.1f}% better)"

    def generate_recommendation(
        self,
        winner: str,
        variant_a_stats: Dict,
        variant_b_stats: Dict,
        p_value: float
    ) -> str:
        """Generate actionable recommendation"""
        if "No significant difference" in winner:
            return "⚖️ Keep current workflow (no improvement detected)"

        if "Insufficient data" in winner:
            return "📊 Continue testing (need more trials)"

        if "Variant A wins" in winner:
            return "✅ Keep Variant A as standard (statistically better)"

        if "Variant B wins" in winner:
            # Promote only when B's mean is at least 20% lower than A's
            # (assumes a lower-is-better metric such as tokens_used)
            if variant_b_stats['mean'] <= variant_a_stats['mean'] * 0.8:
                return "🚀 Promote Variant B to standard (significant improvement)"
            else:
                return "⚠️ Marginal improvement - continue testing before promotion"

        return "🤔 Manual review recommended"

    def compare_variants(
        self,
        variant_a_id: str,
        variant_b_id: str,
        metric: str = 'tokens_used',
        lower_is_better: bool = True
    ) -> str:
        """
        Compare two workflow variants on a specific metric.

        Args:
            variant_a_id: Workflow ID for variant A
            variant_b_id: Workflow ID for variant B
            metric: Metric to compare (default: tokens_used)
            lower_is_better: True if lower values are better

        Returns:
            Comparison report
        """
        # Get metrics for each variant
        variant_a_metrics = self.get_variant_metrics(variant_a_id)
        variant_b_metrics = self.get_variant_metrics(variant_b_id)

        if not variant_a_metrics:
            return f"Error: No data for variant A ({variant_a_id})"
        if not variant_b_metrics:
            return f"Error: No data for variant B ({variant_b_id})"

        # Extract metric values
        a_values = self.extract_metric_values(variant_a_metrics, metric)
        b_values = self.extract_metric_values(variant_b_metrics, metric)

        # Calculate statistics
        a_stats = self.calculate_statistics(a_values)
        b_stats = self.calculate_statistics(b_values)

        # Perform t-test
        t_stat, p_value = self.perform_ttest(a_values, b_values)

        # Determine winner
        winner = self.determine_winner(a_stats, b_stats, p_value, metric, lower_is_better)

        # Generate recommendation
        recommendation = self.generate_recommendation(winner, a_stats, b_stats, p_value)

        # Format report
        report = []
        report.append("=" * 80)
        report.append("A/B TEST COMPARISON REPORT")
        report.append("=" * 80)
        report.append("")
        report.append(f"Metric: {metric}")
        report.append(f"Better: {'Lower' if lower_is_better else 'Higher'} values")
        report.append("")

        report.append(f"## Variant A: {variant_a_id}")
        report.append(f"  Trials: {a_stats['count']}")
        report.append(f"  Mean: {a_stats['mean']:.2f}")
        report.append(f"  Median: {a_stats['median']:.2f}")
        report.append(f"  Std Dev: {a_stats['stdev']:.2f}")
        report.append(f"  Range: {a_stats['min']:.2f} - {a_stats['max']:.2f}")
        report.append("")

        report.append(f"## Variant B: {variant_b_id}")
        report.append(f"  Trials: {b_stats['count']}")
        report.append(f"  Mean: {b_stats['mean']:.2f}")
        report.append(f"  Median: {b_stats['median']:.2f}")
        report.append(f"  Std Dev: {b_stats['stdev']:.2f}")
        report.append(f"  Range: {b_stats['min']:.2f} - {b_stats['max']:.2f}")
        report.append("")

        report.append("## Statistical Significance")
        report.append(f"  t-statistic: {t_stat:.4f}")
        report.append(f"  p-value: {p_value:.4f}")
        if p_value < 0.01:
            report.append("  Significance: *** (p < 0.01) - Highly significant")
        elif p_value < 0.05:
            report.append("  Significance: ** (p < 0.05) - Significant")
        elif p_value < 0.10:
            report.append("  Significance: * (p < 0.10) - Marginally significant")
        else:
            report.append("  Significance: n.s. (p ≥ 0.10) - Not significant")
        report.append("")

        report.append(f"## Result: {winner}")
        report.append(f"## Recommendation: {recommendation}")
        report.append("")
        report.append("=" * 80)

        return "\n".join(report)


def main():
    parser = argparse.ArgumentParser(description="A/B test workflow variants")
    parser.add_argument(
        '--variant-a',
        required=True,
        help='Workflow ID for variant A'
    )
    parser.add_argument(
        '--variant-b',
        required=True,
        help='Workflow ID for variant B'
    )
    parser.add_argument(
        '--metric',
        default='tokens_used',
        help='Metric to compare (default: tokens_used)'
    )
    parser.add_argument(
        '--higher-is-better',
        action='store_true',
        help='Higher values are better (default: lower is better)'
    )
    parser.add_argument(
        '--output',
        help='Output file (default: stdout)'
    )

    args = parser.parse_args()

    # Find metrics file
    metrics_file = Path('docs/memory/workflow_metrics.jsonl')

    analyzer = ABTestAnalyzer(metrics_file)
    report = analyzer.compare_variants(
        args.variant_a,
        args.variant_b,
        args.metric,
        lower_is_better=not args.higher_is_better
    )

    if args.output:
        with open(args.output, 'w') as f:
            f.write(report)
        print(f"Report written to {args.output}")
    else:
        print(report)


if __name__ == '__main__':
    main()
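
# Example invocation (mirrors the module docstring; --output is optional):
#   python scripts/ab_test_workflows.py --variant-a progressive_v3_layer2 \
#       --variant-b experimental_eager_layer3 --metric tokens_used --output report.txt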