mirror of
https://github.com/SuperClaude-Org/SuperClaude_Framework.git
synced 2025-12-20 11:16:33 +00:00
* fix(orchestration): add WebFetch auto-trigger for infrastructure configuration

  Problem: Infrastructure configuration changes (e.g., Traefik port settings) were being made based on assumptions without consulting official documentation, violating the 'Evidence > assumptions' principle in PRINCIPLES.md.

  Solution:
  - Added Infrastructure Configuration Validation section to MODE_Orchestration.md
  - Auto-triggers WebFetch for infrastructure tools (Traefik, nginx, Docker, etc.)
  - Enforces MODE_DeepResearch activation for investigation
  - BLOCKS assumption-based configuration changes

  Testing: Verified WebFetch successfully retrieves Traefik official docs (port 80 default)

  This prevents production outages from infrastructure misconfiguration by ensuring all technical recommendations are backed by official documentation.

* feat: Add PM Agent (Project Manager Agent) for seamless orchestration

  Introduces PM Agent as the default orchestration layer that coordinates all sub-agents and manages workflows automatically.

  Key Features:
  - Default orchestration: All user interactions handled by PM Agent
  - Auto-delegation: Intelligent sub-agent selection based on task analysis
  - Docker Gateway integration: Zero-token baseline with dynamic MCP loading
  - Self-improvement loop: Automatic documentation of patterns and mistakes
  - Optional override: Users can specify sub-agents explicitly if desired

  Architecture:
  - Agent spec: SuperClaude/Agents/pm-agent.md
  - Command: SuperClaude/Commands/pm.md
  - Updated docs: README.md (15→16 agents), agents.md (new Orchestration category)

  User Experience:
  - Default: PM Agent handles everything (seamless, no manual routing)
  - Optional: Explicit --agent flag for direct sub-agent access
  - Both modes available simultaneously (no user downside)

  Implementation Status:
  - ✅ Specification complete
  - ✅ Documentation complete
  - ⏳ Prototype implementation needed
  - ⏳ Docker Gateway integration needed
  - ⏳ Testing and validation needed

  Refs: kazukinakai/docker-mcp-gateway (IRIS MCP Gateway integration)

* feat: Add Agent Orchestration rules for PM Agent default activation

  Implements PM Agent as the default orchestration layer in RULES.md.

  Key Changes:
  - New 'Agent Orchestration' section (CRITICAL priority)
  - PM Agent receives ALL user requests by default
  - Manual override with @agent-[name] bypasses PM Agent
  - Agent Selection Priority clearly defined:
    1. Manual override → Direct routing
    2. Default → PM Agent → Auto-delegation
    3. Delegation based on keywords, file types, complexity, context

  User Experience:
  - Default: PM Agent handles everything (seamless)
  - Override: @agent-[name] for direct specialist access
  - Transparent: PM Agent reports delegation decisions

  This establishes PM Agent as the orchestration layer while respecting existing auto-activation patterns and manual overrides.

  Next Steps:
  - Local testing in agiletec project
  - Iteration based on actual behavior
  - Documentation updates as needed

* refactor(pm-agent): redesign as self-improvement meta-layer

  Problem Resolution: PM Agent's initial design competed with existing auto-activation for task routing, creating confusion about orchestration responsibilities and adding unnecessary complexity.

  Design Change: Redefined PM Agent as a meta-layer agent that operates AFTER specialist agents complete tasks, focusing on:
  - Post-implementation documentation and pattern recording
  - Immediate mistake analysis with prevention checklists
  - Monthly documentation maintenance and noise reduction
  - Pattern extraction and knowledge synthesis

  Two-Layer Orchestration System:
  1. Task Execution Layer: Existing auto-activation handles task routing (unchanged)
  2. Self-Improvement Layer: PM Agent meta-layer handles documentation (new)

  Files Modified:
  - SuperClaude/Agents/pm-agent.md: Complete rewrite with meta-layer design
    - Category: orchestration → meta
    - Triggers: All user interactions → Post-implementation, mistakes, monthly
    - Behavioral Mindset: Continuous learning system
    - Self-Improvement Workflow: BEFORE/DURING/AFTER/MISTAKE RECOVERY/MAINTENANCE
  - SuperClaude/Core/RULES.md: Agent Orchestration section updated
    - Split into Task Execution Layer + Self-Improvement Layer
    - Added orchestration flow diagram
    - Clarified PM Agent activates AFTER task completion
  - README.md: Updated PM Agent description
    - "orchestrates all interactions" → "ensures continuous learning"
  - Docs/User-Guide/agents.md: PM Agent section rewritten
    - Section: Orchestration Agent → Meta-Layer Agent
    - Expertise: Project orchestration → Self-improvement workflow executor
    - Examples: Task coordination → Post-implementation documentation
  - PR_DOCUMENTATION.md: Comprehensive PR documentation added
    - Summary, motivation, changes, testing, breaking changes
    - Two-layer orchestration system diagram
    - Verification checklist

  Integration Validated:
  Tested with agiletec project's self-improvement-workflow.md:
  - ✅ PM Agent aligns with existing BEFORE/DURING/AFTER/MISTAKE RECOVERY phases
  - ✅ Complements (not competes with) existing workflow
  - ✅ agiletec workflow defines WHAT, PM Agent defines WHO executes it

  Breaking Changes: None
  - Existing auto-activation continues unchanged
  - Specialist agents unaffected
  - User workflows remain the same
  - New capability: Automatic documentation and knowledge maintenance

  Value Proposition: Transforms SuperClaude into a continuously learning system that accumulates knowledge, prevents recurring mistakes, and maintains fresh documentation without manual intervention.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* docs: add Claude Code conversation history management research

  Research covering .jsonl file structure, performance impact, and retention policies.

  Content:
  - Claude Code .jsonl file format and message types
  - Performance issues from GitHub (memory leaks, conversation compaction)
  - Retention policies (consumer vs enterprise)
  - Rotation recommendations based on actual data
  - File history snapshot tracking mechanics

  Source: Moved from agiletec project (research applicable to all Claude Code projects)

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* feat: add Development documentation structure

  Phase 1: Documentation Structure complete
  - Add Docs/Development/ directory for development documentation
  - Add ARCHITECTURE.md - System architecture with PM Agent meta-layer
  - Add ROADMAP.md - 5-phase development plan with checkboxes
  - Add TASKS.md - Daily task tracking with progress indicators
  - Add PROJECT_STATUS.md - Current status dashboard and metrics
  - Add pm-agent-integration.md - Implementation guide for PM Agent mode

  This establishes comprehensive documentation foundation for:
  - System architecture understanding
  - Development planning and tracking
  - Implementation guidance
  - Progress visibility

  Related: #pm-agent-mode #documentation #phase-1

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* feat: PM Agent session lifecycle and PDCA implementation

  Phase 2: PM Agent Mode Integration (Design Phase)

  Commands/pm.md updates:
  - Add "Always-Active Foundation Layer" concept
  - Add Session Lifecycle (Session Start/During Work/Session End)
  - Add PDCA Cycle (Plan/Do/Check/Act) automation
  - Add Serena MCP Memory Integration (list/read/write_memory)
  - Document auto-activation triggers

  Agents/pm-agent.md updates:
  - Add Session Start Protocol (MANDATORY auto-activation)
  - Add During Work PDCA Cycle with example workflows
  - Add Session End Protocol with state preservation
  - Add PDCA Self-Evaluation Pattern
  - Add Documentation Strategy (temp → patterns/mistakes)
  - Add Memory Operations Reference

  Key Features:
  - Session start auto-activation for context restoration
  - 30-minute checkpoint saves during work
  - Self-evaluation with think_about_* operations
  - Systematic documentation lifecycle
  - Knowledge evolution to CLAUDE.md

  Implementation Status:
  - ✅ Design complete (Commands/pm.md, Agents/pm-agent.md)
  - ⏳ Implementation pending (Core components)
  - ⏳ Serena MCP integration pending

  Salvaged from mistaken development in ~/.claude directory

  Related: #pm-agent-mode #session-lifecycle #pdca-cycle #phase-2

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* fix: disable Serena MCP auto-browser launch

  Disable web dashboard and GUI log window auto-launch in Serena MCP server to prevent intrusive browser popups on startup. Users can still manually access the dashboard at http://localhost:24282/dashboard/ if needed.

  Changes:
  - Add CLI flags to Serena run command:
    - --enable-web-dashboard false
    - --enable-gui-log-window false
  - Ensures Git-tracked configuration (no reliance on ~/.serena/serena_config.yml)
  - Aligns with AIRIS MCP Gateway integration approach

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: rename directories to lowercase for PEP8 compliance

  - Rename superclaude/Agents -> superclaude/agents
  - Rename superclaude/Commands -> superclaude/commands
  - Rename superclaude/Core -> superclaude/core
  - Rename superclaude/Examples -> superclaude/examples
  - Rename superclaude/MCP -> superclaude/mcp
  - Rename superclaude/Modes -> superclaude/modes

  This change follows Python PEP8 naming conventions for package directories.

* style: fix PEP8 violations and update package name to lowercase

  Changes:
  - Format all Python files with black (43 files reformatted)
  - Update package name from 'SuperClaude' to 'superclaude' in pyproject.toml
  - Fix import statements to use lowercase package name
  - Add missing imports (timedelta, __version__)
  - Remove old SuperClaude.egg-info directory

  PEP8 violations reduced from 2672 to 701 (mostly E501 line length due to black's 88 char vs flake8's 79 char limit).
* docs: add PM Agent development documentation

  Add comprehensive PM Agent development documentation:
  - PM Agent ideal workflow (7-phase autonomous cycle)
  - Project structure understanding (Git vs installed environment)
  - Installation flow understanding (CommandsComponent behavior)
  - Task management system (current-tasks.md)

  Purpose: Eliminate repeated explanations and enable autonomous PDCA cycles

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* feat(pm-agent): add self-correcting execution and warning investigation culture

  ## Changes

  ### superclaude/commands/pm.md
  - Add "Self-Correcting Execution" section with root cause analysis protocol
  - Add "Warning/Error Investigation Culture" section enforcing zero-tolerance for dismissal
  - Define error detection protocol: STOP → Investigate → Hypothesis → Different Solution → Execute
  - Document anti-patterns (retry without understanding) and correct patterns (research-first)

  ### docs/Development/hypothesis-pm-autonomous-enhancement-2025-10-14.md
  - Add PDCA workflow hypothesis document for PM Agent autonomous enhancement

  ## Rationale
  PM Agent must never retry failed operations without understanding root causes. All warnings and errors require investigation via context7/WebFetch/documentation to ensure production-quality code and prevent technical debt accumulation.
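The STOP → Investigate → Hypothesis → Different Solution → Execute protocol can be sketched as a small execution guard. This is an illustrative sketch only: PM Agent's actual protocol is prose rules, not code, and every name below (`self_correcting_execute`, `investigate`, `max_attempts`) is hypothetical.

```python
# Illustrative sketch of the "never retry without a new hypothesis" rule.
# All names here are hypothetical; they do not appear in the repository.

def self_correcting_execute(operation, investigate, max_attempts=3):
    """Run `operation`; on failure, require a NEW hypothesis before retrying."""
    tried_hypotheses = set()
    last_error = None
    for _ in range(max_attempts):
        try:
            return operation()                 # Execute
        except Exception as exc:               # STOP: never blind-retry
            last_error = exc
            hypothesis = investigate(exc)      # Investigate root cause
            if hypothesis in tried_hypotheses:
                # Same hypothesis as last time: retrying would repeat the mistake
                raise RuntimeError(f"no new hypothesis for: {exc}") from exc
            tried_hypotheses.add(hypothesis)   # Different Solution -> loop retries
    raise RuntimeError("attempts exhausted") from last_error
```

The anti-pattern the commit forbids (retry without understanding) maps to `investigate` returning the same hypothesis twice, which this guard turns into a hard failure instead of a silent loop.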
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* feat(installer): add airis-mcp-gateway MCP server option

  ## Changes
  - Add airis-mcp-gateway to MCP server options in installer
  - Configuration: GitHub-based installation via uvx
  - Repository: https://github.com/oraios/airis-mcp-gateway
  - Purpose: Dynamic MCP Gateway for zero-token baseline and on-demand tool loading

  ## Implementation
  Added to setup/components/mcp.py self.mcp_servers dictionary with:
  - install_method: github
  - install_command: uvx test installation
  - run_command: uvx runtime execution
  - required: False (optional server)

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: kazuki <kazuki@kazukinoMacBook-Air.local>
Co-authored-by: Claude <noreply@anthropic.com>
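For orientation, an `mcp_servers` entry of the kind the installer commit describes might look roughly like this. Only the four keys named in the commit (`install_method`, `install_command`, `run_command`, `required`) come from the source; the value shapes and the exact `uvx` arguments are assumptions, not the repository's actual `setup/components/mcp.py` code.

```python
# Hypothetical sketch of the dictionary entry described in the commit above.
# The real schema in setup/components/mcp.py may differ.
airis_entry = {
    "install_method": "github",         # GitHub-based installation (per commit)
    "install_command": [                # assumed shape: uvx test installation
        "uvx", "--from",
        "git+https://github.com/oraios/airis-mcp-gateway",
        "airis-mcp-gateway", "--help",
    ],
    "run_command": [                    # assumed shape: uvx runtime execution
        "uvx", "--from",
        "git+https://github.com/oraios/airis-mcp-gateway",
        "airis-mcp-gateway",
    ],
    "required": False,                  # optional server (per commit)
}
```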
361 lines
12 KiB
Python
"""
|
|
Logging system for SuperClaude installation suite
|
|
"""
|
|
|
|
import logging
|
|
import sys
|
|
from datetime import datetime
|
|
from pathlib import Path
|
|
from typing import Optional, Dict, Any
|
|
from enum import Enum
|
|
|
|
from .ui import Colors
|
|
from .symbols import symbols
|
|
from .paths import get_home_directory
|
|
|
|
|
|
class LogLevel(Enum):
|
|
"""Log levels"""
|
|
|
|
DEBUG = logging.DEBUG
|
|
INFO = logging.INFO
|
|
WARNING = logging.WARNING
|
|
ERROR = logging.ERROR
|
|
CRITICAL = logging.CRITICAL
|
|
|
|
|
|
class Logger:
|
|
"""Enhanced logger with console and file output"""
|
|
|
|
def __init__(
|
|
self,
|
|
name: str = "superclaude",
|
|
log_dir: Optional[Path] = None,
|
|
console_level: LogLevel = LogLevel.INFO,
|
|
file_level: LogLevel = LogLevel.DEBUG,
|
|
):
|
|
"""
|
|
Initialize logger
|
|
|
|
Args:
|
|
name: Logger name
|
|
log_dir: Directory for log files (defaults to ~/.claude/logs)
|
|
console_level: Minimum level for console output
|
|
file_level: Minimum level for file output
|
|
"""
|
|
self.name = name
|
|
self.log_dir = log_dir or (get_home_directory() / ".claude" / "logs")
|
|
self.console_level = console_level
|
|
self.file_level = file_level
|
|
self.session_start = datetime.now()
|
|
|
|
# Create logger
|
|
self.logger = logging.getLogger(name)
|
|
self.logger.setLevel(logging.DEBUG) # Accept all levels, handlers will filter
|
|
|
|
# Remove existing handlers to avoid duplicates
|
|
self.logger.handlers.clear()
|
|
|
|
# Setup handlers
|
|
self._setup_console_handler()
|
|
self._setup_file_handler()
|
|
|
|
self.log_counts: Dict[str, int] = {
|
|
"debug": 0,
|
|
"info": 0,
|
|
"warning": 0,
|
|
"error": 0,
|
|
"critical": 0,
|
|
}
|
|
|
|
def _setup_console_handler(self) -> None:
|
|
"""Setup colorized console handler"""
|
|
handler = logging.StreamHandler(sys.stdout)
|
|
handler.setLevel(self.console_level.value)
|
|
|
|
# Custom formatter with colors
|
|
class ColorFormatter(logging.Formatter):
|
|
def format(self, record):
|
|
# Color mapping
|
|
colors = {
|
|
"DEBUG": Colors.WHITE,
|
|
"INFO": Colors.BLUE,
|
|
"WARNING": Colors.YELLOW,
|
|
"ERROR": Colors.RED,
|
|
"CRITICAL": Colors.RED + Colors.BRIGHT,
|
|
}
|
|
|
|
# Prefix mapping
|
|
prefixes = {
|
|
"DEBUG": "[DEBUG]",
|
|
"INFO": "[INFO]",
|
|
"WARNING": "[!]",
|
|
"ERROR": f"[{symbols.crossmark}]",
|
|
"CRITICAL": "[CRITICAL]",
|
|
}
|
|
|
|
color = colors.get(record.levelname, Colors.WHITE)
|
|
prefix = prefixes.get(record.levelname, "[LOG]")
|
|
|
|
return f"{color}{prefix} {record.getMessage()}{Colors.RESET}"
|
|
|
|
handler.setFormatter(ColorFormatter())
|
|
self.logger.addHandler(handler)
|
|
|
|
def _setup_file_handler(self) -> None:
|
|
"""Setup file handler with rotation"""
|
|
try:
|
|
# Ensure log directory exists
|
|
self.log_dir.mkdir(parents=True, exist_ok=True)
|
|
|
|
# Create timestamped log file
|
|
timestamp = self.session_start.strftime("%Y%m%d_%H%M%S")
|
|
log_file = self.log_dir / f"{self.name}_{timestamp}.log"
|
|
|
|
handler = logging.FileHandler(log_file, encoding="utf-8")
|
|
handler.setLevel(self.file_level.value)
|
|
|
|
# Detailed formatter for files
|
|
formatter = logging.Formatter(
|
|
"%(asctime)s | %(levelname)-8s | %(name)s | %(message)s",
|
|
datefmt="%Y-%m-%d %H:%M:%S",
|
|
)
|
|
handler.setFormatter(formatter)
|
|
|
|
self.logger.addHandler(handler)
|
|
self.log_file = log_file
|
|
|
|
# Clean up old log files (keep last 10)
|
|
self._cleanup_old_logs()
|
|
|
|
except Exception as e:
|
|
# If file logging fails, continue with console only
|
|
print(f"{Colors.YELLOW}[!] Could not setup file logging: {e}{Colors.RESET}")
|
|
self.log_file = None
|
|
|
|
def _cleanup_old_logs(self, keep_count: int = 10) -> None:
|
|
"""Clean up old log files"""
|
|
try:
|
|
# Get all log files for this logger
|
|
log_files = list(self.log_dir.glob(f"{self.name}_*.log"))
|
|
|
|
# Sort by modification time, newest first
|
|
log_files.sort(key=lambda f: f.stat().st_mtime, reverse=True)
|
|
|
|
# Remove old files
|
|
for old_file in log_files[keep_count:]:
|
|
try:
|
|
old_file.unlink()
|
|
except OSError:
|
|
pass # Ignore errors when cleaning up
|
|
|
|
except Exception:
|
|
pass # Ignore cleanup errors
|
|
|
|
def debug(self, message: str, **kwargs) -> None:
|
|
"""Log debug message"""
|
|
self.logger.debug(message, **kwargs)
|
|
self.log_counts["debug"] += 1
|
|
|
|
def info(self, message: str, **kwargs) -> None:
|
|
"""Log info message"""
|
|
self.logger.info(message, **kwargs)
|
|
self.log_counts["info"] += 1
|
|
|
|
def warning(self, message: str, **kwargs) -> None:
|
|
"""Log warning message"""
|
|
self.logger.warning(message, **kwargs)
|
|
self.log_counts["warning"] += 1
|
|
|
|
def error(self, message: str, **kwargs) -> None:
|
|
"""Log error message"""
|
|
self.logger.error(message, **kwargs)
|
|
self.log_counts["error"] += 1
|
|
|
|
def critical(self, message: str, **kwargs) -> None:
|
|
"""Log critical message"""
|
|
self.logger.critical(message, **kwargs)
|
|
self.log_counts["critical"] += 1
|
|
|
|
def success(self, message: str, **kwargs) -> None:
|
|
"""Log success message (info level with special formatting)"""
|
|
# Use a custom success formatter for console
|
|
if self.logger.handlers:
|
|
console_handler = self.logger.handlers[0]
|
|
if hasattr(console_handler, "formatter"):
|
|
original_format = console_handler.formatter.format
|
|
|
|
def success_format(record):
|
|
return f"{Colors.GREEN}[{symbols.checkmark}] {record.getMessage()}{Colors.RESET}"
|
|
|
|
console_handler.formatter.format = success_format
|
|
self.logger.info(message, **kwargs)
|
|
console_handler.formatter.format = original_format
|
|
else:
|
|
self.logger.info(f"SUCCESS: {message}", **kwargs)
|
|
else:
|
|
self.logger.info(f"SUCCESS: {message}", **kwargs)
|
|
|
|
self.log_counts["info"] += 1
|
|
|
|
def step(self, step: int, total: int, message: str, **kwargs) -> None:
|
|
"""Log step progress"""
|
|
step_msg = f"[{step}/{total}] {message}"
|
|
self.info(step_msg, **kwargs)
|
|
|
|
def section(self, title: str, **kwargs) -> None:
|
|
"""Log section header"""
|
|
separator = "=" * min(50, len(title) + 4)
|
|
self.info(separator, **kwargs)
|
|
self.info(f" {title}", **kwargs)
|
|
self.info(separator, **kwargs)
|
|
|
|
def exception(self, message: str, exc_info: bool = True, **kwargs) -> None:
|
|
"""Log exception with traceback"""
|
|
self.logger.error(message, exc_info=exc_info, **kwargs)
|
|
self.log_counts["error"] += 1
|
|
|
|
def log_system_info(self, info: Dict[str, Any]) -> None:
|
|
"""Log system information"""
|
|
self.section("System Information")
|
|
for key, value in info.items():
|
|
self.info(f"{key}: {value}")
|
|
|
|
def log_operation_start(
|
|
self, operation: str, details: Optional[Dict[str, Any]] = None
|
|
) -> None:
|
|
"""Log start of operation"""
|
|
self.section(f"Starting: {operation}")
|
|
if details:
|
|
for key, value in details.items():
|
|
self.info(f"{key}: {value}")
|
|
|
|
def log_operation_end(
|
|
self,
|
|
operation: str,
|
|
success: bool,
|
|
duration: float,
|
|
details: Optional[Dict[str, Any]] = None,
|
|
) -> None:
|
|
"""Log end of operation"""
|
|
status = "SUCCESS" if success else "FAILED"
|
|
self.info(
|
|
f"Operation {operation} completed: {status} (Duration: {duration:.2f}s)"
|
|
)
|
|
|
|
if details:
|
|
for key, value in details.items():
|
|
self.info(f"{key}: {value}")
|
|
|
|
def get_statistics(self) -> Dict[str, Any]:
|
|
"""Get logging statistics"""
|
|
runtime = datetime.now() - self.session_start
|
|
|
|
return {
|
|
"session_start": self.session_start.isoformat(),
|
|
"runtime_seconds": runtime.total_seconds(),
|
|
"log_counts": self.log_counts.copy(),
|
|
"total_messages": sum(self.log_counts.values()),
|
|
"log_file": (
|
|
str(self.log_file)
|
|
if hasattr(self, "log_file") and self.log_file
|
|
else None
|
|
),
|
|
"has_errors": self.log_counts["error"] + self.log_counts["critical"] > 0,
|
|
}
|
|
|
|
def set_console_level(self, level: LogLevel) -> None:
|
|
"""Change console logging level"""
|
|
self.console_level = level
|
|
if self.logger.handlers:
|
|
self.logger.handlers[0].setLevel(level.value)
|
|
|
|
def set_file_level(self, level: LogLevel) -> None:
|
|
"""Change file logging level"""
|
|
self.file_level = level
|
|
if len(self.logger.handlers) > 1:
|
|
self.logger.handlers[1].setLevel(level.value)
|
|
|
|
def flush(self) -> None:
|
|
"""Flush all handlers"""
|
|
for handler in self.logger.handlers:
|
|
if hasattr(handler, "flush"):
|
|
handler.flush()
|
|
|
|
def close(self) -> None:
|
|
"""Close logger and handlers"""
|
|
self.section("Installation Session Complete")
|
|
stats = self.get_statistics()
|
|
|
|
self.info(f"Total runtime: {stats['runtime_seconds']:.1f} seconds")
|
|
self.info(f"Messages logged: {stats['total_messages']}")
|
|
if stats["has_errors"]:
|
|
self.warning(
|
|
f"Errors/warnings: {stats['log_counts']['error'] + stats['log_counts']['warning']}"
|
|
)
|
|
|
|
if stats["log_file"]:
|
|
self.info(f"Full log saved to: {stats['log_file']}")
|
|
|
|
# Close all handlers
|
|
for handler in self.logger.handlers[:]:
|
|
handler.close()
|
|
self.logger.removeHandler(handler)
|
|
|
|
|
|
# Global logger instance
|
|
_global_logger: Optional[Logger] = None
|
|
|
|
|
|
def get_logger(name: str = "superclaude") -> Logger:
|
|
"""Get or create global logger instance"""
|
|
global _global_logger
|
|
|
|
if _global_logger is None or _global_logger.name != name:
|
|
_global_logger = Logger(name)
|
|
|
|
return _global_logger
|
|
|
|
|
|
def setup_logging(
|
|
name: str = "superclaude",
|
|
log_dir: Optional[Path] = None,
|
|
console_level: LogLevel = LogLevel.INFO,
|
|
file_level: LogLevel = LogLevel.DEBUG,
|
|
) -> Logger:
|
|
"""Setup logging with specified configuration"""
|
|
global _global_logger
|
|
_global_logger = Logger(name, log_dir, console_level, file_level)
|
|
return _global_logger
|
|
|
|
|
|
# Convenience functions using global logger
|
|
def debug(message: str, **kwargs) -> None:
|
|
"""Log debug message using global logger"""
|
|
get_logger().debug(message, **kwargs)
|
|
|
|
|
|
def info(message: str, **kwargs) -> None:
|
|
"""Log info message using global logger"""
|
|
get_logger().info(message, **kwargs)
|
|
|
|
|
|
def warning(message: str, **kwargs) -> None:
|
|
"""Log warning message using global logger"""
|
|
get_logger().warning(message, **kwargs)
|
|
|
|
|
|
def error(message: str, **kwargs) -> None:
|
|
"""Log error message using global logger"""
|
|
get_logger().error(message, **kwargs)
|
|
|
|
|
|
def critical(message: str, **kwargs) -> None:
|
|
"""Log critical message using global logger"""
|
|
get_logger().critical(message, **kwargs)
|
|
|
|
|
|
def success(message: str, **kwargs) -> None:
|
|
"""Log success message using global logger"""
|
|
get_logger().success(message, **kwargs)
|