mirror of
https://github.com/SuperClaude-Org/SuperClaude_Framework.git
synced 2025-12-29 16:16:08 +00:00
refactor: PEP8 compliance - directory rename and code formatting (#425)
* fix(orchestration): add WebFetch auto-trigger for infrastructure configuration

  Problem: Infrastructure configuration changes (e.g., Traefik port settings) were being made based on assumptions without consulting official documentation, violating the 'Evidence > assumptions' principle in PRINCIPLES.md.

  Solution:
  - Added Infrastructure Configuration Validation section to MODE_Orchestration.md
  - Auto-triggers WebFetch for infrastructure tools (Traefik, nginx, Docker, etc.)
  - Enforces MODE_DeepResearch activation for investigation
  - BLOCKS assumption-based configuration changes

  Testing: Verified WebFetch successfully retrieves the official Traefik docs (port 80 default).

  This prevents production outages from infrastructure misconfiguration by ensuring all technical recommendations are backed by official documentation.

* feat: Add PM Agent (Project Manager Agent) for seamless orchestration

  Introduces PM Agent as the default orchestration layer that coordinates all sub-agents and manages workflows automatically.

  Key Features:
  - Default orchestration: all user interactions handled by PM Agent
  - Auto-delegation: intelligent sub-agent selection based on task analysis
  - Docker Gateway integration: zero-token baseline with dynamic MCP loading
  - Self-improvement loop: automatic documentation of patterns and mistakes
  - Optional override: users can specify sub-agents explicitly if desired

  Architecture:
  - Agent spec: SuperClaude/Agents/pm-agent.md
  - Command: SuperClaude/Commands/pm.md
  - Updated docs: README.md (15 -> 16 agents), agents.md (new Orchestration category)

  User Experience:
  - Default: PM Agent handles everything (seamless, no manual routing)
  - Optional: explicit --agent flag for direct sub-agent access
  - Both modes available simultaneously (no user downside)

  Implementation Status:
  - ✅ Specification complete
  - ✅ Documentation complete
  - ⏳ Prototype implementation needed
  - ⏳ Docker Gateway integration needed
  - ⏳ Testing and validation needed

  Refs: kazukinakai/docker-mcp-gateway (IRIS MCP Gateway integration)

* feat: Add Agent Orchestration rules for PM Agent default activation

  Implements PM Agent as the default orchestration layer in RULES.md.

  Key Changes:
  - New 'Agent Orchestration' section (CRITICAL priority)
  - PM Agent receives ALL user requests by default
  - Manual override with @agent-[name] bypasses PM Agent
  - Agent Selection Priority clearly defined:
    1. Manual override -> direct routing
    2. Default -> PM Agent -> auto-delegation
    3. Delegation based on keywords, file types, complexity, context

  User Experience:
  - Default: PM Agent handles everything (seamless)
  - Override: @agent-[name] for direct specialist access
  - Transparent: PM Agent reports delegation decisions

  This establishes PM Agent as the orchestration layer while respecting existing auto-activation patterns and manual overrides.

  Next Steps:
  - Local testing in the agiletec project
  - Iteration based on actual behavior
  - Documentation updates as needed

* refactor(pm-agent): redesign as self-improvement meta-layer

  Problem Resolution: PM Agent's initial design competed with existing auto-activation for task routing, creating confusion about orchestration responsibilities and adding unnecessary complexity.

  Design Change: Redefined PM Agent as a meta-layer agent that operates AFTER specialist agents complete tasks, focusing on:
  - Post-implementation documentation and pattern recording
  - Immediate mistake analysis with prevention checklists
  - Monthly documentation maintenance and noise reduction
  - Pattern extraction and knowledge synthesis

  Two-Layer Orchestration System:
  1. Task Execution Layer: existing auto-activation handles task routing (unchanged)
  2. Self-Improvement Layer: PM Agent meta-layer handles documentation (new)

  Files Modified:
  - SuperClaude/Agents/pm-agent.md: complete rewrite with meta-layer design
    - Category: orchestration -> meta
    - Triggers: all user interactions -> post-implementation, mistakes, monthly
    - Behavioral Mindset: continuous learning system
    - Self-Improvement Workflow: BEFORE/DURING/AFTER/MISTAKE RECOVERY/MAINTENANCE
  - SuperClaude/Core/RULES.md: Agent Orchestration section updated
    - Split into Task Execution Layer + Self-Improvement Layer
    - Added orchestration flow diagram
    - Clarified PM Agent activates AFTER task completion
  - README.md: updated PM Agent description
    - "orchestrates all interactions" -> "ensures continuous learning"
  - Docs/User-Guide/agents.md: PM Agent section rewritten
    - Section: Orchestration Agent -> Meta-Layer Agent
    - Expertise: project orchestration -> self-improvement workflow executor
    - Examples: task coordination -> post-implementation documentation
  - PR_DOCUMENTATION.md: comprehensive PR documentation added
    - Summary, motivation, changes, testing, breaking changes
    - Two-layer orchestration system diagram
    - Verification checklist

  Integration Validated:
  Tested with the agiletec project's self-improvement-workflow.md:
  ✅ PM Agent aligns with existing BEFORE/DURING/AFTER/MISTAKE RECOVERY phases
  ✅ Complements (not competes with) the existing workflow
  ✅ agiletec workflow defines WHAT, PM Agent defines WHO executes it

  Breaking Changes: None
  - Existing auto-activation continues unchanged
  - Specialist agents unaffected
  - User workflows remain the same
  - New capability: automatic documentation and knowledge maintenance

  Value Proposition: Transforms SuperClaude into a continuously learning system that accumulates knowledge, prevents recurring mistakes, and maintains fresh documentation without manual intervention.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* docs: add Claude Code conversation history management research

  Research covering .jsonl file structure, performance impact, and retention policies.

  Content:
  - Claude Code .jsonl file format and message types
  - Performance issues from GitHub (memory leaks, conversation compaction)
  - Retention policies (consumer vs enterprise)
  - Rotation recommendations based on actual data
  - File history snapshot tracking mechanics

  Source: moved from the agiletec project (research applicable to all Claude Code projects)

* feat: add Development documentation structure

  Phase 1: Documentation Structure complete
  - Add Docs/Development/ directory for development documentation
  - Add ARCHITECTURE.md: system architecture with PM Agent meta-layer
  - Add ROADMAP.md: 5-phase development plan with checkboxes
  - Add TASKS.md: daily task tracking with progress indicators
  - Add PROJECT_STATUS.md: current status dashboard and metrics
  - Add pm-agent-integration.md: implementation guide for PM Agent mode

  This establishes a comprehensive documentation foundation for:
  - System architecture understanding
  - Development planning and tracking
  - Implementation guidance
  - Progress visibility

  Related: #pm-agent-mode #documentation #phase-1

* feat: PM Agent session lifecycle and PDCA implementation

  Phase 2: PM Agent Mode Integration (design phase)

  Commands/pm.md updates:
  - Add "Always-Active Foundation Layer" concept
  - Add Session Lifecycle (Session Start/During Work/Session End)
  - Add PDCA Cycle (Plan/Do/Check/Act) automation
  - Add Serena MCP Memory Integration (list/read/write_memory)
  - Document auto-activation triggers

  Agents/pm-agent.md updates:
  - Add Session Start Protocol (MANDATORY auto-activation)
  - Add During Work PDCA Cycle with example workflows
  - Add Session End Protocol with state preservation
  - Add PDCA Self-Evaluation Pattern
  - Add Documentation Strategy (temp -> patterns/mistakes)
  - Add Memory Operations Reference

  Key Features:
  - Session start auto-activation for context restoration
  - 30-minute checkpoint saves during work
  - Self-evaluation with think_about_* operations
  - Systematic documentation lifecycle
  - Knowledge evolution to CLAUDE.md

  Implementation Status:
  - ✅ Design complete (Commands/pm.md, Agents/pm-agent.md)
  - ⏳ Implementation pending (core components)
  - ⏳ Serena MCP integration pending

  Salvaged from mistaken development in the ~/.claude directory

  Related: #pm-agent-mode #session-lifecycle #pdca-cycle #phase-2

* fix: disable Serena MCP auto-browser launch

  Disable the web dashboard and GUI log window auto-launch in the Serena MCP server to prevent intrusive browser popups on startup. Users can still manually access the dashboard at http://localhost:24282/dashboard/ if needed.

  Changes:
  - Add CLI flags to the Serena run command:
    - --enable-web-dashboard false
    - --enable-gui-log-window false
  - Ensures Git-tracked configuration (no reliance on ~/.serena/serena_config.yml)
  - Aligns with the AIRIS MCP Gateway integration approach

* refactor: rename directories to lowercase for PEP8 compliance

  - Rename superclaude/Agents -> superclaude/agents
  - Rename superclaude/Commands -> superclaude/commands
  - Rename superclaude/Core -> superclaude/core
  - Rename superclaude/Examples -> superclaude/examples
  - Rename superclaude/MCP -> superclaude/mcp
  - Rename superclaude/Modes -> superclaude/modes

  This change follows Python PEP8 naming conventions for package directories.

* style: fix PEP8 violations and update package name to lowercase

  Changes:
  - Format all Python files with black (43 files reformatted)
  - Update the package name from 'SuperClaude' to 'superclaude' in pyproject.toml
  - Fix import statements to use the lowercase package name
  - Add missing imports (timedelta, __version__)
  - Remove the old SuperClaude.egg-info directory

  PEP8 violations reduced from 2672 to 701 (mostly E501 line-length errors, since black wraps at 88 characters while flake8 defaults to 79).
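  The residual E501 count comes from black's 88-character lines exceeding flake8's 79-character default. A common way to reconcile the two tools (an assumption here, not necessarily what this repository adopts) is a flake8 config fragment such as:

  ```ini
  ; Hypothetical .flake8 fragment aligning flake8 with black
  [flake8]
  max-line-length = 88
  ; E203 (whitespace before ':') conflicts with black's slice formatting
  extend-ignore = E203
  ```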
* docs: add PM Agent development documentation

  Add comprehensive PM Agent development documentation:
  - PM Agent ideal workflow (7-phase autonomous cycle)
  - Project structure understanding (Git vs installed environment)
  - Installation flow understanding (CommandsComponent behavior)
  - Task management system (current-tasks.md)

  Purpose: eliminate repeated explanations and enable autonomous PDCA cycles

* feat(pm-agent): add self-correcting execution and warning investigation culture

  ## Changes

  ### superclaude/commands/pm.md
  - Add "Self-Correcting Execution" section with a root cause analysis protocol
  - Add "Warning/Error Investigation Culture" section enforcing zero tolerance for dismissal
  - Define the error detection protocol: STOP -> Investigate -> Hypothesis -> Different Solution -> Execute
  - Document anti-patterns (retry without understanding) and correct patterns (research first)

  ### docs/Development/hypothesis-pm-autonomous-enhancement-2025-10-14.md
  - Add a PDCA workflow hypothesis document for PM Agent autonomous enhancement

  ## Rationale

  PM Agent must never retry failed operations without understanding root causes. All warnings and errors require investigation via context7/WebFetch/documentation to ensure production-quality code and prevent technical debt accumulation.

* feat(installer): add airis-mcp-gateway MCP server option

  ## Changes

  - Add airis-mcp-gateway to the MCP server options in the installer
  - Configuration: GitHub-based installation via uvx
  - Repository: https://github.com/oraios/airis-mcp-gateway
  - Purpose: dynamic MCP gateway for a zero-token baseline and on-demand tool loading

  ## Implementation

  Added to the setup/components/mcp.py self.mcp_servers dictionary with:
  - install_method: github
  - install_command: uvx test installation
  - run_command: uvx runtime execution
  - required: False (optional server)

---------

Co-authored-by: kazuki <kazuki@kazukinoMacBook-Air.local>
Co-authored-by: Claude <noreply@anthropic.com>
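The registry entry described above might look roughly like the following sketch. The field names are taken from the commit message; the concrete command strings and dict shape are assumptions, since the actual values in setup/components/mcp.py are not shown here.

```python
# Hypothetical sketch of the installer's MCP server registry entry for
# airis-mcp-gateway (field names from the commit message; commands are
# illustrative placeholders, not the repository's real values).
mcp_servers = {
    "airis-mcp-gateway": {
        "install_method": "github",  # installed from GitHub rather than npm
        "repository": "https://github.com/oraios/airis-mcp-gateway",
        "install_command": "uvx airis-mcp-gateway --help",  # test invocation
        "run_command": "uvx airis-mcp-gateway",             # runtime invocation
        "required": False,  # optional server
    }
}
```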
@@ -10,16 +10,18 @@ from pathlib import Path
 try:
     import jsonschema
     from jsonschema import validate, ValidationError

     JSONSCHEMA_AVAILABLE = True
 except ImportError:
     JSONSCHEMA_AVAILABLE = False

+
     class ValidationError(Exception):
         """Simple validation error for when jsonschema is not available"""

         def __init__(self, message):
             self.message = message
             super().__init__(message)

+
     def validate(instance, schema):
         """Dummy validation function"""
         # Basic type checking only
@@ -32,17 +34,19 @@ except ImportError:
         elif expected_type == "string" and not isinstance(instance, str):
             raise ValidationError(f"Expected string, got {type(instance).__name__}")
         elif expected_type == "integer" and not isinstance(instance, int):
-            raise ValidationError(f"Expected integer, got {type(instance).__name__}")
+            raise ValidationError(
+                f"Expected integer, got {type(instance).__name__}"
+            )
         # Skip detailed validation if jsonschema not available


 class ConfigService:
     """Manages configuration files and validation"""

     def __init__(self, config_dir: Path):
         """
         Initialize config manager

         Args:
             config_dir: Directory containing configuration files
         """
@@ -51,7 +55,7 @@ class ConfigService:
         self.requirements_file = config_dir / "requirements.json"
         self._features_cache = None
         self._requirements_cache = None

         # Schema for features.json
         self.features_schema = {
             "type": "object",
@@ -68,24 +72,24 @@ class ConfigService:
                             "category": {"type": "string"},
                             "dependencies": {
                                 "type": "array",
-                                "items": {"type": "string"}
+                                "items": {"type": "string"},
                             },
                             "enabled": {"type": "boolean"},
                             "required_tools": {
                                 "type": "array",
-                                "items": {"type": "string"}
-                            }
+                                "items": {"type": "string"},
+                            },
                         },
                         "required": ["name", "version", "description", "category"],
-                        "additionalProperties": False
+                        "additionalProperties": False,
                     }
-                }
-            }
+                },
+            },
             "required": ["components"],
-            "additionalProperties": False
+            "additionalProperties": False,
         }

         # Schema for requirements.json
         self.requirements_schema = {
             "type": "object",
@@ -94,21 +98,18 @@ class ConfigService:
                     "type": "object",
                     "properties": {
                         "min_version": {"type": "string"},
-                        "max_version": {"type": "string"}
+                        "max_version": {"type": "string"},
                     },
-                    "required": ["min_version"]
+                    "required": ["min_version"],
                 },
                 "node": {
                     "type": "object",
                     "properties": {
                         "min_version": {"type": "string"},
                         "max_version": {"type": "string"},
-                        "required_for": {
-                            "type": "array",
-                            "items": {"type": "string"}
-                        }
+                        "required_for": {"type": "array", "items": {"type": "string"}},
                     },
-                    "required": ["min_version"]
+                    "required": ["min_version"],
                 },
                 "disk_space_mb": {"type": "integer"},
                 "external_tools": {
@@ -121,14 +122,14 @@ class ConfigService:
                         "min_version": {"type": "string"},
                         "required_for": {
                             "type": "array",
-                            "items": {"type": "string"}
+                            "items": {"type": "string"},
                         },
-                        "optional": {"type": "boolean"}
+                        "optional": {"type": "boolean"},
                     },
                     "required": ["command"],
-                    "additionalProperties": False
+                    "additionalProperties": False,
                 }
             }
         },
             "installation_commands": {
                 "type": "object",
@@ -140,136 +141,138 @@ class ConfigService:
                         "darwin": {"type": "string"},
                         "win32": {"type": "string"},
                         "all": {"type": "string"},
-                        "description": {"type": "string"}
+                        "description": {"type": "string"},
                     },
-                    "additionalProperties": False
-                }
-            }
-        }
+                    "additionalProperties": False,
+                },
+            },
+        },
             "required": ["python", "disk_space_mb"],
-            "additionalProperties": False
+            "additionalProperties": False,
         }

     def load_features(self) -> Dict[str, Any]:
         """
         Load and validate features configuration

         Returns:
             Features configuration dict

         Raises:
             FileNotFoundError: If features.json not found
             ValidationError: If features.json is invalid
         """
         if self._features_cache is not None:
             return self._features_cache

         if not self.features_file.exists():
             raise FileNotFoundError(f"Features config not found: {self.features_file}")

         try:
-            with open(self.features_file, 'r') as f:
+            with open(self.features_file, "r") as f:
                 features = json.load(f)

             # Validate schema
             validate(instance=features, schema=self.features_schema)

             self._features_cache = features
             return features

         except json.JSONDecodeError as e:
             raise ValidationError(f"Invalid JSON in {self.features_file}: {e}")
         except ValidationError as e:
             raise ValidationError(f"Invalid features schema: {str(e)}")

     def load_requirements(self) -> Dict[str, Any]:
         """
         Load and validate requirements configuration

         Returns:
             Requirements configuration dict

         Raises:
             FileNotFoundError: If requirements.json not found
             ValidationError: If requirements.json is invalid
         """
         if self._requirements_cache is not None:
             return self._requirements_cache

         if not self.requirements_file.exists():
-            raise FileNotFoundError(f"Requirements config not found: {self.requirements_file}")
+            raise FileNotFoundError(
+                f"Requirements config not found: {self.requirements_file}"
+            )

         try:
-            with open(self.requirements_file, 'r') as f:
+            with open(self.requirements_file, "r") as f:
                 requirements = json.load(f)

             # Validate schema
             validate(instance=requirements, schema=self.requirements_schema)

             self._requirements_cache = requirements
             return requirements

         except json.JSONDecodeError as e:
             raise ValidationError(f"Invalid JSON in {self.requirements_file}: {e}")
         except ValidationError as e:
             raise ValidationError(f"Invalid requirements schema: {str(e)}")

     def get_component_info(self, component_name: str) -> Optional[Dict[str, Any]]:
         """
         Get information about a specific component

         Args:
             component_name: Name of component

         Returns:
             Component info dict or None if not found
         """
         features = self.load_features()
         return features.get("components", {}).get(component_name)

     def get_enabled_components(self) -> List[str]:
         """
         Get list of enabled component names

         Returns:
             List of enabled component names
         """
         features = self.load_features()
         enabled = []

         for name, info in features.get("components", {}).items():
             if info.get("enabled", True):  # Default to enabled
                 enabled.append(name)

         return enabled

     def get_components_by_category(self, category: str) -> List[str]:
         """
         Get component names by category

         Args:
             category: Component category

         Returns:
             List of component names in category
         """
         features = self.load_features()
         components = []

         for name, info in features.get("components", {}).items():
             if info.get("category") == category:
                 components.append(name)

         return components

     def get_component_dependencies(self, component_name: str) -> List[str]:
         """
         Get dependencies for a component

         Args:
             component_name: Name of component

         Returns:
             List of dependency component names
         """
@@ -277,82 +280,86 @@ class ConfigService:
         if component_info:
             return component_info.get("dependencies", [])
         return []

     def get_system_requirements(self) -> Dict[str, Any]:
         """
         Get system requirements

         Returns:
             System requirements dict
         """
         return self.load_requirements()

-    def get_requirements_for_components(self, component_names: List[str]) -> Dict[str, Any]:
+    def get_requirements_for_components(
+        self, component_names: List[str]
+    ) -> Dict[str, Any]:
         """
         Get consolidated requirements for specific components

         Args:
             component_names: List of component names

         Returns:
             Consolidated requirements dict
         """
         requirements = self.load_requirements()
         features = self.load_features()

         # Start with base requirements
         result = {
             "python": requirements["python"],
             "disk_space_mb": requirements["disk_space_mb"],
-            "external_tools": {}
+            "external_tools": {},
         }

         # Add Node.js requirements if needed
         node_required = False
         for component_name in component_names:
             component_info = features.get("components", {}).get(component_name, {})
             required_tools = component_info.get("required_tools", [])

             if "node" in required_tools:
                 node_required = True
                 break

         if node_required and "node" in requirements:
             result["node"] = requirements["node"]

         # Add external tool requirements
         for component_name in component_names:
             component_info = features.get("components", {}).get(component_name, {})
             required_tools = component_info.get("required_tools", [])

             for tool in required_tools:
                 if tool in requirements.get("external_tools", {}):
-                    result["external_tools"][tool] = requirements["external_tools"][tool]
+                    result["external_tools"][tool] = requirements["external_tools"][
+                        tool
+                    ]

         return result

     def validate_config_files(self) -> List[str]:
         """
         Validate all configuration files

         Returns:
             List of validation errors (empty if all valid)
         """
         errors = []

         try:
             self.load_features()
         except Exception as e:
             errors.append(f"Features config error: {e}")

         try:
             self.load_requirements()
         except Exception as e:
             errors.append(f"Requirements config error: {e}")

         return errors

     def clear_cache(self) -> None:
         """Clear cached configuration data"""
         self._features_cache = None
         self._requirements_cache = None
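The optional-dependency fallback at the top of the diff can be sketched as a self-contained example: try the real jsonschema first, and substitute minimal stand-ins when it is not installed. This is a simplified illustration; the actual module also tracks availability via JSONSCHEMA_AVAILABLE and checks more types.

```python
# Sketch of the try/except ImportError fallback pattern used above.
# Names mirror the diff; the fallback validator is deliberately minimal.
try:
    from jsonschema import validate, ValidationError

    JSONSCHEMA_AVAILABLE = True
except ImportError:
    JSONSCHEMA_AVAILABLE = False

    class ValidationError(Exception):
        """Simple validation error used when jsonschema is not installed."""

        def __init__(self, message):
            self.message = message
            super().__init__(message)

    def validate(instance, schema):
        """Fallback validator: checks only the top-level 'type' keyword."""
        expected = schema.get("type")
        if expected == "object" and not isinstance(instance, dict):
            raise ValidationError(f"Expected object, got {type(instance).__name__}")
        if expected == "string" and not isinstance(instance, str):
            raise ValidationError(f"Expected string, got {type(instance).__name__}")
```

Callers can then invoke `validate(data, schema)` uniformly; whether full JSON Schema semantics or only basic type checks apply depends on which branch was taken at import time.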