kazuki nakai 050d5ea2ab
refactor: PEP8 compliance - directory rename and code formatting (#425)
* fix(orchestration): add WebFetch auto-trigger for infrastructure configuration

Problem: Infrastructure configuration changes (e.g., Traefik port settings)
were being made based on assumptions without consulting official documentation,
violating the 'Evidence > assumptions' principle in PRINCIPLES.md.

Solution:
- Added Infrastructure Configuration Validation section to MODE_Orchestration.md
- Auto-triggers WebFetch for infrastructure tools (Traefik, nginx, Docker, etc.)
- Enforces MODE_DeepResearch activation for investigation
- BLOCKS assumption-based configuration changes

Testing: Verified WebFetch successfully retrieves Traefik official docs (port 80 default)

This prevents production outages from infrastructure misconfiguration by ensuring
all technical recommendations are backed by official documentation.

* feat: Add PM Agent (Project Manager Agent) for seamless orchestration

Introduces PM Agent as the default orchestration layer that coordinates
all sub-agents and manages workflows automatically.

Key Features:
- Default orchestration: All user interactions handled by PM Agent
- Auto-delegation: Intelligent sub-agent selection based on task analysis
- Docker Gateway integration: Zero-token baseline with dynamic MCP loading
- Self-improvement loop: Automatic documentation of patterns and mistakes
- Optional override: Users can specify sub-agents explicitly if desired

Architecture:
- Agent spec: SuperClaude/Agents/pm-agent.md
- Command: SuperClaude/Commands/pm.md
- Updated docs: README.md (15→16 agents), agents.md (new Orchestration category)

User Experience:
- Default: PM Agent handles everything (seamless, no manual routing)
- Optional: Explicit --agent flag for direct sub-agent access
- Both modes available simultaneously (no user downside)

Implementation Status:
- [x] Specification complete
- [x] Documentation complete
- [ ] Prototype implementation needed
- [ ] Docker Gateway integration needed
- [ ] Testing and validation needed

Refs: kazukinakai/docker-mcp-gateway (IRIS MCP Gateway integration)

* feat: Add Agent Orchestration rules for PM Agent default activation

Implements PM Agent as the default orchestration layer in RULES.md.

Key Changes:
- New 'Agent Orchestration' section (CRITICAL priority)
- PM Agent receives ALL user requests by default
- Manual override with @agent-[name] bypasses PM Agent
- Agent Selection Priority clearly defined:
  1. Manual override → Direct routing
  2. Default → PM Agent → Auto-delegation
  3. Delegation based on keywords, file types, complexity, context

User Experience:
- Default: PM Agent handles everything (seamless)
- Override: @agent-[name] for direct specialist access
- Transparent: PM Agent reports delegation decisions

This establishes PM Agent as the orchestration layer while
respecting existing auto-activation patterns and manual overrides.

Next Steps:
- Local testing in agiletec project
- Iteration based on actual behavior
- Documentation updates as needed

* refactor(pm-agent): redesign as self-improvement meta-layer

Problem Resolution:
PM Agent's initial design competed with existing auto-activation for task routing,
creating confusion about orchestration responsibilities and adding unnecessary complexity.

Design Change:
Redefined PM Agent as a meta-layer agent that operates AFTER specialist agents
complete tasks, focusing on:
- Post-implementation documentation and pattern recording
- Immediate mistake analysis with prevention checklists
- Monthly documentation maintenance and noise reduction
- Pattern extraction and knowledge synthesis

Two-Layer Orchestration System:
1. Task Execution Layer: Existing auto-activation handles task routing (unchanged)
2. Self-Improvement Layer: PM Agent meta-layer handles documentation (new)

Files Modified:
- SuperClaude/Agents/pm-agent.md: Complete rewrite with meta-layer design
  - Category: orchestration → meta
  - Triggers: All user interactions → Post-implementation, mistakes, monthly
  - Behavioral Mindset: Continuous learning system
  - Self-Improvement Workflow: BEFORE/DURING/AFTER/MISTAKE RECOVERY/MAINTENANCE

- SuperClaude/Core/RULES.md: Agent Orchestration section updated
  - Split into Task Execution Layer + Self-Improvement Layer
  - Added orchestration flow diagram
  - Clarified PM Agent activates AFTER task completion

- README.md: Updated PM Agent description
  - "orchestrates all interactions" → "ensures continuous learning"

- Docs/User-Guide/agents.md: PM Agent section rewritten
  - Section: Orchestration Agent → Meta-Layer Agent
  - Expertise: Project orchestration → Self-improvement workflow executor
  - Examples: Task coordination → Post-implementation documentation

- PR_DOCUMENTATION.md: Comprehensive PR documentation added
  - Summary, motivation, changes, testing, breaking changes
  - Two-layer orchestration system diagram
  - Verification checklist

Integration Validated:
Tested with agiletec project's self-improvement-workflow.md:
- PM Agent aligns with existing BEFORE/DURING/AFTER/MISTAKE RECOVERY phases
- Complements (not competes with) existing workflow
- agiletec workflow defines WHAT, PM Agent defines WHO executes it

Breaking Changes: None
- Existing auto-activation continues unchanged
- Specialist agents unaffected
- User workflows remain the same
- New capability: Automatic documentation and knowledge maintenance

Value Proposition:
Transforms SuperClaude into a continuously learning system that accumulates
knowledge, prevents recurring mistakes, and maintains fresh documentation
without manual intervention.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: add Claude Code conversation history management research

Research covering .jsonl file structure, performance impact, and retention policies.

Content:
- Claude Code .jsonl file format and message types
- Performance issues from GitHub (memory leaks, conversation compaction)
- Retention policies (consumer vs enterprise)
- Rotation recommendations based on actual data
- File history snapshot tracking mechanics

Source: Moved from agiletec project (research applicable to all Claude Code projects)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: add Development documentation structure

Phase 1: Documentation Structure complete

- Add Docs/Development/ directory for development documentation
- Add ARCHITECTURE.md - System architecture with PM Agent meta-layer
- Add ROADMAP.md - 5-phase development plan with checkboxes
- Add TASKS.md - Daily task tracking with progress indicators
- Add PROJECT_STATUS.md - Current status dashboard and metrics
- Add pm-agent-integration.md - Implementation guide for PM Agent mode

This establishes comprehensive documentation foundation for:
- System architecture understanding
- Development planning and tracking
- Implementation guidance
- Progress visibility

Related: #pm-agent-mode #documentation #phase-1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: PM Agent session lifecycle and PDCA implementation

Phase 2: PM Agent Mode Integration (Design Phase)

Commands/pm.md updates:
- Add "Always-Active Foundation Layer" concept
- Add Session Lifecycle (Session Start/During Work/Session End)
- Add PDCA Cycle (Plan/Do/Check/Act) automation
- Add Serena MCP Memory Integration (list/read/write_memory)
- Document auto-activation triggers

Agents/pm-agent.md updates:
- Add Session Start Protocol (MANDATORY auto-activation)
- Add During Work PDCA Cycle with example workflows
- Add Session End Protocol with state preservation
- Add PDCA Self-Evaluation Pattern
- Add Documentation Strategy (temp → patterns/mistakes)
- Add Memory Operations Reference

Key Features:
- Session start auto-activation for context restoration
- 30-minute checkpoint saves during work
- Self-evaluation with think_about_* operations
- Systematic documentation lifecycle
- Knowledge evolution to CLAUDE.md

Implementation Status:
- [x] Design complete (Commands/pm.md, Agents/pm-agent.md)
- [ ] Implementation pending (Core components)
- [ ] Serena MCP integration pending

Salvaged from development mistakenly carried out in the ~/.claude directory

Related: #pm-agent-mode #session-lifecycle #pdca-cycle #phase-2

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: disable Serena MCP auto-browser launch

Disable web dashboard and GUI log window auto-launch in Serena MCP server
to prevent intrusive browser popups on startup. Users can still manually
access the dashboard at http://localhost:24282/dashboard/ if needed.

Changes:
- Add CLI flags to Serena run command:
  - --enable-web-dashboard false
  - --enable-gui-log-window false
- Ensures Git-tracked configuration (no reliance on ~/.serena/serena_config.yml)
- Aligns with AIRIS MCP Gateway integration approach
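The flag wiring above can be sketched as the run command the installer might emit. Only the two `--enable-*` flags come from this change; the `uvx`/`serena start-mcp-server` invocation and the `--from` source shown here are assumptions.

```python
# Hypothetical sketch of the Serena MCP run command with the web dashboard
# and GUI log window disabled. The uvx/serena invocation is an assumption;
# the two --enable-* flags are the ones added by this change.
serena_run_command = [
    "uvx",
    "--from", "git+https://github.com/oraios/serena",  # assumed install source
    "serena", "start-mcp-server",
    "--enable-web-dashboard", "false",
    "--enable-gui-log-window", "false",
]
print(" ".join(serena_run_command))
```

Keeping the flags in the tracked run command (rather than in ~/.serena/serena_config.yml) is what makes the behavior reproducible across machines.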

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: rename directories to lowercase for PEP8 compliance

- Rename superclaude/Agents -> superclaude/agents
- Rename superclaude/Commands -> superclaude/commands
- Rename superclaude/Core -> superclaude/core
- Rename superclaude/Examples -> superclaude/examples
- Rename superclaude/MCP -> superclaude/mcp
- Rename superclaude/Modes -> superclaude/modes

This change follows Python PEP8 naming conventions for package directories.

* style: fix PEP8 violations and update package name to lowercase

Changes:
- Format all Python files with black (43 files reformatted)
- Update package name from 'SuperClaude' to 'superclaude' in pyproject.toml
- Fix import statements to use lowercase package name
- Add missing imports (timedelta, __version__)
- Remove old SuperClaude.egg-info directory

PEP8 violations reduced from 2672 to 701; the remainder is mostly E501 line-length errors, since black wraps at 88 characters while flake8's default limit is 79.

* docs: add PM Agent development documentation

Add comprehensive PM Agent development documentation:
- PM Agent ideal workflow (7-phase autonomous cycle)
- Project structure understanding (Git vs installed environment)
- Installation flow understanding (CommandsComponent behavior)
- Task management system (current-tasks.md)

Purpose: Eliminate repeated explanations and enable autonomous PDCA cycles

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat(pm-agent): add self-correcting execution and warning investigation culture

## Changes

### superclaude/commands/pm.md
- Add "Self-Correcting Execution" section with root cause analysis protocol
- Add "Warning/Error Investigation Culture" section enforcing zero-tolerance for dismissal
- Define error detection protocol: STOP → Investigate → Hypothesis → Different Solution → Execute
- Document anti-patterns (retry without understanding) and correct patterns (research-first)

### docs/Development/hypothesis-pm-autonomous-enhancement-2025-10-14.md
- Add PDCA workflow hypothesis document for PM Agent autonomous enhancement

## Rationale

PM Agent must never retry failed operations without understanding root causes.
All warnings and errors require investigation via context7/WebFetch/documentation
to ensure production-quality code and prevent technical debt accumulation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat(installer): add airis-mcp-gateway MCP server option

## Changes

- Add airis-mcp-gateway to MCP server options in installer
- Configuration: GitHub-based installation via uvx
- Repository: https://github.com/oraios/airis-mcp-gateway
- Purpose: Dynamic MCP Gateway for zero-token baseline and on-demand tool loading

## Implementation

Added to setup/components/mcp.py self.mcp_servers dictionary with:
- install_method: github
- install_command: uvx test installation
- run_command: uvx runtime execution
- required: False (optional server)
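A minimal sketch of the dictionary entry described above; the exact key names and command strings used by setup/components/mcp.py are assumptions based on the fields listed, not the verbatim implementation.

```python
# Hypothetical sketch of the self.mcp_servers entry for airis-mcp-gateway.
# Key names and command strings are assumptions based on the fields above.
mcp_servers = {
    "airis-mcp-gateway": {
        "install_method": "github",
        "repository": "https://github.com/oraios/airis-mcp-gateway",
        # Test installation via uvx (assumed invocation)
        "install_command": ["uvx", "--from",
                            "git+https://github.com/oraios/airis-mcp-gateway",
                            "--help"],
        # Runtime execution via uvx (assumed invocation)
        "run_command": ["uvx", "--from",
                        "git+https://github.com/oraios/airis-mcp-gateway",
                        "airis-mcp-gateway"],
        "required": False,  # optional server
    }
}
```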

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: kazuki <kazuki@kazukinoMacBook-Air.local>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-14 08:47:09 +05:30


"""
Cross-platform file management for SuperClaude installation system
"""
import shutil
import stat
from typing import List, Optional, Callable, Dict, Any
from pathlib import Path
import fnmatch
import hashlib
class FileService:
    """Cross-platform file operations manager"""

    def __init__(self, dry_run: bool = False):
        """
        Initialize file manager

        Args:
            dry_run: If True, only simulate file operations
        """
        self.dry_run = dry_run
        self.copied_files: List[Path] = []
        self.created_dirs: List[Path] = []
    def copy_file(
        self, source: Path, target: Path, preserve_permissions: bool = True
    ) -> bool:
        """
        Copy single file with permission preservation

        Args:
            source: Source file path
            target: Target file path
            preserve_permissions: Whether to preserve file permissions

        Returns:
            True if successful, False otherwise
        """
        if not source.exists():
            raise FileNotFoundError(f"Source file not found: {source}")
        if not source.is_file():
            raise ValueError(f"Source is not a file: {source}")

        if self.dry_run:
            print(f"[DRY RUN] Would copy {source} -> {target}")
            return True

        try:
            # Ensure target directory exists
            target.parent.mkdir(parents=True, exist_ok=True)

            # Copy file
            if preserve_permissions:
                shutil.copy2(source, target)
            else:
                shutil.copy(source, target)

            self.copied_files.append(target)
            return True
        except Exception as e:
            print(f"Error copying {source} to {target}: {e}")
            return False
    def copy_directory(
        self, source: Path, target: Path, ignore_patterns: Optional[List[str]] = None
    ) -> bool:
        """
        Recursively copy directory with gitignore-style patterns

        Args:
            source: Source directory path
            target: Target directory path
            ignore_patterns: List of patterns to ignore (gitignore style)

        Returns:
            True if successful, False otherwise
        """
        if not source.exists():
            raise FileNotFoundError(f"Source directory not found: {source}")
        if not source.is_dir():
            raise ValueError(f"Source is not a directory: {source}")

        ignore_patterns = ignore_patterns or []
        default_ignores = [".git", ".gitignore", "__pycache__", "*.pyc", ".DS_Store"]
        all_ignores = ignore_patterns + default_ignores

        if self.dry_run:
            print(f"[DRY RUN] Would copy directory {source} -> {target}")
            return True

        try:
            # Create ignore function
            def ignore_func(directory: str, contents: List[str]) -> List[str]:
                ignored = []
                for item in contents:
                    item_path = Path(directory) / item
                    rel_path = item_path.relative_to(source)

                    # Check against ignore patterns
                    for pattern in all_ignores:
                        if fnmatch.fnmatch(item, pattern) or fnmatch.fnmatch(
                            str(rel_path), pattern
                        ):
                            ignored.append(item)
                            break
                return ignored

            # Copy tree
            shutil.copytree(source, target, ignore=ignore_func, dirs_exist_ok=True)

            # Track created directories and files
            for item in target.rglob("*"):
                if item.is_dir():
                    self.created_dirs.append(item)
                else:
                    self.copied_files.append(item)

            return True
        except Exception as e:
            print(f"Error copying directory {source} to {target}: {e}")
            return False
    def ensure_directory(self, directory: Path, mode: int = 0o755) -> bool:
        """
        Create directory and parents if they don't exist

        Args:
            directory: Directory path to create
            mode: Directory permissions (Unix only)

        Returns:
            True if successful, False otherwise
        """
        if self.dry_run:
            print(f"[DRY RUN] Would create directory {directory}")
            return True

        try:
            directory.mkdir(parents=True, exist_ok=True, mode=mode)
            if directory not in self.created_dirs:
                self.created_dirs.append(directory)
            return True
        except Exception as e:
            print(f"Error creating directory {directory}: {e}")
            return False
    def remove_file(self, file_path: Path) -> bool:
        """
        Remove single file

        Args:
            file_path: Path to file to remove

        Returns:
            True if successful, False otherwise
        """
        if not file_path.exists():
            return True  # Already gone

        if self.dry_run:
            print(f"[DRY RUN] Would remove file {file_path}")
            return True

        try:
            if file_path.is_file():
                file_path.unlink()
            else:
                print(f"Warning: {file_path} is not a file, skipping")
                return False

            # Remove from tracking
            if file_path in self.copied_files:
                self.copied_files.remove(file_path)

            return True
        except Exception as e:
            print(f"Error removing file {file_path}: {e}")
            return False
    def remove_directory(self, directory: Path, recursive: bool = False) -> bool:
        """
        Remove directory

        Args:
            directory: Directory path to remove
            recursive: Whether to remove recursively

        Returns:
            True if successful, False otherwise
        """
        if not directory.exists():
            return True  # Already gone

        if self.dry_run:
            action = "recursively remove" if recursive else "remove"
            print(f"[DRY RUN] Would {action} directory {directory}")
            return True

        try:
            if recursive:
                shutil.rmtree(directory)
            else:
                directory.rmdir()  # Only works if empty

            # Remove from tracking
            if directory in self.created_dirs:
                self.created_dirs.remove(directory)

            return True
        except Exception as e:
            print(f"Error removing directory {directory}: {e}")
            return False
    def resolve_home_path(self, path: str) -> Path:
        """
        Convert path with ~ to actual home path on any OS

        Args:
            path: Path string potentially containing ~

        Returns:
            Resolved Path object
        """
        return Path(path).expanduser().resolve()

    def make_executable(self, file_path: Path) -> bool:
        """
        Make file executable (Unix/Linux/macOS)

        Args:
            file_path: Path to file to make executable

        Returns:
            True if successful, False otherwise
        """
        if not file_path.exists():
            return False

        if self.dry_run:
            print(f"[DRY RUN] Would make {file_path} executable")
            return True

        try:
            # Get current permissions
            current_mode = file_path.stat().st_mode
            # Add execute permissions for owner, group, and others
            new_mode = current_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
            file_path.chmod(new_mode)
            return True
        except Exception as e:
            print(f"Error making {file_path} executable: {e}")
            return False
    def get_file_hash(
        self, file_path: Path, algorithm: str = "sha256"
    ) -> Optional[str]:
        """
        Calculate file hash

        Args:
            file_path: Path to file
            algorithm: Hash algorithm (md5, sha1, sha256, etc.)

        Returns:
            Hex hash string or None if error
        """
        if not file_path.exists() or not file_path.is_file():
            return None

        try:
            hasher = hashlib.new(algorithm)
            with open(file_path, "rb") as f:
                # Read in chunks for large files
                for chunk in iter(lambda: f.read(8192), b""):
                    hasher.update(chunk)
            return hasher.hexdigest()
        except Exception:
            return None

    def verify_file_integrity(
        self, file_path: Path, expected_hash: str, algorithm: str = "sha256"
    ) -> bool:
        """
        Verify file integrity using hash

        Args:
            file_path: Path to file to verify
            expected_hash: Expected hash value
            algorithm: Hash algorithm used

        Returns:
            True if file matches expected hash, False otherwise
        """
        actual_hash = self.get_file_hash(file_path, algorithm)
        return actual_hash is not None and actual_hash.lower() == expected_hash.lower()
    def get_directory_size(self, directory: Path) -> int:
        """
        Calculate total size of directory in bytes

        Args:
            directory: Directory path

        Returns:
            Total size in bytes
        """
        if not directory.exists() or not directory.is_dir():
            return 0

        total_size = 0
        try:
            for file_path in directory.rglob("*"):
                if file_path.is_file():
                    total_size += file_path.stat().st_size
        except Exception:
            pass  # Skip files we can't access

        return total_size

    def find_files(
        self, directory: Path, pattern: str = "*", recursive: bool = True
    ) -> List[Path]:
        """
        Find files matching pattern

        Args:
            directory: Directory to search
            pattern: Glob pattern to match
            recursive: Whether to search recursively

        Returns:
            List of matching file paths
        """
        if not directory.exists() or not directory.is_dir():
            return []

        try:
            if recursive:
                return list(directory.rglob(pattern))
            else:
                return list(directory.glob(pattern))
        except Exception:
            return []
    def backup_file(
        self, file_path: Path, backup_suffix: str = ".backup"
    ) -> Optional[Path]:
        """
        Create backup copy of file

        Args:
            file_path: Path to file to backup
            backup_suffix: Suffix to add to backup file

        Returns:
            Path to backup file or None if failed
        """
        if not file_path.exists() or not file_path.is_file():
            return None

        backup_path = file_path.with_suffix(file_path.suffix + backup_suffix)
        if self.copy_file(file_path, backup_path):
            return backup_path
        return None

    def get_free_space(self, path: Path) -> int:
        """
        Get free disk space at path in bytes

        Args:
            path: Path to check (can be file or directory)

        Returns:
            Free space in bytes
        """
        try:
            if path.is_file():
                path = path.parent
            stat_result = shutil.disk_usage(path)
            return stat_result.free
        except Exception:
            return 0
    def cleanup_tracked_files(self) -> None:
        """Remove all files and directories created during this session"""
        if self.dry_run:
            print("[DRY RUN] Would cleanup tracked files")
            return

        # Remove files first
        for file_path in reversed(self.copied_files):
            try:
                if file_path.exists():
                    file_path.unlink()
            except Exception:
                pass

        # Remove directories (in reverse order of creation)
        for directory in reversed(self.created_dirs):
            try:
                if directory.exists() and not any(directory.iterdir()):
                    directory.rmdir()
            except Exception:
                pass

        self.copied_files.clear()
        self.created_dirs.clear()

    def get_operation_summary(self) -> Dict[str, Any]:
        """
        Get summary of file operations performed

        Returns:
            Dict with operation statistics
        """
        return {
            "files_copied": len(self.copied_files),
            "directories_created": len(self.created_dirs),
            "dry_run": self.dry_run,
            "copied_files": [str(f) for f in self.copied_files],
            "created_directories": [str(d) for d in self.created_dirs],
        }
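The chunked hashing behind `get_file_hash` and `verify_file_integrity` can be exercised standalone. This sketch re-implements the same 8 KiB-chunk pattern against a temporary file; the `hash_file` helper and file names are illustrative, not part of the installer.

```python
import hashlib
import tempfile
from pathlib import Path


def hash_file(path: Path, algorithm: str = "sha256") -> str:
    """Hash a file in 8 KiB chunks, mirroring FileService.get_file_hash."""
    hasher = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # iter() with a sentinel yields chunks until read() returns b""
        for chunk in iter(lambda: f.read(8192), b""):
            hasher.update(chunk)
    return hasher.hexdigest()


with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "sample.txt"
    sample.write_bytes(b"hello world")
    digest = hash_file(sample)
    # Integrity verification reduces to a case-insensitive hex comparison
    assert digest == hashlib.sha256(b"hello world").hexdigest()
```

Chunked reads keep memory flat regardless of file size, which matters when verifying large installer payloads.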