kazuki nakai 050d5ea2ab
refactor: PEP8 compliance - directory rename and code formatting (#425)
* fix(orchestration): add WebFetch auto-trigger for infrastructure configuration

Problem: Infrastructure configuration changes (e.g., Traefik port settings)
were being made based on assumptions without consulting official documentation,
violating the 'Evidence > assumptions' principle in PRINCIPLES.md.

Solution:
- Added Infrastructure Configuration Validation section to MODE_Orchestration.md
- Auto-triggers WebFetch for infrastructure tools (Traefik, nginx, Docker, etc.)
- Enforces MODE_DeepResearch activation for investigation
- BLOCKS assumption-based configuration changes

Testing: Verified WebFetch successfully retrieves Traefik official docs (port 80 default)

This prevents production outages from infrastructure misconfiguration by ensuring
all technical recommendations are backed by official documentation.

* feat: Add PM Agent (Project Manager Agent) for seamless orchestration

Introduces PM Agent as the default orchestration layer that coordinates
all sub-agents and manages workflows automatically.

Key Features:
- Default orchestration: All user interactions handled by PM Agent
- Auto-delegation: Intelligent sub-agent selection based on task analysis
- Docker Gateway integration: Zero-token baseline with dynamic MCP loading
- Self-improvement loop: Automatic documentation of patterns and mistakes
- Optional override: Users can specify sub-agents explicitly if desired

Architecture:
- Agent spec: SuperClaude/Agents/pm-agent.md
- Command: SuperClaude/Commands/pm.md
- Updated docs: README.md (15→16 agents), agents.md (new Orchestration category)

User Experience:
- Default: PM Agent handles everything (seamless, no manual routing)
- Optional: Explicit --agent flag for direct sub-agent access
- Both modes available simultaneously (no user downside)

Implementation Status:
- [x] Specification complete
- [x] Documentation complete
- [ ] Prototype implementation needed
- [ ] Docker Gateway integration needed
- [ ] Testing and validation needed

Refs: kazukinakai/docker-mcp-gateway (AIRIS MCP Gateway integration)

* feat: Add Agent Orchestration rules for PM Agent default activation

Implements PM Agent as the default orchestration layer in RULES.md.

Key Changes:
- New 'Agent Orchestration' section (CRITICAL priority)
- PM Agent receives ALL user requests by default
- Manual override with @agent-[name] bypasses PM Agent
- Agent Selection Priority clearly defined (sketched in code after this list):
  1. Manual override → Direct routing
  2. Default → PM Agent → Auto-delegation
  3. Delegation based on keywords, file types, complexity, context
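
The RULES.md priority is prose guidance, not executable code, but the dispatch order can be illustrated with a minimal Python sketch (all names below are hypothetical):

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Request:
    explicit_agent: Optional[str] = None       # set by an @agent-[name] override
    keywords: List[str] = field(default_factory=list)
    file_types: List[str] = field(default_factory=list)


def select_agent(req: Request) -> str:
    # 1. Manual override -> direct routing, bypassing PM Agent
    if req.explicit_agent:
        return req.explicit_agent
    # 2. Default -> PM Agent -> auto-delegation, where
    # 3. delegation keys on keywords, file types, complexity, and context
    if "security" in req.keywords:
        return "security-engineer"             # illustrative specialist
    if any(f.endswith(".py") for f in req.file_types):
        return "python-expert"                 # illustrative specialist
    return "pm-agent"                          # PM Agent coordinates the rest
```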

User Experience:
- Default: PM Agent handles everything (seamless)
- Override: @agent-[name] for direct specialist access
- Transparent: PM Agent reports delegation decisions

This establishes PM Agent as the orchestration layer while
respecting existing auto-activation patterns and manual overrides.

Next Steps:
- Local testing in agiletec project
- Iteration based on actual behavior
- Documentation updates as needed

* refactor(pm-agent): redesign as self-improvement meta-layer

Problem Resolution:
PM Agent's initial design competed with existing auto-activation for task routing,
creating confusion about orchestration responsibilities and adding unnecessary complexity.

Design Change:
Redefined PM Agent as a meta-layer agent that operates AFTER specialist agents
complete tasks, focusing on:
- Post-implementation documentation and pattern recording
- Immediate mistake analysis with prevention checklists
- Monthly documentation maintenance and noise reduction
- Pattern extraction and knowledge synthesis

Two-Layer Orchestration System:
1. Task Execution Layer: Existing auto-activation handles task routing (unchanged)
2. Self-Improvement Layer: PM Agent meta-layer handles documentation (new)

Files Modified:
- SuperClaude/Agents/pm-agent.md: Complete rewrite with meta-layer design
  - Category: orchestration → meta
  - Triggers: All user interactions → Post-implementation, mistakes, monthly
  - Behavioral Mindset: Continuous learning system
  - Self-Improvement Workflow: BEFORE/DURING/AFTER/MISTAKE RECOVERY/MAINTENANCE

- SuperClaude/Core/RULES.md: Agent Orchestration section updated
  - Split into Task Execution Layer + Self-Improvement Layer
  - Added orchestration flow diagram
  - Clarified PM Agent activates AFTER task completion

- README.md: Updated PM Agent description
  - "orchestrates all interactions" → "ensures continuous learning"

- Docs/User-Guide/agents.md: PM Agent section rewritten
  - Section: Orchestration Agent → Meta-Layer Agent
  - Expertise: Project orchestration → Self-improvement workflow executor
  - Examples: Task coordination → Post-implementation documentation

- PR_DOCUMENTATION.md: Comprehensive PR documentation added
  - Summary, motivation, changes, testing, breaking changes
  - Two-layer orchestration system diagram
  - Verification checklist

Integration Validated:
Tested with agiletec project's self-improvement-workflow.md:
- PM Agent aligns with existing BEFORE/DURING/AFTER/MISTAKE RECOVERY phases
- Complements (does not compete with) the existing workflow
- The agiletec workflow defines WHAT; PM Agent defines WHO executes it

Breaking Changes: None
- Existing auto-activation continues unchanged
- Specialist agents unaffected
- User workflows remain the same
- New capability: Automatic documentation and knowledge maintenance

Value Proposition:
Transforms SuperClaude into a continuously learning system that accumulates
knowledge, prevents recurring mistakes, and maintains fresh documentation
without manual intervention.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* docs: add Claude Code conversation history management research

Research covering .jsonl file structure, performance impact, and retention policies.

Content:
- Claude Code .jsonl file format and message types
- Performance issues from GitHub (memory leaks, conversation compaction)
- Retention policies (consumer vs enterprise)
- Rotation recommendations based on actual data
- File history snapshot tracking mechanics

Source: Moved from agiletec project (research applicable to all Claude Code projects)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: add Development documentation structure

Phase 1: Documentation Structure complete

- Add Docs/Development/ directory for development documentation
- Add ARCHITECTURE.md - System architecture with PM Agent meta-layer
- Add ROADMAP.md - 5-phase development plan with checkboxes
- Add TASKS.md - Daily task tracking with progress indicators
- Add PROJECT_STATUS.md - Current status dashboard and metrics
- Add pm-agent-integration.md - Implementation guide for PM Agent mode

This establishes comprehensive documentation foundation for:
- System architecture understanding
- Development planning and tracking
- Implementation guidance
- Progress visibility

Related: #pm-agent-mode #documentation #phase-1

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: PM Agent session lifecycle and PDCA implementation

Phase 2: PM Agent Mode Integration (Design Phase)

Commands/pm.md updates:
- Add "Always-Active Foundation Layer" concept
- Add Session Lifecycle (Session Start/During Work/Session End)
- Add PDCA Cycle (Plan/Do/Check/Act) automation
- Add Serena MCP Memory Integration (list/read/write_memory)
- Document auto-activation triggers

Agents/pm-agent.md updates:
- Add Session Start Protocol (MANDATORY auto-activation)
- Add During Work PDCA Cycle with example workflows
- Add Session End Protocol with state preservation
- Add PDCA Self-Evaluation Pattern
- Add Documentation Strategy (temp → patterns/mistakes)
- Add Memory Operations Reference

Key Features:
- Session start auto-activation for context restoration
- 30-minute checkpoint saves during work
- Self-evaluation with think_about_* operations
- Systematic documentation lifecycle
- Knowledge evolution to CLAUDE.md

Implementation Status:
- [x] Design complete (Commands/pm.md, Agents/pm-agent.md)
- [ ] Implementation pending (Core components)
- [ ] Serena MCP integration pending

Salvaged from development mistakenly done in the ~/.claude directory

Related: #pm-agent-mode #session-lifecycle #pdca-cycle #phase-2

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix: disable Serena MCP auto-browser launch

Disable web dashboard and GUI log window auto-launch in Serena MCP server
to prevent intrusive browser popups on startup. Users can still manually
access the dashboard at http://localhost:24282/dashboard/ if needed.

Changes:
- Add CLI flags to the Serena run command (see the sketch below):
  - --enable-web-dashboard false
  - --enable-gui-log-window false
- Keeps the configuration Git-tracked (no reliance on ~/.serena/serena_config.yml)
- Aligns with the AIRIS MCP Gateway integration approach
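
A minimal sketch of the resulting server definition, assuming the dictionary shape used in setup/components/mcp.py; only the two flags come from this change, the base uvx command is an assumption:

```python
# Hypothetical Serena server definition; only the two dashboard/GUI flags
# below are taken from this commit, the rest of the command is assumed.
serena_server = {
    "run_command": [
        "uvx",
        "serena",
        "start-mcp-server",
        "--enable-web-dashboard", "false",
        "--enable-gui-log-window", "false",
    ],
}
```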

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: rename directories to lowercase for PEP8 compliance

- Rename superclaude/Agents -> superclaude/agents
- Rename superclaude/Commands -> superclaude/commands
- Rename superclaude/Core -> superclaude/core
- Rename superclaude/Examples -> superclaude/examples
- Rename superclaude/MCP -> superclaude/mcp
- Rename superclaude/Modes -> superclaude/modes

This change follows Python PEP8 naming conventions for package directories.

* style: fix PEP8 violations and update package name to lowercase

Changes:
- Format all Python files with black (43 files reformatted)
- Update package name from 'SuperClaude' to 'superclaude' in pyproject.toml
- Fix import statements to use lowercase package name
- Add missing imports (timedelta, __version__)
- Remove old SuperClaude.egg-info directory

PEP8 violations reduced from 2672 to 701; the remainder are mostly E501 line-length errors, since black formats to 88 characters while flake8 defaults to 79.
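
One common way to reconcile the two tools (illustrative only, not part of this commit) is to raise flake8's limit to black's 88 characters:

```ini
# .flake8 (illustrative configuration, not included in this commit)
[flake8]
max-line-length = 88
extend-ignore = E203  # black's slice spacing conflicts with E203
```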

* docs: add PM Agent development documentation

Add comprehensive PM Agent development documentation:
- PM Agent ideal workflow (7-phase autonomous cycle)
- Project structure understanding (Git vs installed environment)
- Installation flow understanding (CommandsComponent behavior)
- Task management system (current-tasks.md)

Purpose: Eliminate repeated explanations and enable autonomous PDCA cycles

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat(pm-agent): add self-correcting execution and warning investigation culture

## Changes

### superclaude/commands/pm.md
- Add "Self-Correcting Execution" section with root cause analysis protocol
- Add "Warning/Error Investigation Culture" section enforcing zero-tolerance for dismissal
- Define error detection protocol: STOP → Investigate → Hypothesis → Different Solution → Execute
- Document anti-patterns (retry without understanding) and correct patterns (research-first)

### docs/Development/hypothesis-pm-autonomous-enhancement-2025-10-14.md
- Add PDCA workflow hypothesis document for PM Agent autonomous enhancement

## Rationale

PM Agent must never retry failed operations without understanding root causes.
All warnings and errors require investigation via context7/WebFetch/documentation
to ensure production-quality code and prevent technical debt accumulation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* feat(installer): add airis-mcp-gateway MCP server option

## Changes

- Add airis-mcp-gateway to MCP server options in installer
- Configuration: GitHub-based installation via uvx
- Repository: https://github.com/oraios/airis-mcp-gateway
- Purpose: Dynamic MCP Gateway for zero-token baseline and on-demand tool loading

## Implementation

Added to the self.mcp_servers dictionary in setup/components/mcp.py (sketched below) with:
- install_method: github
- install_command: uvx test installation
- run_command: uvx runtime execution
- required: False (optional server)
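
A hedged sketch of the new entry; the field names follow the commit message, while the concrete uvx arguments are assumptions:

```python
# Hypothetical shape of the self.mcp_servers entry; field names follow the
# commit message, the exact uvx arguments are assumed.
mcp_servers = {
    "airis-mcp-gateway": {
        "install_method": "github",
        # "uvx test installation": verify the package resolves from GitHub
        "install_command": [
            "uvx", "--from",
            "git+https://github.com/oraios/airis-mcp-gateway",
            "airis-mcp-gateway", "--version",
        ],
        # "uvx runtime execution"
        "run_command": [
            "uvx", "--from",
            "git+https://github.com/oraios/airis-mcp-gateway",
            "airis-mcp-gateway",
        ],
        "required": False,  # optional server
    },
}
```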

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: kazuki <kazuki@kazukinoMacBook-Air.local>
Co-authored-by: Claude <noreply@anthropic.com>
2025-10-14 08:47:09 +05:30


"""
SuperClaude Backup Operation Module
Refactored from backup.py for unified CLI hub
"""
import sys
import time
import tarfile
import json
from pathlib import Path
from ...utils.paths import get_home_directory
from datetime import datetime, timedelta
from typing import List, Optional, Dict, Any, Tuple
import argparse
from ...services.settings import SettingsService
from ...utils.ui import (
display_header,
display_info,
display_success,
display_error,
display_warning,
Menu,
confirm,
ProgressBar,
Colors,
format_size,
)
from ...utils.logger import get_logger
from ... import DEFAULT_INSTALL_DIR
from . import OperationBase


class BackupOperation(OperationBase):
    """Backup operation implementation"""

    def __init__(self):
        super().__init__("backup")


def register_parser(subparsers, global_parser=None) -> argparse.ArgumentParser:
    """Register backup CLI arguments"""
    parents = [global_parser] if global_parser else []

    parser = subparsers.add_parser(
        "backup",
        help="Backup and restore SuperClaude installations",
        description="Create, list, restore, and manage SuperClaude installation backups",
        epilog="""
Examples:
  SuperClaude backup --create                # Create new backup
  SuperClaude backup --list --verbose        # List available backups (verbose)
  SuperClaude backup --restore               # Interactive restore
  SuperClaude backup --restore backup.tar.gz # Restore specific backup
  SuperClaude backup --info backup.tar.gz    # Show backup information
  SuperClaude backup --cleanup --force       # Clean up old backups (forced)
""",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        parents=parents,
    )

    # Backup operations (mutually exclusive)
    operation_group = parser.add_mutually_exclusive_group(required=True)
    operation_group.add_argument(
        "--create", action="store_true", help="Create a new backup"
    )
    operation_group.add_argument(
        "--list", action="store_true", help="List available backups"
    )
    operation_group.add_argument(
        "--restore",
        nargs="?",
        const="interactive",
        help="Restore from backup (optionally specify backup file)",
    )
    operation_group.add_argument(
        "--info", type=str, help="Show information about a specific backup file"
    )
    operation_group.add_argument(
        "--cleanup", action="store_true", help="Clean up old backup files"
    )

    # Backup options
    parser.add_argument(
        "--backup-dir",
        type=Path,
        help="Backup directory (default: <install-dir>/backups)",
    )
    parser.add_argument("--name", type=str, help="Custom backup name (for --create)")
    parser.add_argument(
        "--compress",
        choices=["none", "gzip", "bzip2"],
        default="gzip",
        help="Compression method (default: gzip)",
    )

    # Restore options
    parser.add_argument(
        "--overwrite",
        action="store_true",
        help="Overwrite existing files during restore",
    )

    # Cleanup options
    parser.add_argument(
        "--keep",
        type=int,
        default=5,
        help="Number of backups to keep during cleanup (default: 5)",
    )
    parser.add_argument(
        "--older-than", type=int, help="Remove backups older than N days"
    )

    return parser


def get_backup_directory(args: argparse.Namespace) -> Path:
    """Get the backup directory path"""
    if args.backup_dir:
        return args.backup_dir
    else:
        return args.install_dir / "backups"


def check_installation_exists(install_dir: Path) -> bool:
    """Check if a SuperClaude installation (v2 included) exists"""
    settings_manager = SettingsService(install_dir)
    return (
        settings_manager.check_installation_exists()
        or settings_manager.check_v2_installation_exists()
    )


def get_backup_info(backup_path: Path) -> Dict[str, Any]:
    """Get information about a backup file"""
    info = {
        "path": backup_path,
        "exists": backup_path.exists(),
        "size": 0,
        "created": None,
        "metadata": {},
    }

    if not backup_path.exists():
        return info

    try:
        # Get file stats
        stats = backup_path.stat()
        info["size"] = stats.st_size
        info["created"] = datetime.fromtimestamp(stats.st_mtime)

        # Try to read metadata from backup
        if backup_path.suffix == ".gz":
            mode = "r:gz"
        elif backup_path.suffix == ".bz2":
            mode = "r:bz2"
        else:
            mode = "r"

        with tarfile.open(backup_path, mode) as tar:
            # Look for metadata file
            try:
                metadata_member = tar.getmember("backup_metadata.json")
                metadata_file = tar.extractfile(metadata_member)
                if metadata_file:
                    info["metadata"] = json.loads(metadata_file.read().decode())
            except KeyError:
                pass  # No metadata file

            # Get list of files in backup
            info["files"] = len(tar.getnames())

    except Exception as e:
        info["error"] = str(e)

    return info


def list_backups(backup_dir: Path) -> List[Dict[str, Any]]:
    """List all available backups"""
    backups = []

    if not backup_dir.exists():
        return backups

    # Find all backup files
    for backup_file in backup_dir.glob("*.tar*"):
        if backup_file.is_file():
            info = get_backup_info(backup_file)
            backups.append(info)

    # Sort by creation date (newest first); "created" may be None,
    # so fall back to datetime.min to keep the comparison valid
    backups.sort(key=lambda x: x.get("created") or datetime.min, reverse=True)

    return backups


def display_backup_list(backups: List[Dict[str, Any]]) -> None:
    """Display list of available backups"""
    print(f"\n{Colors.CYAN}{Colors.BRIGHT}Available Backups{Colors.RESET}")
    print("=" * 70)

    if not backups:
        print(f"{Colors.YELLOW}No backups found{Colors.RESET}")
        return

    print(f"{'Name':<30} {'Size':<10} {'Created':<20} {'Files':<8}")
    print("-" * 70)

    for backup in backups:
        name = backup["path"].name
        size = format_size(backup["size"]) if backup["size"] > 0 else "unknown"
        created = (
            backup["created"].strftime("%Y-%m-%d %H:%M")
            if backup["created"]
            else "unknown"
        )
        files = str(backup.get("files", "unknown"))

        print(f"{name:<30} {size:<10} {created:<20} {files:<8}")

    print()


def create_backup_metadata(install_dir: Path) -> Dict[str, Any]:
    """Create metadata for the backup"""
    from setup import __version__

    metadata = {
        "backup_version": __version__,
        "created": datetime.now().isoformat(),
        "install_dir": str(install_dir),
        "components": {},
        "framework_version": "unknown",
    }

    try:
        # Get installed components from metadata
        settings_manager = SettingsService(install_dir)
        framework_config = settings_manager.get_metadata_setting("framework")
        if framework_config:
            metadata["framework_version"] = framework_config.get("version", "unknown")
            if "components" in framework_config:
                for component_name in framework_config["components"]:
                    version = settings_manager.get_component_version(component_name)
                    if version:
                        metadata["components"][component_name] = version
    except Exception:
        pass  # Continue without metadata

    return metadata


def create_backup(args: argparse.Namespace) -> bool:
    """Create a new backup"""
    logger = get_logger()

    try:
        # Check if installation exists
        if not check_installation_exists(args.install_dir):
            logger.error(f"No SuperClaude installation found in {args.install_dir}")
            return False

        # Setup backup directory
        backup_dir = get_backup_directory(args)
        backup_dir.mkdir(parents=True, exist_ok=True)

        # Generate backup filename
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        if args.name:
            backup_name = f"{args.name}_{timestamp}"
        else:
            backup_name = f"superclaude_backup_{timestamp}"

        # Determine compression
        if args.compress == "gzip":
            backup_file = backup_dir / f"{backup_name}.tar.gz"
            mode = "w:gz"
        elif args.compress == "bzip2":
            backup_file = backup_dir / f"{backup_name}.tar.bz2"
            mode = "w:bz2"
        else:
            backup_file = backup_dir / f"{backup_name}.tar"
            mode = "w"

        logger.info(f"Creating backup: {backup_file}")

        # Create metadata
        metadata = create_backup_metadata(args.install_dir)

        # Create backup
        start_time = time.time()

        with tarfile.open(backup_file, mode) as tar:
            # Add metadata file
            import tempfile

            with tempfile.NamedTemporaryFile(
                mode="w", suffix=".json", delete=False
            ) as temp_file:
                json.dump(metadata, temp_file, indent=2)
                temp_file.flush()
                tar.add(temp_file.name, arcname="backup_metadata.json")
            Path(temp_file.name).unlink()  # Clean up temp file

            # Add installation directory contents (excluding backups and local dirs)
            files_added = 0
            for item in args.install_dir.rglob("*"):
                if item.is_file() and item != backup_file:
                    try:
                        # Create relative path for archive
                        rel_path = item.relative_to(args.install_dir)

                        # Skip files in excluded directories
                        if rel_path.parts and rel_path.parts[0] in ["backups", "local"]:
                            continue

                        tar.add(item, arcname=str(rel_path))
                        files_added += 1

                        if files_added % 10 == 0:
                            logger.debug(f"Added {files_added} files to backup")

                    except Exception as e:
                        logger.warning(f"Could not add {item} to backup: {e}")

        duration = time.time() - start_time
        file_size = backup_file.stat().st_size

        logger.success(f"Backup created successfully in {duration:.1f} seconds")
        logger.info(f"Backup file: {backup_file}")
        logger.info(f"Files archived: {files_added}")
        logger.info(f"Backup size: {format_size(file_size)}")

        return True

    except Exception as e:
        logger.exception(f"Failed to create backup: {e}")
        return False


def restore_backup(backup_path: Path, args: argparse.Namespace) -> bool:
    """Restore from a backup file"""
    logger = get_logger()

    try:
        if not backup_path.exists():
            logger.error(f"Backup file not found: {backup_path}")
            return False

        # Check backup file
        info = get_backup_info(backup_path)
        if "error" in info:
            logger.error(f"Invalid backup file: {info['error']}")
            return False

        logger.info(f"Restoring from backup: {backup_path}")

        # Determine compression
        if backup_path.suffix == ".gz":
            mode = "r:gz"
        elif backup_path.suffix == ".bz2":
            mode = "r:bz2"
        else:
            mode = "r"

        # Create backup of current installation if it exists
        if check_installation_exists(args.install_dir) and not args.dry_run:
            logger.info("Creating backup of current installation before restore")
            create_backup(args)  # snapshot current state before overwriting

        # Extract backup
        start_time = time.time()
        files_restored = 0

        with tarfile.open(backup_path, mode) as tar:
            # Extract all files except metadata
            for member in tar.getmembers():
                if member.name == "backup_metadata.json":
                    continue

                try:
                    target_path = args.install_dir / member.name

                    # Check if file exists and overwrite flag
                    if target_path.exists() and not args.overwrite:
                        logger.warning(f"Skipping existing file: {target_path}")
                        continue

                    # Extract file
                    tar.extract(member, args.install_dir)
                    files_restored += 1

                    if files_restored % 10 == 0:
                        logger.debug(f"Restored {files_restored} files")

                except Exception as e:
                    logger.warning(f"Could not restore {member.name}: {e}")

        duration = time.time() - start_time

        logger.success(f"Restore completed successfully in {duration:.1f} seconds")
        logger.info(f"Files restored: {files_restored}")

        return True

    except Exception as e:
        logger.exception(f"Failed to restore backup: {e}")
        return False


def interactive_restore_selection(backups: List[Dict[str, Any]]) -> Optional[Path]:
    """Interactive backup selection for restore"""
    if not backups:
        print(f"{Colors.YELLOW}No backups available for restore{Colors.RESET}")
        return None

    print(f"\n{Colors.CYAN}Select Backup to Restore:{Colors.RESET}")

    # Create menu options
    backup_options = []
    for backup in backups:
        name = backup["path"].name
        size = format_size(backup["size"]) if backup["size"] > 0 else "unknown"
        created = (
            backup["created"].strftime("%Y-%m-%d %H:%M")
            if backup["created"]
            else "unknown"
        )
        backup_options.append(f"{name} ({size}, {created})")

    menu = Menu("Select backup:", backup_options)
    choice = menu.display()

    if choice == -1 or choice >= len(backups):
        return None

    return backups[choice]["path"]


def cleanup_old_backups(backup_dir: Path, args: argparse.Namespace) -> bool:
    """Clean up old backup files"""
    logger = get_logger()

    try:
        backups = list_backups(backup_dir)

        if not backups:
            logger.info("No backups found to clean up")
            return True

        to_remove = []

        # Remove by age
        if args.older_than:
            cutoff_date = datetime.now() - timedelta(days=args.older_than)
            for backup in backups:
                if backup["created"] and backup["created"] < cutoff_date:
                    to_remove.append(backup)

        # Keep only N most recent
        if args.keep and len(backups) > args.keep:
            # Sort by date (newest first) and mark the surplus oldest for removal;
            # "created" may be None, so fall back to datetime.min
            backups.sort(key=lambda x: x.get("created") or datetime.min, reverse=True)
            to_remove.extend(backups[args.keep:])

        # Remove duplicates
        to_remove = list({backup["path"]: backup for backup in to_remove}.values())

        if not to_remove:
            logger.info("No backups need to be cleaned up")
            return True

        logger.info(f"Cleaning up {len(to_remove)} old backups")

        for backup in to_remove:
            try:
                backup["path"].unlink()
                logger.info(f"Removed backup: {backup['path'].name}")
            except Exception as e:
                logger.warning(f"Could not remove {backup['path'].name}: {e}")

        return True

    except Exception as e:
        logger.exception(f"Failed to cleanup backups: {e}")
        return False


def run(args: argparse.Namespace) -> int:
    """Execute backup operation with parsed arguments"""
    operation = BackupOperation()
    operation.setup_operation_logging(args)
    logger = get_logger()

    # Validate that the install directory lives inside the user's home directory
    expected_home = get_home_directory().resolve()
    actual_dir = args.install_dir.resolve()
    if not str(actual_dir).startswith(str(expected_home)):
        print("\n[x] Installation must be inside your user profile directory.")
        print(f"    Expected prefix: {expected_home}")
        print(f"    Provided path:   {actual_dir}")
        sys.exit(1)

    try:
        # Validate global arguments
        success, errors = operation.validate_global_args(args)
        if not success:
            for error in errors:
                logger.error(error)
            return 1

        # Display header
        if not args.quiet:
            from setup.cli.base import __version__

            display_header(
                f"SuperClaude Backup v{__version__}",
                "Backup and restore SuperClaude installations",
            )

        backup_dir = get_backup_directory(args)

        # Handle different backup operations
        if args.create:
            success = create_backup(args)

        elif args.list:
            backups = list_backups(backup_dir)
            display_backup_list(backups)
            success = True

        elif args.restore:
            if args.restore == "interactive":
                # Interactive restore
                backups = list_backups(backup_dir)
                backup_path = interactive_restore_selection(backups)
                if not backup_path:
                    logger.info("Restore cancelled by user")
                    return 0
            else:
                # Specific backup file
                backup_path = Path(args.restore)
                if not backup_path.is_absolute():
                    backup_path = backup_dir / backup_path

            success = restore_backup(backup_path, args)

        elif args.info:
            backup_path = Path(args.info)
            if not backup_path.is_absolute():
                backup_path = backup_dir / backup_path

            info = get_backup_info(backup_path)
            if info["exists"]:
                print(f"\n{Colors.CYAN}Backup Information:{Colors.RESET}")
                print(f"File: {info['path']}")
                print(f"Size: {format_size(info['size'])}")
                print(f"Created: {info['created']}")
                print(f"Files: {info.get('files', 'unknown')}")

                if info["metadata"]:
                    metadata = info["metadata"]
                    print(
                        f"Framework Version: {metadata.get('framework_version', 'unknown')}"
                    )
                    if metadata.get("components"):
                        print("Components:")
                        for comp, ver in metadata["components"].items():
                            print(f"  {comp}: v{ver}")
                success = True
            else:
                logger.error(f"Backup file not found: {backup_path}")
                success = False

        elif args.cleanup:
            success = cleanup_old_backups(backup_dir, args)

        else:
            logger.error("No backup operation specified")
            success = False

        if success:
            if not args.quiet and args.create:
                display_success("Backup operation completed successfully!")
            elif not args.quiet and args.restore:
                display_success("Restore operation completed successfully!")
            return 0
        else:
            display_error("Backup operation failed. Check logs for details.")
            return 1

    except KeyboardInterrupt:
        print(f"\n{Colors.YELLOW}Backup operation cancelled by user{Colors.RESET}")
        return 130

    except Exception as e:
        return operation.handle_operation_error("backup", e)