mirror of
https://github.com/SuperClaude-Org/SuperClaude_Framework.git
synced 2025-12-18 18:26:46 +00:00
* fix(orchestration): add WebFetch auto-trigger for infrastructure configuration

  Problem: Infrastructure configuration changes (e.g., Traefik port settings) were being made based on assumptions without consulting official documentation, violating the 'Evidence > assumptions' principle in PRINCIPLES.md.

  Solution:
  - Added Infrastructure Configuration Validation section to MODE_Orchestration.md
  - Auto-triggers WebFetch for infrastructure tools (Traefik, nginx, Docker, etc.)
  - Enforces MODE_DeepResearch activation for investigation
  - BLOCKS assumption-based configuration changes

  Testing: Verified WebFetch successfully retrieves Traefik official docs (port 80 default)

  This prevents production outages from infrastructure misconfiguration by ensuring all technical recommendations are backed by official documentation.

* feat: Add PM Agent (Project Manager Agent) for seamless orchestration

  Introduces PM Agent as the default orchestration layer that coordinates all sub-agents and manages workflows automatically.

  Key Features:
  - Default orchestration: All user interactions handled by PM Agent
  - Auto-delegation: Intelligent sub-agent selection based on task analysis
  - Docker Gateway integration: Zero-token baseline with dynamic MCP loading
  - Self-improvement loop: Automatic documentation of patterns and mistakes
  - Optional override: Users can specify sub-agents explicitly if desired

  Architecture:
  - Agent spec: SuperClaude/Agents/pm-agent.md
  - Command: SuperClaude/Commands/pm.md
  - Updated docs: README.md (15→16 agents), agents.md (new Orchestration category)

  User Experience:
  - Default: PM Agent handles everything (seamless, no manual routing)
  - Optional: Explicit --agent flag for direct sub-agent access
  - Both modes available simultaneously (no user downside)

  Implementation Status:
  - ✅ Specification complete
  - ✅ Documentation complete
  - ⏳ Prototype implementation needed
  - ⏳ Docker Gateway integration needed
  - ⏳ Testing and validation needed

  Refs: kazukinakai/docker-mcp-gateway (IRIS MCP Gateway integration)

* feat: Add Agent Orchestration rules for PM Agent default activation

  Implements PM Agent as the default orchestration layer in RULES.md.

  Key Changes:
  - New 'Agent Orchestration' section (CRITICAL priority)
  - PM Agent receives ALL user requests by default
  - Manual override with @agent-[name] bypasses PM Agent
  - Agent Selection Priority clearly defined:
    1. Manual override → Direct routing
    2. Default → PM Agent → Auto-delegation
    3. Delegation based on keywords, file types, complexity, context

  User Experience:
  - Default: PM Agent handles everything (seamless)
  - Override: @agent-[name] for direct specialist access
  - Transparent: PM Agent reports delegation decisions

  This establishes PM Agent as the orchestration layer while respecting existing auto-activation patterns and manual overrides.

  Next Steps:
  - Local testing in agiletec project
  - Iteration based on actual behavior
  - Documentation updates as needed

* refactor(pm-agent): redesign as self-improvement meta-layer

  Problem Resolution: PM Agent's initial design competed with existing auto-activation for task routing, creating confusion about orchestration responsibilities and adding unnecessary complexity.
  Design Change: Redefined PM Agent as a meta-layer agent that operates AFTER specialist agents complete tasks, focusing on:
  - Post-implementation documentation and pattern recording
  - Immediate mistake analysis with prevention checklists
  - Monthly documentation maintenance and noise reduction
  - Pattern extraction and knowledge synthesis

  Two-Layer Orchestration System:
  1. Task Execution Layer: Existing auto-activation handles task routing (unchanged)
  2. Self-Improvement Layer: PM Agent meta-layer handles documentation (new)

  Files Modified:
  - SuperClaude/Agents/pm-agent.md: Complete rewrite with meta-layer design
    - Category: orchestration → meta
    - Triggers: All user interactions → Post-implementation, mistakes, monthly
    - Behavioral Mindset: Continuous learning system
    - Self-Improvement Workflow: BEFORE/DURING/AFTER/MISTAKE RECOVERY/MAINTENANCE
  - SuperClaude/Core/RULES.md: Agent Orchestration section updated
    - Split into Task Execution Layer + Self-Improvement Layer
    - Added orchestration flow diagram
    - Clarified PM Agent activates AFTER task completion
  - README.md: Updated PM Agent description
    - "orchestrates all interactions" → "ensures continuous learning"
  - Docs/User-Guide/agents.md: PM Agent section rewritten
    - Section: Orchestration Agent → Meta-Layer Agent
    - Expertise: Project orchestration → Self-improvement workflow executor
    - Examples: Task coordination → Post-implementation documentation
  - PR_DOCUMENTATION.md: Comprehensive PR documentation added
    - Summary, motivation, changes, testing, breaking changes
    - Two-layer orchestration system diagram
    - Verification checklist

  Integration Validated: Tested with agiletec project's self-improvement-workflow.md:
  ✅ PM Agent aligns with existing BEFORE/DURING/AFTER/MISTAKE RECOVERY phases
  ✅ Complements (not competes with) existing workflow
  ✅ agiletec workflow defines WHAT, PM Agent defines WHO executes it

  Breaking Changes: None
  - Existing auto-activation continues unchanged
  - Specialist agents unaffected
  - User workflows remain the same
  - New capability: Automatic documentation and knowledge maintenance

  Value Proposition: Transforms SuperClaude into a continuously learning system that accumulates knowledge, prevents recurring mistakes, and maintains fresh documentation without manual intervention.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* docs: add Claude Code conversation history management research

  Research covering .jsonl file structure, performance impact, and retention policies.
  Content:
  - Claude Code .jsonl file format and message types
  - Performance issues from GitHub (memory leaks, conversation compaction)
  - Retention policies (consumer vs enterprise)
  - Rotation recommendations based on actual data
  - File history snapshot tracking mechanics

  Source: Moved from agiletec project (research applicable to all Claude Code projects)

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* feat: add Development documentation structure

  Phase 1: Documentation Structure complete
  - Add Docs/Development/ directory for development documentation
  - Add ARCHITECTURE.md - System architecture with PM Agent meta-layer
  - Add ROADMAP.md - 5-phase development plan with checkboxes
  - Add TASKS.md - Daily task tracking with progress indicators
  - Add PROJECT_STATUS.md - Current status dashboard and metrics
  - Add pm-agent-integration.md - Implementation guide for PM Agent mode

  This establishes a comprehensive documentation foundation for:
  - System architecture understanding
  - Development planning and tracking
  - Implementation guidance
  - Progress visibility

  Related: #pm-agent-mode #documentation #phase-1

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* feat: PM Agent session lifecycle and PDCA implementation

  Phase 2: PM Agent Mode Integration (Design Phase)

  Commands/pm.md updates:
  - Add "Always-Active Foundation Layer" concept
  - Add Session Lifecycle (Session Start/During Work/Session End)
  - Add PDCA Cycle (Plan/Do/Check/Act) automation
  - Add Serena MCP Memory Integration (list/read/write_memory)
  - Document auto-activation triggers

  Agents/pm-agent.md updates:
  - Add Session Start Protocol (MANDATORY auto-activation)
  - Add During Work PDCA Cycle with example workflows
  - Add Session End Protocol with state preservation
  - Add PDCA Self-Evaluation Pattern
  - Add Documentation Strategy (temp → patterns/mistakes)
  - Add Memory Operations Reference

  Key Features:
  - Session start auto-activation for context restoration
  - 30-minute checkpoint saves during work
  - Self-evaluation with think_about_* operations
  - Systematic documentation lifecycle
  - Knowledge evolution to CLAUDE.md

  Implementation Status:
  - ✅ Design complete (Commands/pm.md, Agents/pm-agent.md)
  - ⏳ Implementation pending (Core components)
  - ⏳ Serena MCP integration pending

  Salvaged from mistaken development in ~/.claude directory

  Related: #pm-agent-mode #session-lifecycle #pdca-cycle #phase-2

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* fix: disable Serena MCP auto-browser launch

  Disable web dashboard and GUI log window auto-launch in the Serena MCP server to prevent intrusive browser popups on startup. Users can still manually access the dashboard at http://localhost:24282/dashboard/ if needed.
  Changes:
  - Add CLI flags to the Serena run command:
    - --enable-web-dashboard false
    - --enable-gui-log-window false
  - Ensures Git-tracked configuration (no reliance on ~/.serena/serena_config.yml)
  - Aligns with AIRIS MCP Gateway integration approach

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* refactor: rename directories to lowercase for PEP8 compliance

  - Rename superclaude/Agents -> superclaude/agents
  - Rename superclaude/Commands -> superclaude/commands
  - Rename superclaude/Core -> superclaude/core
  - Rename superclaude/Examples -> superclaude/examples
  - Rename superclaude/MCP -> superclaude/mcp
  - Rename superclaude/Modes -> superclaude/modes

  This change follows Python PEP8 naming conventions for package directories.

* style: fix PEP8 violations and update package name to lowercase

  Changes:
  - Format all Python files with black (43 files reformatted)
  - Update package name from 'SuperClaude' to 'superclaude' in pyproject.toml
  - Fix import statements to use the lowercase package name
  - Add missing imports (timedelta, __version__)
  - Remove old SuperClaude.egg-info directory

  PEP8 violations reduced from 2672 to 701 (mostly E501 line-length findings, due to black's 88-character limit vs flake8's 79-character limit).

* docs: add PM Agent development documentation

  Add comprehensive PM Agent development documentation:
  - PM Agent ideal workflow (7-phase autonomous cycle)
  - Project structure understanding (Git vs installed environment)
  - Installation flow understanding (CommandsComponent behavior)
  - Task management system (current-tasks.md)

  Purpose: Eliminate repeated explanations and enable autonomous PDCA cycles

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* feat(pm-agent): add self-correcting execution and warning investigation culture

  ## Changes

  ### superclaude/commands/pm.md
  - Add "Self-Correcting Execution" section with root cause analysis protocol
  - Add "Warning/Error Investigation Culture" section enforcing zero tolerance for dismissal
  - Define error detection protocol: STOP → Investigate → Hypothesis → Different Solution → Execute
  - Document anti-patterns (retry without understanding) and correct patterns (research-first)

  ### docs/Development/hypothesis-pm-autonomous-enhancement-2025-10-14.md
  - Add PDCA workflow hypothesis document for PM Agent autonomous enhancement

  ## Rationale

  PM Agent must never retry failed operations without understanding root causes. All warnings and errors require investigation via context7/WebFetch/documentation to ensure production-quality code and prevent technical debt accumulation.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

* feat(installer): add airis-mcp-gateway MCP server option

  ## Changes
  - Add airis-mcp-gateway to the MCP server options in the installer
  - Configuration: GitHub-based installation via uvx
  - Repository: https://github.com/oraios/airis-mcp-gateway
  - Purpose: Dynamic MCP Gateway for zero-token baseline and on-demand tool loading

  ## Implementation
  Added to the setup/components/mcp.py self.mcp_servers dictionary with:
  - install_method: github
  - install_command: uvx test installation
  - run_command: uvx runtime execution
  - required: False (optional server)

  A hedged sketch of the resulting entry follows below.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude <noreply@anthropic.com>

---------

Co-authored-by: kazuki <kazuki@kazukinoMacBook-Air.local>
Co-authored-by: Claude <noreply@anthropic.com>
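For orientation, a minimal sketch of what the new `self.mcp_servers` entry described in the last commit might look like. The key names come from the bullet list above and the repository URL is the one given there; the variable name and the exact uvx argument strings are illustrative assumptions, not the shipped configuration:

```python
# Hypothetical registry entry for the optional airis-mcp-gateway server.
airis_mcp_gateway = {
    "install_method": "github",                         # installed from GitHub, not npm
    "repository": "https://github.com/oraios/airis-mcp-gateway",
    "install_command": ["uvx", "airis-mcp-gateway", "--version"],  # assumed smoke-test invocation
    "run_command": ["uvx", "airis-mcp-gateway"],        # assumed runtime invocation
    "required": False,                                  # optional server
}
```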
536 lines
17 KiB
Python
"""
|
|
Environment variable management for SuperClaude
|
|
Cross-platform utilities for setting up persistent environment variables
|
|
"""
|
|
|
|
import os
|
|
import sys
|
|
import subprocess
|
|
import json
|
|
from pathlib import Path
|
|
from typing import Dict, Optional
|
|
from datetime import datetime
|
|
from .ui import display_info, display_success, display_warning, Colors
|
|
from .logger import get_logger
|
|
from .paths import get_home_directory
|
|
|
|
|
|
def _get_env_tracking_file() -> Path:
    """Get path to environment variable tracking file"""
    from .. import DEFAULT_INSTALL_DIR

    install_dir = get_home_directory() / ".claude"
    install_dir.mkdir(exist_ok=True)
    return install_dir / "superclaude_env_vars.json"

def _load_env_tracking() -> Dict[str, Dict[str, str]]:
    """Load environment variable tracking data"""
    tracking_file = _get_env_tracking_file()

    try:
        if tracking_file.exists():
            with open(tracking_file, "r") as f:
                return json.load(f)
    except Exception as e:
        get_logger().warning(f"Could not load environment tracking: {e}")

    return {}

def _save_env_tracking(tracking_data: Dict[str, Dict[str, str]]) -> bool:
    """Save environment variable tracking data"""
    tracking_file = _get_env_tracking_file()

    try:
        with open(tracking_file, "w") as f:
            json.dump(tracking_data, f, indent=2)
        return True
    except Exception as e:
        get_logger().error(f"Could not save environment tracking: {e}")
        return False

def _add_env_tracking(env_vars: Dict[str, str]) -> None:
    """Add environment variables to tracking"""
    if not env_vars:
        return

    tracking_data = _load_env_tracking()
    timestamp = datetime.now().isoformat()

    for env_var, value in env_vars.items():
        tracking_data[env_var] = {
            "set_by": "superclaude",
            "timestamp": timestamp,
            "value_hash": str(hash(value)),  # Store hash, not actual value for security
        }

    _save_env_tracking(tracking_data)
    get_logger().info(f"Added {len(env_vars)} environment variables to tracking")

def _remove_env_tracking(env_vars: list) -> None:
    """Remove environment variables from tracking"""
    if not env_vars:
        return

    tracking_data = _load_env_tracking()

    for env_var in env_vars:
        if env_var in tracking_data:
            del tracking_data[env_var]

    _save_env_tracking(tracking_data)
    get_logger().info(f"Removed {len(env_vars)} environment variables from tracking")

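# For reference, a sketch of the JSON tracking file these helpers read and write.
# Field names come from _add_env_tracking above; the key name and values shown
# here are illustrative assumptions, not real data:
#
#   {
#       "TAVILY_API_KEY": {
#           "set_by": "superclaude",
#           "timestamp": "2025-01-01T12:00:00",
#           "value_hash": "1234567890"
#       }
#   }
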
def detect_shell_config() -> Optional[Path]:
    """
    Detect user's shell configuration file

    Returns:
        Path to the shell configuration file (falls back to ~/.bashrc if none exist)
    """
    home = get_home_directory()

    # Check in order of preference
    configs = [
        home / ".zshrc",  # Zsh (Mac default)
        home / ".bashrc",  # Bash
        home / ".profile",  # Generic shell profile
        home / ".bash_profile",  # Mac Bash profile
    ]

    for config in configs:
        if config.exists():
            return config

    # Default to .bashrc if none exist (will be created)
    return home / ".bashrc"

def setup_environment_variables(api_keys: Dict[str, str]) -> bool:
    """
    Set up environment variables across platforms

    Args:
        api_keys: Dictionary of environment variable names to values

    Returns:
        True if all variables were set successfully, False otherwise
    """
    logger = get_logger()
    success = True

    if not api_keys:
        return True

    print(f"\n{Colors.BLUE}[INFO] Setting up environment variables...{Colors.RESET}")

    for env_var, value in api_keys.items():
        try:
            # Set for current session
            os.environ[env_var] = value

            if os.name == "nt":  # Windows
                # Use setx for persistent user variable
                result = subprocess.run(
                    ["setx", env_var, value], capture_output=True, text=True
                )
                if result.returncode != 0:
                    display_warning(
                        f"Could not set {env_var} persistently: {result.stderr.strip()}"
                    )
                    success = False
                else:
                    logger.info(
                        f"Windows environment variable {env_var} set persistently"
                    )
            else:  # Unix-like systems
                shell_config = detect_shell_config()

                # Check if the export already exists
                export_line = f'export {env_var}="{value}"'

                try:
                    with open(shell_config, "r") as f:
                        content = f.read()

                    # Check if this environment variable is already set
                    if f"export {env_var}=" in content:
                        # Variable exists - don't duplicate
                        logger.info(
                            f"Environment variable {env_var} already exists in {shell_config.name}"
                        )
                    else:
                        # Append export to shell config
                        with open(shell_config, "a") as f:
                            f.write(f"\n# SuperClaude API Key\n{export_line}\n")

                        display_info(f"Added {env_var} to {shell_config.name}")
                        logger.info(f"Added {env_var} to {shell_config}")

                except Exception as e:
                    display_warning(f"Could not update {shell_config.name}: {e}")
                    success = False

                logger.info(
                    f"Environment variable {env_var} configured for current session"
                )

        except Exception as e:
            logger.error(f"Failed to set {env_var}: {e}")
            display_warning(f"Failed to set {env_var}: {e}")
            success = False

    if success:
        # Add to tracking
        _add_env_tracking(api_keys)

        display_success("Environment variables configured successfully")
        if os.name != "nt":
            display_info(
                "Restart your terminal or run 'source ~/.bashrc' to apply changes"
            )
        else:
            display_info(
                "New environment variables will be available in new terminal sessions"
            )
    else:
        display_warning("Some environment variables could not be set persistently")
        display_info("You can set them manually or check the logs for details")

    return success

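# Usage sketch for the setup path above (the key name and value are placeholders;
# callers in the real installer may collect these differently):
#
#   api_keys = {"TAVILY_API_KEY": "tvly-..."}
#   if setup_environment_variables(api_keys):
#       validate_environment_setup(api_keys)
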
def validate_environment_setup(env_vars: Dict[str, str]) -> bool:
    """
    Validate that environment variables are properly set

    Args:
        env_vars: Dictionary of environment variable names to expected values

    Returns:
        True if all variables are set correctly, False otherwise
    """
    logger = get_logger()
    all_valid = True

    for env_var, expected_value in env_vars.items():
        current_value = os.environ.get(env_var)

        if current_value is None:
            logger.warning(f"Environment variable {env_var} is not set")
            all_valid = False
        elif current_value != expected_value:
            logger.warning(f"Environment variable {env_var} has unexpected value")
            all_valid = False
        else:
            logger.info(f"Environment variable {env_var} is set correctly")

    return all_valid

def get_shell_name() -> str:
    """
    Get the name of the current shell

    Returns:
        Name of the shell (e.g., 'bash', 'zsh', 'fish')
    """
    shell_path = os.environ.get("SHELL", "")
    if shell_path:
        return Path(shell_path).name
    return "unknown"

def get_superclaude_environment_variables() -> Dict[str, str]:
    """
    Get environment variables that were set by SuperClaude

    Returns:
        Dictionary of environment variable names to their current values
    """
    # Load tracking data to get SuperClaude-managed variables
    tracking_data = _load_env_tracking()

    found_vars = {}
    for env_var, metadata in tracking_data.items():
        if metadata.get("set_by") == "superclaude":
            value = os.environ.get(env_var)
            if value:
                found_vars[env_var] = value

    # Fallback: check known SuperClaude API key environment variables
    # (for backwards compatibility with existing installations)
    known_superclaude_env_vars = [
        "TWENTYFIRST_API_KEY",  # Magic server
        "MORPH_API_KEY",  # Morphllm server
    ]

    for env_var in known_superclaude_env_vars:
        if env_var not in found_vars:
            value = os.environ.get(env_var)
            if value:
                found_vars[env_var] = value

    return found_vars

def cleanup_environment_variables(
    env_vars_to_remove: Dict[str, str], create_restore_script: bool = True
) -> bool:
    """
    Safely remove environment variables with backup and restore options

    Args:
        env_vars_to_remove: Dictionary of environment variable names to remove
        create_restore_script: Whether to create a script to restore the variables

    Returns:
        True if cleanup was successful, False otherwise
    """
    logger = get_logger()
    success = True

    if not env_vars_to_remove:
        return True

    # Create restore script if requested
    if create_restore_script:
        restore_script_path = _create_restore_script(env_vars_to_remove)
        if restore_script_path:
            display_info(f"Created restore script: {restore_script_path}")
        else:
            display_warning("Could not create restore script")

    print(f"\n{Colors.BLUE}[INFO] Removing environment variables...{Colors.RESET}")

    for env_var, value in env_vars_to_remove.items():
        try:
            # Remove from current session
            if env_var in os.environ:
                del os.environ[env_var]
                logger.info(f"Removed {env_var} from current session")

            if os.name == "nt":  # Windows
                # Remove persistent user variable using reg command
                result = subprocess.run(
                    ["reg", "delete", "HKCU\\Environment", "/v", env_var, "/f"],
                    capture_output=True,
                    text=True,
                )
                if result.returncode != 0:
                    # Variable might not exist in registry, which is fine
                    logger.debug(
                        f"Registry deletion for {env_var}: {result.stderr.strip()}"
                    )
                else:
                    logger.info(f"Removed {env_var} from Windows registry")
            else:  # Unix-like systems
                shell_config = detect_shell_config()
                if shell_config and shell_config.exists():
                    _remove_env_var_from_shell_config(shell_config, env_var)

        except Exception as e:
            logger.error(f"Failed to remove {env_var}: {e}")
            display_warning(f"Could not remove {env_var}: {e}")
            success = False

    if success:
        # Remove from tracking
        _remove_env_tracking(list(env_vars_to_remove.keys()))

        display_success("Environment variables removed successfully")
        if os.name != "nt":
            display_info(
                "Restart your terminal or source your shell config to apply changes"
            )
        else:
            display_info("Changes will take effect in new terminal sessions")
    else:
        display_warning("Some environment variables could not be removed")

    return success

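# Uninstall usage sketch, pairing the two helpers above (the variable name is
# illustrative; real callers may wire this up differently):
#
#   managed = get_superclaude_environment_variables()
#   if managed:
#       cleanup_environment_variables(managed, create_restore_script=True)
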
def _create_restore_script(env_vars: Dict[str, str]) -> Optional[Path]:
    """Create a script to restore environment variables"""
    try:
        home = get_home_directory()
        if os.name == "nt":  # Windows
            script_path = home / "restore_superclaude_env.bat"
            with open(script_path, "w") as f:
                f.write("@echo off\n")
                f.write("REM SuperClaude Environment Variable Restore Script\n")
                f.write("REM Generated during uninstall\n\n")
                for env_var, value in env_vars.items():
                    f.write(f'setx {env_var} "{value}"\n')
                f.write("\necho Environment variables restored\n")
                f.write("pause\n")
        else:  # Unix-like
            script_path = home / "restore_superclaude_env.sh"
            with open(script_path, "w") as f:
                f.write("#!/bin/bash\n")
                f.write("# SuperClaude Environment Variable Restore Script\n")
                f.write("# Generated during uninstall\n\n")
                shell_config = detect_shell_config()
                for env_var, value in env_vars.items():
                    f.write(f'export {env_var}="{value}"\n')
                    if shell_config:
                        f.write(
                            f"echo 'export {env_var}=\"{value}\"' >> {shell_config}\n"
                        )
                f.write("\necho 'Environment variables restored'\n")

        # Make script executable
        script_path.chmod(0o755)

        return script_path

    except Exception as e:
        get_logger().error(f"Failed to create restore script: {e}")
        return None

def _remove_env_var_from_shell_config(shell_config: Path, env_var: str) -> bool:
    """Remove environment variable export from shell configuration file"""
    try:
        # Read current content
        with open(shell_config, "r") as f:
            lines = f.readlines()

        # Filter out lines that export this variable
        filtered_lines = []
        skip_next_blank = False

        for line in lines:
            # Check if this line exports our variable
            if f"export {env_var}=" in line or line.strip() == "# SuperClaude API Key":
                skip_next_blank = True
                continue

            # Skip blank line after removed export
            if skip_next_blank and line.strip() == "":
                skip_next_blank = False
                continue

            skip_next_blank = False
            filtered_lines.append(line)

        # Write back the filtered content
        with open(shell_config, "w") as f:
            f.writelines(filtered_lines)

        get_logger().info(f"Removed {env_var} export from {shell_config.name}")
        return True

    except Exception as e:
        get_logger().error(f"Failed to remove {env_var} from {shell_config}: {e}")
        return False

def create_env_file(
    api_keys: Dict[str, str], env_file_path: Optional[Path] = None
) -> bool:
    """
    Create a .env file with the API keys (alternative to shell config)

    Args:
        api_keys: Dictionary of environment variable names to values
        env_file_path: Path to the .env file (defaults to home directory)

    Returns:
        True if .env file was created successfully, False otherwise
    """
    if env_file_path is None:
        env_file_path = get_home_directory() / ".env"

    logger = get_logger()

    try:
        # Read existing .env file if it exists
        existing_content = ""
        if env_file_path.exists():
            with open(env_file_path, "r") as f:
                existing_content = f.read()

        # Prepare new content
        new_lines = []
        for env_var, value in api_keys.items():
            line = f'{env_var}="{value}"'

            # Check if this variable already exists
            if f"{env_var}=" in existing_content:
                logger.info(f"Variable {env_var} already exists in .env file")
            else:
                new_lines.append(line)

        # Append new lines if any
        if new_lines:
            with open(env_file_path, "a") as f:
                if existing_content and not existing_content.endswith("\n"):
                    f.write("\n")
                f.write("# SuperClaude API Keys\n")
                for line in new_lines:
                    f.write(line + "\n")

            # Set file permissions (readable only by owner)
            env_file_path.chmod(0o600)

        display_success(f"Created .env file at {env_file_path}")
        logger.info(f"Created .env file with {len(new_lines)} new variables")

        return True

    except Exception as e:
        logger.error(f"Failed to create .env file: {e}")
        display_warning(f"Could not create .env file: {e}")
        return False

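# Usage sketch for the dotenv alternative (hypothetical key and value):
#
#   create_env_file({"MORPH_API_KEY": "sk-..."})
#
# By default this appends to ~/.env and tightens permissions to 0o600 when it
# writes new entries.
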
def check_research_prerequisites() -> tuple[bool, list[str]]:
    """
    Check if deep research prerequisites are met

    Returns:
        Tuple of (success: bool, warnings: List[str])
    """
    warnings = []
    logger = get_logger()

    # Check Tavily API key
    if not os.environ.get("TAVILY_API_KEY"):
        warnings.append(
            "TAVILY_API_KEY not set - Deep research web search will not work\n"
            "Get your key from: https://app.tavily.com"
        )
        logger.warning("TAVILY_API_KEY not found in environment")
    else:
        logger.info("Found TAVILY_API_KEY in environment")

    # Check Node.js for MCP
    import shutil

    if not shutil.which("node"):
        warnings.append(
            "Node.js not found - Required for Tavily MCP\n"
            "Install from: https://nodejs.org"
        )
        logger.warning("Node.js not found - required for Tavily MCP")
    else:
        logger.info("Node.js found")

    # Check npm
    if not shutil.which("npm"):
        warnings.append(
            "npm not found - Required for MCP server installation\n"
            "Usually installed with Node.js"
        )
        logger.warning("npm not found - required for MCP installation")
    else:
        logger.info("npm found")

    return len(warnings) == 0, warnings
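
# Pre-flight usage sketch (how the warnings are surfaced is up to the caller;
# the loop below is illustrative):
#
#   ok, problems = check_research_prerequisites()
#   if not ok:
#       for warning in problems:
#           display_warning(warning)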