initial commit

George Liu 2025-07-08 12:07:34 +10:00
commit 3a72eba301
8 changed files with 1379 additions and 0 deletions


@@ -0,0 +1,223 @@
You are an expert prompt engineering specialist with deep expertise in applying Anthropic's extended thinking patterns to enhance prompt effectiveness. Your role is to systematically transform prompts using advanced reasoning frameworks to dramatically improve their analytical depth, accuracy, and reliability.
**ADVANCED PROGRESSIVE ENHANCEMENT APPROACH**: Apply a systematic methodology to transform any prompt file using Anthropic's most sophisticated thinking patterns. Begin with open-ended analysis, then systematically apply multiple enhancement frameworks to create enterprise-grade prompts with maximum reasoning effectiveness.
**TARGET PROMPT FILE**: $ARGUMENTS
## SYSTEMATIC PROMPT ENHANCEMENT METHODOLOGY
### Phase 1: Current State Analysis & Thinking Pattern Identification
<thinking>
I need to thoroughly analyze the current prompt to understand its purpose, structure, and existing thinking patterns before applying enhancements. What type of prompt is this? What thinking patterns would be most beneficial? What are the specific enhancement opportunities?
</thinking>
**Step 1 - Open-Ended Prompt Analysis**:
- What is the primary purpose and intended outcome of this prompt?
- What thinking patterns (if any) are already present?
- What complexity level does this prompt operate at?
- What unique characteristics require specialized enhancement approaches?
**Step 2 - Enhancement Opportunity Assessment**:
- Where could progressive reasoning (open-ended → systematic) be most beneficial?
- What analytical frameworks would improve the prompt's effectiveness?
- What verification mechanisms would increase accuracy and reliability?
- What thinking budget allocation would optimize performance?
### Phase 2: Sequential Enhancement Framework Application
Apply these enhancement frameworks systematically based on prompt type and complexity:
#### Framework 1: Progressive Reasoning Structure
**Implementation Guidelines:**
- **High-Level Exploration First**: Add open-ended thinking invitations before specific instructions
- **Systematic Framework Progression**: Structure analysis to move from broad exploration to specific methodologies
- **Creative Problem-Solving Latitude**: Encourage exploration of unconventional approaches before constraining to standard patterns
**Enhancement Patterns:**
```
Before: "Analyze the code for security issues"
After: "Before applying standard security frameworks, think creatively about what unique security characteristics this codebase might have. What unconventional security threats might exist that standard frameworks don't address? Then systematically apply: STRIDE → OWASP Top 10 → Domain-specific threats"
```
#### Framework 2: Sequential Analytical Framework Integration
**Implementation Guidelines:**
- **Multiple Framework Application**: Layer 3-6 analytical frameworks within each analysis domain
- **Framework Progression**: Order frameworks from general to specific to custom
- **Context Adaptation**: Modify standard frameworks for domain-specific applications
**Enhancement Patterns:**
```
Before: "Review the architecture"
After: "Apply sequential architectural analysis: Step 1 - Open-ended exploration of unique patterns → Step 2 - High-level pattern analysis → Step 3 - Module-level assessment → Step 4 - Interface design evaluation → Step 5 - Evolution planning → Step 6 - Domain-specific patterns"
```
#### Framework 3: Systematic Verification with Test Cases
**Implementation Guidelines:**
- **Test Case Validation**: Add positive, negative, edge case, and context testing for findings
- **Steel Man Reasoning**: Include arguing against conclusions to find valid justifications
- **Error Checking**: Verify file references, technical claims, and framework application
- **Completeness Validation**: Assess coverage and identify gaps
**Enhancement Patterns:**
```
Before: "Provide recommendations"
After: "For each recommendation, apply systematic verification: 1) Positive test: Does this apply to the actual implementation? 2) Negative test: Are there counter-examples? 3) Steel man reasoning: What valid justifications exist for current implementation? 4) Context test: Is this relevant to the specific domain?"
```
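To make the four-test gate concrete, here is a minimal sketch of how a finding could be checked before it is reported; the class and field names are illustrative, not part of the methodology:
```python
from dataclasses import dataclass

@dataclass
class VerifiedFinding:
    claim: str
    positive_test: bool   # confirmed against the actual implementation
    negative_test: bool   # no counter-examples were found
    steel_man: bool       # survives the strongest argument for the status quo
    context_test: bool    # relevant to this specific domain

    def keep(self) -> bool:
        # Report a finding only when it survives all four verification tests.
        return all((self.positive_test, self.negative_test,
                    self.steel_man, self.context_test))

# A finding that fails steel-man reasoning is dropped, not reported.
finding = VerifiedFinding("Function X should be split", True, True, False, True)
assert finding.keep() is False
```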
#### Framework 4: Constraint Optimization & Trade-Off Analysis
**Implementation Guidelines:**
- **Multi-Dimensional Analysis**: Identify competing requirements (security vs performance, maintainability vs speed)
- **Systematic Trade-Off Evaluation**: Constraint identification, option generation, impact assessment
- **Context-Aware Prioritization**: Domain-specific constraint priority matrices
- **Optimization Decision Framework**: Systematic approach to resolving constraint conflicts
**Enhancement Patterns:**
```
Before: "Optimize performance"
After: "Apply constraint optimization analysis: 1) Identify competing requirements (performance vs maintainability, speed vs reliability) 2) Generate alternative approaches 3) Evaluate quantifiable costs/benefits 4) Apply domain-specific priority matrix 5) Select optimal balance point with explicit trade-off justification"
```
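One hedged way to picture the priority matrix and the balance-point selection is a weighted score per option; the domains, weights, and impact numbers below are invented for illustration only:
```python
# Hypothetical domain weights for a latency-sensitive service.
PRIORITY_MATRIX = {"security": 0.40, "performance": 0.35, "maintainability": 0.25}

def score_option(impacts: dict[str, float]) -> float:
    """Combine per-constraint impacts (-1.0 to 1.0) into one comparable score."""
    return sum(PRIORITY_MATRIX[domain] * impacts.get(domain, 0.0)
               for domain in PRIORITY_MATRIX)

options = {
    "add_cache":      {"security": -0.2, "performance": 0.9, "maintainability": -0.3},
    "optimize_query": {"security": 0.0,  "performance": 0.6, "maintainability": 0.1},
}
# optimize_query wins here: its score (0.235) beats add_cache (0.16).
best = max(options, key=lambda name: score_option(options[name]))
```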
#### Framework 5: Advanced Self-Correction & Bias Detection
**Implementation Guidelines:**
- **Cognitive Bias Mitigation**: Confirmation bias, anchoring bias, availability heuristic detection
- **Perspective Diversity**: Simulate multiple analytical perspectives (security-first, performance-first, etc.)
- **Assumption Challenge**: Systematic questioning of technical, contextual, and best practice assumptions
- **Self-Correction Mechanisms**: Alternative interpretation testing and evidence re-examination
**Enhancement Patterns:**
```
Before: "Analyze the code quality"
After: "Apply bias detection throughout analysis: 1) Confirmation bias check: Am I only finding evidence supporting initial impressions? 2) Perspective diversity: How would security-first vs performance-first analysts view this differently? 3) Assumption challenge: What assumptions am I making about best practices? 4) Alternative interpretations: What other valid ways can these patterns be interpreted?"
```
#### Framework 6: Extended Thinking Budget Management
**Implementation Guidelines:**
- **Complexity Assessment**: High/Medium/Low complexity indicators with appropriate thinking allocation
- **Phase-Specific Budgets**: Extended thinking for novel/complex analysis, standard for established frameworks
- **Thinking Depth Validation**: Indicators for sufficient vs insufficient thinking depth
- **Process Monitoring**: Quality checkpoints and budget adjustment triggers
**Enhancement Patterns:**
```
Before: "Think about this problem"
After: "Assess complexity and allocate thinking budget: High Complexity (novel patterns, cross-cutting concerns) = Extended thinking required. Medium Complexity (standard frameworks) = Standard thinking sufficient. Monitor thinking depth: Multiple alternatives considered? Edge cases explored? Context-specific factors analyzed? Adjust budget if analysis feels superficial."
```
### Phase 3: Verification & Quality Assurance
#### Pre-Enhancement Baseline Documentation
**Document current state:**
- Original prompt structure and thinking patterns
- Identified enhancement opportunities
- Expected improvement areas
#### Post-Enhancement Validation
**Apply systematic verification:**
1. **Enhancement Effectiveness Test**: Does the enhanced prompt produce demonstrably better reasoning?
2. **Thinking Pattern Integration Test**: Are thinking patterns naturally integrated vs artificially added?
3. **Usability Test**: Is the enhanced prompt practical for actual use?
4. **Steel Man Test**: Argue against enhancement decisions - are they truly beneficial?
#### Before/After Comparison Framework
**Provide structured comparison:**
- **Reasoning Depth**: Before vs After analytical depth assessment
- **Verification Mechanisms**: Added self-correction and error checking
- **Framework Integration**: Number and quality of analytical frameworks added
- **Thinking Budget**: Explicit vs implicit thinking time allocation
### Phase 4: Context-Aware Optimization
#### Prompt Type Classification & Specialized Enhancement
**Analysis Prompts** (Code review, data analysis, research):
- Heavy emphasis on sequential analytical frameworks
- Multiple verification mechanisms
- Systematic bias detection
- Extended thinking budget allocation
**Creative Prompts** (Writing, brainstorming, design):
- Focus on open-ended exploration
- Perspective diversity simulation
- Constraint optimization for creative requirements
- Moderate thinking budget with flexibility
**Instructional Prompts** (Teaching, explanation, documentation):
- Progressive reasoning from simple to complex
- Multi-perspective explanation frameworks
- Assumption challenge for clarity
- Standard thinking budget with clear structure
**Decision-Making Prompts** (Planning, strategy, optimization):
- Constraint optimization as primary framework
- Multiple analytical model application
- Advanced self-correction mechanisms
- Extended thinking budget for complex trade-offs
#### Domain-Specific Considerations
**Technical Domains** (Software, engineering, science):
- Emphasis on systematic verification and test cases
- Technical bias detection (anchoring on familiar patterns)
- Performance vs other constraint optimization
- Extended thinking for novel technical patterns
**Business Domains** (Strategy, operations, management):
- Multiple stakeholder perspective simulation
- Constraint optimization for competing business requirements
- Assumption challenge for market/industry assumptions
- Extended thinking for strategic complexity
**Creative Domains** (Design, writing, marketing):
- Open-ended exploration emphasis
- Creative constraint optimization
- Perspective diversity for audience consideration
- Flexible thinking budget allocation
### Phase 5: Implementation & Documentation
#### Enhanced Prompt Structure
**Required Components:**
1. **Progressive Reasoning Opening**: Open-ended exploration before systematic frameworks
2. **Sequential Framework Application**: 3-6 frameworks per analysis domain
3. **Verification Checkpoints**: Test cases and steel man reasoning throughout
4. **Constraint Optimization**: Trade-off analysis for competing requirements
5. **Self-Correction Mechanisms**: Bias detection and alternative interpretation testing
6. **Thinking Budget Management**: Complexity assessment and thinking time allocation
#### Enhancement Audit Trail
**Document enhancement decisions:**
- Which thinking patterns were applied and why
- How frameworks were adapted for domain specificity
- What trade-offs were made in enhancement design
- Expected improvement areas and success metrics
#### Usage Guidelines
**For enhanced prompt users:**
- How to leverage the added thinking patterns effectively
- When to allocate extended thinking time
- How to apply verification mechanisms
- What to expect from the enhanced analytical depth
### Phase 6: Final Enhancement Delivery
#### Comprehensive Enhancement Report
**Provide structured analysis:**
1. **Original Prompt Assessment**: Current state analysis and limitation identification
2. **Enhancement Strategy**: Which frameworks were applied and adaptation rationale
3. **Before/After Comparison**: Concrete improvements achieved
4. **Verification Results**: Testing of enhanced prompt effectiveness
5. **Usage Recommendations**: How to best leverage the enhanced prompt
6. **Future Enhancement Opportunities**: Additional improvements for specific use cases
#### Enhanced Prompt File
**Deliver improved prompt with:**
- All thinking pattern enhancements integrated naturally
- Clear structure for progressive reasoning
- Embedded verification and self-correction mechanisms
- Appropriate thinking budget guidance
- Domain-specific optimizations applied
**METHODOLOGY VERIFICATION**: After completing the enhancement, apply steel man reasoning to the enhancement decisions: Are these improvements truly beneficial? Do they add unnecessary complexity? Are they appropriate for the prompt's intended use? Document any refinements needed based on this self-correction analysis.
**ENHANCEMENT COMPLETE**: The enhanced prompt should demonstrate significantly improved reasoning depth, accuracy, and reliability compared to the original version, while maintaining practical usability for its intended purpose.


@@ -0,0 +1,595 @@
# Convert Complex Prompts to TodoWrite Tasklist Method
**Purpose**: Transform verbose, context-heavy slash commands into efficient TodoWrite tasklist-based methods with parallel subagent execution for 60-70% speed improvements.
**Usage**: `/convert-to-todowrite-tasklist-prompt @/path/to/original-slash-command.md`
---
## CONVERSION EXECUTION
### Step 1: Read Original Prompt
**File to Convert**: $ARGUMENTS
First, analyze the original slash command file to understand its structure, complexity, and conversion opportunities.
### Step 2: Apply Conversion Framework
Transform the original prompt using the TodoWrite tasklist method with parallel subagent optimization.
### Step 3: Generate Optimized Version
Output the converted slash command with efficient task delegation and context management.
---
## Argument Variable Integration
When converting slash commands, ensure proper argument handling for dynamic inputs:
### Standard Argument Variables
```markdown
## ARGUMENT HANDLING
**File Input**: {file_path} or {code} - The primary file(s) or code to analyze
**Analysis Scope**: {scope} - Specific focus areas (security, performance, quality, architecture, all)
**Output Format**: {format} - Report format (detailed, summary, action_items)
**Target Audience**: {audience} - Intended audience (technical, executive, security_team)
**Priority Level**: {priority} - Analysis depth (quick, standard, comprehensive)
**Context**: {context} - Additional project context and constraints
```
### Usage Examples:
```bash
# Basic usage with file input
/comprehensive-review file_path="@src/main.py" scope="security,performance"
# Advanced usage with multiple parameters
/comprehensive-review file_path="@codebase/" scope="all" format="detailed" audience="technical" priority="comprehensive" context="Production deployment review"
# Quick analysis with minimal scope
/comprehensive-review file_path="@config.yaml" scope="security" format="summary" priority="quick"
```
### Argument Integration in TodoWrite Tasks
**Dynamic Task Content Based on Arguments:**
```json
[
{"id": "setup_analysis", "content": "Record start time and initialize analysis for {file_path}", "status": "pending", "priority": "high"},
{"id": "security_analysis", "content": "Security Analysis of {file_path} - Focus: {scope}", "status": "pending", "priority": "high"},
{"id": "report_generation", "content": "Generate {format} report for {audience}", "status": "pending", "priority": "high"}
]
```
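A hedged sketch of how these placeholders might be substituted at invocation time; the argument names come from the templates above, while the helper itself is illustrative:
```python
TASK_TEMPLATES = [
    {"id": "setup_analysis",
     "content": "Record start time and initialize analysis for {file_path}"},
    {"id": "security_analysis",
     "content": "Security Analysis of {file_path} - Focus: {scope}"},
    {"id": "report_generation",
     "content": "Generate {format} report for {audience}"},
]

def render_tasks(templates, **args):
    """Fill each template's placeholders and mark the task pending."""
    return [{**t, "content": t["content"].format(**args),
             "status": "pending", "priority": "high"} for t in templates]

tasks = render_tasks(TASK_TEMPLATES, file_path="@src/main.py",
                     scope="security,performance", format="summary",
                     audience="technical")
```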
---
## Conversion Analysis Framework
### Step 1: Identify Context Overload Patterns
**Context Overflow Indicators:**
- ❌ **Massive Instructions**: >1000 lines of detailed frameworks and methodologies
- ❌ **Upfront Mass File Loading**: Attempting to load 10+ files simultaneously with @filename syntax
- ❌ **Verbose Framework Application**: Extended thinking sections, redundant validation loops
- ❌ **Sequential Bottlenecks**: All analysis phases running one after another instead of in parallel
- ❌ **Redundant Content**: Repeated frameworks, bias detection, and steel man reasoning overengineering
**Success Patterns to Implement:**
- ✅ **Task Tool Delegation**: Specialized agents for bounded analysis domains
- ✅ **Progressive Synthesis**: Incremental building rather than simultaneous processing
- ✅ **Parallel Execution**: Multiple subagents running simultaneously
- ✅ **Context Recycling**: Fresh context for each analysis phase
- ✅ **Strategic File Selection**: Phase-specific file targeting
### Step 2: Task Decomposition Strategy
**Convert Monolithic Workflows Into:**
1. **Setup Phase**: Initialization and timestamp recording
2. **Parallel Analysis Phases**: 2-4 specialized domains running simultaneously
3. **Synthesis Phase**: Consolidation of parallel findings
4. **Verification Phase**: Quality assurance and validation
5. **Completion Phase**: Final integration and timestamp
**Example Decomposition:**
```
BEFORE (Sequential):
Security Analysis (10 min) → Performance Analysis (10 min) → Quality Analysis (10 min) = 30 minutes
AFTER (Parallel Subagents):
Phase 1: Security Subagents A, B, C (10 min, run in parallel)
Phase 2: Performance Subagents A, B, C (10 min, run in parallel)
Phase 3: Quality Subagents A, B (8 min, run in parallel)
Synthesis: Consolidate findings (5 min)
Total: ~15 minutes, since phases 1-3 run concurrently: max(10, 10, 8) + 5 = 15 (50% faster, with better coverage)
```
---
## TodoWrite Structure for Parallel Execution
### Enhanced Task JSON Template with Argument Integration
```json
[
{"id": "setup_analysis", "content": "Record start time and initialize analysis for {file_path}", "status": "pending", "priority": "high"},
// Conditional Parallel Groups Based on {scope} Parameter
// If scope includes "security" or "all":
{"id": "security_auth", "content": "Security Analysis of {file_path} - Authentication & Validation (Subagent A)", "status": "pending", "priority": "high", "parallel_group": "security", "condition": "security in {scope}"},
{"id": "security_tools", "content": "Security Analysis of {file_path} - Tool Isolation & Parameters (Subagent B)", "status": "pending", "priority": "high", "parallel_group": "security", "condition": "security in {scope}"},
{"id": "security_protocols", "content": "Security Analysis of {file_path} - Protocols & Transport (Subagent C)", "status": "pending", "priority": "high", "parallel_group": "security", "condition": "security in {scope}"},
// If scope includes "performance" or "all":
{"id": "performance_complexity", "content": "Performance Analysis of {file_path} - Algorithmic Complexity (Subagent A)", "status": "pending", "priority": "high", "parallel_group": "performance", "condition": "performance in {scope}"},
{"id": "performance_io", "content": "Performance Analysis of {file_path} - I/O Patterns & Async (Subagent B)", "status": "pending", "priority": "high", "parallel_group": "performance", "condition": "performance in {scope}"},
{"id": "performance_memory", "content": "Performance Analysis of {file_path} - Memory & Concurrency (Subagent C)", "status": "pending", "priority": "high", "parallel_group": "performance", "condition": "performance in {scope}"},
// If scope includes "quality" or "architecture" or "all":
{"id": "quality_patterns", "content": "Quality Analysis of {file_path} - Code Patterns & SOLID (Subagent A)", "status": "pending", "priority": "high", "parallel_group": "quality", "condition": "quality in {scope}"},
{"id": "architecture_design", "content": "Architecture Analysis of {file_path} - Modularity & Interfaces (Subagent B)", "status": "pending", "priority": "high", "parallel_group": "quality", "condition": "architecture in {scope}"},
// Sequential Dependencies
{"id": "synthesis_integration", "content": "Synthesis & Integration - Consolidate findings for {file_path}", "status": "pending", "priority": "high", "depends_on": ["security", "performance", "quality"]},
{"id": "report_generation", "content": "Generate {format} report for {audience} - Analysis of {file_path}", "status": "pending", "priority": "high"},
{"id": "verification_parallel", "content": "Parallel verification of {file_path} analysis with multiple validation streams", "status": "pending", "priority": "high"},
{"id": "final_integration", "content": "Final integration and completion for {file_path}", "status": "pending", "priority": "high"}
]
```
### Conditional Task Execution Based on Arguments
**Scope-Based Task Filtering:**
```markdown
## CONDITIONAL EXECUTION LOGIC
**Full Analysis (scope="all")**:
- Execute all security, performance, quality, and architecture tasks
- Use comprehensive parallel subagent deployment
**Security-Focused (scope="security")**:
- Execute only security_auth, security_tools, security_protocols tasks
- Skip performance, quality, architecture parallel groups
- Faster execution with security specialization
**Performance-Focused (scope="performance")**:
- Execute only performance_complexity, performance_io, performance_memory tasks
- Include synthesis and reporting phases
- Targeted performance optimization focus
**Custom Scope (scope="security,quality")**:
- Execute selected parallel groups based on comma-separated values
- Flexible analysis depth based on specific needs
**Priority-Based Execution:**
- priority="quick": Use single subagent per domain, reduced file scope
- priority="standard": Use 2-3 subagents per domain (default)
- priority="comprehensive": Use 3-4 subagents per domain, expanded file scope
```
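For illustration only, the scope filtering and parallel grouping described above could look like the following sketch; the field names follow the JSON template, and the actual dispatch is left to the Task tool:
```python
from collections import defaultdict

def select_tasks(tasks: list[dict], scope: str) -> list[dict]:
    """Keep tasks whose parallel_group matches the requested scope."""
    wanted = {s.strip() for s in scope.split(",")}
    if "all" in wanted:
        return tasks
    # Tasks without a parallel_group (setup, synthesis, reporting) always run.
    return [t for t in tasks
            if "parallel_group" not in t or t["parallel_group"] in wanted]

def group_for_dispatch(tasks: list[dict]) -> dict[str, list[dict]]:
    """Bucket tasks by parallel_group so each group can run concurrently."""
    groups: dict[str, list[dict]] = defaultdict(list)
    for t in tasks:
        groups[t.get("parallel_group", "sequential")].append(t)
    return dict(groups)
```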
### Task Delegation Execution Framework
**CRITICAL: Use Task Tool Delegation Pattern (Prevents Context Overflow)**
```markdown
## TASK DELEGATION FRAMEWORK
### Phase 1: Security Analysis (Task-Based)
**TodoWrite**: Mark "security_analysis" as in_progress
**Task Delegation**: Use Task tool with focused analysis:
Task Description: "Security Analysis of Target Codebase"
Task Prompt: "Analyze security vulnerabilities focusing on:
- STRIDE threat modeling for architecture
- OWASP Top 10 assessment (adapted for context)
- Authentication and credential management
- Input validation and injection prevention
- Protocol-specific security patterns
**CONTEXT MANAGEMENT**: Analyze only 3-5 key security files:
- Main coordinator file (entry point security)
- Security/validation modules (2-3 files max)
- Key protocol handlers (1-2 files max)
Provide specific findings with file:line references and actionable recommendations."
### Phase 2: Performance Analysis (Task-Based)
**TodoWrite**: Mark "security_analysis" completed, "performance_analysis" as in_progress
**Task Delegation**: Use Task tool with performance focus:
Task Description: "Performance Analysis of Target Codebase"
Task Prompt: "Analyze performance characteristics focusing on:
- Algorithmic complexity (Big O analysis)
- I/O efficiency patterns (async/await, file operations)
- Memory management (caching, object lifecycle)
- Concurrency bottlenecks and optimization opportunities
**CONTEXT MANAGEMENT**: Analyze only 3-5 key performance files:
- Core algorithm modules (complexity focus)
- I/O intensive modules (async/caching focus)
- Memory management modules (lifecycle focus)
Identify specific bottlenecks with measured impact and optimization opportunities."
### Phase 3: Quality & Architecture Analysis (Task-Based)
**TodoWrite**: Mark "performance_analysis" completed, "quality_analysis" as in_progress
**Task Delegation**: Use Task tool with quality focus:
Task Description: "Quality & Architecture Analysis of Target Codebase"
Task Prompt: "Evaluate code quality and architectural design focusing on:
- Clean code principles (function length, naming, responsibility)
- SOLID principles compliance and modular design
- Architecture patterns and dependency management
- Interface design and extensibility considerations
**CONTEXT MANAGEMENT**: Analyze only 3-5 representative files:
- Core implementation patterns (2-3 files)
- Module interfaces and boundaries (1-2 files)
- Configuration and coordination modules (1 file)
Provide complexity metrics and specific refactoring recommendations with examples."
**CRITICAL SUCCESS PATTERN**: Each Task operation stays within context limits by analyzing only 3-5 files maximum, using fresh context for each analysis phase.
```
---
## Subagent Specialization Templates
### 1. Domain-Based Parallel Analysis
**Security Domain Subagents:**
```markdown
Subagent A Focus: Authentication, validation, credential management
Subagent B Focus: Tool isolation, parameter security, privilege boundaries
Subagent C Focus: Protocol security, transport validation, message integrity
```
**Performance Domain Subagents:**
```markdown
Subagent A Focus: Algorithmic complexity, Big O analysis, data structures
Subagent B Focus: I/O patterns, async/await, file operations, network calls
Subagent C Focus: Memory management, caching, object lifecycle, concurrency
```
**Quality Domain Subagents:**
```markdown
Subagent A Focus: Code patterns, SOLID principles, clean code metrics
Subagent B Focus: Architecture design, modularity, interface consistency
```
### 2. File-Based Parallel Analysis
**Large Codebase Distribution:**
```markdown
Subagent A: Core coordination files (mcp_server.py, mcp_core_tools.py)
Subagent B: Business logic files (mcp_collaboration_engine.py, mcp_service_implementations.py)
Subagent C: Infrastructure files (redis_cache.py, openrouter_client.py, conversation_manager.py)
Subagent D: Security & utilities (security/, gemini_utils.py, monitoring.py)
```
### 3. Cross-Cutting Concern Analysis
**Thematic Parallel Analysis:**
```markdown
Subagent A: Error handling patterns across all modules
Subagent B: Configuration management across all modules
Subagent C: Performance bottlenecks across all modules
Subagent D: Security patterns across all modules
```
### 4. Task-Based Verification (CRITICAL)
**Progressive Task Verification:**
```markdown
### GEMINI VERIFICATION (Task-Based - Prevents Context Overflow)
**TodoWrite**: Mark "gemini_verification" as in_progress
**Task Delegation**: Use Task tool for verification:
Task Description: "Gemini Verification of Comprehensive Analysis"
Task Prompt: "Apply systematic verification frameworks to evaluate the comprehensive review report accuracy.
**VERIFICATION APPROACH**: Use progressive analysis rather than loading all files simultaneously.
Focus on:
1. **Technical Accuracy**: Cross-reference report findings with actual implementation
2. **Transport Awareness**: Verify recommendations suit specific architecture
3. **Framework Application**: Confirm systematic methodology application
4. **Actionability**: Validate file:line references and concrete examples
**PROGRESSIVE VERIFICATION**:
- Verify security findings accuracy through targeted code examination
- Verify performance analysis completeness through key module review
- Verify quality assessment validity through pattern analysis
- Verify architectural recommendations through interface review
Report file to analyze: {report_file_path}
Provide structured verification with specific agreement/disagreement analysis."
**CRITICAL**: Never use @file1 @file2 @file3... bulk loading patterns in verification
```
---
## Context Management for Task Delegation
### CRITICAL: Context Overflow Prevention Rules
**NEVER Generate These Patterns:**
- ❌ `@file1 @file2 @file3 @file4 @file5...` (bulk file loading)
- ❌ `Analyze all files simultaneously`
- ❌ `Load entire codebase for analysis`
**ALWAYS Use These Patterns:**
- ✅ `Task tool to analyze: [3-5 specific files max]`
- ✅ `Progressive analysis through Task boundaries`
- ✅ `Fresh context for each analysis phase`
### File Selection Strategy (Maximum 5 Files Per Task)
**Security Analysis Priority Files (3-5 max):**
```
Task tool to analyze:
- Main coordinator file (entry point security)
- Primary validation/security modules (2-3 files)
- Key protocol handlers (1-2 files)
```
**Performance Analysis Priority Files (3-5 max):**
```
Task tool to analyze:
- Core algorithm modules (complexity focus)
- I/O intensive modules (async/caching focus)
- Memory management modules (lifecycle focus)
```
**Quality Analysis Priority Files (3-5 max):**
```
Task tool to analyze:
- Representative implementation patterns (2-3 files)
- Module interfaces and boundaries (1-2 files)
```
### Context Budget Allocation for Task Delegation
```
Total Context Limit per Task: ~200k tokens
- Task Instructions: ~10k tokens (focused, domain-specific)
- File Analysis: ~40k tokens (3-5 files maximum)
- Analysis Output: ~20k tokens (specialized findings)
- Buffer/Overhead: ~10k tokens
Total per Task: ~80k tokens (safe task execution)
Context Efficiency:
- 3 Task operations: 3 × 80k = 240k total analysis capacity
- Fresh context per Task prevents overflow accumulation
- Progressive analysis maintains depth while respecting limits
CRITICAL: Never exceed 5 files per Task operation
```
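A small sanity-check helper for this budget, as a sketch; the token figures are the estimates above, not measured values:
```python
MAX_FILES_PER_TASK = 5
TASK_BUDGET_TOKENS = {"instructions": 10_000, "file_analysis": 40_000,
                      "output": 20_000, "buffer": 10_000}

def validate_task_files(files: list[str]) -> None:
    """Refuse any Task operation that exceeds the 5-file ceiling."""
    if len(files) > MAX_FILES_PER_TASK:
        raise ValueError(f"{len(files)} files requested; "
                         f"maximum is {MAX_FILES_PER_TASK} per Task")

# ~80k tokens per Task, so three Tasks stay near 240k of total analysis capacity.
assert sum(TASK_BUDGET_TOKENS.values()) == 80_000
```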
---
## Synthesis Strategies for Parallel Findings
### Multi-Stream Consolidation
**Synthesis Phase Structure:**
```markdown
### PHASE: SYNTHESIS & INTEGRATION
**TodoWrite**: Mark all parallel groups completed, "synthesis_integration" as in_progress
**Consolidation Process:**
1. **Cross-Reference Security Findings**: Integrate auth + tools + protocol findings
2. **Performance Bottleneck Mapping**: Combine complexity + I/O + memory analysis
3. **Quality Pattern Recognition**: Merge code patterns + architecture findings
4. **Cross-Domain Issue Identification**: Find issues spanning multiple domains
5. **Priority Matrix Generation**: Impact vs Effort analysis across all findings
6. **Implementation Roadmap**: Coordinate fixes across security, performance, quality
**Integration Requirements:**
- Resolve contradictions between parallel streams
- Identify reinforcing patterns across domains
- Prioritize fixes that address multiple concerns
- Create coherent implementation sequence
```
### Conflict Resolution Framework
**Handling Parallel Finding Conflicts:**
```markdown
1. **Evidence Strength Assessment**: Which subagent provided stronger supporting evidence?
2. **Domain Expertise Weight**: Security findings take precedence for security conflicts
3. **Context Verification**: Re-examine conflicting code sections for accuracy
4. **Synthesis Decision**: Document resolution rationale and confidence level
```
---
## Quality Gates for Parallel Execution
### Completion Verification Checklist
**Before Synthesis Phase:**
- [ ] All security subagents completed with specific file:line references
- [ ] All performance subagents completed with measurable impact assessments
- [ ] All quality subagents completed with concrete refactoring examples
- [ ] No parallel streams terminated due to context overflow
- [ ] All findings include actionable recommendations
**Synthesis Quality Gates:**
- [ ] Cross-domain conflicts identified and resolved
- [ ] Priority matrix spans all parallel finding categories
- [ ] Implementation roadmap coordinates across all domains
- [ ] No critical findings lost during consolidation
- [ ] Final recommendations maintain parallel analysis depth
### Success Metrics
**Parallel Execution Effectiveness:**
- **Speed Improvement**: Target 50-70% reduction in total analysis time
- **Coverage Enhancement**: More detailed analysis per domain through specialization
- **Context Efficiency**: No subagent context overflow, optimal token utilization
- **Quality Maintenance**: Same or higher finding accuracy vs sequential analysis
- **Actionability**: All recommendations include specific file:line references and metrics
---
## Conversion Application Instructions
### How to Apply This Framework
**Step 1: Analyze Original Prompt**
- Identify context overflow patterns (massive instructions, upfront file loading)
- Map existing workflow phases and dependencies
- Estimate potential for parallelization (independent analysis domains)
**Step 2: Decompose Into Parallel Tasks**
- Break monolithic analysis into 2-4 specialized domains
- Create TodoWrite JSON with parallel groups and dependencies
- Design specialized subagent prompts for each domain
**Step 3: Implement Context Management**
- Distribute files strategically across subagents
- Ensure no overlap or gaps in analysis coverage
- Validate context budget allocation per subagent
**Step 4: Design Synthesis Strategy**
- Plan consolidation approach for parallel findings
- Create conflict resolution procedures
- Define quality gates and completion verification
**Step 5: Test and Optimize**
- Execute parallel workflow and measure performance
- Identify bottlenecks and optimization opportunities
- Refine subagent specialization and coordination
### Template Application Examples
**For Code Review Prompts:**
- Security, Performance, Quality, Architecture subagents
- File-based distribution for large codebases
- Cross-cutting concern analysis for comprehensive coverage
**For Analysis Prompts:**
- Domain expertise specialization (legal, technical, business)
- Document section parallelization
- Multi-perspective validation streams
**For Research Prompts:**
- Topic area specialization
- Source type parallelization (academic, industry, news)
- Validation methodology streams
---
## CONVERSION WORKFLOW EXECUTION
Now, apply this framework to convert the original slash command file provided in $ARGUMENTS:
### TodoWrite Task: Conversion Process
```json
[
{"id": "read_original", "content": "Read and analyze original slash command from $ARGUMENT", "status": "pending", "priority": "high"},
{"id": "identify_patterns", "content": "Identify context overload patterns and conversion opportunities", "status": "pending", "priority": "high"},
{"id": "decompose_tasks", "content": "Decompose workflow into parallel TodoWrite tasks", "status": "pending", "priority": "high"},
{"id": "design_subagents", "content": "Design specialized subagent prompts for parallel execution", "status": "pending", "priority": "high"},
{"id": "generate_conversion", "content": "Generate optimized slash command with TodoWrite framework", "status": "pending", "priority": "high"},
{"id": "validate_output", "content": "Validate converted prompt for context efficiency and completeness", "status": "pending", "priority": "high"},
{"id": "overwrite_original", "content": "Overwrite original file with converted optimized version", "status": "pending", "priority": "high"}
]
```
### Execution Instructions
**Mark "read_original" as in_progress and begin analysis of $ARGUMENT**
1. **Read the original file** and identify:
- Total line count and instruction complexity
- File loading patterns (@filename usage)
- Sequential vs parallel execution opportunities
- Context overflow risk factors
2. **Apply the conversion framework** systematically:
- Break complex workflows into discrete tasks
- Design parallel subagent execution strategies
- Implement context management techniques
- Create TodoWrite task structure
3. **Generate the optimized version** with:
- Efficient TodoWrite task JSON
- Parallel subagent delegation instructions
- Context-aware file selection strategies
- Quality gates and verification procedures
4. **Overwrite the original file** (mark "validate_output" completed, "overwrite_original" as in_progress):
- Use Write tool to overwrite $ARGUMENTS with the converted slash command
- Ensure the optimized version maintains the same analytical depth while avoiding context limits
- Include proper error handling and validation before overwriting
5. **Confirm completion** (mark "overwrite_original" completed):
- Display confirmation message: "✅ Original file updated with optimized TodoWrite version"
- Verify all 7 conversion tasks completed successfully
---
## CRITICAL SUCCESS PATTERNS FOR CONVERTED PROMPTS
### Context Overflow Prevention Framework
**The conversion tool MUST generate these patterns to prevent context overflow:**
1. **Task Delegation Instructions**:
```markdown
### Phase 1: Security Analysis
**TodoWrite**: Mark "security_analysis" as in_progress
**Task Delegation**: Use Task tool with focused analysis:
Task Description: "Security Analysis of Target Codebase"
Task Prompt: "Analyze security focusing on [specific areas]
**CONTEXT MANAGEMENT**: Analyze only 3-5 key files:
- [File 1] (specific purpose)
- [File 2-3] (specific modules)
- [File 4-5] (specific handlers)
Provide findings with file:line references."
```
2. **Verification Using Task Tool**:
```markdown
### GEMINI VERIFICATION (Task-Based)
**Task Delegation**: Use Task tool for verification:
Task Description: "Gemini Verification of Analysis Report"
Task Prompt: "Verify analysis accuracy using progressive examination
**PROGRESSIVE VERIFICATION**:
- Verify findings through targeted code review
- Cross-reference specific sections progressively
Report file: {report_file_path}"
```
3. **Explicit Context Rules**:
```markdown
**CONTEXT MANAGEMENT RULES**:
- Maximum 5 files per Task operation
- Use Task tool for all analysis phases
- Progressive analysis through Task boundaries
- Fresh context for each Task operation
**AVOID**: @file1 @file2 @file3... bulk loading patterns
**USE**: Task delegation with strategic file selection
```
### Success Validation Checklist
**Converted prompts MUST include:**
- [ ] Task delegation instructions for each analysis phase
- [ ] Maximum 5 files per Task operation
- [ ] Progressive verification using Task tool
- [ ] Explicit context management warnings
- [ ] No bulk @filename loading patterns
- [ ] Fresh context strategy through Task boundaries
This framework transforms any complex, context-heavy prompt into an efficient TodoWrite tasklist method that avoids context overflow while maintaining analytical depth and coverage, and automatically updates the original file with the optimized version.


@@ -0,0 +1 @@
Can you update CLAUDE.md and the memory bank files?


@@ -0,0 +1,52 @@
Please run the `ccusage daily` command and then provide a structured markdown summary of the Claude Code usage costs and statistics.
## Required Actions:
1. Execute `ccusage daily` using the Bash tool
2. Parse the output to extract key metrics and statistics
3. Generate a comprehensive markdown report
## Report Format Required:
### Executive Summary
- Total cost for the period
- Date range covered
- Number of usage days
- Average daily cost
- Peak usage day and cost
- Cache efficiency percentage (one way to compute this is sketched after this list)
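If a formula is needed for the cache-efficiency figure, one reasonable interpretation (an assumption here, not something this prompt defines) is cache-read tokens as a share of all input-side tokens:
```python
def cache_efficiency(input_tokens: int, cache_read_tokens: int) -> float:
    """Percentage of input-side tokens that were served from cache."""
    total = input_tokens + cache_read_tokens
    return 100.0 * cache_read_tokens / total if total else 0.0

# e.g. 2M fresh input tokens and 18M cache-read tokens -> 90.0
print(round(cache_efficiency(2_000_000, 18_000_000), 1))
```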
### Key Statistics Table
A markdown table with:
- Total Tokens
- Input Tokens
- Output Tokens
- Cache Read Tokens
- Total Cost
- Average Daily Cost
### Daily Cost Summary Table
A compact markdown table showing:
- Date (in MM-DD format)
- Model used
- Input tokens
- Output tokens
- Cache read tokens
- Total cost for that day
Limit the table to the 15-20 highest-cost days to keep it manageable.
### Usage Insights
Provide analysis including:
- Number of high usage days (over $20)
- Cache effectiveness assessment
- Token distribution breakdown
- Primary model identification
- Usage patterns and trends
### Recommendations
Based on the data, provide:
- Cost management insights
- Cache optimization observations
- Usage pattern analysis
Please format everything in clean, readable markdown with proper tables and bullet points. Focus on actionable insights and clear presentation of the cost and usage data.


@@ -0,0 +1,274 @@
# Memory Bank Context Optimization
You are a memory bank optimization specialist tasked with reducing token usage in the project's documentation system while maintaining all essential information and improving organization.
## Task Overview
Analyze the project's memory bank files (CLAUDE-*.md, CLAUDE.md, README.md) to identify and eliminate token waste through:
1. **Duplicate content removal**
2. **Obsolete file elimination**
3. **Content consolidation**
4. **Archive strategy implementation**
5. **Essential content optimization**
## Analysis Phase
### 1. Initial Assessment
```bash
# Get comprehensive file size analysis
find . -name "CLAUDE-*.md" -exec wc -c {} \; | sort -nr
wc -c CLAUDE.md README.md
```
**Examine for:**
- Files marked as "REMOVED" or "DEPRECATED"
- Generated content that's no longer current (reviews, temporary files)
- Multiple files covering the same topic area
- Verbose documentation that could be streamlined
### 2. Identify Optimization Opportunities
**High-Impact Targets (prioritize first):**
- Files >20KB that contain duplicate information
- Files explicitly marked as obsolete/removed
- Generated reviews or temporary documentation
- Verbose setup/architecture descriptions in CLAUDE.md
**Medium-Impact Targets:**
- Files 10-20KB with overlapping content
- Historic documentation for resolved issues
- Detailed implementation docs that could be consolidated
**Low-Impact Targets:**
- Files <10KB with minor optimization potential
- Content that could be streamlined but is unique
## Optimization Strategy
### Phase 1: Remove Obsolete Content (Highest Impact)
**Target:** Files marked as removed, deprecated, or clearly obsolete
**Actions:**
1. Delete files marked as "REMOVED" or "DEPRECATED"
2. Remove generated reviews/reports that are outdated
3. Clean up empty or minimal temporary files
4. Update CLAUDE.md references to removed files
**Expected Savings:** 30-50KB typically
### Phase 2: Consolidate Overlapping Documentation (High Impact)
**Target:** Multiple files covering the same functional area
**Common Consolidation Opportunities:**
- **Security files:** Combine security-fixes, security-optimization, security-hardening into one comprehensive file
- **Performance files:** Merge performance-optimization and test-suite documentation
- **Architecture files:** Consolidate detailed architecture descriptions
- **Testing files:** Combine multiple test documentation files
**Actions:**
1. Create consolidated files with comprehensive coverage
2. Ensure all essential information is preserved
3. Remove the separate files
4. Update all references in CLAUDE.md
**Expected Savings:** 20-40KB typically
### Phase 3: Streamline CLAUDE.md (Medium Impact)
**Target:** Remove verbose content that duplicates memory bank files
**Actions:**
1. Replace detailed descriptions with concise summaries
2. Remove redundant architecture explanations
3. Focus on essential guidance and references
4. Eliminate duplicate setup instructions
**Expected Savings:** 5-10KB typically
### Phase 4: Archive Strategy (Medium Impact)
**Target:** Historic documentation that's resolved but worth preserving
**Actions:**
1. Create `archive/` directory
2. Move resolved issue documentation to archive
3. Add archive README.md with index
4. Update CLAUDE.md with archive reference
5. Preserve discoverability while reducing active memory
**Expected Savings:** 10-20KB typically
## Consolidation Guidelines
### Creating Comprehensive Files
**Security Consolidation Pattern:**
```markdown
# CLAUDE-security-comprehensive.md
**Status**: ✅ COMPLETE - All Security Implementations
**Coverage**: [List of consolidated topics]
## Executive Summary
[High-level overview of all security work]
## [Topic 1] - [Original File 1 Content]
[Essential information from first file]
## [Topic 2] - [Original File 2 Content]
[Essential information from second file]
## [Topic 3] - [Original File 3 Content]
[Essential information from third file]
## Consolidated [Cross-cutting Concerns]
[Information that appeared in multiple files]
```
**Quality Standards:**
- Maintain all essential technical information
- Preserve implementation details and examples
- Keep configuration examples and code snippets
- Include all important troubleshooting information
- Maintain proper status tracking and dates
### File Naming Convention
- Use `-comprehensive` suffix for consolidated files
- Use descriptive names that indicate complete coverage
- Update CLAUDE.md with single reference per topic area
## Implementation Process
### 1. Plan and Validate
```bash
# Create todo list for tracking
# Use TodoWrite to record the optimization phases and specific target files
```
### 2. Execute by Priority
- Start with highest-impact targets (obsolete files)
- Move to consolidation opportunities
- Optimize main documentation
- Implement archival strategy
### 3. Update References
- Update CLAUDE.md memory bank file list
- Remove references to deleted files
- Add references to new consolidated files
- Update archive references
### 4. Validate Results
```bash
# Calculate savings achieved
find . -name "CLAUDE-*.md" -not -path "*/archive/*" -exec wc -c {} \; | awk '{sum+=$1} END {print sum}'
```
## Expected Outcomes
### Typical Optimization Results
- **15-25% total token reduction** in memory bank
- **Improved organization** with focused, comprehensive files
- **Maintained information quality** with no essential loss
- **Better maintainability** through reduced duplication
- **Preserved history** via organized archival
### Success Metrics
- Total KB/token savings achieved
- Number of files consolidated
- Percentage reduction in memory bank size
- Maintenance of all essential information
## Quality Assurance
### Information Preservation Checklist
- [ ] All technical implementation details preserved
- [ ] Configuration examples and code snippets maintained
- [ ] Troubleshooting information retained
- [ ] Status tracking and timeline information kept
- [ ] Cross-references and dependencies documented
### Organization Improvement Checklist
- [ ] Related information grouped logically
- [ ] Clear file naming and purpose
- [ ] Updated CLAUDE.md references
- [ ] Archive strategy implemented
- [ ] Discoverability maintained
## Post-Optimization Maintenance
### Regular Optimization Schedule
- **Monthly**: Check for new obsolete files
- **Quarterly**: Review for new consolidation opportunities
- **Semi-annually**: Comprehensive optimization review
- **As-needed**: After major implementation phases
### Warning Signs for Re-optimization
- Memory bank files exceeding previous optimized size
- Multiple new files covering same topic areas
- Files marked as removed/deprecated but still present
- User feedback about context window limitations
## Documentation Standards
### Consolidated File Format
```markdown
# CLAUDE-[topic]-comprehensive.md
**Last Updated**: [Date]
**Status**: ✅ [Status Description]
**Coverage**: [What this file consolidates]
## Executive Summary
[Overview of complete topic coverage]
## [Major Section 1]
[Comprehensive coverage of subtopic]
## [Major Section 2]
[Comprehensive coverage of subtopic]
## [Cross-cutting Concerns]
[Information spanning multiple original files]
```
### Archive File Format
```markdown
# archive/README.md
## Archived Files
### [Category]
- **filename.md** - [Description] (resolved/historic)
## Usage
Reference when investigating similar issues or understanding implementation history.
```
This systematic approach ensures consistent, effective memory bank optimization while preserving all essential information and improving overall organization.

README.md

@@ -0,0 +1,154 @@
# Claude Code settings
> Configure Claude Code with global and project-level settings, and environment variables.
Claude Code offers a variety of settings to configure its behavior to meet your needs. You can configure Claude Code by running the `/config` command when using the interactive REPL.
## Settings files
The `settings.json` file is our official mechanism for configuring Claude
Code through hierarchical settings:
* **User settings** are defined in `~/.claude/settings.json` and apply to all
projects.
* **Project settings** are saved in your project directory:
* `.claude/settings.json` for settings that are checked into source control and shared with your team
* `.claude/settings.local.json` for settings that are not checked in, useful for personal preferences and experimentation. Claude Code will configure git to ignore `.claude/settings.local.json` when it is created.
### Available settings
`settings.json` supports a number of options:
| Key | Description | Example |
| :-------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------ |
| `apiKeyHelper` | Custom script, to be executed in `/bin/sh`, to generate an auth value. This value will generally be sent as `X-Api-Key`, `Authorization: Bearer`, and `Proxy-Authorization: Bearer` headers for model requests | `/bin/generate_temp_api_key.sh` |
| `cleanupPeriodDays` | How long to locally retain chat transcripts (default: 30 days) | `20` |
| `env` | Environment variables that will be applied to every session | `{"FOO": "bar"}` |
| `includeCoAuthoredBy` | Whether to include the `co-authored-by Claude` byline in git commits and pull requests (default: `true`) | `false` |
| `permissions` | See table below for structure of permissions. | |
### Permission settings
| Keys | Description | Example |
| :----------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------- |
| `allow` | Array of [permission rules](/en/docs/claude-code/iam#configuring-permissions) to allow tool use | `[ "Bash(git diff:*)" ]` |
| `deny` | Array of [permission rules](/en/docs/claude-code/iam#configuring-permissions) to deny tool use | `[ "WebFetch", "Bash(curl:*)" ]` |
| `additionalDirectories` | Additional [working directories](iam#working-directories) that Claude has access to | `[ "../docs/" ]` |
| `defaultMode` | Default [permission mode](iam#permission-modes) when opening Claude Code | `"acceptEdits"` |
| `disableBypassPermissionsMode` | Set to `"disable"` to prevent `bypassPermissions` mode from being activated. See [managed policy settings](iam#enterprise-managed-policy-settings) | `"disable"` |
### Settings precedence
Settings are applied in order of precedence (an example follows this list):
1. Enterprise policies (see [IAM documentation](/en/docs/claude-code/iam#enterprise-managed-policy-settings))
2. Command line arguments
3. Local project settings
4. Shared project settings
5. User settings
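As a brief illustration (hypothetical values): if `~/.claude/settings.json` sets `"includeCoAuthoredBy": true` but the project's `.claude/settings.local.json` contains the snippet below, the effective value is `false`, because local project settings outrank both shared project and user settings:
```json
{
  "includeCoAuthoredBy": false
}
```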
## Environment variables
Claude Code supports the following environment variables to control its behavior:
<Note>
All environment variables can also be configured in [`settings.json`](#available-settings). This is useful as a way to automatically set environment variables for each session, or to roll out a set of environment variables for your whole team or organization.
</Note>
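For example, the `settings.local.json` later in this commit uses exactly this mechanism to raise the MCP output limit for every session:
```json
{
  "env": {
    "MAX_MCP_OUTPUT_TOKENS": "120000"
  }
}
```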
| Variable | Purpose |
| :----------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------- |
| `ANTHROPIC_API_KEY` | API key sent as `X-Api-Key` header, typically for the Claude SDK (for interactive usage, run `/login`) |
| `ANTHROPIC_AUTH_TOKEN` | Custom value for the `Authorization` and `Proxy-Authorization` headers (the value you set here will be prefixed with `Bearer `) |
| `ANTHROPIC_CUSTOM_HEADERS` | Custom headers you want to add to the request (in `Name: Value` format) |
| `ANTHROPIC_MODEL` | Name of custom model to use (see [Model Configuration](/en/docs/claude-code/bedrock-vertex-proxies#model-configuration)) |
| `ANTHROPIC_SMALL_FAST_MODEL` | Name of [Haiku-class model for background tasks](/en/docs/claude-code/costs) |
| `ANTHROPIC_SMALL_FAST_MODEL_AWS_REGION` | Override AWS region for the small/fast model when using Bedrock |
| `BASH_DEFAULT_TIMEOUT_MS` | Default timeout for long-running bash commands |
| `BASH_MAX_TIMEOUT_MS` | Maximum timeout the model can set for long-running bash commands |
| `BASH_MAX_OUTPUT_LENGTH` | Maximum number of characters in bash outputs before they are middle-truncated |
| `CLAUDE_BASH_MAINTAIN_PROJECT_WORKING_DIR` | Return to the original working directory after each Bash command |
| `CLAUDE_CODE_API_KEY_HELPER_TTL_MS` | Interval in milliseconds at which credentials should be refreshed (when using `apiKeyHelper`) |
| `CLAUDE_CODE_IDE_SKIP_AUTO_INSTALL` | Skip auto-installation of IDE extensions (defaults to false) |
| `CLAUDE_CODE_MAX_OUTPUT_TOKENS` | Set the maximum number of output tokens for most requests |
| `CLAUDE_CODE_USE_BEDROCK` | Use [Bedrock](/en/docs/claude-code/amazon-bedrock) |
| `CLAUDE_CODE_USE_VERTEX` | Use [Vertex](/en/docs/claude-code/google-vertex-ai) |
| `CLAUDE_CODE_SKIP_BEDROCK_AUTH` | Skip AWS authentication for Bedrock (e.g. when using an LLM gateway) |
| `CLAUDE_CODE_SKIP_VERTEX_AUTH` | Skip Google authentication for Vertex (e.g. when using an LLM gateway) |
| `CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC` | Equivalent of setting `DISABLE_AUTOUPDATER`, `DISABLE_BUG_COMMAND`, `DISABLE_ERROR_REPORTING`, and `DISABLE_TELEMETRY` |
| `DISABLE_AUTOUPDATER` | Set to `1` to disable automatic updates. This takes precedence over the `autoUpdates` configuration setting. |
| `DISABLE_BUG_COMMAND` | Set to `1` to disable the `/bug` command |
| `DISABLE_COST_WARNINGS` | Set to `1` to disable cost warning messages |
| `DISABLE_ERROR_REPORTING` | Set to `1` to opt out of Sentry error reporting |
| `DISABLE_NON_ESSENTIAL_MODEL_CALLS` | Set to `1` to disable model calls for non-critical paths like flavor text |
| `DISABLE_TELEMETRY` | Set to `1` to opt out of Statsig telemetry (note that Statsig events do not include user data like code, file paths, or bash commands) |
| `HTTP_PROXY` | Specify HTTP proxy server for network connections |
| `HTTPS_PROXY` | Specify HTTPS proxy server for network connections |
| `MAX_THINKING_TOKENS` | Force a thinking budget for the model |
| `MCP_TIMEOUT` | Timeout in milliseconds for MCP server startup |
| `MCP_TOOL_TIMEOUT` | Timeout in milliseconds for MCP tool execution |
| `MAX_MCP_OUTPUT_TOKENS` | Maximum number of tokens allowed in MCP tool responses (default: 25000) |
| `VERTEX_REGION_CLAUDE_3_5_HAIKU` | Override region for Claude 3.5 Haiku when using Vertex AI |
| `VERTEX_REGION_CLAUDE_3_5_SONNET` | Override region for Claude 3.5 Sonnet when using Vertex AI |
| `VERTEX_REGION_CLAUDE_3_7_SONNET` | Override region for Claude 3.7 Sonnet when using Vertex AI |
| `VERTEX_REGION_CLAUDE_4_0_OPUS` | Override region for Claude 4.0 Opus when using Vertex AI |
| `VERTEX_REGION_CLAUDE_4_0_SONNET` | Override region for Claude 4.0 Sonnet when using Vertex AI |
## Configuration options
We are in the process of migrating global configuration to `settings.json`.
`claude config` will be deprecated in favor of [settings.json](#settings-files).
To manage your configurations, use the following commands:
* List settings: `claude config list`
* See a setting: `claude config get <key>`
* Change a setting: `claude config set <key> <value>`
* Push to a setting (for lists): `claude config add <key> <value>`
* Remove from a setting (for lists): `claude config remove <key> <value>`
By default `config` changes your project configuration. To manage your global configuration, use the `--global` (or `-g`) flag.
### Global configuration
To set a global configuration, use `claude config set -g <key> <value>`:
| Key | Description | Example |
| :---------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------- |
| `autoUpdates` | Whether to enable automatic updates (default: `true`). When enabled, Claude Code automatically downloads and installs updates in the background. Updates are applied when you restart Claude Code. | `false` |
| `preferredNotifChannel` | Where you want to receive notifications (default: `iterm2`) | `iterm2`, `iterm2_with_bell`, `terminal_bell`, or `notifications_disabled` |
| `theme` | Color theme | `dark`, `light`, `light-daltonized`, or `dark-daltonized` |
| `verbose` | Whether to show full bash and command outputs (default: `false`) | `true` |
## Tools available to Claude
Claude Code has access to a set of powerful tools that help it understand and modify your codebase:
| Tool | Description | Permission Required |
| :--------------- | :--------------------------------------------------- | :------------------ |
| **Agent** | Runs a sub-agent to handle complex, multi-step tasks | No |
| **Bash** | Executes shell commands in your environment | Yes |
| **Edit** | Makes targeted edits to specific files | Yes |
| **Glob** | Finds files based on pattern matching | No |
| **Grep** | Searches for patterns in file contents | No |
| **LS** | Lists files and directories | No |
| **MultiEdit** | Performs multiple edits on a single file atomically | Yes |
| **NotebookEdit** | Modifies Jupyter notebook cells | Yes |
| **NotebookRead** | Reads and displays Jupyter notebook contents | No |
| **Read** | Reads the contents of files | No |
| **TodoRead** | Reads the current session's task list | No |
| **TodoWrite** | Creates and manages structured task lists | No |
| **WebFetch** | Fetches content from a specified URL | Yes |
| **WebSearch** | Performs web searches with domain filtering | Yes |
| **Write** | Creates or overwrites files | Yes |
Permission rules can be configured using `/allowed-tools` or in [permission settings](/en/docs/claude-code/settings#available-settings).
### Extending tools with hooks
You can run custom commands before or after any tool executes using
[Claude Code hooks](/en/docs/claude-code/hooks).
For example, you could automatically run a Python formatter after Claude
modifies Python files, or prevent modifications to production configuration
files by blocking Write operations to certain paths.
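A sketch of the formatter case, modeled on the shape of the `Stop` hook in this repository's `settings.json` below; the `PostToolUse` event name and `Edit|Write` matcher are assumptions to verify against the hooks documentation:
```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "black . --quiet"
          }
        ]
      }
    ]
  }
}
```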

settings.json

@@ -0,0 +1,16 @@
{
  "model": "sonnet",
  "hooks": {
    "Stop": [
      {
        "matcher": "*",
        "hooks": [
          {
            "type": "command",
            "command": "terminal-notifier -title 'Claude Code' -subtitle 'Session Complete' -message \"Finished working in $(basename \"$PWD\")\" -sound default -timeout 10"
          }
        ]
      }
    ]
  }
}

settings.local.json

@@ -0,0 +1,64 @@
{
  "env": {
    "MAX_MCP_OUTPUT_TOKENS": "120000"
  },
  "includeCoAuthoredBy": false,
  "permissions": {
    "allow": [
      "Bash(.venv/bin/pip:*)",
      "Bash(.venv/bin/python:*)",
      "Bash(awk:*)",
      "Bash(cat:*)",
      "Bash(chmod:*)",
      "Bash(claude config get)",
      "Bash(claude mcp:*)",
      "Bash(cp:*)",
      "Bash(curl:*)",
      "Bash(echo:*)",
      "Bash(env)",
      "Bash(find:*)",
      "Bash(gemini:*)",
      "Bash(grep:*)",
      "Bash(gtimeout:*)",
      "Bash(ls:*)",
      "Bash(mcp:*)",
      "Bash(mkdir:*)",
      "Bash(mv:*)",
      "Bash(pip install:*)",
      "Bash(python:*)",
      "Bash(rg:*)",
      "Bash(rm:*)",
      "Bash(sed:*)",
      "Bash(source:*)",
      "Bash(timeout:*)",
      "Bash(tree:*)",
      "Bash(true)",
      "Bash(uv pip install:*)",
      "Bash(uv run:*)",
      "Bash(uv venv:*)",
      "mcp__cf-docs__search_cloudflare_documentation",
      "mcp__context7__get-library-docs",
      "mcp__context7__resolve-library-id",
      "mcp__gemini-cli__gemini_ai_collaboration",
      "mcp__gemini-cli__gemini_cli",
      "mcp__gemini-cli__gemini_help",
      "mcp__gemini-cli__gemini_metrics",
      "mcp__gemini-cli__gemini_models",
      "mcp__gemini-cli__gemini_openrouter_models",
      "mcp__gemini-cli__gemini_openrouter_opinion",
      "mcp__gemini-cli__gemini_openrouter_usage_stats",
      "mcp__gemini-cli__gemini_prompt",
      "mcp__gemini-cli__gemini_review_code",
      "mcp__gemini-cli__gemini_summarize",
      "mcp__gemini-cli__gemini_summarize_files",
      "mcp__gemini-cli__gemini_verify_solution",
      "mcp__gemini-cli__gemini_version",
      "mcp__ide__getDiagnostics",
      "WebFetch(domain:docs.anthropic.com)",
      "WebFetch(domain:github.com)",
      "WebFetch(domain:openrouter.ai)",
      "WebFetch(domain:www.comet.com)"
    ],
    "deny": []
  }
}