PM Agent Context
- Project: SuperClaude_Framework
- Type: AI Agent Framework
- Tech Stack: Claude Code, MCP Servers, Markdown-based configuration
- Current Focus: Token-efficient architecture with progressive context loading
Project Overview
SuperClaude is a comprehensive framework for Claude Code that provides:
- Persona-based specialized agents (frontend, backend, security, etc.)
- MCP server integrations (Context7, Magic, Morphllm, Sequential, etc.)
- Slash command system for workflow automation
- Self-improvement workflow with PDCA cycle
- NEW: Token-optimized PM Agent with progressive loading
Architecture
- plugins/superclaude/agents/ - Agent persona definitions
- plugins/superclaude/commands/ - Slash command definitions (pm.md: token-efficient redesign)
- docs/ - Documentation and patterns
- docs/memory/ - PM Agent session state (local files)
- docs/pdca/ - PDCA cycle documentation per feature
- docs/research/ - Research reports (llm-agent-token-efficiency-2025.md)
Token Efficiency Architecture (2025-10-17 Redesign)
Layer 0: Bootstrap (Always Active)
- Token Cost: 150 tokens (~93% reduction from the old 2,300 tokens)
- Operations: Time awareness + repo detection + session initialization
- Philosophy: User Request First - NO auto-loading before understanding intent
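For illustration, a minimal Python sketch of what Layer 0 could do, assuming a hypothetical `bootstrap_session` helper (the actual bootstrap is defined in plugins/superclaude/commands/pm.md, not in code like this):

```python
# Hypothetical sketch of the Layer 0 bootstrap: the only work done before
# reading the user's request is time awareness, repo detection, and
# session-state initialization -- no context files are loaded yet.
import subprocess
from datetime import datetime, timezone

def bootstrap_session() -> dict:
    """Initialize a PM Agent session with minimal (~150 token) context."""
    # Time awareness
    now = datetime.now(timezone.utc).isoformat()

    # Repo detection (empty string if not inside a git repository)
    try:
        repo_root = subprocess.run(
            ["git", "rev-parse", "--show-toplevel"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        repo_root = ""

    # Session initialization: nothing else is loaded until the user's
    # intent has been classified (User Request First).
    return {"started_at": now, "repo_root": repo_root, "loaded_layers": []}
```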
Intent Classification System
- Ultra-Light (100-500 tokens): "progress", "status", "update" → Layer 1 only
- Light (500-2K tokens): "typo", "rename", "comment" → Layer 2 (target file)
- Medium (2-5K tokens): "bug", "fix", "refactor" → Layer 3 (related files)
- Heavy (5-20K tokens): "feature", "architecture" → Layer 4 (subsystem)
- Ultra-Heavy (20K+ tokens): "redesign", "migration" → Layer 5 (full + research)
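A sketch of keyword-based tier selection in this spirit (illustrative only: in practice Claude classifies intent itself, and these trigger lists are just the examples above):

```python
# Hypothetical keyword-based intent classifier mapping a user request to a
# loading tier. Heavier tiers are checked first so that a request mixing
# trigger words (e.g. "redesign the fix") resolves to the heavier tier.
TIERS = [
    ("ultra-heavy", 5, {"redesign", "migration"}),
    ("heavy", 4, {"feature", "architecture"}),
    ("medium", 3, {"bug", "fix", "refactor"}),
    ("light", 2, {"typo", "rename", "comment"}),
    ("ultra-light", 1, {"progress", "status", "update"}),
]

def classify_intent(request: str) -> tuple[str, int]:
    """Return (tier_name, max_layer) for a user request."""
    words = set(request.lower().split())
    for name, layer, triggers in TIERS:
        if words & triggers:
            return name, layer
    # No trigger matched: default conservatively to a light tier.
    return "light", 2
```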
Progressive Loading (5-Layer Strategy)
- Layer 1: Minimal context (mindbase: 500 tokens | fallback: 800 tokens)
- Layer 2: Target context (500-1K tokens)
- Layer 3: Related context (mindbase: 3-4K | fallback: 4.5K)
- Layer 4: System context (8-12K tokens, user confirmation)
- Layer 5: External research (20-50K tokens, WARNING required)
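A sketch of how confirm-before-heavy layered loading could be wired up (the file-to-layer mapping and the `load_layers` helper are assumptions for illustration, not the framework's actual sources):

```python
# Hypothetical progressive loader: each layer is loaded only when the
# classified intent requires it, and the expensive layers ask first.
from pathlib import Path

LAYER_SOURCES = {
    1: ["docs/memory/pm_context.md"],    # minimal context
    2: [],                               # target file(s), chosen per request
    3: ["docs/memory/reflexion.jsonl"],  # related context / past errors
    4: ["docs/"],                        # subsystem docs (confirm first)
    5: [],                               # external research (warn first)
}

def load_layers(max_layer: int, confirm=input) -> list[str]:
    loaded = []
    for layer in range(1, max_layer + 1):
        if layer >= 4:  # Layers 4-5 are expensive: require explicit approval
            answer = confirm(f"Load layer {layer} (8K-50K tokens)? [y/N] ")
            if answer.strip().lower() != "y":
                break
        loaded.extend(p for p in LAYER_SOURCES[layer] if Path(p).exists())
    return loaded
```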
Workflow Metrics Collection
- File: docs/memory/workflow_metrics.jsonl
- Purpose: Continuous A/B testing for workflow optimization
- Data: task_type, complexity, workflow_id, tokens_used, time_ms, success
- Strategy: ε-greedy (80% best workflow, 20% experimental)
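A sketch of logging one record and ε-greedy selection over workflow_metrics.jsonl (the helper names are hypothetical; the record fields match the list above):

```python
# Hypothetical sketch: append one metrics record, then pick a workflow
# epsilon-greedily (80% exploit the best-known workflow, 20% explore).
import json
import random
from collections import defaultdict
from pathlib import Path

METRICS = Path("docs/memory/workflow_metrics.jsonl")

def log_metric(record: dict) -> None:
    """Append one run record, e.g. {"task_type": "fix", "complexity": "medium",
    "workflow_id": "wf-3", "tokens_used": 4200, "time_ms": 9100, "success": true}."""
    with METRICS.open("a") as f:
        f.write(json.dumps(record) + "\n")

def pick_workflow(candidates: list[str], epsilon: float = 0.2) -> str:
    """epsilon-greedy: usually the candidate with the best success rate."""
    if not METRICS.exists() or random.random() < epsilon:
        return random.choice(candidates)       # explore
    stats = defaultdict(lambda: [0, 0])        # workflow_id -> [successes, runs]
    for line in METRICS.read_text().splitlines():
        r = json.loads(line)
        s = stats[r["workflow_id"]]
        s[0] += bool(r["success"])
        s[1] += 1
    known = [w for w in candidates if stats[w][1] > 0]
    if not known:
        return random.choice(candidates)
    return max(known, key=lambda w: stats[w][0] / stats[w][1])  # exploit
```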
Error Learning & Memory Integration
- ReflexionMemory (built-in): Layer 1: 650 tokens | Layer 3: 3.5-4K tokens
- mindbase (optional): Layer 1: 500 tokens | Layer 3: 3-3.5K tokens (semantic search)
- Profile: mindbase requires the airis-mcp-gateway "recommended" profile
- Savings: 20-35% with ReflexionMemory, additional 10-15% with mindbase enhancement
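A sketch of the always-available fallback path: keyword search over the local reflexion JSONL (the entry fields and `search_reflexions` helper here are illustrative; the semantic-search path via the optional mindbase MCP server is selected by Claude itself when available and is not shown):

```python
# Hypothetical keyword search over docs/memory/reflexion.jsonl, the
# built-in ReflexionMemory store. Entries sharing the most query terms
# are returned first.
import json
from pathlib import Path

REFLEXIONS = Path("docs/memory/reflexion.jsonl")

def search_reflexions(query: str, limit: int = 3) -> list[dict]:
    """Return past error/lesson entries sharing the most keywords with query."""
    if not REFLEXIONS.exists():
        return []
    terms = set(query.lower().split())
    scored = []
    for line in REFLEXIONS.read_text().splitlines():
        entry = json.loads(line)  # e.g. {"error": ..., "lesson": ..., "tags": [...]}
        text = " ".join(str(v) for v in entry.values()).lower()
        score = sum(t in text for t in terms)
        if score:
            scored.append((score, entry))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in scored[:limit]]
```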
Active Patterns
- Repository-Scoped Memory: Local file-based memory in docs/memory/
- PDCA Cycle: Plan → Do → Check → Act documentation workflow
- Self-Evaluation Checklists: Replace Serena MCP think_about_* functions
- User Request First: Bootstrap → Wait → Intent → Progressive Load → Execute
- Continuous Optimization: A/B testing via workflow_metrics.jsonl
Recent Changes (2025-10-17)
PM Agent Token Efficiency Redesign
- Removed: Auto-loading 7 files on startup (2,300 tokens wasted)
- Added: Layer 0 Bootstrap (150 tokens) + Intent Classification
- Added: Progressive Loading (5-layer) + Workflow Metrics
- Result:
  - Ultra-Light tasks: 2,300 → 650 tokens (72% reduction)
  - Light tasks: 3,500 → 1,200 tokens (66% reduction)
  - Medium tasks: 7,000 → 4,500 tokens (36% reduction)
Research Integration
- Report: docs/research/llm-agent-token-efficiency-2025.md
- Benchmarks: Trajectory Reduction (99%), AgentDropout (21.6%), Vector DB (90%)
- Source: Anthropic, Microsoft AutoGen v0.4, CrewAI + Mem0, LangChain
Known Issues
None currently.
Last Updated
2025-10-17