Initial commit for Claude Code full guide

Cole Medin 2025-08-05 07:10:34 -05:00
parent 866ff5f5a0
commit 828234dd67
15 changed files with 2249 additions and 1 deletions

View File

@ -16,7 +16,8 @@
"Bash(python:*)", "Bash(python:*)",
"Bash(python -m pytest:*)", "Bash(python -m pytest:*)",
"Bash(python3 -m pytest:*)", "Bash(python3 -m pytest:*)",
"WebFetch(domain:docs.anthropic.com)" "WebFetch(domain:docs.anthropic.com)",
"WebFetch(domain:github.com)"
], ],
"deny": [] "deny": []
} }

View File

@ -0,0 +1,27 @@
# Parallel Task Version Execution
## Variables
FEATURE_NAME: $ARGUMENTS
PLAN_TO_EXECUTE: $ARGUMENTS
NUMBER_OF_PARALLEL_WORKTREES: $ARGUMENTS
## Instructions
We're going to create NUMBER_OF_PARALLEL_WORKTREES new subagents that use the Task tool to create N versions of the same feature in parallel.
Be sure to read PLAN_TO_EXECUTE.
This enables us to build the same feature concurrently so we can test and validate each subagent's changes in isolation and then pick the best one.
The first agent will run in trees/<FEATURE_NAME>-1/
The second agent will run in trees/<FEATURE_NAME>-2/
...
The last agent will run in trees/<FEATURE_NAME>-<NUMBER_OF_PARALLEL_WORKTREES>/
The code in trees/<FEATURE_NAME>-<i>/ will be identical to the code in the current branch. It will be set up and ready for you to build the feature end to end.
Each agent will independently implement the engineering plan detailed in PLAN_TO_EXECUTE in their respective workspace.
When a subagent completes its work, have it report the final changes it made in a comprehensive `RESULTS.md` file at the root of its respective workspace.
Make sure agents don't run any tests or other code - focus on the code changes only.
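For illustration, if FEATURE_NAME were `auth` and NUMBER_OF_PARALLEL_WORKTREES were 3 (hypothetical values), the finished layout would look like:
```
trees/
├── auth-1/   # subagent 1's worktree, containing its RESULTS.md when finished
├── auth-2/   # subagent 2's worktree
└── auth-3/   # subagent 3's worktree
```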

View File

@ -0,0 +1,40 @@
# Execute BASE PRP
Implement a feature using the PRP file.
## PRP File: $ARGUMENTS
## Execution Process
1. **Load PRP**
- Read the specified PRP file
- Understand all context and requirements
- Follow all instructions in the PRP and extend the research if needed
- Ensure you have all needed context to implement the PRP fully
- Do more web searches and codebase exploration as needed
2. **ULTRATHINK**
- Think hard before you execute the plan. Create a comprehensive plan addressing all requirements.
- Break down complex tasks into smaller, manageable steps using your todo tools.
- Use the TodoWrite tool to create and track your implementation plan.
- Identify implementation patterns from existing code to follow.
3. **Execute the plan**
- Execute the PRP
- Implement all the code
4. **Validate**
- Run each validation command
- Fix any failures
- Re-run until all pass
5. **Complete**
- Ensure all checklist items done
- Run final validation suite
- Report completion status
- Read the PRP again to ensure you have implemented everything
6. **Reference the PRP**
- You can always reference the PRP again if needed
Note: If validation fails, use error patterns in PRP to fix and retry.

View File

@ -0,0 +1,14 @@
Please analyze and fix the GitHub issue: $ARGUMENTS.
Follow these steps:
1. Use `gh issue view` to get the issue details
2. Understand the problem described in the issue
3. Search the codebase for relevant files
4. Implement the necessary changes to fix the issue
5. Write and run tests to verify the fix
6. Ensure code passes linting and type checking
7. Create a descriptive commit message
8. Push and create a PR
Remember to use the GitHub CLI (`gh`) for all GitHub-related tasks.
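As a rough sketch, the steps above map to shell commands along these lines (the issue number, branch name, and search term are placeholders for whatever `$ARGUMENTS` and the issue contain):
```bash
gh issue view 123                          # 1. read the issue details
rg "SymbolFromTheIssue"                    # 3. locate relevant files in the codebase
# 4-6. implement the fix, then run your project's tests, linter, and type checker
git checkout -b fix/issue-123
git commit -m "fix: handle edge case described in #123"
git push -u origin fix/issue-123
gh pr create --fill                        # 8. open a PR referencing the issue
```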

View File

@ -0,0 +1,69 @@
# Create PRP
## Feature file: $ARGUMENTS
Generate a complete PRP for general feature implementation with thorough research. Ensure context is passed to the AI agent to enable self-validation and iterative refinement. Read the feature file first to understand what needs to be created, how the examples provided help, and any other considerations.
The AI agent only gets the context you append to the PRP plus its training data. Assume the AI agent has access to the codebase and the same knowledge cutoff as you, so it's important that your research findings are included or referenced in the PRP. The agent has web search capabilities, so pass URLs to documentation and examples.
## Research Process
1. **Codebase Analysis**
- Search for similar features/patterns in the codebase
- Identify files to reference in PRP
- Note existing conventions to follow
- Check test patterns for validation approach
2. **External Research**
- Search for similar features/patterns online
- Library documentation (include specific URLs)
- Implementation examples (GitHub/StackOverflow/blogs)
- Best practices and common pitfalls
3. **User Clarification** (if needed)
- Specific patterns to mirror and where to find them?
- Integration requirements and where to find them?
## PRP Generation
Using PRPs/templates/prp_base.md as template:
### Critical Context to Include and pass to the AI agent as part of the PRP
- **Documentation**: URLs with specific sections
- **Code Examples**: Real snippets from codebase
- **Gotchas**: Library quirks, version issues
- **Patterns**: Existing approaches to follow
### Implementation Blueprint
- Start with pseudocode showing approach
- Reference real files for patterns
- Include error handling strategy
- List the tasks to be completed to fulfill the PRP, in the order they should be completed
### Validation Gates (Must be Executable), e.g. for Python
```bash
# Syntax/Style
ruff check --fix && mypy .
# Unit Tests
uv run pytest tests/ -v
```
*** CRITICAL AFTER YOU ARE DONE RESEARCHING AND EXPLORING THE CODEBASE BEFORE YOU START WRITING THE PRP ***
*** ULTRATHINK ABOUT THE PRP AND PLAN YOUR APPROACH THEN START WRITING THE PRP ***
## Output
Save as: `PRPs/{feature-name}.md`
## Quality Checklist
- [ ] All necessary context included
- [ ] Validation gates are executable by AI
- [ ] References existing patterns
- [ ] Clear implementation path
- [ ] Error handling documented
Score the PRP on a scale of 1-10 (confidence level to succeed in one-pass implementation using Claude Code).
Remember: The goal is one-pass implementation success through comprehensive context.

View File

@ -0,0 +1,14 @@
# Initialize parallel git worktree directories for parallel Claude Code agents
## Variables
FEATURE_NAME: $ARGUMENTS
NUMBER_OF_PARALLEL_WORKTREES: $ARGUMENTS
## Execute these commands
> Execute the loop in parallel with the Batch and Task tool
- create a new dir `trees/`
- for i in NUMBER_OF_PARALLEL_WORKTREES
- RUN `git worktree add -b FEATURE_NAME-i ./trees/FEATURE_NAME-i`
- RUN `cd trees/FEATURE_NAME-i`, `git ls-files` to validate
- RUN `git worktree list` to verify all trees were created properly
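For example, with FEATURE_NAME set to `auth` and NUMBER_OF_PARALLEL_WORKTREES set to 3 (hypothetical values), the loop expands to roughly:
```bash
mkdir -p trees
git worktree add -b auth-1 ./trees/auth-1
git worktree add -b auth-2 ./trees/auth-2
git worktree add -b auth-3 ./trees/auth-3
git -C trees/auth-1 ls-files    # sanity-check each new worktree
git worktree list               # verify all trees were created properly
```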

View File

@ -0,0 +1,27 @@
{
"permissions": {
"allow": [
"Bash(grep:*)",
"Bash(ls:*)",
"Bash(source:*)",
"Bash(find:*)",
"Bash(mv:*)",
"Bash(mkdir:*)",
"Bash(tree:*)",
"Bash(ruff:*)",
"Bash(touch:*)",
"Bash(cat:*)",
"Bash(ruff check:*)",
"Bash(pytest:*)",
"Bash(python:*)",
"Bash(python -m pytest:*)",
"Bash(python3 -m pytest:*)",
"WebFetch(domain:*)",
"Bash(gh issue view:*)",
"mcp__puppeteer__puppeteer_navigate",
"mcp__puppeteer__puppeteer_screenshot",
"mcp__puppeteer__puppeteer_evaluate"
],
"deny": []
}
}

View File

@ -0,0 +1,78 @@
FROM node:20
ARG TZ
ENV TZ="$TZ"
# Install basic development tools and iptables/ipset
RUN apt update && apt install -y less \
git \
procps \
sudo \
fzf \
zsh \
man-db \
unzip \
gnupg2 \
gh \
iptables \
ipset \
iproute2 \
dnsutils \
aggregate \
jq
# Ensure default node user has access to /usr/local/share
RUN mkdir -p /usr/local/share/npm-global && \
chown -R node:node /usr/local/share
ARG USERNAME=node
# Persist bash history.
RUN SNIPPET="export PROMPT_COMMAND='history -a' && export HISTFILE=/commandhistory/.bash_history" \
&& mkdir /commandhistory \
&& touch /commandhistory/.bash_history \
&& chown -R $USERNAME /commandhistory
# Set `DEVCONTAINER` environment variable to help with orientation
ENV DEVCONTAINER=true
# Create workspace and config directories and set permissions
RUN mkdir -p /workspace /home/node/.claude && \
chown -R node:node /workspace /home/node/.claude
WORKDIR /workspace
RUN ARCH=$(dpkg --print-architecture) && \
wget "https://github.com/dandavison/delta/releases/download/0.18.2/git-delta_0.18.2_${ARCH}.deb" && \
sudo dpkg -i "git-delta_0.18.2_${ARCH}.deb" && \
rm "git-delta_0.18.2_${ARCH}.deb"
# Set up non-root user
USER node
# Install global packages
ENV NPM_CONFIG_PREFIX=/usr/local/share/npm-global
ENV PATH=$PATH:/usr/local/share/npm-global/bin
# Set the default shell to zsh rather than sh
ENV SHELL=/bin/zsh
# Default powerline10k theme
RUN sh -c "$(wget -O- https://github.com/deluan/zsh-in-docker/releases/download/v1.2.0/zsh-in-docker.sh)" -- \
-p git \
-p fzf \
-a "source /usr/share/doc/fzf/examples/key-bindings.zsh" \
-a "source /usr/share/doc/fzf/examples/completion.zsh" \
-a "export PROMPT_COMMAND='history -a' && export HISTFILE=/commandhistory/.bash_history" \
-x
# Install Claude
RUN npm install -g @anthropic-ai/claude-code
# Copy and set up firewall script
COPY init-firewall.sh /usr/local/bin/
USER root
RUN chmod +x /usr/local/bin/init-firewall.sh && \
echo "node ALL=(root) NOPASSWD: /usr/local/bin/init-firewall.sh" > /etc/sudoers.d/node-firewall && \
chmod 0440 /etc/sudoers.d/node-firewall
USER node

View File

@ -0,0 +1,52 @@
{
"name": "Claude Code Sandbox",
"build": {
"dockerfile": "Dockerfile",
"args": {
"TZ": "${localEnv:TZ:America/Los_Angeles}"
}
},
"runArgs": [
"--cap-add=NET_ADMIN",
"--cap-add=NET_RAW"
],
"customizations": {
"vscode": {
"extensions": [
"dbaeumer.vscode-eslint",
"esbenp.prettier-vscode",
"eamodio.gitlens"
],
"settings": {
"editor.formatOnSave": true,
"editor.defaultFormatter": "esbenp.prettier-vscode",
"editor.codeActionsOnSave": {
"source.fixAll.eslint": "explicit"
},
"terminal.integrated.defaultProfile.linux": "zsh",
"terminal.integrated.profiles.linux": {
"bash": {
"path": "bash",
"icon": "terminal-bash"
},
"zsh": {
"path": "zsh"
}
}
}
}
},
"remoteUser": "node",
"mounts": [
"source=claude-code-bashhistory,target=/commandhistory,type=volume",
"source=claude-code-config,target=/home/node/.claude,type=volume"
],
"remoteEnv": {
"NODE_OPTIONS": "--max-old-space-size=4096",
"CLAUDE_CONFIG_DIR": "/home/node/.claude",
"POWERLEVEL9K_DISABLE_GITSTATUS": "true"
},
"workspaceMount": "source=${localWorkspaceFolder},target=/workspace,type=bind,consistency=delegated",
"workspaceFolder": "/workspace",
"postCreateCommand": "sudo /usr/local/bin/init-firewall.sh"
}

View File

@ -0,0 +1,118 @@
#!/bin/bash
set -euo pipefail # Exit on error, undefined vars, and pipeline failures
IFS=$'\n\t' # Stricter word splitting
# Flush existing rules and delete existing ipsets
iptables -F
iptables -X
iptables -t nat -F
iptables -t nat -X
iptables -t mangle -F
iptables -t mangle -X
ipset destroy allowed-domains 2>/dev/null || true
# First allow DNS and localhost before any restrictions
# Allow outbound DNS
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
# Allow inbound DNS responses
iptables -A INPUT -p udp --sport 53 -j ACCEPT
# Allow outbound SSH
iptables -A OUTPUT -p tcp --dport 22 -j ACCEPT
# Allow inbound SSH responses
iptables -A INPUT -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT
# Allow localhost
iptables -A INPUT -i lo -j ACCEPT
iptables -A OUTPUT -o lo -j ACCEPT
# Create ipset with CIDR support
ipset create allowed-domains hash:net
# Fetch GitHub meta information and aggregate + add their IP ranges
echo "Fetching GitHub IP ranges..."
gh_ranges=$(curl -s https://api.github.com/meta)
if [ -z "$gh_ranges" ]; then
echo "ERROR: Failed to fetch GitHub IP ranges"
exit 1
fi
if ! echo "$gh_ranges" | jq -e '.web and .api and .git' >/dev/null; then
echo "ERROR: GitHub API response missing required fields"
exit 1
fi
echo "Processing GitHub IPs..."
while read -r cidr; do
if [[ ! "$cidr" =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/[0-9]{1,2}$ ]]; then
echo "ERROR: Invalid CIDR range from GitHub meta: $cidr"
exit 1
fi
echo "Adding GitHub range $cidr"
ipset add allowed-domains "$cidr"
done < <(echo "$gh_ranges" | jq -r '(.web + .api + .git)[]' | aggregate -q)
# Resolve and add other allowed domains
for domain in \
"registry.npmjs.org" \
"api.anthropic.com" \
"sentry.io" \
"statsig.anthropic.com" \
"statsig.com"; do
echo "Resolving $domain..."
ips=$(dig +short A "$domain")
if [ -z "$ips" ]; then
echo "ERROR: Failed to resolve $domain"
exit 1
fi
while read -r ip; do
if [[ ! "$ip" =~ ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$ ]]; then
echo "ERROR: Invalid IP from DNS for $domain: $ip"
exit 1
fi
echo "Adding $ip for $domain"
ipset add allowed-domains "$ip"
done < <(echo "$ips")
done
# Get host IP from default route
HOST_IP=$(ip route | grep default | cut -d" " -f3)
if [ -z "$HOST_IP" ]; then
echo "ERROR: Failed to detect host IP"
exit 1
fi
HOST_NETWORK=$(echo "$HOST_IP" | sed "s/\.[0-9]*$/.0\/24/")
echo "Host network detected as: $HOST_NETWORK"
# Set up remaining iptables rules
iptables -A INPUT -s "$HOST_NETWORK" -j ACCEPT
iptables -A OUTPUT -d "$HOST_NETWORK" -j ACCEPT
# Set default policies to DROP first
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT DROP
# First allow established connections for already approved traffic
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Then allow only specific outbound traffic to allowed domains
iptables -A OUTPUT -m set --match-set allowed-domains dst -j ACCEPT
echo "Firewall configuration complete"
echo "Verifying firewall rules..."
if curl --connect-timeout 5 https://example.com >/dev/null 2>&1; then
echo "ERROR: Firewall verification failed - was able to reach https://example.com"
exit 1
else
echo "Firewall verification passed - unable to reach https://example.com as expected"
fi
# Verify GitHub API access
if ! curl --connect-timeout 5 https://api.github.com/zen >/dev/null 2>&1; then
echo "ERROR: Firewall verification failed - unable to reach https://api.github.com"
exit 1
else
echo "Firewall verification passed - able to reach https://api.github.com as expected"
fi

View File

@ -0,0 +1,759 @@
# CLAUDE.md
This file provides comprehensive guidance to Claude Code when working with Python code in this repository.
## Core Development Philosophy
### KISS (Keep It Simple, Stupid)
Simplicity should be a key goal in design. Choose straightforward solutions over complex ones whenever possible. Simple solutions are easier to understand, maintain, and debug.
### YAGNI (You Aren't Gonna Need It)
Avoid building functionality on speculation. Implement features only when they are needed, not when you anticipate they might be useful in the future.
### Design Principles
- **Dependency Inversion**: High-level modules should not depend on low-level modules. Both should depend on abstractions.
- **Open/Closed Principle**: Software entities should be open for extension but closed for modification.
- **Single Responsibility**: Each function, class, and module should have one clear purpose.
- **Fail Fast**: Check for potential errors early and raise exceptions immediately when issues occur.
## 🧱 Code Structure & Modularity
### File and Function Limits
- **Never create a file longer than 500 lines of code**. If approaching this limit, refactor by splitting into modules.
- **Functions should be under 50 lines** with a single, clear responsibility.
- **Classes should be under 100 lines** and represent a single concept or entity.
- **Organize code into clearly separated modules**, grouped by feature or responsibility.
- **Line length should be a maximum of 100 characters** (Ruff rule in pyproject.toml)
- **Use venv_linux** (the virtual environment) whenever executing Python commands, including for unit tests.
### Project Architecture
Follow strict vertical slice architecture with tests living next to the code they test:
```
src/project/
    __init__.py
    main.py
    tests/
        test_main.py
        conftest.py

    # Core modules
    database/
        __init__.py
        connection.py
        models.py
        tests/
            test_connection.py
            test_models.py
    auth/
        __init__.py
        authentication.py
        authorization.py
        tests/
            test_authentication.py
            test_authorization.py

    # Feature slices
    features/
        user_management/
            __init__.py
            handlers.py
            validators.py
            tests/
                test_handlers.py
                test_validators.py
        payment_processing/
            __init__.py
            processor.py
            gateway.py
            tests/
                test_processor.py
                test_gateway.py
```
## 🛠️ Development Environment
### UV Package Management
This project uses UV for blazing-fast Python package and environment management.
```bash
# Install UV (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh
# Create virtual environment
uv venv
# Sync dependencies
uv sync
# Add a package ***NEVER UPDATE A DEPENDENCY DIRECTLY IN PYPROJECT.toml***
# ALWAYS USE UV ADD
uv add requests
# Add development dependency
uv add --dev pytest ruff mypy
# Remove a package
uv remove requests
# Run commands in the environment
uv run python script.py
uv run pytest
uv run ruff check .
# Install specific Python version
uv python install 3.12
```
### Development Commands
```bash
# Run all tests
uv run pytest
# Run specific tests with verbose output
uv run pytest tests/test_module.py -v
# Run tests with coverage
uv run pytest --cov=src --cov-report=html
# Format code
uv run ruff format .
# Check linting
uv run ruff check .
# Fix linting issues automatically
uv run ruff check --fix .
# Type checking
uv run mypy src/
# Run pre-commit hooks
uv run pre-commit run --all-files
```
## 📋 Style & Conventions
### Python Style Guide
- **Follow PEP8** with these specific choices:
- Line length: 100 characters (set by Ruff in pyproject.toml)
- Use double quotes for strings
- Use trailing commas in multi-line structures
- **Always use type hints** for function signatures and class attributes
- **Format with `ruff format`** (faster alternative to Black)
- **Use `pydantic` v2** for data validation and settings management
### Docstring Standards
Use Google-style docstrings for all public functions, classes, and modules:
```python
def calculate_discount(
    price: Decimal,
    discount_percent: float,
    min_amount: Decimal = Decimal("0.01")
) -> Decimal:
    """
    Calculate the discounted price for a product.

    Args:
        price: Original price of the product
        discount_percent: Discount percentage (0-100)
        min_amount: Minimum allowed final price

    Returns:
        Final price after applying discount

    Raises:
        ValueError: If discount_percent is not between 0 and 100
        ValueError: If final price would be below min_amount

    Example:
        >>> calculate_discount(Decimal("100"), 20)
        Decimal('80.00')
    """
```
### Naming Conventions
- **Variables and functions**: `snake_case`
- **Classes**: `PascalCase`
- **Constants**: `UPPER_SNAKE_CASE`
- **Private attributes/methods**: `_leading_underscore`
- **Type aliases**: `PascalCase`
- **Enum values**: `UPPER_SNAKE_CASE`
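A compact illustration of these conventions together (all names below are invented for the example):
```python
from enum import Enum
from typing import TypeAlias

MAX_RETRY_COUNT = 3              # constant: UPPER_SNAKE_CASE
UserId: TypeAlias = int          # type alias: PascalCase

class OrderStatus(Enum):         # class: PascalCase
    PENDING = "pending"          # enum value: UPPER_SNAKE_CASE
    SHIPPED = "shipped"

class OrderTracker:
    def __init__(self) -> None:
        self._statuses: dict[UserId, OrderStatus] = {}   # private attribute: _leading_underscore

    def get_order_status(self, user_id: UserId) -> OrderStatus:   # function/variable: snake_case
        return self._statuses.get(user_id, OrderStatus.PENDING)
```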
## 🧪 Testing Strategy
### Test-Driven Development (TDD)
1. **Write the test first** - Define expected behavior before implementation
2. **Watch it fail** - Ensure the test actually tests something
3. **Write minimal code** - Just enough to make the test pass
4. **Refactor** - Improve code while keeping tests green
5. **Repeat** - One test at a time
### Testing Best Practices
```python
# Always use pytest fixtures for setup
import pytest
from datetime import datetime
@pytest.fixture
def sample_user():
    """Provide a sample user for testing."""
    return User(
        id=123,
        name="Test User",
        email="test@example.com",
        created_at=datetime.now()
    )

# Use descriptive test names
def test_user_can_update_email_when_valid(sample_user):
    """Test that users can update their email with valid input."""
    new_email = "newemail@example.com"
    sample_user.update_email(new_email)
    assert sample_user.email == new_email

# Test edge cases and error conditions
def test_user_update_email_fails_with_invalid_format(sample_user):
    """Test that invalid email formats are rejected."""
    with pytest.raises(ValidationError) as exc_info:
        sample_user.update_email("not-an-email")
    assert "Invalid email format" in str(exc_info.value)
```
### Test Organization
- Unit tests: Test individual functions/methods in isolation
- Integration tests: Test component interactions
- End-to-end tests: Test complete user workflows
- Keep test files next to the code they test
- Use `conftest.py` for shared fixtures
- Aim for 80%+ code coverage, but focus on critical paths
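A shared fixture in `conftest.py` might look like the following sketch (the in-memory SQLite connection is just an illustrative stand-in for whatever resource your tests share):
```python
# tests/conftest.py
import sqlite3

import pytest

@pytest.fixture
def db_connection():
    """Provide an isolated in-memory database connection for each test."""
    conn = sqlite3.connect(":memory:")
    yield conn
    conn.close()
```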
## 🚨 Error Handling
### Exception Best Practices
```python
# Create custom exceptions for your domain
class PaymentError(Exception):
    """Base exception for payment-related errors."""
    pass

class InsufficientFundsError(PaymentError):
    """Raised when account has insufficient funds."""
    def __init__(self, required: Decimal, available: Decimal):
        self.required = required
        self.available = available
        super().__init__(
            f"Insufficient funds: required {required}, available {available}"
        )

# Use specific exception handling
try:
    process_payment(amount)
except InsufficientFundsError as e:
    logger.warning(f"Payment failed: {e}")
    return PaymentResult(success=False, reason="insufficient_funds")
except PaymentError as e:
    logger.error(f"Payment error: {e}")
    return PaymentResult(success=False, reason="payment_error")

# Use context managers for resource management
from contextlib import contextmanager

@contextmanager
def database_transaction():
    """Provide a transactional scope for database operations."""
    conn = get_connection()
    trans = conn.begin_transaction()
    try:
        yield conn
        trans.commit()
    except Exception:
        trans.rollback()
        raise
    finally:
        conn.close()
```
### Logging Strategy
```python
import logging
from functools import wraps
# Configure structured logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Log function entry/exit for debugging
def log_execution(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        logger.debug(f"Entering {func.__name__}")
        try:
            result = func(*args, **kwargs)
            logger.debug(f"Exiting {func.__name__} successfully")
            return result
        except Exception as e:
            logger.exception(f"Error in {func.__name__}: {e}")
            raise
    return wrapper
```
## 🔧 Configuration Management
### Environment Variables and Settings
```python
from pydantic_settings import BaseSettings
from functools import lru_cache
class Settings(BaseSettings):
    """Application settings with validation."""
    app_name: str = "MyApp"
    debug: bool = False
    database_url: str
    redis_url: str = "redis://localhost:6379"
    api_key: str
    max_connections: int = 100

    class Config:
        env_file = ".env"
        env_file_encoding = "utf-8"
        case_sensitive = False

@lru_cache()
def get_settings() -> Settings:
    """Get cached settings instance."""
    return Settings()

# Usage
settings = get_settings()
```
## 🏗️ Data Models and Validation
### Example Pydantic Models (strict, with Pydantic v2)
```python
from pydantic import BaseModel, Field, validator, EmailStr
from datetime import datetime
from typing import Optional, List
from decimal import Decimal
class ProductBase(BaseModel):
    """Base product model with common fields."""
    name: str = Field(..., min_length=1, max_length=255)
    description: Optional[str] = None
    price: Decimal = Field(..., gt=0, decimal_places=2)
    category: str
    tags: List[str] = []

    @validator('price')
    def validate_price(cls, v):
        if v > Decimal('1000000'):
            raise ValueError('Price cannot exceed 1,000,000')
        return v

    class Config:
        json_encoders = {
            Decimal: str,
            datetime: lambda v: v.isoformat()
        }

class ProductCreate(ProductBase):
    """Model for creating new products."""
    pass

class ProductUpdate(BaseModel):
    """Model for updating products - all fields optional."""
    name: Optional[str] = Field(None, min_length=1, max_length=255)
    description: Optional[str] = None
    price: Optional[Decimal] = Field(None, gt=0, decimal_places=2)
    category: Optional[str] = None
    tags: Optional[List[str]] = None

class Product(ProductBase):
    """Complete product model with database fields."""
    id: int
    created_at: datetime
    updated_at: datetime
    is_active: bool = True

    class Config:
        from_attributes = True  # Enable ORM mode
```
## 🔄 Git Workflow
### Branch Strategy
- `main` - Production-ready code
- `develop` - Integration branch for features
- `feature/*` - New features
- `fix/*` - Bug fixes
- `docs/*` - Documentation updates
- `refactor/*` - Code refactoring
- `test/*` - Test additions or fixes
### Commit Message Format
Never include "claude code" or "written by claude code" in commit messages.
```
<type>(<scope>): <subject>
<body>
<footer>
```
Types: feat, fix, docs, style, refactor, test, chore
Example:
```
feat(auth): add two-factor authentication
- Implement TOTP generation and validation
- Add QR code generation for authenticator apps
- Update user model with 2FA fields
Closes #123
```
## 🗄️ Database Naming Standards
### Entity-Specific Primary Keys
All database tables use entity-specific primary keys for clarity and consistency:
```sql
-- ✅ STANDARDIZED: Entity-specific primary keys
sessions.session_id UUID PRIMARY KEY
leads.lead_id UUID PRIMARY KEY
messages.message_id UUID PRIMARY KEY
daily_metrics.daily_metric_id UUID PRIMARY KEY
agencies.agency_id UUID PRIMARY KEY
```
### Field Naming Conventions
```sql
-- Primary keys: {entity}_id
session_id, lead_id, message_id
-- Foreign keys: {referenced_entity}_id
session_id REFERENCES sessions(session_id)
agency_id REFERENCES agencies(agency_id)
-- Timestamps: {action}_at
created_at, updated_at, started_at, expires_at
-- Booleans: is_{state}
is_connected, is_active, is_qualified
-- Counts: {entity}_count
message_count, lead_count, notification_count
-- Durations: {property}_{unit}
duration_seconds, timeout_minutes
```
### Repository Pattern Auto-Derivation
The enhanced BaseRepository automatically derives table names and primary keys:
```python
# ✅ STANDARDIZED: Convention-based repositories
class LeadRepository(BaseRepository[Lead]):
    def __init__(self):
        super().__init__()  # Auto-derives "leads" and "lead_id"

class SessionRepository(BaseRepository[AvatarSession]):
    def __init__(self):
        super().__init__()  # Auto-derives "sessions" and "session_id"
```
**Benefits**:
- ✅ Self-documenting schema
- ✅ Clear foreign key relationships
- ✅ Eliminates repository method overrides
- ✅ Consistent with entity naming patterns
### Model-Database Alignment
Models mirror database fields exactly to eliminate field mapping complexity:
```python
# ✅ STANDARDIZED: Models mirror database exactly
class Lead(BaseModel):
    lead_id: UUID = Field(default_factory=uuid4)  # Matches database field
    session_id: UUID                              # Matches database field
    agency_id: str                                # Matches database field
    created_at: datetime = Field(default_factory=lambda: datetime.now(UTC))

    model_config = ConfigDict(
        use_enum_values=True,
        populate_by_name=True,
        alias_generator=None  # Use exact field names
    )
```
### API Route Standards
```python
# ✅ STANDARDIZED: RESTful with consistent parameter naming
router = APIRouter(prefix="/api/v1/leads", tags=["leads"])
@router.get("/{lead_id}") # GET /api/v1/leads/{lead_id}
@router.put("/{lead_id}") # PUT /api/v1/leads/{lead_id}
@router.delete("/{lead_id}") # DELETE /api/v1/leads/{lead_id}
# Sub-resources
@router.get("/{lead_id}/messages") # GET /api/v1/leads/{lead_id}/messages
@router.get("/agency/{agency_id}") # GET /api/v1/leads/agency/{agency_id}
```
For complete naming standards, see [NAMING_CONVENTIONS.md](./NAMING_CONVENTIONS.md).
## 📝 Documentation Standards
### Code Documentation
- Every module should have a docstring explaining its purpose
- Public functions must have complete docstrings
- Complex logic should have inline comments with `# Reason:` prefix
- Keep README.md updated with setup instructions and examples
- Maintain CHANGELOG.md for version history
### API Documentation
```python
from fastapi import APIRouter, HTTPException, status
from typing import List
router = APIRouter(prefix="/products", tags=["products"])
@router.get(
    "/",
    response_model=List[Product],
    summary="List all products",
    description="Retrieve a paginated list of all active products"
)
async def list_products(
    skip: int = 0,
    limit: int = 100,
    category: Optional[str] = None
) -> List[Product]:
    """
    Retrieve products with optional filtering.

    - **skip**: Number of products to skip (for pagination)
    - **limit**: Maximum number of products to return
    - **category**: Filter by product category
    """
    # Implementation here
```
## 🚀 Performance Considerations
### Optimization Guidelines
- Profile before optimizing - use `cProfile` or `py-spy`
- Use `lru_cache` for expensive computations
- Prefer generators for large datasets
- Use `asyncio` for I/O-bound operations
- Consider `multiprocessing` for CPU-bound tasks
- Cache database queries appropriately
### Example Optimization
```python
import asyncio
import json
from functools import lru_cache
from typing import AsyncIterator

import aiofiles

@lru_cache(maxsize=1000)
def expensive_calculation(n: int) -> int:
    """Cache results of expensive calculations."""
    # Complex computation here
    return result

async def process_large_dataset() -> AsyncIterator[dict]:
    """Process large dataset without loading all into memory."""
    async with aiofiles.open('large_file.json', mode='r') as f:
        async for line in f:
            data = json.loads(line)
            # Process and yield each item
            yield process_item(data)
```
## 🛡️ Security Best Practices
### Security Guidelines
- Never commit secrets - use environment variables
- Validate all user input with Pydantic
- Use parameterized queries for database operations
- Implement rate limiting for APIs
- Keep dependencies updated with `uv`
- Use HTTPS for all external communications
- Implement proper authentication and authorization
### Example Security Implementation
```python
from passlib.context import CryptContext
import secrets
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")

def hash_password(password: str) -> str:
    """Hash password using bcrypt."""
    return pwd_context.hash(password)

def verify_password(plain_password: str, hashed_password: str) -> bool:
    """Verify a password against its hash."""
    return pwd_context.verify(plain_password, hashed_password)

def generate_secure_token(length: int = 32) -> str:
    """Generate a cryptographically secure random token."""
    return secrets.token_urlsafe(length)
```
## 🔍 Debugging Tools
### Debugging Commands
```bash
# Interactive debugging with ipdb
uv add --dev ipdb
# Add breakpoint: import ipdb; ipdb.set_trace()
# Memory profiling
uv add --dev memory-profiler
uv run python -m memory_profiler script.py
# Line profiling
uv add --dev line-profiler
# Add @profile decorator to functions
# Debug with rich traceback
uv add --dev rich
# In code: from rich.traceback import install; install()
```
## 📊 Monitoring and Observability
### Structured Logging
```python
import structlog
logger = structlog.get_logger()
# Log with context
logger.info(
"payment_processed",
user_id=user.id,
amount=amount,
currency="USD",
processing_time=processing_time
)
```
## 📚 Useful Resources
### Essential Tools
- UV Documentation: https://github.com/astral-sh/uv
- Ruff: https://github.com/astral-sh/ruff
- Pytest: https://docs.pytest.org/
- Pydantic: https://docs.pydantic.dev/
- FastAPI: https://fastapi.tiangolo.com/
### Python Best Practices
- PEP 8: https://pep8.org/
- PEP 484 (Type Hints): https://www.python.org/dev/peps/pep-0484/
- The Hitchhiker's Guide to Python: https://docs.python-guide.org/
## ⚠️ Important Notes
- **NEVER ASSUME OR GUESS** - When in doubt, ask for clarification
- **Always verify file paths and module names** before use
- **Keep CLAUDE.md updated** when adding new patterns or dependencies
- **Test your code** - No feature is complete without tests
- **Document your decisions** - Future developers (including yourself) will thank you
## 🔍 Search Command Requirements
**CRITICAL**: Always use `rg` (ripgrep) instead of traditional `grep` and `find` commands:
```bash
# ❌ Don't use grep
grep -r "pattern" .
# ✅ Use rg instead
rg "pattern"
# ❌ Don't use find with name
find . -name "*.py"
# ✅ Use rg with file filtering
rg --files | rg "\.py$"
# or
rg --files -g "*.py"
```
**Enforcement Rules:**
```
(
r"^grep\b(?!.*\|)",
"Use 'rg' (ripgrep) instead of 'grep' for better performance and features",
),
(
r"^find\s+\S+\s+-name\b",
"Use 'rg --files | rg pattern' or 'rg --files -g pattern' instead of 'find -name' for better performance",
),
```
## 🚀 GitHub Flow Workflow Summary
```
main (protected) ←── PR ←── feature/your-feature
      ↓                            ↑
   deploy                    development
```
### Daily Workflow:
1. git checkout main && git pull origin main
2. git checkout -b feature/new-feature
3. Make changes + tests
4. git push origin feature/new-feature
5. Create PR → Review → Merge to main
---
_This document is a living guide. Update it as the project evolves and new patterns emerge._

View File

@ -0,0 +1,15 @@
## FEATURE:
[Insert your feature here]
## EXAMPLES:
[Provide and explain examples that you have in the `examples/` folder]
## DOCUMENTATION:
[List out any documentation (web pages, sources for an MCP server like Crawl4AI RAG, etc.) that will need to be referenced during development]
## OTHER CONSIDERATIONS:
[Any other considerations or specific requirements - great place to include gotchas that you see AI coding assistants miss with your projects a lot]

View File

@ -0,0 +1,395 @@
name: "Multi-Agent System: Research Agent with Email Draft Sub-Agent"
description: |
## Purpose
Build a Pydantic AI multi-agent system where a primary Research Agent uses Brave Search API and has an Email Draft Agent (using Gmail API) as a tool. This demonstrates agent-as-tool pattern with external API integrations.
## Core Principles
1. **Context is King**: Include ALL necessary documentation, examples, and caveats
2. **Validation Loops**: Provide executable tests/lints the AI can run and fix
3. **Information Dense**: Use keywords and patterns from the codebase
4. **Progressive Success**: Start simple, validate, then enhance
---
## Goal
Create a production-ready multi-agent system where users can research topics via CLI, and the Research Agent can delegate email drafting tasks to an Email Draft Agent. The system should support multiple LLM providers and handle API authentication securely.
## Why
- **Business value**: Automates research and email drafting workflows
- **Integration**: Demonstrates advanced Pydantic AI multi-agent patterns
- **Problems solved**: Reduces manual work for research-based email communications
## What
A CLI-based application where:
- Users input research queries
- Research Agent searches using Brave API
- Research Agent can invoke Email Draft Agent to create Gmail drafts
- Results stream back to the user in real-time
### Success Criteria
- [ ] Research Agent successfully searches via Brave API
- [ ] Email Agent creates Gmail drafts with proper authentication
- [ ] Research Agent can invoke Email Agent as a tool
- [ ] CLI provides streaming responses with tool visibility
- [ ] All tests pass and code meets quality standards
## All Needed Context
### Documentation & References
```yaml
# MUST READ - Include these in your context window
- url: https://ai.pydantic.dev/agents/
why: Core agent creation patterns
- url: https://ai.pydantic.dev/multi-agent-applications/
why: Multi-agent system patterns, especially agent-as-tool
- url: https://developers.google.com/gmail/api/guides/sending
why: Gmail API authentication and draft creation
- url: https://api-dashboard.search.brave.com/app/documentation
why: Brave Search API REST endpoints
- file: examples/agent/agent.py
why: Pattern for agent creation, tool registration, dependencies
- file: examples/agent/providers.py
why: Multi-provider LLM configuration pattern
- file: examples/cli.py
why: CLI structure with streaming responses and tool visibility
- url: https://github.com/googleworkspace/python-samples/blob/main/gmail/snippet/send%20mail/create_draft.py
why: Official Gmail draft creation example
```
### Current Codebase tree
```bash
.
├── examples/
│ ├── agent/
│ │ ├── agent.py
│ │ ├── providers.py
│ │ └── ...
│ └── cli.py
├── PRPs/
│ └── templates/
│ └── prp_base.md
├── INITIAL.md
├── CLAUDE.md
└── requirements.txt
```
### Desired Codebase tree with files to be added
```bash
.
├── agents/
│ ├── __init__.py # Package init
│ ├── research_agent.py # Primary agent with Brave Search
│ ├── email_agent.py # Sub-agent with Gmail capabilities
│ ├── providers.py # LLM provider configuration
│ └── models.py # Pydantic models for data validation
├── tools/
│ ├── __init__.py # Package init
│ ├── brave_search.py # Brave Search API integration
│ └── gmail_tool.py # Gmail API integration
├── config/
│ ├── __init__.py # Package init
│ └── settings.py # Environment and config management
├── tests/
│ ├── __init__.py # Package init
│ ├── test_research_agent.py # Research agent tests
│ ├── test_email_agent.py # Email agent tests
│ ├── test_brave_search.py # Brave search tool tests
│ ├── test_gmail_tool.py # Gmail tool tests
│ └── test_cli.py # CLI tests
├── cli.py # CLI interface
├── .env.example # Environment variables template
├── requirements.txt # Updated dependencies
├── README.md # Comprehensive documentation
└── credentials/.gitkeep # Directory for Gmail credentials
```
### Known Gotchas & Library Quirks
```python
# CRITICAL: Pydantic AI requires async throughout - no sync functions in async context
# CRITICAL: Gmail API requires OAuth2 flow on first run - credentials.json needed
# CRITICAL: Brave API has rate limits - 2000 req/month on free tier
# CRITICAL: Agent-as-tool pattern requires passing ctx.usage for token tracking
# CRITICAL: Gmail drafts need base64 encoding with proper MIME formatting
# CRITICAL: Always use absolute imports for cleaner code
# CRITICAL: Store sensitive credentials in .env, never commit them
```
## Implementation Blueprint
### Data models and structure
```python
# models.py - Core data structures
from pydantic import BaseModel, Field
from typing import List, Optional
from datetime import datetime
class ResearchQuery(BaseModel):
query: str = Field(..., description="Research topic to investigate")
max_results: int = Field(10, ge=1, le=50)
include_summary: bool = Field(True)
class BraveSearchResult(BaseModel):
title: str
url: str
description: str
score: float = Field(0.0, ge=0.0, le=1.0)
class EmailDraft(BaseModel):
to: List[str] = Field(..., min_items=1)
subject: str = Field(..., min_length=1)
body: str = Field(..., min_length=1)
cc: Optional[List[str]] = None
bcc: Optional[List[str]] = None
class ResearchEmailRequest(BaseModel):
research_query: str
email_context: str = Field(..., description="Context for email generation")
recipient_email: str
```
### List of tasks to be completed
```yaml
Task 1: Setup Configuration and Environment
CREATE config/settings.py:
- PATTERN: Use pydantic-settings like examples use os.getenv
- Load environment variables with defaults
- Validate required API keys present
CREATE .env.example:
- Include all required environment variables with descriptions
- Follow pattern from examples/README.md
Task 2: Implement Brave Search Tool
CREATE tools/brave_search.py:
- PATTERN: Async functions like examples/agent/tools.py
- Simple REST client using httpx (already in requirements)
- Handle rate limits and errors gracefully
- Return structured BraveSearchResult models
Task 3: Implement Gmail Tool
CREATE tools/gmail_tool.py:
- PATTERN: Follow OAuth2 flow from Gmail quickstart
- Store token.json in credentials/ directory
- Create draft with proper MIME encoding
- Handle authentication refresh automatically
Task 4: Create Email Draft Agent
CREATE agents/email_agent.py:
- PATTERN: Follow examples/agent/agent.py structure
- Use Agent with deps_type pattern
- Register gmail_tool as @agent.tool
- Return EmailDraft model
Task 5: Create Research Agent
CREATE agents/research_agent.py:
- PATTERN: Multi-agent pattern from Pydantic AI docs
- Register brave_search as tool
- Register email_agent.run() as tool
- Use RunContext for dependency injection
Task 6: Implement CLI Interface
CREATE cli.py:
- PATTERN: Follow examples/cli.py streaming pattern
- Color-coded output with tool visibility
- Handle async properly with asyncio.run()
- Session management for conversation context
Task 7: Add Comprehensive Tests
CREATE tests/:
- PATTERN: Mirror examples test structure
- Mock external API calls
- Test happy path, edge cases, errors
- Ensure 80%+ coverage
Task 8: Create Documentation
CREATE README.md:
- PATTERN: Follow examples/README.md structure
- Include setup, installation, usage
- API key configuration steps
- Architecture diagram
```
### Per task pseudocode
```python
# Task 2: Brave Search Tool
async def search_brave(query: str, api_key: str, count: int = 10) -> List[BraveSearchResult]:
# PATTERN: Use httpx like examples use aiohttp
async with httpx.AsyncClient() as client:
headers = {"X-Subscription-Token": api_key}
params = {"q": query, "count": count}
# GOTCHA: Brave API returns 401 if API key invalid
response = await client.get(
"https://api.search.brave.com/res/v1/web/search",
headers=headers,
params=params,
timeout=30.0 # CRITICAL: Set timeout to avoid hanging
)
# PATTERN: Structured error handling
if response.status_code != 200:
raise BraveAPIError(f"API returned {response.status_code}")
# Parse and validate with Pydantic
data = response.json()
return [BraveSearchResult(**result) for result in data.get("web", {}).get("results", [])]
# Task 5: Research Agent with Email Agent as Tool
@research_agent.tool
async def create_email_draft(
ctx: RunContext[AgentDependencies],
recipient: str,
subject: str,
context: str
) -> str:
"""Create email draft based on research context."""
# CRITICAL: Pass usage for token tracking
result = await email_agent.run(
f"Create an email to {recipient} about: {context}",
deps=EmailAgentDeps(subject=subject),
usage=ctx.usage # PATTERN from multi-agent docs
)
return f"Draft created with ID: {result.data}"
```
### Integration Points
```yaml
ENVIRONMENT:
- add to: .env
- vars: |
# LLM Configuration
LLM_PROVIDER=openai
LLM_API_KEY=sk-...
LLM_MODEL=gpt-4
# Brave Search
BRAVE_API_KEY=BSA...
# Gmail (path to credentials.json)
GMAIL_CREDENTIALS_PATH=./credentials/credentials.json
CONFIG:
- Gmail OAuth: First run opens browser for authorization
- Token storage: ./credentials/token.json (auto-created)
DEPENDENCIES:
- Update requirements.txt with:
- google-api-python-client
- google-auth-httplib2
- google-auth-oauthlib
```
## Validation Loop
### Level 1: Syntax & Style
```bash
# Run these FIRST - fix any errors before proceeding
ruff check . --fix # Auto-fix style issues
mypy . # Type checking
# Expected: No errors. If errors, READ and fix.
```
### Level 2: Unit Tests
```python
# test_research_agent.py
async def test_research_with_brave():
"""Test research agent searches correctly"""
agent = create_research_agent()
result = await agent.run("AI safety research")
assert result.data
assert len(result.data) > 0
async def test_research_creates_email():
"""Test research agent can invoke email agent"""
agent = create_research_agent()
result = await agent.run(
"Research AI safety and draft email to john@example.com"
)
assert "draft_id" in result.data
# test_email_agent.py
def test_gmail_authentication(monkeypatch):
"""Test Gmail OAuth flow handling"""
monkeypatch.setenv("GMAIL_CREDENTIALS_PATH", "test_creds.json")
tool = GmailTool()
assert tool.service is not None
async def test_create_draft():
"""Test draft creation with proper encoding"""
agent = create_email_agent()
result = await agent.run(
"Create email to test@example.com about AI research"
)
assert result.data.get("draft_id")
```
```bash
# Run tests iteratively until passing:
pytest tests/ -v --cov=agents --cov=tools --cov-report=term-missing
# If failing: Debug specific test, fix code, re-run
```
### Level 3: Integration Test
```bash
# Test CLI interaction
python cli.py
# Expected interaction:
# You: Research latest AI safety developments
# 🤖 Assistant: [Streams research results]
# 🛠 Tools Used:
# 1. brave_search (query='AI safety developments', limit=10)
#
# You: Create an email draft about this to john@example.com
# 🤖 Assistant: [Creates draft]
# 🛠 Tools Used:
# 1. create_email_draft (recipient='john@example.com', ...)
# Check Gmail drafts folder for created draft
```
## Final Validation Checklist
- [ ] All tests pass: `pytest tests/ -v`
- [ ] No linting errors: `ruff check .`
- [ ] No type errors: `mypy .`
- [ ] Gmail OAuth flow works (browser opens, token saved)
- [ ] Brave Search returns results
- [ ] Research Agent invokes Email Agent successfully
- [ ] CLI streams responses with tool visibility
- [ ] Error cases handled gracefully
- [ ] README includes clear setup instructions
- [ ] .env.example has all required variables
---
## Anti-Patterns to Avoid
- ❌ Don't hardcode API keys - use environment variables
- ❌ Don't use sync functions in async agent context
- ❌ Don't skip OAuth flow setup for Gmail
- ❌ Don't ignore rate limits for APIs
- ❌ Don't forget to pass ctx.usage in multi-agent calls
- ❌ Don't commit credentials.json or token.json files
## Confidence Score: 9/10
High confidence due to:
- Clear examples to follow from the codebase
- Well-documented external APIs
- Established patterns for multi-agent systems
- Comprehensive validation gates
Minor uncertainty on Gmail OAuth first-time setup UX, but documentation provides clear guidance.

View File

@ -0,0 +1,212 @@
name: "Base PRP Template v2 - Context-Rich with Validation Loops"
description: |
## Purpose
Template optimized for AI agents to implement features with sufficient context and self-validation capabilities to achieve working code through iterative refinement.
## Core Principles
1. **Context is King**: Include ALL necessary documentation, examples, and caveats
2. **Validation Loops**: Provide executable tests/lints the AI can run and fix
3. **Information Dense**: Use keywords and patterns from the codebase
4. **Progressive Success**: Start simple, validate, then enhance
5. **Global rules**: Be sure to follow all rules in CLAUDE.md
---
## Goal
[What needs to be built - be specific about the end state and desires]
## Why
- [Business value and user impact]
- [Integration with existing features]
- [Problems this solves and for whom]
## What
[User-visible behavior and technical requirements]
### Success Criteria
- [ ] [Specific measurable outcomes]
## All Needed Context
### Documentation & References (list all context needed to implement the feature)
```yaml
# MUST READ - Include these in your context window
- url: [Official API docs URL]
why: [Specific sections/methods you'll need]
- file: [path/to/example.py]
why: [Pattern to follow, gotchas to avoid]
- doc: [Library documentation URL]
section: [Specific section about common pitfalls]
critical: [Key insight that prevents common errors]
- docfile: [PRPs/ai_docs/file.md]
why: [docs that the user has pasted in to the project]
```
### Current Codebase tree (run `tree` in the project root to get an overview of the codebase)
```bash
```
### Desired Codebase tree with files to be added and the responsibility of each file
```bash
```
### Known Gotchas of our codebase & Library Quirks
```python
# CRITICAL: [Library name] requires [specific setup]
# Example: FastAPI requires async functions for endpoints
# Example: This ORM doesn't support batch inserts over 1000 records
# Example: We use pydantic v2 and
```
## Implementation Blueprint
### Data models and structure
Create the core data models to ensure type safety and consistency.
```python
Examples:
- orm models
- pydantic models
- pydantic schemas
- pydantic validators
```
### List of tasks to be completed to fulfill the PRP, in the order they should be completed
```yaml
Task 1:
MODIFY src/existing_module.py:
- FIND pattern: "class OldImplementation"
- INJECT after line containing "def __init__"
- PRESERVE existing method signatures
CREATE src/new_feature.py:
- MIRROR pattern from: src/similar_feature.py
- MODIFY class name and core logic
- KEEP error handling pattern identical
...(...)
Task N:
...
```
### Per-task pseudocode as needed, added to each task
```python
# Task 1
# Pseudocode with CRITICAL details; don't write entire code
async def new_feature(param: str) -> Result:
# PATTERN: Always validate input first (see src/validators.py)
validated = validate_input(param) # raises ValidationError
# GOTCHA: This library requires connection pooling
async with get_connection() as conn: # see src/db/pool.py
# PATTERN: Use existing retry decorator
@retry(attempts=3, backoff=exponential)
async def _inner():
# CRITICAL: API returns 429 if >10 req/sec
await rate_limiter.acquire()
return await external_api.call(validated)
result = await _inner()
# PATTERN: Standardized response format
return format_response(result) # see src/utils/responses.py
```
### Integration Points
```yaml
DATABASE:
- migration: "Add column 'feature_enabled' to users table"
- index: "CREATE INDEX idx_feature_lookup ON users(feature_id)"
CONFIG:
- add to: config/settings.py
- pattern: "FEATURE_TIMEOUT = int(os.getenv('FEATURE_TIMEOUT', '30'))"
ROUTES:
- add to: src/api/routes.py
- pattern: "router.include_router(feature_router, prefix='/feature')"
```
## Validation Loop
### Level 1: Syntax & Style
```bash
# Run these FIRST - fix any errors before proceeding
ruff check src/new_feature.py --fix # Auto-fix what's possible
mypy src/new_feature.py # Type checking
# Expected: No errors. If errors, READ the error and fix.
```
### Level 2: Unit Tests (for each new feature/file/function, use existing test patterns)
```python
# CREATE test_new_feature.py with these test cases:
def test_happy_path():
"""Basic functionality works"""
result = new_feature("valid_input")
assert result.status == "success"
def test_validation_error():
"""Invalid input raises ValidationError"""
with pytest.raises(ValidationError):
new_feature("")
def test_external_api_timeout():
"""Handles timeouts gracefully"""
with mock.patch('external_api.call', side_effect=TimeoutError):
result = new_feature("valid")
assert result.status == "error"
assert "timeout" in result.message
```
```bash
# Run and iterate until passing:
uv run pytest test_new_feature.py -v
# If failing: Read error, understand root cause, fix code, re-run (never mock to pass)
```
### Level 3: Integration Test
```bash
# Start the service
uv run python -m src.main --dev
# Test the endpoint
curl -X POST http://localhost:8000/feature \
-H "Content-Type: application/json" \
-d '{"param": "test_value"}'
# Expected: {"status": "success", "data": {...}}
# If error: Check logs at logs/app.log for stack trace
```
## Final Validation Checklist
- [ ] All tests pass: `uv run pytest tests/ -v`
- [ ] No linting errors: `uv run ruff check src/`
- [ ] No type errors: `uv run mypy src/`
- [ ] Manual test successful: [specific curl/command]
- [ ] Error cases handled gracefully
- [ ] Logs are informative but not verbose
- [ ] Documentation updated if needed
---
## Anti-Patterns to Avoid
- ❌ Don't create new patterns when existing ones work
- ❌ Don't skip validation because "it should work"
- ❌ Don't ignore failing tests - fix them
- ❌ Don't use sync functions in async context
- ❌ Don't hardcode values that should be config
- ❌ Don't catch all exceptions - be specific

View File

@ -0,0 +1,427 @@
# 🚀 Full Guide to Using Claude Code
Everything you need to know to crush building anything with Claude Code! This guide takes you from installation through advanced context engineering and parallel agent workflows.
## 📋 Prerequisites
- Terminal/Command line access
- Node.js installed (for Claude Code installation)
- GitHub account (for GitHub CLI integration)
- Text editor (VS Code recommended)
## 🔧 Installation
**macOS/Linux:**
```bash
npm install -g @anthropic-ai/claude-code
```
**Windows (WSL recommended):**
See detailed instructions in [install_claude_code_windows.md](../git-and-claude-code/install_claude_code_windows.md)
**Verify installation:**
```bash
claude --version
```
---
## ✅ TIP 1: CREATE AND OPTIMIZE CLAUDE.md FILES
Set up context files that Claude automatically pulls into every conversation, containing project-specific information, commands, and guidelines.
```bash
mkdir your-folder-name && cd your-folder-name
claude
```
Use the built-in command:
```
/init
```
Or create your own CLAUDE.md file based on the template in this repository. See `CLAUDE.md` for an example structure that includes:
- Project awareness and context rules
- Code structure guidelines
- Testing requirements
- Task completion workflow
- Style conventions
- Documentation standards
### File Placement Strategies
Claude automatically reads CLAUDE.md files from multiple locations:
```bash
# Root of repository (most common)
./CLAUDE.md # Checked into git, shared with team
./CLAUDE.local.md # Local only, add to .gitignore
# Parent directories (for monorepos)
root/CLAUDE.md # General project info
root/frontend/CLAUDE.md # Frontend-specific context
root/backend/CLAUDE.md # Backend-specific context
# Home folder (applies to all sessions)
~/.claude/CLAUDE.md # Personal preferences and global settings
# Child directories (pulled on demand)
root/components/CLAUDE.md # Component-specific guidelines
root/utils/CLAUDE.md # Utility function patterns
```
Claude reads files in this order:
1. Current directory
2. Parent directories (up to repository root)
3. Home directory ~/.claude/
---
## ✅ TIP 2: SET UP PERMISSION MANAGEMENT
Configure tool allowlists to streamline development while maintaining security for file operations and system commands.
**Method 1: Interactive Allowlist**
When Claude asks for permission, select "Always allow" for common operations.
**Method 2: Use /permissions command**
```
/permissions
```
Then add:
- `Edit` (for file edits)
- `Bash(git commit:*)` (for git commits)
- `Bash(npm:*)` (for npm commands)
**Method 3: Create project settings file**
Create `.claude/settings.local.json`:
```json
{
  "allowedTools": [
    "Edit",
    "Bash(git add:*)",
    "Bash(git commit:*)",
    "Bash(npm:*)"
  ]
}
```
---
## ✅ TIP 3: INSTALL AND CONFIGURE GITHUB CLI
Set up the GitHub CLI to enable Claude to interact with GitHub for issues, pull requests, and repository management.
```bash
# Install GitHub CLI (if not already installed)
# Visit: https://github.com/cli/cli?tab=readme-ov-file#installation
# On Linux or Windows in WSL, see: https://github.com/cli/cli/blob/trunk/docs/install_linux.md
# Authenticate
gh auth login # WSL - you'll have to copy the URL and visit it manually
# Verify setup
gh repo list
```
Claude can now use commands like:
- `gh issue create`
- `gh pr create`
- `gh pr merge`
- `gh issue list`
- `gh repo clone`
**Custom GitHub Issue Fix Command:**
Use the custom fix-github-issue slash command to automatically analyze and fix GitHub issues:
```bash
/fix-github-issue 1
```
This command will:
1. Fetch issue details using `gh issue view`
2. Analyze the problem and search relevant code
3. Implement the fix with proper testing
4. Create a commit and pull request
---
## ✅ TIP 4: SAFE YOLO MODE WITH DEV CONTAINERS
Allow Claude Code to perform any action while maintaining safety through containerization. This enables rapid development without destructive behavior on your host machine. Anthropic documentation for this is [here](https://docs.anthropic.com/en/docs/claude-code/dev-container).
**Prerequisites:**
- Install [Docker](https://www.docker.com/) and VS Code (or a VS Code fork like Windsurf/Cursor)
**🛡️ Security Features:**
The dev container in this repository provides:
- **Network isolation**: Custom firewall restricts outbound connections to whitelisted domains only
- **Essential tools**: Pre-installed with Claude Code, GitHub CLI, and development tools
- **Secure environment**: Built on Node.js 20 with ZSH and developer-friendly tools
**Setup Process:**
1. **Open project in VS Code**
2. **Activate the dev container:**
- Press `F1` or `Ctrl/Cmd + Shift + P` to open Command Palette
- Type and select "Dev Containers: Reopen in Container"
- OR click the blue button in bottom-left corner → "Reopen in Container"
3. **Wait for container to build** (first time takes a few minutes)
4. **Open a new terminal** - Ctrl + J or Terminal → New Terminal
5. **Authenticate with Claude Code** - You'll have to set up Claude Code and authenticate again in the container
6. **Run Claude in YOLO mode:**
```bash
claude --dangerously-skip-permissions
```
Note - when you authenticate with Claude Code in the container, copy the auth URL and go to it manually in your browser instead of having it open the link automatically. That won't work since you're in the container!
**Configuration Details:**
The `.devcontainer/` folder contains:
- `devcontainer.json`: VS Code container configuration with extensions and settings
- `Dockerfile`: Container image with Node.js 20, development tools, and Claude Code pre-installed
- `init-firewall.sh`: Security script that allows only necessary domains (GitHub, Anthropic API, npm registry)
This setup enables rapid prototyping while preventing access to unauthorized external services.
---
## ✅ TIP 5: INTEGRATE MCP SERVERS
Connect Claude Code to Model Context Protocol (MCP) servers for enhanced functionality like browser automation and database management. Learn more in the [MCP documentation](https://docs.anthropic.com/en/docs/claude-code/mcp).
**Add MCP servers:**
```bash
# Stdio server (real example)
claude mcp add puppeteer npx @modelcontextprotocol/server-puppeteer
# SSE server (add your own server)
claude mcp add --transport sse myserver https://example.com/sse
# HTTP server (add your own server)
claude mcp add --transport http myserver https://example.com/api
```
For Puppeteer, also install the system libraries its bundled browser needs (Debian/Ubuntu package names shown):
```bash
sudo apt-get install -y libnss3-dev libxss1 libxtst6 libxrandr2 libasound2t64 libpangocairo-1.0-0 libatk1.0-0t64 libcairo-gobject2 libgtk-3-0t64 libgdk-pixbuf2.0-0
```
**Manage MCP servers:**
```bash
# List all configured servers
claude mcp list
# Get details about a specific server
claude mcp get puppeteer
# Remove a server
claude mcp remove puppeteer
```
**Test MCP integration:**
> "Use puppeteer to visit https://docs.anthropic.com/en/docs/claude-code/hooks and get me a high level overview of Claude Code Hooks."
**Configuration scopes** (example after the list):
- **Local**: Project-specific, private configuration
- **Project**: Shared via `.mcp.json`, team collaboration
- **User**: Available across all projects (`~/.claude/mcp.json`)
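For example, to register a server for the whole team via `.mcp.json`, you can add it with project scope (a sketch; confirm the flag with `claude mcp add --help`, and replace the placeholder URL):
```bash
# Placeholder URL; project scope writes the server entry to .mcp.json
claude mcp add --transport http --scope project myserver https://example.com/api
```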
**Popular MCP servers:**
- **Puppeteer**: Browser automation and screenshots
- **Supabase**: Database management and real-time features
- **Neon**: Serverless PostgreSQL database operations
- **Sentry**: Error monitoring and performance tracking
- **Slack**: Team communication and notifications
- **Archon**: AI agent builder framework ([coming soon](https://github.com/coleam00/Archon))
**Advanced features:**
- Reference MCP resources using `@` mentions in your prompts
- Execute MCP prompts as slash commands
- OAuth 2.0 support for remote servers
---
## ✅ TIP 6: CONTEXT ENGINEERING
Transform your development workflow from simple prompting to comprehensive context engineering: providing the AI with all the information it needs for end-to-end implementation.
*Note: While your initial feature request is usually a comprehensive document outlining what you want, we provide a simpler template here to get started.*
### Quick Start
```bash
# 1. Use the provided template for your feature request
# Edit INITIAL.md (or copy INITIAL_EXAMPLE.md as a starting point)
# 2. Generate a comprehensive PRP (Product Requirements Prompt)
/generate-prp INITIAL.md
# 3. Execute the PRP to implement your feature
/execute-prp PRPs/your-feature-name.md
```
### The Context Engineering Workflow
**1. Create Your Initial Feature Request**
Use `INITIAL_EXAMPLE.md` as a template. Your INITIAL.md file should contain the following sections (a minimal sketch follows the list):
- **FEATURE**: Specific description of what you want to build
- **EXAMPLES**: References to example files showing patterns to follow
- **DOCUMENTATION**: Links to relevant docs, APIs, or resources
- **OTHER CONSIDERATIONS**: Important details, gotchas, requirements
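Here is one way such a file might be laid out (the feature, file paths, and links below are placeholders, not the contents of `INITIAL_EXAMPLE.md`):
```bash
cat > INITIAL.md << 'EOF'
## FEATURE:
Add a `--summary` flag to the CLI that prints aggregate statistics.

## EXAMPLES:
- `examples/cli_flags.py` shows the existing flag-parsing pattern to follow.

## DOCUMENTATION:
- https://docs.python.org/3/library/argparse.html

## OTHER CONSIDERATIONS:
- Keep default output unchanged for existing scripts; add tests for the new flag.
EOF
```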
**2. Generate the PRP**
The `/generate-prp` command will:
- Research your codebase for patterns
- Search for relevant documentation
- Create a comprehensive blueprint in `PRPs/` folder
- Include validation gates and test requirements
**3. Execute the PRP**
The `/execute-prp` command will:
- Read all context from the PRP
- Create a detailed task list using TodoWrite
- Implement each component with validation
- Run tests and fix any issues
- Ensure all requirements are met
### Custom Slash Commands
The `.claude/commands/` folder contains reusable workflows:
- `generate-prp.md` - Researches and creates comprehensive PRPs
- `execute-prp.md` - Implements features from PRPs
These commands use the `$ARGUMENTS` variable to receive whatever you pass after the command name.
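To illustrate the mechanism, a custom command is just a Markdown file in that folder whose body can reference `$ARGUMENTS`. The `review-file` command below is a hypothetical example, not one that ships with this repo:
```bash
mkdir -p .claude/commands
cat > .claude/commands/review-file.md << 'EOF'
Review the file at $ARGUMENTS for bugs, unclear naming, and missing tests,
and summarize your findings as a prioritized list.
EOF
# Then inside Claude Code: /review-file src/utils/date.ts
```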
---
## ✅ TIP 7: PARALLEL DEVELOPMENT WITH GIT WORKTREES
Use Git worktrees to enable multiple Claude instances working on independent tasks simultaneously without conflicts.
### Manual Worktree Setup
```bash
# Create worktrees for different features
git worktree add ../project-feature-a feature-a
git worktree add ../project-feature-b feature-b
# Launch Claude in each worktree
cd ../project-feature-a && claude # Terminal tab 1
cd ../project-feature-b && claude # Terminal tab 2
```
**Benefits:**
- Independent tasks don't interfere
- No merge conflicts during development
- Isolated file systems for each task
- Share same Git history
**Cleanup when finished:**
```bash
git worktree remove ../project-feature-a
git branch -d feature-a
```
---
## ✅ TIP 8: AUTOMATED PARALLEL CODING AGENTS
Use automated commands to spin up multiple agents working on the same feature in parallel, then pick the best implementation.
### Setup Parallel Worktrees
```bash
/prep-parallel simple-cli 3
```
This creates three worktree folders (a sketch of the underlying commands follows the list):
- `trees/simple-cli-1`
- `trees/simple-cli-2`
- `trees/simple-cli-3`
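Under the hood this is roughly equivalent to creating one worktree and branch per agent (a sketch; the actual `/prep-parallel` command file may differ):
```bash
# Sketch only; branch names mirror the folder names used later in this tip
git worktree add -b simple-cli-1 trees/simple-cli-1
git worktree add -b simple-cli-2 trees/simple-cli-2
git worktree add -b simple-cli-3 trees/simple-cli-3
```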
### Execute Parallel Implementations
1. Create a plan file (e.g., `plan.md`) describing the feature
2. Execute the parallel agents:
```bash
/execute-parallel simple-cli plan.md 3
```
Claude Code will:
- Kick off multiple agents in parallel
- Have each agent tackle the same feature independently
- Produce different implementations thanks to LLM non-determinism
- Save each agent's results to a `RESULTS.md` file in its workspace
**Why this works:**
AI coding assistants make mistakes, so multiple independent attempts increase the chances of success. You can review all implementations and merge the best one.
### Merge the Best Implementation
After reviewing the different implementations:
1. **Choose the best implementation:**
```bash
# Review each result
cat trees/simple-cli-1/RESULTS.md
cat trees/simple-cli-2/RESULTS.md
cat trees/simple-cli-3/RESULTS.md
```
2. **Merge the selected branch:**
```bash
# If you chose implementation #2
git checkout main
git merge simple-cli-2
git push origin main
```
3. **Clean up all worktrees:**
```bash
git worktree remove trees/simple-cli-1
git worktree remove trees/simple-cli-2
git worktree remove trees/simple-cli-3
git branch -d simple-cli-1
git branch -d simple-cli-2
git branch -d simple-cli-3
```
---
## 🎯 Quick Command Reference
| Command | Purpose |
|---------|---------|
| `/init` | Generate initial CLAUDE.md |
| `/permissions` | Manage tool permissions |
| `/clear` | Clear context between tasks |
| `ESC` | Interrupt Claude |
| `Shift+Tab` | Enter planning mode |
| `/generate-prp INITIAL.md` | Create implementation blueprint |
| `/execute-prp PRPs/feature.md` | Implement from blueprint |
| `/prep-parallel [feature] [count]` | Setup parallel worktrees |
| `/execute-parallel [feature] [plan] [count]` | Run parallel implementations |
---
## 📚 Additional Resources
- [Claude Code Documentation](https://docs.anthropic.com/en/docs/claude-code)
- [Claude Code Best Practices](https://www.anthropic.com/engineering/claude-code-best-practices)
- [MCP Server Library](https://github.com/modelcontextprotocol)
- [Context Engineering Guide](contextengineering.md)
---
## 🚀 Next Steps
1. Set up your CLAUDE.md file with project-specific context
2. Configure permissions for smooth workflow
3. Try context engineering with a small feature
4. Experiment with parallel agent development
5. Integrate MCP servers for your tech stack
Remember: Claude Code is most powerful when you provide clear context, specific instructions, and iterative feedback. Happy coding! 🎉