PRP Template for Pydantic AI Agents

Cole Medin
2025-07-20 08:01:14 -05:00
parent 84d49cf30a
commit 1bcba59231
30 changed files with 6134 additions and 88 deletions


@@ -0,0 +1,228 @@
# Template Generation Request
## TECHNOLOGY/FRAMEWORK:
**Example:** Pydantic AI agents
**Example:** Supabase frontend applications
**Example:** CrewAI multi-agent systems
**Your technology:** [Specify the exact framework, library, or technology you want to create a context engineering template for]
---
## TEMPLATE PURPOSE:
**What specific use case should this template be optimized for?**
**Example for Pydantic AI:** "Building intelligent AI agents with tool integration, conversation handling, and structured data validation using Pydantic AI framework"
**Example for Supabase:** "Creating full-stack web applications with real-time data, authentication, and serverless functions using Supabase as the backend"
**Your purpose:** [Be very specific about what developers should be able to build easily with this template]
---
## CORE FEATURES:
**What are the essential features this template should help developers implement?**
**Example for Pydantic AI:**
- Agent creation with different model providers (OpenAI, Anthropic, Gemini)
- Tool integration patterns (web search, file operations, API calls)
- Conversation memory and context management
- Structured output validation with Pydantic models
- Error handling and retry mechanisms
- Testing patterns for AI agent behavior
**Example for Supabase:**
- Database schema design and migrations
- Real-time subscriptions and live data updates
- Row Level Security (RLS) policy implementation
- Authentication flows (email, OAuth, magic links)
- Serverless edge functions for backend logic
- File storage and CDN integration
**Your core features:** [List the specific capabilities developers should be able to implement easily]
---
## EXAMPLES TO INCLUDE:
**What working examples should be provided in the template?**
**Example for Pydantic AI:**
- Basic chat agent with memory
- Tool-enabled agent (web search + calculator)
- Multi-step workflow agent
- Agent with custom Pydantic models for structured outputs
- Testing examples for agent responses and tool usage
**Example for Supabase:**
- User authentication and profile management
- Real-time chat or messaging system
- File upload and sharing functionality
- Multi-tenant application patterns
- Database triggers and functions
**Your examples:** [Specify concrete, working examples that should be included]
---
## DOCUMENTATION TO RESEARCH:
**What specific documentation should be thoroughly researched and referenced?**
**Example for Pydantic AI:**
- https://ai.pydantic.dev/ - Official Pydantic AI documentation
- https://docs.pydantic.dev/ - Pydantic data validation library
- Model provider APIs (OpenAI, Anthropic) for integration patterns
- Tool integration best practices and examples
- Testing frameworks for AI applications
**Example for Supabase:**
- https://supabase.com/docs - Complete Supabase documentation
- https://supabase.com/docs/guides/auth - Authentication guide
- https://supabase.com/docs/guides/realtime - Real-time features
- Database design patterns and RLS policies
- Edge functions development and deployment
**Your documentation:** [List specific URLs and documentation sections to research deeply]
---
## DEVELOPMENT PATTERNS:
**What specific development patterns, project structures, or workflows should be researched and included?**
**Example for Pydantic AI:**
- How to structure agent modules and tool definitions
- Configuration management for different model providers
- Environment setup for development vs production
- Logging and monitoring patterns for AI agents
- Version control patterns for prompts and agent configurations
**Example for Supabase:**
- Frontend + Supabase project structure patterns
- Local development workflow with Supabase CLI
- Database migration and versioning strategies
- Environment management (local, staging, production)
- Testing strategies for full-stack Supabase applications
**Your development patterns:** [Specify the workflow and organizational patterns to research]
---
## SECURITY & BEST PRACTICES:
**What security considerations and best practices are critical for this technology?**
**Example for Pydantic AI:**
- API key management and rotation
- Input validation and sanitization for agent inputs
- Rate limiting and usage monitoring
- Prompt injection prevention
- Cost control and monitoring for model usage
**Example for Supabase:**
- Row Level Security (RLS) policy design
- API key vs JWT authentication patterns
- Database security and access control
- File upload security and virus scanning
- Rate limiting and abuse prevention
**Your security considerations:** [List technology-specific security patterns to research and document]
---
## COMMON GOTCHAS:
**What are the typical pitfalls, edge cases, or complex issues developers face with this technology?**
**Example for Pydantic AI:**
- Model context length limitations and management
- Handling model provider rate limits and errors
- Token counting and cost optimization
- Managing conversation state across requests
- Tool execution error handling and retries
**Example for Supabase:**
- RLS policy debugging and testing
- Real-time subscription performance with large datasets
- Edge function cold starts and optimization
- Database connection pooling in serverless environments
- CORS configuration for different domains
**Your gotchas:** [Identify the specific challenges developers commonly face]
---
## VALIDATION REQUIREMENTS:
**What specific validation, testing, or quality checks should be included in the template?**
**Example for Pydantic AI:**
- Agent response quality testing
- Tool integration testing
- Model provider fallback testing
- Cost and performance benchmarking
- Conversation flow validation
**Example for Supabase:**
- Database migration testing
- RLS policy validation
- Real-time functionality testing
- Authentication flow testing
- Edge function integration testing
**Your validation requirements:** [Specify the testing and validation patterns needed]
---
## INTEGRATION FOCUS:
**What specific integrations or third-party services are commonly used with this technology?**
**Example for Pydantic AI:**
- Integration with vector databases (Pinecone, Weaviate)
- Web scraping tools and APIs
- External API integrations for tools
- Monitoring services (Weights & Biases, LangSmith)
- Deployment platforms (Modal, Replicate)
**Example for Supabase:**
- Frontend frameworks (Next.js, React, Vue)
- Payment processing (Stripe)
- Email services (SendGrid, Resend)
- File processing (image optimization, document parsing)
- Analytics and monitoring tools
**Your integration focus:** [List the key integrations to research and include]
---
## ADDITIONAL NOTES:
**Any other specific requirements, constraints, or considerations for this template?**
**Example:** "Focus on TypeScript patterns and include comprehensive type definitions"
**Example:** "Emphasize serverless deployment patterns and cost optimization"
**Example:** "Include patterns for both beginner and advanced use cases"
**Your additional notes:** [Any other important considerations]
---
## TEMPLATE COMPLEXITY LEVEL:
**What level of complexity should this template target?**
- [ ] **Beginner-friendly** - Simple getting started patterns
- [ ] **Intermediate** - Production-ready patterns with common features
- [ ] **Advanced** - Comprehensive patterns including complex scenarios
- [ ] **Enterprise** - Full enterprise patterns with monitoring, scaling, security
**Your choice:** [Select the appropriate complexity level and explain why]
---
**REMINDER: Be as specific as possible in each section. The more detailed you are here, the better the generated template will be. This INITIAL.md file is where you should put all your requirements, not just basic information.**


@@ -0,0 +1,101 @@
# Template Generation Request
## TECHNOLOGY/FRAMEWORK:
**Example:** CrewAI multi-agent systems
**Your technology:** Pydantic AI agents
---
## TEMPLATE PURPOSE:
**What specific use case should this template be optimized for?**
**Your purpose:** Building intelligent AI agents with tool integration, conversation handling, and structured data validation using Pydantic AI framework
---
## CORE FEATURES:
**What are the essential features this template should help developers implement?**
**Your core features:**
- Agent creation with different model providers (OpenAI, Anthropic, Gemini)
- Tool integration patterns (web search, file operations, API calls)
- Conversation memory and context management
- Structured output validation with Pydantic models
- Error handling and retry mechanisms
- Testing patterns for AI agent behavior
---
## EXAMPLES TO INCLUDE:
**What working examples should be provided in the template?**
**Your examples:**
- Basic chat agent with memory
- Tool-enabled agent (web search + calculator)
- Multi-step workflow agent
- Agent with custom Pydantic models for structured outputs
- Testing examples for agent responses and tool usage
---
## DOCUMENTATION TO RESEARCH:
**What specific documentation should be thoroughly researched and referenced?**
**Your documentation:**
- https://ai.pydantic.dev/ - Official Pydantic AI documentation
- Model provider APIs (OpenAI, Anthropic) for integration patterns
- Tool integration best practices and examples
---
## DEVELOPMENT PATTERNS:
**What specific development patterns, project structures, or workflows should be researched and included?**
**Your development patterns:**
- How to structure agent modules and tool definitions
- Configuration management for different model providers
- Environment setup for development vs production
- Logging and monitoring patterns for AI agents
---
## SECURITY & BEST PRACTICES:
**What security considerations and best practices are critical for this technology?**
**Your security considerations:**
- API key management
- Input validation and sanitization for agent inputs
- Rate limiting and usage monitoring
- Prompt injection prevention
- Cost control and monitoring for model usage
---
## COMMON GOTCHAS:
**What are the typical pitfalls, edge cases, or complex issues developers face with this technology?**
**Your gotchas:**
- Handling model provider rate limits and errors
- Managing conversation state across requests
- Tool execution error handling and retries
---
## VALIDATION REQUIREMENTS:
**What specific validation, testing, or quality checks should be included in the template?**
**Your validation requirements:**
- Tool unit testing
- Agent unit testing


@@ -0,0 +1,604 @@
---
name: "PydanticAI Template Generator PRP"
description: "Generate comprehensive context engineering template for PydanticAI agent development with tools, memory, and structured outputs"
---
## Purpose
Generate a complete context engineering template package for **PydanticAI** that enables developers to rapidly build intelligent AI agents with tool integration, conversation handling, and structured data validation using the PydanticAI framework.
## Core Principles
1. **PydanticAI Specialization**: Deep integration with PydanticAI patterns for agent creation, tools, and structured outputs
2. **Complete Package Generation**: Create entire template ecosystem with working examples and validation
3. **Type Safety First**: Leverage PydanticAI's type-safe design and Pydantic validation throughout
4. **Production Ready**: Include security, testing, and best practices for production deployments
5. **Context Engineering Integration**: Apply proven context engineering workflows to AI agent development
---
## Goal
Generate a complete context engineering template package for **PydanticAI** that includes:
- PydanticAI-specific CLAUDE.md implementation guide with agent patterns
- Specialized PRP generation and execution commands for AI agents
- Domain-specific base PRP template with agent architecture patterns
- Comprehensive working examples (chat agents, tool integration, multi-step workflows)
- PydanticAI-specific validation loops and testing patterns
## Why
- **AI Development Acceleration**: Enable rapid development of production-grade PydanticAI agents
- **Pattern Consistency**: Maintain established AI agent architecture patterns and best practices
- **Quality Assurance**: Ensure comprehensive testing for agent behavior, tools, and outputs
- **Knowledge Capture**: Document PydanticAI-specific patterns, gotchas, and integration strategies
- **Scalable AI Framework**: Create reusable templates for various AI agent use cases
## What
### Template Package Components
**Complete Directory Structure:**
```
use-cases/pydantic-ai/
├── CLAUDE.md # PydanticAI implementation guide
├── .claude/commands/
│ ├── generate-pydantic-ai-prp.md # Agent PRP generation
│ └── execute-pydantic-ai-prp.md # Agent PRP execution
├── PRPs/
│ ├── templates/
│ │ └── prp_pydantic_ai_base.md # PydanticAI base PRP template
│ ├── ai_docs/ # PydanticAI documentation
│ └── INITIAL.md # Example agent feature request
├── examples/
│ ├── basic_chat_agent/ # Simple chat agent with memory
│ ├── tool_enabled_agent/ # Web search + calculator tools
│ ├── workflow_agent/ # Multi-step workflow processing
│ ├── structured_output_agent/ # Custom Pydantic models
│ └── testing_examples/ # Agent testing patterns
├── copy_template.py # Template deployment script
└── README.md # Comprehensive usage guide
```
**PydanticAI Integration:**
- Agent creation with multiple model providers (OpenAI, Anthropic, Gemini)
- Tool integration patterns and function registration
- Conversation memory and context management using dependencies
- Structured output validation with Pydantic models
- Testing patterns using TestModel and FunctionModel
- Security patterns for API key management and input validation
**Context Engineering Adaptation:**
- PydanticAI-specific research processes and documentation references
- Agent-appropriate validation loops and testing strategies
- AI framework-specialized implementation blueprints
- Integration with base context engineering principles for AI development
### Success Criteria
- [ ] Complete PydanticAI template package structure generated
- [ ] All required files present with PydanticAI-specific content
- [ ] Agent patterns accurately represent PydanticAI best practices
- [ ] Context engineering principles adapted for AI agent development
- [ ] Validation loops appropriate for testing AI agents and tools
- [ ] Template immediately usable for creating PydanticAI projects
- [ ] Integration with base context engineering framework maintained
- [ ] Comprehensive examples and testing documentation included
## All Needed Context
### Documentation & References (RESEARCHED)
```yaml
# IMPORTANT - use the Archon MCP server to get more Pydantic AI documentation!
- mcp: Archon
why: Official Pydantic AI documentation ready for RAG lookup
content: All Pydantic AI documentation
# PYDANTIC AI CORE DOCUMENTATION - Essential framework understanding
- url: https://ai.pydantic.dev/
why: Official PydanticAI documentation with core concepts and getting started
content: Agent creation, model providers, type safety, dependency injection
- url: https://ai.pydantic.dev/agents/
why: Comprehensive agent architecture, system prompts, tools, structured outputs
content: Agent components, execution methods, configuration options
- url: https://ai.pydantic.dev/models/
why: Model provider configuration, API key management, fallback models
content: OpenAI, Anthropic, Gemini integration patterns and authentication
- url: https://ai.pydantic.dev/tools/
why: Function tool registration, context usage, rich returns, dynamic tools
content: Tool decorators, parameter validation, documentation patterns
- url: https://ai.pydantic.dev/testing/
why: Testing strategies, TestModel, FunctionModel, pytest patterns
content: Unit testing, agent behavior validation, mock model usage
- url: https://ai.pydantic.dev/examples/
why: Working examples for various PydanticAI use cases
content: Chat apps, RAG systems, SQL generation, FastAPI integration
# CONTEXT ENGINEERING FOUNDATION - Base framework to adapt
- file: ../../../README.md
why: Core context engineering principles and workflow to adapt for AI agents
- file: ../../../.claude/commands/generate-prp.md
why: Base PRP generation patterns to specialize for PydanticAI development
- file: ../../../.claude/commands/execute-prp.md
why: Base PRP execution patterns to adapt for AI agent validation
- file: ../../../PRPs/templates/prp_base.md
why: Base PRP template structure to specialize for PydanticAI domain
# MCP SERVER EXAMPLE - Reference implementation
- file: ../mcp-server/CLAUDE.md
why: Example of domain-specific implementation guide patterns
- file: ../mcp-server/.claude/commands/prp-mcp-create.md
why: Example of specialized PRP generation command structure
```
### PydanticAI Framework Analysis (FROM RESEARCH)
```typescript
// PydanticAI Architecture Patterns (from official docs)
interface PydanticAIPatterns {
// Core agent patterns
agent_creation: {
model_providers: ["openai:gpt-4o", "anthropic:claude-3-sonnet", "google:gemini-1.5-flash"];
configuration: ["system_prompt", "deps_type", "output_type", "instructions"];
execution_methods: ["run()", "run_sync()", "run_stream()", "iter()"];
};
// Tool integration patterns
tool_system: {
registration: ["@agent.tool", "@agent.tool_plain", "tools=[]"];
context_access: ["RunContext[DepsType]", "ctx.deps", "dependency_injection"];
return_types: ["str", "ToolReturn", "structured_data", "rich_content"];
validation: ["parameter_schemas", "docstring_extraction", "type_hints"];
};
// Testing and validation
testing_patterns: {
unit_testing: ["TestModel", "FunctionModel", "Agent.override()"];
validation: ["capture_run_messages()", "pytest_fixtures", "mock_dependencies"];
evals: ["model_performance", "agent_behavior", "production_monitoring"];
};
// Production considerations
security: {
api_keys: ["environment_variables", "secure_storage", "key_rotation"];
input_validation: ["pydantic_models", "parameter_validation", "sanitization"];
monitoring: ["logfire_integration", "usage_tracking", "error_handling"];
};
}
```
### Development Workflow Analysis (FROM RESEARCH)
```yaml
# PydanticAI Development Patterns (researched from docs and examples)
project_structure:
basic_pattern: |
my_agent/
├── agent.py # Main agent definition
├── tools.py # Tool functions
├── models.py # Pydantic output models
├── dependencies.py # Context dependencies
└── tests/
├── test_agent.py
└── test_tools.py
advanced_pattern: |
agents_project/
├── agents/
│ ├── __init__.py
│ ├── chat_agent.py
│ └── workflow_agent.py
├── tools/
│ ├── __init__.py
│ ├── web_search.py
│ └── calculator.py
├── models/
│ ├── __init__.py
│ └── outputs.py
├── dependencies/
│ ├── __init__.py
│ └── database.py
├── tests/
└── examples/
package_management:
installation: "pip install pydantic-ai"
optional_deps: "pip install 'pydantic-ai[examples]'"
dev_deps: "pip install pytest pytest-asyncio inline-snapshot dirty-equals"
testing_workflow:
unit_tests: "pytest tests/ -v"
agent_testing: "Use TestModel for fast validation"
integration_tests: "Use real models with rate limiting"
evals: "Run performance benchmarks separately"
environment_setup:
api_keys: ["OPENAI_API_KEY", "ANTHROPIC_API_KEY", "GEMINI_API_KEY"]
development: "Set ALLOW_MODEL_REQUESTS=False for testing"
production: "Configure proper logging and monitoring"
```
### Security and Best Practices (FROM RESEARCH)
```typescript
// Security patterns specific to PydanticAI (from research)
interface PydanticAISecurity {
// API key management
api_security: {
storage: "environment_variables_only";
access_control: "minimal_required_permissions";
monitoring: "usage_tracking_and_alerts";
};
// Input validation and sanitization
input_security: {
validation: "pydantic_models_for_all_inputs";
sanitization: "escape_user_content";
rate_limiting: "prevent_abuse_patterns";
content_filtering: "block_malicious_prompts";
};
// Prompt injection prevention
prompt_security: {
system_prompts: "clear_instruction_boundaries";
user_input: "validate_and_sanitize";
tool_calls: "parameter_validation";
output_filtering: "structured_response_validation";
};
// Production considerations
production_security: {
monitoring: "logfire_integration_recommended";
error_handling: "no_sensitive_data_in_logs";
dependency_injection: "secure_context_management";
testing: "security_focused_unit_tests";
};
}
```
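A few of the input-security ideas above can be sketched in framework-agnostic Python; the deny-list below is purely illustrative (real prompt-injection defense needs far more than string matching):

```python
import re

MAX_INPUT_CHARS = 4000
# Hypothetical deny-list of obvious injection phrases — illustrative only
SUSPICIOUS = re.compile(r"ignore (all|previous) instructions|system prompt", re.I)


def sanitize_user_input(text: str) -> str:
    """Basic pre-model hygiene: length cap, control-char strip, deny-list check."""
    if len(text) > MAX_INPUT_CHARS:
        raise ValueError("input too long")
    # Drop non-printable characters but keep normal whitespace
    cleaned = "".join(ch for ch in text if ch.isprintable() or ch in "\n\t")
    if SUSPICIOUS.search(cleaned):
        raise ValueError("possible prompt injection detected")
    return cleaned
```

In a PydanticAI project these checks would typically live alongside Pydantic models that validate tool parameters and structured outputs.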
### Common Gotchas and Edge Cases (FROM RESEARCH)
```yaml
# PydanticAI-specific gotchas discovered through research
agent_gotchas:
model_limits:
issue: "Different models have different token limits and capabilities"
solution: "Use FallbackModel for automatic model switching"
validation: "Test with multiple model providers"
async_patterns:
issue: "Mixing sync and async agent calls can cause issues"
solution: "Consistent async/await patterns throughout"
validation: "Test both sync and async execution paths"
dependency_injection:
issue: "Complex dependency graphs can be hard to debug"
solution: "Keep dependencies simple and well-typed"
validation: "Unit test dependencies in isolation"
tool_integration_gotchas:
parameter_validation:
issue: "Tools may receive unexpected parameter types"
solution: "Use strict Pydantic models for tool parameters"
validation: "Test tools with invalid inputs"
context_management:
issue: "RunContext state can become inconsistent"
solution: "Design stateless tools when possible"
validation: "Test context isolation between runs"
error_handling:
issue: "Tool errors can crash entire agent runs"
solution: "Implement retry mechanisms and graceful degradation"
validation: "Test error scenarios and recovery"
testing_gotchas:
model_costs:
issue: "Real model testing can be expensive"
solution: "Use TestModel and FunctionModel for development"
validation: "Separate unit tests from expensive eval runs"
async_testing:
issue: "Async agent testing requires special setup"
solution: "Use pytest-asyncio and proper fixtures"
validation: "Test both sync and async code paths"
deterministic_behavior:
issue: "AI responses are inherently non-deterministic"
solution: "Focus on testing tool calls and structured outputs"
validation: "Use inline-snapshot for complex assertions"
```
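The rate-limit and tool-retry gotchas above usually call for retry with exponential backoff; a framework-agnostic stdlib sketch (wrap a model or tool call in `fn`):

```python
import random
import time


def call_with_backoff(fn, *, max_attempts=5, base_delay=0.5, retry_on=(Exception,)):
    """Retry a flaky call with exponential backoff plus jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retry_on:
            if attempt == max_attempts:
                raise  # graceful degradation should happen in the caller
            # 0.5s, 1s, 2s, ... with up to 100ms of jitter
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1)
            time.sleep(delay)
```

In practice `retry_on` would be narrowed to the provider's rate-limit exception type rather than `Exception`.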
## Implementation Blueprint
### Technology Research Phase (COMPLETED)
**Comprehensive PydanticAI Analysis Complete:**
**Core Framework Analysis:**
- PydanticAI architecture, agent creation patterns, model provider integration
- Project structure conventions from official docs and examples
- Dependency injection system and type-safe design principles
- Development workflow with async/sync patterns and streaming support
**Tool System Investigation:**
- Function tool registration patterns (@agent.tool vs @agent.tool_plain)
- Context management with RunContext and dependency injection
- Parameter validation, docstring extraction, and schema generation
- Rich return types and multi-modal content support
**Testing Framework Analysis:**
- TestModel and FunctionModel for unit testing without API calls
- Agent.override() patterns for test isolation
- Pytest integration with async testing and fixtures
- Evaluation strategies for model performance vs unit testing
**Security and Production Patterns:**
- API key management with environment variables and secure storage
- Input validation using Pydantic models and parameter schemas
- Rate limiting, monitoring, and Logfire integration
- Common security vulnerabilities and prevention strategies
### Template Package Generation
Create complete PydanticAI context engineering template based on research findings:
```yaml
Generation Task 1 - Create PydanticAI Template Directory Structure:
CREATE complete use case directory structure:
- use-cases/pydantic-ai/
- .claude/commands/ with PydanticAI-specific slash commands
- PRPs/templates/ with agent-focused base template
- examples/ with working agent implementations
- All subdirectories per template package requirements
Generation Task 2 - Generate PydanticAI-Specific CLAUDE.md:
CREATE PydanticAI global rules file including:
- PydanticAI agent creation and tool integration patterns
- Model provider configuration and API key management
- Agent architecture patterns (chat, workflow, tool-enabled)
- Testing strategies with TestModel/FunctionModel
- Security best practices for AI agents and tool integration
- Common gotchas: async patterns, context management, model limits
Generation Task 3 - Create PydanticAI PRP Commands:
GENERATE domain-specific slash commands:
- generate-pydantic-ai-prp.md with agent research patterns
- execute-pydantic-ai-prp.md with AI agent validation loops
- Include PydanticAI documentation references and research strategies
- Agent-specific success criteria and testing requirements
Generation Task 4 - Develop PydanticAI Base PRP Template:
CREATE specialized prp_pydantic_ai_base.md template:
- Pre-filled with agent architecture patterns from research
- PydanticAI-specific success criteria and validation gates
- Official documentation references and model provider guides
- Agent testing patterns with TestModel and validation strategies
Generation Task 5 - Create Working PydanticAI Examples:
GENERATE comprehensive example agents:
- basic_chat_agent: Simple conversation with memory
- tool_enabled_agent: Web search and calculator integration
- workflow_agent: Multi-step task processing
- structured_output_agent: Custom Pydantic models
- testing_examples: Unit tests and validation patterns
- Include configuration files and environment setup
Generation Task 6 - Create Template Copy Script:
CREATE Python script for template deployment:
- copy_template.py with command-line interface
- Copies entire PydanticAI template structure to target location
- Handles all files: CLAUDE.md, commands, PRPs, examples, etc.
- Error handling and success feedback with next steps
Generation Task 7 - Generate Comprehensive README:
CREATE PydanticAI-specific README.md:
- Clear description: "PydanticAI Context Engineering Template"
- Template copy script usage (prominently at top)
- PRP framework workflow for AI agent development
- Template structure with PydanticAI-specific explanations
- Quick start guide with agent creation examples
- Working examples overview and testing patterns
```
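Generation Task 6's `copy_template.py` could look roughly like the following — a hypothetical sketch using only the standard library (the `template_root` parameter is introduced here for testability and is not specified by the task):

```python
#!/usr/bin/env python3
"""Sketch of copy_template.py: deploy the template tree into a target directory."""
import argparse
import shutil
import sys
from pathlib import Path


def main(argv=None, template_root=None) -> int:
    parser = argparse.ArgumentParser(
        description="Copy the PydanticAI context engineering template."
    )
    parser.add_argument("target", type=Path, help="directory to copy the template into")
    args = parser.parse_args(argv)

    # Default to the directory this script lives in (the template root)
    root = template_root if template_root is not None else Path(__file__).resolve().parent

    if args.target.exists() and any(args.target.iterdir()):
        print(f"Error: {args.target} already exists and is not empty", file=sys.stderr)
        return 1

    shutil.copytree(
        root,
        args.target,
        ignore=shutil.ignore_patterns("copy_template.py", "__pycache__"),
        dirs_exist_ok=True,
    )
    print(f"Template copied to {args.target}.")
    print("Next steps: edit PRPs/INITIAL.md, then run /generate-pydantic-ai-prp")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
```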
### PydanticAI Specialization Details
```typescript
// Template specialization for PydanticAI
const pydantic_ai_specialization = {
agent_patterns: [
"chat_agent_with_memory",
"tool_integrated_agent",
"workflow_processing_agent",
"structured_output_agent"
],
validation: [
"agent_behavior_testing",
"tool_function_validation",
"output_schema_verification",
"model_provider_compatibility"
],
examples: [
"basic_conversation_agent",
"web_search_calculator_tools",
"multi_step_workflow_processing",
"custom_pydantic_output_models",
"comprehensive_testing_suite"
],
gotchas: [
"async_sync_mixing_issues",
"model_token_limits",
"dependency_injection_complexity",
"tool_error_handling_failures",
"context_state_management"
],
security: [
"api_key_environment_management",
"input_validation_pydantic_models",
"prompt_injection_prevention",
"rate_limiting_implementation",
"secure_tool_parameter_handling"
]
};
```
### Integration Points
```yaml
CONTEXT_ENGINEERING_FRAMEWORK:
- base_workflow: Inherit PRP generation/execution, adapt for AI agent development
- validation_principles: Extend with AI-specific testing (agent behavior, tool validation)
- documentation_standards: Maintain consistency while specializing for PydanticAI
PYDANTIC_AI_INTEGRATION:
- agent_architecture: Include chat, tool-enabled, and workflow agent patterns
- model_providers: Support OpenAI, Anthropic, Gemini configuration patterns
- testing_framework: Use TestModel/FunctionModel for development validation
- production_patterns: Include security, monitoring, and deployment considerations
TEMPLATE_STRUCTURE:
- directory_organization: Follow use case template patterns with AI-specific examples
- file_naming: generate-pydantic-ai-prp.md, prp_pydantic_ai_base.md
- content_format: Markdown with agent code examples and configuration
- command_patterns: Extend slash commands for AI agent development workflows
```
## Validation Loop
### Level 1: PydanticAI Template Structure Validation
```bash
# Verify complete PydanticAI template package structure
find use-cases/pydantic-ai -type f | sort
ls -la use-cases/pydantic-ai/.claude/commands/
ls -la use-cases/pydantic-ai/PRPs/templates/
ls -la use-cases/pydantic-ai/examples/
# Verify copy script and agent examples
test -f use-cases/pydantic-ai/copy_template.py
ls use-cases/pydantic-ai/examples/*/agent.py 2>/dev/null | wc -l # Should have agent files
python use-cases/pydantic-ai/copy_template.py --help 2>/dev/null || echo "Copy script needs help"
# Expected: All required files including working agent examples
# If missing: Generate missing components with PydanticAI patterns
```
### Level 2: PydanticAI Content Quality Validation
```bash
# Verify PydanticAI-specific content accuracy
grep -r "from pydantic_ai import Agent" use-cases/pydantic-ai/examples/
grep -r "@agent.tool" use-cases/pydantic-ai/examples/
grep -r "TestModel\|FunctionModel" use-cases/pydantic-ai/
# Check for PydanticAI patterns and avoid generic content
grep -r "TODO\|PLACEHOLDER" use-cases/pydantic-ai/
grep -r "openai:gpt-4o\|anthropic:" use-cases/pydantic-ai/
grep -r "RunContext\|deps_type" use-cases/pydantic-ai/
# Expected: Real PydanticAI code, no placeholders, agent patterns present
# If issues: Add proper PydanticAI-specific patterns and examples
```
### Level 3: PydanticAI Functional Validation
```bash
# Test PydanticAI template functionality
cd use-cases/pydantic-ai
# Test PRP generation with agent focus
/generate-pydantic-ai-prp INITIAL.md
ls PRPs/*.md | grep -v templates | head -1 # Should generate agent PRP
# Verify agent examples can be parsed (syntax check)
python -m py_compile examples/basic_chat_agent/agent.py 2>/dev/null && echo "Basic agent syntax OK"
python -m py_compile examples/tool_enabled_agent/agent.py 2>/dev/null && echo "Tool agent syntax OK"
# Expected: PRP generation works, agent examples have valid syntax
# If failing: Debug PydanticAI command patterns and fix agent code
```
### Level 4: PydanticAI Integration Testing
```bash
# Verify PydanticAI specialization maintains base framework compatibility
diff -r ../../.claude/commands/ .claude/commands/ | head -10
grep -r "Context is King" . | wc -l # Should inherit base principles
grep -r "pydantic.ai.dev\|PydanticAI" . | wc -l # Should have specializations
# Test agent examples have proper dependencies
grep -r "pydantic_ai" examples/ | wc -l # Should import PydanticAI
grep -r "pytest" examples/testing_examples/ | wc -l # Should have tests
# Expected: Proper specialization, working agent patterns, testing included
# If issues: Adjust to maintain compatibility while adding PydanticAI features
```
## Final Validation Checklist
### PydanticAI Template Package Completeness
- [ ] Complete directory structure: `tree use-cases/pydantic-ai`
- [ ] PydanticAI-specific files: CLAUDE.md with agent patterns, specialized commands
- [ ] Copy script present: `copy_template.py` with proper PydanticAI functionality
- [ ] README comprehensive: Includes agent development workflow and copy instructions
- [ ] Agent examples working: All examples use real PydanticAI code patterns
- [ ] Testing patterns included: TestModel/FunctionModel examples and validation
- [ ] Documentation complete: PydanticAI-specific patterns and gotchas documented
### Quality and Usability for PydanticAI
- [ ] No placeholder content: `grep -r "TODO\|PLACEHOLDER"` returns empty
- [ ] PydanticAI specialization: Agent patterns, tools, testing properly documented
- [ ] Validation loops work: All commands executable with agent-specific functionality
- [ ] Framework integration: Works with base context engineering for AI development
- [ ] Ready for AI development: Developers can immediately create PydanticAI agents
### PydanticAI Framework Integration
- [ ] Inherits base principles: Context engineering workflow preserved for AI agents
- [ ] Proper AI specialization: PydanticAI patterns, security, testing included
- [ ] Command compatibility: Slash commands work for agent development workflows
- [ ] Documentation consistency: Follows patterns while specializing for AI development
- [ ] Maintainable structure: Easy to update as PydanticAI framework evolves
---
## Anti-Patterns to Avoid
### PydanticAI Template Generation
- ❌ Don't create generic AI templates - research PydanticAI specifics thoroughly
- ❌ Don't skip agent architecture research - understand tools, memory, validation
- ❌ Don't use placeholder agent code - include real, working PydanticAI examples
- ❌ Don't ignore testing patterns - TestModel/FunctionModel are critical for AI
### PydanticAI Content Quality
- ❌ Don't assume AI patterns - document PydanticAI-specific gotchas explicitly
- ❌ Don't skip security research - API keys, input validation, prompt injection critical
- ❌ Don't ignore model providers - include OpenAI, Anthropic, Gemini patterns
- ❌ Don't forget async patterns - PydanticAI has specific async/sync considerations
### PydanticAI Framework Integration
- ❌ Don't break context engineering - maintain PRP workflow for AI development
- ❌ Don't duplicate base functionality - extend and specialize appropriately
- ❌ Don't ignore AI-specific validation - agent behavior testing is unique requirement
- ❌ Don't skip real examples - include working agents with tools and validation
**CONFIDENCE SCORE: 9/10** - Comprehensive PydanticAI research completed, framework patterns understood, ready to generate specialized context engineering template for AI agent development.

---
name: "Template Generator PRP Base"
description: "Meta-template for generating context engineering templates for specific technology domains and use cases"
---
## Purpose
Template optimized for AI agents to generate complete context engineering template packages for specific technology domains (AI frameworks, frontend stacks, backend technologies, etc.) with comprehensive domain specialization and validation.
## Core Principles
1. **Meta-Context Engineering**: Apply context engineering principles to generate domain-specific templates
2. **Technology Specialization**: Deep integration with target framework patterns and conventions
3. **Complete Package Generation**: Create entire template ecosystems, not just individual files
4. **Validation-Driven**: Include comprehensive domain-appropriate testing and validation loops
5. **Usability First**: Generate templates that are immediately usable by developers
---
## Goal
Generate a complete context engineering template package for **[TARGET_TECHNOLOGY]** that includes:
- Domain-specific CLAUDE.md implementation guide
- Specialized PRP generation and execution commands
- Technology-appropriate base PRP template
- Comprehensive examples and documentation
- Domain-specific validation loops and success criteria
## Why
- **Developer Acceleration**: Enable rapid application of context engineering to any technology
- **Pattern Consistency**: Maintain context engineering principles across all domains
- **Quality Assurance**: Ensure comprehensive validation and testing for each technology
- **Knowledge Capture**: Document best practices and patterns for specific technologies
- **Scalable Framework**: Create reusable templates that evolve with technology changes
## What
### Template Package Components
**Complete Directory Structure:**
```
use-cases/{technology-name}/
├── CLAUDE.md # Domain implementation guide
├── .claude/commands/
│ ├── generate-{technology}-prp.md # Domain PRP generation
│ └── execute-{technology}-prp.md # Domain PRP execution
├── PRPs/
│ ├── templates/
│ │ └── prp_{technology}_base.md # Domain base PRP template
│ ├── ai_docs/ # Domain documentation (optional)
│ └── INITIAL.md # Example feature request
├── examples/ # Domain code examples
├── copy_template.py # Template deployment script
└── README.md # Comprehensive usage guide
```
**Technology Integration:**
- Framework-specific tooling and commands
- Architecture patterns and conventions
- Development workflow integration
- Testing and validation approaches
- Security and performance considerations
**Context Engineering Adaptation:**
- Domain-specific research processes
- Technology-appropriate validation loops
- Framework-specialized implementation blueprints
- Integration with base context engineering principles
### Success Criteria
- [ ] Complete template package structure generated
- [ ] All required files present and properly formatted
- [ ] Domain-specific content accurately represents technology patterns
- [ ] Context engineering principles properly adapted to the technology
- [ ] Validation loops appropriate and executable for the framework
- [ ] Template immediately usable for creating projects in the domain
- [ ] Integration with base context engineering framework maintained
- [ ] Comprehensive documentation and examples included
## All Needed Context
### Documentation & References (MUST READ)
```yaml
# CONTEXT ENGINEERING FOUNDATION - Understand the base framework
- file: ../../../README.md
why: Core context engineering principles and workflow to adapt
- file: ../../../.claude/commands/generate-prp.md
why: Base PRP generation patterns to specialize for domain
- file: ../../../.claude/commands/execute-prp.md
why: Base PRP execution patterns to adapt for technology
- file: ../../../PRPs/templates/prp_base.md
why: Base PRP template structure to specialize for domain
# MCP SERVER EXAMPLE - Reference implementation of domain specialization
- file: ../mcp-server/CLAUDE.md
why: Example of domain-specific implementation guide patterns
- file: ../mcp-server/.claude/commands/prp-mcp-create.md
why: Example of specialized PRP generation command
- file: ../mcp-server/PRPs/templates/prp_mcp_base.md
why: Example of domain-specialized base PRP template
# TARGET TECHNOLOGY RESEARCH - Add domain-specific documentation
- url: [OFFICIAL_FRAMEWORK_DOCS]
why: Core framework concepts, APIs, and architectural patterns
- url: [BEST_PRACTICES_GUIDE]
why: Established patterns and conventions for the technology
- url: [SECURITY_CONSIDERATIONS]
why: Security best practices and common vulnerabilities
- url: [TESTING_FRAMEWORKS]
why: Testing approaches and validation patterns for the technology
- url: [DEPLOYMENT_PATTERNS]
why: Production deployment and monitoring considerations
```
### Current Context Engineering Structure
```bash
# Base framework structure to extend
context-engineering-intro/
├── README.md # Core principles to adapt
├── .claude/commands/ # Base commands to specialize
├── PRPs/templates/prp_base.md # Base template to extend
├── CLAUDE.md # Base rules to inherit
└── use-cases/
├── mcp-server/ # Reference specialization example
└── template-generator/ # This meta-template system
```
### Target Technology Analysis Requirements
```typescript
// Research areas for technology specialization
interface TechnologyAnalysis {
// Core framework patterns
architecture: {
project_structure: string[];
configuration_files: string[];
dependency_management: string;
module_organization: string[];
};
// Development workflow
development: {
package_manager: string;
dev_server_commands: string[];
build_process: string[];
testing_frameworks: string[];
};
// Best practices
patterns: {
code_organization: string[];
state_management: string[];
error_handling: string[];
performance_optimization: string[];
};
// Integration points
ecosystem: {
common_libraries: string[];
deployment_platforms: string[];
monitoring_tools: string[];
CI_CD_patterns: string[];
};
}
```
### Known Template Generation Patterns
```typescript
// CRITICAL: Template generation must follow these patterns
// 1. ALWAYS inherit from base context engineering principles
const basePatterns = {
prp_workflow: "INITIAL.md → generate-prp → execute-prp",
validation_loops: "syntax → unit → integration → deployment",
context_richness: "documentation + examples + patterns + gotchas"
};
// 2. ALWAYS specialize for the target technology
const specialization = {
tooling: "Replace generic commands with framework-specific ones",
patterns: "Include framework architectural conventions",
validation: "Use technology-appropriate testing and linting",
examples: "Provide real, working code examples for the domain"
};
// 3. ALWAYS maintain usability and completeness
const quality_gates = {
immediate_usability: "Template works out of the box",
comprehensive_docs: "All patterns and gotchas documented",
working_examples: "Examples compile and run successfully",
validation_loops: "All validation commands are executable"
};
// 4. Common pitfalls to avoid
const anti_patterns = {
generic_content: "Don't use placeholder text - research actual patterns",
incomplete_research: "Don't skip technology-specific documentation",
broken_examples: "Don't include non-working code examples",
missing_validation: "Don't skip domain-appropriate testing patterns"
};
```
## Implementation Blueprint
### Technology Research Phase
**CRITICAL: Web search extensively before any template generation. This is essential for success.**
Conduct comprehensive analysis of the target technology using web research:
```yaml
Research Task 1 - Core Framework Analysis (WEB SEARCH REQUIRED):
WEB SEARCH and STUDY official documentation thoroughly:
- Framework architecture and design patterns
- Project structure conventions and best practices
- Configuration file patterns and management approaches
- Package/dependency management for the technology
- Getting started guides and setup procedures
Research Task 2 - Development Workflow Analysis (WEB SEARCH REQUIRED):
WEB SEARCH and ANALYZE development patterns:
- Local development setup and tooling
- Build processes and compilation steps
- Testing frameworks commonly used with this technology
- Debugging tools and development environments
- CLI commands and package management workflows
Research Task 3 - Best Practices Investigation (WEB SEARCH REQUIRED):
WEB SEARCH and RESEARCH established patterns:
- Code organization and file structure conventions
- Security best practices specific to this technology
- Common gotchas, pitfalls, and edge cases
- Error handling patterns and strategies
- Performance considerations and optimization techniques
Research Task 4 - Template Package Structure Planning:
PLAN how to create context engineering template for this technology:
- How to adapt PRP framework for this specific technology
- What domain-specific CLAUDE.md rules are needed
- What validation loops are appropriate for this framework
- What examples and documentation should be included
```
### Template Package Generation
Create complete context engineering template package based on web research findings:
```yaml
Generation Task 1 - Create Template Directory Structure:
CREATE complete use case directory structure:
- use-cases/{technology-name}/
- .claude/commands/ subdirectory
- PRPs/templates/ subdirectory
- examples/ subdirectory
- All other required subdirectories per template package requirements
Generation Task 2 - Generate Domain-Specific CLAUDE.md:
CREATE technology-specific global rules file:
- Technology-specific tooling and package management commands
- Framework architectural patterns and conventions from web research
- Development workflow procedures specific to this technology
- Security and best practices discovered through research
- Common gotchas and integration points found in documentation
Generation Task 3 - Create Specialized Template PRP Commands:
GENERATE domain-specific slash commands:
- generate-{technology}-prp.md with technology research patterns
- execute-{technology}-prp.md with framework validation loops
- Commands should reference technology-specific patterns from research
- Include web search strategies specific to this technology domain
Generation Task 4 - Develop Domain-Specific Base PRP Template:
CREATE specialized prp_{technology}_base.md template:
- Pre-filled with technology context from web research
- Technology-specific success criteria and validation gates
- Framework documentation references found through research
- Domain-appropriate implementation patterns and validation loops
Generation Task 5 - Create Examples and INITIAL.md Template:
GENERATE comprehensive template package content:
- INITIAL.md example showing how to request features for this technology
- Working code examples relevant to the technology (from research)
- Configuration file templates and patterns
Generation Task 6 - Create Template Copy Script:
CREATE Python script for template deployment:
- copy_template.py script that accepts target directory argument
- Copies entire template directory structure to specified location
- Includes all files: CLAUDE.md, commands, PRPs, examples, etc.
- Handles directory creation and file copying with error handling
- Simple command-line interface for easy usage
Generation Task 7 - Generate Comprehensive README:
CREATE comprehensive but concise README.md:
- Clear description of what this template is for and its purpose
- Explanation of the PRP framework workflow (3-step process)
- Template copy script usage instructions (prominently placed near top)
- Quick start guide with concrete examples
- Template structure overview showing all generated files
- Usage examples specific to this technology domain
```
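The directory skeleton from Generation Task 1 can be sketched in a few lines of stdlib Python. This is a minimal sketch, not a prescribed implementation — the function name and the exact file list are illustrative, mirroring the package tree shown in the "What" section above:

```python
from pathlib import Path

# Directories and files every generated template package must contain;
# contents are filled in by the later generation tasks.
REQUIRED_DIRS = [
    ".claude/commands",
    "PRPs/templates",
    "PRPs/ai_docs",
    "examples",
]

REQUIRED_FILES = [
    "CLAUDE.md",
    "README.md",
    "copy_template.py",
    "PRPs/INITIAL.md",
]


def scaffold_template(root: Path, technology: str) -> Path:
    """Create the empty directory/file skeleton for a use-case template."""
    base = root / "use-cases" / technology
    for rel in REQUIRED_DIRS:
        (base / rel).mkdir(parents=True, exist_ok=True)
    # Technology-specific commands and base PRP follow the naming convention
    for rel in REQUIRED_FILES + [
        f".claude/commands/generate-{technology}-prp.md",
        f".claude/commands/execute-{technology}-prp.md",
        f"PRPs/templates/prp_{technology}_base.md",
    ]:
        path = base / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.touch()
    return base
```

Generation Tasks 2–7 then overwrite these empty files with researched, domain-specific content.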
### Implementation Details for Copy Script and README
**Copy Script (copy_template.py) Requirements:**
```python
# Essential copy script functionality:
# 1. Accept target directory as command line argument
# 2. Copy entire template directory structure to target location
# 3. Include ALL files: CLAUDE.md, .claude/, PRPs/, examples/, README.md
# 4. Handle directory creation and error handling
# 5. Provide clear success feedback with next steps
# 6. Simple usage: python copy_template.py /path/to/target
```
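The six requirements above translate almost directly into stdlib Python. The following is a minimal sketch, not the prescribed implementation — excluding the deploy script itself from the copy, and the exact messages printed, are illustrative choices:

```python
import argparse
import shutil
import sys
from pathlib import Path


def copy_template(source: Path, target: Path) -> None:
    """Copy the whole template tree, including dotfiles like .claude/."""
    # Requirement 4: directory creation and error handling
    if target.exists() and any(target.iterdir()):
        raise FileExistsError(f"{target} already exists and is not empty")
    # Requirements 2 & 3: copy the entire structure with ALL files;
    # the deploy script itself need not travel with the copied template
    shutil.copytree(
        source, target, dirs_exist_ok=True,
        ignore=shutil.ignore_patterns("copy_template.py", "__pycache__"),
    )


def main() -> int:
    # Requirement 1: accept target directory as a command-line argument
    parser = argparse.ArgumentParser(
        description="Copy this context engineering template to a new project."
    )
    parser.add_argument("target", help="directory to copy the template into")
    args = parser.parse_args()
    try:
        copy_template(Path(__file__).parent, Path(args.target))
    except (FileExistsError, OSError) as exc:
        print(f"Error: {exc}", file=sys.stderr)
        return 1
    # Requirement 5: clear success feedback with next steps
    print(f"Template copied to {args.target}")
    print("Next steps: edit PRPs/INITIAL.md, then run the template's generate-prp command")
    return 0


# Requirement 6 -- standard entry point when saved as copy_template.py:
#   if __name__ == "__main__":
#       sys.exit(main())
```

Usage then matches the requirement: `python copy_template.py /path/to/target`.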
**README Structure Requirements:**
```markdown
# Must include these sections in this order:
# 1. Title and brief description of template purpose
# 2. 🚀 Quick Start - Copy Template First (prominently at top)
# 3. 📋 PRP Framework Workflow (3-step process explanation)
# 4. 📁 Template Structure (directory tree with explanations)
# 5. 🎯 What You Can Build (technology-specific examples)
# 6. 📚 Key Features (framework capabilities)
# 7. 🔍 Examples Included (working examples provided)
# 8. 📖 Documentation References (research sources)
# 9. 🚫 Common Gotchas (technology-specific pitfalls)
# Copy script usage must be prominently featured near the top
# PRP workflow must clearly show the 3 steps with actual commands
# Everything should be technology-specific, not generic
```
### Domain Specialization Details
```typescript
// Template specialization patterns for specific domains
// For AI/ML Frameworks (Pydantic AI, CrewAI, etc.)
const ai_specialization = {
patterns: ["agent_architecture", "tool_integration", "model_configuration"],
validation: ["model_response_testing", "agent_behavior_validation"],
examples: ["basic_agent", "multi_agent_system", "tool_integration"],
gotchas: ["token_limits", "model_compatibility", "async_patterns"]
};
// For Frontend Frameworks (React, Vue, Svelte, etc.)
const frontend_specialization = {
patterns: ["component_architecture", "state_management", "routing"],
validation: ["component_testing", "e2e_testing", "accessibility"],
examples: ["basic_app", "state_integration", "api_consumption"],
gotchas: ["bundle_size", "ssr_considerations", "performance"]
};
// For Backend Frameworks (FastAPI, Express, Django, etc.)
const backend_specialization = {
patterns: ["api_design", "database_integration", "authentication"],
validation: ["api_testing", "database_testing", "security_testing"],
examples: ["rest_api", "auth_system", "database_models"],
gotchas: ["security_vulnerabilities", "performance_bottlenecks", "scalability"]
};
// For Database/Data Frameworks (SQLModel, Prisma, etc.)
const data_specialization = {
patterns: ["schema_design", "migration_management", "query_optimization"],
validation: ["schema_testing", "migration_testing", "query_performance"],
examples: ["basic_models", "relationships", "complex_queries"],
gotchas: ["migration_conflicts", "n+1_queries", "index_optimization"]
};
```
### Integration Points
```yaml
CONTEXT_ENGINEERING_FRAMEWORK:
- base_workflow: Inherit core PRP generation and execution patterns from base framework
- validation_principles: Extend base validation with domain-specific checks for the technology
- documentation_standards: Maintain consistency with base context engineering documentation patterns
TECHNOLOGY_INTEGRATION:
- package_management: Include framework-specific package managers and tooling
- development_tools: Include technology-specific development and testing tools
- framework_patterns: Use technology-appropriate architectural and code patterns
- validation_approaches: Include framework-specific testing and validation methods
TEMPLATE_STRUCTURE:
- directory_structure: Follow established use case template patterns from base framework
- file_naming: Maintain consistent naming conventions (generate-{tech}-prp.md, etc.)
- content_format: Use established markdown and documentation formats
- command_patterns: Extend base slash command functionality for the specific technology
```
## Validation Loop
### Level 1: Template Structure Validation
```bash
# CRITICAL: Verify complete template package structure
find use-cases/{technology-name} -type f | sort
ls -la use-cases/{technology-name}/.claude/commands/
ls -la use-cases/{technology-name}/PRPs/templates/
# Verify copy script exists and is functional
test -f use-cases/{technology-name}/copy_template.py
python use-cases/{technology-name}/copy_template.py --help 2>/dev/null || echo "Copy script needs help option"
# Expected: All required files present including copy_template.py
# If missing: Generate missing files following established patterns
```
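The same structure check can be automated for CI rather than eyeballing `find` output. A small sketch, assuming the required-file list shown earlier (the function name is illustrative):

```python
from pathlib import Path


def missing_template_files(template_root: Path, technology: str) -> list[str]:
    """Return required template files that are absent; empty list means complete."""
    required = [
        "CLAUDE.md",
        "README.md",
        "copy_template.py",
        "PRPs/INITIAL.md",
        f"PRPs/templates/prp_{technology}_base.md",
        f".claude/commands/generate-{technology}-prp.md",
        f".claude/commands/execute-{technology}-prp.md",
    ]
    return [rel for rel in required if not (template_root / rel).is_file()]
```

A non-empty return value maps directly to the "If missing" branch above: generate the listed files before proceeding.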
### Level 2: Content Quality Validation
```bash
# Verify domain-specific content accuracy
grep -r "TODO\|PLACEHOLDER\|{domain}" use-cases/{technology-name}/
grep -r "{technology}" use-cases/{technology-name}/ | wc -l
# Check for technology-specific patterns
grep -r "framework-specific-pattern" use-cases/{technology-name}/
grep -r "validation" use-cases/{technology-name}/.claude/commands/
# Expected: No placeholder content, technology patterns present
# If issues: Research and add proper domain-specific content
```
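The placeholder scan above can likewise be scripted with line-level reporting. A minimal sketch — the marker pattern is an assumption matching the grep commands above, and a real scan would need an allow-list for files that legitimately mention these markers (such as validation commands themselves):

```python
import re
from pathlib import Path

# Markers that indicate unfinished, non-specialized content
PLACEHOLDER = re.compile(r"TODO|PLACEHOLDER|\{technology\}|\{domain\}")


def find_placeholders(template_root: Path) -> list[tuple[str, int, str]]:
    """Return (file, line number, line) for every leftover placeholder marker."""
    hits = []
    for path in sorted(template_root.rglob("*.md")):
        for lineno, line in enumerate(
            path.read_text(encoding="utf-8").splitlines(), 1
        ):
            if PLACEHOLDER.search(line):
                hits.append(
                    (str(path.relative_to(template_root)), lineno, line.strip())
                )
    return hits
```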
### Level 3: Functional Validation
```bash
# Test template functionality
cd use-cases/{technology-name}
# Test PRP generation command
/generate-prp INITIAL.md
ls PRPs/*.md | grep -v templates
# Test template completeness
grep -r "Context is King" . | wc -l # Should inherit principles
grep -r "{technology-specific}" . | wc -l # Should have specializations
# Expected: PRP generation works, content is specialized
# If failing: Debug command patterns and template structure
```
### Level 4: Integration Testing
```bash
# Verify integration with base context engineering framework
diff -r ../../.claude/commands/ .claude/commands/ | head -20
diff ../../CLAUDE.md CLAUDE.md | head -20
# Test template produces working results
cd examples/
# Run any example validation commands specific to the technology
# Expected: Proper specialization without breaking base patterns
# If issues: Adjust specialization to maintain compatibility
```
## Final Validation Checklist
### Template Package Completeness
- [ ] Complete directory structure: `tree use-cases/{technology-name}`
- [ ] All required files present: CLAUDE.md, commands, base PRP, examples
- [ ] Copy script present: `copy_template.py` with proper functionality
- [ ] README comprehensive: Includes copy script instructions and PRP workflow
- [ ] Domain-specific content: Technology patterns accurately represented
- [ ] Working examples: All examples compile/run successfully
- [ ] Documentation complete: README and usage instructions clear
### Quality and Usability
- [ ] No placeholder content: `grep -r "TODO\|PLACEHOLDER"` returns empty
- [ ] Technology specialization: Framework patterns properly documented
- [ ] Validation loops work: All commands executable and functional
- [ ] Integration maintained: Works with base context engineering framework
- [ ] Ready for use: Developer can immediately start using template
### Framework Integration
- [ ] Inherits base principles: Context engineering workflow preserved
- [ ] Proper specialization: Technology-specific patterns included
- [ ] Command compatibility: Slash commands work as expected
- [ ] Documentation consistency: Follows established documentation patterns
- [ ] Maintainable structure: Easy to update as technology evolves
---
## Anti-Patterns to Avoid
### Template Generation
- ❌ Don't create generic templates - always research and specialize deeply
- ❌ Don't skip comprehensive technology research - understand frameworks thoroughly
- ❌ Don't use placeholder content - always include real, researched information
- ❌ Don't ignore validation loops - include comprehensive testing for the technology
### Content Quality
- ❌ Don't assume knowledge - document everything explicitly for the domain
- ❌ Don't skip edge cases - include common gotchas and error handling
- ❌ Don't ignore security - always include security considerations for the technology
- ❌ Don't forget maintenance - ensure templates can evolve with technology changes
### Framework Integration
- ❌ Don't break base patterns - maintain compatibility with context engineering principles
- ❌ Don't duplicate effort - reuse and extend base framework components
- ❌ Don't ignore consistency - follow established naming and structure conventions
- ❌ Don't skip validation - ensure templates actually work before completion