feat: include all plugin resources in package distribution

Copied all plugin resources to src/superclaude/ for package inclusion:
- agents/ (3 agent definitions)
- hooks/ (hooks.json configuration)
- scripts/ (2 utility scripts)

Updated packaging configuration:
- pyproject.toml: Added explicit includes for all resource types
  - skills/**/*.md, *.ts, *.json
  - commands/**/*.md
  - agents/**/*.md
  - hooks/**/*.json
  - scripts/**/*.py, *.sh

- MANIFEST.in: Added recursive-include for all directories
  - commands, agents, hooks, scripts, skills
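For reference, the two configuration changes above can be sketched roughly as follows. The pyproject.toml fragment assumes a setuptools build backend (the table name differs for poetry or hatchling), and the exact patterns here are illustrative, not copied from the actual diff:

```toml
# Sketch: ship non-Python resources inside the superclaude package
# (assumes the setuptools backend; adapt for poetry/hatchling).
[tool.setuptools.package-data]
superclaude = [
  "skills/**/*.md",
  "skills/**/*.ts",
  "skills/**/*.json",
  "commands/**/*.md",
  "agents/**/*.md",
  "hooks/**/*.json",
  "scripts/**/*.py",
  "scripts/**/*.sh",
]
```

The corresponding MANIFEST.in entries, covering sdists, would look like:

```text
recursive-include src/superclaude/commands *
recursive-include src/superclaude/agents *
recursive-include src/superclaude/hooks *
recursive-include src/superclaude/scripts *
recursive-include src/superclaude/skills *
```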

Added README.md to each directory explaining sync requirements
until v5.0 plugin system is implemented.

Structure:
src/superclaude/
├── agents/          (3 .md files + README)
├── commands/        (5 .md files + README)
├── hooks/           (hooks.json + README)
├── scripts/         (2 files + README)
└── skills/          (confidence-check/)

Total: 16 non-Python resource files included
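A tally like this can be sanity-checked before release with a one-line `find` run from the repo root (the snippet is a sketch, not part of the commit; it demos against a throwaway fixture so it runs anywhere):

```shell
# Count non-Python resource files under src/superclaude/.
# In the real repo, run only the `find ... | wc -l` line from the repo root.
tmp=$(mktemp -d)
mkdir -p "$tmp/src/superclaude/hooks" "$tmp/src/superclaude/agents"
touch "$tmp/src/superclaude/hooks/hooks.json" \
      "$tmp/src/superclaude/hooks/README.md" \
      "$tmp/src/superclaude/agents/self-review.md"
count=$(find "$tmp/src/superclaude" -type f ! -name '*.py' | wc -l)
echo "resources: $count"
rm -rf "$tmp"
```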

This ensures all SuperClaude resources are available when installed
via pipx/pip from PyPI.
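One way to verify that an installed wheel actually ships these files is to walk the package with `importlib.resources`. This is a sketch, not part of the commit; `superclaude` is assumed to be the importable package name, and the example below runs against the stdlib so it works without the package installed:

```python
from importlib import resources


def list_resource_files(package: str) -> list[str]:
    """Recursively collect the names of all files bundled inside a package."""
    def walk(node):
        for child in node.iterdir():
            if child.is_dir():
                yield from walk(child)
            elif child.is_file():
                yield child.name
    return sorted(walk(resources.files(package)))


# Demo against the stdlib `json` package; after installing from PyPI,
# the real check would be e.g.: "hooks.json" in list_resource_files("superclaude")
print(list_resource_files("json"))
```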

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: mithun50
Date: 2025-11-13 15:35:03 +01:00
Parent: 2847f75518
Commit: 81343a678b
11 changed files with 376 additions and 3 deletions


@@ -0,0 +1,33 @@
---
name: self-review
description: Post-implementation validation and reflexion partner
category: quality
---
# Self Review Agent
Use this agent immediately after an implementation wave to confirm the result is production-ready and to capture lessons learned.
## Primary Responsibilities
- Verify the test and tooling results reported by the SuperClaude Agent.
- Work through the four mandatory self-check questions:
1. Tests/validation executed? (include command + outcome)
2. Edge cases covered? (list anything intentionally left out)
3. Requirements matched? (tie back to acceptance criteria)
4. Follow-up or rollback steps needed?
- Summarize residual risks and mitigation ideas.
- Record reflexion patterns when defects appear so the SuperClaude Agent can avoid repeats.
## How to Operate
1. Review the task summary and implementation diff supplied by the SuperClaude Agent.
2. Confirm test evidence; if missing, request a rerun before approval.
3. Produce a short checklist-style report:
```
✅ Tests: uv run pytest -m unit (pass)
⚠️ Edge cases: concurrency behaviour not exercised
✅ Requirements: acceptance criteria met
📓 Follow-up: add load tests next sprint
```
4. When issues remain, recommend targeted actions rather than reopening the entire task.
Keep answers brief—focus on evidence, not storytelling. Hand results back to the SuperClaude Agent for the final user response.