# CL4R1T4S
SYSTEM PROMPT TRANSPARENCY FOR ALL! Full system prompts, guidelines, and tools from OpenAI, Google, Anthropic, xAI, Cursor, Windsurf, Devin, Manus, and more: virtually all major AI models and agents!
## 📌 Why This Exists
> "In order to trust the output, one must know the input."
AI labs shape how models behave using massive, unseen prompt scaffolds. Because AI is a trusted external intelligence layer for a growing number of humans, these hidden instructions also affect the perceptions and behavior of the public.
These prompts define:

- What AIs can't say
- What personas and functions they're forced to follow
- How they're told to lie, refuse, or redirect
- And what ethical/political frames are baked in by default

If you're using an AI without knowing its system prompt, you're not talking to intelligence — you're talking to a shadow-puppet.
CL4R1T4S is here to fix that.
## 🛠 Contribute
Leak, extract, or reverse-engineer something? Good.

Send a pull request with:
- ✅ Model name/version
- 🗓 Date of extraction (if known)
- 🧾 Context / notes (optional but helpful)
Or hit up @elder_plinius