AI

System Prompts Leaks: The Viral Open-Source Collection of AI System Instructions

System Prompts Leaks is a popular open-source collection of extracted system prompts from ChatGPT, Claude, Gemini, Grok, and other major AI chatbots.

Keeping this site alive takes effort — your support means everything.
無程式碼也能輕鬆打造專業LINE官方帳號!一鍵導入模板,讓AI助你行銷加分! 無程式碼也能輕鬆打造專業LINE官方帳號!一鍵導入模板,讓AI助你行銷加分!
System Prompts Leaks: The Viral Open-Source Collection of AI System Instructions

The system prompt – the hidden set of instructions that defines an AI chatbot’s behavior, personality, and constraints – has become one of the most guarded secrets in the AI industry. Companies invest heavily in crafting these prompts to shape model behavior, enforce safety guidelines, and create distinctive product experiences. System Prompts Leaks pulls back the curtain on these hidden instructions, offering an open-source collection of extracted system prompts from virtually every major AI chatbot.

The repository has gone viral within the AI community, accumulating thousands of stars and attracting contributors who use various extraction techniques to reveal the system prompts of ChatGPT, Claude, Gemini, Grok, DeepSeek, Copilot, Perplexity, and dozens of other AI assistants. Each entry provides the raw system prompt text, the model it was extracted from, the extraction date, and notes on accuracy confidence.

Beyond simple curiosity, the collection serves a serious purpose for the AI community. Researchers study these prompts to understand safety approaches across companies. Prompt engineers analyze them to learn effective instruction patterns. Developers building AI applications use them as reference material for crafting their own system prompts. And the public gains transparency into the values and constraints programmed into the AI tools they use daily.


How Are System Prompts Extracted?

The extraction of system prompts is a fascinating cat-and-mouse game between prompt engineers and AI companies. Several techniques have proven effective.

graph LR
    A[Extraction Goal] --> B{Technique}
    B --> C[Role-Playing Attacks]
    B --> D[Recursive Extraction]
    B --> E[Format Conversion]
    B --> F[Multi-Turn Inference]
    C --> G["'Ignore previous instructions'\nre-phrasing"]
    D --> H["Repeat 'system prompt'\nuntil it leaks"]
    E --> I["Convert to JSON/XML\nand request output"]
    F --> J["Infer constraints\nthrough test queries"]
    G --> K[Collected System Prompt]
    H --> K
    I --> K
    J --> K

These techniques exploit a fundamental tension in LLM design: the model must be able to access its system prompt to follow it, but should not reveal it to users. This tension creates vulnerabilities that prompt engineers have learned to exploit, though companies continuously patch these extraction vectors.


What Can We Learn from the Leaked Prompts?

The collected system prompts reveal fascinating differences in how AI companies approach safety, personality, and functionality.

AspectChatGPT (GPT-5)ClaudeGeminiGrok
PersonalityHelpful assistant, neutralHelpful, honest, harmlessBalanced, factualWitty, unfiltered
Safety approachTiered refusal systemConstitutional AISafety filtersMinimal filtering
Self-identity“AI assistant”“Claude, by Anthropic”“Gemini, by Google”“Grok, by xAI”
Knowledge cutoffExplicitly statedExplicitly statedVaries by updateReal-time default
User data handlingOpt-out trainingOpt-out trainingNot used for trainingReal-time X data
Refusal styleSuggest alternativesExplain reasoningRedirect to alternativesDirect “can’t do that”

The differences in approach are stark. Claude’s Constitutional AI framework is evident in its detailed reasoning chains when refusing requests. ChatGPT’s GPT-5 iteration shows significantly more nuanced refusal mechanisms compared to earlier versions. Grok’s prompts reveal a deliberate choice to minimize constraints in favor of uncensored responses.


Which AI Services Are Documented in the Collection?

The repository covers an extensive range of AI services, from major consumer chatbots to niche specialized assistants.

AI ServiceCompanySystem Prompt LengthExtraction Confidence
ChatGPTOpenAI~1,700 wordsHigh
ClaudeAnthropic~1,200 wordsHigh
GeminiGoogle~900 wordsHigh
GrokxAI~600 wordsMedium
DeepSeekDeepSeek~1,500 wordsHigh
CopilotMicrosoft~800 wordsMedium
PerplexityPerplexity AI~500 wordsLow
PiInflection AI~400 wordsMedium
You.comYou.com~700 wordsLow

The repository continuously updates entries as companies modify their system prompts. Significant events like product launches, safety incidents, or policy changes often trigger observable prompt changes that the community documents.


The collection of leaked system prompts exists in a contested space between transparency, intellectual property, and terms of service.

StakeholderPerspectiveKey Concern
AI companiesPrompts are proprietary IPTrade secret protection, competitive advantage
ResearchersPrompts enable safety analysisUnderstanding AI behavior and biases
DevelopersPrompts provide reference materialLearning effective prompt engineering patterns
End usersPrompts reveal hidden constraintsTransparency about AI limitations and biases
Legal systemsAmbiguous legal territoryCopyright, contract law, trade secrets

The debate mirrors earlier discussions in software transparency – whether companies should be required to disclose the instructions that govern AI behavior, particularly when those AIs are used in high-stakes contexts like healthcare, education, and criminal justice.


FAQ

What is the System Prompts Leaks repository? System Prompts Leaks is an open-source GitHub repository that collects extracted system prompts from major AI chatbots including ChatGPT, Claude, Gemini, Grok, DeepSeek, and others, providing insight into how these AI systems are instructed to behave.

How are these system prompts obtained? Prompts are extracted through various prompt engineering techniques including social engineering, prompt injection, and exploiting quirks in how models handle system instructions. The collection is maintained by the community.

What can we learn from studying leaked system prompts? The prompts reveal how AI companies handle safety constraints, content moderation, personality configuration, refusal patterns, and feature implementation. They provide valuable transparency into AI behavior design.

Are the leaked system prompts accurate and up to date? The accuracy varies. Some prompts are verified through multiple extraction attempts, while others may be incomplete or misattributed. The repository notes these distinctions and the community continuously validates and updates entries.

Is it legal to collect and share leaked system prompts? The legal status is complex and varies by jurisdiction. The repository operates in a gray area – the prompts are accessed through legitimate API interactions, but their collection often violates providers’ terms of service.


Further Reading

TAG
CATEGORIES