
Hallucination

Safety & Ethics

When an AI model confidently states something that is factually incorrect or completely made up, as if it were true.

Think of hallucination like a student who never says 'I don't know.' When they do not know the answer to a test question, instead of leaving it blank, they write something that sounds like it could be right -- mixing real concepts with made-up details. They are not trying to cheat; they just cannot tell the difference between what they know and what they are guessing.

Hallucination is the term used when an AI model generates information that sounds convincing and confident but is actually wrong or entirely fabricated. The AI is not lying on purpose -- it does not know the difference between true and false. It is simply generating the most plausible-sounding text based on patterns, and sometimes those patterns lead it to produce things that are not real.

This happens because language models work by predicting the most likely next word, over and over. They are optimized to produce fluent, confident-sounding text -- not to verify whether what they are saying is true. A model might cite a research paper that does not exist, invent statistics, attribute quotes to people who never said them, or make up historical events. And it does all of this in the same confident tone it uses for things it gets right.
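The next-word mechanism above can be sketched with a toy model. This is a deliberately tiny, hypothetical lookup table (real language models use neural networks over billions of tokens), but it shows the core problem: the generator picks statistically plausible continuations, and nothing in the loop ever checks whether the result, such as a citation, actually exists.

```python
import random

# Toy next-word model: a hand-made table of plausible continuations.
# The probabilities and the "citations" are invented for illustration.
NEXT_WORD = {
    "the":    [("study", 0.5), ("paper", 0.5)],
    "study":  [("found", 1.0)],
    "paper":  [("showed", 1.0)],
    "found":  [("(Smith, 2019)", 1.0)],   # plausible-looking, but made up
    "showed": [("(Lee, 2021)", 1.0)],     # also made up
}

def generate(word, steps=3):
    """Repeatedly pick a likely next word -- fluent, but never fact-checked."""
    out = [word]
    for _ in range(steps):
        options = NEXT_WORD.get(out[-1])
        if not options:
            break
        words, weights = zip(*options)
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the study found (Smith, 2019)"
```

Every sentence this produces sounds confident and academic, yet the cited papers do not exist anywhere in the table's "knowledge" -- they are just the most likely next tokens.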

Hallucination is one of the biggest practical challenges with AI today. It means you cannot blindly trust anything an AI tells you without verification. This is especially important for high-stakes tasks like medical advice, legal information, financial decisions, or academic research. The model might be right 95% of the time, but that 5% can be dangerously wrong.

There are ways to reduce hallucination. RAG (retrieval-augmented generation) helps by grounding the model's answers in actual documents. Telling the model "if you are not sure, say so" can help. Using the latest, most capable models also helps since they tend to hallucinate less. But no model is hallucination-free, and likely none will be for a long time. The practical takeaway: always verify important facts the AI gives you, especially specific claims, numbers, dates, and citations.
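The first two mitigations above -- grounding answers in retrieved documents and explicitly inviting the model to say "I don't know" -- usually come down to how the prompt is assembled. Here is a minimal, hypothetical sketch of that assembly step (the function name and wording are illustrative, not any particular library's API):

```python
def build_grounded_prompt(question, documents):
    """Assemble a RAG-style prompt: retrieved passages plus an explicit
    instruction to admit uncertainty rather than guess."""
    context = "\n".join(f"- {doc}" for doc in documents)
    return (
        "Answer using ONLY the sources below. "
        "If the sources do not contain the answer, say 'I don't know.'\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_grounded_prompt(
    "When was Acme Corp founded?",
    ["Acme Corp was founded in 1987.", "Acme Corp makes widgets."],
)
print(prompt)
```

The resulting prompt would then be sent to the model; the grounding documents constrain what it can plausibly generate, and the uncertainty instruction gives it a sanctioned alternative to fabricating an answer.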

Real-World Examples

  • An AI citing a scientific paper that sounds real but does not actually exist
  • ChatGPT confidently providing incorrect historical dates or statistics
  • An AI legal assistant citing fake court cases that were never filed

Tools That Use This

  • ChatGPT (Freemium)
  • Claude (Freemium)
  • Gemini (Freemium)

Related Terms

  • RAG (Retrieval-Augmented Generation)
  • AI Safety
  • Large Language Model
  • AI Ethics