Hallucination
When an AI model generates information that sounds plausible and confident but is factually incorrect, fabricated, or not grounded in its training data or provided context. The model essentially 'makes things up'.
Why It Matters
Hallucinations are one of the biggest barriers to enterprise AI adoption. Understanding and mitigating them is critical for building trustworthy AI applications.
Example
An LLM confidently citing a research paper that does not exist, or inventing a historical event with specific dates and details that never happened.
Think of it like...
A confident storyteller who fills in gaps in their memory with plausible-sounding but completely fabricated details, and delivers them with total conviction.
Related Terms
Grounding
The practice of connecting AI model outputs to verifiable sources of information, ensuring responses are based on factual data rather than the model's potentially unreliable internal knowledge.
Retrieval-Augmented Generation
A technique that enhances LLM outputs by first retrieving relevant information from external knowledge sources and then using that information as context for generation. RAG combines the power of search with the fluency of language models.
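The retrieve-then-generate flow can be sketched in a few lines. This is a toy illustration, not a production RAG pipeline: the corpus, the word-overlap retriever, and the prompt template are all hypothetical stand-ins (real systems typically use embedding-based vector search).

```python
# Toy RAG sketch: retrieve the most relevant snippet by word overlap,
# then prepend it as context so generation is grounded in it.
# Corpus, scoring method, and prompt format are illustrative assumptions.

def retrieve(query: str, corpus: list[str]) -> str:
    """Return the corpus snippet sharing the most words with the query."""
    q_words = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q_words & set(doc.lower().split())))

def build_prompt(query: str, corpus: list[str]) -> str:
    """Build a prompt that instructs the model to answer from the context only."""
    context = retrieve(query, corpus)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "The Eiffel Tower was completed in 1889 for the World's Fair.",
    "Mount Everest is the highest mountain above sea level.",
]
print(build_prompt("When was the Eiffel Tower completed?", corpus))
```

Because the model is told to answer only from retrieved text, a correct answer cites real data and a missing answer can be declined rather than invented.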
Guardrails
Safety mechanisms and constraints built into AI systems to prevent harmful, inappropriate, or off-topic outputs. Guardrails can operate at the prompt, model, or output level.
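An output-level guardrail can be as simple as a post-generation filter. The denylist patterns and blocked-message text below are hypothetical; real guardrails usually combine pattern rules with learned classifiers and policy models.

```python
# Toy output-level guardrail: block responses matching a denylist of
# patterns before they reach the user. Patterns here are illustrative.
import re

DENYLIST = [r"\bpassword\b", r"\bssn\b"]  # hypothetical policy patterns

def apply_guardrail(response: str) -> str:
    """Return the response unchanged, or a refusal if it violates policy."""
    for pattern in DENYLIST:
        if re.search(pattern, response, flags=re.IGNORECASE):
            return "[Blocked: response violated output policy]"
    return response
```

The same check pattern can run at the prompt level (screening user input) or the model level (steering generation), matching the three layers named above.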
Evaluation
The systematic process of measuring an AI model's performance, safety, and reliability using various metrics, benchmarks, and testing methodologies.
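One of the simplest such metrics is exact-match accuracy against a reference benchmark. The function below is a minimal sketch with made-up data; real evaluation suites add many metrics (factuality, safety, robustness) and much larger test sets.

```python
# Minimal evaluation sketch: score model answers against references
# with case-insensitive exact-match accuracy. Data is illustrative.

def exact_match_accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of predictions that exactly match their reference answer."""
    matches = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return matches / len(references)

score = exact_match_accuracy(["Paris", "berlin"], ["paris", "Madrid"])
print(f"accuracy = {score}")  # one of two answers matches
```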