Artificial Intelligence

Hallucination Detection

Methods and systems for automatically identifying when an AI model has generated false or unsupported information. Detection can compare outputs against source documents (grounding checks) or sample the model repeatedly and test whether its answers agree (consistency checks).
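A consistency check can be sketched in a few lines: sample the model several times on the same prompt and flag low agreement as a possible hallucination. This is a minimal illustration, not a production method; the sample answers and the 0.6 threshold are hypothetical, and a real system would normalize or semantically cluster the answers first.

```python
from collections import Counter

def consistency_flag(samples, threshold=0.6):
    """Flag a possible hallucination when repeated samples disagree.

    samples: answers the model produced for the same prompt
             (hypothetical inputs; real systems normalize them first).
    Returns (majority_answer, agreement, flagged).
    """
    counts = Counter(samples)
    answer, votes = counts.most_common(1)[0]
    agreement = votes / len(samples)
    return answer, agreement, agreement < threshold

# Mostly consistent answers pass; scattered answers are flagged.
ans, agree, flagged = consistency_flag(["Paris", "Paris", "Paris", "Lyon"])
```

The intuition: a model tends to repeat facts it "knows" across samples, while fabricated details vary from sample to sample.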

Why It Matters

Hallucination detection is critical for deploying AI in high-stakes domains. Without it, users cannot distinguish confident truth from confident fiction.

Example

A system that cross-references an LLM's medical response against a trusted medical database, flagging any claims not supported by the source material.
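The grounding side of such a system can be sketched as a claim-by-claim check against the source text. The lexical overlap test below is a naive stand-in for a real entailment model, and the claims and source passage are invented for illustration.

```python
def unsupported_claims(claims, source_text):
    """Return claims whose content words are not all found in the source.

    Naive lexical check standing in for an entailment model; a real
    medical system would use retrieval plus NLI-style verification.
    """
    source_words = set(source_text.lower().split())
    flagged = []
    for claim in claims:
        words = {w.strip(".,").lower() for w in claim.split()}
        content = {w for w in words if len(w) > 3}  # skip short function words
        if not content <= source_words:
            flagged.append(claim)
    return flagged

source = "aspirin reduces fever and mild pain in adults"
claims = ["Aspirin reduces fever", "Aspirin cures cancer"]
# Only the claim with no support in the source is flagged.
```

Swapping the set comparison for a trained entailment classifier is the usual next step, since lexical overlap misses paraphrases and negations.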

Think of it like...

Like a fact-checker at a newspaper who reviews articles against source material before publication — they catch errors before they reach the audience.

Related Terms