Definition

Hallucination

When an AI model generates information that sounds plausible but is factually incorrect, fabricated, or not grounded in the provided context.

In Depth

Hallucination is one of the biggest challenges in deploying production AI agents. A model might fabricate statistics, cite non-existent sources, or confidently state incorrect facts. In agent systems, hallucination is particularly dangerous because agents can act on hallucinated information — sending incorrect data to customers, making wrong API calls, or producing flawed analysis. Mitigation strategies include RAG (grounding in real data), structured output (constraining valid responses), evaluation (catching hallucinations before they reach users), and guardrails (blocking outputs that can't be verified).
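
As a concrete illustration of the guardrails idea, the sketch below checks an agent's answer against the retrieved context before acting on it. This is a minimal sketch under stated assumptions: the token-overlap heuristic, the threshold, and names like is_grounded are illustrative choices, not a specific library's API or a production-grade verifier.

```python
# Minimal grounding guardrail sketch: before the agent acts on a model
# answer, verify that most of the answer is supported by the context it
# was supposed to be grounded in. Names and threshold are illustrative.

def token_overlap(answer: str, context: str) -> float:
    """Fraction of answer tokens that also appear in the context."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

MIN_OVERLAP = 0.6  # threshold chosen for illustration only

def is_grounded(answer: str, context: str) -> bool:
    """Flag answers that are mostly unsupported by the provided context."""
    return token_overlap(answer, context) >= MIN_OVERLAP

# Hypothetical example: the agent drafted a reply from retrieved data.
context = "Invoice #4821 was issued on 2024-03-02 for $1,200."
answer = "Invoice #4821 was issued on 2024-03-02 for $1,200."

if is_grounded(answer, context):
    print("Answer passes the grounding check; safe to send.")
else:
    print("Possible hallucination: hold the answer for review.")
```

A real deployment would typically replace the token-overlap heuristic with a stronger check, such as an entailment model or a second LLM judging whether each claim is supported by the retrieved sources, but the control flow is the same: verify before the agent acts.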
