Generative AI hallucinates because it predicts text based on patterns, not truth. It doesn't understand facts; it reproduces the statistical patterns it has seen in training data. That's why it can invent fake citations, medical claims, and court cases with complete confidence.
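To make that mechanism concrete, here is a minimal, dependency-free sketch: a toy next-word predictor that learns nothing but word-frequency patterns from a tiny made-up corpus. It completes "the capital of australia is" with the statistically likeliest follower, not the true one. This is an illustration of the idea, not how a production LLM is actually implemented.

```python
from collections import Counter, defaultdict

# Toy next-word predictor (a stand-in for an LLM, not a real one).
# It learns only which words tend to follow which, nothing about truth.
corpus = [
    "the largest city in australia is sydney",
    "the most visited city in australia is sydney",
    "the busiest airport in australia is sydney",
    "the capital of australia is canberra",
]

# Count how often each word follows each other word.
following: dict[str, Counter] = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1

def predict_next(word: str) -> tuple[str, float]:
    """Return the most frequent follower and its relative frequency."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

# Completing "the capital of australia is ..." the model picks the word
# that most often followed "is" in training: a fluent, confident error.
word, prob = predict_next("is")
print(f"prediction: {word!r} with probability {prob:.2f}")  # 'sydney', 0.75
```

A real model works over billions of parameters rather than a frequency table, but the failure mode is the same: the highest-probability continuation wins, whether or not it happens to be true.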
Large language models often answer confidently even when they're wrong. This article explores how AI systems are being taught to recognize the limits of their own knowledge and to communicate uncertainty, reducing hallucinations and building trust.
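One common way to surface that uncertainty is sampling-based self-consistency: ask the model the same question several times at non-zero temperature, and treat disagreement between the samples as a signal to hedge or abstain. The sketch below assumes a hypothetical `sample_answers` helper standing in for real model calls; the canned samples are illustrative only.

```python
from collections import Counter

def sample_answers(prompt: str, n: int = 10) -> list[str]:
    """Hypothetical stand-in for n stochastic LLM samples of one prompt.
    In practice this would call a model with temperature > 0."""
    # Canned samples for illustration only.
    return ["1989", "1989", "1991", "1989", "1987",
            "1989", "1991", "1989", "1989", "1990"]

def answer_with_uncertainty(prompt: str, threshold: float = 0.8) -> str:
    """Answer only when the samples agree; otherwise communicate doubt."""
    samples = sample_answers(prompt)
    answer, count = Counter(samples).most_common(1)[0]
    confidence = count / len(samples)
    if confidence >= threshold:
        return f"{answer} (confidence {confidence:.0%})"
    return (f"I'm not sure. My best guess is {answer}, "
            f"but only {confidence:.0%} of samples agreed.")

print(answer_with_uncertainty("In what year did the event happen?"))
```

Here only 6 of 10 samples agree, so the system hedges instead of asserting an answer. The agreement threshold is a design knob: raising it trades away coverage (more abstentions) for fewer confident errors.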