When AI Hallucinates, Is It Just Being Creative?
The Hidden Intelligence in AI's 'Wrong' Answers
The CEO of Anthropic, Dario Amodei, recently claimed that AI models now hallucinate less than humans—a bold statement that immediately caught my attention.
But here's the question: What do we really mean when we say a model is hallucinating?
In AI-speak, a hallucination occurs when a model confidently produces false or fabricated information, output that deviates from what is factual or expected. But let’s reframe that. If a hallucination is simply a deviation from the expected, then by that same logic… isn’t human creativity also a kind of hallucination?
When we create something new—whether a piece of art, a metaphor, a theory—we are deviating from the norm. We're stretching associations, crossing disciplines, allowing imagination to lead. Creativity is abstraction. It thrives in the space between what is, and what could be.
So what if AI’s so-called hallucinations are not failures or errors, but signals of something deeper—something more human than we anticipated?
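One way to make that "deviation from the expected" concrete: in practice, the same sampling temperature that lets a model phrase things in surprising ways also raises the odds of a confident wrong answer. The sketch below is illustrative only, plain Python with made-up token scores rather than any real model's API, showing how turning up the temperature makes the most "expected" next word less and less inevitable.

```python
import math
import random

# Hypothetical next-token scores a language model might assign after
# a prompt like "The capital of Australia is" (illustrative numbers only).
logits = {"Canberra": 4.0, "Sydney": 3.2, "Melbourne": 2.5, "a kangaroo": 0.5}

def sample_next_token(logits, temperature=1.0, seed=None):
    """Sample one token from softmax(logits / temperature).

    Low temperature: the model almost always picks its 'expected' answer.
    High temperature: more deviation, i.e. more surprising (or wrong) picks.
    """
    rng = random.Random(seed)
    scaled = {tok: score / temperature for tok, score in logits.items()}
    max_s = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
    total = sum(weights.values())
    r = rng.random() * total
    cumulative = 0.0
    for tok, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

for t in (0.2, 1.0, 2.0):
    picks = [sample_next_token(logits, temperature=t, seed=i) for i in range(1000)]
    share_expected = picks.count("Canberra") / len(picks)
    print(f"temperature={t}: 'Canberra' chosen {share_expected:.0%} of the time")
```

Run it and the low-temperature setting returns the expected answer almost every time, while higher temperatures wander further from it, which is exactly the trade-off the hallucination-versus-creativity question is probing.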




