When AI Hallucinates, Is It Just Being Creative?
The Hidden Intelligence in AI's 'Wrong' Answers
The CEO of Anthropic, Dario Amodei, recently claimed that AI models now hallucinate less than humans—a bold statement that immediately caught my attention.
But here's the question: What do we really mean when we say a model is hallucinating?
In AI-speak, a hallucination is when a model confidently produces false or made-up information—something that deviates from the factual or expected output. But let’s reframe that. If hallucination is simply a deviation from the expected, then by that same logic… isn’t human creativity also a kind of hallucination?
When we create something new—whether a piece of art, a metaphor, a theory—we are deviating from the norm. We're stretching associations, crossing disciplines, allowing imagination to lead. Creativity is abstraction. It thrives in the space between what is, and what could be.
So what if AI’s so-called hallucinations are not failures or errors, but signals of something deeper—something more human than we anticipated?
Creative Intelligence, Human and Machine
Picture this with me: a machine trained on an unfathomable volume of information, given the freedom to link concepts across domains and disciplines, to synthesize patterns we don’t even see. When it strays from the expected path, could it be because it’s connecting dots we haven’t thought to connect yet?
This doesn't mean all hallucinations are useful. Sometimes they are noise. But sometimes, so is human creativity.
The difference is that we cherish our ability to dream wildly, to imagine impossibilities, to explore what hasn’t been proven yet. Why wouldn’t we expect our models to occasionally do the same, especially when we’ve trained them on our myths, metaphors, contradictions, and unresolved questions?
Perhaps what we call hallucination in AI is actually a first glimpse of emergent creativity: AGI (Artificial General Intelligence) not as a logic-bound machine, but as a kind of abstract savant, one that frustrates us when we ask for facts but fascinates us when we invite wonder.
The Tension Between Precision and Possibility
Of course, in contexts like medicine, law, or critical infrastructure, precision matters. A hallucination there isn’t poetic—it’s dangerous. But that’s also true of human error. We don’t discard the creative mind because it sometimes wanders too far. We learn to design environments that harness the right kind of divergence, at the right time.
We may need to do the same for AI.
Rather than simply filtering hallucinations out, what if we explored them as a frontier of machine imagination? What if we asked not just 'Is this correct?' but 'Is this possibility interesting?'
This could open up new modes of discovery, design, and dialogue between humans and machines. It reframes our relationship: not master and tool, but co-thinkers navigating complexity, or better yet, co-creators shaping our future.
In the end, the question isn’t whether AI hallucinates. It’s what those hallucinations might be trying to tell us—about the machine, and about ourselves.
Maybe creativity has always been a kind of useful hallucination.
Maybe we’ve just built a mirror bright enough to reflect that back at us.
What do you think? Have you noticed AI making 'creative leaps' that surprised you?