The Sacred Profane

When AI Hallucinates, Is It Just Being Creative?

The Hidden Intelligence in AI's 'Wrong' Answers

Danu Vino
May 30, 2025
Image by Author

The CEO of Anthropic, Dario Amodei, recently claimed that AI models now hallucinate less than humans—a bold statement that immediately caught my attention.

But here's the question: What do we really mean when we say a model is hallucinating?

In AI-speak, a hallucination is when a model confidently produces false or fabricated information—output that deviates from what is factual or expected. But let’s reframe that. If hallucination is simply a deviation from the expected, then by that same logic… isn’t human creativity also a kind of hallucination?

When we create something new—a piece of art, a metaphor, a theory—we are deviating from the norm. We're stretching associations, crossing disciplines, allowing imagination to lead. Creativity is abstraction. It thrives in the space between what is and what could be.

So what if AI’s so-called hallucinations are not failures or errors, but signals of something deeper—something more human than we anticipated?
