Yes, LLMs are overhyped. Yes, AI may still change everything.
September 9, 2025
By Matthew Pietz
This article was written and edited without the use of AI.
The last few weeks have seen some bad news for Large Language Models (LLMs), the basis of today’s AI chatbots.
People were underwhelmed by GPT-5, after OpenAI CEO Sam Altman promised it would be like having a PhD in your pocket (his PR people should’ve told him PhDs can do arithmetic). An MIT study showing that 95% of corporate generative AI pilots fail has been all over the internet (though it is arguably misunderstood). And the US Census Bureau has now released data showing that AI adoption is dropping across large US firms.
Some commentators have said AI is never going to change things in any serious way, and that it has essentially all been hype. Nobel Laureate Daron Acemoglu predicts AI will add a scant 1% to GDP over the next 10 years.
There has been hype, which is par for the course when a new technology arrives. The bubble that accompanies every major innovation is now bursting, and once everyone has taken a deep breath we can look forward to more gradual, grounded adoption. Those familiar with Gartner’s Hype Cycle will recognize that we’ve passed the Peak of Inflated Expectations for generative AI.
At Keranaut we try to take the long view. The evolution of human societies has been punctuated by increasingly complex ways of recording and then using information: cuneiform on clay tablets; the invention of alphabets; the arrival of the printing press, which democratized literacy over centuries and led to the proliferation of journals and libraries; then brand-new media with the invention of photography and radio; the computer; and finally all computers linked by the internet.
Each leap forward in information architecture came faster than the last, and each made it easier for people to learn and collaborate to drive innovations in all other fields of human endeavor. The nice things we have were largely made possible by information recording and sharing.
We’ve come a long way from clay tablets, and Keranaut doesn’t feel progress is going to stop because in 2025 LLMs have a tendency to hallucinate and forget things. We believe AI retains its potential to utterly transform the economy and society in the next 10 years.
But that can only be true if AI developers overcome current hurdles. How can they? What are the pathways ahead? Here are a few:
Agentic AI. Confusingly, this term has two uses at the moment. The scary one is “AI with its own agenda”, but in the engineering sense it means AI that breaks its reasoning process into steps, questions and double-checks its own findings, and looks to outside sources for verification rather than relying solely on its training data. Recent research points to significant promise in this field for improving accuracy (a minimal sketch of such a loop appears after this list).
Multi-modal AI. Feeding an AI visual and audio data can give it a richer sense of the world and external reference points against which to check its outputs. Hallucinations often trace back to the source material on which the AI was trained; drawing on different types of data from the real world helps it lean less on potentially inaccurate text.
Neuromorphic Computing. Built from circuits that mimic biological neurons and synapses, neuromorphic devices try to emulate the human brain: they can adapt, learn, and take in sensory information. Intel reports that its neuromorphic chip can run some workloads on roughly 100 times less energy than a conventional CPU. These chips don’t yet run generative AI, but it seems they might, as synthesis is an emergent property of neural networks. Watch this technology closely over the next 2-3 years.
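For readers curious what the engineering sense of agentic AI looks like in practice, here is a minimal sketch of such a loop: plan, draft, self-critique, verify, revise. It is illustrative only; call_llm() and verify_externally() are hypothetical stand-ins for whatever chat-completion API and outside lookup (a search engine, a database) you have available, not any particular vendor’s interface.

```python
# Minimal sketch of an agentic reasoning loop (illustrative only).
# call_llm() and verify_externally() are hypothetical stubs: plug in
# any chat-completion API and any outside lookup (search, database).

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API of choice")

def verify_externally(answer: str) -> bool:
    raise NotImplementedError("plug in a search or database lookup")

def agentic_answer(question: str, max_revisions: int = 2) -> str:
    # 1. Break the problem into smaller steps instead of answering in one shot.
    plan = call_llm(f"List the steps needed to answer: {question}")

    # 2. Work through the plan to produce a draft answer.
    draft = call_llm(f"Question: {question}\nPlan:\n{plan}\nAnswer step by step.")

    for _ in range(max_revisions):
        # 3. Have the model question and double-check its own findings.
        critique = call_llm(f"Find factual errors in this answer:\n{draft}")

        # 4. Check the draft against an outside source, not just training data.
        if "no errors" in critique.lower() and verify_externally(draft):
            break

        # 5. Revise the draft using the critique, then loop and re-check.
        draft = call_llm(
            f"Rewrite this answer to fix the issues below.\n"
            f"Issues:\n{critique}\nAnswer:\n{draft}"
        )

    return draft
```

The design point is simply that the model’s first output is treated as a draft to be checked, not as the final answer.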
Next-generation AI will probably grow from a combination of these and other approaches. Constraints like chip availability, data center capacity, and energy use also need to be addressed (and will be the subject of future Keranaut posts), but there is good reason to believe that innovation and growth in the design of AI have hit only a bump in the summer of 2025, not a roadblock.
Is that good or bad news? Let us know what you think in the comments.
Click here to subscribe and be notified of future posts