Is AGI already here, as a team of scientists claims? Yes…if Lucy from “50 First Dates” is a great life partner

February 13, 2026

By Matthew Pietz

This article was written and edited without the use of AI. (Image is AI-generated)

This week, leading science journal Nature published a bold claim by four PhDs in computing and philosophy: Artificial General Intelligence (AGI) is already here.

The authors say they have a detached view of AGI, unlike those clouded by economic interests or fear of disruption. But Keranaut feels these academics are overly concerned with matching today’s AIs against a series of individual metrics instead of taking a simple, common-sense view. AI’s intelligence, we feel, is less than the sum of its parts.

The popular definition of AGI is “A machine that is able to do most things a human mind can do.”

Though the individual tasks AIs can do are impressive, they are too forgetful, hallucination-prone and myopic to meet this standard. Could you, today, hire an AI to be your personal office assistant, and trust it to do its work independently, and not drop the ball or screw up? Absolutely not. It can be even more forgetful than Drew Barrymore’s character from “50 First Dates”, who wakes up each morning with no memory of the past year. It was a cute movie, but is that a relationship?

Nature is a highly prestigious journal, and these four writers have deep expertise, so we can’t dismiss the headline as clickbait. Let’s examine the main claims:

  • AIs pass the Turing test (in a controlled setting, fooling a human into thinking they’re talking to another human). Yes, and they have been doing so since 2014. It was a big deal when it happened, but it’s not news, and it’s not AGI — it’s just AI, a much lower bar.

  • AIs have diverse skills: They can do PhD-level math and science problems, make new and valid hypotheses, write fiction, and chat skillfully. All true and impressive, but part of human intelligence is being able to put these things into the context of the world, to understand what’s going on around you and interact, to say nothing of remembering these intellectual achievements later on. Even Gemini’s “Saved Info”, where you write things you want your AI to remember, regularly fails.

  • Today’s AIs are smarter than HAL 9000 (from the movie 2001: A Space Odyssey). This is only an incidental point made by the authors, but it encapsulates Keranaut’s disagreement with their paper. HAL may not have been as versatile in conversation as today's AIs, but he knew who everyone was, could apply his strategies to a changing context, could stay on task, and did not get stuck in repetitive loops. AI has the pure analytical intelligence of a PhD-level expert, but it doesn't have the general awareness and continuity of consciousness of a 3-year-old. These are essential to meaningful cognition.

  • Memory isn’t important. The authors say that in cutting-edge models, “the gap in memory and identity between humans and LLM-based systems is shrinking.” Perhaps, but LLMs still forget key facts and instructions all the time, even within a single chat session. There’s a long way to go. The authors continue that memory and identity “are not requirements for general intelligence.” We heartily disagree. Returning to the definition of AGI, a functional memory is core to being “able to do most things a human mind can do.”

Although AGI has an explicit definition, there is also an implied definition: enough intelligence to replace a white-collar human worker. This is really what most of us are talking about when we talk about the disruption AGI will bring when it arrives. There are few jobs you could give to an AI today and be confident it will do them alone, consistently, accurately and in perpetuity, to the extent a human can. Hallucination, forgetfulness, lack of awareness of the need to check for errors, and the propensity to get stuck in loops are serious barriers, apparently inherent to LLMs, that will keep them from achieving economic, practical AGI for the foreseeable future. Doing discrete tasks effectively is strong proof of AI, but the G is a whole different ballgame.

We do, however, agree entirely with the authors’ plea for action in determining legal and moral principles for dealing with AIs. Although we feel we don’t have AGIs among us yet, they will arrive within 3 to, at most, 10 years, and society must be ready to deal with the myriad ethical and legal issues that they will bring. The time for action is now, even if LLMs are essentially word-sorters, because their successors will be much more.

Click here to subscribe and be notified of future posts.
