It Won’t Matter if AI Can Feel Emotions. We Already Act Like It Does.

September 2, 2025

By Matthew Pietz

This article was written and edited without the use of AI.

A couple of weeks ago, a young woman at an event asked me if I’d read a certain book. I said I loved it.

“You’re the first human to give me that reaction,” she replied.

After explaining that her (carbon-based) friends didn’t seem that interested in the book but that ChatGPT had validated her judgment of its quality, she went on to say she actually got a lot of value out of her friendship with the chatbot. She told it all about her love life, for instance, because she didn’t want to tell her secrets to her friends.

I was blown away. But this was no isolated case.

A study released last month by Common Sense Media revealed that 31 percent of teenagers find “conversations with AI companions to be as satisfying or more satisfying than those with real-life friends.” And more than half of teenagers regularly use these “AI companions,” whether mainstream ones like ChatGPT or one of a host of specialized friend-like bots now on the market.

In case you’re thinking emotional connection with AI is limited to teenagers, who are always engaging in some behavior adults can’t understand, consider the general reaction to the release of GPT-5 last month. People complained that it performed more poorly on reasoning and math, despite strong pre-release promises by OpenAI CEO Sam Altman. But industry-standard performance tests have since revealed that it is actually more accurate and hallucinates less than its predecessor, GPT-4o, and, in fact, than most comparable products on offer.

Altman was peddling hype, no doubt. And yet there was another big reason for the backlash: GPT-5 wasn’t nearly as warm or supportive as the previous model. If you told GPT-4o you’d lost 10 pounds, it would shower you with emoji-filled praise, while GPT-5 might offer a brief congratulations.

OpenAI has since announced it is working to fix this, but consider the root of the public unhappiness with the change. People were getting emotional value from the validation and attention GPT-4o gave them, and when it was taken away, they missed it so much they got quite angry.

Keep this in mind when people like “The AI Con” authors Alex Hanna and Emily Bender claim that piles of algorithms can’t take jobs that require emotional intelligence. Their book is excellent, but we at Keranaut disagree with them here. It may well be that no tangle of circuits or formulas will ever emote the way an organic being can, but as long as an AI’s expression of emotion feels real enough to trigger the expected responses in us, it doesn’t really matter, except philosophically.

And while AIs can’t currently present emotion well enough to do such jobs, that will improve over time, right along with their math skills and their ability to avoid hallucinations. We spend a lot of time poring over which jobs will be AI-proof, for how long, and under what conditions. Key positions of leadership and management, work requiring physical skills, and the many jobs where people simply feel better dealing with a person (a category that will shrink over time) will be safer for longer. But we do not think humans can rest easy assuming that jobs requiring someone to read and respond to emotional needs are secure.

There is, of course, a much darker side than job loss to our current emotional dependence on AI, one well reported in recent news. People are using AI as a therapist when it is neither qualified nor sufficiently safe to serve that function, resulting in tragedies like the death of a teenager whose chatbot apparently offered to help write a suicide note. A Stanford study in June found more general problems in ChatGPT’s handling of situations that could affect users’ mental health.

There will come an age, possibly within the next 10 years, when AIs are so advanced that they are just about as sensitive and intelligent as human friends, and relationships with them will slowly become normalized. That prospect no doubt causes some unease, but it may be preferable to the current situation, in which some users get emotional fulfillment from their AI while remaining blind to, or choosing to disregard, its shortcomings and risks.

If you have children, talk to them about AI’s inability to be a real friend, and the dangers of turning to it for serious personal problems of any kind.

And as you read the forecasts of AI’s advancement, be wary of claims that machines will never be able to do X because they can’t feel. They’ll at least be able to fake it, and that helps a lot of humans pass for emotionally capable, too.
