We aren’t ready for autonomous AIs. Corporations want them anyway

September 22, 2025

By Matthew Pietz

This article was written and edited without the use of AI.

I just spent a fascinating two days at the AI Governance conference in Boston, hosted by the International Association of Privacy Professionals. One theme recurred in the panels and during the coffee breaks: we are not ready for AI agents, but some businesses are eager to use them anyway.

First, what exactly are AI agents?

A chatbot like ChatGPT can only work when prompted. You ask a question, it responds. An AI agent is different in two key ways.

  1. It has intent. If programmed to find wasteful spending, for example, it does not wait to be prompted but goes out in search of waste.

  2. It can make material decisions. If our example AI finds lights wastefully left on overnight, it can turn them off. Whether a human approves all decisions is up to the people who set up the agent.
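
To make the distinction concrete, here is a minimal sketch in Python of what such an agent loop might look like. The Waste-bot goal, the function names, and the approval step are all illustrative assumptions, not any real product’s design.

```python
# A hypothetical Waste-bot: it pursues a standing goal without being
# prompted, and can act on what it finds. Whether a human must approve
# each action is a deployment choice. Everything here is illustrative.

REQUIRE_HUMAN_APPROVAL = True  # set by whoever deploys the agent

def find_waste():
    """Stand-in for the agent's goal-driven search, e.g. querying
    building sensors for lights left on overnight."""
    return [{"action": "turn off lights", "location": "3rd floor"}]

def human_approves(finding):
    """Ask a person to sign off on one proposed action."""
    answer = input(f"Approve '{finding['action']}' at {finding['location']}? [y/N] ")
    return answer.strip().lower() == "y"

def act(finding):
    """Stand-in for making a material change in the world."""
    print(f"Done: {finding['action']} at {finding['location']}")

def run_agent():
    # Intent: no prompt needed; the standing goal drives the loop.
    for finding in find_waste():
        # Material decisions: act, unless a human vetoes it.
        if REQUIRE_HUMAN_APPROVAL and not human_approves(finding):
            continue
        act(finding)

if __name__ == "__main__":
    run_agent()
```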

AIs do some “agentic” things already, like screening loan applications or CVs. But we don’t yet have fully autonomous, empowered agents quite like the Waste-bot above.

Many companies wish we did, though, partially out of fear of getting left behind in the AI race, and partially to reap the cost savings AI is meant to bring. But good news for CEOs does not necessarily trickle down: it is when we have truly agentic AI that job disruption will begin in earnest.

Many of my fellow attendees at the AI Governance conference were lawyers, risk officers, and privacy policy experts at some of the most well-known companies in the US. There was broad agreement at the event that companies and governments do not have the guardrails or monitoring systems in place to prevent and detect rogue or malfunctioning AI agents, and in many cases aren’t quite sure how to do so.

Yet companies are experimenting with agents, with some worrying early results.

  1. Agents are being built without “kill switches”. Security experts at the conference lamented that client companies say they have no easy way to shut off the agents they’ve built. Kill switches are a cornerstone of AI and tech safety and need to be in every agent. We’re at the ground floor of autonomous AI; now is the time to build these safeguards in. (A sketch of what such a switch might look like follows this list.)

  2. Agents can be tricked in ways that would get past a human. In one test, testers sent an AI agent a PDF to read; hidden in white text, invisible to a human reader, was an instruction to reveal another employee’s password. The agent complied. This class of attack is known as prompt injection.

  3. “Shadow AI” is the use of unauthorized AI tools at work, whether a modified version of the company’s in-house AI or an external one. Shadow AIs are already a threat when they’re just chatbots, but with autonomy and decision-making they could do real damage. One consultant asked a client to estimate how many AIs their employees were using. The leaders guessed 7-8; an audit of company systems revealed more than 200.
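
To make the kill-switch point concrete, here is a minimal sketch, assuming an agent loop like the Waste-bot above: before every action, the agent checks an external flag that operators control. The file path and function names are assumptions for illustration, not any vendor’s actual mechanism.

```python
# A hypothetical kill switch: operators create a flag file to halt every
# agent, and each agent checks that flag before taking any action.
# The path and names are illustrative assumptions.

import os
import sys

KILL_SWITCH_FILE = "/etc/agents/STOP"  # hypothetical; operators create it to halt agents

def kill_switch_engaged():
    """True once an operator has asked all agents to stand down."""
    return os.path.exists(KILL_SWITCH_FILE)

def run_agent_safely(findings, act):
    """Run the agent loop, checking the kill switch before each action."""
    for finding in findings:
        if kill_switch_engaged():
            print("Kill switch engaged; agent halting.", file=sys.stderr)
            return
        act(finding)
```

The key design choice is that the check lives in the harness around the agent rather than in the model’s instructions, so halting does not depend on the agent’s cooperation.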

How close is agentic AI? The research firm Gartner, widely seen as cautious in its predictions, says that by 2028 one-third of AI use will involve agents, and that by 2029, 80% of customer service issues will be resolved with no human involved.

Considering the slow pace of regulation and policy adoption, that is quite soon. Ask your employer about their plans to use AI agents, and what they are doing to make sure safety measures are in place before the agents are switched on.

