AI Disaster Happens in the Dark. Trump is Smashing Light Bulbs.
August 28, 2025
By Matthew Pietz
This article was written and edited without the use of AI.
One possible path ahead of humanity leads to a future where AI equitably serves our best interests, ushering in a new era of prosperity, longevity, and discovery.
That path is well-lit, to allow regular people to see what tech companies are developing along the way, what risks they are taking, what problems their prototypes are showing, and how, specifically, they are putting humanity’s well-being ahead of the blind race to beat their competitors to the next rung on the ladder.
No company will do this voluntarily. A corporation’s duty to shareholders and investors is to maximize returns; anything that serves another goal, it will do only if compelled. If tech leaders are left to focus solely on profit, they’ll give only token attention to concerns raised by staff, and may take serious risks. These could include, as just one example, letting AIs repeatedly and autonomously improve themselves, without their creators being able to fully understand what’s happening under the hood. Several credible projections, including some from within the industry, suggest this is at least probable, and could lead to very bad outcomes. So, some rules are needed.
Since taking office, the Trump administration has clearly signaled where they fall on the question of AI regulation: just exactly where the tech giants would like them to, tearing off the guardrails and extinguishing the lights.
The administration has taken several executive actions on AI so far. Here’s what they did:
January 20: Revoked Executive Order 14110, signed by Biden in 2023, which prioritized protecting civil rights and mitigating national security risks while AI is developed.
January 23: Signed Executive Order 14179, directing the Trump team to root out all guardrails on AI development that Biden’s EO had put in place while a new action plan was developed.
July 23: Unveiled the “AI Action Plan” along with three Executive Orders.
The AI Action Plan lists about 90 recommended steps for government.
On the plus side, it pushes for more open-source models to be released to the public; for open sharing of datasets; for the Department of Labor to collect data on job impacts from AI, and to give assistance to those impacted; and for more training in AI skills to be made available. Two other EOs in April also seek to help US youth train for AI jobs. All of this is positive.
Apart from that, the vast majority of the 90 steps drop a brick on the accelerator and cut the brakes. The steps direct various government agencies to:
Ask tech businesses what rules they want removed.
Cut discretionary AI funding to states judged to have too many rules.
Allow privately-owned data centers to build on federal lands—your land—and to be exempt from a wide range of environmental protections.
Research how AI arrives at its outputs, so humans can understand and therefore predict its actions. Sounds great—except this will be under the control of DARPA, the top-secret military research department. Benefits are unlikely to reach the public.
There are only two nods to safety in this 28-page, comprehensive overview of America’s AI policy:
To prevent bioterror, ensuring that DNA-sequencing machines used in government-funded institutions have safety controls, and meeting with industry to facilitate customer screening. These are good ideas, but they impose no obligations on the private sector, nor any controls on home gene-editing kits (now easily ordered online) that bad actors could use in their own garages.
Preparing for “AI Incidents” like infrastructure breakdowns. In theory this too sounds wise, but again no obligations are even suggested for tech companies. The prep consists of updating some government playbooks and “encouraging” the sharing of information about vulnerabilities.
Astoundingly, there are no steps to prevent threats to regular Americans’ civic rights, physical safety, or indeed democracy itself, by a rapidly accelerating AI regime owned by an elite private minority.
The three EOs that accompany the Action Plan do no more to keep us safe:
Try to keep “woke” ideology out of AI systems
Make it easier to get data center building permits
Facilitate the export of US technology to allies
The Action Plan claims “AI is far too important to smother in bureaucracy at this early stage”. Concerned citizens must utterly reject this framing. It is precisely at this early stage that we have the opportunity, and the responsibility to everyone on earth, to set up guardrails and principles that ensure AI is developed safely. The catastrophes that could result from reckless development of AI are so devastating that dismissing safeguards as red tape is naive.
There was a great deal of red tape in the creation and running of the Manhattan Project. Even though we were in a race to beat the Nazis, safety and security were rightfully prioritized. AI, alongside its wonderful potential to improve society, has destructive potential to rival or even exceed the atomic bomb.
Journalists have wondered whether tech giants would get any return on their millions in contributions to the 2024 Trump campaign. I think the answer is clear. The ties between officials appointed by Trump to offices governing tech and the industry itself are too numerous to list here, but suffice it to say the people making decisions on how to roll out this Action Plan do not come from the world of advocating for citizen rights.
Are you as concerned as we are? Write your senator or representative to ask for:
Specifically worded, enforceable regulations to:
a. Ban fully autonomous AI and unbounded, recursive self-improvement until these capabilities are fully understood and controllable
b. Expand government oversight and licensing of cutting-edge AI, similar to controls on nuclear development
c. Establish a transparent, politically neutral and well-funded AI auditing office in government
d. Require disaster insurance for developers of high-impact technologies
e. Require “air-gapping”: the physical separation of autonomous or semi-autonomous AI agents from the internet until they are conclusively and publicly shown to be safe
A public statement of their opposition to problematic aspects of the AI Action Plan and associated executive orders.
Let’s turn on the spotlights to make sure we can all see the path to the future that we want.