Tech CEOs are trying to buy their souls back with donations. It didn't work for Alfred Nobel
March 31, 2026
By Matthew Pietz
This article was written and edited without the use of AI.
OpenAI CEO Sam Altman just announced $1 billion for projects to prevent harm caused by AI. It’s an impressive amount by almost any standard, except one: when compared to the $600 billion he's pulling together to invest in building AI as fast as possible, it is far from enough.
In fact, the last five years have seen a spike in funding for AI safety. The money comes from various sources, but often it is the people who got rich from making dangerous tech who put up the cash. Open Philanthropy, started by Facebook co-founder Dustin Moskovitz, offers $300 million to fight threats from AI; Microsoft, Google, and Anthropic launched the $10 million AI Safety Fund in 2023; and Meta gives grants to counter disinformation.
And yet, these same folks—OpenAI, Anthropic, Microsoft, Meta, Google—oppose laws that could help rein in the worst effects of AI. Tech titans want it both ways: freedom to make whatever tech they can dream up, however disruptive, and also the comfort of telling themselves they’re making the world safer.
Alfred Nobel could’ve told them it won’t work. The 19th-century inventor’s wealth came from patents on dynamite, nitroglycerin, plastic explosives and gunpowder. His work cost him dearly—his brother Emil was killed at 21 in a laboratory explosion—but Alfred’s appetite for making money from explosives was insatiable. In 1888, when a newspaper mistakenly thought he’d died, Nobel was aghast to read the headline “The Merchant of Death Is Dead.” This inspired him to establish the Nobel Prizes, in an attempt to buy back his soul.
We have to wonder, though, why a man who owned 90 weapons factories was so surprised at this headline. While solid estimates are impossible, millions were likely killed as a result of his work. How could an annual award to someone working for peace even slightly offset this impact?
In the same way, AI CEOs should not sleep easier simply because they divert tiny fractions of their wealth to mitigating risk—not while they are building AI at breakneck speed and fighting the very laws that would make it safer.
Last year the White House launched an industry-friendly AI Action Plan; seeing a leadership vacuum at the federal level, states like Colorado, Texas and California stepped up with their own safety measures. In response, this week the White House instructed Congress to block state-level AI regulations and to manage the federal level with a “light touch.” Meta, Anthropic, Microsoft and Google have all given heavy input on these policies, and Mark Zuckerberg was even named this week to the board of the President’s advisory group.
State rules do wise things like requiring companies to report safety incidents to authorities within 15 days, making developers liable for catastrophes, and requiring algorithmic bias audits. Industry complains that a web of state regulations is too complex to comply with—while cynically lobbying to keep federal rules minimal.
The damage wrought by malevolent AI, whether under or outside human control, could well exceed the devastation done by Nobel’s creations. A billion more dollars for AI safety, likely to be parceled out in small grants to researchers with no authority to control anything, will not prevent that. AI companies that want to be on the right side of history must lead the push for comprehensive AI safety legislation, however painful that may seem, short-term, for their quarterly earnings statements. Much greater issues are at stake.
And those of us concerned about the direction of AI, but who are not tech CEOs, can join or support important pro-safety lobbying initiatives like the Center for AI Safety, the Future of Life Institute, or PauseAI. We may not all have Nobel’s resources, but we can take some action before laboratories start exploding.