An AI that is Accountable to the People
September 5, 2025
By Matthew Pietz
This article was written and edited without the use of AI.
Leave it to the Swiss.
This week the government of Switzerland unveiled an AI called Apertus, Latin for "open." It lives up to its name:
Everything about how it was built is public: the architecture, the weights, the data used to train it, and the development process. This almost never happens.
The huge 15-trillion-token dataset it was trained on is entirely publicly available.
All of this means that anyone with the resources and interest can reproduce Apertus and test it for flaws or dangers to users.
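To make that concrete: because the weights are public, anyone can download Apertus and run it on their own hardware, no corporate gatekeeper involved. Here is a minimal Python sketch using the Hugging Face transformers library. The model ID below is my assumption, so check the swiss-ai organization on Hugging Face for the exact name, and note that even the smaller variants need serious hardware.

```python
# A minimal sketch of what "open weights" means in practice: the model
# can be downloaded and run locally, and its internals inspected.
# MODEL_ID is assumed -- verify the exact name on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "swiss-ai/Apertus-8B-Instruct"  # assumed ID; verify before use

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "What does it mean for an AI model to be fully open?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Nothing in that snippet requires permission from anyone, and that is the point: the same access that lets you chat with the model lets researchers probe it for flaws.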
Apertus was built by public research institutions and is operated by the Swiss government: owned, that is, by the Swiss people. While "government participation" often makes us think of slowness and bureaucracy, the Swiss have shown that a viable, useful AI can be built by public actors and, crucially, held accountable to the public.
This is an entirely new paradigm in AI development. For-profit companies are without question the engine driving AI forward, but their breakneck pace can come at a cost. Without transparency, which large corporations tend to dislike, we cannot know whether or when their models exhibit harmful behavior or quietly work toward their own ends, unless it happens to surface through a user's experience or independent research.
The Swiss are choosing to treat AI as a public good, like clean drinking water or fire stations, which means that in addition to being available to everyone, someone is accountable for fixing what goes wrong. Governments are by no means perfectly responsive to citizen complaints, but they can be held to far greater account than companies can.
Now, is Apertus the most powerful AI on the market? Certainly not, and it doesn't aim to be. The developers prioritized public access and transparency over raw capability, and they likely couldn't have marshaled the resources to train a rival to Claude or GPT-5 anyway. It can't analyze audio or images unless modified by power users, and because it excludes private data, its pool of knowledge is shallower than that of the leading commercial models.
But it works pretty well, much of the time. When I tested Apertus, it skillfully answered fact-based queries and requests for recommendations. It stumbled on "trick" questions, like riddles and paradoxes, but those aren't what most people need anyway.
You might think of it like a vegan cookie: it doesn't taste as good (admit it, vegans), but for those who put principles ahead of quality (say, the principle of not training on private or arguably stolen copyrighted data) or who want to be part of a broader ethical movement, it's good enough. Try it yourself.
Apertus’s contribution is not a revolution in computing power, but a new way of thinking about who “owns” AI models and to whom they are accountable. If we can all see how these machines are built, and the builders have to answer to us when things go wrong, the machines may be safer and better aligned with all of humanity’s interests.
And that’s a pretty delicious cookie.