- Wednesday, August 13, 2025

Sam Altman, CEO of OpenAI, appeared at a Federal Reserve event on July 22 and outlined three “scary categories” of how advanced artificial intelligence could threaten society.

The first two scenarios — a bad actor using artificial intelligence for malfeasance and a rogue AI taking over the world — were accompanied by the insistence that people were working to prevent them. However, Mr. Altman offered no such comfort with the third scenario, the one that seemed to trouble him most.

He described a future where AI systems become “so ingrained in society … [that we] can’t really understand what they’re doing, but we do kind of have to rely on them. And even without a drop of malevolence from anyone, society can just veer off in a sort of strange direction.”

This scenario has no Hollywood-style robot uprising. No killer drones. No Skynet. Instead, AI quietly embeds itself into the machinery of governance, commerce and daily life until human decision-making becomes the exception, not the norm.

As Mr. Altman put it: “The models kind of accidentally take over the world. They never wake up. They never do the sci-fi thing.”

In this situation, AI has become so smart that the people in control of our institutions have become reliant on it to make all important decisions. “What if AI gets so smart that the president of the United States cannot do better than following [a future iteration of ChatGPT]’s recommendation?” As Mr. Altman explains, although it might be the correct decision to listen to AI, this would mean “society has collectively transitioned a significant part of decision-making” to artificial intelligence.

Scarier still, we may already be at this point.

Swedish prime minister consults ChatGPT

Earlier this month, The Guardian reported that Swedish Prime Minister Ulf Kristersson had acknowledged that he regularly consulted AI tools for second opinions before he made decisions. He told the Swedish business paper Dagens Industri, “I use it myself quite often. If for nothing else than for a second opinion. What have others done? And should we think the complete opposite? Those types of questions.”

In other words, the elected leader of an entire nation is outsourcing at least part of his decision-making process to a corporate-controlled algorithm.

This acknowledgment has drawn much criticism. The Swedish newspaper Aftonbladet accused Mr. Kristersson of having “fallen for the oligarchs’ AI psychosis,” and others have pointed out that they “didn’t vote for ChatGPT.”

Even if AI suggestions are sound, voters never agreed to hand policy decisions to an opaque, privately developed tool whose inner workings are hidden from public scrutiny. It is exactly the kind of slow, voluntary ceding of democratic responsibility Mr. Altman fears.

Mr. Altman’s fear starts at $1

If the Swedish prime minister represents the individual adoption of AI in high office, the United States may soon see a more systemic version. Just this month, CNBC reported that OpenAI will offer its ChatGPT Enterprise product to federal government agencies for the bargain price of $1 for the next year.

The company framed it as a benevolent move, saying that “helping government work better — making services faster, easier and more reliable — is a key way to bring the benefits of AI to everyone.”

Under the guise of increased efficiency, this development will surely help weave AI into the daily operations of government agencies. Throughout our government, AI will help shape reports, draft communications and potentially influence policy proposals and decisions.

However, once AI is adopted across the federal government, it may quickly become too entrenched to remove. Dependence will be the default, and AI’s influence on public policy could become nearly impossible to distinguish from human judgment.

From human judgment to algorithmic governance

Mr. Altman’s third “scary category” rests on a simple idea: Humans tend to delegate difficult decisions to whoever (or whatever) seems most competent.

On the individual level, that might mean relying on AI to choose our investments, write our emails or plan our diet. On the systemic level, it could mean presidents, prime ministers and Cabinet officials deferring to AI-generated recommendations. Again, this is not solely because of laziness or malevolence but also because the machine seems to outperform human judgment, intelligence and efficiency.

If this paradigm shift occurs, real political power would no longer reside with voters or their elected representatives. Rather, it would be concentrated in the hands of the few corporations and engineers who design, train and tweak the algorithms.

We’ve been here before in other domains. Social media companies became the gatekeepers of public discourse without a single vote being cast. Financial institutions adopted agenda-driven environmental, social and governance standards that manipulated how markets operate. Now, AI threatens to become the unelected shadow government for the free world.

Transparency disappears. Accountability blurs. And control shifts from the people to the programmers.

The Altman irony

Perhaps the most confounding part of this story is the fact that the very scenario Mr. Altman fears most is being accelerated by his own actions. By embedding AI into the decision-making processes of world leaders and governments, we are inching toward a world where “We didn’t vote for ChatGPT” becomes a fact of political life.

This is not a call to ban AI. It’s simply a call to remember that in a free society, important decisions should be made by people who are accountable to the public.

If we wouldn’t accept unelected corporate executives writing our laws in secret, why would we let their algorithms do it for them?

Once we’ve handed over that authority, it won’t matter whether AI “takes over the world” on purpose or not.

• Donald Kendal (dkendal@heartland.org) is the director of the Emerging Issues Center at The Heartland Institute. Follow @EmergingIssuesX.

Copyright © 2025 The Washington Times, LLC. Click here for reprint permission.
