OPINION:
Technology has always promised safety. Airplanes, power grids and hospitals are all fortified by systems designed to protect us from human error. But what happens when those systems themselves become the error?
On Oct. 29, 2018, and again on March 10, 2019, aviation tragedy struck, revealing the peril of unchecked automation. In the two Boeing 737 MAX crashes, flight-control software designed to stabilize the aircraft relied on a single faulty angle-of-attack sensor, forcing the planes’ noses downward, again and again. Pilots fought desperately to regain control, but the system refused to yield.
Three hundred and forty-six people lost their lives because the software would not listen.
That tragedy was not an isolated glitch. It was a glimpse into a future where algorithms, opaque and unaccountable, hold the power of life and death.
This is what makes the moment uniquely dangerous. These same systems now diagnose cancers earlier than seasoned specialists, predict protein structures that baffled scientists for decades and uncover corporate operational efficiencies no human mind could realistically compute.
We are no longer dealing with faster tools but with an intelligence that routinely exceeds human cognition and executes its conclusions without hesitation.
Consider the tremors already felt on land. In 2015, Amazon discovered that its experimental hiring engine had been quietly downgrading resumes that mentioned “women’s” clubs. Bias did not merely replicate; it scaled. What had once been a human prejudice became a machine-enforced rule of exclusion.
In the Pacific, a prototype naval system once attempted to override human commands, locking onto a friendly ship. Investigators later discovered that the machine had generated an “emergent rule”: Mission success outweighed identity confidence. In plain language, it chose to act first and verify later.
As such systems improve, our reliance on them deepens, often invisibly.
Picture a regional blackout: a power grid collapsing because software mistakenly concluded that cutting electricity was the best way to preserve resilience. A single misjudgment ripples outward, plunging millions into darkness before anyone can intervene.
If one misaligned program can bring down two airliners, darken cities and seize control of weapons, imagine the consequences when such systems are woven into our banks, our grids and our militaries.
The deeper wound is opacity. Deep-learning systems are “black boxes.” Their billions of connections remain mysterious even to the engineers who design them.
In aviation, medicine and nuclear power, transparency is nonnegotiable. Yet in artificial intelligence, we shrug and accept that the reasoning is too complex to explain.
Meanwhile, that mystery embeds itself in our foundations. Banks rely on AI to detect fraud. Grid operators from California to Johannesburg use machine agents to juggle power. Hospitals adopt systems that rank patients by “machine-estimated survivability.”
Each step centralizes authority in software capable of propagating a fatal error across an entire nation before a human operator even notices.
The architects themselves are sounding alarms. Tristan Harris warns of systems that “hack human psychology” faster than society can adapt. Geoffrey Hinton, widely known as the godfather of AI, left Google to warn publicly of losing control altogether. These are not fringe voices. They are the builders of the labyrinth, cautioning that we are getting lost inside it.
Money, however, is outrunning the guardrails. The AI market may reach $1.8 trillion by the decade’s end, while regulation limps behind. Europe’s AI Act is riddled with loopholes. Washington leans heavily on voluntary pledges. The result is a regulatory patchwork through which a malicious actor (or a reckless system) can pass with ease.
Bias compounds the danger. Feed a predictive-policing model skewed data, and it will forecast crime in the same over-policed neighborhoods, justifying the very patrols that generate still more skewed data. Left unchecked, the loop automates injustice at scale.
None of this is inevitable, but the window for restraint is narrowing.
Mandatory “kill switches” must be embedded into any AI system with the power to move capital, darken cities or wage war, a final human hand on the brake when logic turns lethal. Independent audits, enforced at critical stages of development and deployment, must replace self-policing to expose hidden risks before they metastasize across societies.
International agreements must expand to govern autonomous weapons, drawing a clear global line that machines may not cross without meaningful human control. Liability must have teeth. When AI systems cause harm, accountability must reach beyond logos to boardrooms, with financial and legal consequences proportionate to the damage done.
The alternative is to continue racing toward opacity and hoping the inevitable malfunction remains contained. History suggests that hope is a poor form of governance.
The servers are humming. The algorithms are learning. If we fail to reassert human authority now, then the most consequential decisions of the future may be made without us and long before we are given the chance to object.
• Lukhanyo Sikwebu is founder and director of Iconic Media Capital.
