The Washington Times - Thursday, June 26, 2025

Researchers at the Massachusetts Institute of Technology say they’ve developed a new and improved means for artificial intelligence to detect what ails patients, medically speaking, and to alert doctors to previously unknown health problems.

But the dark side of predictive and diagnostic AI is significant, significant enough that its continued development shouldn’t focus solely on improvement but rather on a more basic question: Is this necessary at all?

Sometimes, just because you can, doesn’t mean you should.



“Making AI models more trustworthy for high-stakes settings,” MIT News wrote in May. The piece went on to report on a “new method” that “helps convey uncertainty more precisely, which could give researchers and medical clinicians better information to make decisions.” Not patients? Hmm. 

The new model touts better accuracy in reading X-ray scans. Sounds good, at least in concept. But as with all things technology, the devil is not just in the details, but also in how those details will morph and progress.

Predicting diseases seems a particularly prickly practice. Yet here we are.

“A new AI machine learning model can detect the presence of certain diseases before the patient is even aware of any symptoms, according to its maker, AstraZeneca,” the World Economic Forum wrote in March.

AstraZeneca’s primary researcher, Slavé Petrovski, told Sky News, “We can pick up signatures in an individual that are highly predictive of developing diseases like Alzheimer’s, chronic obstructive pulmonary disease, kidney disease and many others.”


And all the world should go: Remember COVID.

This is the tragic time in human history when Americans were told to disregard the fact they didn’t feel sick, and instead, behave as if they were sick, all as a means of protecting those who were afraid of getting sick. That was the time when COVID Czar Anthony Fauci told everyone to mask up, stay home, isolate, get the shot, get more shots, get even more shots, and double mask — even if you didn’t test positive for COVID; even if you didn’t feel sick; even if technically you weren’t sick. 

That was the time when all health and medical facts flew out the window to make room for politicized bureaucrats to run roughshod over individual rights. If the masks worked, why did everyone have to wear one? If the shots worked, why did everyone have to get one? Shh. Don’t ask; just obey. The madness was fueled by computer modeling.

“Stanford-led team creates a computer model that can predict how COVID-19 spreads in cities,” Stanford Report wrote in November 2020.

The study was built on data scooped up from cellphones. It was one of many, many cherry-picked reports the government used to justify locking down citizens and to clamor for more technologically driven contact tracing, i.e., surveillance of citizens, all for the health and safety of the people.


Is this where we want the field of health care to go? 

If computer modeling is only as accurate as the numbers that are input (it is), and artificial intelligence relies on data for development (it does), then what happens when not enough data has been collected to guarantee accurate modeling?

Nightmare scenarios. That’s what happens.

America turns into a surveillance society, with privacy violated at every turn, all so the powers that be can scoop up data on every street corner and in every office.


America’s health care system becomes guided more by machine than by mankind. 

Yes, we know the alert button has already been hit by those who say medical professionals must use artificial intelligence only as a guide for patient care, never as a substitute for doctor-patient diagnostic and treatment decisions. But tell that to the insurance companies. Tell that to hospitals seeking to control U.S. health care costs. Tell that to politicians whose political futures are tied to medical field funding. Tell that to Big Pharma. Tell that to Big Tech. Tell that to Big Business. Tell that to the colleges and universities training the next generation of medical professionals, a generation raised on technology and totally trusting of technology.

Patient choice?

There are far too many factors that could make the individual just a cog in a very massive, very bloated, very fast-moving wheel.


AI in health care could one day prove to be the COVID years on steroids.

So once again, the consideration is: Just because you can, doesn’t mean you should.

• Cheryl Chumley can be reached at cchumley@washingtontimes.com or on Twitter, @ckchumley. Listen to her podcast “Bold and Blunt,” and never miss her column by subscribing to her newsletter. Her latest book is “God-Given Or Bust: Defeating Marxism and Saving America With Biblical Truths.”



Copyright © 2025 The Washington Times, LLC.
