Biotechnology paired with artificial intelligence is rapidly becoming one of the biggest emerging security threats for the U.S., according to a report published this week by the Hoover Institution.
Drew Endy, a senior fellow at Stanford’s Hoover Institution, writes in the report that “distributed biomanufacturing suggests futures in which anyone, anywhere will gain the capacity to source any toxin or pathogen.”
The development of DNA printers and AI models has kicked off a “winner-take-all” race in biotechnology, Mr. Endy warns. Large libraries of DNA information on everything from smallpox to Ebola leave the sector ripe for exploitation in bioterrorism and biological warfare.
“At some point in the last 10 years, building viruses from scratch became nothing special,” Mr. Endy told The Washington Times. “But, we haven’t changed how we govern work with pathogens.”
Regulation and investment in biotechnology research in China are quickly outpacing those in the U.S., at a rate that may leave the U.S. unable to catch up.
The report outlines how historical approaches are quickly becoming irrelevant and makes the case that overregulation and a lack of competition in the U.S. have pushed research and development overseas.
Mr. Endy pointed to what he termed “operating systems for life,” which he said developers in the field could finish “within the next 1,000 days.”
“They’re either going to happen in the United States or in Shenzhen,” Mr. Endy said. “Whoever gets that first is going to have an advantage.”
Shenzhen, a major city in China’s Guangdong Province, is a global tech hub.
The Chinese government placed biotechnology front and center as part of a whole-of-government priority list as early as 2015.
The regulatory framework that followed has encouraged innovation and investment.
A professional peer in China told Mr. Endy that “in a six-year period of time, he was able to navigate a system that provided access to capital, land, approvals and everything else needed to create a national laboratory from scratch for emerging biotechnology.”
That lab now employs more than 2,500 people, he said.
“And it’s now fully operational,” Mr. Endy said. “Meanwhile, I’m still trying to get funding to onboard a second Ph.D. student this year.”
Mr. Endy and his colleagues want to see a “National Biotechnology Coordination Office” that would oversee the building of federally funded AI labs dedicated to processing highly technical biological data.
The U.S.-based effort would be aimed at combating the unregulated marriage of new biosynthetics and AI, which is quickly “lowering barriers for designing novel toxins and pathogens.”
The industry is starting to engage with the problem with or without the U.S. government. Possible misuse of new developments prompted the Rand Corp. to publish concerns last month about how to detect and evaluate possible bioterror threats.
A team at Microsoft recently used open-source AI protein design tools to bypass, and then fix, security protocols in place against similar bioterror-type threats.
Mr. Endy isn’t convinced the fix will be enough. In the report, he and his colleagues pose an open question: “Which is easier: prompting a Large Language Model to design a novel, harmful biomolecular function or using AI to detect one that has never been seen before?”
He’s still not sure of the answer, but he is encouraged that there’s a conversation about the emerging threat.
“At least some people are paying attention to the fact that things are changing and that we have to wake up and do something about it,” Mr. Endy said. “Biosecurity victory is possible. We can secure biology.”
• John T. Seward can be reached at jseward@washingtontimes.com.