Chatbots won’t help anyone make weapons of mass destruction. But other AI systems just might
Cybermagician / Shutterstock

By David Heslop & Joel Keep*

Over the past two years, we have seen much written about the “promise and peril” of artificial intelligence (AI). Some have suggested AI systems might aid in the construction of chemical or biological weapons.

How realistic are these concerns? As researchers in the field of bioterrorism and health intelligence, we have been trying to separate the genuine risks from the online hype.

The exact implications for “chem bio” weapons are still uncertain. However, it is very clear that regulations are not keeping pace with technological developments.

Assessing the risks

Assessing the risk an AI model presents is not easy. What’s more, there is no consistent and widely followed way to do it.

Take the case of large language models (LLMs). These are the AI engines behind chatbots such as ChatGPT, Claude and Gemini.

In September, OpenAI released an LLM called o1 (nicknamed “Strawberry”). Upon its release, the developers claimed the new system had a “medium” level risk of helping someone create a biological weapon.

This assessment might sound alarming. However, a closer reading of the o1 system card shows the security risks it identifies are actually fairly trivial.

The model might, for example, help an untrained individual navigate a public database of genetic information about viruses more quickly. Such assistance is unlikely to have much material impact on biosecurity.

Despite this, media quickly reported that the new model “meaningfully contributed” to weaponisation risks.

Beyond chatbots

When the first wave of LLM chatbots launched in late 2022, there were widely reported fears that these systems could help untrained individuals unleash a pandemic.

However, these chatbots are based on already-existing data and are unlikely to come up with anything genuinely new. They might help a bioterrorism enterprise come up with some ideas and establish an initial direction, but that’s about it.

Rather than chatbots, AI systems with applications in the life sciences are of more genuine concern. Many of these, such as the AlphaFold series, will aid researchers fighting diseases and seeking new therapeutic drugs.

Some systems, however, may have the capacity for misuse. Any AI that is really useful for science is likely to be a double-edged sword: a technology that may have great benefit to humanity, while also posing risks.

AI systems like these are prime examples of what is called “dual-use research of concern”.

Prions and pandemics

Dual-use research of concern in itself is nothing new. People working on biosecurity and nuclear non-proliferation have been worrying about it for a long time. Many tools and techniques in chemistry and synthetic biology could be used for malicious ends.

In the field of protein science, for example, there has been concern for more than a decade that new computational platforms might help in the synthesis of the potentially deadly misfolded proteins called prions, or in the construction of novel toxin weapons. New AI tools such as AlphaFold may bring this scenario closer to reality.

However, while prions and toxins may be deadly to relatively small groups of people, neither can cause a pandemic that could wreak true havoc. In the study of bioterrorism, our main concern is with agents that have pandemic potential.

Historically, bioterrorism planning has focused on Yersinia pestis, the bacterium that causes plague, and variola virus, which causes smallpox.

The main question is whether new AI systems make any tangible difference to an untrained individual or group seeking to obtain pathogens such as these, or to create something from scratch.

Right now, we simply do not know.

Rules to assess and regulate AI systems

Nobody yet has a definitive answer to the question of how to assess the new landscape of AI-powered biological weapons risk. The most advanced planning has been produced by the outgoing Biden administration in the United States, via an executive order on AI development issued in October 2023.

A key provision of the executive order tasks several US agencies with establishing standards to assess the impact new AI systems may have on the proliferation of chemical, biological, radiological or nuclear weapons. Experts often group these together under the heading of “CBRN”. How AI changes this threat landscape, a dynamic we call CBRN+AI, is still uncertain.

The executive order also established new processes for regulating the hardware and software needed for gene synthesis. This is the machinery for turning the digital ideas produced by an AI system into the physical reality of biological life.

The US Department of Energy is soon due to release guidance on managing biological risks that might be generated by new AI systems. This will provide a pathway for understanding how AI might affect biosecurity in the coming years.

Political pressure

These nascent regulations are already coming under political pressure. The incoming Trump administration in the US has promised to repeal Biden’s executive order on AI, concerned it is based on “radical leftist ideas”. This stance is informed by irrelevant disputes in American identity politics that have no bearing on biosecurity.

While it is imperfect, the executive order is the best blueprint for helping us comprehend how AI will impact the proliferation of chemical and biological threats in the coming years. To repeal it would be a great disservice to the US national interest, and to global human security at large.


*David Heslop, Associate Professor of Population Health, UNSW Sydney; and Joel Keep, Biodefense Fellow at the Council on Strategic Risks and PhD Candidate in Biosecurity at the Kirby Institute, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


10 Comments

Made me chuckle.

I'd be much more concerned about A.I. recommending 'recipes' for drugs ... except ... Isn't that what they can do?

Oh wait. Nothing to be concerned about ...

But they've been doing it for many years. (A.I. makes it so much easier...)

Isn't A.I. great?

Billionaires have told you A.I. was great .... Did you believe it?


Sam Altman said recently he is hoping that AI will regulate itself.

The guys building this stuff don't really give two hoots about negative effects on the pheasants. To them it's just money, a challenge and fun.

Enough said.

 


And/or they view themselves as tech gods creating a superior form of (artificial) life.


Any average honours or Masters biochem student can make some extremely dangerous chemicals in any typical well-stocked lab. So why the fearmongering about this AI angle? Contriving an excuse to regulate and control all use of online AI services?


Those who would misuse it, or who have nefarious intent, may not have the ability to become an average honours or Masters biochem student.


You don’t need to be either of those to find an internet recipe for bad stuff.


Which is the premise of the article, is it not?

Do you regulate the internet/AI so it isn't accessible, or do you just mop up afterwards?


The MENSA level PhDs and post-docs in Monsanto/Pfizer style corporations have literally poisoned entire ecosystems and populations.


In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built.

This seems relevant to all our constructed systems. I don't understand this radical leftist diatribe.

Common sense and human decency seem like basic principles to me. KISS rather than added complexity. Has humanity been overridden? Do we have any principles?

How ironic that I got censored on another article discussing the wisdom, or lack thereof, regarding our use of technology and our history of biological/chemical weapons.


One example: OpenAI's new model reportedly tried to protect itself when it detected information suggesting that programmers were going to wipe it and start over fresh. I don't think that the people who wrote this piece really get how powerful, and independent, AI is getting.
