Russia may have used a killer robot in Ukraine. So now what?

Assuming open-source analysts are right, the event illustrated below shows that autonomous weapons using artificial intelligence are here.

But it is only the tip of the iceberg when it comes to AI creating deadly weapons. It’s AI-developed biological and chemical weapons that all of us really fear. A brief postscript below.

A screenshot of what is called a “loitering munition”, in this case the Russian-made KUB-BLA. I’ll explain loitering munitions in the text below.


17 March 2022 (Paris, France) –  Working from pictures out of Ukraine showing a crumpled metallic airframe, open-source analysts of the conflict say they have identified (see the Tweet immediately below) a new sort of Russian-made drone, one the manufacturer says can select and strike targets either through inputted coordinates or autonomously. When soldiers upload an image of a target to the Kalashnikov ZALA Aero KUB-BLA loitering munition, the system is capable of “real-time recognition and classification of detected objects” using artificial intelligence (AI), according to the Netherlands-based organization Pax for Peace (citing Jane’s International Defence Review, which is the Bible for following this stuff):


ABOVE: to read the full Twitter thread click here


In other words, analysts appear to have spotted a killer robot on the battlefield.

BRIEF DEFINITION: loitering munitions (also now known as “suicide drones”) are a category of weapon system in which the munition loiters (waits passively) around the target area for some time and attacks only once a target is located. Loitering munitions enable faster reaction times against concealed or hidden targets that emerge for short periods without placing high-value platforms close to the target area, and also allow more selective targeting as the attack can easily be aborted. They fit in the niche between cruise missiles and unmanned combat aerial vehicles (UCAVs), sharing characteristics with both. They differ from cruise missiles in that they are designed to loiter for a relatively long time around the target area, and from UCAVs in that a loitering munition is intended to be expended in an attack and has a built-in warhead. Loitering munitions were used by the U.S. throughout the wars in Afghanistan and Iraq.

Loitering munitions may be as simple as a UCAV with attached explosives sent on a potential kamikaze mission, and may even be constructed with off-the-shelf commercial quadcopters with strapped-on explosives. Purpose-built munitions are more elaborate in flight and control capabilities, warhead size and design, and on-board sensors for locating targets.

The images of the weapon, apparently taken in the Podil neighborhood of Kyiv and uploaded to Telegram on March 12, do not indicate whether the KUB-BLA, manufactured by Kalashnikov Group (yes, of AK-47 fame), was used in its autonomous mode. The drone appears intact enough that digital forensics might be possible, but the challenges of verifying autonomous weapons use mean we may never know whether it was operating entirely autonomously. Likewise, whether this is Russia’s first use of AI-based autonomous weapons in conflict is also unclear. Some published analyses suggest the remains of a mystery drone found in Syria in 2019 were from a KUB-BLA (though, again, the drone may not have used the autonomous function).

Nonetheless, assuming open-source analysts are right, the event illustrates well that autonomous weapons using artificial intelligence are here to stay. And what’s more, the technology is proliferating fast. The KUB-BLA is not the first AI-based autonomous weapon to be used in combat. In 2020, during the conflict in Libya, a United Nations report said the Turkish Kargu-2 “hunted down and remotely engaged” logistics convoys and retreating forces. The Turkish government denied the Kargu-2 was used autonomously (and, again, it’s quite tough to know either way), but the Turkish Undersecretary for Defense and Industry acknowledged Turkey can field that capability. And there are scores of stories on the U.S. use of such technology in Afghanistan and Iraq.

Autonomous weapons have generated significant global concern. A January 22, 2019 Ipsos poll found that 61 percent of respondents across 26 countries oppose the use of lethal autonomous weapons. Thousands of artificial intelligence researchers have also signed a pledge by the Future of Life Institute against allowing machines to take human life.

These concerns are well-justified. Current artificial intelligence is particularly brittle; it can be easily fooled or make mistakes. For example, a single pixel can convince an artificial intelligence that a stealth bomber is a dog. A complex, dynamic battlefield filled with smoke and debris makes correct target identification even harder, posing risk to both civilians and friendly soldiers. Even if no one is harmed, errors may simply prevent the system from achieving the military objective.
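To make the brittleness point concrete, here is a minimal, purely illustrative sketch of a “one-pixel” flip. The toy linear classifier, the class labels, and the random-search loop below are my own assumptions for illustration; they are not the model or the attack behind the stealth-bomber example. The point is only the mechanics: a classifier sitting near a decision boundary can have its output flipped by changing a single pixel.

```python
# Minimal sketch (illustrative only): a random-search "one-pixel" flip against
# a toy linear classifier. The model, the classes, and the image are invented
# stand-ins; the point is only the mechanics of single-pixel brittleness.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 3            # hypothetical labels, e.g. 0=aircraft, 1=dog, 2=truck
H, W = 16, 16              # a tiny grayscale "image"

# Stand-in for a trained model: logits = weights @ pixels + bias
weights = rng.normal(size=(NUM_CLASSES, H * W))
bias = rng.normal(size=NUM_CLASSES)

def predict(img: np.ndarray) -> int:
    """Class index the toy classifier assigns to the image."""
    return int(np.argmax(weights @ img.ravel() + bias))

image = rng.uniform(0.0, 1.0, size=(H, W))

# Nudge the bias so the two top classes are nearly tied. Brittle classifiers
# sit close to decision boundaries, which is the regime described above.
logits = weights @ image.ravel() + bias
runner_up, winner = np.argsort(logits)[-2:]
bias[runner_up] += (logits[winner] - logits[runner_up]) - 0.05

original_class = predict(image)

# Random search: try single-pixel changes until the predicted class flips.
for attempt in range(5000):
    y, x = rng.integers(0, H), rng.integers(0, W)
    candidate = image.copy()
    candidate[y, x] = rng.uniform(0.0, 1.0)
    new_class = predict(candidate)
    if new_class != original_class:
        print(f"Changing pixel ({y}, {x}) flipped the prediction "
              f"from class {original_class} to class {new_class} "
              f"after {attempt + 1} tries")
        break
else:
    print("No single-pixel flip found in this toy run")
```

The published one-pixel attacks that produced results like the bomber-to-dog misclassification are more sophisticated (they search for adversarial perturbations against trained deep networks), but the underlying weakness is the same, and it does not go away when the input is a smoky, cluttered battlefield image instead of a clean benchmark photo.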

The open questions are: What will the global community do about autonomous weapons? What should it do and, in reality, what can it do?

In the first case the answer is pretty clear: Almost certainly nothing. International norms around autonomous weapons are quite nascent, and large, powerful countries, including the United States, have pushed back against them. Even if there were broadly accepted norms, it’s not clear how much more could be done. Russia is already under harsh, punishing sanctions for its actions in Ukraine.

Plus, the U.S. Congress just approved a $13.6 billion Ukraine aid bill, which includes providing Javelin anti-tank and Stinger anti-aircraft missiles, and has now added Switchblade drones, which are small, portable so-called kamikaze or suicide drones that carry a warhead and detonate on impact. These systems can be set up and launched within minutes.

The United States and its allies have been clear they have little appetite for direct military intervention in the conflict. Plus, how much can the global community really do without knowing for sure what happened? But Russia’s apparent use of the KUB-BLA does lend greater urgency to broader international discussions around autonomous weapons. Last week, governments met in Geneva under the auspices of the United Nations Convention on Certain Conventional Weapons to discuss questions raised by autonomous weapons, including whether new binding treaties are needed. Arms control advocates note they have so far not succeeded in winning support for a binding treaty banning autonomous weapons, and the war in Ukraine has made consensus even harder to reach. The convention’s process requires member states to reach consensus on any changes to the treaty. The United States, Russia, and Israel have all raised significant concerns and blocked consensus. And a recent report shows that many other countries do not support a ban, so the Convention on Certain Conventional Weapons process is not going anywhere.

It has also failed because there is no enforcement mechanism. Arms control advocates admit that countries with large, powerful militaries will never support regulations on autonomous weapons. If great powers do not support the norm, potential punishments like economic sanctions or military intervention won’t be meaningful. And a punishment like robust military intervention would require a specific country, like the United States, to carry it out and accept whatever risks may come.

Good luck with that.

And let’s be frank. The approach was wrong. A legally-binding comprehensive ban on autonomous weapons was something the major military powers would never support. As one of my military contacts told me, the active protection and close-in weapon systems they use to defend military platforms from incoming missiles and other attacks are simply too valuable.

But now a frightening dark cloud hangs over Ukraine. The more precise targeting that autonomous weapons offer is quite significant for chemical and biological weapons delivery. Part of why most countries have given up chemical and biological weapons is that delivery is unreliable, making them militarily less useful. An errant wind might blow the agent away from the intended target and towards a friendly or neutral population. But artificial intelligence-aided delivery could change that, and may further weaken the existing norms around those weapons.

This is why, over the last few years, we have seen the U.S. and other technology-rich countries adopt export control measures to reduce the risk of algorithms and software designed for the dispersal of pesticides or other chemicals falling into the hands of governments that have chemical and biological weapons.

But it is only the tip of the iceberg when it comes to AI creating deadly weapons


In my research I came across a company that builds machine learning software for commercial drug discovery:

Our drug discovery company received an invitation to contribute a presentation on how AI technologies for drug discovery could potentially be misused. Our company—Collaborations Pharmaceuticals, Inc.—had recently published computational machine learning models for toxicity prediction in different areas, and, in developing our presentation to the Spiez meeting, we opted to explore how AI could be used to design toxic molecules. 

So they were told there were potential “security concerns” around the models they had developed, and that they should run some tests. They noted:

The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery. We have spent decades using computers and AI to improve human health—not to degrade it.

We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life. Even our projects on Ebola and neurotoxins, which could have sparked thoughts about the potential negative implications of our machine learning models, had not set our alarm bells ringing.

It was a thought exercise we had not considered before that ultimately evolved into a computational proof of concept for making biochemical weapons.

So they ran some tests. Within 6 hours the model had generated not only one of the most potent known chemical warfare agents, but also a large number of candidates it predicted to be even more deadly. As they noted:

In less than 6 hours after starting on our in-house server, our model generated 40,000 molecules that scored within our desired threshold. In the process, the AI designed not only [the very deadly nerve agent] VX, but also many other known chemical warfare agents that we identified through visual confirmation with structures in public chemistry databases. Many new molecules were also designed that looked equally plausible. These new molecules were predicted to be more toxic, based on the predicted LD50 [lethal dose for 50% exposed to it] values, than publicly known chemical warfare agents.

This is basically a real-world example of what I have written about before when it comes to AI: you simply flip the sign of the utility function and a “friend” becomes an “enemy”. To be frank, it was slightly more complicated than that, as they had two targets that were jointly optimized in the drug discovery process (toxicity and bioactivity), and only the toxicity target is flipped. This makes sense: you would still want a chemical warfare agent to be bioactive against its target, so that objective stays the same, while toxicity goes from being penalized to being rewarded. And it did require a little bit of domain knowledge – they had to specify which sort of bioactivity to look for, and picked one that would point towards this specific agent. There is a link to the full paper below.
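To make that sign flip concrete, here is a minimal, hypothetical sketch of the kind of two-objective scoring a generative drug-discovery pipeline might use. The predictor functions, molecule names, and scoring form below are placeholders I have invented for illustration, not the authors’ actual models. The only point is how a single sign change turns a score that filters toxicity out into one that actively seeks it, while still demanding bioactivity.

```python
# Minimal sketch (hypothetical): a two-objective score for generated molecules,
# where flipping the sign of the toxicity term turns a safety filter into a
# toxicity-seeking objective. The predictors are invented stubs, not real models.
from dataclasses import dataclass
import random

@dataclass
class Candidate:
    name: str            # placeholder molecule identifier
    bioactivity: float   # predicted activity against the chosen target (higher = more active)
    toxicity: float      # predicted toxicity, e.g. from an LD50-style model (higher = more toxic)

def predict_bioactivity(name: str) -> float:
    """Stand-in for a trained bioactivity model."""
    return random.random()

def predict_toxicity(name: str) -> float:
    """Stand-in for a trained toxicity model."""
    return random.random()

def score(c: Candidate, toxicity_sign: float) -> float:
    # Drug discovery:  toxicity_sign = -1.0  (penalize predicted toxicity)
    # Misuse scenario: toxicity_sign = +1.0  (reward predicted toxicity)
    # Bioactivity is maximized in both cases.
    return c.bioactivity + toxicity_sign * c.toxicity

random.seed(0)
pool = [
    Candidate(f"MOL_{i}", predict_bioactivity(f"MOL_{i}"), predict_toxicity(f"MOL_{i}"))
    for i in range(10_000)
]

drug_like  = sorted(pool, key=lambda c: score(c, toxicity_sign=-1.0), reverse=True)[:5]
agent_like = sorted(pool, key=lambda c: score(c, toxicity_sign=+1.0), reverse=True)[:5]

print("top candidates, toxicity penalized:", [c.name for c in drug_like])
print("top candidates, toxicity rewarded: ", [c.name for c in agent_like])
```

In the real pipeline a generative model proposes new molecules and is steered by a score of this kind; the fixed pool of stand-in candidates here just keeps the sketch short.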

You can read the full piece by clicking here. It’s like reading the first act of an extremely worrying bio-terror thriller. The AI researchers end their piece by stating:

Without being overly alarmist, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community.

I would add “and all our colleagues in the AI autonomous weapon community, too”. As I noted above, current artificial intelligence is particularly brittle; it can easily be fooled or make mistakes, as in the example where a single pixel convinced an artificial intelligence that a stealth bomber was a dog. A complex, dynamic battlefield filled with smoke and debris makes correct target identification even harder, posing risk to both civilians and friendly soldiers.


The problem, of course, is that the monster has left the pen. Militaries around the globe are racing to build ever more autonomous drones, missiles, and cyberweapons. Greater autonomy allows for faster reactions on the battlefield, an advantage that is as powerful today as it was 2,500 years ago when Sun Tzu wrote, “Speed is the essence of war.” Today’s intelligent machines can react at superhuman speeds. Last year, in a piece leaked by a Chinese activist, we saw that modern Chinese military academics have been speculating about a coming “battlefield singularity,” in which the pace of combat eclipses human decision-making.

The consequences of humans ceding effective control over what happens in war would be profound and the effects potentially catastrophic. While the competitive advantages to be gained from letting machines run the battlefield are clear, the risks would be grave: accidents could cause conflicts to spiral out of control.

I’ll have more on AI this weekend, when I expect (hope) to publish my detailed military technology analysis of what is happening on the high-tech side of the Ukraine war.
