AI killer robots and global security

 

24 May 2024 – There are many legitimate near-term fears about the misuse of AI, ranging from mis- and disinformation to voice-cloning scams, algorithmic bias, and deepfakes used for fraud and pornography.

But for militaries, the focus is on what it all means for strategy. At a major military intelligence conference in Berlin, Germany in February we covered a wide range of military AI subjects. And two weeks ago in Vienna, Austria we addressed the biggest threat: AI-enabled autonomous weapons systems (AWS).

The Vienna conference was the first of its kind, with more than 1,500 participants from over 140 countries and a broad spectrum of attendees: national representatives, United Nations representatives, international and regional organizations, academia, industry, computer engineers, weapons experts, and more.

The conference revolved around the rapid development of AWS and the use of AI-based technologies, such as drones and AI-based target selection, on current battlefields. As Alexander Schallenberg, Austrian Minister of Foreign Affairs, pointed out in his opening statement, there are currently more than eight armed conflicts underway around the globe, which threaten to worsen given the speed at which weapons technology is developing. The use of autonomous weapons delegates the decision-making process from humans to AI, which can lead to lethal mistakes. AI can be a tool, but tools are only valuable insofar as they are predictable and can be controlled. The lack of human control in the use of weapons entails the risk of destabilizing international security; indeed, it arguably already has.

The conference was divided into four panels, each one with a different topic: 

1. “Emerging technologies, AWS and the (future) shape of conflict”

2. “Human control and accountability under the law”

3. “Human dignity and the ethics of algorithms” 

4. “How dealing with AWS will shape future human-technology relations”. 

I spent most of my time in the first panel, with presentations such as “AI or not AI? Fully autonomous or optionally remote-controlled”. There was intense discussion (and brilliant explanations) of the technological development in the field of AWS and how the issue of AWS relates to wider questions around artificial intelligence in the military domain, especially the use of large-scale data processing in military applications. 

Note to readers: there was an in-depth report from Gaza on Israel’s use of AI-powered target suggestion systems (called “Lavender”) which are already showing us how the quest for speed, the erosion of meaningful human control, and the reduction of people to data points can contribute to devastation for civilians. I covered Lavender briefly in a post last month.

Highlighting the gravity of the situation, Austria’s foreign minister Alexander Schallenberg said: “This is the Oppenheimer Moment of our generation”.

Indeed, to what extent the genie is already out of the bottle is a question in itself. Drones and AI are already widely used by militaries around the world.

GlobalData defense analyst Wilson Jones said:

The use of drones in modern conflict by Russia and Ukraine, by the U.S. in targeted strike campaigns in Afghanistan and Pakistan and, as recently revealed last month, as part of Israel’s Lavender program, shows that AI’s ability to process information is already being used by world militaries to increase striking power.

As I have noted in previous posts, investigations by The Bureau of Investigative Journalism into U.S. drone warfare brought to light repeated U.S. military drone strikes that killed civilians in Pakistan, Afghanistan, Somalia and Yemen. More recently, as I noted in my post referenced above, the IDF’s Lavender AI system has been used to identify tens of thousands of targets, with civilians killed as a result of the strikes.

Sources quoted in the report on Lavender by +972 media said that, at the start of the IDF’s assault on Gaza, it permitted the deaths of 15 or 20 civilians as “collateral” for strikes aimed at low-ranking militants, with up to 100 allowed for higher-ranking officials. The system is said to have a 90% accuracy rate in identifying individuals affiliated with Hamas, meaning roughly one in ten of those it flags is not. Moreover, militants have reportedly been targeted deliberately while in their homes, so that entire families were killed at once on the basis of the AI’s identifications and decisions.
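To make the scale of that error rate concrete, here is a minimal back-of-envelope sketch in Python. The 90% accuracy figure and the 15–20 civilian collateral allowance are the ones quoted above; the flagged-target count and strike counts are purely illustrative assumptions standing in for the “tens of thousands” mentioned earlier, not reported data.

```python
# Back-of-envelope arithmetic on the figures quoted above.
# ILLUSTRATIVE ONLY: the target and strike counts below are assumptions,
# not reported targeting or casualty data.

targets_flagged = 30_000                   # illustrative; "tens of thousands" per the reporting
reported_accuracy = 0.90                   # claimed rate of correct Hamas affiliation
collateral_low, collateral_high = 15, 20   # permitted civilian deaths per low-ranking target

misidentified = targets_flagged * (1 - reported_accuracy)
print(f"People misidentified at 90% accuracy: {misidentified:,.0f}")

# If even a fraction of flagged targets are struck under the permitted
# collateral allowance, the implied civilian exposure scales very quickly.
for strikes in (100, 1_000, 5_000):
    print(f"{strikes:>5,} strikes -> {strikes * collateral_low:,} "
          f"to {strikes * collateral_high:,} permitted civilian deaths")
```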

Note to readers: it was reported last night that two entire families were killed by missile attacks, neither family having anything to do with Hamas.

Yes, a threat to global security and a “need” for regulation, but for many an almost futile effort. Dr Alexander Blanchard, senior researcher for the Governance of Artificial Intelligence programme at the Stockholm International Peace Research Institute (SIPRI), an independent think tank focusing on global security, said:

The use of AI in weapon systems, especially when used for targeting, raises fundamental questions about us – humans – and our relationship to warfare, and, more particularly, our presumptions of how we may exercise violence in armed conflicts.

AI changes the way militaries select targets and apply force to them. These changes raise in turn a series of legal, ethical and operational questions. The biggest concern is humanitarian.

But even military advisors say their biggest fears are that the way these autonomous systems are currently being designed and used will expose civilians and other persons protected under international law to risk of greater harm. This is because AI systems, particularly when used in cluttered environments, may behave unpredictably, and may fail to accurately recognize a target and attack a civilian, or fail to recognize combatants who are hors de combat.

There was an interesting discussion on the issue of how culpability is determined. One presenter noted:

Under existing laws of war there is the concept of command responsibility. This means that an officer, general, or other leader is legally responsible for the actions of troops under their command. If troops commit war crimes, the officer bears responsibility even if they did not give the orders; the burden of proof falls on them to show they did everything possible to prevent war crimes.

With AI systems, all of this becomes murkier. Is an IT technician culpable? A system designer? It’s unclear, and that lack of clarity creates a moral hazard if actors believe their actions are not covered by existing statutes.

Several major international agreements limit and regulate certain uses of weapons. There are bans on the use of chemical weapons, nuclear non-proliferation treaties and the Convention on Certain Conventional Weapons, which bans or restricts the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.

But nuclear arms control required decades of international cooperation, and subsequent treaties, to become enforceable, and few see that happening with autonomous weapons.

One big reason? Non-proliferation worked because of US-USSR cooperation in a bipolar world order. That doesn’t exist anymore, and the technology to build AI is already accessible to many more nations than atomic power ever was. A binding treaty would have to bring everyone involved to the table to agree not to use a tool that increases their military power. That isn’t likely to work, because AI can improve military effectiveness at minimal financial and material cost.

So in the absence of a clear regulatory framework, these declarations about regulating autonomous weapons remain largely aspirational. And it should come as no surprise that some states want to retain their own sovereignty when it comes to deciding on matters of domestic defense and national security, especially in the context of current geopolitical tensions. A specific example? While the EU’s Artificial Intelligence Act does lay out some requirements for AI systems, it does not address AI systems for military purposes. Those provisions were drafted, but every EU member state demanded they be deleted.

Today’s tense geopolitical environment has made the U.S. military spend billions on AI. In Ukraine and in Yemen, it has been testing all types of new AI weapon systems and related software. AI is not just another tool or weapon that can bring prestige, power, or wealth. It has the potential to enable a significant military and economic advantage over adversaries. Rightly or wrongly, the two players that matter most – China and the United States – both see AI development as a zero-sum game that will give the winner a decisive strategic edge in the decades to come.

Russia is also in the game, though to a more limited degree, and it is demonstrating growing expertise in drone warfare in Ukraine. It is expanding its expertise in other areas, too. Russian jamming has kept many of Ukraine’s relatively new long-range GLSDB bombs from hitting their intended targets. The Boeing- and Saab-made Ground-Launched Small Diameter Bomb has a range of about 161 km. It launches with a rocket motor, and then wings pop out to extend its range. But its guidance system has been targeted by Russian jamming that its makers are struggling to counteract.

The thing is, wars change constantly as a function of technology, politics, geography, climate, and thousands of other variables; they cannot be perfectly simulated. Special-purpose AI has been great at static, easily simulated games with rules that have been stable for centuries, like chess, Go, and maybe even Diplomacy, but it has never been great in dynamic settings where the environment itself is unpredictable.

AI systems that are more general purpose, especially generative AI systems like GPT-4, can’t even handle those static situations reliably.

And war is even more unstable. The future is barbarian; the world’s armies now see they must adapt to the new Dark Ages. During the glory days of the 1990s there was much excitement about computer technology opening the way to a new kind of warfare. Information, we were told, would flow from the battlefield to headquarters and back, giving commanders total control over hypercomplex, hugely expensive militaries that would overwhelm more poorly equipped forces with ease.

That has, however, not come to pass.

On the battlefields of eastern Ukraine, the single most effective force the Ukrainian army has consists of small, independent units huddled in bunkers just behind the lines, equipped with cheap drones. Right now, Russia has the upper hand by every conventional measure; it has more troops, more tanks, more artillery, more ammunition and other expendables, and a vastly superior air force. Its missiles pound Ukrainian targets hundreds of miles behind the lines – and yet it is restricted to slow, grueling, trench-by-trench advances, because any attempt at a general assault in open country gets chopped to pieces by drones.

The same thing, but mediated by a different set of technologies, is also taking place in the Gaza Strip. The Israeli military is so much larger and better armed than Hamas that, in a conventional struggle, there would be no contest at all. But the Hamas commanders aren’t stupid enough to meet the Israelis in a conventional struggle. Instead, their network of tunnels allows Hamas forces to pop up, ambush Israeli detachments, and vanish again. It’s the same strategy Hezbollah forces in southern Lebanon used against the Israeli army in 2006, and it’s proving just as effective this time around.

Then there’s the Ansarullah militia in Yemen, drawn mostly from the Houthi movement. Their approach to messing with the industrial West is just as cheap and effective. You don’t need a permanent installation to launch a drone against a ship passing through the Red Sea – the back of a truck is quite adequate – and so the US and British forces on the scene have nothing useful to bomb. Yes, some Ansarullah drones get shot down. But so what? It takes a missile costing $2 million to down a drone that costs only $2,000.
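A quick sketch of that cost-exchange arithmetic, using the $2 million and $2,000 figures quoted above; the engagement counts are illustrative assumptions, and the one-interceptor-per-drone pairing is a simplification.

```python
# Cost-exchange arithmetic for the interceptor-vs-drone asymmetry described above.
# The per-unit costs are the figures quoted in the text; the drone counts are
# illustrative assumptions.

interceptor_cost = 2_000_000   # cost of one interceptor missile (USD)
drone_cost = 2_000             # cost of one attack drone (USD)

print(f"Cost-exchange ratio: {interceptor_cost // drone_cost:,} to 1 in the attacker's favor")

# Even a perfect interception rate leaves the defender spending orders of
# magnitude more per engagement, assuming one interceptor per drone.
for drones in (10, 100, 1_000):
    print(f"{drones:>5,} drones: attacker ${drones * drone_cost:,} "
          f"vs defender ${drones * interceptor_cost:,}")
```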

The Ansarullah strategy is particularly clever because they don’t have to defeat the US and British navies. All they have to do is make the Red Sea too costly for commercial shipping to Israel and its allies, and they can do that by adding the risk of a drone strike (and the insurance premiums that follow from that risk) to the other costs and dangers shipping companies have to face. If Israel and its allies produced most of their goods and services at home, that wouldn’t be any kind of problem. But it turns out that one of the many downsides to economic globalization is that it holds every nation’s economy hostage to shipping disruptions in the major sea lanes.

The result? It’s only now that the Pentagon has been saying (softly) that the United States may have built entirely the wrong war machine for the 21st century.

So letting AI and autonomous weapons “run a war” just might be malpractice.
