Wavell Room

The British way in AI warfare.


Artificial Intelligence is of the moment, you might have noticed. It features prominently in the UK’s recent Integrated Security and Defence Review. The head of UK Strategic Command just called it the ‘one ring to rule them all’, and the Chief of the General Staff reckons it might ‘transform warfare,’ thereby ‘rendering legacy systems obsolete ever more rapidly’.

Meanwhile, the Americans have just wrapped up a large National Security Commission on AI, and the direction of travel over there is clear too – much more autonomy in defence is coming, and fast. Famously, Vladimir Putin mused that whoever masters AI will control the world. He should be worried, if so, because it won’t be him. Like the UK, China and Russia are betting big on military AI, but they’ve got a sizeable task ahead of them. You get to the cutting edge by innovating, not imitating.

Anyway, heady stuff all round. But there’s still plenty of uncertainty and debate (sometimes heated) about what AI can actually do, and what the implications are for defence.

Cyber Cynics

One alternative position to the hyperbolic revolutionaries is jaded cynicism. There’s nothing new under the sun, says the cynic, usually in a Tweet. Keith Dear calls them the ‘reactionariat,’ and for them, AI is just the latest hype. Remember the Effects Based Approach to Operations, or the IT revolution in military affairs that preceded it? You won’t lift the fog of war with this ‘revolution’, either.

It’s just as lazy as the hyperbole, this cynicism. It avoids engaging with all that complex technology, or its cultural foundations. Sometimes, tiresomely, it’s the badge of the self-declared ‘expert’: ‘Oh you innocent technophiles, why can’t you understand war, like I do?’

In their favour, we have been here before – with overblown claims for radical new military technologies. And there are continuities with the past. If you want to see failed efforts to swarm and saturate a carrier task group, read about Japan’s kamikazes in Ian Toll’s sublime history of the Pacific war. Or reflect on the limits of Israel’s situational awareness today in densely populated, urban Gaza.

New wine, new bottles

They’re wrong though, the cynics. Change is coming, and much faster than a traditional procurement cycle (hello Ajax!), or the generation of an officer’s career. The middle ground is a safe bet. AI will certainly change the character of war. For the merest hint at what’s coming, look at Iron Dome versus Hamas’s rockets. Look at the Armenian tank versus the Azerbaijani drone.

But I would go further, leaning to the revolutionaries by thinking AI will alter war’s nature too. Not as a human phenomenon – involving chance, passion and violence. Those Clausewitzian staples will surely remain. But change in the nature of war, seen as involving human decision-making, with all our fallible, evolved mental shortcuts – Clausewitz’s ‘genius’ commander. For the first time, decisions about violence against humans will be made by inhuman minds. That’s revolutionary.

What does this mean?

Will all this AI mean new kit, concepts, and people? Certainly. How do you lead CGS’s planned 30,000-strong robotic army? Will tactics change? Again, yes. For the CAS, Mike Wigston, AI enables mass, especially in the air. Swarming favours the offence; and lightning-fast decisions mean, in the words of one US general, a ‘hyperactive battlefield’. With no crew to protect, all sorts of new combined arms possibilities open up. A robotic menagerie is taking shape. AI’s decision-making is its USP – so staying at the cutting edge becomes imperative. Procurement will have to change, because you can’t spend a quarter century designing your next generation fighter plane. Accordingly, the Americans claim to have done it in just one year.

AI is a large wager, balancing legacy equipment and tradition against unproven change. Sunset capabilities are being retired early to free up our stake in the gamble: Warrior – gone. The US might soon junk the thoroughbred F-22. Should a manned Tempest even be on the drawing board today, when an AI has just spanked a human pilot 5-0 in simulated dogfighting? What about our gleaming new carriers – might they be vulnerable to saturation from fast moving autonomous platforms – in the air and under the surface?

The uncertainty is feeding an intense security dilemma. With the right combination of weapons, people and concepts unsettled, what’s developing is an arms race and Wacky Races, all rolled into one. Those at the top, CGS et al, are anxious not to be left behind. Yet at the same time, there’s so much hype, and plenty of snake oil being offered for sale. What to do?

Culture is the key

We really must ground our analysis of AI in an understanding of the history and culture from which it emerges. The cardinal sin here is ‘technological determinism’, whose adherents imagine technology emerging fully formed, out of the blue, to exert a dramatic effect on society. That’s where a lot of the debate is today: too much sci-fi, not enough social science.

Our technologies are a product of our societies. So, the history of AI is deeply interwoven with the particular blend of entrepreneurialism, university research, and deep federal government pockets found in the US. Similarly, British norms will shape our development and employment of AI. It will look very different to Chinese military AI, for sure. That’s wider societal culture, about ethics, say; and narrower organisational culture, for example about mission command.

Much discussion of AI in war ignores this cultural aspect – getting wrapped up in the tech and in buzzword bingo. If this is indeed a new ‘horse and tank moment’, we’d do well to remember that the tank was used in very different ways by rival armies of the 1930s. CGS and the rest are right, imho – the times are a-changin’. We need a way to use AI effectively, while understanding its limitations. And, critically, to understand the cultural forces that shape its development here in the UK. That’s the debate that needs to happen now.

Kenneth Payne

Dr Kenneth Payne is a Reader in International Relations at King's College London. A former BBC journalist, he is the author of four books on strategy. The latest, I, Warbot: The Dawn of Artificially Intelligent Conflict, was published by Hurst on 17th June 2021.

2 comments

Anne September 22, 2021 at 13:29

It’s the age of the Zurg rush. Swarms of cheap, disposable drones overwhelming everything: fixed positions, less disposable heavy assets, everything.

John October 8, 2021 at 14:54

If the cultural element is so important, it would be useful for the author to be more specific in describing how the various military-political cultures he mentions will influence the procurement and deployment of AI and how those choices will then interact; and also, are there any aspects to the military use of AI which will be common across all cultures because they are either so fundamental or so obviously the best choice?
