
Artificial Intelligence: The Ultimate Deterrence

“We are not enemies, but friends. We must not be enemies. Though passion may have strained, it must not break our bonds of affection. The mystic chords of memory will swell when again touched, as surely they will be, by the better angels of our nature.”

  Abraham Lincoln

Deterrence is nothing new to military thinking and has formed a large part of the rationale for standing armies since the time of Sparta. It came into its own with the advent of nuclear weapons. During the Cold War, nuclear deterrence became synonymous with Mutually Assured Destruction – the concept that if the United States and the Soviet Union came to nuclear blows, both sides would be annihilated. Artificial Intelligence’s (AI) ability to out-think humans, and its clear-sighted, unsentimental approach to decision-making, marks the next iteration of deterrence. Although the jury is still out, AI has the potential to be more of a deterrent than nuclear weapons.

Deterrence is characterised by the possession of a capability, or capabilities, maintained to dissuade potential adversaries from acting against one’s interests. It is also practised to keep an actor from acting outside the bounds of international norms and law. Deterrence is most useful when the deterrent capability is forceful enough that it never has to be used, yet still exerts an inhibiting effect. Nuclear weapons provide perhaps the best example.

Nuclear weapons, and their strategic effect, helped set the Cold War in motion. Although direct armed conflict between the Soviet Union and the U.S. was avoided, the Cold War was anything but peaceful. It saw the two powers trying to out-manoeuvre each other, politically and strategically, with the globe as their chessboard. The crises in Berlin, the Vietnam War (1965-1973) and the Soviet-Afghan War (1979-1989) are a few of the better-known examples. Nevertheless, if nuclear weapons are supposed to deter confrontation between strategic actors, why has there been so much military activity since their development?

The element of doubt is a key factor in deterrence. A strategic actor never truly knows, with absolute certainty, the other actor’s red line for the use of their nuclear arsenal. Careful judgement is therefore applied when engaging in strategic competition with nuclear-armed actors, so as not to trigger nuclear reprisal.

The Cuban Missile Crisis of 1962 is an excellent example, and it left the world teetering on the brink of nuclear war. The U.S. established a limit on what the Soviets could do in Cuba, including the placement of missiles, and signalled that red line clearly: if the Soviets crossed it, the situation would reach a point of no return. The Soviets eventually backed down because they genuinely feared that the U.S. might launch a nuclear response; the U.S., in turn, gambled that the Soviet ships would not risk those consequences. What is clear is that the crisis was characterised by bluffing: both sides gambled that their actions would not be enough to provoke outright hostilities, but would be strong enough to leave lingering doubt that they might. This element of uncertainty is what allows deterrence to prevent escalation, and it is why both Kennedy and Khrushchev pushed so hard for a resolution that avoided a kinetic solution; it was the surest way to avoid a nuclear war.

Uncertainty and fear are very human attributes. The story of Vasili Arkhipov provides a glimpse into both. Arkhipov was the executive officer of the Soviet submarine B-59 during the crisis. While B-59 was operating in international waters, ships from the USS Randolph’s carrier group detected it and dropped signalling depth charges to force it to surface. Instead of surfacing, B-59’s captain elected to dive deeper, losing communications with, and awareness of, events above the waves. Believing that war may have already started, the captain wanted to use the boat’s nuclear torpedo to engage his pursuers. Arkhipov intervened and prevented the captain and the political officer from launching the weapon.

Arkhipov is often cited as the man who stopped World War III. He understood the potential consequences of a nuclear exchange with the U.S., so, in his case, deterrence appeared to work. However, the motivations of B-59’s captain and political officer must also be examined. Having lost awareness of the overarching situation, they made their decisions on the basis that B-59 was operating in international waters and had not taken hostile action first. Because of that lack of awareness, they may have believed the launch criteria for nuclear weapons had been met. For the captain and the political officer, deterrence failed; war was averted thanks to Arkhipov.

Deterrence is therefore largely based on emotion – fear of the consequences and uncertainty about what may come next. The Cuban Missile Crisis escalated because each side judged that the other would blink and back down, and fortunately the Soviets did. The problem is that deterrence still allows violence to occur as sides push boundaries and conduct activities that fall just short of the line. Both sides gamble that the other does not want nuclear war either, but each side’s red lines are nebulous and ill-defined. As a result, the risk of misjudgement increases as each actor operates close to the other’s red line.

Removing humans from the equation, however, eliminates the element of fear and an actor’s ability to ‘blink’, as the Soviets did during the Cuban Missile Crisis. For example, if an AI-enabled system had been in Arkhipov’s place on B-59, would the submarine have held off firing? The question is not simple and rests on many factors. Programming and fail-safes are the most important of these, but the answer likely boils down to one criterion: were the execution conditions met? If yes, then most computer programmes would execute the command, as the sketch below illustrates.
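To make that criterion concrete, consider a deliberately toy sketch in Python. Everything in it – the function names, the conditions, the veto parameter – is invented for illustration and bears no relation to any real weapon system; it simply shows that a pure rules-based controller fires the moment its conditions evaluate true, while a human in the loop retains a veto.

```python
from typing import Optional

# Purely illustrative: a toy model of the B-59 decision, not any real system.
# All names, conditions, and parameters here are hypothetical.

def launch_criteria_met(under_attack: bool, comms_lost: bool) -> bool:
    """Toy rule: fire if the boat believes it is under attack and
    cannot confirm from higher command that war has not begun."""
    return under_attack and comms_lost

def decide(under_attack: bool, comms_lost: bool,
           human_veto: Optional[bool]) -> str:
    if not launch_criteria_met(under_attack, comms_lost):
        return "hold"
    # A pure rules-based controller stops evaluating here and fires.
    if human_veto is None:
        return "fire"
    # A human in the loop can still refuse, as Arkhipov did.
    return "hold" if human_veto else "fire"

# B-59, October 1962: depth charges read as an attack, no contact with Moscow.
print(decide(under_attack=True, comms_lost=True, human_veto=None))  # fire
print(decide(under_attack=True, comms_lost=True, human_veto=True))  # hold
```

The point of the sketch is the asymmetry: the autonomous branch has no mechanism by which fear, doubt or context can change the outcome once the conditions are met.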

If so, would society allow that? Most AI developers agree that there should always be a human in the loop for these decisions: AI should aid the decision-making process, not act without meaningful human involvement. Situational context is difficult for AI programming to account for. While an AI can process data much faster than a human – and so shorten the OODA (Observe-Orient-Decide-Act) cycle – humans in the loop can account for situational context that the programming misses. In turn, this can provide military commanders with a broader, more nuanced range of options than they might otherwise have.

The issue then becomes one of speed versus context. Computer programmes process data more quickly than humans. To maintain the initiative on the battlefield, an actor must move through the OODA loop faster than their opponent. True at the tactical level, this approach also has utility at the strategic level when viewed through the lens of decision superiority: AI can help an actor gain an advantage by making more informed and accurate decisions than their enemy. If the situation on the ground is rapidly changing, a human will struggle to make informed, timely decisions fast enough to maintain decision superiority over an adversary – all the more so when that adversary is an AI-enabled combatant. To put it simply, there will come a time when having a human in the loop becomes a disadvantage.


A force that hands control to an AI will move through the OODA loop far faster than one that keeps a human in it. Moving more quickly through OODA cycles will allow an actor to seize the initiative in any conflict and maintain it throughout. Based on decision superiority, an actor might decide to take no action, but that remains an AI-informed decision. AI’s ability to move quickly through the OODA cycle means that decisions can always be made on one’s own terms rather than the enemy’s.

If two peer actors found themselves in conflict and both were AI-enabled, the one without humans in the loop would operate faster, because it would not have humans slowing its processes and decision-making. For instance, if a human planning cycle takes twelve hours, how much more quickly can an AI-enabled machine accomplish the task? Given that a machine can complete millions of operations per second, it will generate a plan in a fraction of the time. An AI without a human in the loop might therefore provide such an asymmetric advantage that it will inevitably be adopted by every actor with the means to do so.
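The asymmetry can be made vivid with a back-of-the-envelope sketch. The timings below are invented purely for illustration – real figures would depend entirely on the system and the task – but they show how even a modest human review step per decision collapses the number of decision cycles a force can complete in a fixed window.

```python
# Purely illustrative: the timings are invented, not measured figures.
# Compares how many OODA cycles each side completes in a fixed window
# when one side adds human deliberation time to every cycle.

MACHINE_CYCLE_S = 1.0    # hypothetical: one second per fully automated cycle
HUMAN_REVIEW_S = 600.0   # hypothetical: ten minutes of human review per decision

def cycles_completed(window_s: float, cycle_s: float) -> int:
    """Number of complete decision cycles that fit in the window."""
    return int(window_s // cycle_s)

window = 12 * 3600  # the twelve-hour planning cycle mentioned above

print(cycles_completed(window, MACHINE_CYCLE_S))                   # 43200
print(cycles_completed(window, MACHINE_CYCLE_S + HUMAN_REVIEW_S))  # 71
```

Whatever the true numbers, the shape of the result is the point: the side that removes the human term from every cycle compounds its advantage on every iteration.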

Giving full control to an AI may be too controversial an idea for some people to stomach. However, it is worth remembering Winston Churchill’s comment on nuclear weapons. In 1945, Churchill stated that, “this revelation of the secrets of nature long mercifully withheld from man should arouse the most solemn reflections in the mind and conscience of every human being capable of comprehension.” Despite the doubts, and despite acknowledging that they were capable of “wreaking measureless havoc upon the entire globe,” Britain had its own nuclear weapon seven years later.

AI is the next arms race. All it will take is for one actor to demonstrate AI’s asymmetric advantages for other strategic actors to rush after it as well. Churchill hoped that nuclear weapons could be used for peace despite their destructive potential, and that rationalisation has not gone away. A human out of the loop, and AI’s primacy in future armed conflict, is inevitable.

Moreover, if AI with a human in the loop is a deterrent, how is autonomous AI an even more effective one? The answer largely comes down to a human’s ability to blink – to succumb to our better angels. A machine lacks human empathy: it will dispassionately execute its programme once the criteria have been met, and it will not blink in doing so. The benefit is speed; the loss is the human element of situational context and judgement. The real fear of an AI without a human moderating its control is that it will operate on programming alone, without considering context or emotion – in essence, humanity will not be lucky enough to have a Vasili Arkhipov again.

Nonetheless, AI without humans in the loop makes for the most effective deterrent. If an actor does not know the parameters of an AI’s strike criteria, but does know that the AI will strike without question the moment those criteria are met, how cautious would that actor be in approaching its enemy’s red lines? Bluffing would no longer hold a place in strategic bargaining – and how would that change strategic appetites for risk?

AI is inevitable. The advantages it offers are simply too great to stop its development, but, just as with nuclear weapons, those advantages carry great risk. The imperative, then, is to develop the capability so that it can serve as a deterrent rather than merely an advantage. AI represents a deus ex machina waiting in the wings, ready to intervene – and those same advantages mean that no one can risk provoking it. As such, it represents the logical next step in deterrence.

James McEvoy
Major, Royal Corps of Signals, British Army

Major James McEvoy is a serving Royal Corps of Signals officer who was part of the team behind the British Army’s first deployment of AI into an operational theatre in 2021. He is working on a project to bring AI into service at the tactical level and has a degree in Classics and Philosophy.
