
Could Autonomous Systems Hold Rank?

Editor’s Note: this was the runner-up submission to the Wavell Writes 2019 essay competition.

Introduction

Imagine … It is 2050.  After a quarter century of successful service, Boxer is being phased out.  Its replacement: the autonomous infantry fighting vehicle (IFV). The Section Command variant carries 8 dismounts; individual headsets connect each soldier to the command ‘brain’, which is connected to the company, battlegroup and brigade information networks.  Situational updates are continuously received and assessed, the plan amended to reflect the latest information and personalised orders issued simultaneously to each soldier in the section immediately before the engagement. When the section dismounts, the commander autonomously manoeuvres, providing fire support to ‘his’ soldiers from the turret-mounted cannon, and using visual, thermal and acoustic sensors to lead the soldiers onto the objective.  The autonomous IFV holds the rank of Corporal …

Is this a realistic vision?  The speed of technological development in areas such as machine learning, neural networks and so-called artificial intelligence is astonishing.  If we look back 25 years, hardly anyone had a mobile phone and email was still a novelty. I am quite ready to believe that in another 25 years we could have created electronic entities with a sense of agency.  However, from a military standpoint, the true barriers to machine autonomy and authority are likely to be social rather than scientific – can we overcome the cultural and moral challenges? I suspect my scenario would seem much more plausible (and acceptable?) to many readers if the section commander were the human and the dismounts were robots responding to human control.

For this reason, and because the scientific challenges involve very deep technical issues that are unlikely to be of much interest to the general reader, this essay will focus on those social challenges. Following a brief discussion of the characteristics of rank in a military context, I will assess the extent to which an autonomous machine might meet them, and consider some of the implications if it did. There is, though, one scientific issue I want to cover first.

The Turing Test

The British computing pioneer Alan Turing, famous to many for his codebreaking work at Bletchley Park during WW2, devised a test to determine whether a machine could ‘think’. Essentially, the test involves an ‘interrogator’ holding conversations with another person and a machine, without knowing which is which. The machine passes the test – ie it is considered to be capable of thinking – if the interrogator cannot work out, from the conversation alone, which is the machine. Much scientific and philosophical effort has since gone into refining the definition and identification of a thinking machine; for my purposes, I will assume that a thinking, self-determining machine is possible, without worrying about technical details such as how it would be powered, move or communicate.
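For readers who prefer to see the protocol rather than read it, here is a minimal sketch of the imitation game in Python. The respondents are trivial scripted stubs and every name is my own invention; the point is only the structure of the test, not a claim about how a real one would be run.

```python
import random

class ScriptedRespondent:
    """Stand-in for a hidden conversation partner; a real test would use
    free-text replies from a human or a candidate machine."""
    def __init__(self, replies):
        self.replies = replies

    def reply(self, question):
        return self.replies.get(question, "I'm not sure.")

def run_session(questions, human, machine):
    """One session of the imitation game: hide the two respondents behind
    anonymous slots and record their answers to the same questions."""
    respondents = [human, machine]
    random.shuffle(respondents)  # the interrogator must not know the order
    slots = {"A": respondents[0], "B": respondents[1]}
    transcript = [(q, {label: r.reply(q) for label, r in slots.items()})
                  for q in questions]
    machine_slot = "A" if slots["A"] is machine else "B"
    return transcript, machine_slot

# The machine 'passes' if, over many sessions, an interrogator reading
# only the transcript identifies machine_slot no better than chance.
human = ScriptedRespondent({"What is 2+2?": "Four, last time I checked."})
machine = ScriptedRespondent({"What is 2+2?": "4"})
transcript, answer = run_session(["What is 2+2?"], human, machine)
print(transcript, "machine was in slot", answer)
```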

Rank and Leadership

At its most basic, rank is ‘[a] position in the hierarchy of the Armed forces.’  On this level, almost anything could ‘hold rank’: animal mascots, for instance, are frequently given rank.  However, my interpretation of the ‘exam question’ in this section is whether autonomous systems could lead and command, roles for which we generally use rank as a shorthand in the military.

AFM Command provides the following list of skills and qualities required by a commander:

  • Leadership
  • Understanding
  • Decision Making
  • Vision and Intellect
  • Initiative
  • Judgement
  • Building Relationships
  • The Ability to Communicate
  • Learning from Experience

Arguably, a thinking machine could be better at some of these than a human: understanding and judgement, for instance, could rely on potentially unlimited ‘situational databases’ which the machine could mine for insights.  The speed with which this could be done would aid rapid decision making and allow the machine to take the initiative. Work on neural networks indicates that machines can very successfully learn from experience.
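As a toy illustration of what ‘mining a situational database’ might mean at its simplest, the sketch below ranks invented historical records by similarity to the current situation. The features, records and similarity measure are all assumptions made for illustration; a real system would be vastly richer.

```python
import math

def similarity(a, b):
    """Cosine similarity between two situation feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def most_similar(current, database, k=3):
    """Return the k historical records closest to the current situation."""
    return sorted(database,
                  key=lambda rec: similarity(current, rec["features"]),
                  reverse=True)[:k]

# Invented records: features might encode terrain, relative strength,
# time of day and so on.
history = [
    {"features": [0.9, 0.2, 0.4], "outcome": "flanking attack succeeded"},
    {"features": [0.1, 0.8, 0.6], "outcome": "frontal assault proved costly"},
]
print(most_similar([0.8, 0.3, 0.5], history, k=1))
```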

It is interesting to me that Leadership is listed as a discrete quality, given the volume of literature and the countless hours of military education spent analysing it.  Thinking about leadership traits that a machine might display, we can envisage a ‘heroic’ machine leading fearlessly from the front, setting an example (shooting better, carrying more) and exhorting its comrades to ‘follow me’. As discussed in the previous paragraph, I also see no issues with the intellectual end of leadership – setting intent and so forth.

However, is leadership fundamentally a rational or an emotional quality? Arguably the overriding requirement for building an effective leader-led relationship (as opposed to a command-commanded one) is mutual trust. Would we trust a machine to give us orders or take life-and-death decisions on our behalf? A contributing factor to mutual trust, certainly at lower levels of command, is a sense of shared risk. A section or platoon commander is likely to be under fire with his or her men, and the consequences of that fire are equally dangerous to both. Hence, a (human) leader needs the same physical courage as any other soldier, and can empathise directly with the fear and strain that combat imposes.

Will the thinking machine be able to feel? Fearlessness, after all, is very different from courage. Will a machine leader actually be at the locus of danger, or will the ‘brain’ act remotely through disposable avatars? The limiting factor in all this might be how the future human-machine interface evolves, and the degree to which society in general comes to accept machine autonomy. It may be that we can come to trust machine leaders – after all, we trust machines with our lives in all sorts of circumstances, and I can think of numerous humans who have relied on trust in their abilities rather than trust in their personality to succeed as leaders.

The final issue I want to discuss in this section is accountability.  A commander is accountable for the decisions she or he makes, and might face personal, career or even legal sanctions in certain circumstances.  Could a framework of accountability exist for a machine? If it can think (or is potentially even conscious), what are the ethical considerations for sanctions such as switching it off or reprogramming it?  Alternatively, would it be a better commander by lacking Hamlet’s cowardice of conscience, and therefore immune to considerations of future accountability for present actions?

Even with current technology, the question of accountability has to be addressed.  Current capability retains human-in-the-loop decision making for lethal effects (and the US Army Robotics and Autonomous Systems (RAS) strategy suggests that this will remain the case even for long term development); but who is accountable for the collateral damage from systems such as hard-kill defensive aid suites that must fire when a threat is detected without recourse to human approval?  Going slightly further, it would be feasible, today, to create sentry vehicles with fire control orders to shoot a target with a given set of characteristics in a given arc. In this case, the machine could be considered to hold rank as a private soldier, as there would be some element of discrimination and decision making as to whether it did or did not fire.
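To show how thin that ‘element of discrimination’ could be, here is a deliberately abstract sketch of such a fire control order expressed as code: engage only a contact whose signature matches the order and whose bearing lies within the authorised arc. Every name, signature and threshold is hypothetical, and a real system would sit inside far stricter rules of engagement and safety cases.

```python
def within_arc(bearing_deg, left_deg, right_deg):
    """True if a bearing lies inside an arc, allowing wrap-around at 360."""
    if left_deg <= right_deg:
        return left_deg <= bearing_deg <= right_deg
    return bearing_deg >= left_deg or bearing_deg <= right_deg

def may_engage(contact, order):
    """The sentry's 'decision': signature match AND inside the given arc."""
    return (contact["signature"] == order["signature"]
            and within_arc(contact["bearing_deg"],
                           order["left_deg"], order["right_deg"]))

order = {"signature": "tracked_vehicle", "left_deg": 350, "right_deg": 30}
print(may_engage({"signature": "tracked_vehicle", "bearing_deg": 10}, order))  # True
print(may_engage({"signature": "person", "bearing_deg": 10}, order))           # False
```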

Of course, a majority of rank holders do not exercise command authority, but instead fill a huge range of staff-type jobs where experience and subject matter expertise are important.  It is, perhaps, easier to see the utility of rank-holding machines in this case – as watchkeepers, logistic planners, fire cell coordinators or in any one of the myriad roles in a headquarters where detailed data collection, analysis and forecasting are required.  The advantages are obvious: reductions in space and real-life support requirements for the HQ; continuity rather than day-night shifts; accuracy of operational staff work; and always up-to-date situational awareness. There could even be a ‘hive mind’ type benefit, where simultaneous iterations of individual plans could be generated around a HQ, saving the time required for back briefing and other coordination activity; and ultimately increasing tempo.

In non-operational roles, effort is already underway to ‘automate’ business processes; it does not take a great leap of imagination to foresee autonomous ‘bots’ completing a wide range of the bureaucratic activity that, however mundane, is essential to the running of the MoD.

Wider Implications

Assuming autonomous systems do become a viable military capability, what consequences would ensue? I have already touched on one of the more controversial aspects of an autonomous fighting system, viz the authority to apply lethal force to a living target. On many weapon platforms, target acquisition and fire control are already carried out by a computer, based on a set of targeting algorithms; however, the final decision to fire is made by a human. Conceptually, the transfer of this decision to a machine is a huge step; in practice, in a high-threat, fast-moving combat environment, I wonder how much scrutiny of the proposed target plan would actually be applied, and therefore how big a change this would really represent.

Fratricide (and other targeting mistakes) could become much less likely, as machines could be trained to recognise friendly equipment, uniforms and even faces, and such recognition would not depend on human performance under pressure. If the machine were responsible for fire control orders, that reliable recognition could provide a further control measure against friendly fire incidents.
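That ‘further control measure’ might amount to nothing more than a veto layer wrapped around the engagement decision sketched above. Again, the threshold and the classifier feeding it are assumptions for illustration, not a real design.

```python
FRIENDLY_VETO_THRESHOLD = 0.2  # err heavily on the side of not firing

def cleared_to_fire(engage_decision, friendly_confidence):
    """Fail safe: any plausible 'friendly' classification vetoes engagement,
    regardless of what the discrimination logic decided."""
    if friendly_confidence >= FRIENDLY_VETO_THRESHOLD:
        return False
    return engage_decision

print(cleared_to_fire(True, friendly_confidence=0.05))  # True: cleared
print(cleared_to_fire(True, friendly_confidence=0.60))  # False: vetoed
```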

Thinking more broadly in terms of the freedom of action that a commander might be given, how far might a machine leader be empowered under Mission Command?  At the lower tactical levels, it seems plausible that an autonomous system might give individual orders within a section. Already, many video wargames have algorithms that adapt enemy tactics and responses to the actions of the player.  Given a specific mission (and access to a potentially huge set of historical examples to compare against) it seems reasonable that a machine could devise subordinate objectives and missions and produce a plan for a limited number of soldiers.
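As a toy of what the lowest rung of machine Mission Command might look like, the sketch below decomposes a section mission into subordinate tasks and pairs them with fire teams. The task templates and the assignment rule are invented for illustration; a real planner would reason over terrain, enemy, troops available and time.

```python
# Invented mission templates mapping a mission to subordinate tasks.
MISSION_TEMPLATES = {
    "clear_objective": ["suppress the enemy position", "assault the enemy position"],
    "screen_flank": ["occupy the observation post", "report enemy movement"],
}

def plan(mission, teams):
    """Pair each subordinate task from the template with a fire team."""
    return list(zip(teams, MISSION_TEMPLATES[mission]))

for team, task in plan("clear_objective", ["Charlie fire team", "Delta fire team"]):
    print(f"{team}: {task}")
```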

Does this scale up?  Could a neural network learn to be a battlegroup or brigade commander?  Put another way, even if autonomous systems could hold rank, is there a ceiling above which they would be incapable of operating?  On this question I am much less certain, and I would want to see some empirical evidence before offering a definite opinion.

If there is a limit to rank, is there also a limit to the roles that could be carried out by our autonomous soldier? I doubt there is. Sustainment is already approaching the point where autonomous vehicles could be responsible for a large part of the supply chain. Equipment support robots (an autonomous tow truck?) would be a logical next step. Communications and information infrastructure could also be made autonomous – no need for vulnerable soldiers to set up a remote rebro station. Military engineers have operated remote-controlled equipment for some time; an autonomous armoured bridgelayer or minefield breaching platform is easy to imagine. And lastly, we have the combat component – firepower and movement through the robotic tank. As so much of the effort of a military force is taken up with its own protection and sustainment, the removal of humans from large swathes of military activity would massively increase agility, tempo and resilience.

Finally, I want to address the elephant that has been in the room throughout this discussion.  If we accept that a machine could hold rank, why would we have any human soldiers under its command?  Our army could become a robotic force, overmatching an adversary with indomitable will and an unmatched ability to ignore any casualties it sustained.  In principle, I believe this will eventually become the future of high intensity warfighting – a non-human technological contest. What that would mean in terms of our current approach to war – manoeuvring to destroy the will and cohesion of the enemy – is a question I will leave for another day, although it may suggest an ever-greater reliance on economic superiority to ensure that the technology is available in sufficient quantities to keep humans out of harm’s way. 

However, much of what the Army does is not high intensity warfighting, and relies on the ability of soldiers to interact with people rather than just to kill them. In the same way that tanks and armoured vehicles can represent the wrong ‘posture’ in some scenarios, ‘killer robots’ may create equally negative perceptions. We come full circle back to the question of trust, and perhaps the time it would take for a technologically-disadvantaged population to begin to trust an autonomous system. Therefore, human soldiers, together with the support and infrastructure they need, will not disappear from the orbat, but will rather be joined by new non-human comrades in true human-machine teams.

Conclusion

Currently, the Army’s (and the wider military’s) use of the word ‘autonomous’ might be better replaced by ‘automatic’. In reality, many of the candidate technologies displayed and tested on Ex AUTONOMOUS WARRIOR 18 were pre-programmed or remote-controlled. This is not to deny the operational utility of these technologies, nor the benefits of the soldier-inventor interaction that AW 18 allowed. However, truly autonomous fighting systems could represent a discontinuity in the conduct of war – a genuine revolution in military affairs.

Moving forward, as truly autonomous systems become available and accepted, we will see a gradual transition from human augmentation to human replacement. We must use the time while the technology matures to make sure we can manage the cultural and moral challenges; otherwise our adversaries will make better use of this potential revolution than we do.

Major Andy Bell RLC

Andy commissioned into the RLC in 1998, and served in Germany, Bosnia, Northern Ireland, the Falkland Islands and Iraq in various transport and port operations roles. He also served on the staff of the DLO at the time of its merger with the DPA to form DE&S. He left the Army in late 2007, and subsequently held a number of project and programme management jobs in the public and private sectors. He rejoined the colours in 2017, returning to DE&S in an FTRS appointment.
