
Artificial Intelligence Primer: Time to understand AI

The recent “AI washing” surge in Defence is somewhat misguided, infuriatingly repetitive and, frankly, a bit shallow.

This article frames the differences between forms of Artificial Intelligence, examines the difficulties of incorporating it at scale within Defence, and offers some suggestions for achieving behaviour change in favour of an organisation genuinely enabled by AI.  It is a simple guide to what you need to know.

Narrow and General Artificial Intelligence

There is a stark difference between General AI (which doesn’t yet exist) and Narrow AI (which does).  Narrow AI is not a capability and not an end state; it is a tool, a way and a means to achieve an end.  Narrow AI is ultimately a mathematical method for prediction: it can give you the most likely answer to any question that can be answered with a number.  It is statistics on steroids.
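
To make “statistics on steroids” concrete, below is a minimal sketch of the kind of numeric prediction Narrow AI performs. The maintenance figures and the 950-hour query are invented for illustration; the method is ordinary least-squares line fitting, nothing more exotic.

```python
import numpy as np

# Invented data: engine running hours vs. observed faults per month.
hours = np.array([120, 340, 560, 800, 1100], dtype=float)
faults = np.array([1, 2, 4, 6, 9], dtype=float)

# Fit a straight line by least squares - statistics, nothing more.
slope, intercept = np.polyfit(hours, faults, deg=1)

# "Most likely answer to a question that can be answered with a number":
# predict the expected fault rate for an engine at 950 running hours.
predicted = slope * 950 + intercept
print(f"Expected faults/month at 950 hours: {predicted:.1f}")
```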

General AI is the Hollywood AI: the sentient robots, the consciousness inside computers where machines “think” like humans.  Outside the tech world, this distinction is often misconstrued.

General AI should be a moonshot investment for the UK, working alongside the US and similarly aligned democracies.  This technology is genuinely seismic: not a change in goalposts but a game changer.  Whoever gets to General AI first gets to embed their morals, ideals, and ethical codes, and it is most certainly a strategic capability which will shape the course of great power competition (GPC).

The Singularity

The “Singularity” is a terrible and exciting prospect, by which ultra-intelligent machines design ever more intelligent machines and therefore become the last invention humans need ever make.  David Chalmers, a leading professor of philosophy on the subject, explains how the prospects for humans in a post-singularity world reduce to four fairly grim options: extinction, isolation, inferiority or integration.  The best of these is integration, where we would need to gradually “upload” our brains to computers.1

Compared to Narrow AI, General AI will need to operate safely across thousands of contexts, including contexts not envisioned by its designers or users, and even contexts no human has yet encountered.  A great fear is that without sufficient thought and effort directed towards the ethics of this technology, human existence could genuinely lie in the balance.

Nick Bostrom is a notable philosopher and author on topics including human enhancement ethics and super-intelligence risks.2 Bostrom posited a possible solution: first build a Narrow AI which, when it executes, becomes more ethical than a human could ever be.  If machines are to be placed in a position of being stronger, faster, more trusted, or smarter than humans, then the field of machine ethics must commit itself to seeking human-superior (or at least human-equivalent) niceness.

The AI Race

The General AI race will likely be between Silicon Valley and China; the asymmetry is deliberate.  Under China’s civil-military fusion doctrine, the state may, at any moment, seize the reins from its market-driven leading tech firms and commandeer the technology for military purposes in the interests of the Communist Party.  In contrast, Silicon Valley is notoriously untethered from the US Government: just look at the employee activism at Google in response to Project MAVEN.3

Narrow AI is already everywhere in our personal technological lives and is already used in several military applications worldwide.

The truth is, any modern computer can make a decision faster than a human.  What is useful about Narrow AI is that it can rapidly process and consider many variables and the second-, third- and perhaps fourth-order effects of a course of action.  It can calculate various outcomes and present determinations to a human, who can then make a genuinely well-informed decision.
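
As a toy illustration of that multi-variable calculus, the sketch below scores three hypothetical courses of action against weighted criteria and surfaces a recommendation. Every weight and score is invented; the point is that the machine computes the trade-offs while the human retains the decision.

```python
import numpy as np

# Invented decision support example: three courses of action (COAs)
# scored 0-1 against three weighted criteria (speed, risk, cost).
criteria_weights = np.array([0.5, 0.3, 0.2])   # speed, risk, cost
coa_scores = np.array([
    [0.9, 0.4, 0.6],   # COA 1
    [0.6, 0.8, 0.7],   # COA 2
    [0.7, 0.6, 0.9],   # COA 3
])

# Weighted sum of each COA's scores: the "calculate various outcomes" step.
totals = coa_scores @ criteria_weights
best = int(np.argmax(totals))

# Present a determination to the human decision-maker, not an action.
print(f"Recommended: COA {best + 1} (score {totals[best]:.2f})")
```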

Defensive AI

AI for defensive purposes is a no-brainer.  The argument is that in order to win, one must make a greater number of better decisions than an adversary.  When reacting to an attack, a fast response is key, which is where an autonomous defensive system is advantageous.  Autonomous defensive systems have had the best traction in defensive cyber use cases, where they have proven highly effective at defending IT systems, which are (often wrongly) considered to be more benign environments.

Forecasts of future conflicts tend to include the use of swarming devices to blind and confuse an adversary’s equipment.  In theory these devices would be autonomous and could “learn” from immediate context and cross-domain inputs.

Allergic Reactions

However, there seems to be a societal allergic reaction towards autonomous systems charged with “lethal” force.  Yet the reality is that defensive autonomous weapons have been in service for decades.  The PHALANX Close-In Weapon System (CIWS), designed by General Dynamics in 1978, is capable of automatically firing at an incoming hostile target at a rate of over 3,000 rounds per minute.  The system does not recognise friend-or-foe signals; it collects data in real time and makes a fair assumption that if something looks like an incoming missile and moves like an incoming missile, it is probably an incoming missile.  It then engages autonomously, or rapidly recommends engagement to the operator, all in the spirit of defence.
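
Below is a toy sketch of that “looks like a missile, moves like a missile” logic. The thresholds and field names are invented; real CIWS track classification is far more sophisticated, but the shape of the rule (real-time track data in, engage-or-recommend decision out) is the same.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Invented real-time sensor track, not a real CIWS data format."""
    speed_ms: float    # speed in metres per second
    altitude_m: float  # altitude in metres
    closing: bool      # is the range to us decreasing?

def looks_hostile(track: Track) -> bool:
    # Invented thresholds: fast, low, and inbound reads as an incoming missile.
    return track.closing and track.speed_ms > 250 and track.altitude_m < 100

contact = Track(speed_ms=290.0, altitude_m=15.0, closing=True)
if looks_hostile(contact):
    print("Engage autonomously / recommend engagement to the operator")
```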

Perhaps the concern is over the intent of use.  Many would accept the requirement for an algorithm to detect and destroy an incoming missile before it impacts a ship and sinks it.  Many of those same people are averse to an algorithm autonomously selecting targets for an offensive operation even though we know the algorithm to be more accurate, more precise, and less prone to error than its trained human counterpart.

Algorithms have moved on since the 1980s but still fundamentally rely on good-quality input data to give a good output.  However, information management is outrageously boring.  It doesn’t recruit people, it doesn’t fill people with motivation, and it certainly doesn’t get people excited in the morning.  But it is central to the effective functioning of Narrow AI systems.

The Ministry of Defence has an abundance of data: old data, going back to when we first started collecting information about tides and first published military doctrine.  Erwin Rommel’s famous quote, “The British have some of the best doctrine in the world; it is fortunate that their officers do not read it”, springs to mind.  We have the information, but we need the insights.

Reinforcement learning (a form of machine learning) technology4 is rapidly getting better at sorting through and deriving insights from large volumes of unstructured data, as evidenced by examples like AlphaDogfight5 and AlphaStar.  Reinforcement learning algorithms are likely to become an incredibly important part of the MOD’s information management strategy in the near future, drawing interesting and meaningful needles out of the haystacks in our archives.
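
For readers unfamiliar with the technique, the sketch below is a minimal tabular Q-learning loop on a toy problem (a five-cell corridor with a reward at the far end; all parameters invented). It shows the core idea: the algorithm is told only how well it did, and learns a policy by trial and error rather than from labelled examples.

```python
import random

# Toy Q-learning: learn to walk right along a 5-cell corridor to a reward.
# States 0-4; actions 0 = left, 1 = right. All parameters are invented.
N_STATES, GOAL = 5, 4
q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(200):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: usually exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.randrange(2)
        else:
            action = q[state].index(max(q[state]))
        next_state = max(0, min(GOAL, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == GOAL else 0.0
        # Standard Q-learning update, driven by the reward signal alone.
        q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
        state = next_state

print("Learned policy:", ["right" if row[1] > row[0] else "left" for row in q[:GOAL]])
```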

Information Superiority

The MOD has several hurdles to jump before claiming a place at the top of the information pyramid and gaining information superiority.

It will be difficult to motivate the sailors, soldiers, and airmen of our military to get on board with the implementation of AI into their duty roles.  This is because those same people are endlessly frustrated and disappointed by the poor investment in the basic “supporting” systems they rely on to do their jobs: their online HR system, their travel booking system, their financial auditing, their security clearance databases, and so on.

The MOD’s culture must change to accommodate the newfound importance of our data.  Those who deal with significant volumes of data (almost everyone in the military) must be schooled in why, where, and how to store it so that it is useful later.

Cultural Change

Changing cultures is notoriously difficult.  So where should the MOD start?

The strategy for incorporating effective Narrow AI usage at scale within the MOD could be similar to the strategic culture developed for skill-at-arms and weapon-handling competencies.

In the UK Armed Forces there are hundreds, perhaps thousands, of individuals who you could safely guarantee will never wield an assault rifle (SA80) in anger at anyone.  Ever.  These people most likely joined their respective corps, branches and trades knowing this.  But they all understood the need to learn about individual weapons and accepted the requirement to maintain their weapon competencies.

Why?  Because it is part of who we are as an organisation.  It is cultural and symbolic; it sets an ordinary clerk apart from a clerk working at Unilever.  It is a way of demonstrating discipline, skills, courage, and more.

A top-down, bottom-up and thoroughly pervasive approach is required.

From the top: “cyber”, in its multiple flavours, must be funded and cohered properly, and direction and strong leadership are required.

From the bottom: Defence people at all levels should be given opportunities to up-skill and achieve transferable qualifications recognised by the civilian sector.  Fluency in information management practices could become an expected attribute, alongside physical fitness.  To fully integrate, joint physical exercises with high levels of interoperability will allow the identification of big problems and blockers, forcing solutions and capability adaptations that move Defence forward.

The UK is not alone in needing change

AI is in vogue in US Special Operations Forces (SOF), as it is in most places.  Although better resourced than its UK counterpart, the US Department of Defense is not immune to AI culture problems.  The Chief Technology Officer network within SOCOM6 advocates for an evidence-backed culture change in which quick wins are delivered rapidly to those who will feel them most.

Algorithmic “capability” development and acquisition cannot follow the same model as current defence acquisition.  These technologies iterate monthly, if not weekly, and the MOD must forge the most direct route from whiteboard to pilot for these tools.  Our toolbox must be kept evergreen.

Often, those in a position to make some of the necessary cultural, process, and priority changes do not understand the scope of the necessary pre-work or how the technology works.  These individuals are rightly charged with a breadth of responsibility for which they cannot be expected to have an intimate understanding of the programming, but they should know about the newest and most promising developments, like reinforcement learning and semi-supervised learning.

Recruiting high-quality tech talent is hard.  The US DoD and the UK MOD (among others) go about it in entirely the wrong way.  But it is an easy problem to reverse.  The tech talent of today want to be founders and inventors, not cogs in a machine.  They thrive on high levels of autonomy, not on taking orders.  For many of them, working at Google sees them regarded as a class A citizen, rather than the class C or D tech citizen the military attracts.  Technically minded people want to work somewhere the technical talent is idolised, not treated as a back-office function.

You don’t go and work for Goldman Sachs if you want to be an engineer; you go there to work as a banker.  While upward mobility for a technical person in the military or within government is heavily capped, there will be a lack of enthusiasm in the recruiting office.  I cannot dispute that there are some exceptionally talented programmers already serving in the military.  But how many of these people are being employed, developed, and retained effectively?  When I hear Admirals, Generals, and Air Marshals proclaiming that they “have the best tech talent in the world” working on defence problems, referring to their green or blue suiters, they are not only wrong, they are entirely deluded.

However, Defence has something that Silicon Valley does not: access to some of the most interesting, complex, relevant and wicked problems that exist.  Capitalising on these problems by enticing high-quality talent into Defence for short secondments and surging them to the problem is likely to be far more appealing than a complete change of career direction for a digital native with their eyes set on a powerful career in the technology sector.

Cultures do shift, and the recognition from multiple Special Forces communities that the door kickers may well move from being supported by tech functions to supporting them is encouraging.  Allowing cyber reservists greater autonomy and encouraging them to attract and recruit their own teams from the civilian tech world (within the bounds of classification) could be monumentally beneficial for the MOD.

Significant portions of our defence budget are funnelled into the development of technologies aimed at defeating or matching our adversaries, for deterrence or in preparation for war.  This activity is, in itself, provocative.

“Getting one’s house in order” by focusing on core information management practices, making our data ready for future “weaponisation”, can never be considered an act of war.  It is unlikely to be misconstrued or miscommunicated as an aggressive act, making it a particularly useful non-confrontational approach in the current “grey zone”, where fragile egos lie in the balance.

Kate Turner

I am a serving Communications and Electronics Engineering Officer in the Royal Air Force. I have spent the last year as a visiting researcher at Yale University studying the intersection between AI, GPC and Climate Change. Prior to this research I served as the UK Liaison to the Chief Technology Officer at the US DoD Joint Special Operations Command - learning about the upcoming technologies that will reshape defence, and how to effectively introduce them.

Footnotes

  1. David Chalmers, Australian professor of philosophy and cognitive scientist; author of “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies 17 (9-10), 2010.
  2. Nick Bostrom, Swedish philosopher and author of “The Ethics of Artificial Intelligence”, Cambridge Handbook of Artificial Intelligence (2011).
  3. Project MAVEN is a Pentagon project with an objective to automate processing, exploitation and dissemination of massive amounts of full motion video collected by intelligence, surveillance and reconnaissance assets in operational theatres.
  4. Reinforcement learning is just one example.  Many other forms of ML, such as semi-supervised and unsupervised learning and neural network approaches, are likely to be hugely impactful for Defence.
  5. AlphaDogfight defeated an F-16 pilot in simulation; a similar algorithm defeated an F-16 pilot flying with a virtual reality headset and simulator controls, winning 5-0.
  6. US Special Operations Command.
