AI in Weapon Systems: Proceed with Caution

Artificial intelligence (AI) has spread into many areas of life, and defence is no exception. There is a growing sense that AI will have a major influence on the future of warfare, and armed forces worldwide are investing heavily in AI-enabled capabilities. Despite these advances, fighting is still largely a human activity.

Bringing AI into warfare through AI-enabled autonomous weapon systems (AWS) could revolutionise defence technology, and it is one of the most controversial uses of AI today. There has been particular debate about how autonomous weapons can comply with the rules and regulations of armed conflict, which exist for humanitarian purposes.

The Government aims to be “ambitious, safe, responsible”. While we of course agree in principle, aspiration has not yet been matched by reality. In our Report we therefore make proposals to ensure that the Government approaches the development and use of AI in AWS in an ethical and legal way, providing key strategic and battlefield benefits while securing public understanding and democratic endorsement. “Ambitious, safe and responsible” must be translated into practical implementation.

Public confidence

First, the Government should seek, establish and retain public confidence in, and democratic endorsement of, the development and use of AI in autonomous weapons. It is clear from media coverage of our inquiry that there is widespread interest in and concern about the use of AI in autonomous weapons. Achieving democratic endorsement will have several elements: increasing public understanding of AI and autonomous weapons, enhancing the role of Parliament in decision-making on autonomous weapons, and retaining public confidence in their development and use.

International engagement

Second, the Government should lead by example in international engagement on the regulation of AWS. The AI Safety Summit was a welcome initiative, but it did not cover defence. The Government must include AI in AWS in its proclaimed desire to “work together in an inclusive manner to ensure human-centric, trustworthy and responsible AI that is safe” and to support “the good of all through existing international fora and other relevant initiatives”.

The international community has been debating the regulation of AWS for several years. The outcome could be a legally binding treaty or non-binding measures clarifying the application of international humanitarian law; each approach has its advocates. Whatever form it takes, the key goal is to accelerate efforts to achieve an effective international instrument.

A key element in this will be prohibiting the use of AI in nuclear command, control and communications. On the one hand, advances in AI have the potential to make nuclear command, control and communications more effective. For example, machine learning could improve the detection capabilities of early warning systems, make it easier for human analysts to cross-analyse intelligence, surveillance and reconnaissance data, and improve protection against cyberattacks.

On the other hand, the use of AI in nuclear command, control and communications also has the potential to spur arms races or to increase the likelihood of states escalating to nuclear use – whether intentionally or accidentally – during a crisis. The compression of decision-making time that AI brings may heighten tension, miscommunication and misunderstanding. Moreover, an AI tool could be hacked, its training data poisoned, or its outputs treated as fact when they are merely statistical correlations; any of these could lead to catastrophic outcomes.

Defining AWS

Third, the Government should adopt an operational definition of AWS. Surprisingly, the Government does not currently have one. The Ministry of Defence has stated it is cautious about adopting one, both because “such terms have acquired a meaning beyond their literal interpretation” and because of concerns that an “overly narrow definition could become quickly outdated in such a complex and fast-moving area and could inadvertently hinder progress in international discussions”. However, we believe it is possible to create a future-proofed definition. Doing so would aid the UK’s ability to make meaningful policy on autonomous weapons and to engage fully in discussions in international fora.

Human control

Fourth, the Government should ensure human control at all stages of an AWS’s lifecycle. Much of the concern about AWS focuses on systems in which autonomy is enabled by AI technologies, with an AI system analysing information obtained from sensors. It is essential to have human control over a system’s deployment, both to ensure human moral agency and to secure legal compliance. Our absolute national commitment to the requirements of International Humanitarian Law must buttress this.

Procurement

Finally, the Government should ensure that its procurement processes are appropriately designed for the world of AI. We heard that the Ministry of Defence’s procurement lacks accountability and is overly bureaucratic, and in particular that it lacks capability in relation to software and data, both of which are central to the development of AI. This may require revolutionary change. If so, so be it; but time is short.

Conclusion

Overall, we welcome the fact that the Government has recognised the role of responsible AI in its future defence capability. AI has the potential to provide key battlefield and strategic benefits. However, in doing so, the Government must embed ethical and legal principles at all stages of design, development and deployment. Technology should be used when advantageous, but not at unacceptable cost to the UK’s moral principles.

Lord Lisvane

Lord Lisvane’s full title is The Lord Lisvane KCB DL. He is a Member of the House of Lords and Chair of the AI in Weapon Systems Committee.
