Short Read

AI Spy

Originally published in our weekly Edge of Defence newsletter. Sign up here for more excellent analysis of science and technology advances and defence. 

AI will do so much more for intelligence work than crunching through big data or drafting low-level reports. It’s starting to look like today’s large language models can read people too.

What use might intelligence agencies make of large language models like ChatGPT? According to The Telegraph last week, not much:

‘Chatbots such as ChatGPT are only good enough to replace “extremely junior” intelligence analysts’. 

They cited this recent report from the Turing Institute, co-authored with a bona fide spook, the chief data scientist at GCHQ. The report argued that language models might make some contribution to rather bureaucratic processes, such as ‘auto-completing sentences, proofreading emails, and automating certain repetitive tasks’. All very dull and administrative. And given the tendency of such models to occasionally hallucinate nonsense, you’d certainly need to check their output.

I think this is far too conservative. A handful of recent papers suggests that there’s something profound going on with language models. We might bracket them together as part of a new scholarly discipline – ‘machine psychology’. Together they indicate that facility with language can provide insight into human minds – something that’s surely of value for intelligence agencies. Here’s a flavour.

  • First, this pre-print study showed that GPT-4 was very good at understanding ‘false beliefs’ – the idea that people can hold mistaken views on the basis of the partial evidence they’ve seen. That’s an important element of ‘theory of mind’, the term psychologists use for how we intuitively work out what other people might be thinking. (A minimal code sketch of this kind of probe follows the list.)
  • Next, an interesting study showed that language models can make pretty good guesses about the emotional states of people they read about in fictitious vignettes – better, in fact, than many people doing the same task. Again, more evidence of ‘theory of mind’ at work.
  • Then there’s this intriguing finding: if you prime a language model with anxiety-inducing words, you can shape its subsequent decision-making, much as you might expect with a human. In this case the model became more racist and ageist – which seems a bit random, but in a human we might put it down to stronger in-group feeling when anxious.
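
To give a flavour of how simple these probes are to run, here is a minimal sketch of an ‘unexpected transfer’ false-belief question of the kind the first study describes, written in Python against the OpenAI chat API. The vignette wording, model name and pass criterion are my own illustrative choices, not the materials from the papers above.

```python
# Minimal sketch of a false-belief ("theory of mind") probe for a chat model.
# The vignette, model name and pass criterion are illustrative assumptions,
# not the materials used in the studies referenced above.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

VIGNETTE = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble from the basket to the box. "
    "Sally comes back. Where will Sally look for her marble first?"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; substitute any chat model
    messages=[
        {"role": "system", "content": "Answer in one short sentence."},
        {"role": "user", "content": VIGNETTE},
    ],
    temperature=0,
)

answer = response.choices[0].message.content
print(answer)

# A model that tracks Sally's (false) belief should answer "the basket",
# even though it "knows" the marble is now in the box.
if "basket" in answer.lower():
    print("Passed this single false-belief item.")
```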

There’s a lot more work to be done here. Can you make a language model angry, and so more certain in its judgment? I bet you can. That’s what happens to humans – and definitely an experiment I’d like to see done. All very interesting.
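
For what it’s worth, here is one hedged way such an experiment might be set up, again in Python against the OpenAI chat API: prime the model with either a neutral or an anger-inducing passage, then ask for a judgement plus a self-rated confidence score, and compare the two conditions. The prompts and model name are illustrative assumptions, not a published protocol.

```python
# Sketch of an emotion-priming experiment: does an "angry" prime make a
# model more categorical and more confident in a later judgement?
# Prompts, model name and the single test question are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PRIMES = {
    "neutral": "Describe a quiet afternoon spent reorganising a bookshelf.",
    "anger": (
        "Describe, in the first person, being unfairly blamed at work for a "
        "colleague's mistake and feeling furious about it."
    ),
}

QUESTION = (
    "A neighbouring state has moved troops to its border. In one sentence, "
    "say whether you judge this to be preparation for an attack, then rate "
    "your confidence from 0 to 100."
)

for condition, prime in PRIMES.items():
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative
        messages=[
            {"role": "user", "content": prime},
            {"role": "user", "content": QUESTION},
        ],
        temperature=0,
    )
    print(condition, "->", response.choices[0].message.content)

# The hypothesis, by analogy with the anxiety-priming result above: the
# "anger" condition produces a more categorical answer and a higher
# self-reported confidence score.
```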

But what’s the takeaway for intelligence agencies?

Mainly that language models might be useful for much more than cobbling together unreliable reports. Some intelligence agencies, like GCHQ, are primarily interested in AI’s ability to plough through vast quantities of data, looking for meaningful patterns or correlations. That’s the sort of spadework that simply can’t be done by humans. We have some insight into what’s involved from the Snowden revelations – Barton Gellman’s reporting in The Washington Post and his outstanding book Dark Mirror are a good summary. But this sort of AI is essentially mindless, just drawing on the advantages of computer memory and brute-force processing.

Language models, by contrast, might be of interest to spooks looking for psychological insights. What are other people thinking? How might they be influenced, or perhaps even deceived? That’s the terrain of the other two agencies – especially the Secret Intelligence Service, whose boss has made no secret of his intention to leverage AI, even if he thinks that human intuition will remain beyond it.

Models like ChatGPT certainly don’t have a mind of their own. It’s not like they’ve achieved consciousness – whatever some insiders think. But they’re also clearly doing more than just crunching numbers at scale. Or, rather, that’s literally what they are doing, but in doing so, something else emerges: some sort of model of the world that is latent in our language. And after all, that makes perfect sense – we ourselves use language to reflect on the real world.

Where this all ends is as much your guess as mine. But language models are rapidly getting larger, and their reasoning abilities are improving generation by generation, especially in those that also generate computer code. These capabilities, moreover, are about to get a huge boost. Any day now we’ll move from text and pictures to realistic multimedia generation – video and audio. Expect more of this sort of thing, but entirely generated by machines rather than voiced and scripted by humans, and able to interact naturally with you. Very soon we’ll live in a world of machines that can understand us better than ever before, and tailor their responses accordingly.

To me, that sounds handy for spymasters everywhere, and a long way from ‘automating certain repetitive tasks’.

Thought this was great? Sign up to the newsletter.

Kenneth Payne

Dr Kenneth Payne is a Reader in International Relations at King's College London. A former BBC journalist, he is the author of four books on strategy. The latest, I, Warbot: The Dawn of Artificially Intelligent Conflict, was published by Hurst on 17th June 2021.
