Wavell Room
Short Read

Using ChatGPT

The future of work fundamentally changed in November 2022 with the release of the AI language generator ChatGPT. ChatGPT and its successors will change how we think, write, and communicate. Initial reports suggest that 80% of US jobs will have at least 10% of their work tasks affected by language models, while 19% will see at least 50% of tasks impacted.1 This form of AI will become a general-purpose technology with significant implications across every industry. We in Defence have a choice: embrace the change, fight against it, or try to blindly ignore it. Embracing systems like ChatGPT is the only reasonable option; we can use this technology to our advantage. It is worth noting that by the time this article was completed, several competitors had entered the market, including picture and video generators. The pace of this technology is such that by the time you read this article, the capabilities I describe may already be old news. They may even already be embedded in Microsoft Office.

What is ChatGPT?

ChatGPT is a free online tool created by OpenAI, an American AI research laboratory, and arguably the first time such a technology has been democratised. GPT stands for Generative Pre-Trained Transformer: a family of language models trained on large swathes of data and designed to generate human-like text in a conversational manner. Given a prompt, ChatGPT can generate text that is relevant and coherent, making it an intriguing prospect for supporting document drafting or refining writing, freeing up more time for us to do the thinking that we are paid to do. At the time of writing, ChatGPT runs on the third-generation GPT series, with the paid version operating the fourth.

You may have seen mention of ChatGPT in the news, in a recent Wavell Room article on Artificial Intelligence in Defence, or even seen endless, abusive poems generated over WhatsApp.  Whatever your experience or view of text-generating AI, it is difficult to argue that it is not a powerful tool.  However, there is little evidence that it is being taken seriously by the MOD.  Developing and understanding this tool and how we can safely, legally, and ethically use it will allow us to make the most of it and ensure that our people do so in a way that does not compromise the organisation.

To do what?

ChatGPT is an excellent brainstorming tool that can generate a large number of ideas which we can then choose to investigate in more depth. Take, for example, my search for a historic battle to understand the effect of logistics on war. ChatGPT generated these examples:

    1. The Battle of Stalingrad (1942-1943) – This was a major battle during World War II, fought between the Germans and the Soviets. The German army was ultimately defeated due to their inability to maintain their supply lines, resulting in a lack of food, ammunition, and medical supplies for their troops.
    2. The Battle of Gettysburg (1863) – This battle during the American Civil War was significant in that it demonstrated the importance of logistics in modern warfare. The Union army was able to secure victory largely due to their superior supply chain, which allowed them to keep their troops fed and equipped with ammunition.
    3. The Battle of Waterloo (1815) – This famous battle was fought between the French army, led by Napoleon Bonaparte, and the British and allied forces under the command of the Duke of Wellington. The French army suffered from poor logistics, with their supply lines being cut off by the British and their troops suffering from hunger and fatigue as a result.

A simple prompt of “continue” would have the system return another set of options.  As it is a conversational system, I could then continue to delve into these individual options by posing questions or challenges, even asking it to respond with the thoughts of the military leaders of the time.  While this is a simple example, you can see how this could save significant time in research, especially if you were to then use the tool to dive into the subject.  It is a quick and effective way to get the creative juices flowing, and more importantly, it will do so with the diversity that the MOD arguably lacks.
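For those more comfortable with code than the web page, the conversational loop described above can be sketched in a few lines of Python. This is illustrative only: the article describes the free web interface, whereas programmatic access goes through OpenAI's paid API, and the function name and prompts here are my own invention, not part of any MOD tooling.

```python
# A minimal sketch of conversational prompting: the running history is what
# lets a bare follow-up like "continue" work, because every turn is kept and
# resent alongside the new prompt.

def add_turn(history, role, content):
    """Append one conversational turn (a role/content pair) to the history."""
    return history + [{"role": role, "content": content}]

# Seed the conversation with the original research question.
history = add_turn([], "user",
                   "Suggest historic battles that show the effect of "
                   "logistics on war.")

# The model's reply would come back as an "assistant" turn...
history = add_turn(history, "assistant",
                   "1. The Battle of Stalingrad (1942-1943) ...")

# ...after which a simple follow-up keeps the same context in play.
history = add_turn(history, "user", "continue")

# With the API, the whole history would be sent on every call, e.g.:
# openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
```

The point of the sketch is that "memory" in these systems is nothing more than resending the accumulated transcript, which is why follow-up questions and challenges can build on earlier answers.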

ChatGPT is also a powerful tool for improving and altering the style of a piece of writing. It can be prompted to write text from scratch in a given style, as the recent Wavell Room article was, or be given a body of text and asked to alter it. I have used it to make (non-work) emails “more persuasive” and to write presentation scripts from key points. While I did not conduct controlled experiments to gauge the impact, it produced wording I would not have brought together myself. This can easily extend to improving O/SJARs and commendation citations. AI tools can level the playing field so that chances of promotion no longer rely so heavily on the writing ability of a 1RO.
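The style-rewriting use case boils down to wrapping the original text in an instruction. The template below is a hypothetical example of such a prompt, not an official format or a fixed feature of ChatGPT; any instruction in this shape will do.

```python
# An illustrative prompt template for rewriting text in a requested style.
# The wording of the instruction is hypothetical; ChatGPT accepts free text.

def rewrite_prompt(text, style="more persuasive"):
    """Wrap a body of text in an instruction asking for a styled rewrite."""
    return (f"Rewrite the following text to be {style}, "
            f"keeping the meaning unchanged:\n\n{text}")

print(rewrite_prompt("Please consider approving my leave request."))
```

Pasting the returned string into ChatGPT (or sending it via the API) yields the reworded text, which the author then reviews and edits rather than accepts wholesale.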


There are some technical concerns around systems like ChatGPT. For one, it has been trained on a body of data that is not openly available and that stops in 2021. It is difficult, therefore, to know what may be missing, either intentionally or as a symptom of a rapidly moving world. Secondly, we are not, at the time of writing, privy to where the data we input goes. There will (hopefully) be someone reading this who can take a more educated guess than I, but suffice it to say there will be risks around what information MOD personnel should enter. Finally, as with all programmes, it will be subject to the biases of those who wrote the code. There are examples of the system refusing to generate arguments on one side of the political spectrum but not the other; the ethical algorithm leans the way of the programmer.

However, the primary concern for many will be the ethical issues around its use. To what extent should an annual report be generated using AI? Where is the balance between time-saving and doing a subordinate justice? If the writer were so inclined, a year of work could be distilled into a couple of paragraphs in minutes with minimal thought and effort. We all deserve better than that.

The knee-jerk reaction, therefore, may be to ban the use of these systems in the work environment. That would be a mistake, and contrary to the move from “reluctant follower” to “fast follower” that should underpin our approach to technology adoption. The momentum behind these language models will be ferocious, and we should move to adopt them now.

How to do it? A three-phase plan

I recently participated in a workshop with an organisation in a sector where this technology has already begun to force a fundamental shift in how business is done: academia. The university with which I discussed the impact has a three-phase plan for how to react, noting that some in the sector call for the technology to be banned outright, a response that is as unhelpful as it is outdated. They are currently in the awareness phase, ensuring that students and staff know of the existence of ChatGPT, how it can be used, and the potential dangers.

Phase two will be to provide guidance. The university will release guidelines on the system’s use and the circumstances in which it is permitted. These will state how to be transparent in its use to avoid plagiarism, and how to ensure that proper scrutiny is applied to work that uses it. The two phases are planned to span a minimum of six months, giving the rapidly changing market time to “settle” before the final phase: producing policy.

The university understands that their people will make mistakes in using this technology in the early phases through a lack of knowledge.  They will provide ongoing support and they will update the guidance regularly; weekly round tables are planned for the first two months.  We can emulate this model in the MOD to avoid falling behind.


Language generators such as ChatGPT are tools that can reduce our cognitive burden, streamline our work, and free people to do the complex thinking that we are paid to do. What we must avoid is outsourcing that thinking to the system; its use should be carefully guided. AI is here to stay whether we want it or not, and banning its use at the tactical level is not the answer. The MOD must create awareness, produce guidance, develop policy, and underpin all of this with consideration for security and ethics.

Cover photo by ilgmyzin on Unsplash

Alex Shand

Alex is a serving REME Officer and Chartered Engineer with 18 years service. He currently works in Capability Development in the Futures Directorate of Army HQ.


  1. Eloundou et al., “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models” (2023)
