Wavell Room

The Science of Cyber and the Art of Deception

The cyber domain and the concept of deception are deeply linked.  Deception has been fundamental to the effectiveness of cyberattacks since their inception and is increasingly important for cyber defence.  Cyber, once fully integrated into operations, will be a crucial tool in the hands of the military leader seeking to mount an effective deception operation.  The mystery and opacity surrounding the world of cyber open the door to stratagems designed to surprise and outwit an enemy, and this complex domain creates new opportunities for tactical ruses, allowing machines to mislead humans.  With trickery and cyber so intertwined, it is no wonder that effective hackers and deceivers share common qualities.

All cyber is based on deception

Hacking is, by its very nature, a ruse designed to use a human or a machine to mislead another machine.  In the late 1960s, hacking pioneers discovered that a toy whistle given away free in Cap'n Crunch cereal packets produced a tone at exactly 2600Hz, the frequency used for switching signals on AT&T phone lines.  Exploiting a flaw in the AT&T network, which carried voice and switching signals on the same line, these early 'hackers' were able to commandeer phone lines free of charge.  This was the beginning of the 'phreaker' era (a contraction of 'phone' and 'freak'), and it was clearly about misleading the machine to achieve a technical feat.

Most of a hacker's work involves making a target device behave unexpectedly by transmitting corrupted data or modifying its environment.  A hacker may feed a program false memory addresses so that it executes their commands with privileged access (a buffer overflow), reveal confidential data by injecting requests into fields not intended for that purpose (SQL injection), or saturate a service with false requests until it can no longer respond (DoS, or denial of service).
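The SQL injection mentioned above can be shown in a minimal, self-contained sketch (the table and the attacker's input are invented for the example): pasting user input straight into a query string lets an injected OR clause rewrite the query's logic, while a parameterised query treats the same input as a harmless literal.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", "s3cr3t"), ("bob", "hunter2")])

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is spliced into the SQL string, so the injected
# OR clause makes the WHERE condition true for every row.
vulnerable = f"SELECT secret FROM users WHERE name = '{attacker_input}'"
leaked = conn.execute(vulnerable).fetchall()   # every secret in the table

# Safe: a parameterised query treats the whole input as a literal name.
safe = conn.execute("SELECT secret FROM users WHERE name = ?",
                    (attacker_input,)).fetchall()   # no such user: empty
```

The machine is not 'broken into' so much as misled: it faithfully executes a query whose meaning the attacker has quietly changed.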

The most sophisticated attacks see a hacker using a form of cyber-camouflage to evade the target's protection and monitoring systems, concealing the malware's signature by changing its form using polymorphic shellcode.  They also exfiltrate the information collected by their Advanced Persistent Threat (APT) by hiding it in normal network traffic; Duqu, Flame and Gauss were among the first APTs to be recorded.  Should the malware be discovered and investigated, the hacker has to ensure the code is illegible or hidden from view by obfuscating it, drowning the useful functions in a tidal wave of useless data.
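The idea behind polymorphic encoding can be sketched in a few lines (the payload here is just a placeholder string, not real shellcode): each 'generation' re-encodes the same payload under a fresh XOR key, so its byte signature changes while a tiny decoding step recovers the original.

```python
import os

def encode(payload: bytes) -> tuple[bytes, int]:
    # Pick a fresh non-zero one-byte key for each generation.
    key = os.urandom(1)[0] | 1
    return bytes(b ^ key for b in payload), key

def decode(blob: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in blob)

payload = b"placeholder-payload"
gen1, k1 = encode(payload)
gen2, k2 = encode(payload)

# The encoded forms (usually) differ byte for byte, defeating a naive
# signature scanner, yet both decode back to the identical payload.
assert decode(gen1, k1) == payload == decode(gen2, k2)
```

Real polymorphic engines mutate the decoder itself as well, but the principle is the same: the scanner looks for a fixed pattern, and the pattern is never fixed.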

Hacker v Machine v Human?

Hackers don’t just look to fool the machine.  Sometimes it is more effective to dupe the human in the loop, as humans are often the weakest link in a system.  In the 1990s, Kevin Mitnick (still one of the most famous hackers today) developed a technique called ‘social engineering’.  In 1992, after making a few phone calls to Motorola employees, he was able to obtain the source code of the MicroTAC Ultra Lite, the cutting-edge mobile phone of the time.  He simply pretended to be one of their colleagues and leveraged their willingness to help to get them to do what he wanted. 

Mitnick outlined his approach in The Art of Deception: ‘social engineering uses influence and persuasion to deceive people by convincing them that the social engineer is someone they are not.  As a result, the social engineer is able to take advantage of people to obtain information with or without the use of technology’.  According to Mitnick, the human factor is the weak link in the cybersecurity chain.  The facts back him up: according to the 2021 Verizon Data Breach Investigations Report, 85% of breaches involved a human element, and phishing was present in 36% of breaches in their dataset.

Shams, trickery and the military

Hackers are, by their very nature, individuals who approach their objectives through trickery.  They succeed by making a technical or human system behave in a way that was not anticipated by its designer or intended by its owner.  This has two important consequences for military operations.

Firstly, the military cannot command a cyber-effect in the same manner as an artillery barrage or airstrike.  Misleading a system means exploiting its particular flaws with appropriate tools.  Even a server protected by a powerful firewall can still be attacked by gaining access to the room in which it sits and connecting a device directly to it.

Secondly, the effects of a cyber ruse are difficult to predict and harder to control.  The individual who carried the Stuxnet virus into a nuclear plant, for example, was not supposed to connect the USB device to the internet.  This chance error allowed the virus to spread.

If deception has a crucial role in cyberattacks, it also has a key role in cyber defence.  In the late 1980s, Clifford Stoll set up an imaginary computer environment (a honeypot), with a fictitious account and fake documents, to lure a hacker and force them to reveal themselves.  This could be considered the first use of deception in cyber defence.  Cyber deception is a set of tools and tactics designed to defeat cyber-attack and exploitation.  Instead of focusing on an attacker’s actions, the goal is to obfuscate the attack surface and to confuse or mislead attackers, increasing their risk of being detected.  It works like any other deception: by hiding what is real and showing a false picture.  A honeypot defence manipulates network activity, masking the real while presenting the false.
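Stoll's idea can be sketched as a minimal decoy service (the banner text is invented for illustration): a listener that does nothing useful except log whoever connects and hand back a plausible-looking banner, so an intruder probing the network reveals themselves.

```python
import socket
import threading
import time

class Honeypot:
    """A decoy service: accepts connections, logs the source address,
    and returns a fake banner so the prober believes it is real."""
    def __init__(self):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.bind(("127.0.0.1", 0))        # OS picks a free port
        self.sock.listen(5)
        self.port = self.sock.getsockname()[1]
        self.log = []                           # (timestamp, address) pairs

    def serve_once(self):
        conn, addr = self.sock.accept()
        self.log.append((time.time(), addr))    # the prober is now detected
        conn.sendall(b"220 mail.internal ESMTP ready\r\n")  # fake SMTP banner
        conn.close()

hp = Honeypot()
threading.Thread(target=hp.serve_once, daemon=True).start()

# Play the attacker: connect and read the banner.
client = socket.create_connection(("127.0.0.1", hp.port))
banner = client.recv(64)
client.close()
```

Since no legitimate user has any reason to touch the decoy, every entry in its log is, by construction, suspicious, which is precisely what makes the technique attractive for detection.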

Deception techniques, once held back by technological limitations, are now widely used in cyber defence.  They are a game changer because classic perimeter-based defence strategies (firewalls, authentication controls, intrusion prevention systems, etc.), and even defence-in-depth, have proven ineffective in the face of sophisticated attacks.

From a hacker’s tricks to military deception

Cyber was first used for military deception operations in the late 1990s.  In 1999, the United States carried out what could be considered a proto-cyberattack by breaking into the Serbian telephone network linked to the country’s air defence system, making it possible to inject false information onto radar screens.  Even if the use of cyber in warfare is still in its infancy at the tactical level, its contribution to all dimensions of deception operations is certainly promising.

Passive deception, or concealment, intends to hide something that really exists (means, actions, and/or intentions).  Cyber can contribute to a unit’s concealment by obfuscating vital information in a sea of useless data to degrade an opponent’s ability to exploit information.  In 2011, during Operation Unified Protector in Libya, NATO used social networks as a source of information for targeting purposes.  Aware that Gaddafi’s government could also use the same social media to spread disinformation, NATO’s fusion cell cross-referenced information to ensure they weren’t being deceived.  

Attacking the sensors

Hiding what is real does not only involve camouflage.  Cyber operations can also remove the need to hide by blinding sensors through electronic attack.  Communications between radar stations can be attacked and degraded by ‘invading’ the network and taking control of the air defence system, preventing it from detecting friendly aircraft.  This is best demonstrated by the action of Israeli F-15Is in the Deir ez-Zor region of Syria in 2007.  They reportedly flew unhindered over Tor-M1 and Pechora-2A anti-aircraft systems, using a system similar to Suter.  They deployed a virus, developed by BAE Systems, which allowed the Israelis to detect and locate anti-aircraft radars.  The virus was able to analyse the Syrian communication network, penetrate it, and then exploit computer loopholes to see the same information the Syrians were looking at and insert false information of its own.

Digital Decoys

Cyber is also a powerful tool for active deception, especially when dealing with decoys.  Digitalisation tends to make each piece of equipment more connected and the volume of technical data (known as signals) ever greater.  Although accessing a network can be extremely difficult, hacking into these data exchanges makes it easier.  There is certainly a wide range of options available, especially as command and control becomes increasingly virtual.  One basic rule is that the more trustworthy a source (a friendly headquarters, for instance) is believed to be, the more effective it is as a vehicle for deception.


The third major type of deception is intoxication, or disinformation.  Intoxication is an intellectual offensive that deceives an adversary about friendly intentions and capabilities by giving them false or misleading information.  Psychological operations (psyops) are one of its major tools.  Over the past 15 years, cyber operations have become increasingly important in psyops, not just at the strategic level.  They can also have tactical effects in support of deception operations, for example through the use of cellular phone networks to intercept and manipulate the communications of enemy combatants. 

As early as 2007, the Americans gained access to the Iraqi telephone network and sent SMS messages to a number of insurgents to demoralise them.  Beyond targeting by deception, a harassment effect can be achieved by giving the target a feeling of being besieged by an enemy who can reach them at any time, as was the case in Ukraine in 2014-2015.  These cyber activities can cause panic and confusion, or draw an opponent’s attention to one area when the main action is taking place, or is about to take place, in another.  The potential of social media for deception is especially strong in the current context of a new age of propaganda, which benefits from an unprecedented range of disinformation techniques drawn from, among other things, digital marketing.

Intoxication is also achievable through manipulation, a category of cyberattack that aims to control or modify information, systems or networks.  The manipulated information can be very simple, such as a distorted geographic coordinate, or much more elaborate, up to the point of transmitting fake operation orders.  One could thus imagine the digital identity of a high-ranking military officer being stolen by an irregular group to issue manipulated orders.

Exploiting Deepfake Technology

Advances in artificial intelligence (AI) have opened up the world of deepfakes: increasingly realistic face-swap videos that show ‘real people’ saying anything their creators want them to.  Deepfake software is freely available and its use is already widespread.  Applications such as Zao, Face2Face, or Nvidia’s vid2vid now make this technology accessible to a wide audience.  It is not just videos that can be deepfaked.  From a military perspective, audio deepfakes could be an extremely powerful weapon in the hacker’s armoury.  Today this may require a relatively long recording of the subject’s voice (about an hour), but the acceleration of technology in this area will make it a credible and powerful way of spreading false information on a radio network.  Such deceptions are, in effect, an online version of pseudo-operations.

Countering machine learning

We also need to bear in mind that systems using AI are particularly vulnerable to cyberattacks and trickery: a small, undetectable change can result in bad recommendations or non-optimal actions.  An entire field of research, ‘adversarial machine learning’, seeks to understand how to abuse an AI.  Mindful that it will always be possible to deny the sensors that feed an AI access to signals, we can distinguish three main types of attack to thwart machine learning.  First, it is possible to poison the training data.  Second, an AI can be fooled: if it relies on an algorithm trained for a predetermined task, it is unable to detect a decoy that meets the parameters of its programming.  Third, one can modify the code of the AI itself or penetrate its data processing system (via a cyber-attack, for example). 
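The ‘fooling’ attack can be made concrete with a toy evasion example (the weights and input are invented): for a linear classifier, the gradient of the score with respect to the input is just the weight vector, so nudging every feature slightly against the sign of its weight, the idea behind the fast gradient sign method, flips the decision while barely changing the input.

```python
# Toy linear classifier: label = sign(w . x).
w = [1.0, -2.0, 0.5]          # model weights (invented)
x = [0.5, 0.1, 0.4]           # a correctly classified input (invented)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return 1 if v > 0 else -1

def classify(v):
    return sign(dot(w, v))

# Evasion: step each feature by a small epsilon against the gradient of
# the score, which for a linear model is simply w itself.
eps = 0.3
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(classify(x), classify(x_adv))   # 1 -1  (the decision flips)
```

A change of at most 0.3 per feature, trivial to a human observer, is enough to push the example across the decision boundary: the machine is deceived by an input crafted precisely for its blind spot.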

Finally, there is another way to manipulate an AI: interfering with its mechanism.  Knowledge of the algorithms, the data used, and the degree of training (in the case of machine learning) would allow one to predict its responses, especially since it is highly likely that the adversary also has AIs to help with this prediction task.  An opponent could bombard an algorithm with queries, observe its reactions to understand the prediction system, and then seek to manipulate it.

Hacker and commander

Barton Whaley, one of the most renowned analysts of military deception, details four main characteristics of a successful deceiver, and they apply equally well to hackers, even if there is no single profile among either hackers or deceivers.  First, ‘a kind of oblique insubordination seems a characteristic of our deceivers’.  Deception is about manipulating people, not merely information, and it should not be surprising that ‘having discovered the value of deception of an enemy, our deceivers might apply it to their own bureaucracy.’  The parallel with the hacker is obvious: in digital culture they are seen as figures of transgression and trouble, individuals who question established hierarchies and seats of power. 

The disruptive thinking and mindset of personnel involved in the planning and conduct of deception operations is sometimes difficult to manage for the military, but it is essential for success.  Big tech companies have understood this and, despite the ‘anti-system’ mentality of hackers, they deploy strategies to ‘capture’ and employ these purveyors of disruption.  The address of Meta’s headquarters in Menlo Park, California, even refers to it: ‘One, Hacker Way’.

Hacking and humour

The second characteristic of a good deceiver, according to Whaley, is ‘a heightened sense of humour’.  Indeed, as R.V. Jones identified in his essay The Theory of Practical Joking, the design of military deception operations closely resembles the design of hoaxes, which are ‘advanced jokes’.  With some hoaxes ‘the object is to build up in the victim’s mind a false world-picture which is temporarily consistent by any tests that he can apply to it, so that he ultimately takes action on it with confidence.  The falseness of the picture is then starkly revealed by the incongruity which his action precipitates’.  These ‘induced incongruities’ are exactly what military deception operations are looking for. 

Playing pranks, and even integrating them into source code, is commonplace in the world of hacking.  Gabriella Coleman states that ‘humour saturates the social world of hacking’.1 She goes on to say that ‘Hackers literally enjoy hacking almost anything […]. To put it bluntly, because hackers have spent years, possibly decades, working to outsmart various technical constraints, they are also good at joking. Humour requires a similarly irreverent, frequently ironic stance toward language, social conventions, and stereotypes.’

Empathy, rapport and insight

A good military deceiver is also well versed in empathy: the ability to put oneself in the mind of the enemy.  They turn intelligence about the target into a profile of that person’s preconceptions, beliefs, intentions, and capabilities.  That is because the aim of a military deception operation is to convince the adversary to do something different.  Dudley Clarke, the master of British deception operations during the Second World War, was known for this quality: ‘his mind worked differently from anyone else’s and far quicker; he looked out on the world through the eyes of his opponents’.2 

Empathy is also an important quality for hackers.  As we have already seen, the human factor is cyber security’s weakest link, and deception-based social engineering techniques (phishing, for instance) are the key to cybercriminal successes.  Knowing basic human psychology, and being able to put oneself in the place of the target, is therefore an important quality for a successful cyberattack.  It is useful, for instance, when guessing a password by combining words referring to the target’s dog, birthday, and so on.
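That intuition is exactly what attackers' wordlist generators automate.  As a sketch (the personal 'facts' are invented for the example), combining a few scraps of intelligence about a target yields a small candidate list of the kind of passwords people actually choose:

```python
from itertools import product

# Invented facts gleaned from a target's social media:
# dog's name (two capitalisations), birth year, birth month.
facts = ["rex", "Rex", "1987", "06"]
suffixes = ["", "!", "123"]           # common password suffixes

# Every ordered pair of distinct facts, with a common suffix appended.
candidates = {a + b + s
              for a, b in product(facts, repeat=2) if a != b
              for s in suffixes}

print(len(candidates))   # a few dozen guesses, "Rex1987!" among them
```

The list is tiny compared with brute-forcing every possible string, which is the point: empathy for the target shrinks the search space from billions of possibilities to a handful of likely ones.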

The last characteristic of a good deceiver is to have a ‘prepared mind – one open to unusual events. […] It notices the anomalies, the discrepancies, the incongruous happenings that crop up from time to time and seizes eagerly upon them as food for thought’.  In the words of Pasteur: ‘chance smiles only on well-prepared minds’ and this is why serendipity is so important.  Good hackers share the same characteristic.  Hackers are trick collectors.  They anticipate situations but also adapt to contingencies. Hacking requires both dexterity and opportunism.



Two conclusions can be drawn from these assessments.  First, those working in the field of military deception have a lot to learn from a hacker’s state of mind: not only drawing inspiration from their out-of-the-box thinking when planning operations, but also being able to task and interact with those who handle illusion in the cyber world.  It is essential to understand that cyber operations essentially deal in trickery, misleading both humans and machines.  Deception is an empirical field: its effects are difficult to control, and conventional military thinking is not always a good fit.  It is sometimes more relevant to think in terms of an opportunity (a flaw in the targeted cyber or human system) than to manufacture an effect (as with an artillery barrage, for example).

Secondly, understanding the link between hacking and deception offers a better view of whom to employ and how to train specialists in these fields.  They differ from the regular military profile: somewhat undisciplined, creative, mischievous, more empathetic.  Hiring people with a different way of thinking is not easy and may require unusual methods.  In the same way, these specialists cannot be trained like other soldiers: their technical and empirical skills have to be tested in realistic scenarios, which require a level of simulation and specialist infrastructure such as cyber ranges.  These requirements, which sit well outside normal military activity, can of course be outsourced, with all that implies in terms of trust, and of tactical and strategic autonomy, for military leaders who can no longer afford to wage operations without deception.

Anthony Namor

Anthony Namor is a French army officer and a cyber specialist. He is an associate researcher at the Saint-Cyr Military Academy Research Centre.

Rémy Hémez
French Army

Rémy Hémez is a French army officer and combat engineer. He has researched military deception extensively over the past ten years and has published several articles and studies on the subject. You can follow him on Twitter: @RemyHemez


  1. E. Gabriella Coleman, Coding Freedom: The Ethics and Aesthetics of Hacking, p. 7.
  2. Thaddeus Holt, The Deceivers: Allied Military Deception in the Second World War, Phoenix, 2005, p. 14.
