Phishing the human mind
How organisations need to manage their cyber security strategically, through a combination of technology and human psychology
Cyber threats are becoming increasingly common and sophisticated, targeting both individuals and organisations. Consider the following scenarios: an email lands in your inbox from your employer, offering a gift card and asking you to click a link to claim it; or news arrives via email that you are off the waitlist for Taylor Swift's Eras Tour, and you click a link to register. In both cases you will have failed a phishing test run by your employer to train people to recognise attacks. Employees pass only when they report the email.
If you did fail, you would not be alone: one security awareness company sent over 17,600 Taylor Swift phishing emails, and 533 people clicked on them. Whilst a click rate of around 3% may seem pretty good, each one of those clicks could potentially cost the organisation millions in recovery efforts, legal fees, and lost revenue. Added to that are potential fines, such as in the UK, where the Information Commissioner's Office can levy fines for lapses in cybersecurity of up to £17.5 million or 4% of the annual turnover of the parent company. The overall costs are huge, with the World Economic Forum estimating that the global cost of cybercrime will jump to $23.84 trillion by 2027, up from $8.44 trillion in 2022.
Given the pervasiveness of IT systems in our lives, managing everything from our employment and finances to our health and social relationships, we are arguably in a time when cybersecurity threats represent a huge challenge to our wellbeing. This threat is, of course, not purely technical but has a strong social element. Defending against phishing attacks is challenging because identifying a phishing email requires specific social context that only the recipient possesses, such as expected communications and trusted contacts.
Given the techno-social element of these attacks, it is clear that humans are a crucial part of the defence against cybercrime, and behavioural science can be called upon to consider how people can best be equipped to defend themselves and their organisations. In doing so, we should also ask what it is reasonable to expect of people.
The growing threat of phishing
Phishing attacks are considered one of the top cybersecurity threats organisations face, with the UK Government's Cyber Security Breaches Survey finding that 90% of businesses that experienced cybercrime reported phishing as the primary type of attack they faced. These attacks can take a number of forms:
Spear Phishing: Targeted attacks aimed at specific individuals or organisations.
Whaling: Targeting high-profile individuals like executives or public figures.
Smishing: Phishing attacks conducted via SMS or text messages.
Vishing: Phishing attacks conducted via voice calls.
Deloitte's 2024 Cybersecurity Threat Trends Report notes that these forms of phishing are common entry points for more sophisticated attacks. This can include harvesting information that is then used to 'socially engineer' subsequent communications, for example to intercept correspondence and redirect funds, or to deliver malicious software (malware) designed to block access to a computer system or encrypt data until a ransom is paid to the attacker.
And not even those who might be expected to have high levels of security are immune: in 2020, Twitter experienced a high-profile phishing attack that compromised the accounts of prominent figures like Elon Musk and Barack Obama. Protecting against cybercrime is therefore a hugely pressing issue for all organisations (and of course for individuals on their home computers). So just how is this done?
The history of modern organisational cybersecurity
Early organisational IT security strategies were heavily influenced by the metaphor of the firewall, drawing on the concept of physical firewalls in buildings. In buildings, firewalls contain fires within specific areas, preventing their spread in order to protect the entire structure. Similarly, in computing, firewalls were designed to establish a secure perimeter around the organisation, isolating it from external threats.
The firewall metaphor emphasized the importance of separation and control over movement. Just as a physical firewall in a building limits the spread of fire, a digital firewall restricts the flow of data, allowing only certain types of traffic and interactions to pass through. This creates a protected interior space where valuable digital assets can be safely contained.
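To make the metaphor concrete, here is a minimal sketch of that perimeter logic in Python: deny by default, and allow only traffic matching an explicit rule. The rules, networks and ports below are invented for illustration, not a real firewall policy.

```python
# Minimal sketch of default-deny perimeter filtering. The rules, networks
# and ports below are invented for illustration, not a real policy.

from ipaddress import ip_address, ip_network

ALLOW_RULES = [
    # (source network, destination port, protocol)
    (ip_network("203.0.113.0/24"), 443, "tcp"),  # a partner's traffic, over HTTPS
    (ip_network("0.0.0.0/0"), 25, "tcp"),        # inbound mail to the mail gateway
]

def is_allowed(src: str, dst_port: int, protocol: str) -> bool:
    """Default deny: traffic passes only if an explicit rule matches."""
    return any(
        ip_address(src) in network and dst_port == port and protocol == proto
        for network, port, proto in ALLOW_RULES
    )

print(is_allowed("203.0.113.7", 443, "tcp"))    # True: matches the partner rule
print(is_allowed("198.51.100.9", 3389, "tcp"))  # False: no rule, so denied
```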
However, the firewall metaphor also places significant responsibility on employees: while the firewall manages external threats, the onus is on employees to maintain the integrity of the organisation's network by being vigilant against accidental breaches, unauthorised activities and insider attacks. This typically requires ongoing security training and awareness so that employees can recognise and report suspicious activity. Given this focus on the importance of the workforce, how can this group of people be encouraged to engage effectively in cybersecurity?
The role of the employee in cybersecurity
Typically, it is the responsibility of the IT team to manage the wider workforce's cybersecurity behaviours. However, expertise in technical issues does not necessarily translate into expertise in risk management communication. All too often, technical experts assume that poor cyber behaviours are caused by a lack of available facts or a failure to understand the consequences of certain actions. This leads to a tendency to tell people what the experts think they ought to know: in other words, a 'broadcast of facts'.
The problem here, of course, is that this is fact-focused rather than audience-focused. Unfortunately, it has been found that this results in IT professionals tending to blame users for security incidents, with words such as 'lazy', 'stupid' or 'ignorant' being used. There have been significant advancements, and most modern IT departments now adopt a more enlightened approach, encouraging whistleblowing and implementing a no-blame policy for reporting security lapses. Whilst these steps are necessary, there is clearly more to be done, and the next step is to consider how to better educate users to deal with cybersecurity threats such as phishing attacks.
Education is clearly needed, but it comes with challenges. Education works best when people are motivated, yet the evidence suggests that people are not motivated to engage with this issue and will in fact frequently seek to avoid it. In addition, applying the training and receiving feedback on the steps taken are needed to embed what has been learnt in the 'classroom' into real life. The challenge here is that no news is often good news in cybersecurity, in which case the training risks becoming a tick-box exercise rather than something properly engrained in day-to-day working behaviours. Some steps have been successful, such as gamification approaches where fake phishing emails are sent out, with prizes for those who accurately spot and report them (and education for those who click on the links).
The education approach can lead to best practices such as 'never click on a link in an email' and 'never respond to an email asking for banking details', as well as checking that URLs match the alleged email sender. Again, while necessary, such rules are unlikely to be sufficient, in no small part because the time users have to spend on checking is limited: making them deal with ever more warnings and difficult tasks (like hovering over every email link received) results in lost time, distraction, and fearfulness.
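To see what the 'check the URL matches the sender' advice actually asks of a reader, here is a minimal sketch of that comparison in code; the addresses and URLs are invented for illustration.

```python
# Sketch of the "does the link match the sender?" check, assuming the sender
# address and link URLs have already been extracted from the email.

from urllib.parse import urlparse

def sender_domain(address: str) -> str:
    """Take the domain part of an email address."""
    return address.rsplit("@", 1)[-1].lower()

def suspicious_links(sender: str, urls: list[str]) -> list[str]:
    """Flag links whose host is not the sender's domain or a subdomain of it."""
    domain = sender_domain(sender)
    flagged = []
    for url in urls:
        host = (urlparse(url).hostname or "").lower()
        if host != domain and not host.endswith("." + domain):
            flagged.append(url)
    return flagged

# A message claiming to come from payroll, with one matching link and one
# lookalike domain ('examp1e' with a digit one).
print(suspicious_links(
    "payroll@example.com",
    ["https://portal.example.com/benefits", "https://examp1e-login.net/verify"],
))  # ['https://examp1e-login.net/verify']
```

Even this simple sketch shows why the advice is fragile: the lookalike domain differs by a single character, and many legitimate organisations send mail via third-party domains that would fail the same test.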
So how can employees best be equipped to manage the range of cybersecurity threats for which their engagement is urgently required? One area that holds promise is the use of mental models.
The use of mental models to support cyber security
Rick Wash and Emilee Rader suggest that mental models are a "simplified representation of reality that allows people to interact with the world". Mental models describe how a person reasons and makes inferences about a problem or situation, allow people to make predictions about what might happen, and provide simple rules of thumb to guide behaviour. They are helpful because they guide our understanding without necessarily being shaped by formal instruction or education; furthermore, they are often intuitive: we do not have to think about them explicitly.
This holds true in cybersecurity: users who have mental models of hackers as criminals are more likely to adopt stringent security practices than those who see them as mere nuisances. Another paper, by Ruba Abu-Salma and colleagues, looks at the way people understand the security properties of communication tools (such as WhatsApp, iMessage and Telegram). The paper emphasises that non-experts often have an egocentric mental model of cybersecurity, thinking that splitting information across various tools (like email, messaging apps, and social media) creates separate channels and thereby protects them from interception by an attacker. This can lead them to fail to implement security measures (end-to-end encryption and digital signatures) across their channels, actually making their communications more susceptible to breaches. By contrast, experts have a network-based mental model, understanding that all communication tools are interconnected within a larger network and recognising that there is no security benefit in splitting information across various tools. As a result of this (correct) mental model, experts are more likely to adopt and correctly configure advanced security technologies.
Perhaps it is no wonder, then, as this paper suggests, that mental models appear to be shaped by fictional portrayals of computer security concepts and behaviours in films and media generally. Incorrect or incomplete mental models are often based on these portrayals, leading to suboptimal outcomes such as believing that security intrusions are always obvious, that hacking is inevitable, and that ordinary users are not important enough to be hacked. In one example, a scene in the TV series NCIS in which a computer is unplugged to stop a hacker led participants to believe (incorrectly) that this simple action is an effective way to stop cyberattacks.
Finally, an interesting study led by Karen Renaud explored children's vulnerability to dark patterns: deceptive techniques in websites and apps that trick users into actions they might not intend. The paper found that 11- to 12-year-old Scottish children are aware of online deception but often misinterpret benign warnings and fail to differentiate between various dark patterns and genuine alerts, leading to heightened suspicion and mistrust. The paper suggests that interventions should focus on improving children's comprehension of the characteristics and motivations behind these deceptive techniques, helping them develop more accurate mental models and enhancing their ability to navigate online environments safely.
All of this suggests that encouraging the adoption of the 'right' mental models is helpful, offering shortcuts that facilitate understanding and emphasising the importance of embodying some behaviours and not others. So should we double down on storytelling and narrative as the means to support employee cybersecurity behaviours?
The limitations of mental models
Karen Cook and Russell Hardin suggest we need to be cautious about an over-reliance on narratives, arguing that stories can be manipulated to highlight specific aspects while omitting others. For example, a company might emphasize a single successful defence against a cyber-attack while ignoring multiple instances of breaches, creating a misleading sense of security.
In addition, narratives often rely on emotional appeal, which can overshadow rational analysis and lead to decisions driven more by emotional resonance than by factual evaluations. In cybersecurity, this could mean prioritizing high-profile, emotionally charged threats like ransomware attacks over more pervasive but less dramatic risks such as phishing.
Lastly, narratives often capture only a snapshot in time, which can lead to decisions based on outdated or incomplete information. This means that in cybersecurity, relying on a story about a past success in security protocols might overlook current vulnerabilities and evolving threats. Consequently, this could result in a false sense of security and complacency, leaving us unprepared for new and emerging risks.
Why we need relational trust
While Cook and Hardin critique the over-reliance on storytelling, more positively they make a case for relational trust. By this they mean that our understanding of what and when to trust in communication can be enhanced by shared experiences, which lead to mutual reliability; it is through this that we have a verifiable basis for trust.
In the context of cybersecurity, this means building ‘trust networks’ through regular communication and transparency, having regular meetings and communication channels between departments and security teams for a consistent and reliable flow of information. The point is that when employees have a strong network of trusted contacts within and outside the organization, they can better assess the legitimacy of communications.
To understand how this works we can draw on a study, again by Rick Wash, that looks at the way experts identify phishing emails. This involves a three-stage process:
First, they engage in sensemaking, using their expertise to notice discrepancies within the email's context, triggering a cognitive shift that makes them suspicious.
This prompts a second stage of further investigation, for example hovering over the URL to check its legitimacy.
Finally, they decide on the email's authenticity and take appropriate action, such as deleting or reporting it.
The point here is that the mechanical checking of the second stage is not enough on its own; indeed, Angela Sasse argues that an approach centred on such checks is misguided for the reasons noted earlier: users' checking time is limited, and piling on warnings and difficult tasks (like hovering over every email link received) results in lost time, distraction, and fearfulness.
Instead, users must shift their understanding of the email's context to identify a potential problem. In the study, for example, one participant reported that their organisation doesn't typically send business emails outside of business hours (the email in question, asking them to update benefits information, arrived out of hours). Another raised the alarm because an email lacked the usual full, long signature that is standard practice in their organisation. A third received an email from someone who worked down the corridor, which he considered odd: she never emailed him, and the company culture was to personalise emails, yet this one wasn't personalised.
We can see all of these as examples of the relational trust we have in the organisations we work for, and as an important element in helping people recognise discrepancies.
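Speculatively, a few of these contextual expectations could even be encoded as automated hints that prompt, rather than replace, the reader's sensemaking. The sketch below uses invented rules drawn from the examples above (out-of-hours sending, a missing signature, a lack of personalisation); real expectations are organisation-specific and far richer than this.

```python
# Sketch: encoding a few organisational expectations as hints for the reader.
# The expectations below are invented examples, not a real rule set.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    hour_sent: int           # 0-23, local time
    has_full_signature: bool
    is_personalised: bool

def contextual_hints(email: Email) -> list[str]:
    """Flag departures from how this organisation normally communicates."""
    hints = []
    if not 9 <= email.hour_sent <= 17:
        hints.append("sent outside normal business hours")
    if not email.has_full_signature:
        hints.append("missing the standard full signature")
    if not email.is_personalised:
        hints.append("not personalised, unlike most internal email")
    return hints

msg = Email("hr@example.com", hour_sent=23,
            has_full_signature=False, is_personalised=False)
print(contextual_hints(msg))  # all three hints fire for this message
```

The relational trust itself, of course, cannot be automated: such hints only work because they echo expectations employees already hold.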
Reducing the pressure on the employee
Whilst a lot of work has been done to enhance the capability of the workforce to spot and prevent cybercrime, maintaining vigilance is challenging, and the level of demand placed on the individual employee can at times seem unreasonable.
An alternative approach to cybersecurity has been emerging, called Zero Trust, that in part shifts the focus away from individual employees as the frontline of cybersecurity vigilance. This approach proposes that organisations should not automatically trust anything either inside or outside their perimeter, and must verify everything trying to connect to their systems before granting access. It therefore requires continuous monitoring and validation of users and devices, with rigorous authentication mechanisms: no user or device, whether inside or outside the network perimeter, is trusted without verification.
For the employee this means continuous verification (ongoing authentication that users are who they say they are), least-privilege access (providing only the minimal permissions necessary to access data and systems), and micro-segmentation (isolating network segments to limit the impact of security breaches).
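As a rough illustration of how these three elements combine, the following sketch evaluates a single request against a Zero Trust policy. The attributes, permissions and segments are invented for illustration and do not reflect any particular product's API.

```python
# Sketch of a per-request Zero Trust decision: every request is evaluated
# afresh, and nothing is trusted simply for coming from inside the network.
# All attributes and policy values below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool       # continuous verification: recent strong authentication
    device_compliant: bool   # e.g. a managed, fully patched device
    segment: str             # network segment the request originates from
    resource: str
    action: str

# Least privilege: each user holds only the specific permissions they need.
PERMISSIONS = {"alice": {("payroll-db", "read")}}

# Micro-segmentation: each resource is reachable only from its own segment.
RESOURCE_SEGMENT = {"payroll-db": "finance"}

def authorise(req: Request) -> bool:
    """Grant access only if every condition holds; otherwise deny."""
    return (
        req.mfa_verified
        and req.device_compliant
        and (req.resource, req.action) in PERMISSIONS.get(req.user, set())
        and RESOURCE_SEGMENT.get(req.resource) == req.segment
    )

# A verified user, acting within their permissions, from the right segment.
print(authorise(Request("alice", True, True, "finance", "payroll-db", "read")))    # True
# The same user requesting from the wrong network segment is denied.
print(authorise(Request("alice", True, True, "guest-wifi", "payroll-db", "read"))) # False
```

The point is that the trust decision is made by the system for every request, rather than resting on the vigilance of the person making it.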
The Zero Trust approach reduces the risk profile for the business by lowering the 'attack surface' (the total number of entry points and vulnerabilities that attackers can exploit) and the 'blast radius' (the potential extent of damage if a security breach does happen), through strict access controls, continuous monitoring, and segmentation of networks and systems.
Of course, Zero Trust is not immune from cybercrime, which can shift from phishing to verification fraud; the challenge then becomes maintaining user trust in legitimate verification processes while educating users to recognise and avoid fake ones. And increasing the number of verification steps can lead to 'security fatigue', causing users to become desensitised to security prompts and to respond without due diligence. Organisations must balance the need for robust security against the risk of overwhelming users, which can itself lead to lapses in judgement.
The need for a wider organisational perspective
There is a danger that cybersecurity is seen as the domain of the IT department, albeit with support from disciplines such as behavioural science to understand employee behaviour. However, the emerging area of Cyber Security Governance recommends that security be aligned with strategic objectives to ensure that risks are managed most effectively.
The authors of one paper in this area identify three key paradoxical tensions that need to be addressed for effective cybersecurity implementation:
Institutionalization versus Professionalization: This tension is about whether security should be a part of everyone's job or handled by specialized security professionals. Effective cybersecurity means making security a daily responsibility for all employees, creating a strong security culture and reducing errors.
Security versus Innovation: Balancing security with the need for innovation is crucial. Innovation requires flexibility and speed, but security measures can slow things down. Effective cybersecurity integrates security from the start, ensuring new technologies are secure without limiting creativity.
Mindfulness versus Mindlessness: This tension involves balancing human oversight with automated processes. Automation is essential for managing threats quickly, but human judgment is needed for complex issues. Effective cybersecurity requires both automation and human involvement to maintain high security levels.
The business-critical nature of cybersecurity risks, alongside the need for wider cross-organisational engagement to deliver effective security, means this is not something to be managed in isolation from the rest of the business. The challenges that come with this more holistic approach will inevitably bring paradoxical tensions, for which effective governance and decision-making are needed.
In conclusion
There is no simple answer to the challenge of tackling cyber threats: fraudsters are becoming ever more sophisticated as AI-based software is used for ever more tailored targeting. Cybercriminals now use AI tools not only to harvest details from social media and data breaches, but to dynamically generate personalised messages that mimic individuals in convincing ways.
On this basis, the degree to which we can rely on intuitive mental models to guide our sense of 'rightness', whether in spotting fraudulent emails or fake verification requests, will need constant focus and attention. Strategies that support mental models and relational trust have a role to play, but there are no silver bullets: organisations need a range of approaches.
These should be complemented by technology approaches such as Zero Trust, which recognise that there are limits to how far the vigilance of the workforce can (or indeed should) be relied upon to police the organisation against ever more sophisticated and nuanced cyberthreats. This is an issue that requires strategic organisational decision-making, reflecting the challenges of navigating an increasingly complex world with evolving risks.