It’s something we regularly hear in the press and even from those within the industry: “People are cyber security’s weakest link”, writes Oz Alashe, CEO of CybSafe.
We’re told that users are stupid and lazy — that they click carelessly on obvious phishing emails, instantly download random files they’ve been sent, and plug in blatantly malicious USB drives when given the opportunity; that they never update software, use ‘password’ as a password, and then reuse that password across multiple websites.
While this might seem like an absurd portrayal, it’s been the language of much of the security industry for the last couple of decades. It’s language that, for many good reasons, we urgently need to move away from.
Looking Behind the Slogan
As Ciaran Martin, Chief Executive Officer of the UK’s National Cyber Security Centre explained in a speech last year, the ‘weakest link’ narrative is a meaningless one: “It’s a bit like saying the weakest link in a sports team is all the players.”
People are always going to be victims of cyber crime; stating this is stating the obvious. And while it’s true that four of the top five causes of data breaches involve human or process error, this doesn’t mean that people are the problem.
Yes, “people” are arguably to blame when your favourite team loses, but what do we actually achieve by stating this? We need to be having a different kind of discussion, and we need to be asking entirely different questions.
Running with the sports analogy – what were the underlying causes of the defeat? Was it the strategy, the formation, a specific player underperforming, or something else entirely? What can we understand by digging deeper into the data and what insights can we take from this to make things better?
Changing the Tone
The ‘weakest link’ expression isn’t just meaningless; it’s also inaccurate.
When people are phished, when they download malware, or when they feel the urge to write down a password, companies often assume that the individual is solely at fault. Sometimes this is true. Sometimes it’s not.
As the ‘systems model’ of psychology implies, poor cyber security behaviour is often the result of a systemic problem, rather than a problem with the individual. Indeed, as James Reason – a psychologist who has long researched human error and error management techniques – argues, “a substantial part of the problem is rooted in error-provoking situations, rather than error-prone people.”
Designing Technology and Systems with the Human in Mind
So what’s the solution to all this? Well, for a start, let’s get rid of the negative language once and for all.
It reflects poorly on the cyber security sector and on infosec professionals – making us sound rather arrogant and supercilious. It distances us from others within organisations at a time when we need to be trusted.
Staff should feel comfortable reporting phishing attempts and data breaches, coming to us with questions, and sharing other useful information. Staff certainly shouldn’t have to see themselves as a liability.
On the contrary, businesses need to empower users. People need to understand that they are the best defence against the threats out there. People need incentives. They need a ‘what’s in it for me’.
And organisations need to have the right environments in place to bolster those incentives. Correct cyber security behaviour should be seamless; doing the right thing should be the easiest thing.
To that end, businesses need systems and technology in place that enable their people to be secure without compromising their productivity. As Pfleeger, Sasse and Furnham note in a 2014 paper, the “key principle is designing technology that fits a person’s physical and mental abilities: fitting the task to the human.”
To borrow James Reason’s words, let’s get away from seeing the “human as hazard”, and let’s start seeing the “human as hero”.
People aren’t the “weakest link”. Armed with the right knowledge, the right motivations, and the right resources, they’re a business’s strongest security asset.