Computers are not like toasters or microwave ovens. They are like children, and more specifically, like toddlers.
Just as toddlers need to be protected from “stranger danger”, digital systems need protection from its cyber equivalent. And both of them, toddlers and digital systems, need us for their protection, to watch out for them.
Just as toddlers are prone to physical injury from tumbles and falls, computers are prone to technical accidents: data loss from storage corruption, hardware failure, and the like. But there is also the other type of danger, stranger danger. Toddlers need to be protected from those with malicious intent, from anti-social elements, and from exposure to age-inappropriate situations. Similarly, digital systems need to be protected from cyberattackers operating with a variety of intentions. Where lollies, ice creams, toys, stories, and impersonating a relative are some of the props and techniques used against toddlers, phishing emails and malicious advertisements are used against the human users of digital systems. The goal of such emails and ads is to get the human user to run bad code on their machine, or to divulge sensitive account credentials or data.
Digital systems are not like independently capable adults. They are like toddlers. Just as child protection laws and child safety and welfare policies form one arm of defence for children, technical solutions and controls form one arm of defence for digital systems. What is the second arm of defence and protection? How do we manage to keep toddlers safe from stranger danger and other such dangers? How do we keep them safe, generally? By practising watchfulness. Security Education, Training, and Awareness (SETA) is the cyber equivalent of that watchfulness.
But such SETA efforts are often reported to be ineffective, and those who receive the training often find it painful, annoying, boring, and a nuisance. There is one simple thing that could help solve the effectiveness problem of security education, training, and awareness efforts: acknowledging the fact that digital computer-based systems are inherently vulnerable. They are easily accessible and modifiable code-execution machines. They can run good code and bad code with equal ease.
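A contrived sketch in Python may make that last point concrete. The function names, paths, and scenario here are invented for illustration, not drawn from any real system or incident; the point is simply that the machine executes whatever valid code it is handed, with no notion of the intent behind it.

```python
# A contrived illustration: to the machine, "good" and "bad" code are
# indistinguishable. Both functions below are equally valid, equally
# easy-to-run Python; the difference lies entirely in human intent.

import shutil


def back_up_report(src: str, backup_dir: str) -> None:
    """'Good' code: copy a sensitive file to a backup location."""
    shutil.copy(src, backup_dir)


def exfiltrate_report(src: str, attacker_share: str) -> None:
    """'Bad' code: the exact same copy call, pointed (hypothetically)
    at an attacker-controlled network share instead of a backup folder."""
    shutil.copy(src, attacker_share)


# The interpreter applies no judgement: same API, same ease of execution.
# The judgement has to come from the humans watching over the system.
```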
Digital systems carry a wide variety of inherent vulnerabilities. They are not the all-knowing, all-powerful, perfect machines they are typically portrayed to be. They require a great deal of care and attention to be kept safe. In human development, adults mature into the ability to keep themselves safe, make informed decisions, manage risks, and respond to situations using the faculties of intelligence, memory, perception, and so on. Digital computer systems have not yet reached such a state of maturity. Because their programmable surface is large and easily accessible, they remain highly exposed to attack.
This situation is different from that of other, more mature technologies, whose systems are tightly limited in scope and accessibility. Examples include mechanical engineering, electrical engineering, and traditional industrial control systems. These fields have an array of mature security, safety, and reliability standards and principles (fault tolerance, redundancy, bypass mechanisms, kill-switches, strict data transfer protocols, and so on), and their products are generally much less ‘hackable’ because they are physical objects (factory switches, aircraft, microwave ovens and other kitchen appliances) that remain largely inaccessible.
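For readers who live in code rather than on factory floors, here is a minimal, hypothetical sketch of what one of those principles, a kill-switch style interlock, might look like translated into software. The names and threshold are invented for illustration; real interlocks in industrial systems are physical or firmware-level mechanisms, not Python.

```python
# A hypothetical software analogue of a hardware interlock / kill-switch:
# a risky operation simply cannot run unless an independent guard passes.

class InterlockError(RuntimeError):
    """Raised when a safety guard blocks an operation."""


def guarded_run(operation, is_safe) -> None:
    """Run `operation` only if the `is_safe` check passes; otherwise halt."""
    if not is_safe():
        raise InterlockError("guard condition failed; operation blocked")
    operation()


if __name__ == "__main__":
    temperature = 250  # contrived sensor reading, degrees Celsius

    try:
        guarded_run(
            operation=lambda: print("motor started"),
            is_safe=lambda: temperature < 200,  # trips the interlock here
        )
    except InterlockError as exc:
        print(f"blocked: {exc}")
```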
The acknowledgement that digital systems are not the astonishingly perfect systems they are often touted to be, and that they need our care to remain safe, could be the one simple thing that helps human users, and with them, the effectiveness of SETA efforts.
How might it help? To start with, it acknowledges and respects the dignity of human users by telling them the truth about the systems they are asked to use to perform their work. Human users have consistently been made to feel ‘less than’ in comparison to the ‘smart digital systems’. They have been repeatedly berated and shamed, directly and indirectly, for being unproductive in comparison to those systems.
In many organisational settings, the nature and tone of internal communications make the human users of computer systems feel like a nuisance the organisation has to manage; as if the organisation, its leadership, and its management would rather eliminate humans entirely from their “systems” so that there would be no “human risk” or other loss-making human-related concerns, while machines “do things perfectly”, without the overheads of caring about well-being, health, dignity, respect, ethics and such.
Acknowledging the fallibility of digital systems puts an end to this false narrative. It restores dignity and respect to the human users of digital systems. It is a welcome break from treating them as the error-prone elements in an otherwise perfectly orchestrated, high-performance productivity panacea.
Restoring their dignity, respect, value, and agency could help human users give their attention to the cybersecurity situation, grasp its importance, and contribute their energy and time towards keeping those systems secure. If, instead, you keep trying to manipulate them into the behaviours you want them to exhibit, so that your risks remain managed, it will not be a pleasant experience for anyone. Nor will the systems get secured.
Humans are not the weakest links in cybersecurity. It is the underlying systems themselves that are the weakest links. If anything, humans could be the guardians. It is important to call things by their right names.