As advancements in artificial intelligence (AI) accelerate across all industries, the numerous benefits this technology can bring to organisations continue to expand, writes Piers Wilson, Head of Product Management at Huntsman Security.
It is particularly helpful to security operations teams, as it can support personnel by taking over mundane tasks and automatically monitoring network or user activity, freeing up cyber security analysts to focus on genuine threats. However, the unfortunate reality of AI is that the smarter the technology gets, the smarter attackers become too.
Throughout 2019/2020, we can expect to see the emergence of AI-driven cyberattacks as malicious actors harness advances in technology to identify vulnerabilities and access networks. To defend against these new breeds of attack, businesses, security vendors and government bodies alike must collaborate to hold the fort against this oncoming threat.
Evil Twin
It seems inevitable that security teams globally will see a rise in AI-powered cyberattacks. There has already been one major case of this happening, in India: a never-before-seen attack used simple machine learning to identify patterns of normal user behaviour within a network, and then learned to mimic this activity. This enabled the malicious AI to blend into the background of the network, evading detection by security tools.
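To make that idea concrete, the sketch below (illustrative only, not the actual malware) shows how even very simple statistics can learn a per-user baseline of "normal" activity; anything that stays inside that envelope would not stand out to an anomaly-based detector. The event fields and thresholds are hypothetical.

```python
# Illustrative sketch only: learning a "normal behaviour" baseline from activity
# logs - the same kind of profiling the attack reportedly used to blend in.
# Event values and the tolerance threshold are hypothetical.
from statistics import mean, stdev

# Hypothetical history: bytes transferred per session for one user
history = [12_400, 9_800, 11_200, 13_000, 10_500, 12_900, 11_700]

baseline_mean = mean(history)
baseline_std = stdev(history)

def looks_normal(bytes_transferred: float, tolerance: float = 2.0) -> bool:
    """Return True if the observation sits within `tolerance` standard
    deviations of the learned baseline, i.e. it would not stand out."""
    return abs(bytes_transferred - baseline_mean) <= tolerance * baseline_std

# A malicious process that keeps its transfers inside this envelope is
# indistinguishable from the user's routine activity.
print(looks_normal(11_000))   # True  - blends in
print(looks_normal(250_000))  # False - would trip an anomaly alert
```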
As similar attacks grow in number and become more sophisticated, we could see a new breed of cyberattack that security teams must be prepared to defend against: 'distributed denial of AI' attacks. Much like a distributed denial of service attack, this new form of threat will disconnect AI defences from a network or subvert them, disabling their ability to detect malicious activity, alert personnel to the threat, or resolve it automatically. This would leave networks extremely vulnerable and exposed. So, what is it about AI technology that makes it so powerful?
Malicious Marketing
The power of AI lies in how fast it can learn, adapt and adopt. An AI-powered attack can scan networks, determining what constitutes normal behaviour and how to mimic it to avoid detection, all while potentially locating unpatched ports and other vulnerabilities to exploit. AI attacks can also target individuals who have access to networks, using their data against them in order to take control of personal devices within the wider enterprise.
In much the same way that Facebook, Google and Amazon profile their users based on their online activities, AI can monitor user behaviour online and in systems, learn usage patterns, and pick up on which links people are likely to react to and click on. For instance, if someone exchanges frequent emails with a particular colleague, a malicious AI may recognise this pattern and send a fake email purporting to come from that colleague, containing a link the recipient would trust, and hence click on.
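The pattern being exploited here is trivially easy to extract. The hypothetical sketch below simply counts a user's most frequent correspondents from mail metadata; that is the kind of signal a malicious AI could learn to pick a convincing sender identity, and equally the kind of signal defenders can monitor for impersonation. The log format shown is made up for illustration.

```python
# Illustrative sketch only: counting a user's most frequent correspondents from
# mail metadata. A malicious AI could learn this pattern to choose a convincing
# "From" identity; defenders can watch the same signal for impersonation.
# The log format here is hypothetical.
from collections import Counter

# Hypothetical (sender, recipient) pairs pulled from message headers
mail_log = [
    ("alice@example.com", "bob@example.com"),
    ("alice@example.com", "bob@example.com"),
    ("carol@example.com", "bob@example.com"),
    ("alice@example.com", "bob@example.com"),
]

contacts = Counter(sender for sender, recipient in mail_log
                   if recipient == "bob@example.com")

# The most frequent correspondent is the most trusted - and the most
# convincing identity for a spear-phishing email to imitate.
most_trusted, count = contacts.most_common(1)[0]
print(most_trusted, count)  # alice@example.com 3
```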
In this way, individuals can fall victim to this form of 'malicious marketing' and inadvertently become doorways through which AI attacks access the organisation's broader network. Security teams, organisations and the individuals who work within them must therefore all tackle the serious question of how to defend against this form of attack.
Fight AI with AI
Relying on security personnel alone to manually defend against these attacks is impossible. Most security analysts are already stretched to the limit, and the ongoing skills shortage means those remaining face enormous time pressures and workloads. Within the field, there are already widespread concerns about people and teams being overloaded, with 38 per cent of cybersecurity professionals saying they believe the skills shortage is leading to higher rates of burnout. Relying on a human response to automated AI threats is not sustainable, and it leaves networks and organisations exposed to even greater risk.
To counteract this, organisations must employ technology that incorporates AI, machine learning and automation to mitigate risk exposure. These technologies can reduce analysts’ workload by taking care of routine, time-consuming and mundane activities, so they can concentrate on the issues that demand a human response.
For instance, tools that can automatically identify, analyse, triage and (if necessary) quarantine potential threats mean analysts can focus on effectively resolving genuine dangers to the network, instead of chasing every potential warning and false alarm. Analysts can work with their AI-based defences to learn from attacks and improve security accordingly – something that could be critical in the wider fight against new forms of cyberattack.
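As a rough illustration of that triage step, the sketch below scores alerts, automatically contains the high-confidence ones and escalates only those that need human judgement. The alert fields, thresholds and quarantine action are hypothetical and not any particular vendor's API.

```python
# Illustrative sketch only: automated triage that quarantines high-confidence
# detections and escalates the rest, so analysts only see alerts that need
# human judgement. Fields, thresholds and actions are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    host: str
    description: str
    score: float  # 0.0 (benign) to 1.0 (certain threat), e.g. from an ML classifier

def quarantine(host: str) -> None:
    # Placeholder for an endpoint-isolation action (EDR API call, VLAN move, etc.)
    print(f"isolating {host}")

def triage(alert: Alert) -> str:
    if alert.score >= 0.9:
        quarantine(alert.host)          # contain automatically
        return "quarantined"
    if alert.score >= 0.5:
        return "escalated to analyst"   # needs human judgement
    return "logged only"                # routine noise, no analyst time spent

for a in [Alert("ws-014", "beaconing to known C2", 0.97),
          Alert("ws-101", "unusual login time", 0.62),
          Alert("ws-230", "failed password", 0.10)]:
    print(a.host, "->", triage(a))
```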
Doing nothing is not going to combat AI-driven cyberattacks. To defend against this threat, businesses and security teams across all industries must work closely with security providers to deliver robust security systems that leverage AI, machine learning and automation to support security teams' efforts to detect, triage, diagnose, contain, resolve and recover from attacks when they occur.