
AI in Cyber Security: Creating the best defence against modern cyber attacks

Before embarking on the intro to this AI and security article, I tried to avoid the ‘robot uprising’, ‘Terminator’, ‘cyborg overlords’ rhetoric. The effort was in vain, however, as I realised that the Terminator was a perfect segue into the article. In the original film, Schwarzenegger’s Terminator was the bad guy – a fact often forgotten – directed by the artificial intelligence Skynet.

In the sequel, the humans embrace the Skynet technology, reprogramming a Terminator and sending it back in time to protect a young John Connor. From that film onwards, the Terminator consistently fights for humanity, keeping the various John Connors from harm and battling Skynet. Withdrawing from the world of fiction and coming back to reality, this is not a million miles away from how AI is being used in IT security – without the guns and Guns N’ Roses soundtrack, of course.

While attackers have leveraged AI for such things as automated bot attacks, the good guys have taken the technology to create a new type of defence – thereby levelling the cyber playing field and ushering in a new ‘machine age.’

However, before delving into the AI-Security dynamic, let’s dispense with the fiction of Terminator. Offering a description of AI in cyber security, Oliver Tavakoli, CTO at Vectra Networks, told CBR:


“Most leading edge cyber security solutions would more accurately be described as employing ‘data science’ and ‘machine learning’ than ‘AI’. There is no concise technical definition of AI, though pop culture is replete with examples such as HAL 9000, Skynet, WOPR, etc. No AI used in the context of cyber security attempts the level of general intelligence shown in movies. Instead, machine learning is applied to a more constrained series of problems and when it looks advanced enough, people are apt to refer to it as AI.”

No matter how you refer to it – artificial intelligence or machine learning – the technology cuts to the core of two major problems facing the IT security industry.

The first is data: IT security pros face an information overload, with vast amounts of data and a very low signal-to-noise ratio. AI can analyse huge amounts of complex data with a speed and accuracy far beyond the capabilities of humans. Balabit CEO Zoltán Györko explained the valuable insights that AI offers, telling CBR:

“Using artificial intelligence or machine learning can help with the information/data overload problem. Instead of presenting security analysts with terabytes of raw data we can present them with easy-to-understand views such as behavioural profiles or virtual "video recordings" of user sessions or a prioritised view of all unusual events. A machine can really efficiently dig through tons of raw data and produce real insight from it thereby freeing up security teams to focus on what's really important for them.”
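As an illustration of the triage Györko describes, raw events can be scored by how rare they are for a given user, so analysts see the most unusual activity first. This is a minimal sketch, not any vendor’s actual implementation; the users and actions are hypothetical.

```python
from collections import Counter

def prioritise_events(history, new_events):
    """Rank new events so the rarest (most unusual) come first.

    history: list of (user, action) tuples seen previously
    new_events: list of (user, action) tuples to triage
    """
    seen = Counter(history)

    # Rarity score: events never seen before for that user score highest.
    def rarity(event):
        return 1.0 / (1 + seen[event])

    return sorted(new_events, key=rarity, reverse=True)

# Hypothetical history: alice routinely logs in and reads reports.
history = [("alice", "login"), ("alice", "login"), ("alice", "read_report")]
triaged = prioritise_events(history, [("alice", "login"), ("alice", "delete_db")])
# The never-before-seen action surfaces at the top of the analyst's queue.
```

Real products build far richer behavioural profiles than a frequency count, but the principle – let the machine rank the noise so humans see the signal – is the same.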

This fast, accurate processing of data also affords defenders another weapon – the ability to find behavioural patterns. This cuts to the second major issue facing security professionals: attackers are constantly evolving and keeping one step ahead of defenders. They are forever swapping and changing attack methods, finding new flaws and manipulating new victims – all of which leaves defenders playing catch-up. However, this could change with AI, as Wandera CEO Eldar Tuvey told CBR.

“One of the key advantages of AI solutions is the ability to establish behavioural patterns, otherwise known as "profiles", from largely unlabelled and unstructured data.” 

“These patterns provide additional insights to our security experts, and are also utilised as additional inputs to further machine learning processes. For example, clustering algorithms can be deployed to help identifying peer groups of vulnerable systems or users, or finding correlations between malicious websites and applications.”
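To make the clustering idea concrete, here is a deliberately tiny 1-D k-means – a sketch, not Wandera’s method – grouping devices into peer groups by a single behavioural metric (hypothetical daily data usage):

```python
def kmeans_1d(values, k, iters=20):
    """Minimal 1-D k-means: group devices by a single behavioural metric."""
    # Initialise centroids spread evenly across the observed range.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute centroids as cluster means (keep old centroid if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical megabytes transferred per device per day: two peer groups emerge.
usage = [5, 6, 7, 5, 480, 510, 495]
centroids, groups = kmeans_1d(usage, k=2)
```

A device that suddenly drifts away from its peer group’s centroid is exactly the kind of correlation the clustering output feeds into further machine learning processes.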

By establishing profiles and utilising behavioural analytics, intelligent anomaly detection can be performed to identify potential exploits or vulnerabilities. As a company which relies heavily on behavioural analytics, Wandera’s Tuvey told CBR how machine learning allows the triggering of tailored mechanisms and the application of very specific security policies. Giving an example, Tuvey explained how a specific policy may be applied to “a certain app on a given user's device, at a particular local time frame and geo-location. Such a policy may be completely irrelevant to another user even in the same peer group.”
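A context-specific policy of the kind Tuvey describes can be sketched as a simple predicate over an event’s user, app, local time and location. The field names and values below are illustrative assumptions, not Wandera’s API:

```python
from datetime import time

# Hypothetical tailored policy: applies only to one app, on one user's
# device, inside a local time window and an allowed set of countries.
POLICY = {
    "user": "u-1042",
    "app": "expenses",
    "window": (time(9, 0), time(18, 0)),   # local business hours
    "countries": {"GB", "IE"},             # allowed geo-locations
}

def policy_applies(event, policy):
    """Return True if this event falls under the tailored policy."""
    start, end = policy["window"]
    return (event["user"] == policy["user"]
            and event["app"] == policy["app"]
            and start <= event["local_time"] <= end
            and event["country"] in policy["countries"])

event = {"user": "u-1042", "app": "expenses",
         "local_time": time(11, 30), "country": "GB"}
```

The same event from another user, or from outside the window or geo-fence, simply falls outside the policy – which is why a rule can be highly relevant to one device and meaningless to its peers.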

This identification and analysis of behaviour could become the authentication of the future, with Balabit’s Györko telling CBR that behavioural analytics will replace one-off authentication methods such as passwords and form a new kind of continuous authentication.

Data is at the core of AI, but it is a double-edged sword. While it affords defenders increased intelligence and fast processing, it can also be leveraged by attackers. Attackers will identify that the key to AI is data and will look to compromise and manipulate that data to their own advantage. Speaking about this threat, Darktrace’s Dave Palmer said:

“AI thrives on the data it learns from. We should anticipate attacks on underlying data that are aimed at subverting the decisions that machines make. An example of this could be falsifying market information to cause incorrect actions by investment (by AI) in financial institutions, or subverting geo-physical data to cause rival Oil & Gas companies to bid for rights and drill in the wrong locations.”

The need to secure the data that feeds AI is an important one, especially where human life could be put at risk. Although this lends itself well to the dramatic, the reality is that AI is being used in areas such as healthcare and transportation, all of which carry the potential for loss of life if data is compromised.

“If the data an AI machine receives is not secure, or is tampered with in some way, then the end results it generates will be incorrect,” Jason Hart, CTO Data Protection at Gemalto, told CBR.

“The implication could be huge, especially where people’s safety will be at risk such as in medical, pharmaceutical and infrastructure technologies.”  

This threat to the data which underpins AI is further amplified when taking into account user privacy – a debatable area which could stretch this article to a War and Peace-length novel. Robert McFarlane, head of labs at digital agency Head, highlighted these issues to CBR, saying:

“Too much data is a potential threat. If biodata and the facial recognition records of individuals suddenly become stored on a Google server, does that spell the end for anonymity and privacy? We’d have to weigh up the balance between giving away personal information and receiving more frictionless convenient services. You might be able to unlock your phone faster, or the AI that manages your hair appointment might be able to read your mood through blood pressure and facial expressions so it knows to reschedule for you, but is that worth it?”

It is an interesting point which McFarlane makes: for AI to be effective, huge amounts of personal data need to be relinquished to the machines. Behavioural analytics, which will be of so much use to security professionals, relies on the gathering and storing of human behaviour – something as unique as your fingerprint. While debate will rage on this topic for many years to come, the privacy issue is intrinsically linked to two further issues of AI: trust and dependency.

The more it matures, the more AI will be entrusted to make crucial decisions and assessments. Security professionals will have to trust the algorithms, trust the machine learning and trust the data of the machines. This could be a tricky dynamic for the security industry – an industry entrusted to protect business infrastructure and mission-critical applications.

This trust issue then evolves into one concerning dependency – how much do we entrust to the machines? How much human oversight do we give AI? Balabit’s Györko believes that security pros are not quite ready to hand over full control to AI, although the CEO did press the importance of humans and machines working together.

“In our experience, security professionals are not yet ready to blindly trust algorithms to make serious decisions, no matter how sophisticated those algorithms are. This attitude might change in the future (after all, we will slowly but surely begin to trust machines to drive our cars) but that will be a slow transition and will require lots of human interaction.”

The scaremongers warning of the death of the human workforce are wide of the mark; security professionals must reap the benefits of AI and build a working relationship with the machines.

AI takes care of the security grunt work, freeing up the time of security pros to focus on new innovations and developments. The way AI works with data could herald a new era for IT security, one where behaviour is a form of continuous authentication and where psychologists join the ranks of security professionals.

Yes, attackers will try to subvert the data that feeds AI and will probably develop their own malicious innovations to thwart the intelligent technology – yet AI is continually learning, giving defenders more of a level playing field when it comes to the cyber battlefield.

It is an exciting time to be in IT security. Who knows, in 20 years the fictional Terminator may become a reality, but for now security professionals need to see AI for what it is – a valuable resource, a co-worker. Humans and AI can become the ultimate team – just like the Terminator and John Connor.

“For the foreseeable future, humans and AI must work together to thwart most attackers. In movie terms, you need a Robocop rather than the Terminator – or you need a Terminator with human friends. Machine learning can process large data sets and do what humans would never have the patience to do – an AI doesn’t tire,” Vectra Networks’ Tavakoli told CBR.

“Humans know all the messy context (e.g. knowing that a particular deal requires an unusually large size data transfer) surrounding the business. The combination of these two skills (tireless data process + human-supplied context) yields the most effective defence against modern cyber attacks.”
This article is from the CBROnline archive: some formatting and images may not be present.