There is a growing conversation within cybersecurity about the potential for artificial intelligence or machine learning to stand guard against attack vectors such as malware, independently identifying threats and then deciding to dispatch suspicious programmes. It is an idea disputed by the CTO of Bromium.
The case for this approach is often built on the claim that there are not enough humans to handle the volume of information faced by modern systems. While this is absolutely true, Simon Crosby, CTO at Bromium, argues that some of the things vendors are saying are a fallacy.
Mr Crosby points out a misuse of the term AI and explains how we should use and view the technology in question. He said:
“In general AI is not AI; let’s get away from AI, let’s call it machine learning, can we do that? AI in general is, I think, an advance on everything that we have in every domain of using computers today. Artificially intelligent systems are systems that could think about themselves and improve, and that is definitely not where we are in cybersecurity.”
While Mr Crosby believes machine learning is wrongly being called AI in certain instances, he does not deny the great potential it holds to help us deal with cybersecurity, especially in light of the skills gap combined with the humanly insurmountable volumes of data that analysts currently face.
Mr Crosby said: “Machine learning is an extraordinarily powerful tool that can help humans do this job, and when used in this way can be extremely effective because infrastructure nowadays is becoming more and more instrumented. So we are getting tons and tons of data from every piece of computer infrastructure, and so machine learning can be used to find faults and attacks and to find lateral movements of attackers within the enterprise.”
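As an illustration of the kind of analysis Crosby describes, here is a minimal sketch that uses scikit-learn’s IsolationForest to flag anomalous host telemetry for an analyst to review. The feature names, the simulated data and the contamination setting are all assumptions invented for this example, not a description of any vendor’s product.

```python
# Illustrative sketch: flagging anomalous host telemetry with an
# unsupervised model. Features and data are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated per-host telemetry: [logins/hour, MB sent/hour, processes spawned]
normal = rng.normal(loc=[5, 40, 20], scale=[2, 10, 5], size=(1000, 3))
# A handful of hosts behaving oddly, e.g. possible lateral movement
odd = rng.normal(loc=[40, 400, 90], scale=[5, 50, 10], size=(5, 3))
telemetry = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(telemetry)

# predict() returns -1 for points the model considers anomalous
flags = model.predict(telemetry)
print(f"hosts flagged for analyst review: {np.sum(flags == -1)}")
```

Note that the model only surfaces candidates; as Crosby goes on to argue, a human analyst still has to judge whether each flagged host is genuinely compromised.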
In some instances, machine learning or AI deployments are presented as though they were infallible. The Bromium CTO deals with this concisely:
“The key point here is that machine learning is not perfect. What you are doing is using it to figure out and classify whether any particular point of data is normal or abnormal, and just like humans, it can make mistakes, or it can be in the grey.”
Detailing the ways in which machine learning can be beaten, Mr Crosby described how a sophisticated attack can best the software you are trusting to protect your systems, or how that software can simply mistake normal traffic for an attack, resulting in what he said “is called a false alert or a false positive.”
“The mathematics behind this stuff goes back to Turing, a famous British mathematician, and in fact the father of all computer science, from about 90 years ago. There is a proof by Turing that basically says you cannot eliminate both false positives and false negatives; you cannot build a perfect detector,” Mr Crosby said.
In Crosby’s view, any idea that machine learning or AI could fully solve the problem is completely ridiculous: there will always be false positives and false negatives. Accepting that, it remains a powerful tool that can help people get through tons of data and find things the algorithm believes are good or bad, but it can make mistakes.
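A toy example makes the trade-off concrete: when the detector’s score distributions for benign and malicious samples overlap, no threshold eliminates both error types; raising it reduces false positives at the cost of false negatives, and vice versa. The distributions below are invented purely for illustration.

```python
# Toy demonstration: overlapping score distributions mean any detection
# threshold trades false positives against false negatives.
import numpy as np

rng = np.random.default_rng(1)
benign = rng.normal(loc=0.3, scale=0.15, size=10_000)     # scores for benign samples
malicious = rng.normal(loc=0.7, scale=0.15, size=10_000)  # scores for malicious samples

for threshold in (0.4, 0.5, 0.6):
    false_positive_rate = np.mean(benign >= threshold)    # benign flagged as bad
    false_negative_rate = np.mean(malicious < threshold)  # malware waved through
    print(f"threshold {threshold:.1f}: "
          f"FPR={false_positive_rate:.1%}, FNR={false_negative_rate:.1%}")
```

Running the sweep shows both rates can never be zero at once for overlapping distributions, which is exactly the imperfect-detector point Crosby attributes to Turing.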
Crosby used the hype surrounding self-driving cars, and the perfect system naturally assumed to come with them, as a parallel to the narratives created by some vendors. He dispels these myths by reminding us that machine learning can make mistakes, just as an autonomous car can crash.
Focusing on this area, Crosby said: “in the specific area of end-point protection, the notion that you could use machine learning to detect bad stuff arriving at a computer like malware is completely daft.”
He went on to explain, saying: “Let me tell you why, ultimately malware is a programme, and Turing’s proof basically says there is no way for one programme to ever decide if another programme is good or bad, it’s called the halting problem, and it’s a famous result in computer science. So the idea that it could look at some blob of stuff that I have downloaded from the web or some documents or attachment, and decide whether it is good or bad is purely fictitious, it is just marketing nonsense.”
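The undecidability result Crosby is referring to can be sketched as the classic diagonal argument: if a perfect static analyser existed, you could write a program that consults it about its own source and then does the opposite of its verdict. Everything in the sketch below is hypothetical by construction; that is the point of the proof.

```python
# Sketch of the diagonal argument behind the result Crosby cites.
# Assume, for contradiction, a perfect detector exists:

def is_malicious(program_source: str) -> bool:
    """Hypothetical perfect oracle: True iff the program does harm when run."""
    ...  # cannot be implemented; that is what the proof rules out

# Construct a program that asks the oracle about its own source and then
# behaves contrary to whatever the oracle predicts:
CONTRARIAN_SOURCE = """
if is_malicious(CONTRARIAN_SOURCE):
    exit()           # verdict "malicious" -> behave harmlessly
else:
    wipe_the_disk()  # verdict "benign" -> do harm
"""

# If the oracle answers "malicious", the program is harmless; if it answers
# "benign", the program does harm. Either answer is wrong, so no such
# oracle can exist. Any real detector must therefore make mistakes.
```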
Mr Crosby explained that using machine learning or a programme for this purpose would mean blocking and deleting other programmes before finding out whether they were truly malicious, and without learning anything about them.
“The problem is stuff has to start to execute before you see whether it is good or bad, and the challenge then is whether you can stop it in time… it does not have access to any enterprise networks or high value content or anything like that, but you can watch it safely, and then you can learn, and rapidly describe whether it looks good or bad, so rather than looking for bad, you are looking for deviations from good.”
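A minimal sketch of the “deviations from good” approach Crosby describes might look like the following: record the behaviours a suspect program exhibits while it runs in isolation, then flag anything outside a baseline learned from known-good executions. The event names and baseline are assumptions made for illustration, not Bromium’s actual mechanism.

```python
# Illustrative sketch of "looking for deviations from good": compare
# behaviours observed while a program runs in isolation against a baseline
# of known-good behaviour. Event names are invented for the example.

# Baseline learned from watching known-good document viewers run
KNOWN_GOOD_BEHAVIOURS = {
    "open_file", "render_page", "read_font_cache", "write_temp_file",
}

def review_execution(observed_events: list[str]) -> list[str]:
    """Return behaviours that deviate from the known-good baseline."""
    return [event for event in observed_events
            if event not in KNOWN_GOOD_BEHAVIOURS]

# Events captured while a suspicious attachment ran in an isolated sandbox
observed = ["open_file", "render_page", "spawn_shell", "connect_remote_host"]

deviations = review_execution(observed)
if deviations:
    print(f"flag for analyst: unexpected behaviours {deviations}")
```

Because the program executes with no access to enterprise networks or high-value content, the deviations can be observed safely before anything is trusted, which is the inversion Crosby describes: looking for departures from good rather than trying to recognise bad in advance.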
The Bromium CTO made it clear that wherever machine learning can be used to automate a manual, tedious procedure that suffocates analysts, it is a powerful and important tool. It is a familiar sentiment, one CBR has previously heard from Peter Woollacott, the CEO of security firm Huntsman.
In summary, Mr Crosby said: “First off, the vendors that claim that they use AI for anti-virus are basically lying. Second, if you let things run for a bit, you can start to learn a whole bunch, which could really help, and this is where my company’s technology actually can help a lot in the machine learning domain.”