Microsoft recently announced that it is bringing artificial intelligence (AI) and machine learning tools to Windows 10, in a major new push to democratise the technology. But while its efforts could herald untold productivity and efficiency benefits for users, there are risks. In an era in which desktop features have been exploited relentlessly by attackers, AI represents possibly the most powerful, and dangerous, tool yet. Over 90% of IT professionals are already worried that it will be used in future cyber-attacks.
As an industry, we need to evolve our approaches to mitigate risk in this emerging area. But at the very least, consumers should be made aware that the brave new world of AI is also one fraught with potential danger.
A powerful tool
AI is set to become one of the most disruptive technologies of the 21st century. It offers organisations the opportunity to boost productivity, get closer to their customers, and differentiate themselves through innovative new services. Microsoft has already been using it to good effect to improve Office 365, the Windows 10 photo app and the Windows Hello facial recognition authentication feature.
Microsoft will expand these efforts much further by rolling out the Windows ML machine learning platform to developers, allowing them to create powerful new apps running on Windows 10. Reports suggest it could speed up real-time analysis of local data and improve background tasks to enhance the entire user experience. There’s just one problem: it also offers the black hats new opportunities.
Exploited by hackers
History is a great teacher here. It tells us that powerful features on the desktop are always abused. Just look at how hackers are currently targeting Outlook Forms, Outlook Rules and the DDE protocol to launch successful attacks. These tools all brought with them the promise of improvements to the end user experience, but have been ruthlessly exploited to make consumers less secure.
We live in the age of the feature exploit: when something as powerful as AI comes along, you can bet that the bad guys are working on a way to exploit it in attacks.
One way attackers could subvert AI for malicious ends is to use it to quickly find the most valuable data in a targeted organisation. This is often a challenge for hackers once they have infiltrated corporate networks. But by using topic modelling they can discern the content or themes of the documents stored on that network, as sketched below. It’s a potentially very powerful way to reach the content that matters fast, before the organisation has had a chance to detect and respond to the threat. Having this kind of power already inside the network is a troubling thought, especially given the size of the typical endpoint attack surface.
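To make the idea concrete, here is a minimal sketch of what such topic modelling might look like, using scikit-learn's latent Dirichlet allocation (LDA). The harvested_docs directory and the document set are hypothetical, purely for illustration; this is a sketch of the general technique, not any known attack tool.

```python
# Minimal sketch: topic modelling over a set of harvested documents with LDA.
# Assumes scikit-learn is installed; the "harvested_docs" path is hypothetical.
from pathlib import Path

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Load every text file found under the (hypothetical) harvested share.
docs = [p.read_text(errors="ignore") for p in Path("harvested_docs").glob("*.txt")]

# Convert documents to a bag-of-words matrix, dropping common stop words.
vectorizer = CountVectorizer(stop_words="english", max_features=5000)
doc_term = vectorizer.fit_transform(docs)

# Fit a small LDA model to discover latent themes across the document set.
lda = LatentDirichletAllocation(n_components=10, random_state=0)
lda.fit(doc_term)

# Print the top words per topic; a keyword filter (or a human) can then
# pick out topics that look like payroll, M&A, credentials and so on.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top_words = [terms[i] for i in topic.argsort()[-8:][::-1]]
    print(f"Topic {idx}: {', '.join(top_words)}")
```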
Hackers could also use reinforcement learning to help evade traditional security controls. This is a sub-discipline of machine learning in which an algorithm, by trying actions and adapting its behaviour through repeated failures, gradually learns which actions achieve its goal. This was powerfully demonstrated when AI algorithms with no prior knowledge of the rules of a notoriously difficult Atari game learned how to succeed and even beat the highest scores achieved by human players.
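The core trial-and-error loop behind this is simple. Below is a minimal sketch of tabular Q-learning on a toy five-state problem; the environment is entirely hypothetical and merely stands in for whatever feedback signal an attacker might observe, such as whether a probe was blocked or allowed.

```python
# Minimal sketch of tabular Q-learning: an agent with no prior knowledge
# learns which actions succeed purely from trial, error and reward.
# The toy environment below is hypothetical, purely for illustration.
import random

N_STATES, N_ACTIONS = 5, 2           # tiny toy problem
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: action 1 advances toward a goal state, action 0
    resets to the start. Reaching the final state yields the only reward."""
    nxt = state + 1 if action == 1 else 0
    if nxt >= N_STATES - 1:
        return 0, 1.0                # goal reached: reset episode, reward 1
    return nxt, 0.0

state = 0
for _ in range(5000):
    # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: q[state][a])
    nxt, reward = step(state, action)
    # Standard Q-learning update: nudge the estimate toward observed return.
    q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
    state = nxt

print(q)  # after training, action 1 dominates in every state
```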
Next-gen phishing
Phishing and spear-phishing present another area of concern. Tricking users into clicking on malicious links or opening malware-laden attachments has become one of the most popular ways for attackers to infiltrate corporate networks. Verizon claims that 43% of data breaches now involve phishing. Now imagine what might happen if the black hats are able to use AI to make their phishing attacks even more successful.
How would they do this? By running algorithms over harvested messages to learn how targeted users construct their e-mails and text documents. They could use this intelligence to craft highly convincing phishing emails that appear to come from that person, as sketched below. Going further, hackers could even use neural networks, fed by monitoring users via their desktop microphones, to learn to speak like their victims. In a world increasingly moving towards voice recognition and biometric authentication, cyber-criminals and nation states will be queueing up to do their worst.
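As a toy illustration of the style-mimicry idea, here is a minimal word-level Markov chain: trained on a victim's harvested emails, it generates text with a similar vocabulary and phrasing. The corpus below is a made-up stand-in; real attackers would presumably use far larger corpora and far more capable language models.

```python
# Minimal sketch of style mimicry via a word-level Markov chain.
# The training corpus is a hypothetical stand-in for harvested emails.
import random
from collections import defaultdict

corpus = """Hi team, quick update on the Q3 numbers. Please review the
attached figures and send feedback by Friday. Thanks, as always."""

words = corpus.split()

# Build bigram transitions: record which words tend to follow which.
transitions = defaultdict(list)
for cur, nxt in zip(words, words[1:]):
    transitions[cur].append(nxt)

def generate(seed, length=20):
    """Walk the chain from a seed word, sampling each following word."""
    out = [seed]
    for _ in range(length - 1):
        followers = transitions.get(out[-1])
        if not followers:            # dead end: restart from a random word
            followers = words
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("Please"))
```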
No one knows how AI tools will ultimately transform business and shape our society. But the IT industry should know from bitter past experience that new technologies are always abused. Microsoft and others would do well to temper their relentless marketing efforts by also educating users about cybersecurity risk. As an industry, we should strive to do better than repeat the mistakes of the past.