October 18, 2019

The Future of AI & Cybersecurity

"...multiple malicious AIs competing for digital resources"

By CBR Staff Writer

Debates over the benefits and risks of AI are now a day-to-day occurrence in the media, writes Professor Ben Azvine, Head of Security Research, BT. Many of these discussions focus on the potential negatives – from existential risks, to threats to employment through the automation of jobs, to the use of AI to create ‘deep fake’ videos. On the other hand, we’re already benefiting from the positive effects of AI through automated assistants, while future benefits such as self-driving cars are now just over the horizon.

Professor Ben Azvine, Head of Security Research, BT

AI will have a transformative effect across almost all technologies and industries, and cyber security is no exception.

In preparation for this, my team are constantly researching the short- and long-term developments that AI could provide for both cyber attack and defence – and how we can prepare for them.

AI & Cybersecurity: The Growing Threats

AI is already capable of enhancing malware so that it can evolve and adapt to counter security defences. In tandem, machine learning is being used to analyse vulnerabilities in target networks. Soon, AI will even have the ability to fund its own attacks via crypto-currency platforms, automatically channelling profits with no human intervention.

At a nation-state level, military grade AI will also be a major threat to critical infrastructure and available as an option alongside more traditional methods of cyber-warfare. Potentially, these new AI attack vectors could become commoditised and be sold as a service. Meanwhile, AI’s increasing capabilities will enable the creation of fake online personas – which may be almost indiscernible from humans when interacting with them – to perpetrate widespread, automated social engineering and fraud.

Even further over the horizon, AI has the potential to become capable of its own strategic operations, including planning and orchestrating its own attacks. Far-fetched though it may seem, we may see a future in which continuous AI vs AI cyber-battles play out on a massive scale – not just between offensive and defensive systems, but between multiple malicious AIs competing for digital resources.

How Can We Respond?

Of course, AI is also already a significant tool in cyber defence. In fact, as we evolve to use more cloud-based services and virtualised networks, it’s becoming absolutely essential to combating cyber-attacks, with traditional prevention strategies becoming increasingly obsolete.

Security vendors have made extensive use of machine learning algorithms for many years. Early applications in the 1990s included Bayesian logic to filter spam email messages and large neural networks to classify spam. Today, further developments have combined machine learning algorithms with advanced data visualisation to create smart security interfaces. And by using AI to process massive amounts of information in real time, response times are hugely reduced, while the same systems can analyse trends and patterns to predict cyber-attacks before they happen.
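
As a minimal illustration of the Bayesian spam filtering described above, the sketch below trains a naive Bayes classifier on a handful of invented messages. The training data, test messages and the choice of scikit-learn are assumptions made purely for the example, not a description of any vendor's product.

```python
# Toy Bayesian spam filter: invented messages, scikit-learn naive Bayes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Invented training messages: 1 = spam, 0 = legitimate.
messages = [
    "win a free prize now", "cheap loans click here",
    "meeting moved to 3pm", "please review the attached report",
]
labels = [1, 1, 0, 0]

vectoriser = CountVectorizer()
X = vectoriser.fit_transform(messages)

model = MultinomialNB()
model.fit(X, labels)

test = vectoriser.transform(["claim your free prize", "report for the 3pm meeting"])
print(model.predict(test))          # expected: spam, then legitimate
print(model.predict_proba(test))    # class probabilities per message
```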

AI is also now increasingly being used in the development of security ‘immune systems’. For example, at BT we’re examining how models of biological systems can describe the way viruses spread through populations. By applying the learnings from these models to our networks, we can train AI systems to test different defence strategies to minimise or stop the spread of malware during a cyber-attack, containing the infection and eradicating its causes. And this is a reciprocal process: analysing the AI’s response allows human analysts to improve their own understanding of, and preparedness against, cyber threats.
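
To make the epidemiological idea concrete, here is a toy discrete-time SIR (susceptible-infected-recovered) simulation of malware spreading through a population of hosts, compared with a run in which a containment measure lowers the infection rate. The parameters and the containment step are invented for illustration and do not describe BT's actual models.

```python
# Toy SIR model of malware spread; all parameters are illustrative only.

def simulate(hosts=10_000, infected=10, beta=0.4, gamma=0.1, steps=60):
    """beta: infection rate per step; gamma: clean-up (recovery) rate per step."""
    s, i, r = hosts - infected, infected, 0
    infected_over_time = []
    for _ in range(steps):
        new_infections = beta * s * i / hosts   # contacts between S and I hosts
        new_recoveries = gamma * i              # hosts patched or cleaned
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_over_time.append(i)
    return infected_over_time

baseline = simulate()
contained = simulate(beta=0.15)   # e.g. after isolating infected segments

print(f"Peak infected hosts, no response:  {max(baseline):.0f}")
print(f"Peak infected hosts, containment:  {max(contained):.0f}")
```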

Going forward, as AI becomes better able to observe successful responses to cyber-attacks, it will also become ‘self-healing’, dynamically replicating the best defence strategies designed by human analysts. This will again allow a greater speed of response, freeing up human experts to take on more complicated investigations.
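
A hypothetical sketch of that ‘self-healing’ idea might look like the following: a selector records how well each human-designed response has performed against each attack type and replays the highest-scoring one. The class, attack types, response names and scores are all invented for illustration.

```python
# Hypothetical sketch: replay the human-designed response that has worked
# best so far for a given attack type ("self-healing" selection).
from collections import defaultdict

class PlaybookSelector:
    def __init__(self):
        # attack type -> response name -> list of observed success scores
        self.outcomes = defaultdict(lambda: defaultdict(list))

    def record(self, attack_type, response, success_score):
        self.outcomes[attack_type][response].append(success_score)

    def best_response(self, attack_type):
        scores = self.outcomes[attack_type]
        if not scores:
            return None  # no history yet: escalate to a human analyst
        return max(scores, key=lambda r: sum(scores[r]) / len(scores[r]))

selector = PlaybookSelector()
selector.record("ransomware", "isolate_segment", 0.9)
selector.record("ransomware", "block_c2_domains", 0.6)
print(selector.best_response("ransomware"))  # isolate_segment
```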

AI & Cybersecurity: Managing the risks

As with most technologies, the capabilities that AI provides are ‘agnostic’ – they can be used for both defence and attack, and their success in both areas depends on the strategies and investments underlying them.

We’ve learned from our own use of AI that access to data sets is an absolutely critical factor – if you don’t have the data, then even the most advanced AI and machine learning technologies quickly lose their accuracy and usability. This may seem self-evident, and not a major issue for large enterprises with numerous data lakes and varied inputs, but access to data will be a huge driver of future success in this field.
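
The point about data volume can be shown with a quick experiment: train the same classifier on progressively smaller slices of a dataset and the accuracy on held-out data generally drops. The synthetic dataset and logistic regression model below are stand-ins chosen purely to illustrate the trend.

```python
# Illustration: the same model typically loses accuracy as training data shrinks.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for n in (50, 200, 1000, len(X_train)):
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(f"{n:>5} training samples -> accuracy {model.score(X_test, y_test):.2f}")
```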

The ability to align human and AI capabilities will also be a massive influence, as finding a way for analysts to process and investigate very high volumes of data is difficult in practice. Advanced interfaces for visualisation and real-time interaction are an essential part of the process – but combining inputs from analysts with the output of machine learning algorithms (known as ‘active learning’) is particularly hard. It requires a deep understanding of how analysts work, plus how algorithms can be reconfigured and retrained in real time, and will have a key impact on how successfully AI is implemented.
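
One common form of active learning is uncertainty sampling, sketched below under simplified assumptions: the classifier asks the analyst (simulated here by the ground-truth labels) to label only the alerts it is least certain about, then retrains on the answers. The data, model and query budget are all illustrative.

```python
# Hedged sketch of active learning via uncertainty sampling on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y_true = make_classification(n_samples=2000, n_features=10, random_state=1)

# Start with a small labelled pool containing examples of both classes.
labelled = [int(i) for i in np.where(y_true == 0)[0][:10]] + \
           [int(i) for i in np.where(y_true == 1)[0][:10]]
unlabelled = [i for i in range(len(X)) if i not in set(labelled)]

model = LogisticRegression(max_iter=1000)
for round_no in range(5):
    model.fit(X[labelled], y_true[labelled])
    # Uncertainty sampling: query the alerts closest to a 50/50 prediction.
    probs = model.predict_proba(X[unlabelled])[:, 1]
    queries = [unlabelled[i] for i in np.argsort(np.abs(probs - 0.5))[:10]]
    labelled.extend(queries)                 # the "analyst" labels these alerts
    unlabelled = [i for i in unlabelled if i not in set(queries)]
    print(f"round {round_no + 1}: {len(labelled)} labelled alerts")
```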

The fundamental vulnerabilities of AI systems are not yet fully understood, which gives malicious actors a unique opportunity to use AI against us. As with the development of antibiotics, formulating defences against AI-driven cyber-threats is likely to be expensive and will need to be carefully considered. Hostile AI will adapt, just as bacteria adapt to antibiotics. The best defence may simply be to research as many scenarios as possible in advance and plan a response to each.

See also: A Tale of Two Honeypots

 
