View all newsletters
Receive our newsletter - data, insights and analysis delivered to you
  1. Comment
May 15, 2024

Smart AI applications urgently require smarter AI security solutions

While generative AI applications will prove transformative for most businesses, new vulnerabilities will emerge - and AI security must keep pace.

By Martin Borrett

Generative AI offers significant potential to revolutionise both business operations and daily life. However, this potential is heavily dependent on trust. Any compromise in the trustworthiness of AI could have far-reaching consequences, including stifling investment, hindering adoption, and eroding our reliance on these systems.

Just as the industry has historically prioritised securing servers, networks, and applications, AI now emerges as the next major platform necessitating robust security measures. Given its impending integration into business frameworks, it is vital to incorporate security measures from the outset. By integrating security into AI models and applications early in the development process we can ensure that trust remains intact, facilitating smoother transitions from proof-of-concept to production.

Driving this change means looking to new data to understand how the current C-Suite is looking to secure generative AI and developing a plan of action to help navigate and prioritise these AI security initiatives.

A malevolent AI hovering over industry, used to illustrate a comment piece about AI security.
Generative AI applications are being downloaded into companies every passing day. With that, however, will come new software vulnerabilities – flaws that only a holistic AI security strategy can keep pace with. (Image by Shutterstock)

C-Suite perspectives on generative AI

As most AI projects are driven by business and operation teams, security leaders participate in these conversations from a risk-driven perspective, with a strong understanding of business priorities.

In our latest research, we delved into the perspectives and priorities of global C-Suite executives regarding the risks and adoption of generative AI. The findings reveal a concerning gap between security concerns and the urge to innovate rapidly. While a significant 82% of respondents recognise the importance of secure and trustworthy AI for business success, a surprising 69% still prioritise innovation over security.

In the UK, while our CEOs similarly look at productivity as a key driver, they are increasingly looking toward operational, technology, and data leaders as strategic decision-makers. This was reflected in our 2023 CEO study that highlighted that the influence of technology leaders on decision-making is growing – 38% of CEOs point to CIOs, followed by Chief Technology Officers (26%) as making the most crucial decisions in their organisation.

Driving change by navigating and prioritising AI security

To successfully navigate these challenges, businesses need a framework for securing generative AI. That begins with the realisation that AI does pose a heightened security risk, insofar as models centralise and are trained upon highly sensitive data. As such, that data needs to be secured against the threat of theft and manipulation.

Content from our partners
Rethinking cloud: challenging assumptions, learning lessons
DTX Manchester welcomes leading tech talent from across the region and beyond
The hidden complexities of deploying AI in your business

Security around the development of new models also needs to be tight. As new AI applications are devised and their training methods evolve, companies must be alert to the possibilities of new vulnerabilities being introduced into their wider system architectures. Firms must therefore be on the constant lookout for flaws, in addition to hardening their integrations and religiously enforcing policies around access to sensitive systems. Attackers, too, will seek to use model inferencing to hijack or manipulate the behaviour of AI models. Companies must therefore secure the usage of AI models by detecting data or prompt leakage, and alerting on evasion, poisoning, extraction, or inference attacks.

We must also remember that one of the first lines of defence is having a secured infrastructure. Firms of all stripes must harden network security, access control, data encryption, and intrusion detection and prevention around AI security environments. Organisations should also consider investing in new security defences specifically designed to protect AI from hacking or hostile manipulation.

With new regulations and public scrutiny on responsible AI on the horizon, robust AI governance will also play a greater role in putting operational guardrails to effectively manage a company’s AI security strategy. After all, a model that operationally strays from what it was designed to do can introduce the same level of risk as an adversary that’s compromised a business’s infrastructure.

A broken firewall. Used to illustrate a comment piece about AI security.
Companies of all types will need robust AI security strategies in place to guard against the introduction of vulnerabilities by new generative AI services. (Image by Shutterstock)

Protecting now for the future

Above all, the transformative potential of generative AI hinges on trust, making robust security measures imperative. Any compromise in AI security could impede investment, and adoption, and erode reliance on these systems. Just as securing servers and networks has been prioritised, AI has emerged as the next major platform requiring stringent security. Integrating security measures early in AI development is crucial for maintaining trust and facilitating smooth transitions to production.

Understanding the perspectives and priorities of C-Suite executives regarding AI security is essential, especially considering the gap between security concerns and the urge to innovate rapidly. To address these challenges, a framework for securing generative AI must focus on securing data, model development, and usage. Additionally, safeguarding the infrastructure and implementing robust AI governance is vital in mitigating risks and ensuring AI operates as intended.

Read more: Can compliance keep up with the convergence of cybercrime and crypto?

Websites in our network
Select and enter your corporate email address Tech Monitor's research, insight and analysis examines the frontiers of digital transformation to help tech leaders navigate the future. Our Changelog newsletter delivers our best work to your inbox every week.
  • CIO
  • CTO
  • CISO
  • CSO
  • CFO
  • CDO
  • CEO
  • Architect Founder
  • MD
  • Director
  • Manager
  • Other
Visit our privacy policy for more information about our services, how Progressive Media Investments may use, process and share your personal data, including information on your rights in respect of your personal data and how you can unsubscribe from future marketing communications. Our services are intended for corporate subscribers and you warrant that the email address submitted is your corporate email address.