View all newsletters
Receive our newsletter - data, insights and analysis delivered to you
  1. Technology
  2. Cybersecurity
August 30, 2023

Treat large language AI models like a ‘beta’ product, NCSC warns tech leaders

Businesses should beware the unintended consequences of incorporating AI into their systems, the cybersecurity watchdog says.

By Matthew Gooding

The UK’s National Cyber Security Centre (NCSC) has warned tech leaders of the security risks of building systems that incorporate large language AI models (LLMs). The watchdog says a lack of knowledge of how the systems behave means that deploying them with customer data could have unpredictable consequences.

The NCSC says tech leaders should treat large language models like a “beta product”. (Photo by T Schneider/Shutterstock)

In advice published today, the NCSC outlines some of the security risks associated with AI technology and says businesses must exercise caution when deciding how to deploy the models.

Over recent months, business interest in using AI has boomed, driven by the popularity of OpenAI’s ChatGPT chatbot. This has led to many of the enterprise tech vendors infusing their products with AI, and companies in all sectors looking at ways to integrate automated features to boost productivity and offer better customer service.

ChatGPT is built on OpenAI’s GPT-4 LLM, which is capable of analysing text and images and producing detailed responses, and many of tech’s biggest names, including Google and Meta, have been developing their own LLMs.

NCSC warns on the risks of using large language AI models

In its guidance, the NCSC says using LLMs provided by vendors like OpenAI can be fraught with difficulty.

“While there are several LLM APIs already on the market, you could say our understanding of LLMs is still ‘in beta’, albeit with a lot of ongoing global research helping to fill in the gaps,” the advice says.

It highlights the risk of relying on API-based LLMs, where the end user cannot see how the model makes decisions. “With models being constantly updated in an uncertain market, a start-up offering a service today might not exist in two years’ time,” it says.

Content from our partners
DTX Manchester welcomes leading tech talent from across the region and beyond
The hidden complexities of deploying AI in your business
When it comes to AI, remember not every problem is a nail

“So if you’re an organisation building services that use LLM APIs, you need to account for the fact that models might change behind the API you’re using – breaking existing prompts – or that a key part of your integrations might cease to exist.”

The advice, penned by the NCSC’s director for platforms research, referred to only as ‘Dave C’, says LLMs have displayed signs of artificial general intelligence (AGI), or being able to think for themselves, marking them out from more basic machine learning (ML) systems which are simply able to classify information and spot patterns.

“Creators of LLMs and academia are still trying to understand exactly how this happens and it has been commented that it’s more accurate to say that we ‘grew’ LLMs rather than ‘created’ them,” the advice says. “It may be indeed more useful to think of LLMs as a third entity that we don’t yet fully understand, rather than trying to apply our understanding of ML or AGI.”

How tech leaders should approach the use of LLMs

The advisory also details the type of attacks that can be targeted at LLMs, including so-called “prompt injection” attacks, where requests made to an LLM-based system can be subverted by criminals.

When it comes to building systems incorporating LLMs, the NCSC says: “One of the most important approaches is ensuring your organisation is architecting the system and data flows so that you are happy with the ‘worst case scenario’ of whatever the LLM-powered application is permitted to do.” Tech leaders must also consider “the issue that more vulnerabilities or weaknesses will be discovered in the technologies that we haven’t foreseen yet,” the cybersecurity watchdog says.

Describing the emergence of LLMs as “a very exciting time in technology”, the NCSC adds: “This new idea has landed – almost completely unexpectedly – and a lot of people and organisations, including the NCSC, want to explore and benefit from it.

“However, organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta. They might not let that product be involved in making transactions on the customer’s behalf, and hopefully wouldn’t fully trust it yet. Similar caution should apply to LLMs.”

Read more: FBI takes down Qakbot botnet

Websites in our network
Select and enter your corporate email address Tech Monitor's research, insight and analysis examines the frontiers of digital transformation to help tech leaders navigate the future. Our Changelog newsletter delivers our best work to your inbox every week.
  • CIO
  • CTO
  • CISO
  • CSO
  • CFO
  • CDO
  • CEO
  • Architect Founder
  • MD
  • Director
  • Manager
  • Other
Visit our privacy policy for more information about our services, how Progressive Media Investments may use, process and share your personal data, including information on your rights in respect of your personal data and how you can unsubscribe from future marketing communications. Our services are intended for corporate subscribers and you warrant that the email address submitted is your corporate email address.
THANK YOU