January 25, 2024 (updated 26 January 2024, 10:43am)

Generative AI banned at over a quarter of firms, while two-thirds have imposed stringent guardrails

A new study from Cisco finds that while staff understand the privacy shortcomings of generative AI, many are still inputting non-public information into ChatGPT-like services.

By Greg Noone

Over a quarter of companies have implemented generative AI bans among their staff, according to a new survey by Cisco. The firm’s annual Data Privacy Benchmark Study of 2,600 privacy and security professionals in 12 countries also revealed that two-thirds of those polled have either imposed guardrails on what information can be entered into LLM-based systems or prohibited the use of specific applications.

“Over two-thirds of respondents indicated they were concerned about the risk of data being shared with competitors or with the public,” wrote Robert Waitman, a director at Cisco’s Privacy Center of Excellence, in a blog post about the survey. “Nonetheless, many of them have entered information that could be problematic, including non-public information about the company (48%).”

Cisco’s latest Data Privacy Benchmark Study revealed that most of its 2,600 respondents were well aware of the data privacy concerns surrounding generative AI and had still inputted sensitive corporate data into such applications. (Photo by Shutterstock)

Respondents aware of data privacy concerns

Unsurprisingly, the survey revealed a strong and growing familiarity with generative AI among respondents, with 55% saying that they were “very familiar” with the technology; in the 2023 poll, 52% had said they were unfamiliar with it. Meanwhile, 79% of those surveyed this year said that they were extracting “significant or very significant value” from generative AI applications.

Even so, the study revealed persistent reservations among respondents about the ability to keep company data safe and secure when deploying such tools. These included worries that AI could endanger their organisation’s legal and intellectual property rights (69%), the leaking of proprietary data to competitors or the general public (68%) or that the outputs produced by the application could be inaccurate (68%). 92% of respondents, meanwhile, agreed that deploying generative AI required “new techniques to manage data and risks.”

Despite these concerns, many respondents freely admitted that they had inputted sensitive company data into generative AI applications. 62%, for example, had entered information about internal processes, while 48% had sought new insight from generative AI applications using non-public company data. 


Generative AI bans becoming more common

It also appears that companies are beginning to respond to these risks by imposing guardrails on the use of generative AI. 27% of respondents said that their organisations had banned the technology altogether, though most companies seem happy to use ChatGPT-like systems provided they are deployed with certain restrictions. According to the survey, these can take the form of data limitations (63%), tool restrictions (61%) or data verification requirements (36%).

These findings echo similar public announcements throughout 2023 by companies in retail, manufacturing and financial services that they were imposing guardrails on the internal use of generative AI. In May, Samsung banned its employees from using generative AI, while Amazon, Apple and Verizon imposed similar restrictions. These decisions were likely motivated as much by regulatory uncertainty as by concerns about data privacy, generative AI expert Henry Ajder told Tech Monitor in October.


“They realise,” said Ajder, “that if they completely go all-in on generative AI in its current form, and in the current regulatory landscape, they could end up with them having to finish implementing something that is no longer compliant, or is going to be incredibly costly to maintain.”

Read more: ICO launches consultation on how data protection law should be applied to generative AI
