May 3, 2023 (updated 4 May 2023, 12:39am)

Generative AI vendors must develop ‘enterprise-friendly’ solutions to woo big businesses

Companies deploying large language models will need to ensure they have data security mechanisms in place to keep enterprise customers happy.

By Ryan Morrison

More than 70% of organisations are in “exploration mode” when it comes to deploying generative AI, a new survey has revealed. It comes as companies such as Samsung are banning employees from using tools like ChatGPT over data security fears. One analyst told Tech Monitor vendors like Microsoft need to design “tailored solutions for enterprise” if they want the technology to succeed.

Samsung staff have been banned from using ChatGPT due to the risk of data leakage (Photo: Ascannio / Shutterstock)

Samsung has ordered employees to avoid tools like ChatGPT, Bing and Google’s Bard after company source code was leaked last month. It follows similar announcements from other companies with data security concerns about the use of generative AI.

OpenAI confirmed last week that it is developing a version of its ChatGPT platform for enterprise customers that excludes conversations from model retraining by default. It isn’t clear whether this will be enough to satisfy the security concerns of the largest enterprise clients, but there is a clear appetite for the technology among executives.

A new Gartner poll of 2,500 executives across industries found that 45% of those surveyed felt ChatGPT had prompted an increase in AI investment, while 70% said their organisations were investigating ways to deploy generative AI. In total, 19% were already at the pilot or production stage.

“The generative AI frenzy shows no signs of abating,” said Frances Karamouzis, distinguished VP analyst at Gartner. “Organisations are scrambling to determine how much cash to pour into generative AI solutions, which products are worth the investment, when to get started and how to mitigate the risks that come with this emerging technology.”

Most telling, though, was that 68% felt the benefits of generative AI outweigh the risks, with just 5% believing the reverse. This may change as investment in generative AI deepens and the impact of data security incidents becomes more apparent, as in Samsung’s case. The South Korean tech giant is building its own foundation AI models.

Generative AI: investment in customer experience

Despite an economic slowdown and mass layoffs across the tech sector, only 17% cited cost optimisation as the main reason for investing in AI; customer experience was the single most important focus. With companies like Microsoft deploying AI through Copilot in both its CRM and Office 365 suites, and Salesforce adding chatbot technology across its product range, avoiding generative AI may prove harder than deploying it.

Avivah Litan, a Gartner distinguished VP analyst, told Tech Monitor that enterprises face multiple risks when using public large language models, but that there are ways to mitigate them, including the ability to automatically filter outputs for misinformation, hallucinations and unwanted factual information.
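The kind of output filtering Litan describes can be sketched very simply: screen each model response against a list of claims the organisation knows to be false before it reaches a user. The blocklist and function below are illustrative assumptions only; a production system would use fact-checking models or retrieval against trusted sources rather than a hard-coded list.

```python
# Hypothetical blocklist of claims the organisation knows to be false.
KNOWN_FALSE_CLAIMS = {
    "the product launched in 2019",  # illustrative entry, not a real fact
}

def flag_response(response: str) -> list[str]:
    """Return any known-false claims found in the model's output,
    so the response can be blocked or corrected before display."""
    lowered = response.lower()
    return [claim for claim in KNOWN_FALSE_CLAIMS if claim in lowered]
```

A response that trips the filter can then be suppressed, annotated, or routed for human review, depending on the enterprise's risk appetite.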


She says companies also need “verifiable data governance and protection assurances” from LLM vendors to ensure that confidential enterprise information transmitted to the LLM is not compromised. Transparency is also required to ensure any use complies with legislation such as GDPR and the EU AI Act.

“What we are seeing in the meantime, until these requirements are met, is that cautious enterprises either cut off employee access to ChatGPT (which is impossible to enforce because of personal devices), or allow measured experimentation that precludes sending confidential data to the LLMs,” she explains.
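One way to enforce that last approach, precluding confidential data from reaching the LLM, is to redact prompts before they leave the corporate boundary. The patterns below are a minimal sketch with assumed, generic detectors (email addresses, card-like numbers, credential assignments); a real deployment would plug in organisation-specific DLP tooling or named-entity detection instead.

```python
import re

# Illustrative patterns for confidential content; swap in your
# organisation's own detectors in practice.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"),  # email addresses
    re.compile(r"\b(?:\d[ -]*?){13,16}\b"),                             # card-like digit runs
    re.compile(r"(?i)\b(api[_-]?key|secret|password)\s*[:=]\s*\S+"),    # credential assignments
]

def redact(prompt: str, placeholder: str = "[REDACTED]") -> str:
    """Replace anything matching a confidential pattern before the
    prompt is sent to a third-party LLM."""
    for pattern in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt
```

A gateway applying a filter like this sits between employees and the public model, permitting the “measured experimentation” Litan describes without confidential data leaving the organisation.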

The interest in deploying the technology is due to its potential benefits, particularly from public models trained on expensive and extensive sets of data. “The great value of OpenAI and third-party LLMs is the amount of data used to train them and the immense complex supercomputer functionality required to run them,” says Litan. “Enterprises simply don’t have the resources to recreate these types of valuable models.”

The solution, for OpenAI, Microsoft and any vendor wanting to deploy such technology, is to create technical tooling backed by shared security responsibility agreements. This is “where the vendors take responsibility if confidential data is compromised. This is the direction they are moving towards.”

Read more: Samsung bans staff from using ChatGPT after data leak
