October 23, 2023 (updated 15 November 2023, 7:24am)

How the biggest companies wrote their own generative AI guardrails

Making up the rules for internal use of generative AI products has been a process of trial and error.

By Greg Noone

Are your colleagues using generative AI on the sly? The statistics suggest a couple of them have – and probably do not plan to tell their managers about it any time soon. According to a survey earlier this year by Deloitte, an estimated one in ten adults in the UK have used generative AI for work purposes. When asked whether their managers would approve of them using ChatGPT-like services to assist them in their daily tasks, only 23% thought their bosses would endorse such unorthodox working arrangements.

It gets worse. Of those people who have actually used it for work purposes, “43% mistakenly assume that it always produces factually accurate answers, while 38% believe that [the] answers generated are unbiased”. This collective failure to recognise that the ‘accuracy’ of such models has repeatedly been shown to be shaky at best should unsettle even the most experienced CIO. So, too, should the danger of generative AI models being able to plagiarise artwork and code, or leak sensitive corporate data unwittingly inputted by an unsuspecting office worker. These factors combined mean businesses could be plunged into acute legal danger at a moment’s notice.

This isn’t just happening in the UK. Across the pond, an earlier survey found that almost 70% of US employees who had used ChatGPT in a work context had not informed their line manager that they were using the model – respondents who claimed to work for several Fortune 500 companies. In response, many of these firms simply banned staff from using ChatGPT or any other generative AI model that wasn’t approved by the C-Suite. The potential productivity benefits of using the technology, it seemed, were vastly outweighed by the security risks. “ChatGPT is not accessible from our corporate systems, as that can put us at risk of losing control of customer information, source code and more,” said a spokesperson for US telco Verizon back in February. “As a company, we want to safely embrace emerging technology.” 

But, as generative AI expert Henry Ajder explains, “Risk tolerances and adoption speeds are far from uniform.” Many major companies, as it turns out, believe there is a way to harness generative AI in the workplace in a supervised manner that reduces any potential reputational or legal risk to the wider firm. McKinsey confidently announced in June that it was letting “about half” of its employees use the technology under supervision. The following month, insurance provider AXA announced its deployment of ‘AXA Secure GPT,’ which leveraged Microsoft Azure’s OpenAI service to ‘generate, summarise, translate, and correct texts, images and codes.’ 
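Deployments like AXA’s typically sit on top of the standard Azure OpenAI chat completions API. Below is a minimal, illustrative sketch of the kind of internal summarisation wrapper such a service might expose – the endpoint, deployment name and environment variables are assumptions, not details of AXA’s actual system.

```python
# Illustrative internal summarisation wrapper built on the Azure OpenAI service.
# Endpoint, deployment name and environment variable names are placeholders.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def summarise(text: str) -> str:
    """Summarise a document for internal use; a human still reviews the output downstream."""
    response = client.chat.completions.create(
        model="internal-gpt-deployment",  # name of the Azure deployment (illustrative)
        messages=[
            {"role": "system", "content": "Summarise the following text in three bullet points."},
            {"role": "user", "content": text},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content
```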

It’s in these seemingly high-value but low-risk tasks, says Ajder, where the biggest companies are most enthusiastic about deploying generative AI. “If you have some human oversight, this stuff could be deployed pretty quickly and pretty easily,” he adds – the hope being that the productivity benefits will naturally follow. 

Growing interest and use of generative AI by employees have forced managers to come up with creative approaches to imposing guardrails on the use of services like ChatGPT and Copilot. (Photo by la pico de gallo/Shutterstock)

Guardrailing generative AI

Any company that sets out to build its own internal guardrails for the use of generative AI within its workplace needs to define its tolerance for risk when using the technology – a process usually worked out in committee. Over the past year, it has become de rigueur for major financial institutions, consultancies and even movie studios to form a dedicated AI task force to doorstep departments across the company about all the possible calamities that might result from its use. Once those risks have been catalogued, a threshold for appropriate use can be defined.

These guidelines vary from business to business, explains Ajder, but there are commonalities. Many have “clear rules, for example, around ingesting company data, disclosing to customers when [generative AI models] are being used, and not deploying it in any context where there is not any human oversight in the final application or output”, he says. Among institutions with serious compliance budgets, like banks and legal firms, such models may even be restricted at a departmental level, “to people working in, say, marketing, or in spaces that have a bit more freedom”.
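In practice, rules of this kind often end up encoded as a pre-flight check that runs before a prompt ever reaches a model. The sketch below is purely illustrative: the departmental whitelist, the policy rules and the contains_customer_data() helper are hypothetical stand-ins for whatever a given firm’s committee has agreed.

```python
# Illustrative guardrail check run before a prompt is sent to any generative AI service.
# The policy rules and the contains_customer_data() helper are hypothetical.
import re
from dataclasses import dataclass

@dataclass
class PromptRequest:
    text: str
    department: str              # e.g. "marketing", "audit"
    customer_facing: bool        # will the output reach a customer?
    human_reviewer: str | None   # who signs off on the output, if anyone

ALLOWED_DEPARTMENTS = {"marketing", "it"}  # hypothetical departmental whitelist

def contains_customer_data(text: str) -> bool:
    # Stand-in for a real classifier; here just a crude pattern match for long account numbers.
    return bool(re.search(r"\b\d{8,}\b", text))

def check_prompt(request: PromptRequest) -> tuple[bool, str]:
    """Return (allowed, reason), mirroring common rules: no customer data, no unsupervised output."""
    if request.department not in ALLOWED_DEPARTMENTS:
        return False, "generative AI not approved for this department"
    if contains_customer_data(request.text):
        return False, "prompt appears to contain customer data"
    if request.customer_facing and request.human_reviewer is None:
        return False, "customer-facing output requires a named human reviewer"
    return True, "ok"
```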


Again, this comes back to risk. There are fewer scenarios in which an edited, LLM-generated press release results in catastrophic reputational damage to an insurance firm, for example, than ones in which an auditor uses the technology to produce a report on recent changes in EU regulations that they should have written themselves. There is less consensus about who is responsible when mistakes are introduced into workflows through the use of generative AI. At one company Ajder recently observed, “the buck always stopped at the manager who was managing the model”, he says, a prospect that might trigger consternation among staff who are being pressured to use such services by their superiors.

Some of the more nuanced questions about the technical capabilities and limitations of LLMs can also be answered by internal company sandboxes, Ajder argues. These give employees the freedom to “play around” with different models in a risk-free environment. Some have also chosen to adopt an almost constitutional approach to AI risk. Salesforce – both a user and producer of generative AI solutions – has devised a set of key principles, which inform policy on the use of such models at every level of the business. 
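A sandbox of this sort can be as modest as a harness that sends the same prompt to several approved models and logs the outputs for side-by-side comparison, disconnected from any production data. The sketch below is hypothetical; the model callables are placeholders for whatever a company has cleared for experimentation.

```python
# Hypothetical sandbox harness: run one prompt against several models and log the results
# for side-by-side comparison. Model callables are placeholders.
import json
from datetime import datetime, timezone
from typing import Callable

ModelFn = Callable[[str], str]

def run_sandbox(prompt: str, models: dict[str, ModelFn], log_path: str = "sandbox_log.jsonl") -> dict[str, str]:
    results = {}
    for name, model in models.items():
        try:
            results[name] = model(prompt)
        except Exception as exc:  # keep one failing model from crashing the whole experiment
            results[name] = f"<error: {exc}>"
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "prompt": prompt,
            "results": results,
        }) + "\n")
    return results
```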

“Our AI research team started working on generative AI long before ChatGPT hit the mainstream,” says Yoav Schlesinger, Salesforce’s architect for responsible AI and tech, who claims that the company anticipated many of the concerns surrounding LLM hallucinations – he prefers the term ‘confabulations’, as it’s less anthropomorphising – long before the rest of the tech world. Its key principles include setting realistic expectations about the accuracy of an LLM’s outputs and their potential for toxicity or bias; being honest about the provenance of the data used to train a model; and ensuring that such systems are used sustainably and to augment rather than replace human capabilities.  

These tenets not only inform the framework for the appropriate use of generative AI by Salesforce’s own staff but also its provision of external AI services. “We’ve crafted a number of prohibitions…that hopefully also steer our customers away from high risk,” says Schlesinger, including prohibitions on using its chatbots to deceive consumers into believing they’re talking to a human, on deploying such models for biometric identification, and even on offering personalised medical advice. 

Most major companies have shaped their rules governing the internal use of generative AI products around real and perceived risks – often resulting in such models being confined to peripheral application areas within the marketing or IT departments. (Image by la pico de gallo/Shutterstock)

Moving with the times

Salesforce’s overall aim, Schlesinger claims, is to keep “that important human in the loop to address those areas where there might be other risks that are opened up.” It’s a sentiment shared by other providers of enterprise AI solutions, all of which seem to have recognised that winning the hearts and minds of CIOs around the world requires them to constantly address their very real concerns about safety and reputational risk. Generative AI can boost workplace productivity in all kinds of ways, reads one of Microsoft’s latest missives on the subject, but should only be pursued after the imposition of “proper governance measures based on rigorous data hygiene”.

Enterprise AI providers have also proven sensitive to the many cybersecurity concerns surrounding generative platforms. While there “have been a few, high-profile cases of company data leaking after being ingested via large language models”, says Ajder, “I think that fear is a little overblown now” when applied to more mainstream services designed for corporate use. Most of these, he adds, can now be tweaked to prevent any sensitive data from being inadvertently collected for training future models. 
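One common tweak is simply to strip obviously sensitive fields from a prompt before it leaves the building, whatever retention settings the vendor offers. The patterns below are illustrative only – a crude sketch of the idea, not a substitute for a proper data-loss-prevention tool.

```python
# Illustrative client-side redaction applied before a prompt is sent to an external LLM API.
# The regexes are examples only; a real deployment would use a dedicated DLP tool.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
    "UK_NI_NUMBER": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with labelled placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Usage: prompt = redact(raw_prompt) before calling the model endpoint.
```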

As if to push any hesitating CIOs over the line, Microsoft and others have gone as far as to promise to pay the legal costs of those companies who find themselves being sued for copyright infringement by disgruntled coders and artists. It remains to be seen how convincing that offer is to your average compliance department. Even so, there does seem to be a growing willingness among some companies to forgo complex risk assessments when using generative AI models in select applications. During a panel discussion in Copenhagen last month, Vodafone’s chief technology officer, Scott Petty, suggested that there were plenty of opportunities for the firm’s operations teams to use such services without consulting any internal ethics committee. 

“There are so many places where you can apply AI where the risk is really low which can generate immense value,” he said, adding that many potential application areas could be easily found in the IT department. This, added Petty, “is the bottleneck in every telco [and] there is far more demand for new capabilities than we can deliver. Generative AI can unlock that velocity.” 

But is that how far the risk appetite really extends for generative AI among major companies? Ajder suspects so. Many such firms, he explains, are waiting on new legislation in the UK, EU and the US to formally define liability as it relates to the use of AI models. And while the current regulatory environment is changing relatively quickly, many CIOs are still in wait-and-see mode. “They realise that if they completely go all-in on generative AI in its current form, and in the current regulatory landscape, they could end up with them having to finish implementing something that is no longer compliant, or is going to be incredibly costly to maintain,” says Ajder.

For his part, Schlesinger maintains that CEOs, CIOs and CTOs should all keep an open mind about the potentiality of generative AI in the workplace. “Generative AI has incredible promise to help unlock human potential, and we should imagine and expect that people will use it to augment their work,” he says. “Fighting against that tide is a fool’s errand.”

Read more: How real is the threat of data poisoning to generative AI?
