Since the launch of ChatGPT, the corporate world has been abuzz about the possibilities of generative AI. As millions of users created prompts for the chatbot to write power ballads, poetry and persuasively worded high school essays, businesses worldwide began to consider integrating similar services into their workflows.
Nearly a year on, Tech Monitor sat down with a selection of legal firms, IT giants and financial institutions to take stock of generative AI’s impact on their businesses, and whether it lived up to that early hype from late 2022. The discussion is captured in a new report, Thriving in the Age of Generative AI: Harnessing possibilities to maximise business value.
The initial conclusion from delegates was that it had, with some caveats. Most described their embrace of generative AI in their businesses as having derived from an early fascination with ChatGPT.
“It’s very rare when I start using something that I don’t move for three or four hours,” recalled one insurance executive, who ended up test-running several of their audit procedures through the model and pioneered the use of generative AI in comparing insurance policies and securely translating regulatory documents. Others lauded its ability to quickly produce summaries of complex security reports or monitor adverse media coverage – in short, to quickly churn through reasonably complex tasks that would have taken professional auditors or security analysts hours to complete.
Generative AI poses risks
Few firms represented at the table, however, were willing to lean too heavily on generative AI. This was, it emerged, down to pervasive fears that such models continue to ‘hallucinate’ responses to questions they are not trained to answer, or that highly sensitive corporate data could be input indiscriminately into chatbots by employees, at great risk to the company. “We soon discovered that lots of our staff were using it,” said one firm’s CISO. “And, on Valentine’s Day this year, we blocked it.”
That company only resumed its use of generative AI a couple of weeks before the roundtable, after a rigorous investigation into what new guardrails on its use would be appropriate to impose. “We set up a central AI accelerator function to look at the security, the risk, the compliance and legal implications,” said the CISO. The firm is now an enthusiastic user of Bing Chat Enterprise, and is “very pleased to be on the M365 Early Adopters Program”.
Implementing guardrails for AI
Similar investigations had taken place across the other media, legal and financial institutions represented at the roundtable. Few of these guardrails, however, completely forestalled the use of generative AI in more traditionally sensitive areas of business, such as cybersecurity. One IT department head from a major high street bank, for example, described how they had championed the use of tailor-made generative AI models to explain to staff how they had committed minor internal policy breaches. “It’s not big money we’re saving,” they said, “but it will save security analysts hours of their time a week.”
One frequently cited challenge in implementing these kinds of bespoke generative AI solutions, however, was the quality and organisation of the corporate data used to train the model. “When we ask generative AI tools to surface our intranet and ‘find me a policy on such-and-such’, half the time it comes up with an out-of-date policy because we haven’t done a clean-up,” said one CISO. “That’s going to be a humungous undertaking.”
Others were also concerned about the direct threat of cybercriminals using generative AI to breach their companies. In addition to the threat of AI-generated phishing attacks, delegates highlighted the possibility of so-called ‘vishing’ attacks, wherein synthetic audio of senior leadership figures could be used either to trigger fraudulent fund transfers or to manipulate stock prices. The risk of this happening had led at least one major financial institution at the table to require that all client interactions take place over video, as such attacks, even without the use of AI, were “happening at least once a week”.
Even so, most figures at the table remained confident that the market would eventually produce solutions to these challenges. Delegates remained overwhelmingly convinced that generative AI had triggered positive changes to workflows within their organisations, and said they were keen to see further developments in the near future.
“We are going to have massive productivity improvements,” pronounced one CSO. This was especially true when it came to code generation applications, the CSO added, which were proving to be a boon for junior and mid-level programmers. “More senior developers probably don’t need it as much, but it brings everybody up to a much higher level of productivity and quality.”