The hype around artificial intelligence (AI) shows little sign of abating. Especially since the dawn of ChatGPT in 2022, it is almost taken as a given that AI will revolutionise whole business sectors. According to McKinsey, nearly two-thirds of organisations regularly use generative AI (GenAI), suggesting that few tech leaders are prepared to be left behind.
Starry-eyed predictions aren’t hard to come by. The IMF believes that AI has the potential to ‘reshape the global economy’, while Goldman Sachs predicts that GenAI will increase annual global GDP by 7% over 10 years. McKinsey Global Institute calls GenAI ‘the next productivity frontier’, estimating that it will boost the global economy by as much as $4.4tn a year.
All this being said, some commentators have suggested that the bubble may be about to burst. According to Gartner’s latest hype cycle report, released in July, GenAI for procurement has already hit its ‘Peak of Inflated Expectations’. This phase is classically followed by the ‘Trough of Disillusionment’ – a period of waning interest ‘as experiments and implementations fail to deliver’.
While GenAI is likely to mature rapidly from here on in, reaching the ‘Plateau of Productivity’ within two to five years, the path to this point may not run smoothly. Gartner has estimated that at least 30% of GenAI projects will be abandoned after proof of concept by the end of 2025.
Don’t believe the hype
Market research is starting to bear this out. Software company Asana, which surveyed more than 1,200 IT professionals, found that a quarter of respondents regret having invested in AI so quickly. Boston Consulting Group has found that two-thirds of executives are ambivalent about, or dissatisfied with, their organisation’s progress on AI. And SaaS company WalkMe says that half of US office workers have seen no improvement in their work since they started using these technologies.
In April, MIT economist Daron Acemoglu made a splash with a paper that predicted ‘nontrivial but modest’ economic benefits from AI. In contrast to Goldman Sachs and McKinsey, Acemoglu foresees a GDP boost of no more than 1.16% over the next 10 years, along with productivity gains barely exceeding half a per cent. (To date, notes a July article in The Economist, ‘the technology has had almost no economic impact’.)
“Everybody is getting into a mad dash to do something with AI without knowing what they’re doing,” says Acemoglu. “But the technologies are not sufficiently mature. That’s going to lead to a lot of disruption and unnecessary automation, and it might reduce the effectiveness of the products and services companies offer.”
He thinks that, while AI is proving useful in some areas – not least in automating simple tasks – other applications have failed to live up to their promise. For instance, there has been a lot of buzz around the idea of AI as a fully functioning personal assistant.
“For that to transpire, we need much more reliable models that can provide expert assistance to human decision-makers in a variety of difficult circumstances,” says Acemoglu. “We’re just not there yet.”
Yuval Perlov is chief technology officer at K2View, which helps clients use their enterprise data to train large language models (LLMs). He notes that, in many use cases, companies are still in pilot-project mode and have yet to see a return on their investment.
“When we see organisations that are starting to build and train their own LLMs, the outcome is not always predictable,” he says. “It’s a huge investment, and it’s sometimes extremely successful – sometimes, not successful at all.”
Perlov cautions against expecting too much from LLMs, arguing that this is a fast track to buyer’s remorse. More successful clients take a hybrid approach, he says, in which the overall problem is broken down into smaller problems, and only the tasks that specifically relate to language processing are delegated to the LLM.
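In practice, the hybrid approach Perlov describes means keeping the structured parts of a workflow in ordinary application code and handing only the genuinely language-centric step to the model. The sketch below is purely illustrative and not drawn from K2View’s products: the customer-service scenario, the data, and the call_llm() stub are all assumptions, with the stub standing in for whichever model or provider a team actually uses.

```python
# Illustrative sketch of a hybrid LLM workflow: deterministic code resolves
# the facts, and only the language task (phrasing a reply) goes to the model.
# call_llm() is a hypothetical placeholder, not a real client library.

from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    status: str
    eta_days: int


# Structured lookup: plain application code, no LLM involved.
ORDERS = {
    "A-1001": Order("A-1001", "shipped", 2),
    "A-1002": Order("A-1002", "delayed", 7),
}


def lookup_order(order_id: str) -> Order | None:
    return ORDERS.get(order_id)


def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in a real model client in production."""
    return f"[LLM draft based on prompt: {prompt[:60]}...]"


def answer_customer(order_id: str) -> str:
    # Step 1: deterministic business logic establishes the facts.
    order = lookup_order(order_id)
    if order is None:
        return f"No order found with ID {order_id}."
    # Step 2: only the language-processing step is delegated to the LLM.
    prompt = (
        f"Write a short, polite status update for order {order.order_id}: "
        f"status={order.status}, estimated delivery in {order.eta_days} days."
    )
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer_customer("A-1002"))
```

The point of the decomposition is that most of the workflow stays testable and predictable; the LLM’s unreliability is confined to the one step where language generation is actually needed.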
The need to upskill
Pieter J. den Hamer, a research director at Gartner, says he has noticed a growing sense of disappointment within the market.
“Especially for companies that started investing in GenAI after ChatGPT went viral, they are becoming increasingly aware that this is not a silver bullet,” he says. “It’s a very powerful technology, but just like any other technology, it needs to be aimed carefully.”
He notes that many CIOs invest in AI with an eye to productivity gains. The challenge comes further down the line when they struggle to quantify these gains. In a recent Gartner survey, almost half of IT leaders said they’d had issues determining AI’s business value.
“The most successful application seems to be AI being applied to customer service,” remarks den Hamer. “The last number I saw was that the average productivity gain was 10%, measured in terms of the number of calls that call centre agents can deal with. But that’s only true for cases where people are being upskilled to use the AI in an effective manner.”
Marketing is another area where AI can play a role, thinks den Hamer, along with AI-empowered information retrieval, code generation, and code conversion. He notes that while these use cases are ‘pretty successful on average’, that only applies if you train employees appropriately and pivot to a different way of working. “If you don’t change your work habits,” he says, “then the impact will be significantly smaller.”
Indeed, the Gartner survey found that just 9% of enterprises were currently classed as ‘AI-mature’. What set them apart from the rest was a scalable AI operating model, a focus on AI engineering, an investment in upskilling workers and better risk management capabilities.
That last point is a critical one, thinks Andrew Southall, principal engineer at the cybersecurity consultancy SkySiege. He works with many clients who regret their GenAI investments, not only because of misunderstood business value and high cost of ownership, but also because of security hiccups such as ‘poisoned data’.
“If we’re honest, no one is checking all the data that is going into a model, meaning that no one knows what the results may include,” he says. “There’s also a lack of security controls – last year OpenAI had their application leak user details. GenAI teams often release software that wouldn’t pass the standard software development process.”
Wait and see
None of this is to say that AI is a lost cause. Den Hamer argues that it really can be a competitive differentiator for businesses. But it requires more than purchasing the latest splashy tech and hoping for the best.
“The companies that succeed educate not just their technical people, but also their end users and senior management, to bring their expectations back to a more realistic level,” says den Hamer. “They also really contextualise and focus on using AI in particular use cases. Yes, AI is overhyped, but it still offers plenty of opportunity to use it to your advantage.”
Acemoglu believes that while there are some AI applications that could be very beneficial for the economy, they fall short of the vision promoted by the tech industry. He suspects we may be about to enter a period of instability, in which AI hype coexists with AI disillusionment and AI companies see swings in their valuations.
“Almost a year and a half ago, we were promised a completely transformative technology, and we haven’t seen anything like that,” he says. “There was also an implicit promise that there would be new models with greater capabilities. Those haven’t arrived.”
Throughout this time, the tech industry has stuck to the same message: invest right away, or you’ll fall behind. Acemoglu says his own message is quite the opposite.
“You can evaluate the bottlenecks in your organisation where you could use some additional help,” he says. “Perhaps those are the places where you might be able to find just the right application. But there’s no rush. Wait and see what mistakes other people are making with this technology.”