Artificial intelligence (AI) promises to be one of the world’s most transformational technologies – perhaps the most transformational. Consider that its adoption rates have outstripped those of the internet, and the scale of its potential becomes clear.

But how do you power the potential of AI?

Addressing that question has been the focus of a series of Tech Monitor / AMD roundtable events across Europe during 2024 featuring some of Europe’s leading IT decision-makers. After spring events in London, Frankfurt, and Copenhagen, the latest roundtable took place in Stockholm. As before, the objective was to better understand the challenges and barriers, the strategic thinking and tactical activities of senior technology leaders contemplating the implementation of AI.

Here are four themes that emerged from a night of discussion and debate in Sweden’s capital.

Use cases? Which use cases?

If ever we needed proof that being a senior IT leader can be a lonely job, it came with the first question posed to more than a dozen attendees at our central Stockholm location. Asked what they wished to get out of the evening, over half said they were keen to understand how others are using GenAI. They were looking for “perspective” (to borrow the word of one guest), “inspiration” (to borrow another), and to understand how their peers “identify relevant uses” for the technology.

A number of use case examples were shared – including those that consolidated huge amounts of technical or legal data, those that brought intelligence to everyday household appliances, those that helped with new employee onboarding, and those that provided an auxiliary chatbot function in organisations looking to address high attrition among customer service agents. The latter two examples were an interesting echo of McKinsey’s assertion that 75% of GenAI’s value will come from four areas: sales and marketing, research and development, software engineering, and customer operations.

Attendees at Tech Monitor’s roundtable on AI in Stockholm last month, held in association with AMD. (Photo: Tech Monitor)

Balancing the pros and cons of cloud for GenAI

Most organisations represented around the table were operating from a position of cloud maturity, or near-maturity. This makes the cloud, rather than on-premise, the natural first choice for AI workloads. That is the assumption, at least. But some organisations are now testing that assumption. Why? For a number of reasons, including concerns over data privacy, IP protection, data sovereignty, and latency. Another issue raised by several attendees is the relative cost of cloud versus on-premise. Processing large amounts of data is expensive wherever it resides, but it can be especially so in the cloud when that data needs to be transferred, with egress charges in particular proving prohibitive.

Even a truism of cloud computing – that workloads with peaky traffic are best in the cloud – is being tested. When GenAI’s peaks are voluminous and sustained, costs soon mount. Or consider those organisations that need to address latency issues. The choice: move on-premise or buy dedicated server capacity from a cloud provider. After the promise of pay-as-you-go cloud, the latter option feels oxymoronic. Cloud clearly has its role, but customers are showing greater scepticism than before.
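To make the egress point concrete, here is a minimal back-of-envelope sketch in Python. Every figure in it – the per-gigabyte egress price, the monthly transfer volume, the hardware spend and amortisation period – is an illustrative assumption, not a provider quote or a number cited at the roundtable.

```python
# Illustrative comparison of monthly cloud egress cost versus an amortised
# on-premise budget for a data-heavy GenAI pipeline.
# All figures are assumptions for the sake of the sketch.

TB = 1_000  # gigabytes per terabyte (decimal, as cloud billing typically uses)


def cloud_egress_cost(gb_per_month: float, price_per_gb: float = 0.09) -> float:
    """Monthly egress bill at an assumed flat price per GB transferred out."""
    return gb_per_month * price_per_gb


def on_prem_monthly_cost(hardware_capex: float,
                         amortisation_months: int = 36,
                         monthly_opex: float = 0.0) -> float:
    """Hardware cost spread over an assumed lifetime, plus running costs."""
    return hardware_capex / amortisation_months + monthly_opex


if __name__ == "__main__":
    egress = cloud_egress_cost(gb_per_month=50 * TB)           # assume 50 TB out per month
    on_prem = on_prem_monthly_cost(hardware_capex=120_000,     # assumed server spend
                                   monthly_opex=1_500)         # power, space, support
    print(f"Assumed monthly cloud egress cost: ${egress:,.0f}")
    print(f"Assumed monthly on-premise cost:   ${on_prem:,.0f}")
```

Even with these made-up numbers, egress charges alone can approach the amortised cost of dedicated hardware – the kind of arithmetic that is prompting some organisations to retest the cloud-first assumption.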

Hallucinations? Don’t let the perfect be the enemy of the good

Those expecting perfect results from GenAI need to recalibrate their expectations. So said more than one attendee. Generative tools are, by their nature, calculation engines, not databases of incontestable knowledge. Yet many organisations assume they are the latter. It’s time, said one guest, to rethink the value of a tool that is 85% – not 100% – accurate. Human supervision is one way to ensure value and monitor mistakes. Setting expectation thresholds is another.

Expecting perfection is counterproductive. As one attendee observed: “Humans hallucinate, too.” And as another put it: “GenAI is a crowd pleaser – it will always look to provide an answer.” This leads to errors, but if we are aware of that from the outset, we can still get value from GenAI.

To centralise or not to centralise? Freedom versus control

One attendee wanted to know what others thought about the relative merits of centralising or federating the control of GenAI projects. His own company had taken a centralised approach, citing cost control, shared learning (from mistakes as well as successes), error minimisation, improved governance, and a focal point for idea generation.

Others, on the other hand, argued for a decentralised model in which risk-taking is prized above control. Centralisation creates an unnecessary bottleneck, proponents of a federated model argued, and it stifles creativity. One attendee said her organisation is happy to absorb the costs of failed ideas because of the potential benefits of the successes. “We take a ‘dare to try’ approach,” she said.

‘Powering AI’s potential’ – a Tech Monitor roundtable discussion in association with AMD – took place at Copine, Stockholm on 12 September 2024.