The third of three roundtables devoted to the power and potential of artificial intelligence (AI) took place in Copenhagen towards the end of April. Its objective – shared with the earlier events in London and Frankfurt – was to better understand the challenges and barriers facing senior technology leaders contemplating the implementation of AI, as well as their strategic thinking and tactical activities.
Here are five lessons that emerged from a final night of discussion and debate:
1. Cloud or on-premises? When it comes to AI, it’s both
Attendees debated the relative merits of the cloud as the platform on which to develop and host AI applications. A consensus emerged. The potential benefits of turning to a large cloud provider – scalability, access to enormous compute power, no upfront capital expense, and faster time to market – must be balanced against the potential downsides: higher costs and vendor lock-in. On costs, one attendee observed that the big cloud hyperscalers tend not to discount data storage in the cloud. For those building large language models (LLMs) supported by substantial volumes of contextual data, this can prove prohibitively expensive and undermine any return on investment (ROI) calculation. On lock-in, the inability to move a model developed with one provider to another echoes the walled gardens that have long characterised some IT provision.
Is there a solution to these challenges? Hybrid cloud might be it, suggested more than one attendee. A mix of public and private infrastructure, hybrid cloud allows organisations to choose the most appropriate environment for each workload. It lets organisations handle edge use cases – likely to require low latency – locally, while centralising access to widely used applications in the public cloud. Hybrid also gives firms a better negotiating position: if they can host data on-premises, the hyperscaler needs to make a competitive counteroffer. Others advocated for open source as a means of developing portable LLMs.
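To make the hybrid trade-off concrete, here is a minimal sketch of how workload placement decisions might be expressed in code. The workload names, latency thresholds and placement rules are illustrative assumptions, not something discussed on the night.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    max_latency_ms: int   # hard latency budget for the use case
    data_sensitive: bool  # must the data stay on infrastructure the firm controls?

def place(workload: Workload) -> str:
    """Pick a hosting environment for a workload in a hybrid estate (hypothetical rules)."""
    # Sensitive data stays on-premises, which also preserves negotiating leverage.
    if workload.data_sensitive:
        return "on-premises"
    # Tight latency budgets (e.g. shop-floor fault detection) favour the edge.
    if workload.max_latency_ms < 50:
        return "edge"
    # Everything else can take advantage of public-cloud scale.
    return "public cloud"

if __name__ == "__main__":
    for w in [
        Workload("production-line fault detection", max_latency_ms=20, data_sensitive=False),
        Workload("customer-facing chatbot", max_latency_ms=500, data_sensitive=False),
        Workload("LLM fine-tuning on contract data", max_latency_ms=5000, data_sensitive=True),
    ]:
        print(f"{w.name}: {place(w)}")
```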
2. Garbage in, garbage out? Data is everything
When it comes to generative AI (Gen AI), early experimentation will focus on the publicly available models from the likes of OpenAI, Meta and Google. In the longer term, however, organisations will seek to develop their own LLMs – or possibly smaller “boutique” language models – based on their own data. This will be the case especially for applications that offer competitive advantage and, possibly, the opportunity to commercialise tools originally built for in-house use. So far, so good. However, this approach presupposes that you can identify, extract and activate all the data you need. This isn’t straightforward, particularly when much of that intelligence sits on legacy systems. “In my organisation,” noted one attendee, “we have something like 50,000 machines – 30,000 of those aren’t even connected to the internet. And even if I can get to that data I know it will be of variable quality.” If identifying where AI can add business value is the first priority for anyone exploring the technology (see point 5, below), then getting internal data into shape is priority number two. The old adage – garbage in, garbage out – still applies in the age of AI.
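As a rough illustration of what “getting internal data into shape” can involve, the sketch below screens raw machine records for obvious defects before they go anywhere near a model. The field names, thresholds and records are entirely hypothetical.

```python
# Screen raw machine records before they feed a model.
# Field names, thresholds and records are hypothetical, for illustration only.

RAW_RECORDS = [
    {"machine_id": "M-001", "temperature_c": 71.2, "timestamp": "2024-04-25T10:00:00"},
    {"machine_id": "M-002", "temperature_c": None, "timestamp": "2024-04-25T10:00:05"},  # missing reading
    {"machine_id": "",      "temperature_c": 68.9, "timestamp": "2024-04-25T10:00:10"},  # missing identifier
    {"machine_id": "M-003", "temperature_c": 9999, "timestamp": "2024-04-25T10:00:15"},  # implausible value
]

def is_clean(record: dict) -> bool:
    """Reject records with missing identifiers, missing readings or implausible values."""
    if not record["machine_id"]:
        return False
    temp = record["temperature_c"]
    if temp is None or not (-40 <= temp <= 200):
        return False
    return True

clean = [r for r in RAW_RECORDS if is_clean(r)]
print(f"kept {len(clean)} of {len(RAW_RECORDS)} records "
      f"({100 * len(clean) / len(RAW_RECORDS):.0f}% usable)")
```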
3. Different use cases make different demands on AI infrastructure
Where the audience at our Frankfurt event skewed, unsurprisingly, towards financial services, the Copenhagen audience was more diverse. It took in leaders from manufacturing, logistics, software, healthcare, and, yes, banking. The AI use cases attendees are exploring and implementing include predictive analytics, resource planning, legal content management, and supplier negotiation optimisation. The logistics and manufacturing firms represented had a particular need to enhance delivery at the edge of their operations – detecting faults on a production line, or streamlining shipping at the point of departure, for example. These use cases demand low latency. Timely information is useful. Information slowly delivered is not. This need informs architecture and infrastructure decisions and is likely to underpin a reliance on localised processing power (see point 1, above).
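A back-of-the-envelope latency budget shows why. The figures below are illustrative assumptions rather than numbers cited at the event, but they capture the basic arithmetic: if the network round trip alone exceeds the deadline, only local processing will do.

```python
# Rough latency-budget check: can a fault-detection call meet its deadline
# if inference runs in a remote cloud region versus on local hardware?
# All figures are illustrative assumptions.

BUDGET_MS = 50  # deadline for flagging a fault on the production line

def total_latency(network_rtt_ms: float, inference_ms: float) -> float:
    """End-to-end latency: network round trip plus model inference time."""
    return network_rtt_ms + inference_ms

for label, rtt, inference in [
    ("remote cloud region", 60.0, 15.0),  # long-distance round trip
    ("on-site edge server",  1.0, 15.0),  # same-site network hop
]:
    latency = total_latency(rtt, inference)
    verdict = "within budget" if latency <= BUDGET_MS else "misses budget"
    print(f"{label}: {latency:.0f} ms -> {verdict} (budget {BUDGET_MS} ms)")
```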
4. Reliability, resilience and the reality of AI
“If you rely on AI for something like your supply chain, which is all about resilience,” asked one attendee from a global logistics firm, “how can you be sure it’s reliable?” This is not an idle thought; it is a practical consideration. And it doesn’t just apply to logistics or manufacturing – it applies to service sectors, including financial services, that trade on credibility and customer trust. Gen AI hallucinations are well known and inspire concern among AI practitioners. So can you really rely on it? Fellow attendees offered three broad thoughts. One, public domain Gen AI models are improving all the time, and improving fast. Two, the models that will inform individual businesses will be based on in-house data, so make sure that data is as “clean” as possible (see point 2, above). Three, consider whether an AI-driven solution – with all its imperfections – is better than the status quo. If it is, go ahead. Ultimately, decision-making will map against risk appetite.
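The “better than the status quo” test can be made concrete by scoring the AI-driven approach and the existing process against the same historical data. The sketch below does this for a hypothetical demand forecast; all figures are invented for illustration.

```python
# Compare a hypothetical AI forecast against the status-quo rule of thumb
# ("next week looks like last week") on the same held-out weeks.
# All numbers are illustrative.

actual_demand = [120, 135, 128, 150, 142, 160]
ai_forecast   = [118, 130, 131, 148, 145, 155]
status_quo    = [110, 120, 135, 128, 150, 142]  # naive: repeat the previous week

def mean_abs_error(forecast: list[float], actual: list[float]) -> float:
    """Average absolute gap between forecast and what actually happened."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

ai_err = mean_abs_error(ai_forecast, actual_demand)
sq_err = mean_abs_error(status_quo, actual_demand)
print(f"AI forecast error: {ai_err:.1f} units/week")
print(f"Status quo error:  {sq_err:.1f} units/week")
print("AI beats the status quo" if ai_err < sq_err else "Stick with the status quo")
```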
5. Business value doesn’t mean instant ROI
As has become tradition at these events, attendees were asked towards the end of the evening to share best-practice tips on the implementation of AI. Picking up on a theme from earlier in the night, attendees advised that those embarking on AI projects should first validate the business case for change. In other words, ask why. “Identify where there is business value and then work out the ROI after that,” one attendee said. And in the interest of identifying business value and ultimate returns, don’t try to do everything at once. One attendee, acknowledging that big gaps in knowledge persist in what is still an emerging technology, urged those starting out to “ask a grown-up.” Another guest countered: “If you really want to know what to do and how to do it, ask a start-up.”
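The distinction between business value and instant ROI is easy to show with some simple, entirely hypothetical arithmetic: an AI project can be worth doing even if its cumulative return only turns positive a few years in.

```python
# A hypothetical multi-year view of an AI project's return on investment (ROI):
# value identified up front, returns arriving later. All figures are illustrative.

initial_investment = 500_000  # build and integration cost, year 0
annual_benefit     = 300_000  # e.g. hours saved, waste reduced
annual_run_cost    = 100_000  # hosting, maintenance, model updates

cumulative = -initial_investment
for year in range(1, 5):
    cumulative += annual_benefit - annual_run_cost
    roi = cumulative / initial_investment
    print(f"Year {year}: cumulative ROI {roi:+.0%}")
```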
‘Powering AI’s potential’ – a Tech Monitor roundtable discussion in association with AMD – took place at Chambre Formel, Copenhagen on 25 April 2024.