Silence is a therapist’s best friend. After delving deep into the patient’s anxieties, a pause can be just what’s needed to provoke in them a period of introspection and, one would hope, enlightenment. A similar phenomenon appears to be happening in the international AI community. The emergence of ChatGPT last year triggered a cacophonous debate about the implications of generative AI, ranging from the Utopian to the apocalyptic. For now, that conversation appears to have quietened – though that silence may not last with the emergence of GPT-4, OpenAI’s new large language model (LLM) which Microsoft executives have said will be released as soon as next week.
In the meantime, countries around the world have been left asking themselves why on earth they don’t have their own ecosystem of LLMs. The US has some innate advantages in this area, to be sure, whether that’s in a strong history of government-sponsored R&D or an entrenched technology sector with corporations like Google and Microsoft that bestride the world like so many brass colossi. Even so, it’s not as if other nations lack such assets: the UK, for example, possesses a formidable legacy in AI thanks, in large part, to its extraordinarily dynamic universities and research institutions.
It’s the reason, explains Dame Wendy Hall, that the country is third on the AI league tables, going toe-to-toe with the US and China in the research and development of countless products and services running on machine intelligence. It’s the same message Hall, Regius Professor of Computer Science at the University of Southampton, delivered to the House of Commons Select Committee on Science and Technology. Had the government listened to scientists like her and more jealously guarded the UK’s sovereign technological capabilities, Hall tells Tech Monitor, it might not have found itself in a position where so many private and public organisations are dependent on US technological heft.
“We could develop unicorns really well,” says Hall. “But what we don’t do, then, is take the unicorn to the next stage. And that’s all to do with investment.”
Generative AI? Why not buy British
That doesn’t simply mean the British government backing promising start-ups. What’s also required, argues Hall, is raw computational muscle – a market that’s largely cornered by US hyperscale cloud providers. “There are agencies within the government that worry about whether we’ve got that compute power, whether we provide it to our universities or to our big companies.”
Last week the government published a review calling for just that kind of investment in exascale computing. Such sovereign capabilities are essential in scaling out application areas for foundation models, argues Hall, and in preserving the UK’s independence in how and when they’re used. “Suppose we wanted to run a large language model over NHS data,” she says. “The Holy Grail is being able to do analysis across all NHS data to discover things about drugs and cures and treatments and in a way that retains privacy and security. Well, with things as they are, if we don’t do something about it, we’ll have to use American technology.”
What’s so bad about using a foundation model born in the USA? In principle, not much, says Hall – but if we don’t want to replicate the kinds of constraints already associated with the British government’s reliance on US hyperscalers to meet its computational needs, the UK needs to act now to develop some kind of sovereign capability in all forms of AI, generative or otherwise. “We have to be able to have the technologies and be able to develop them,” says Hall, “rather than just be given a black box by Microsoft and Google.”
That doesn’t mean, however, that Whitehall should consider constructing its very own, in-house ‘BritGPT,’ as some have implied. Rather, explains Hall, the UK government should be sizing up nascent AI start-ups and organisations and injecting them with the steroidal funding necessary to allow our private sector to buy British and compete globally. “It’s not about being inward-looking,” says Hall. “If we get this right, that company will produce algorithms and solutions that people around the world would want to buy.”
Hall is optimistic that the UK government will make the right calls, at least as far as the British AI research community is concerned. Ministers across Whitehall, she claims, have been asking for briefings on how to jump-start AI innovation in the UK left, right and centre. “I think,” says Hall, “we’re pushing at a half-open door.”
A Chinese, Canadian, Indian ChatGPT
The UK is hardly alone in its AI anxieties. In Canada, home to some of the most celebrated AI researchers in the world, there are calls for a new national investment strategy that might prevent the crème de la crème of its AI talent from emigrating to Silicon Valley, notwithstanding the sizeable financial commitment made by the federal government in its innovation clusters strategy.
“My impression is that the culture of innovation – and risk-taking that goes with it – isn’t nearly as developed here as it is in the US,” said Yoshua Bengio, a foremost expert in AI at the Université de Montréal, in a conversation with Global News. “Venture capitalists here in Canada are not willing to take as much risk, to invest as much money, to look over a horizon that is this long.”
Similar concerns are being voiced in India. Despite having produced legions of talented software engineers and not a few leading lights within the Big Tech firmament, critics now point to a lack of investment in supporting the AI pioneers of tomorrow as one of the reasons why the country isn’t having its own ChatGPT moment. “When it comes to theoretical work, we are as good as anyone,” Professor Amrutur Bharadwaj, the research head and director of Artpark, told Analytics India Magazine. Despite this, “the current ecosystem [in India] is not geared to produce and support brilliant outliers”.
But perhaps the most visceral reaction to OpenAI’s success is that in China. While in some ways the Chinese government anticipated the current moment for generative AI with a set of remarkably prescient new rules on labelling AI-generated content, many venture capitalists and researchers feel that the nation has been caught off-guard. Why, asked some, hadn’t China been engaging in the kind of long-term research into large language models that could produce a Chinese ChatGPT?
China’s biggest tech companies responded quickly by harnessing their own foundation models to create indigenous versions of ChatGPT, with Tencent developing its HunyuanAide variant and Baidu poised to release its own version next week (notwithstanding some pre-launch jitters). But attaining supremacy in this area could see the Chinese government fall back on a tried and tested asset: the state-owned enterprise. By launching such a company, argues Shaoshan Liu, the founder and CEO of AI vision company PerceptIn, the Chinese government could light a fire under the development of a new, sovereign LLM capability – much as it did for the telecoms sector some 30 years ago.
“State enterprises carry two missions from a historical perspective,” says Liu. “The first is to generate revenue for the state, [and] when you have a monopoly, it’s easier to generate a profit.” The second, Liu continues, is that a state-owned enterprise (SOE) affords the government closer control of a sector compared to letting private enterprise run things – a greater priority in this case, perhaps, given that many of the existing regulations around generative AI in China are primarily concerned with auditing and curtailing the spread of information.
Could such an approach lead to China leapfrogging the US in the creation and dissemination of ChatGPT-like services? It would certainly help in rapidly scaling up the national computing resources necessary to support such AI projects, relieving private enterprise of that burden, says Liu, who previously argued in The Diplomat that such a move could have a positive impact on China’s sanction-hit semiconductor industry. Indeed, we may end up seeing a group of such SOEs, backed to the hilt by Beijing, make their global debut with a suite of subsidised AI services, in much the same way as China’s telecoms giants did in the early 2010s.
Even so, while the Chinese government has expressed an interest in harnessing ChatGPT-like systems (while shutting off access to the original), it may be years before it considers it necessary to launch a dedicated SOE to support that goal. An argument Liu has heard from fellow businesspeople and contacts in the government is that the latter are still undecided as to whether it’s wise to let state-run enterprises take the reins when it comes to long-term AI R&D.
“All the internet companies – Baidu, Tencent, Alibaba – they have invested a lot of resources into AI technology,” says Liu, with the government content to wait and see for now as to how the field develops with the addition of new foundation models. When it comes down to it, says Liu, “generative AI is not mature yet. It’s amazing, but there’s still a long way to go.”