
Open source LLMs could make artificial intelligence more dangerous, says ‘godfather’ of AI

Geoffrey Hinton believes making the code behind AI models available online may prove harmful.

By Matthew Gooding

Open source large language models (LLMs) could make artificial intelligence more dangerous, according to a UK AI pioneer. Geoffrey Hinton, who has been dubbed the “godfather of AI” for his ground-breaking work on neural networks, believes the technology is more likely to be exploited if its code is freely available online.

Large language models have been developed by AI labs around the world in recent months (photo by Tada Images/Shutterstock)

Large language models like OpenAI’s GPT-4 or Google’s PaLM underpin generative AI systems such as ChatGPT, which have enjoyed rapid take-up among businesses and consumers in recent months. The ability of these tools to generate detailed text and images in seconds could be transformative for many industries, but the closed nature of the AI models – and the high development costs involved – means accessing them can be expensive.

Many argue that open source LLMs can provide a more cost-effective alternative, particularly for small companies looking to harness the power of tools like ChatGPT.

The trouble with open source LLMs

But Hinton, who says he left his job at Google last month so he could freely voice his concerns about AI development, believes the growing open source LLM movement could be problematic.

Speaking after delivering a lecture at the Cambridge University Centre for the Study of Existential Risk on Thursday evening, Hinton said: “The danger of open source is that it enables more crazies to do crazy things with [AI].”

He believes that keeping LLMs confined to the labs of companies such as OpenAI may ultimately prove beneficial. “If these things are going to be dangerous it might work out better for a few big companies – preferably in several different countries – to develop this stuff and, at the same time, develop ways to keep it under control.

“As soon as you open source everything people will start doing all sorts of crazy things with it. It would be a very quick way to discover how [AI] can go wrong.”


Hinton used his lecture to restate his belief that the point at which so-called superintelligent AI exceeds human intelligence is not far away, saying he believes GPT-4 already shows signs of intelligence. “These things are going to get more intelligent than us and it might happen pretty soon,” he said. “I used to believe it was 50-100 years away, but now I believe it’s more like five to 20. And if it’s going to happen in five years’ time, we can’t just leave it up to philosophers to decide what we do about it, we need people with practical experience.”

He added: “I wish I had an easy answer [for how to handle AI]. My best bet is that the companies who are developing it should be forced to put a lot of work into checking out the safety of [AI models] as they develop them. We need to get experience of these things: how they might try to escape and how they can be controlled.”

On Friday, DeepMind, the AI lab owned by Hinton’s former employer Google, said it had come up with an early warning system to spot potential risks posed by AI.

How open source LLMs can benefit businesses

Open source LLMs are relatively plentiful online, particularly since the model weights for Meta’s LLM, LLaMA, were leaked online in March.

Software vendors are also trying to profit from businesses’ growing appetite for installable, targeted LLMs that can be trained on company data. In April, Databricks released an LLM called Dolly 2.0, which it trumpeted as the first open source, instruction-following LLM licensed for commercial use. It has ChatGPT-like functionality, says Databricks, and can be run in-house.
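For illustration, here is a minimal sketch of what running an instruction-following model like Dolly 2.0 in-house might look like, using the Hugging Face transformers library. The model id databricks/dolly-v2-3b, the prompt and the generation settings are assumptions made for the sketch, not details from the article.

```python
# Minimal sketch (assumptions noted above): loading an open source
# instruction-following model locally with Hugging Face transformers.
import torch
from transformers import pipeline

# Dolly 2.0 ships a custom instruction-following pipeline, hence
# trust_remote_code=True; device_map="auto" places the weights on
# available GPUs, falling back to CPU if none are present.
generate = pipeline(
    model="databricks/dolly-v2-3b",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

# ChatGPT-style usage: pass a plain-language instruction and read
# back the generated response.
result = generate("Summarise the benefits of open source LLMs in three bullet points.")
print(result[0]["generated_text"])
```

Because the weights are downloaded and the model runs locally, prompts and any company data used for training stay inside the organisation’s own infrastructure, which is the in-house appeal the article describes.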

Proponents of open source models say they have the potential to democratise access to AI systems like ChatGPT. Speaking to Tech Monitor earlier this month, software developer Keerthana Gopalakrishnan, who works with open source models, said: “I think it’s important to lower the barrier to entry for experimentation.” She added: “There are a lot of people interested in this technology who really want to innovate.” 

