Some of the biggest names in enterprise tech, including IBM, Intel, Oracle and Meta, have set up a new AI Alliance to promote the advancement of open-source artificial intelligence. The group’s formation illustrates the divide forming in the industry between those who believe open source should be the basis for AI development and those who think making the code behind models available to the public could be dangerous.
As well as tech companies, the alliance is backed by a host of academic institutions and public bodies, such as the US space agency Nasa. It aims to “support open innovation and open science in AI”, according to a statement released today. The statement says: “The AI Alliance is action-oriented and decidedly international, designed to create opportunities everywhere through a diverse range of institutions that can shape the evolution of AI in ways that better reflect the needs and the complexity of our societies.”
How will the open-source AI alliance work?
The AI Alliance is “focused on fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigour, trust, safety, security, diversity and economic competitiveness”.
Its members say that by “bringing together leading developers, scientists, academic institutions, companies, and other innovators, we will pool resources and knowledge to address safety concerns while providing a platform for sharing and developing solutions that fit the needs of researchers, developers, and adopters around the world.”
They plan to do this by developing and deploying “benchmarks and evaluation standards, tools, and other resources that enable the responsible development and use of AI systems at a global scale, including the creation of a catalogue of vetted safety, security and trust tools.”
The alliance will also “responsibly advance the ecosystem of open foundation models with diverse modalities, including highly capable multilingual, multi-modal, and science models that can help address society-wide challenges in climate, education, and beyond”, and support the global development of AI skills and educational content to “inform the public discourse and policymakers on benefits, risks, solutions and precision regulation for AI”.
The rapid progress of AI development over the past 12 months is “a testament to open innovation and collaboration across communities of creators, scientists, academics and business leaders”, according to IBM CEO Arvind Krishna. “This is a pivotal moment in defining the future of AI,” Krishna said. “IBM is proud to partner with like-minded organisations through the AI Alliance to ensure this open ecosystem drives an innovative AI agenda underpinned by safety, accountability and scientific rigour.”
Is open source the future of AI?
Whether the group will be able to make its influence felt in a world where the competing interests of governments and Big Tech are setting the agenda around AI deployment and regulation remains to be seen. Absent from the list of founders of the AI Alliance are the public cloud hyperscalers – Amazon’s AWS, Microsoft Azure and Google Cloud – which host and run many of the most popular AI models and tools. Major AI labs such as Anthropic and OpenAI are also apparently not involved.
These hyperscalers and AI labs have invested billions in AI training and development to build proprietary models and tools, but other vendors, such as Meta, have taken an open approach. The Facebook parent company has made its Llama models available as open source, though this decision may have been influenced by the fact that the first iteration of Llama leaked online before being officially open-sourced.
Whatever the reason, Nick Clegg, Meta’s president of global affairs, said the company now believes “it’s better when AI is developed openly”. Clegg said: “The AI Alliance brings together researchers, developers and companies to share tools and knowledge that can help us all make progress whether models are shared openly or not.
“We’re looking forward to working with partners to advance the state-of-the-art in AI and help everyone build responsibly.”
But not everyone is a fan of the open approach. Speaking earlier this year, Geoffrey Hinton, the man dubbed the “godfather of AI” for his pioneering work on deep neural networks, said an open approach could encourage more people to misuse AI.
Hinton said the danger of open source is “that it enables more crazies to do crazy things with [AI]”, adding: “As soon as you open source everything people will start doing all sorts of crazy things with it. It would be a very quick way to discover how [AI] can go wrong.”