
Meta’s Llama 2 AI model coming to Qualcomm-powered smartphones as part of ‘significant’ shift to open source

The model is being made available for commercial use, with the weights and starting code released openly.

By Ryan Morrison

Meta has partnered with chipmaker Qualcomm to make its new large language AI model, Llama 2, available on Snapdragon processors, meaning commercial and non-commercial versions will be embedded in flagship smartphones from next year. It follows the news that Meta is making Llama 2 available as a fully open-source product, something that could mark a “significant shift in the market” for AI models.

Llama 2 is the latest incarnation of the Meta LLM. The first version leaked on the web earlier this year. (Photo by Noe Besso/Shutterstock)

Unlike other big AI labs and tech giants, Meta has taken a very open approach to its AI development, including open-sourcing one of the most prominent machine learning frameworks, PyTorch. Its latest model, Llama 2, was developed using Microsoft compute power and will be available as a download or through Microsoft’s Azure cloud platform and its Amazon-owned rival, AWS.

“We believe an open approach is the right one for the development of today’s AI models, especially those in the generative space where the technology is rapidly advancing. By making AI models available openly, they can benefit everyone,” Meta said.

The first version of Llama was not open source, but it leaked a week after being published online and has since been widely adapted by the open-source community. There are models built on top of Llama 1 that can run locally on consumer hardware. This led to some concern at the time over security and how to prevent misuse.

The company says it has been working with researchers in academia and industry for more than a decade to develop safe AI solutions. It argues that open-sourcing the model allows a wider group of people to stress test and identify problems as a community.

The release includes the starting code, model weights, pre-trained model and conversational fine-tuned versions of the model. As well as Azure and AWS, it is also being made available through a range of compute providers, including Hugging Face. Qualcomm says Windows machines using its Snapdragon processor will be able to run the model locally next year.
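
For readers wanting a sense of what that availability looks like in practice, below is a minimal sketch of loading a Llama 2 chat checkpoint with the Hugging Face transformers library. The model identifier and access flow shown are assumptions: the weights are gated, so you would first need to accept Meta's licence on the model page and authenticate locally before downloading.

# Minimal sketch (assumptions: gated access already granted, the transformers
# and accelerate packages installed, and the assumed model ID below).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed ID for the 7B chat variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Tokenise a plain-text prompt and generate a short completion.
inputs = tokenizer("What is an open-source LLM?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))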

Meta’s Llama 2 on-device is ‘safer and cheaper’

Qualcomm believes that on-device AI implementation helps to increase user privacy, address security preferences, enhance application reliability and enable personalisation. The company says this can be delivered “at a significantly lower cost for developers compared to the sole use of cloud-based AI implementation and services”.


“We applaud Meta’s approach to open and responsible AI and are committed to driving innovation and reducing barriers-to-entry for developers of any size by bringing generative AI on-device,” said Durga Malladi, senior vice-president and general manager of technology, planning and edge solutions businesses at Qualcomm. “To effectively scale generative AI into the mainstream, AI will need to run on both the cloud and devices at the edge, such as smartphones, laptops, vehicles and IoT devices.”

Meta says it has focused heavily on responsibility in the development of Llama 2. This includes transparency and access to the models, red-teaming security probes to test the safety of its output, a detailed explanation of its fine-tuning and evaluation methods, and a responsible use guide. The guide is designed to support companies with best practices for safe development.

Rodrigo Liang, CEO and co-founder of enterprise AI platform SambaNova Systems, told Tech Monitor the move could be a “significant shift” in the market. “This transition opens possibilities for democratising AI and enabling enterprises to build custom software on top of the technology,” he said. “By open-sourcing this model, as well as making it free, Meta allows researchers and developers to build on, deconstruct and learn from its architecture.”

Will Meta drive responsible open AI innovation?

OpenUK, the non-profit organisation representing the UK’s open technology sector, supports the move and says “responsible and open innovation gives us all a stake in the AI development process, bringing visibility, scrutiny and trust to these technologies. Opening today’s Llama models will let everyone benefit from this technology.”

Amanda Brock, CEO at OpenUK, said: “This is a positive step for the community and hopefully the first of many. We note that the licence requires any company or individual that gets more than 700 million users for its product to re-engage with Meta on licensing, which is a potential restriction for the future.

“The total number of internet users is estimated at 5.18 billion, so Meta’s licence condition would only apply to services that are used by more than 13.5% of all online users.”
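
That percentage is simple arithmetic on the two figures quoted; as a back-of-the-envelope check:

# Back-of-the-envelope check of the 13.5% figure quoted above.
threshold_users = 700_000_000      # user threshold in Meta's Llama 2 licence, as quoted
internet_users = 5_180_000_000     # estimated total internet users, as quoted
print(f"{threshold_users / internet_users:.1%}")  # prints 13.5%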

Heather Dawe, UK head of data at digital transformation consultancy UST, said open-source LLMs offer a number of benefits over the closed models being developed by companies such as Google and OpenAI. They can potentially be run more cheaply and used more securely, as they can be operated on local hardware.

“Data privacy can be maintained in much more effective ways, meaning the beneficial uses of generative AI within enterprises can be obtained in much more controlled ways,” Dawe said, adding that her company is seeing significant client interest in open-source models.

Dawe added: “This move by Meta to release Llama 2 as open source and free for commercial use, along with releasing the model weights and similar which will make it easier to utilise Llama 2, will further likely raise interest in open-source LLMs as valid commercial alternatives to closed source models.”

Not everyone is a fan of open-source models. Former Google AI researcher Geoffrey Hinton, known as the ‘godfather of AI’ for his pioneering work on deep neural networks, said earlier this year that open-source models could make AI more dangerous.

“The danger of open source is that it enables more crazies to do crazy things with [AI],” Hinton said as part of a lecture at Cambridge University in May.

Read more: Open source energised AI. LLMs complicate matters
