Announced at Snowflake’s flagship customer event, Snowflake Summit, taking place this week in Las Vegas, the new functions will be built on Nvidia’s NeMo platform for developing large language models (LLMs).
Software vendors such as Snowflake have been rushing to incorporate AI into their products since the launch of OpenAI’s chatbot, ChatGPT, last year. The chatbot sparked a wave of interest in the potential business applications of automated systems, and has seen the biggest names in tech invest billions of dollars in AI models, products and services.
Snowflake’s Data Cloud is one of the most widely used data analysis platforms on the market, with more than 8,000 customers. Yesterday the company also announced an expanded partnership with Microsoft, which will enable greater integration between Data Cloud and Microsoft’s Azure ML cloud-based AI platform.
How Snowflake and Nvidia are deploying generative AI on Data Cloud
The new functions in Data Cloud will enable businesses to use data in their Snowflake accounts to make custom LLMs for advanced generative AI services, including chatbots, search and summarisation of information. The company says the ability to customise LLMs without moving data “enables proprietary information to remain fully secured and governed within the Snowflake platform”.
Snowflake already offers industry-specific flavours of its Data Cloud, and says its new LLM capability will enhance this by allowing users to build AI models for their verticals. It gives the example of a healthcare insurance model that “could answer complex questions about what procedures are covered under various plans”, or a financial services model that could “share details about specific lending opportunities available to retail and business customers based on a variety of circumstances”.
“Snowflake’s partnership with Nvidia will bring high-performance machine learning and artificial intelligence to our vast volumes of proprietary and structured enterprise data, a new frontier to bringing unprecedented insights, predictions and prescriptions to the global world of business,” said Frank Slootman, chairman and CEO of Snowflake.
Indeed, while publicly available AI models such as OpenAI’s GPT-4 have proved popular with consumers, many businesses are wary of allowing staff to use them for fear that valuable data may leak, or that they may fall foul of privacy regulations. As a result, many are looking to build their own models in-house, says Alexander Harrowell, principal analyst for advanced computing for AI at Omdia.
“More enterprises than we expected are training or at least fine-tuning their own AI models, as they increasingly appreciate the value of their own data assets,” Harrowell said. “Similarly, enterprises are beginning to operate more diverse fleets of AI models for business-specific applications. Supporting them in this trend is one of the biggest open opportunities in the sector.”
Nvidia keeps riding the AI wave
No financial details of the deal have been announced, and Snowflake does not say when the new service will be available. Tech Monitor has contacted the company for clarification.
The partnership with Nvidia was announced as part of a fireside chat at the conference between Slootman and Nvidia CEO Jensen Huang. Nvidia has become the key tech company at the centre of the generative AI revolution, with its A100 GPUs being deployed in their thousands to train and run models. The company has announced a string of new AI and supercomputing services to complement its hardware, and saw its value top $1trn for the first time last month on the back of its AI success.
“Data is essential to creating generative AI applications that understand the complex operations and unique voice of every company,” Huang said. “Together, Nvidia and Snowflake will create an AI factory that helps enterprises turn their own valuable data into custom generative AI models to power ground-breaking new applications — right from the cloud platform that they use to run their businesses.”