Nvidia and Facebook have collaborated to advance artificial intelligence with Caffe2, a new deep learning framework that Facebook has released to the open source community.
In a blog post, Nvidia said it has worked with Facebook to accelerate AI workloads through the Caffe2 deep learning framework.
Facebook is developing new AI systems to help manage the growing volume of information generated around the world, making it easier for people to understand that information and to communicate more effectively.
Caffe2 enables developers and researchers to create large-scale distributed training scenarios and build machine learning applications for edge devices.
Caffe2 is designed to be a fast, scalable and portable framework. It delivers near-linear scaling of deep learning training, with a 57x throughput speed-up across eight networked Facebook Big Basin AI servers running a total of 64 Nvidia Tesla P100 GPU accelerators.
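For a sense of what building a model in the framework looks like, here is a minimal sketch of defining and training a small network with Caffe2's Python API on a single GPU. The network, blob names and learning rate are illustrative assumptions, not values from Facebook's benchmark setup.

```python
# Minimal Caffe2 training sketch on one GPU, using synthetic in-net data.
# Shapes, blob names and the learning rate are illustrative assumptions.
from caffe2.proto import caffe2_pb2
from caffe2.python import brew, core, model_helper, workspace

device = core.DeviceOption(caffe2_pb2.CUDA, 0)  # GPU 0; use caffe2_pb2.CPU to fall back

with core.DeviceScope(device):
    model = model_helper.ModelHelper(name="caffe2_sketch")
    # Synthetic input: random 28x28 "images" and constant labels, regenerated each run.
    data = model.net.GaussianFill([], "data", shape=[64, 28 * 28], mean=0.0, std=1.0)
    label = model.net.ConstantFill([], "label", shape=[64], value=1,
                                   dtype=core.DataType.INT32)
    # Forward pass: a tiny two-layer classifier with 10 output classes.
    fc1 = brew.fc(model, data, "fc1", dim_in=28 * 28, dim_out=128)
    relu1 = brew.relu(model, fc1, "relu1")
    fc2 = brew.fc(model, relu1, "fc2", dim_in=128, dim_out=10)
    softmax, loss = model.net.SoftmaxWithLoss([fc2, label], ["softmax", "loss"])
    # Backward pass plus plain SGD: param += (-0.1) * gradient.
    model.AddGradientOperators([loss])
    one = model.param_init_net.ConstantFill([], "one", shape=[1], value=1.0)
    lr = model.param_init_net.ConstantFill([], "lr", shape=[1], value=-0.1)
    for param in model.params:
        model.net.WeightedSum([param, one, model.param_to_grad[param], lr], param)

workspace.RunNetOnce(model.param_init_net)   # initialise the weights
workspace.CreateNet(model.net)
for _ in range(10):                          # run a few training iterations
    workspace.RunNet(model.net)
print("loss:", workspace.FetchBlob("loss"))
```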
According to Nvidia, Caffe2 has been fine-tuned to take full advantage of its GPU deep learning platform. It also uses the latest Nvidia deep learning SDK libraries, cuDNN, cuBLAS and NCCL, to deliver high-performance, multi-GPU accelerated training and inference.
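Scaling from one GPU to several is handled by Caffe2's data_parallel_model helper, which replicates a model on each listed GPU and reduces gradients between the replicas, using NCCL where Caffe2 is built with it. The sketch below shows the general pattern; the two-GPU device list, tiny network and hyperparameters are assumptions for illustration, not Facebook's Big Basin configuration.

```python
# Data-parallel Caffe2 training sketch across two GPUs (assumed device IDs 0 and 1).
# data_parallel_model replicates the net on each GPU and all-reduces gradients,
# via NCCL when Caffe2 is built with it. All shapes and values are illustrative.
from caffe2.python import brew, core, data_parallel_model, model_helper, workspace

GPUS = [0, 1]          # adjust to the GPUs actually available
BATCH_PER_GPU = 32     # illustrative per-device batch size

def add_input(model):
    # Synthetic per-replica input generated inside the net itself.
    model.net.GaussianFill([], "data", shape=[BATCH_PER_GPU, 28 * 28], mean=0.0, std=1.0)
    model.net.ConstantFill([], "label", shape=[BATCH_PER_GPU], value=1,
                           dtype=core.DataType.INT32)

def add_forward_pass(model, loss_scale):
    # Tiny 10-class classifier; return the (scaled) losses to be differentiated.
    fc = brew.fc(model, "data", "fc", dim_in=28 * 28, dim_out=10)
    softmax, loss = model.net.SoftmaxWithLoss([fc, "label"], ["softmax", "loss"])
    loss = model.net.Scale(loss, "loss_scaled", scale=loss_scale)
    return [loss]

def add_param_update(model):
    # Plain SGD applied to each replica after the cross-GPU gradient reduction.
    one = model.param_init_net.ConstantFill([], "one", shape=[1], value=1.0)
    lr = model.param_init_net.ConstantFill([], "lr", shape=[1], value=-0.1)
    for param in model.GetParams():
        model.net.WeightedSum([param, one, model.param_to_grad[param], lr], param)

model = model_helper.ModelHelper(name="multi_gpu_sketch")
data_parallel_model.Parallelize_GPU(
    model,
    input_builder_fun=add_input,
    forward_pass_builder_fun=add_forward_pass,
    param_update_builder_fun=add_param_update,
    devices=GPUS,
)

workspace.RunNetOnce(model.param_init_net)
workspace.CreateNet(model.net)
for _ in range(10):
    workspace.RunNet(model.net)  # each iteration runs all replicas and syncs gradients
print("loss on GPU 0:", workspace.FetchBlob("gpu_0/loss_scaled"))
```

Blobs created inside each replica are prefixed with their device scope ("gpu_0/", "gpu_1/", ...), which is why the loss is fetched as "gpu_0/loss_scaled" above.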
Nvidia’s DGX-1 AI supercomputer will be the first AI system to offer Caffe2 within its optimised deep learning software stack. The combination promises high performance and fast training, and Caffe2 will be made available to customers through Nvidia’s DGX-1 Container Registry.
The company also recently launched new Quadro products for its industry supercomputers, adding capabilities for professional workflows that incorporate deep learning, among other features.
Through its Deep Learning Institute, Nvidia has already helped more than 10,000 developers around the world learn how to use deep learning frameworks to design, train and deploy neural network-powered machine learning for a range of intelligent applications and services.
Nvidia will add Caffe2 training to the curriculum starting at its GPU Technology Conference in May 2017.