Microsoft is using Intel’s technology as a key hardware accelerator in its deep learning platform – Project Brainwave.
The FPGA-based deep learning platform is designed to deliver real-time AI, allowing cloud infrastructure to process and transmit data as fast as it arrives.
With research consistently pointing to exponential data growth and the rise of the Internet of Things, the ability to process live data streams is becoming increasingly important.
Microsoft’s approach to this is through Project Brainwave, which uses Intel Stratix 10 FPGAs to handle deep learning models.
According to the companies, Microsoft is the first major cloud service provider to deploy FPGAs in its public cloud, and the implementation will “enable the acceleration of deep neural networks that replicate ‘thinking’ in a manner that is conceptually similar to that of the human brain.”
Dan McNamara, corporate vice president and general manager of the Programmable Solutions Group (PSG) at Intel, said: “Intel FPGAs provide completely customizable hardware acceleration that Microsoft can program and tune to achieve maximum performance from its AI algorithm and deliver real-time AI processing. Better still, these programmable integrated circuits are adaptable to a wide range of structured and unstructured data types, unlike the many specialty chips that are targeted at specific AI data types.
“Intel FPGAs enable developers to design accelerator functions directly in the processing hardware to reduce latency, increase throughput, and improve power efficiency. FPGAs accelerate the performance of AI workloads, including machine learning and deep learning, along with a wide range of other workloads, such as networking, storage, data analytics and high-performance computing.”
The company said that typical silicon AI accelerators require grouping multiple requests together, or batching, in order to achieve high performance. Project Brainwave, however, achieved more than 39 teraflops of performance on a single request.
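To see why single-request performance matters, consider a minimal sketch of the batching trade-off the company describes. This is purely illustrative, not Brainwave code: the overhead and per-request figures below are assumed numbers chosen to show how batching amortizes fixed invocation cost at the expense of waiting for a batch to fill.

```python
# Hypothetical model of a conventional accelerator's latency (assumed figures,
# not measurements from Project Brainwave or any real chip).
FIXED_OVERHEAD_MS = 5.0   # assumed per-invocation cost (launch, data transfer)
PER_REQUEST_MS = 1.0      # assumed compute time per individual request

def latency_per_request(batch_size: int) -> float:
    """Average time per request when batch_size requests share one invocation."""
    total = FIXED_OVERHEAD_MS + PER_REQUEST_MS * batch_size
    return total / batch_size

# Batching amortizes the fixed overhead: a batch of 32 averages ~1.16 ms per
# request, but the first request had to wait for 31 others to arrive before
# anything ran. A lone request pays the full 6 ms. A design that hits peak
# throughput on a single request avoids that queuing delay entirely.
single = latency_per_request(1)    # 6.0 ms
batched = latency_per_request(32)  # 1.15625 ms
```

Under these assumptions, throughput favors large batches while real-time responsiveness favors single requests, which is the tension the article's 39-teraflops-on-one-request claim addresses.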
“We exploit the flexibility of Intel FPGAs to incorporate new innovations rapidly, while offering performance comparable to, or greater than, many ASIC-based deep learning processing units,” said Doug Burger, distinguished engineer at Microsoft Research NExT.
Microsoft is said to be working to deploy Project Brainwave in Azure so that its customers will be able to run complex deep learning models.