
Microsoft moves a step closer to real-time AI in the cloud with Intel

Microsoft is using Intel's Stratix 10 FPGAs to run deep neural networks that "think" in a way conceptually similar to the human brain.

By James Nunns

Microsoft is using Intel’s technology as a key hardware accelerator in its deep learning platform – Project Brainwave.

The FPGA-based accelerated deep learning platform is said to be capable of delivering real-time AI, allowing cloud infrastructure to process and transmit data as quickly as it arrives.

With research consistently pointing to the exponential growth of data and the rise of the Internet of Things, the ability to process live data streams is becoming increasingly important.

Stratix 10 FPGAs and SoC FPGAs leverage Intel’s 14nm process.

Microsoft’s approach to this is through Project Brainwave, which uses Intel Stratix 10 FPGAs to handle deep learning models.

According to the companies, Microsoft is the first major cloud service provider to deploy FPGAs in its public cloud, and the implementation will "enable the acceleration of deep neural networks that replicate 'thinking' in a manner that is conceptually similar to that of the human brain."

Dan McNamara, corporate vice president and general manager of the Programmable Solutions Group (PSG) at Intel, said: “Intel FPGAs provide completely customizable hardware acceleration that Microsoft can program and tune to achieve maximum performance from its AI algorithm and deliver real-time AI processing. Better still, these programmable integrated circuits are adaptable to a wide range of structured and unstructured data types, unlike the many specialty chips that are targeted at specific AI data types.

“Intel FPGAs enable developers to design accelerator functions directly in the processing hardware to reduce latency, increase throughput, and improve power efficiency. FPGAs accelerate the performance of AI workloads, including machine learning and deep learning, along with a wide range of other workloads, such as networking, storage, data analytics and high-performance computing.”
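The host-versus-accelerator split McNamara describes can be pictured with a short, purely illustrative Python sketch. None of the names below are Intel or Microsoft APIs; accelerated_dot simply stands in for a kernel whose implementation has been moved into FPGA logic, while the surrounding host code is left unchanged.

```python
# Illustrative sketch only: the general host/accelerator split described in the
# quote above. Nothing here is an Intel or Microsoft API; "accelerated_dot" is a
# stand-in for a function whose body has been compiled into FPGA fabric.

from typing import Callable, Sequence

def software_dot(a: Sequence[float], b: Sequence[float]) -> float:
    """Baseline: run the hot kernel on the CPU."""
    return sum(x * y for x, y in zip(a, b))

def accelerated_dot(a: Sequence[float], b: Sequence[float]) -> float:
    """Stand-in for the same kernel running in FPGA logic; a real system would
    hand the buffers to the device and return its result."""
    return software_dot(a, b)  # placeholder so the sketch stays runnable

def classify(features: Sequence[float], weights: Sequence[float],
             dot: Callable[[Sequence[float], Sequence[float]], float]) -> int:
    """Host code is unchanged; only the hot kernel is swapped for the accelerator."""
    return 1 if dot(features, weights) > 0 else 0

print(classify([0.2, -0.5, 1.0], [1.5, 0.3, 0.8], dot=accelerated_dot))
```

The point of the pattern is that the rest of the pipeline stays in ordinary host software; only the latency-critical function is pushed into the programmable hardware.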


The company said that typical silicon AI accelerators require grouping multiple requests together, or batching, to achieve high performance. Project Brainwave, however, achieved more than 39 teraflops of performance on a single request.
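To see why batch size matters, here is a minimal, self-contained sketch; the overhead and arrival-rate figures are invented purely for illustration and are not Microsoft's numbers. It shows the trade-off the single-request result is aimed at: batching amortises a fixed per-invocation cost and so raises throughput, but forces early requests to wait for the batch to fill.

```python
# Illustrative sketch (not Microsoft's or Intel's code): why conventional
# accelerators batch requests, and what serving at a batch size of 1 buys you.
# All constants below are assumptions made up for the illustration.

KERNEL_LAUNCH_OVERHEAD_MS = 5.0   # fixed cost paid once per hardware invocation (assumed)
PER_REQUEST_COMPUTE_MS = 1.0      # marginal cost of each request in the batch (assumed)
ARRIVAL_GAP_MS = 2.0              # a new inference request arrives every 2 ms (assumed)

def batched_latency(batch_size: int) -> float:
    """Worst-case latency for the first request in a batch: it waits for the
    rest of the batch to arrive, then for the whole batch to execute."""
    wait_for_batch = (batch_size - 1) * ARRIVAL_GAP_MS
    execute_batch = KERNEL_LAUNCH_OVERHEAD_MS + batch_size * PER_REQUEST_COMPUTE_MS
    return wait_for_batch + execute_batch

def throughput(batch_size: int) -> float:
    """Requests completed per millisecond once the pipeline is saturated."""
    execute_batch = KERNEL_LAUNCH_OVERHEAD_MS + batch_size * PER_REQUEST_COMPUTE_MS
    return batch_size / execute_batch

for batch in (1, 8, 64):
    print(f"batch={batch:>3}  latency={batched_latency(batch):7.1f} ms  "
          f"throughput={throughput(batch):.3f} req/ms")
```

Under these assumed numbers, larger batches steadily improve throughput while inflating latency; Brainwave's claim is that the FPGA pipeline sustains high performance even at a batch size of one, which is what "real-time" serving requires.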

“We exploit the flexibility of Intel FPGAs to incorporate new innovations rapidly, while offering performance comparable to, or greater than, many ASIC-based deep learning processing units,” said Doug Burger, distinguished engineer at Microsoft Research NExT.

Microsoft is said to be working to deploy Project Brainwave in Azure so that its customers will be able to run complex deep learning models.
