Accelerated computing in data centres driven by AI, says Nvidia

By Hannah Williams

Nvidia says that accelerated computing in data centres is being driven by artificial intelligence, which is behind the dramatic change seen in data centres today.

The company points to deep learning in particular: algorithms that learn from vast amounts of data to create software able to tackle challenges such as translating languages and teaching autonomous cars to drive.
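At its core, such a model is trained by repeatedly showing it data and adjusting its parameters to reduce error. The short Python sketch below, using PyTorch with made-up layer sizes and random data rather than anything from Nvidia, illustrates the shape of that training loop.

```python
# Minimal sketch of a deep-learning training loop (illustrative only;
# the layer sizes and data are invented, not taken from the article).
import torch
import torch.nn as nn

# A tiny feed-forward network standing in for a real model
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Random tensors stand in for the vast amounts of data a real
# system would stream from storage.
inputs = torch.randn(256, 128)
labels = torch.randint(0, 10, (256,))

for epoch in range(5):
    optimizer.zero_grad()
    outputs = model(inputs)          # forward pass
    loss = loss_fn(outputs, labels)  # how wrong the model currently is
    loss.backward()                  # compute gradients
    optimizer.step()                 # nudge parameters to reduce error
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```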

For deep learning to work effectively, however, computers must be able to process those vast amounts of data quickly enough, and it is this demand that has led to the invention of new computing architectures.

This is where Nvidia’s GPU-accelerated computing model comes in: originally designed for computer graphics and supercomputing applications, it has also proved well suited to deep learning.
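The fit comes from the fact that a GPU’s many parallel cores map naturally onto the matrix arithmetic that dominates deep learning. As a rough illustration, a tiny PyTorch model of the kind sketched above can be moved onto an Nvidia GPU in a couple of lines; the "cuda" device name here assumes an Nvidia card and drivers are present.

```python
# Sketch: running a small model on a GPU when one is available.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
inputs = torch.randn(256, 128, device=device)

# The forward pass now runs as massively parallel matrix arithmetic
# on the GPU rather than on the CPU.
outputs = model(inputs)
print(outputs.shape, outputs.device)
```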

Nvidia launched its Pascal-based Tesla P40 inferencing accelerator, which the company says was built to deliver a 26-fold increase in deep-learning inferencing performance.

In 2015, Google designed a custom accelerator chip called the tensor processing unit (TPU), also made to handle inferencing.

Nvidia draws a comparison with Google’s TPU, arguing that its P40 balances computational precision and throughput, on-chip memory and memory bandwidth to achieve what it calls unprecedented performance for training and inferencing.

The P40 offers high-throughput 8-bit integer arithmetic and high memory bandwidth, whereas Google’s TPU was developed only for inferencing and so does not support training.
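Eight-bit integer throughput matters for inference because a model trained at higher precision can often be shrunk to 8-bit weights with little loss of accuracy, cutting memory traffic and raising throughput. The sketch below uses PyTorch’s dynamic quantization purely as a software illustration of that idea; the model is made up and nothing here is specific to the P40 or the TPU.

```python
# Sketch: converting a trained model's linear layers to 8-bit integer
# weights for inference. Dynamic quantization is used here as a
# software stand-in for hardware INT8 inference paths.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))
model.eval()  # inference mode; no further training

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

inputs = torch.randn(1, 128)
with torch.no_grad():
    print(quantized(inputs))  # same interface, lower-precision weights
```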

Although Google’s and Nvidia’s development paths differ for obvious reasons, the two share several themes. The first leads back to the importance of accelerated computing for AI.

In a blog post, Nvidia said: “The technology world is in the midst of a historic transformation already being referred to as the AI Revolution. The place where its impact is most obvious is in the hyperscale data centers of Alibaba, Amazon, Baidu, Facebook, Google, IBM, Microsoft, Tencent and others.”

Secondly, Nvidia says tensor processing is a major new workload that enterprises should consider when building modern data centres, as it sits at the core of delivering deep learning and inference. Factoring it in can also reduce the cost of building those data centres.

According to Nvidia, without accelerated computing deployed, the scale-out of AI is not practical.

 
