February 22, 2017

Google Cloud boosts machine learning portfolio with Nvidia GPUs

Google says users won’t need to construct a GPU cluster in their own data centre.

By James Nunns

Google Cloud is to start offering Nvidia Tesla K80 GPU-based virtual machines to give a power boost to customers running deep learning projects.

The beta program covers the company’s Compute Engine and Cloud Machine Learning hosted services, and will give users the ability to attach up to eight GPUs to any custom Google Compute Engine virtual machine.

Google said in a blog post: “GPUs can accelerate many types of computing and analysis, including video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high performance data analysis, computational chemistry, finance, fluid dynamics and visualization.”

The reason for doing this is to save customers a bit of trouble and strife: users won’t need to construct a GPU cluster in their own data centre, they can simply add GPUs to virtual machines running in the cloud.

Google first revealed that it would be using Nvidia’s tech back in November last year.

The GPUs are said to be attached directly to the VM on Google Compute Engine, with each Nvidia GPU in a K80 offering 2,496 stream processors and 12GB of GDDR5 memory. Users are also able to shape their instances by attaching 1, 2, 4 or 8 GPUs to custom machine shapes.
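In practice, attaching accelerators was exposed through Google's beta gcloud command line at launch. As a hedged sketch (the instance name, zone, image and custom machine shape below are illustrative assumptions, not taken from Google's announcement), creating a custom VM with eight K80 GPUs attached might look like this:

```shell
# Sketch: create a custom-shaped Compute Engine VM with 8 Tesla K80 GPUs.
# Instance name, zone, image and CPU/memory shape are illustrative assumptions.
gcloud beta compute instances create gpu-training-vm \
    --zone us-east1-d \
    --custom-cpu 16 --custom-memory 60 \
    --accelerator type=nvidia-tesla-k80,count=8 \
    --image-family ubuntu-1604-lts --image-project ubuntu-os-cloud \
    --maintenance-policy TERMINATE --restart-on-failure
```

The `--maintenance-policy TERMINATE` flag reflects that GPU instances cannot live-migrate during host maintenance, so they must be stopped and restarted instead.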


According to Google, the tech is going to be particularly good for training machine learning models, because the GPUs are tightly integrated with Google Cloud Machine Learning (Cloud ML). The belief is that this will slash training time at scale using the TensorFlow framework.


The improvement means that instead of taking several days to train an image classifier on a large image data set on a single machine, users will be able to run distributed training with multiple GPU workers on Cloud ML.

Google said that its GPUs are priced “competitively” and will be billed per minute, with a 10-minute minimum. Customers in Europe will pay $0.770 per hour per GPU, while those in the US will pay $0.700.
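That per-minute billing with a 10-minute minimum works out as simple arithmetic. A minimal sketch (the helper name is my own, and it assumes the minimum applies straightforwardly per usage period, a detail the announcement does not spell out):

```python
def gpu_cost(gpus: int, minutes: float, rate_per_hour: float) -> float:
    """Estimate GPU cost: billed per minute, with a 10-minute minimum."""
    billed_minutes = max(minutes, 10)  # 10-minute minimum billing period
    return gpus * (billed_minutes / 60) * rate_per_hour

# 8 GPUs for 2 hours at the US rate of $0.700 per hour per GPU
print(round(gpu_cost(8, 120, 0.700), 2))  # 11.2
```

So a full eight-GPU machine at the US rate costs $5.60 an hour, and even a five-minute job is billed as ten minutes.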
