Google Cloud is to start offering Nvidia Tesla K80 GPU-based virtual machines to give a power boost to customers running deep learning projects.
The beta program covers the company’s Compute Engine and Cloud Machine Learning hosted services and will give users the ability to attach up to eight GPUs to any custom Google Compute Engine virtual machine.
Google said in a blog post: “GPUs can accelerate many types of computing and analysis, including video and image transcoding, seismic analysis, molecular modeling, genomics, computational finance, simulations, high performance data analysis, computational chemistry, finance, fluid dynamics and visualization.”
The reason for doing this is to save customers a bit of trouble and strife: users won’t need to build a GPU cluster in their own data centre, because they can simply add GPUs to virtual machines running in the cloud.
The GPUs are said to be attached directly to the VM on Google Compute Engine, with each Nvidia GPU in a K80 offering 2,496 stream processors and 12GB of GDDR5 memory. Users are also able to shape their instances by attaching one, two, four or eight GPUs to custom machine shapes.
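As a rough sketch, attaching GPUs at instance creation time uses the beta `--accelerator` flag on the gcloud command line; the instance name, zone and machine type below are illustrative, not taken from Google’s announcement:

```shell
# Create a custom VM with four K80 GPUs attached (beta feature).
# GPU instances must set a TERMINATE maintenance policy, since
# GPU-backed VMs cannot be live-migrated.
gcloud beta compute instances create gpu-vm-1 \
    --machine-type n1-standard-8 \
    --zone us-east1-d \
    --accelerator type=nvidia-tesla-k80,count=4 \
    --maintenance-policy TERMINATE \
    --restart-on-failure
```

The `count` value can be set to 1, 2, 4 or 8, matching the instance shapes described above.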
According to Google, the tech is going to be particularly good for training machine learning models, because the GPUs are tightly integrated with Google Cloud Machine Learning (Cloud ML). The belief is that this will slash training time at scale using the TensorFlow framework.
In practical terms, instead of taking several days to train an image classifier on a large image dataset on a single machine, users will be able to run distributed training with multiple GPU workers on Cloud ML.
Google said that its GPUs are priced “competitively” and will be billed per minute, with a 10-minute minimum. Customers in Europe will pay $0.770 per hour per GPU, while those in the US will pay $0.700.
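The per-minute billing with a 10-minute minimum can be sketched as a small calculation; the function name and the assumption that the per-minute rate is simply the hourly rate divided by 60 are ours, not Google’s:

```python
def gpu_cost(minutes_used, hourly_rate_usd):
    """Estimate the cost of a single GPU for a session.

    Assumes per-minute billing derived from the hourly rate,
    with a 10-minute minimum charge, as described in the article.
    """
    billed_minutes = max(minutes_used, 10)  # 10-minute minimum
    return billed_minutes * (hourly_rate_usd / 60)

# A 90-minute session on a US-priced K80 GPU costs roughly $1.05;
# a 3-minute session is still billed for the full 10 minutes.
us_session = gpu_cost(90, 0.700)
eu_short_session = gpu_cost(3, 0.770)
```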