May 20, 2016

Google unveils custom chip for machine learning

The chip needs fewer transistors to perform operations.

By CBR Staff Writer

Google has developed a custom chip to improve machine learning capabilities used by its own teams in various applications.

The new chip, called a Tensor Processing Unit (TPU), is a custom application-specific integrated circuit (ASIC) built specifically for machine learning and tailored for TensorFlow.

Google started using TPUs in its data centers more than a year ago, and has found that the chips deliver an order of magnitude better-optimized performance per watt for machine learning.

Google’s distinguished hardware engineer Norm Jouppi said: "This is roughly equivalent to fast-forwarding technology about seven years into the future (three generations of Moore’s Law)."

Google said that over 100 teams are currently using machine learning at the company, from Street View, to Inbox Smart Reply, to voice search.

Jouppi said: "But one thing we know to be true at Google: great software shines brightest with great hardware underneath. That’s why we started a stealthy project at Google several years ago to see what we could accomplish with our own custom accelerators for machine learning applications."

The chip needs fewer transistors per operation because machine learning applications are more tolerant of reduced computational precision, the company said.

Jouppi said: "Because of this, we can squeeze more operations per second into the silicon, use more sophisticated and powerful machine learning models and apply these models more quickly, so users get more intelligent results more rapidly."

The company is already using TPUs to power many of its applications, including RankBrain, which improves the relevancy of search results, and Street View, which improves the accuracy and quality of its maps and navigation.

Jouppi added: "AlphaGo was powered by TPUs in the matches against Go world champion, Lee Sedol, enabling it to "think" much faster and look farther ahead between moves."

Using TPUs in its infrastructure enables the company to offer these strengths to developers through software such as TensorFlow and Cloud Machine Learning, which come with advanced acceleration capabilities.

Jouppi said: "Machine Learning is transforming how developers build intelligent applications that benefit customers and consumers, and we’re excited to see the possibilities come to life."

In February, Google open-sourced TensorFlow Serving to help developers take their machine learning models into production.

The TensorFlow Serving system, which was made available on GitHub under the Apache 2.0 license, was aimed at enabling developers to easily implement new algorithms and experiments.

According to the company, the system can handle around 100,000 queries per second per core on a 16 vCPU Intel Xeon E5 2.6 GHz machine.
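
As a rough, hypothetical sketch of how a model behind TensorFlow Serving was queried in that era, the snippet below follows the gRPC client pattern from the project's early examples. The host, port, model name ("mnist") and tensor names ("images", "scores") are placeholders that depend on how the model was exported, and module paths vary between TensorFlow Serving releases.

```python
# Hypothetical TensorFlow Serving gRPC client, loosely modelled on the
# project's early example clients. Host, port, model name and tensor
# names are placeholders; module paths vary between releases.
import numpy as np
import tensorflow as tf
from grpc.beta import implementations
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2

channel = implementations.insecure_channel("localhost", 9000)
stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "mnist"          # placeholder: name the model is served under
request.inputs["images"].CopyFrom(
    tf.contrib.util.make_tensor_proto(np.zeros((1, 784), dtype=np.float32)))

result = stub.Predict(request, 10.0)       # second argument is the timeout in seconds
print(result.outputs["scores"])            # placeholder output tensor name
```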
