GPU maker Nvidia has launched the TITAN V, its latest microprocessing powerhouse, a GPU designed for artificial intelligence developers and scientific simulations.
The California-based firm claims its latest PC chip is nine times faster than its predecessor: its 21.1 billion transistors deliver 110 teraflops of computing performance with “extreme energy efficiency”, Nvidia founder and CEO Jensen Huang announced at the annual NIPS conference in Long Beach this week.
Engineers redesigned the internal streaming multiprocessor to double the energy efficiency of the previous-generation Pascal architecture. The card is built on Nvidia’s new Volta architecture, which the manufacturer claims is “much more efficient” on workloads owing to parallel integer and floating-point data paths. A new combined L1 data cache and shared memory unit is said to simplify programming as well as boost performance.
TITAN V incorporates Volta’s 12GB HBM2 memory subsystem for optimised bandwidth utilisation. Its high-powered processing capability comes with a $2,999 price tag in selected countries, and the card is aimed at professionals such as data scientists running machine learning projects on desktop systems.
“Our vision for Volta was to push the outer limits of high performance computing and AI. We broke new ground with its new processor architecture, instructions, numerical formats, memory architecture and processor links,” said Huang. “With TITAN V, we are putting Volta into the hands of researchers and scientists all over the world. I can’t wait to see their breakthrough discoveries.”
Nvidia’s chip launch follows hot on the heels of its latest collaboration with IBM to produce the Power9 series. On Thursday, Elon Musk announced Tesla is creating its own AI-optimised chips.
TITAN V customers can access GPU-optimised AI, deep learning and HPC software by signing up for Nvidia GPU Cloud online at no extra cost. The container registry, made available this week, includes Nvidia-optimised deep learning frameworks, third-party managed HPC applications, Nvidia HPC visualisation tools and the Nvidia TensorRT inference optimiser.