Google Cloud has set a new Pi record. The company calculated the figure (Pi’s decimal representation, for those in need of a reminder, never ends) to 31,415,926,535,897 digits, in a computation that consumed 170 terabytes of data and took 121 days to complete.

The Guinness World Record-winning feat, understandably, trounces the record for the most decimal places of Pi memorised: that accolade belongs to India’s Rajveer Meena, who on 21 March 2015 recited 70,000 places of Pi, a feat that took 10 hours.

New Pi Record: Nine Trillion More Places

Google’s record includes a number of other “firsts”. It is the first Pi record set on a commercial cloud service, the first achieved using solid state drives (SSDs), and the first Pi record in the PC era to be achieved using network storage, Google said.

The final figure is almost nine trillion digits more than the previous world record set in November 2016 by Peter Trueb. The record was announced to coincide with Pi Day, today, March 14 (or 3.14).

Google has released a pi.delivery service that provides a REST API for accessing the digits on the web, along with a number of demos, including a cloud experiment that lets users generate a custom art piece from digits of π.
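For readers who want to poke at the digits themselves, the snippet below is a minimal sketch of calling that REST API from Python. The endpoint URL, the start/numberOfDigits parameters and the content response field are assumptions based on the public pi.delivery documentation rather than details from this article, so check the service’s own docs before relying on them.

```python
# Minimal sketch: fetch a slice of pi's digits from the pi.delivery REST API.
# The endpoint, query parameters and response field below are assumptions
# based on pi.delivery's public documentation and may change.
import requests

API_URL = "https://api.pi.delivery/v1/pi"  # assumed endpoint


def fetch_pi_digits(start: int, count: int) -> str:
    """Return `count` decimal digits of pi, beginning at offset `start`
    (offset 0 is the leading '3')."""
    response = requests.get(
        API_URL, params={"start": start, "numberOfDigits": count}
    )
    response.raise_for_status()
    return response.json()["content"]  # assumed response field


if __name__ == "__main__":
    print(fetch_pi_digits(0, 50))  # expected to start "31415926535897..."
```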

The computation for the new Pi record racked up a total of 10 PB of reads and 9 PB of writes.

The project was led by Google Developer Advocate Emma Haruka Iwao, who said in a blog post that she had been thinking about the challenge since she was 12. She used an application called y-cruncher running on 25 Google Cloud virtual machines.

She wrote: “The complexity of Chudnovsky’s formula—a common algorithm for computing π—is O(n (log n)³). In layman’s terms, this means that the time and resources necessary to calculate digits increase more rapidly than the digits themselves. Furthermore, it gets harder to survive a potential hardware outage or failure as the computation goes on.”
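To make the formula concrete, here is a small, unoptimised Python sketch of the Chudnovsky series using only the standard-library decimal module. It is not how y-cruncher works: the quoted O(n (log n)³) cost applies to optimised implementations built on binary splitting and fast multiplication, whereas this naïve term-by-term loop only illustrates the series itself, and the function name is ours.

```python
# Toy illustration of the Chudnovsky series, not y-cruncher's method.
# Each term of the series contributes roughly 14 correct decimal digits.
from decimal import Decimal, getcontext


def chudnovsky_pi(digits: int) -> Decimal:
    """Approximate pi to roughly `digits` decimal places."""
    getcontext().prec = digits + 10          # working precision plus guard digits
    C = 426880 * Decimal(10005).sqrt()
    K, M, X, L = 6, 1, 1, 13591409
    S = Decimal(L)
    for i in range(1, digits // 14 + 2):
        M = M * (K**3 - 16 * K) // i**3      # exact integer recurrence for (6i)!/((3i)!(i!)^3)
        L += 545140134
        X *= -262537412640768000             # -640320^3
        S += Decimal(M * L) / X
        K += 12
    return C / S


if __name__ == "__main__":
    print(str(chudnovsky_pi(50))[:52])       # 3.14159265358979323846...
```

Because every term adds only about 14 digits, the loop count grows linearly with the requested precision, which gives a feel for why a 31.4-trillion-digit run demands the kind of sustained compute and storage Google describes.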

Google is promoting the achievement in part to illustrate the reliability of its service: “We ran 25 nodes for 111.8 days, or 2,795 machine-days (7.6 machine-years), during which time Google Cloud performed thousands of live migrations uninterrupted and with no impact on the calculation process.”

The main computing node was an n1-megamem-96 instance with 96 vCPUs, which Google described as the biggest virtual machine type available on Compute Engine with Intel Skylake processors at the start of the project.

The Skylake generation of Intel processors supports AVX-512, a set of 512-bit SIMD extensions that can perform floating-point operations on 512 bits of data, or eight double-precision floating-point numbers, at once, it added.

See also: The Telegraph Ditches AWS to Go All-In on GCP