Google Cloud Platform has updated its Speech API so that enterprises can do more with the service, which launched last year.
The Google Cloud Speech API, which the company says has been used to “improve speech recognition for everything from voice-activated commands to call centre routing to data analytics,” now offers expanded support for long-form audio and a host of new languages.
Google says that files up to three hours long are now supported, up from 80 minutes, and that longer files can be accommodated on a “case-by-case” basis, though you’ll have to apply for a quota extension through Cloud Support.
The system, which already supports 89 language varieties, is gaining 30 more, including Bengali, Swahili, and Latvian.
Dan Aharon, Product Manager, Google Cloud Platform, said: “Our new expanded language support helps Cloud Speech API customers reach more users in more countries for an almost global reach. In addition, it enables users in more countries to use speech to access products and services that up until now have never been available to them.”
The most requested addition, though, is word-level timestamps, which provide timing information for each word in the transcript. This lets users jump to the exact moment certain text was spoken, or display the relevant text while the audio is playing, according to the company.
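As a rough sketch of how an application might consume word-level timestamps, the function below looks up the word being spoken at a given playback time. The `(word, start, end)` tuples are a simplified stand-in for the per-word timing data the API returns, not the client library’s actual response types.

```python
# Sketch: find the word spoken at a given playback time using
# word-level timestamps. Timings are in seconds; the tuple format
# here is a simplified stand-in for the real API response objects.

def word_at(timestamps, t):
    """Return the word whose [start, end) interval contains time t, or None."""
    for word, start, end in timestamps:
        if start <= t < end:
            return word
    return None

# Hypothetical transcript fragment with per-word timings.
transcript = [
    ("how", 0.0, 0.3),
    ("old", 0.3, 0.6),
    ("is", 0.6, 0.8),
    ("the", 0.8, 0.9),
    ("Brooklyn", 0.9, 1.4),
    ("Bridge", 1.4, 1.8),
]

print(word_at(transcript, 1.0))  # Brooklyn
```

The same lookup could drive a karaoke-style display, highlighting each word as the audio plays past its start time.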
Google’s additions not only expand the API’s global reach, with the new language support covering around one billion additional people, but also make the offering more appealing to companies providing transcription services.
The company said the updates are based on customer feedback calling for greater functionality and control.