Google has released TensorFlow Serving as open source, a system intended to help developers take their machine learning models into production.
The TensorFlow Serving system, which is available on GitHub under the Apache 2.0 license, will enable developers to easily implement new algorithms and experiments.
Written primarily in C++, the technology supports Linux and introduces minimal overhead. It can serve multiple models at large scale, and those models can change over time as they are retrained on real-world data.
TensorFlow Serving offers various extension points where users can add new functionality.
Google software engineer Noah Fiedel said in a blog post: "TensorFlow Serving makes the process of taking a model into production easier and faster.
"It allows you to safely deploy new models and run experiments while keeping the same server architecture and APIs."
The company says the system can handle around 100,000 queries per second per core on a 16-vCPU Intel Xeon E5 2.6 GHz machine.
It provides out-of-the-box integration with TensorFlow models and can be extended to serve other types of models.
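As a rough illustration of that integration (not taken from Google's announcement), a trained TensorFlow model is typically exported to a numbered version directory that the model server can watch and load. The sketch below uses the SavedModel builder from later TensorFlow 1.x releases; the export path, version number and model itself are purely illustrative.

```python
# Hypothetical export script: writes a versioned SavedModel that a
# TensorFlow Serving model server could load. Paths and names are made up.
import tensorflow as tf

export_dir = "/tmp/models/half_plus_two/1"   # "1" is the servable version

with tf.Graph().as_default(), tf.Session() as sess:
    # Toy model: y = w * x + b with fixed parameters.
    x = tf.placeholder(tf.float32, shape=[None, 1], name="x")
    w = tf.Variable(0.5)
    b = tf.Variable(2.0)
    y = tf.identity(w * x + b, name="y")
    sess.run(tf.global_variables_initializer())

    # Describe the serving signature (input and output tensors) and save.
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={"x": x}, outputs={"y": y})
    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess,
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
                signature})
    builder.save()
```

Exporting a new version simply means writing to a new numbered directory (for example ".../half_plus_two/2"), which the server can then pick up without a restart.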
The central abstraction in TensorFlow Serving is the servable, the underlying object that clients use to perform computation.
The technology can handle one or more versions of a servable over the lifetime of a single server instance.
It manages the lifecycle and metrics of servables through standard TensorFlow Serving APIs, treating both servables and their loaders as opaque objects.
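For a sense of how clients address a particular servable version, the following sketch sends a Predict RPC using the tensorflow-serving-api Python package. It assumes a model server is listening on localhost:8500 and serving the hypothetical "half_plus_two" model exported above; the pinned version number is likewise an assumption, since clients can also omit it and let the server pick the latest loaded version.

```python
# Hypothetical client: queries a running TensorFlow Serving instance over gRPC.
import grpc
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

request = predict_pb2.PredictRequest()
request.model_spec.name = "half_plus_two"     # which servable to use
request.model_spec.version.value = 1          # pin a specific version (optional)
request.inputs["x"].CopyFrom(
    tf.make_tensor_proto([[1.0], [5.0]], dtype=tf.float32))

response = stub.Predict(request, timeout=5.0)
print(response.outputs["y"])                  # expect 2.5 and 4.5 for the toy model
```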
Last November, Google announced the open source release of TensorFlow, its second-generation machine learning system.
TensorFlow is a follow-up to the company's original DistBelief engine, which was used to improve speech recognition and to build image search into Google Photos.
The company has also unveiled a new tool which lets developers build applications that understand the content of images.
The tool, the Google Cloud Vision API, encapsulates powerful machine learning models behind an easy-to-use REST API.
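A minimal sketch of what a call to that REST API can look like is shown below, asking for label detection on a local image via the v1 images:annotate endpoint. The file name and API key are placeholders, and the feature list here is only one of the annotation types the service offers.

```python
# Hypothetical Cloud Vision API request for label detection on a local image.
import base64
import requests

with open("cat.jpg", "rb") as f:                       # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": "YOUR_API_KEY"},                    # placeholder credential
    json=body)
print(resp.json())                                     # labels and confidence scores
```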