(Batch inference processes a set of prepared input data against a trained model and writes the inference results to a folder. Streaming inference deploys a model on a system and processes individual data items as they arrive).
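The distinction can be sketched in a few lines of Python. This is a toy illustration only: the `predict` function stands in for a real trained model's forward pass, and the output folder name is an assumption, not anything specific to Nauta.

```python
# Toy sketch contrasting batch and streaming inference.
# "predict" is a placeholder for a trained model's forward pass;
# here it simply doubles its input.

import json
from pathlib import Path

def predict(x):
    """Placeholder for a trained model's forward pass."""
    return x * 2

def batch_inference(inputs, out_dir):
    """Run the model over a prepared input set; write results to a folder."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    results = [predict(x) for x in inputs]
    (out_dir / "results.json").write_text(json.dumps(results))
    return results

def streaming_inference(stream):
    """Process items one at a time as they arrive (e.g. from a queue)."""
    for x in stream:
        yield predict(x)

batch = batch_inference([1, 2, 3], "inference_output")  # whole set at once
live = list(streaming_inference(iter([4, 5])))          # item by item
```

The batch path favors throughput over a fixed data set; the streaming path favors latency on data that arrives continuously.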
Intel Nauta: A “Production-Ready” Kubeflow
Kubernetes, the hugely popular container orchestration engine originally developed by Google, makes it easier to build and deploy container-based applications.
It is, essentially, a way of efficiently running online software across a vast array of machines, stitching them together into “a big computer” and letting users oversee machines running on a range of cloud services, as well as inside private data centers.
Intel’s Jason Knight, speaking at the company’s AI DevCon in Munich today, described Nauta as built on the back of Google’s Kubeflow tool (to which Intel is the third-largest code contributor) and as effectively a “production-ready” version of that tool.
(He also announced the open sourcing of nGraph, a C++ library and runtime/compiler suite for deep learning ecosystems.)
Nauta lets you use Kubernetes to manage end-to-end orchestration of ML pipelines, to run your workflow in multiple or hybrid environments (e.g. swapping between cloud and on-prem building blocks depending upon context), and to reuse building blocks across different workflows. The release offers greater flexibility to data scientists and developers wanting to carry out deep learning training experiments, without worrying about the pressure these will place on the underlying infrastructure.
Intel Nauta: Deep Learning, Powered by Kubernetes
Nauta provides a “multi-user, distributed computing environment” for running DL model training experiments on Intel Xeon processor-based systems, using a command line interface, web UI and/or TensorBoard, and powered by Kubeflow and Docker, Intel said.
Developers can use existing data sets, proprietary data, or data downloaded from online sources; create public or private folders to make collaboration among teams easier; and run multi-node deep learning training experiments “without all the systems overhead and scripting needed with standard container environments.”
“Nauta is an enterprise-grade stack for teams who need to run Deep Learning workloads to train models that will be deployed in production,” Intel’s Carlos Morales said in a blog post published to coincide with Intel’s AI DevCon in Munich today.
“With Nauta, users can define and schedule containerized deep learning experiments using Kubernetes on single or multiple worker nodes, and check the status and results of those experiments to further adjust and run additional experiments, or prepare the trained model for deployment,” he wrote.
While the business value continues to grow, and the interest in DL in the enterprise is palpable, it is still a “complex, risky, and time-consuming effort to integrate, validate, and optimize deep learning solutions,” Intel noted.
Using Nauta, “at every level of abstraction, developers still have the opportunity to fall back to Kubernetes and use primitives directly. Nauta gives newcomers to Kubernetes the ability to experiment – while maintaining guard rails.”
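To give a sense of what falling back to Kubernetes primitives can look like, below is a plain Kubernetes Job manifest for a single-node containerized training run. This is an illustrative sketch, not anything generated by Nauta: the image name, command, and resource figures are placeholders.

```yaml
# Illustrative Kubernetes Job for a containerized training run.
# Image, command, and resource requests are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: dl-training-experiment
spec:
  backoffLimit: 2          # retry a failed training pod up to twice
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: example.com/team/tf-trainer:latest  # placeholder image
          command: ["python", "train.py", "--epochs", "10"]
          resources:
            requests:
              cpu: "8"
              memory: 32Gi
```

A tool like Nauta would normally generate and submit manifests of this kind on the user’s behalf, but because they are ordinary Kubernetes objects, an experienced user can inspect or hand-edit them directly.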
This article is from the CBROnline archive: some formatting and images may not be present.