Tell us about your converged virtualisation appliance.
It is really the idea of delivering data centres without network storage, without SAN or NAS. The idea is to pull all the storage intelligence into the server tier. So you have a common set of machines that basically deliver compute as well as storage.
We deliver a package that looks like a self-contained data centre. You can create your virtual machines and do everything you can with a VMware tier except that you don't need a backend NAS or SAN. It is delivered in a scale-out fashion. It is basically a cluster of VMware machines with unifying software that makes it look like one single system.
The idea being that VMware does not realise it is not talking to a single box in the backend; it thinks it’s talking to a monolithic system like NetApp or EMC but it’s really talking to a bunch of machines. All the VMware features that depend on shared storage work out of the box.
We call them SAN-less data centres. They don’t need a NetApp or an EMC.
So what is the advantage to doing it that way?
We call it converged because we really bring compute and storage into one. But if you look at where storage is headed… over the last five years pretty much most storage has moved to Intel x86 hardware, and you already had x86 architecture on the compute side, so it made a lot of sense to ask why we weren't consolidating those two tiers into one.
Now you can deliver storage that is sitting right in the same box as the rest of the compute. If you keep adding VMware hosts you continue to get more storage controllers. Every VMware box now has a storage controller running inside it. Compute, which is the customer's applications, talks to the storage controller that is local. They are all stitched together to form one single system.
It collapses infrastructure into one piece so that storage and applications are sharing the same motherboard, chassis, fans, power controllers and even networks. You have one converged 10Gbit pipe that can be traffic-shaped and so on.
If it’s all in one box, isn’t there a risk that if one part goes down the whole thing follows?
We use fault tolerance to really manage things like that. You still have multiple copies of the data sitting on the network and on the cluster and so on. If one host goes down it doesn’t mean you’ve lost data.
What about scaling? What if a customer doesn’t want to scale the whole thing?
There are two ways to look at that problem. One, every unit of compute comes with only five drives – about five terabytes. So if you just want to scale compute, the storage we're adding is not a whole lot; we're acknowledging that when you ask for more compute you will also need performance from the controller. So we give you a controller that delivers enterprise-grade features.
The other way of looking at it is that we are not about deep storage. We don’t have to manage compute and all the storage underneath; we can spill over into existing NAS or SAN as well. Storage begins at Nutanix; it doesn’t have to end there. We take care of the interesting, frequently-accessed stuff in Nutanix.
Because our view is server-centric we can arbitrarily waterfall data to a pretty deep tier of storage.
Tell us about some of the savings businesses can make using this technology.
There are real, tangible benefits that are not just OpEx but a lot in hardware: CapEx is 40%-60% better, the form factor is reduced five-fold, and you save on power and real estate and so on. So we are finding plenty of savings in the areas where businesses are constrained.
Let’s talk about your customer base. You are very strong in the government space in particular. Why?
I think part of the reason is that they are heavily into consolidation, which means saving on data centre space and power and also being able to save on hardware in general.
We deliver a data centre at the same cost as a storage appliance. That’s very powerful – not having to spend separately on blade chassis and storage and so on. It is one converged product that gives you all the goodness of a true data centre – compute, storage and virtual networking in one.
So what are some of the uses for Nutanix’s technology?
VDI projects are a big one. Basically they are a highly replicable virtualisation workload, nothing more than that. They need a lot of compute and lots of virtual machines and they want to spin them up in their hundreds if they can.
It was one of those things that always had local disks. When you try to consolidate that in a data centre you have the issue of thousands of desktops and thousands of spindles; you can’t throw thousands of enterprise storage spindles at it, so how do you make it scale?
There has been a lot of consolidation in the storage industry recently, with the likes of EMC acquiring flash storage firm XtremIO. Are you next on this list?
We're only two and a half years old, which means the stakeholders and so on are willing to hold out for a business exit, not a technology exit. A technology exit is $300m-$400m, like a LeftHand or XtremIO. Today the traction we see makes us believe we can build a viable business out of this.
In two quarters we've done what it took comparable companies like Palo Alto and Data Domain six or seven quarters to build. There is a lot of promise here. We're well capitalised, so we're not itching to go and bring in liquidity.
Opportunities to change the game in enterprise computing are few and far between, and there's no guarantee that what we do next would be as big.
If you look at the last 15 years, the biggest vendor to have built a $10bn+ company is NetApp. They were disruptive; they were doing things differently to what the EMCs of the world were doing back then. They were ambitious people. They had their sights on everything in the data centre; we have our sights on everything in the data centre. We can go after pretty much all workloads someday. We have a measured approach at the moment because we believe that's the right way of going, but I think the sky is the limit.