August 3, 2020

Five Questions with… Nebulon CEO Siamak Nazari

"We’re starting to see a shift away from the notion that convergence requires the server CPU to run everything"

By CBR Staff Writer

Every Monday morning we fire five questions at a leading C-suite figure in the business technology sector. Today we’re pleased to be joined by Siamak Nazari, co-founder and CEO of Nebulon, a startup launched in June that specialises in Cloud-Defined Storage: flexible, enterprise-class storage that sits on-prem in application servers, consumes no server resources (CPU, network or memory), and is defined and managed through the cloud.

Siamak – What’s the Biggest Challenge for your Clients? 

I can best articulate it by recounting a conversation I had a few years back with a customer, a conversation that is still very relevant today.

When I was at 3PAR and later at HPE, I would often travel to visit enterprise customers and walk through our architecture with their CIOs. All had on-prem data, and at one point or another each had asked me why they had to buy premium-priced external arrays when they had literally thousands of servers, each with a couple dozen drive slots. Their existing solution was a source of pain because what they ultimately needed was the ability to:

–  Reduce costs and simplify their infrastructure footprint

–  Reduce operational overhead and accelerate deployment of new services

–  Provide their application owners with a self-service on-prem experience

For some companies, the natural solution was and is to move their data to the cloud. But many enterprises have data that cannot or should not move to the cloud for SLA, cost or governance reasons, so what choices do they have? Deploying another expensive 3-tier architecture solves none of their problems, and moving to server-based solutions (hyperconverged infrastructure) brings its own set of restrictions.

First let’s look at 3-tier architectures, or enterprise storage arrays. Shared storage solutions like enterprise arrays support low-latency, mission-critical workloads, but it’s no secret that they are very expensive: expensive to buy, expensive to manage, and tough to automate, especially at scale.

So what’s their next option? We see numerous organisations moving away from external storage and 3-tier architectures to server-based, single-tier approaches like hyperconverged infrastructure (HCI).

Nebulon CEO Siamak Nazari

While HCI has proven cost-effective and easier to use for a specific set of workloads, namely VMware, it requires additional server software. That software limits the OS/hypervisor choice and consumes up to 30% of server resources, leaving less for VMs and applications and adding cost to the overall server and software infrastructure.
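
To make that overhead concrete, here is a rough back-of-the-envelope illustration of the arithmetic. The 30% figure is the ceiling quoted above; the node size and workload are hypothetical examples, not vendor data:

```python
# Sketch: how a per-node software tax inflates the number of servers
# needed for a fixed amount of application capacity. The 30% overhead
# is the ceiling quoted in the article; node size and workload are
# hypothetical examples, not vendor data.
import math

CORES_PER_SERVER = 32     # hypothetical node size
HCI_OVERHEAD = 0.30       # share of each node consumed by the storage stack
APP_CORES_NEEDED = 1000   # hypothetical application demand, in cores

# Without a host-side storage stack, every core serves applications.
servers_plain = math.ceil(APP_CORES_NEEDED / CORES_PER_SERVER)

# With HCI, only (1 - overhead) of each node is left for applications.
usable_cores = CORES_PER_SERVER * (1 - HCI_OVERHEAD)
servers_hci = math.ceil(APP_CORES_NEEDED / usable_cores)

print(f"servers without host-side storage stack: {servers_plain}")  # 32
print(f"servers with a 30% software tax:         {servers_hci}")    # 45
print(f"extra servers (and licences) required:   {servers_hci - servers_plain}")  # 13
```

At a 30% tax, the same application capacity needs roughly 1/0.7 ≈ 1.43 times the nodes, i.e. about 43% more servers and software licences.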

Most people will agree that array-based capacity remains the highest-cost capacity in the industry, but HCI is not always the optimal solution either. In reality, customers need a single-tier solution that eliminates the software footprint typical of the HCI approach.

But let’s not forget what they both lack. Both arrays and HCI take an approach to management that is siloed by device or cluster. Customers need the ability to fully automate their entire IT infrastructure at scale, and with their existing storage solutions this just isn’t an option.

What Technology Excites you Most?

There are two in particular: the rise of Data Processing Units (DPUs) and control in the cloud for on-prem environments. Let me explain.

In the previous question I touched on the fact that enterprises are looking to HCI as an alternative to shared storage, but let’s look at why that is. Converging storage onto the virtualisation server should reduce cost and improve utilisation, but installing the storage software stack on the server creates a variety of issues related to cost, availability, and OS flexibility.

More recently, we’re starting to see a shift away from the notion that convergence requires the server CPU to run everything. If you really stop and think about what should be running on the server CPU, we believe it is the application workload alone, without having to compete with supporting software.

Why? Because otherwise you’re reducing workload density and adding servers and software licences to your data centre that you should not need. You also introduce software and lifecycle management issues for pieces that have different release cadences. We’re seeing more and more people understand this, which is evident in the new startups popping up to develop Data Processing Units, or DPUs, which take this burden off the server CPU. By utilising a DPU inside a server rather than taxing the CPU directly, enterprises can enjoy all the benefits of running applications in the server without burning CPU resources. It is an approach with a long history, now being reinvented for modern architectures.

The second technology I am excited about, and one we are beginning to see more and more of, is the cloud as the point of control for the data centre. Control in the cloud is nothing new; it has been around on the consumer side for a while, which is not surprising, as consumers are usually the ones to set trends. A key example is the Nest Thermostat: the control is completely in the cloud, but there is a physical device in my home.

Cloud-managed devices allow for simple remote management, eliminate the headaches of manual software updates, and simplify automation. I have a cute example for you: my six-year-old niece regularly asks her Google Home speaker to change the lighting or adjust the temperature in her home. She does not even know where the thermostat is installed. This level of integration and automation simply would not be possible without a cloud control plane.

Looking at the data centre the same way, an approach where enterprises have on-premises infrastructure that is completely controlled from the cloud makes complete sense. Most of us think of cloud management as a smart, scalable solution, but when the entire control plane is in the cloud, you get far better visibility and scale for the services you can’t move to the cloud. It is the natural next wave of data centre management, and I am excited to see how far we can take it.
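
As an illustration of this pattern, below is a minimal sketch of a device-side agent that treats the cloud as its control plane: it polls a cloud endpoint for the desired state and reconciles the local device against it. The endpoint URL, payload shape and helper functions are assumptions made for illustration only, not Nebulon’s actual API:

```python
# Minimal sketch of a cloud-controlled on-prem device: a local agent
# polls a cloud control plane for desired state and reconciles the
# device against it. The endpoint, payload shape and helpers are
# hypothetical, for illustration only -- not Nebulon's actual API.
import time
import requests

CONTROL_PLANE = "https://cloud.example.com/api/v1/devices/dev-42/desired"  # hypothetical

def current_state() -> dict:
    """Read the device's actual state (stubbed for the sketch)."""
    return {"firmware": "1.0.3", "volumes": 4}

def apply_change(key: str, value) -> None:
    """Apply one change to the local device (stubbed for the sketch)."""
    print(f"applying {key} -> {value}")

def reconcile_loop(poll_seconds: int = 30) -> None:
    """Drive the device toward whatever the cloud says it should be."""
    while True:
        desired = requests.get(CONTROL_PLANE, timeout=10).json()
        actual = current_state()
        for key, want in desired.items():
            if actual.get(key) != want:
                apply_change(key, want)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    reconcile_loop()
```

Because the desired state lives in one place in the cloud, fleet-wide updates, automation and visibility come from editing a record rather than touching each box, which is the Nest-style experience described above.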

Greatest Success? 

I have to say it would be developing an at-scale, distributed system.

I’ve always enjoyed working on large-scale distributed systems. These days they are readily available, but when I first started in the industry 20+ years ago, it was a personal dream of mine to bring distributed systems to scale. I wanted to be part of the solution: to build a distributed system that was highly resilient, had high performance, and delivered exactly what customers needed.

It’s actually interesting to see how this all works, especially if you look at the storage layer. Everything that connects storage to the customer has redundancy: networking, compute systems, and so on. However, if the data storage layer itself fails, literally everything comes to a halt. Building an at-scale storage solution that is high-performance and resilient is a tough task. The fact that we have a solution that can deliver this to customers is a game changer.

Worst Failure?

Where do I start? In all seriousness, while there are many points in my life where I could have approached a situation differently, each one boils down to one of two scenarios: I was either overly ambitious or overly conservative.

In hockey there is a saying: skate to where the puck will be. There were times in my life when I expected the puck to go farther than it did because I hadn’t taken the whole picture into account, and other times when I didn’t anticipate just how far it would go.

I’ve learned that when I’m overly ambitious, I’m not thinking about the edges: what limitations could keep what I’m doing from being effective? On the other hand, you can be so conservative and cautious about changing direction that you become irrelevant.

Over time, one learns the balance between setting meaningfully big goals and achievable ones, and with a bit of luck builds something that changes the industry for the better.

In Another Life I’d …

…work at a non-profit to see how technology can be used to help address humanitarian crises.

The number of human rights issues today, including but not limited to refugees fleeing their countries, is growing, and these are problems that don’t have a one-size-fits-all solution. Bringing technology to the non-profits who serve these groups, however, would be very useful. There are multiple organisations working on this as we speak, but there are still significant challenges in the world that technology can help with.

It is not completely clear to me how it is being facilitated and whether technology is being fully utilised. It’s an area I am eager to learn about and help with.

See also: Five Questions with… BlackBerry CTO Charles Eagan

 
