Depending on who you talk to, ‘software-defined’ is either an overused marketing buzzword or the most significant recent technological advancement in the data storage industry. Either way, IT managers are acutely aware that the way they manage their data centres is shifting from hardware-driven management to software-centric tools. Recent surveys reflect this shift: according to an April 2016 survey, two-thirds of CIOs in North America and Europe plan to expand their use of software-defined data centre technologies this year, while spending on software-defined data centres is forecast to increase by 14% in 2016. Indeed, Gartner estimates that by 2020, 75% of organisations will need to implement a software-defined data centre in order to support the OpEx approach and hybrid clouds they need for agile digital business initiatives.


First servers, then networks

Thanks to virtualisation technologies that hit the market over a decade ago, the server and networking domains are already well on their way to software-defined control. By 2013, 51% of servers were virtualised, and today server virtualisation rates exceed 75% in many organisations. Software-defined network virtualisation from Cisco, Juniper Networks, Barracuda and others is also taking hold: Gartner forecasts that 10% of customer appliances will be virtualised by 2017, up from 1% this year.

For the data centre, virtualisation has meant that spaces where one could practically get a tan from the heat of servers, switches and spinning disks have been replaced by more efficient hardware. Less hardware translates into smaller floorspace requirements, lower energy bills for operation and cooling, and reduced capital outlays that can be depreciated across five years. The expertise required to operate the data centre didn’t go away, but as management interfaces improved and routine tasks became easier, staff shifted their time to more valuable activities.


The evolution of data storage

The next logical evolution of the software-defined approach, and one that’s poised to deliver tremendous improvements, is data storage – a sector traditionally dominated by big iron and big price tags. Research & Markets estimated the software-defined storage market at $1.4 billion in 2014, growing at about 34% annually through 2019 – though still just a fraction of the overall $36 billion storage market that year.

The delayed embrace has a reason. SAN and NAS equipment has historically depended on custom-made ASICs, custom-made circuit boards and custom-made real-time operating systems. The cost of developing that customisation, and of testing to assure interoperability, kept prices high and held back both the roll-out of useful features and easy on-site manageability. End-users, in an effort to protect their most critical asset – their data – continued to rely on hardware-centric solutions and were cautious about moving to new, untried platforms.

Cloud storage won early adoption for departmental applications, DevOps and testing needs. Yet the “serious” IT work – applications requiring high availability, high performance, high IOPS or low latency – had to remain on-premises on traditional equipment, which still required overprovisioning, careful planning to accommodate data growth and continuous oversight by expert engineers.

OpEx-based software-defined storage is a new way of building, managing and buying storage. Such solutions are built on high-volume, industry-standard hardware and open operating systems that cost less, yet typically provide the same types of storage and protocol support that users have expected from traditional CapEx-based, hardware-driven approaches.


New possibilities

Software-defined storage solutions have reset the standard for what IT teams should expect from any kind of data storage. The top four advantages are:

Agility – defined as the speed at which the underlying resource can be changed. With older CapEx-based storage, when a storage administrator needed to expand capacity, it often took weeks or months to negotiate with a vendor, place an order, receive the equipment, and install and deploy the storage array. With newer software-defined approaches that enable storage-as-a-service, expanding capacity typically takes minutes.

Elasticity – IT managers don’t have a crystal ball to predict what they’ll need in the next month, let alone the next five years, and they no longer want to be penalised in the form of overspending for storage resources they don’t need. With software-defined approaches, IT teams can scale resources both up and down quickly via a remote management interface (as the sketch after this list illustrates), and so remain aligned with a changing world.

Scalability – Traditional SAN and NAS architectures have physical hardware limits on how far they can grow. In contrast, software-defined storage can scale to hundreds of thousands of nodes.

Multi-tenancy – In the typical SAN and NAS configuration, because the devices are limited in how far they can scale, they usually serve multiple purposes, and there is no reliable separation among the workloads. For example, accounting applications run on the same equipment as R&D workloads. When peak loads coincide, such as at the end of the fiscal year or during the development of a new software build, this creates performance issues. To work around it, IT managers invest in multiple storage arrays and use physical separation to segregate the workloads. That runs up costs, unused space is simply wasted, and the configuration is harder to administer, because IT teams have to manage a number of storage arrays, often from different vendors and with different management consoles.

In contrast, a software-defined storage system is designed for multi-tenancy. Users can relocate applications to unused storage to make the most of the available resources. If the solution is a cloud storage offering, the software lets IT track the actual costs incurred by each application, so that departmental end-user groups can be billed by application and by department, not just for their total storage usage.

It is also worth noting that more advanced storage-as-a-service offerings provide resource isolation, enabling a single-tenant experience in a multi-tenant environment – the best of both worlds. In this case, while the applications are multi-tenant in that they share common resource pools, each is allocated its own storage resources. This eliminates both the chance of a performance problem caused by a “noisy neighbour” and security concerns over mixing application data on common drives.
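To make the elasticity and per-application billing points above concrete, here is a minimal sketch, in Python, of how an IT team might script a capacity change and a usage report against a storage-as-a-service management API. The base URL, credentials, field names and response format are hypothetical placeholders for illustration, not the interface of any particular vendor.

# Hypothetical storage-as-a-service management API sketch.
# The base URL, token, fields and response shape below are illustrative assumptions.
import requests

API = "https://storage.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder credential

def resize_volume(volume_id, new_capacity_gb):
    """Grow (or shrink) a volume in minutes via the remote management interface."""
    resp = requests.patch(f"{API}/volumes/{volume_id}",
                          json={"capacity_gb": new_capacity_gb},
                          headers=HEADERS, timeout=30)
    resp.raise_for_status()

def usage_by_application():
    """Return consumed capacity per application, e.g. {'accounting': 1200, 'r_and_d': 800}."""
    resp = requests.get(f"{API}/usage", params={"group_by": "application"},
                        headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    resize_volume("vol-accounting-01", 2048)        # scale up ahead of year-end close
    for app, gigabytes in usage_by_application().items():
        print(f"{app}: {gigabytes} GB used")        # feed into departmental billing

The point is not the specific calls, which will differ by vendor, but that capacity changes and cost visibility become API operations that complete in minutes rather than procurement cycles that take weeks.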

The industry is at the front end of an exciting transformation to software-defined data centres, in which IT teams are freed from being hardware-bound and benefit from business-model and feature improvements that have raised the bar on price, performance and flexibility. The transformations in server and network virtualisation have proven the path for what is now accelerating in data storage. Organisations that move now can capture a substantial early-mover advantage.


Dani Naor is VP International Sales at Zadara Storage