Historically, servers and networking have been the stars of the data centre. However, with the increasing volume, velocity, value and longevity of data, we’re entering an era in which data storage takes the limelight. In fact, according to IDC, by 2020 the ‘digital universe’ – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes.
To cope with the exponential growth of data, CIOs must evolve their data centre strategies to meet future challenges. In this article, Nigel Edwards from HGST explains how optimising hardware, software and storage architecture can help data centre owners deal with the ever-expanding need for storage and ensure it stays high on the enterprise IT priority list.
Optimising hardware
Standard storage building blocks are already being optimised. Higher-capacity drives that consume less power are improving storage clusters, delivering more resources in the same footprint. In addition, new technologies such as hermetically sealed, helium-filled drives allow more data to be stored in the standard 3.5" form factor. Because these drives are lighter and draw less power, vendors of standard server hardware can also increase the density of their enclosures to support software-defined storage: whilst 12-36 drives had been a typical system density, 60-80+ drive systems are now much more feasible.
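As a rough illustration of what that density shift means for footprint and power, consider the sketch below. The per-drive capacities, wattages and enclosure sizes are illustrative assumptions rather than vendor figures; Python is used here purely for the arithmetic.

```python
# Back-of-the-envelope comparison of a traditional enclosure with
# air-filled drives and a dense enclosure with helium-filled drives.
# All capacities and power figures are illustrative assumptions,
# not vendor specifications.

def enclosure_summary(name, drives, tb_per_drive, watts_per_drive, rack_units=4):
    capacity_tb = drives * tb_per_drive
    power_w = drives * watts_per_drive
    print(f"{name}: {capacity_tb} TB raw in {rack_units}U "
          f"= {capacity_tb / rack_units:.0f} TB/U at {power_w / capacity_tb:.2f} W/TB")

enclosure_summary("36-drive air system", drives=36, tb_per_drive=4, watts_per_drive=9.0)
enclosure_summary("60-drive helium system", drives=60, tb_per_drive=8, watts_per_drive=7.0)
```

Under these assumptions the denser helium system more than triples the raw capacity per rack unit while cutting the watts consumed per terabyte stored by well over half.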
Optimising software
However, optimised data storage goes beyond hardware alone; storage software offers opportunities for optimisation as well. Although the biggest cloud services firms have begun designing their own custom, highly optimised hardware, few data centre providers have the human resources to do the same. New software advances, however, will enable enterprise data centres to gain the same CapEx and OpEx benefits enjoyed by large cloud services firms without the need for equivalent resources.
Optimising architecture
Ethernet drives now make it possible to distribute software services for scale-out storage. This architecture optimises the data path by letting application services run closer to where data resides at rest. With an open architecture, developers can take advantage of these resources without modifying their applications, as is usually required when using drive-bound resources.
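As a concrete sketch of the idea, the snippet below treats a drive as an ordinary network endpoint and speaks a simple key-value exchange to it. The drive address, port and wire protocol here are hypothetical illustrations; real Ethernet drives expose vendor-specific interfaces.

```python
# A minimal sketch of addressing an Ethernet drive as a network service.
# The drive address, port and the 'PUT'/'GET' protocol are hypothetical
# illustrations, not a real drive's interface.
import socket

DRIVE_ADDR = ("10.0.42.17", 9000)   # hypothetical drive on the data centre fabric

def put_object(key: str, value: bytes) -> None:
    """Send a simple 'PUT <key> <length>' header followed by the payload."""
    with socket.create_connection(DRIVE_ADDR, timeout=5) as sock:
        header = f"PUT {key} {len(value)}\n".encode()
        sock.sendall(header + value)

def get_object(key: str) -> bytes:
    """Request an object back from the drive with 'GET <key>'."""
    with socket.create_connection(DRIVE_ADDR, timeout=5) as sock:
        sock.sendall(f"GET {key}\n".encode())
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
        return b"".join(chunks)

# The application talks to the drive over the existing Ethernet fabric;
# no host-local, drive-bound resources are assumed.
put_object("sensor/2020-01-01", b"\x00" * 1024)
```

The point of the sketch is that the application addresses data by network location rather than through a local block device, which is what allows services to run close to the data itself.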
By virtue of Ethernet, the operators supporting those developers gain seamless connectivity to existing data centre fabrics and can reuse existing automation and management frameworks. An open Ethernet drive architecture also enables new technology to be intermixed with server-based deployments of popular software-defined storage solutions. These open software-defined storage options, recently introduced to the market, make optimised scale-out architectures much more approachable.
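To illustrate how little new tooling this demands, the sketch below folds a set of drives into the kind of routine reachability check that existing monitoring and automation frameworks already perform. The addresses and management port are hypothetical.

```python
# A minimal sketch of folding Ethernet drives into existing automation:
# because each drive is an ordinary network endpoint, a standard
# reachability check is enough to feed an inventory or monitoring system.
# The address range and port below are hypothetical.
import socket

DRIVE_ENDPOINTS = [f"10.0.42.{host}" for host in range(10, 14)]
MANAGEMENT_PORT = 9000

def reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

inventory = {host: reachable(host, MANAGEMENT_PORT) for host in DRIVE_ENDPOINTS}
for host, up in inventory.items():
    print(f"{host}: {'online' if up else 'unreachable'}")
```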
By Nigel Edwards, Vice President, EMEA Sales and Channel Marketing at HGST