Hyper-converged infrastructure brings storage, networking and processing power together under a single management interface, rather than putting each function in its own box and managing it with its own separate system.
Another way of looking at it is as ‘software-defined’ infrastructure – you use a management layer to decide what kind of computing power you need.
In practice this means much of the old-fashioned implementation and set-up is handled by an automated system, or even that applications ‘self-provision’ – they tell your data centre what resources they need without you, or your staff, having to spend time doing it.
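As a rough illustration of that idea, here is a minimal sketch of a declarative, self-provisioning resource request. The class and function names are hypothetical and do not belong to any particular vendor's API; a real platform would expose something similar through its own management layer.

```python
from dataclasses import dataclass


@dataclass
class ResourceRequest:
    """Declarative description of what an application needs,
    instead of a manually planned server/storage/network build-out."""
    vcpus: int
    memory_gb: int
    storage_gb: int
    network_mbps: int


def provision(request: ResourceRequest) -> dict:
    """Hypothetical stand-in for a hyper-converged management API:
    the application states its requirements and the platform decides
    which nodes, disks and network paths to allocate."""
    # In a real system this call would go to the vendor's management
    # layer (for example over a REST API); here we just echo a plan.
    return {
        "vcpus": request.vcpus,
        "memory_gb": request.memory_gb,
        "storage_gb": request.storage_gb,
        "network_mbps": request.network_mbps,
        "status": "allocated",
    }


# The application 'self-provisions' by declaring what it needs:
plan = provision(ResourceRequest(vcpus=8, memory_gb=32,
                                 storage_gb=500, network_mbps=1000))
print(plan)
```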
Gartner predicts the market for such systems will be close to $5bn by 2019.
The thinking behind the move is to do away with computing silos – separate functions within organisations which are difficult or time-consuming to bring together.
This siloed architecture was also reflected in personnel.
IT departments usually had a storage guy, or gal, who specialised in the different flavours of storage. There would also be a networking guy who dealt with linking all the disparate functions together.
The typical data centre has another problem – each of these functions is probably provided by several different vendors. There might be as many as a dozen different storage providers in a large data centre, for instance, along with a mix of server and networking technologies and vendors.
And you can bet that when a problem does come up, the person in your IT department who knows both the technology and the specific vendor who supplied the kit is away that day.
With each set of products also requiring a different management suite, even what should be quite simple projects or roll-outs can quickly become a complicated nightmare.
The other problem is that today’s applications don’t fit within the old silos, depending on just one or two computing functions. Instead they need to be deployed dynamically and will have radically different, and often unpredictable, demands for resources at different times.
Hence the need for an architecture which can rapidly change depending on the demands made upon it.
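To make that concrete, the sketch below shows the kind of simple scaling decision such an architecture might automate. The thresholds and the two-to-16-node range are assumptions used for illustration (the node range echoes the HPE system described further down), not any product's actual logic.

```python
def nodes_needed(current_nodes: int, cpu_utilisation: float,
                 scale_up_at: float = 0.80, scale_down_at: float = 0.30,
                 min_nodes: int = 2, max_nodes: int = 16) -> int:
    """Toy autoscaling rule: add a node when the cluster runs hot,
    release one when it runs cold, within the cluster's node limits."""
    if cpu_utilisation > scale_up_at and current_nodes < max_nodes:
        return current_nodes + 1
    if cpu_utilisation < scale_down_at and current_nodes > min_nodes:
        return current_nodes - 1
    return current_nodes


# Demand spikes to 92% utilisation: the platform grows the cluster.
print(nodes_needed(current_nodes=4, cpu_utilisation=0.92))  # -> 5
# Demand falls back to 20%: capacity is released again.
print(nodes_needed(current_nodes=5, cpu_utilisation=0.20))  # -> 4
```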
Such systems also require different types of staff, although almost all vendors offer management tools with easy-to-use interfaces that promise to make running a data centre an easier task.
Hewlett Packard Enterprise’s flagship product in this space is the Hyper Converged 380, which combines HPE ProLiant servers, VMware virtualisation software and HPE management tools.
The user chooses one of three workload configurations and then customises storage, networking and computing capabilities as required.
Capacity can then be added as demand grows, from two to 16 nodes, so you are not paying for capability you don’t need and are not using.
HPE is not alone in backing the hyper-converged trend. Big players from storage, virtualisation and more traditional hardware backgrounds are all offering products in the same space.
Regardless of which precise model you follow, it seems likely that storage and computing will increasingly be regarded as parts of a single function, because managing disparate storage systems and connecting them dynamically to processor boxes will create too much of a management headache.
The shift to ever more commoditised, flash-based storage will only accelerate this trend.
Hyper-converged systems also promise to embrace more than just servers and storage, taking in the other services on which enterprise technology depends, such as network management, cloud provisioning, security, data protection and back-up.