October 6, 2017

The Future of Hyper Converged Infrastructure

In future, businesses will demand not technology itself but better ways to deal with internal and external data.

By CBR Staff Writer

Enterprise technology is undergoing one of its periodic seismic shifts: a widespread move away from three-tier computing to hyper-converged systems that work within a broader strategy of hybrid platforms.

Bringing together storage, computing and networking functions has been happening for some years. But the truth was that sourcing kit and software from a variety of vendors meant it could take up to six months to get systems up and running properly and certified.

Even then many systems still suffered from data silos which stopped businesses truly controlling and making use of their data.

Add cloud services to the mix and IT departments found themselves with a serious management puzzle. As individual business units sourced their own cloud services, organisations were left with ‘Shadow IT’ – where no one took strategic responsibility for technology and many services were duplicated across departments. This is not only expensive but also creates business and regulatory risk, with a lack of clarity as to where crucial company data is stored.

As ‘Industry 4.0’ reaches more and more industries across the economy, it becomes ever more important for companies to make proper use of their data. That means making it available to applications wherever they are – in the cloud, on a server in the data centre or even on a mobile device.

Today, with several vendors backing hyper-convergence, the process is far easier and quicker. At entry level there are validated designs guaranteed to work.

Switching to software-defined infrastructure allows enterprises to shift applications to and from different parts of the IT estate almost at will.


Even for more bespoke systems, lead times for deploying and managing new systems have been cut drastically.

It is easier to scale up computing or storage capability independently of each other. This is also supported by a shift in how companies pay for their technology.

The move to ‘pay-as-you-use’ or flexible consumption models gives power back to on-premises data centres, which can be built with spare capacity that costs nothing until the moment the systems are switched on.

But the issue with these systems has been scalability. There are physical limits on how big they can grow, and even where such large systems can be built, they come with a massive management headache.

NetApp’s Hyper Converged Infrastructure solution brings together integrated data services, data fabric services and third-party services in one management tool – SolidFire Element OS.

This allows enterprises to guarantee application performance and scale up or down as the business requires, as well as automating many aspects of IT operations to free up staff for more strategic and forward-looking activities.

It allows a business to properly exploit the data it collects, improving existing processes and driving the business forward.

NetApp works with partners to provide backup, recovery and database products built on its solutions.

Businesses are failing to get the most benefit out of hyper-converged systems because managing them can be so time-consuming that other benefits are wiped out.

But NetApp’s systems can handle mixed workloads and guarantee application performance through minimum, maximum and burst settings for individual apps.

SolidFire also offers granular control down to the individual virtual machine – stopping one VM from affecting the performance of any other.
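As a rough illustration of how such per-volume settings behave, here is a minimal Python sketch. The minimum/maximum/burst field names echo the terminology above, but the model and its credit mechanism are simplified assumptions, not the Element OS implementation:

```python
from dataclasses import dataclass

@dataclass
class VolumeQoS:
    """Per-volume quality-of-service envelope (hypothetical sketch)."""
    min_iops: int    # guaranteed floor, honoured even under contention
    max_iops: int    # sustained ceiling
    burst_iops: int  # short-term ceiling, usable while burst credits remain

    def allowed_iops(self, requested: int, burst_credits: int) -> int:
        """Clamp a volume's requested IOPS to its QoS envelope."""
        ceiling = self.burst_iops if burst_credits > 0 else self.max_iops
        return min(requested, ceiling)

qos = VolumeQoS(min_iops=500, max_iops=2000, burst_iops=4000)
print(qos.allowed_iops(3000, burst_credits=0))   # clamped to the sustained maximum: 2000
print(qos.allowed_iops(3000, burst_credits=10))  # burst credits available: 3000
```

Because every volume is clamped to its own ceiling, a noisy neighbour cannot consume the IOPS guaranteed to other workloads on the same cluster.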

It also provides a fully flexible upgrade path from two racks up to enterprise-scale systems. Storage capability ranges from 480GB to 1.92TB.

Genuinely scalable, industry-proven flash storage which is simple to manage brings a big advantage to any data centre. The other key differentiator for NetApp is that its offering is built on long experience of the enterprise-level data centre.

Because storage and compute functions can scale completely independently of each other there are almost infinite ways to tailor systems to your specific business needs. This flexibility should also create lower licensing costs and an end to the ‘HCI tax’.

Very often data centres have compute resources running almost idle because they are handcuffed to a storage array which forces overprovisioning of storage and compute capacity as they scale. This brings a cost as well as a management drain on the enterprise.
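The overprovisioning cost of coupled scaling is easy to see with a back-of-the-envelope calculation. This Python sketch uses made-up capacity figures purely for illustration:

```python
import math

def nodes_needed(compute_demand: int, storage_demand: int,
                 node_compute: int, node_storage: int) -> int:
    """Coupled scaling: each node adds a fixed slab of compute AND storage,
    so you must buy enough nodes to satisfy the LARGER of the two demands."""
    return max(math.ceil(compute_demand / node_compute),
               math.ceil(storage_demand / node_storage))

# Illustrative (made-up) figures: 40 cores of compute demand, 200TB of
# storage demand, nodes supplying 20 cores and 25TB each.
coupled = nodes_needed(40, 200, 20, 25)   # storage drives the count up to 8 nodes
independent = math.ceil(40 / 20)          # scaled independently, 2 compute nodes suffice
print(coupled, independent)               # 8 nodes bought, 6 of them nearly idle on compute
```

With independently scalable compute and storage, the six surplus compute nodes in this example – and their licences – never need to be purchased.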

But perhaps the biggest improvement is in automating infrastructure deployment and management which can radically change the way that the IT department works and how it services the business.

By automating and streamlining management, accelerating deployment and making the whole process simpler thanks to a comprehensive set of APIs, staff get the chance to look forward at how the business is changing rather than reacting to existing conditions and demands.

NetApp’s HCI deployment engine cuts required inputs from more than 400 to fewer than 30, slashing the time spent initialising, configuring and building systems.

You can use VMware vCenter for day-to-day running, while more complex management is simplified by APIs, plug-ins and tools.
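To give a flavour of what API-driven management looks like, the sketch below constructs (but does not send) a JSON-RPC-style provisioning request. The method and parameter names follow SolidFire-style conventions but should be treated as illustrative assumptions rather than the documented API:

```python
import json

def create_volume_request(name: str, size_gb: int,
                          min_iops: int, max_iops: int, burst_iops: int) -> str:
    """Build a JSON-RPC payload for provisioning a volume with QoS settings.
    Method and field names are illustrative; a real deployment would POST
    this to the cluster's management endpoint."""
    payload = {
        "method": "CreateVolume",          # hypothetical method name
        "params": {
            "name": name,
            "totalSize": size_gb * 1000**3,  # size in bytes
            "qos": {
                "minIOPS": min_iops,
                "maxIOPS": max_iops,
                "burstIOPS": burst_iops,
            },
        },
        "id": 1,
    }
    return json.dumps(payload)

print(create_volume_request("app-db-01", 500, 500, 2000, 4000))
```

Because the whole request is a small, scriptable payload, provisioning a volume becomes a one-line call from an automation pipeline rather than a ticket in a queue.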

This might all sound technical but it is really bringing a much bigger change to the culture of the IT department and the business as a whole.

Hyper-converged systems that work require engineers with a broader, more general skill set. If systems can be deployed, tested and put online in minutes instead of weeks, your staff will have the bandwidth to do more important things.

But it also helps create a staff culture that encourages lateral and strategic thinking about how technology can enable the business as a whole.

That is a vital capability for any organisation looking to survive and thrive in the digital economy.

By freeing both your data and your staff HCI can help turn data into information and knowledge. Proper analysis of data about your systems, and the time to do it, can deliver valuable insights into how the business is functioning as well as offer potential cost savings from identifying duplicate shadow IT projects or under-used resources. Lessons learnt from this process can be carried across to the wider business.

This sort of service provision will become an ever more important part of the IT department’s job. But it is only possible in a world where the IT department is not spending its days provisioning and configuring servers or storage devices.

The IT department has long been a service provider to the business. But hyper convergence is changing what that service is.

In future, businesses will demand not technology itself but better ways to deal with internal and external data.

The new insights and knowledge that data can provide require a new kind of infrastructure.
