When HPE launched its composable infrastructure system, HPE Synergy, last year, it promised to change the way we think about hardware. Other vendors, including Cisco, Intel and IBM, are moving in a similar direction.
The idea is an evolution of how software development works: instead of hardware which is static and fixed, modern enterprises need an architecture which moves as fast as software – hardware which can reconfigure itself according to the changing demands of the business.
They need a system which can do this automatically and quickly, without requiring detailed provisioning and oversight from IT staff.
Pools of resources
Instead of data centres made up of servers and compute functions which interact with network and storage hardware, we begin to think about fluid pools of resources.
The hope of composable infrastructure is to create an architecture which offers the same flexibility as software.
Hardware in a composable architecture is defined by its function – storage, computing and networking.
Individual applications then choose which aspects of the local pool of resources they require to function properly.
So rather than having one set of hardware for one function you have a fluid pool of resources which your software can then call up as required.
It is a step beyond modular hardware – like rack systems – which allows components to be used interchangeably.
But such a system will only work if it can configure itself with minimal need for time-consuming set-up and integration processes.
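The idea of a fluid pool that applications draw from and release back can be sketched in a few lines. This is a minimal illustration of the concept, not any vendor's API – all class and field names here are invented:

```python
from dataclasses import dataclass

@dataclass
class ResourcePool:
    """A shared pool of compute, storage and network capacity
    that applications carve slices out of on demand."""
    compute_cores: int
    storage_tb: int
    network_gbps: int

    def allocate(self, cores=0, storage=0, bandwidth=0):
        """Carve a slice out of the pool if capacity allows."""
        if (cores > self.compute_cores or storage > self.storage_tb
                or bandwidth > self.network_gbps):
            raise RuntimeError("pool exhausted")
        self.compute_cores -= cores
        self.storage_tb -= storage
        self.network_gbps -= bandwidth
        return {"cores": cores, "storage_tb": storage, "gbps": bandwidth}

    def release(self, slice_):
        """Return a slice to the pool when the workload finishes."""
        self.compute_cores += slice_["cores"]
        self.storage_tb += slice_["storage_tb"]
        self.network_gbps += slice_["gbps"]

pool = ResourcePool(compute_cores=128, storage_tb=500, network_gbps=100)
app_slice = pool.allocate(cores=16, storage=20, bandwidth=10)
print(pool.compute_cores)  # 112 cores left for other applications
pool.release(app_slice)    # capacity flows back into the pool
```

The point of the sketch is the lifecycle: nothing is permanently assigned to one application, so the same hardware can serve whatever workload needs it next.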
Hardware as code
Importantly, the architecture allows the software to do all this itself – it does not need a complicated provisioning dashboard; it happens automatically.
It needs one set of APIs – a single line of code to identify each piece of hardware – so that any application can ‘talk’ to the hardware and request the resources it requires.
Treating hardware as code should help enterprise IT deal with the new demands coming from a changing business landscape.
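What "hardware as code" means in practice is that the desired infrastructure is declared as data, and a single API call asks the system to make reality match the declaration. The sketch below illustrates the pattern only – the template fields and the `apply` function are hypothetical, not taken from any real composer API:

```python
# Declarative template: what the workload needs, not how to wire it up.
# All field names are illustrative assumptions.
template = {
    "name": "payroll-run",
    "compute": {"cores": 32, "memory_gb": 256},
    "storage": {"capacity_tb": 4, "tier": "ssd"},
    "network": {"bandwidth_gbps": 10},
}

def apply(template, api):
    """Translate one declarative template into provisioning calls
    against a single composer API."""
    results = []
    for resource, spec in template.items():
        if resource == "name":
            continue  # metadata, not a resource to provision
        results.append(api(resource, spec))
    return results

# Stand-in for a real composer endpoint; just records what it was asked to do.
def fake_api(resource, spec):
    return f"provisioned {resource}"

print(apply(template, fake_api))
# → ['provisioned compute', 'provisioned storage', 'provisioned network']
```

Because the application's needs live in one template behind one API, provisioning becomes repeatable and automatic rather than a manual dashboard exercise.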
Traditional IT vs app-driven development
IT departments are being pulled in two directions by the two different challenges faced by modern businesses – servicing both traditional IT functions and the new, application and data-driven demands of the business.
The different demands of these two functions create problems for a traditional IT architecture.
Traditional IT functions are predictable – you know when pay day is and exactly how much computing power is required to run it.
It has to be accurate, it has to be on time and it has to work exactly the same way every single month.
But creating a hardware infrastructure for such a function is relatively easy – because you know what the demands will be each month.
Newer applications are much more difficult to manage. You are unlikely to know exactly how much demand mobile applications will put on your internal systems at any one time. It makes sense to try to predict as accurately as possible, but you need hardware with the flexibility to change just as fast as demand changes.
In the past the IT department was seen as a supplier to the business. It provided set functions – like a web server or a customer relationship management system.
But in today’s business the demands for IT can come from any department. Marketing might need to run a big data project on visitors to a website, or social media reactions to a specific marketing campaign.
The IT department needs to meet these requirements at an accelerated speed – data needs to be analysed quickly to create actionable results.
Some marketing campaigns are already altered in real-time as positive and negative reactions from customers and would-be customers are assessed and measured via social media and other forms of feedback.
These new ways of doing business create rapidly changing demands on a company’s IT department.
The days of year-long development projects are over. The technology demands of today’s businesses change on a daily basis.
Predicting the future is a dangerous game, particularly when it comes to enterprise technology. But looking forward five years or so we can be fairly sure that the pace of change is not going to decrease.
Businesses will be using more, not fewer, applications, and using them for more business functions. Enterprises need to build a hardware infrastructure that has some chance of keeping up with the accelerating pace of change.
Taking the first steps
Moving to a composable infrastructure might seem like a frightening prospect.
But one of the benefits of the new architecture is that it doesn’t require a ‘rip it up and start again’ strategy. It can be added onto existing architecture and developed in steps.
HPE’s Paul Durzan has written a neat checklist, or bill of rights, which outlines the main questions to ask any supplier.
These begin with ‘the right to use a single infrastructure for all applications’ – this is almost the definition of composable infrastructure: you must be able to run any application across any system, whether it is cloud-based, a virtual machine or your own server.
The system should include: ‘The right to software-defined intelligence’ – any composable infrastructure should remove any complexity or management requirements from provisioning resources and maintaining systems. The system should also give you the right to do all this through a single API.
Any move to composable infrastructure should help future proof your IT investment. It should remove the barriers which can be created by hardware, allow you to upgrade when you want to at your own speed and be flexible enough to deal with whatever changes the future brings.
Finally the bill of rights ends with a quote from Mark Twain – ‘the secret to getting ahead is getting started’. So any composable infrastructure project should allow you to move forwards in steps – getting the benefits of the new infrastructure where they will be most valuable without interrupting core systems and functions.
The full list of ten points is here: