In days gone by, the solution to tackling increasing data centre demand was simply to throw more hardware at it. More servers meant more capacity, but it also meant more space, more power consumption, more staff hours and more ancillary kit to keep everything running.
The explosion of data – one report suggests 15 petabytes of information is being generated every day – is driving up data centre demand, which in turn is pushing up energy costs.
During the economic boom of the late 1990s and early 2000s this was not an issue; companies had money to throw at data centres. But that is no longer the case. IBM suggests that data centre costs, such as energy and space, have risen eightfold since 1996, and companies can no longer afford to throw good money after bad at the problem.
So what can enterprises do? The data, applications and other information held in a data centre have to stay there, and they have to be available whenever the company needs them. Data centres need to run all the time; switching the infrastructure off at night, along with the lights, is not an option.
Simply reducing costs is not enough; enterprises need to spend their money more wisely. That does not have to mean spending less. Instead, the pressure is on to find more effective investments. Is it sensible to cap server power consumption when that could degrade application response times and breach SLAs?
Enterprises are increasingly turning toward virtualisation as a way of reducing costs without having to worry about a decrease in the availability of business-critical operations. Virtualisation has been described as the process of abstracting “a form of technology away from its original environment – a literal and physical form” – and redelivering it in a virtual form.
The idea of virtualisation is not new. Companies such as IBM have been involved in it, in one form or another, since the 1960s, “in the specific context of separate logical partitions running in parallel on a shared mainframe,” according to the company. Since those days, virtualisation has grown to cover systems, storage, networks and applications.
Data centre consolidation, shifting the functionality of many servers onto fewer servers, enables a company to manage the infrastructure as a single entity and results in reduced space, power and cooling requirements, as well as management costs.
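The consolidation described above is, at heart, a bin-packing problem: given the resource demand of each workload, fit them onto as few hosts as possible. A minimal first-fit-decreasing sketch illustrates the idea — all the workload figures and the host capacity here are illustrative, not drawn from any vendor's tooling:

```python
# First-fit decreasing: pack workload demands (as % of one host's capacity)
# onto as few hosts as possible. All figures are illustrative.

def consolidate(demands, host_capacity):
    """Return a list of hosts, each a list of the workload demands placed on it."""
    hosts = []  # each entry: [free_capacity, [demands placed]]
    for d in sorted(demands, reverse=True):
        for host in hosts:
            if host[0] >= d:          # first host with enough headroom
                host[1].append(d)
                host[0] -= d
                break
        else:                          # no existing host fits: open a new one
            hosts.append([host_capacity - d, [d]])
    return [placed for _free, placed in hosts]

# Ten workloads that would naively occupy ten servers fit onto three.
loads = [60, 50, 40, 40, 30, 20, 20, 10, 10, 10]
print(len(consolidate(loads, host_capacity=100)))  # → 3
```

First-fit decreasing is a deliberately simple heuristic; real consolidation tools also weigh memory, I/O and failure domains, but the space, power and cooling savings the article describes follow directly from this kind of packing.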
Having all these systems in place is one thing, but getting the best out of them is another. IBM says its technology can also improve performance, with what it is calling a dynamic infrastructure. Unveiled at IBM Pulse 2009 in Las Vegas, it aims to close the gap between where a business is and where it needs to be, to enable it to operate more efficiently.
Richard Esposito, vice president of IT strategy and architecture services, IBM Global Technology, said: “A dynamic infrastructure is the intelligent connection of underlying business and IT assets that are highly-automated to reduce costs, increase service levels and better manage risk.”
Esposito went on to say that the line between business assets and IT assets is becoming blurred and that is actually improving organisational efficiency. Taking a more business-minded approach to IT assets means companies are getting greater visibility of their infrastructure, which in turn is driving greater utilisation and performance.
Al Zollar, IBM Tivoli general manager, said that this is where IBM’s dynamic infrastructure sits. “Our approach is holistic, it’s not just based on our equipment but connecting with the equipment provided by others, such as power distribution units and air conditioning units,” he said. “Then we bring all of those assets into a single unified view where we can get a single set of measurements.”
This helps provide a much clearer picture of what is happening within a data centre, says Zollar. “We are bringing not just traditional IT assets but assets that are being enabled with IT. Data centre assets like cooling and power distribution are being smartened up with sensors that can communicate with control systems that can be driven by automated actions,” Zollar said.
The improvements to data centre optimisation look set to continue. Many industry analysts have suggested that Fibre Channel over Ethernet (FCoE) has the potential to revolutionise next-generation data centres.
The ability to leverage 10Gb Ethernet networks should not only improve the performance of the data centre network but it could also reduce power and cooling costs as well as the amount of physical space needed, as the number of network interface cards required should be reduced.
In a recent interview with CBR, Dante Malagrino, marketing director of data centre solutions at Cisco Systems, offered a ringing endorsement of FCoE: “Network infrastructures can be very costly to build and manage, and it’s too complicated. So as we look at IT simplification and reducing costs, FCoE will consolidate lots of different environments on one piece of cable,” he said.
Malagrino’s comments were backed up by Craig Nunes, VP of marketing at utility storage vendor 3PAR. He recently told CBR: “We’ve seen a lot of I/O transitions come and go and they always take longer than predicted, but it is clear that FCoE has the momentum. You get better protocol consolidation, it’s easier to deal with and you get better leverage of your data centre equipment,” Nunes said.
But Nunes did issue a word of warning. “All the signs are there that it’s going to be an important interconnect from storage to host. That said, it won’t be today or tomorrow or later this year,” he said.
That still leaves enterprises with the issue of what they can do now. One company helping to deliver IBM’s dynamic infrastructure is Ilog, headquartered in Gentilly, France. In August 2008, the company was acquired by IBM for $340m, with the deal completing in January 2009.
CBR recently caught up with Jeremy Bloom, senior product marketing manager for optimisation at Ilog. He said his company is providing optimisation technologies which fit with IBM’s Smarter Planet initiative, particularly in the power supply field.
“We’re delivering a few applications for the dynamic infrastructure. I think there is a great potential there. One of the ideas behind it is that it enables you to locate and fix a lot of issues through sensors and automation,” Bloom said.
This is an important development. Enabling a company to proactively monitor and manage its infrastructure should improve resiliency. If a sensor detects that a server is getting close to operational capacity, business-critical applications can be automatically moved to a different server, with no drop-off in performance or availability.
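The sensor-driven automation described here amounts to a control loop: poll utilisation, and when a host crosses a threshold, move its workload to the least-loaded alternative with headroom. The sketch below is a toy illustration of that loop — the host names, loads, threshold and workload cost are all hypothetical, not IBM's actual tooling:

```python
# Toy rebalancing loop: when a host's utilisation exceeds a threshold,
# shift a workload to the least-loaded other host. All figures illustrative.

THRESHOLD = 0.85  # utilisation above this triggers a migration

def rebalance(hosts, workload_cost=0.2):
    """hosts: dict mapping host name -> utilisation (0.0-1.0).
    Mutates hosts in place; returns a list of (source, destination) moves."""
    migrations = []
    for name, load in sorted(hosts.items()):
        if load > THRESHOLD:
            # Candidate destination: the least-loaded of the other hosts.
            dest = min((h for h in hosts if h != name), key=hosts.get)
            if hosts[dest] + workload_cost <= THRESHOLD:
                hosts[name] -= workload_cost
                hosts[dest] += workload_cost
                migrations.append((name, dest))
    return migrations

fleet = {"web-01": 0.92, "web-02": 0.40, "web-03": 0.55}
print(rebalance(fleet))  # → [('web-01', 'web-02')]
```

A production system would track per-application demand, respect affinity rules and migrate live VMs rather than abstract “cost” units, but the principle — detect via sensors, decide via policy, act via automation — is the one Zollar and Bloom describe.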
Integrating both the business and IT infrastructures, what IBM terms a dynamic infrastructure, should enable a company to consolidate an existing infrastructure by using virtualisation technologies to operate much more effectively. It is hard to argue with the benefits: higher efficiency, improved performance and reduced energy and management costs.
Photo credit: Pandiyan on Flickr, CC licence
This article is from the CBROnline archive: some formatting and images may not be present.