At the Citrix Synergy event in Berlin in early October, the audience saw 600 virtual desktops created in seconds to demonstrate the power of virtualisation. Whether you find that image amazing or horrifying, the point is the same: we now have powers to reshape our ICT environments that simply did not exist before virtualisation.

But are we using that power – comic book hero-style – ‘for good’?

For some, the move to virtualisation is a no-brainer – and if the CIO hasn’t already aggressively replaced his physical server portfolio with a virtual equivalent, he’s practically sabotaging his company’s chance of survival. For others, it’s a risky move that carries the implicit danger of real-world mess simply being replaced with virtual inefficiency. Who’s right?

Without doubt, there is a regularly aired fear that poor virtualisation can give you ‘virtual server sprawl’. In May, for instance, one US vendor (Embotics) claimed that an environment of 150 virtual machines (VMs) may have anywhere from $50,000 to $150,000 worth of IT utility "locked up" in "redundant" virtual spaces – and that some of its customers found, after a physical audit, that more than 50% of the VMs in their environment were not actually being used.

There seems little point in going virtual if all you end up with is the same management headache – ghost VMs all over the place instead of underused physical servers. As Eric Kuzmack, an enterprise strategist for Dell, says: "Too many people forget that doing VMs isn’t ‘free’ – it’s so easy to spin up another VM and not think of the drain on resource."

And as Julian Box, CTO at cloud services firm Virtustream, adds: "Every VM consumes resources – memory, disk, processor and network. If you consume all of your resources with unnecessary or deprecated virtual machines, then you won’t have the necessary resources available when a real need presents itself."

There are also the well-known pitfalls around software licensing and asset management, which may have implications on both the compliance and support fronts.

CBR recently put a question to the market: how can I manage my virtualisation move in such a way as to avoid duplicating inefficiency and creating such sprawl?

The range of responses suggests that, as with so many promising ICT techniques designed to make your life easier as an IT leader, getting virtualisation right is as much about the way the project is planned and managed as it is about the underlying technology itself. Pick a supplier and they’ll have their own special remedy for the problem – which surely tells us the answer lies not in the technology alone but in something more. For example, some say that once servers are virtualised, the environment tends to police itself better, as it’s now clearer to see what’s what and where.

Other vendors say I/O or storage virtualisation is the ‘real’ answer, as that will again drive overall efficiency. It seems there really is no such thing as a free lunch… not even a virtual one, alas.

People and processes
It’s important to note that in CBR’s research none of the three leading virtualisation platforms – VMware’s vSphere, Citrix’s XenServer or Microsoft’s Hyper-V – was criticised as promoting VM sprawl or inefficiency. Still, some experts did say the easiest way to avoid VM sprawl is to, well, choose the right supplier or consultant – as in, their firm.

Another group is convinced that a virtualisation project succeeds in the same way any other complex IT project does: by getting the people and processes as right as the products. Take the London Borough of Hillingdon, one of the capital’s largest local authorities, which has moved to a virtualised storage environment with Compellent, a specialist in the area, and has installed VMware to virtualise servers, too. Roger Bearpark, assistant head of ICT, says he’s been able to consolidate 94 production servers down to just three and reduce the number of server rooms from three to two as a result, saving significantly on power, too. Had he worried about re-creating the same ‘mess’?

"I think we avoided that by concentrating in people and skills more than focusing on the technology alone," he says. "There’s a lot of potential to do and create more in the virtual environment, but the way to make the most of that is the same sort of good project management disciplines you should be using anyway."

Key to that is planning as thoroughly as possible before a single VM is spawned. "Any enterprise starting a project like this needs to ensure that correct scoping of the physical infrastructure is carried out, to ensure that sufficient virtualised resources are available to support the current requirements and projected growth," says Duncan Ellis, European systems engineering director at networks firm Ciena. "This includes sizing the CPU use, memory utilisation, storage requirements and connectivity requirements."
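
Ellis’s scoping point lends itself to a back-of-the-envelope calculation. The Python sketch below estimates how many physical hosts a given VM fleet needs once projected growth and headroom are factored in; the VM profiles, host sizes and growth figures are our own illustrative assumptions, not Ciena’s.

```python
import math

def hosts_needed(vms, host_cores=16, host_ram_gb=128, growth=0.25, headroom=0.20):
    """Estimate physical hosts for a VM fleet, allowing for growth and headroom."""
    cores = sum(vm["cores"] for vm in vms) * (1 + growth)   # projected CPU demand
    ram = sum(vm["ram_gb"] for vm in vms) * (1 + growth)    # projected memory demand
    usable_cores = host_cores * (1 - headroom)              # keep 20% spare per host
    usable_ram = host_ram_gb * (1 - headroom)
    return max(math.ceil(cores / usable_cores), math.ceil(ram / usable_ram))

# An illustrative fleet: 40 small and 10 medium VMs, sized before any are spawned
fleet = [{"cores": 2, "ram_gb": 8}] * 40 + [{"cores": 4, "ram_gb": 16}] * 10
print(hosts_needed(fleet))  # -> 12 hosts for this hypothetical fleet
```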

"To ensure the virtualisation project is efficient you still need to use effective, ‘traditional’, IT processes," says Richard Blanford, managing director of infrastructure optimisation consultancy Fordway. "In the virtualisation context this means having good, effective change control, release management, capacity management and systems management, not allowing staff to make unapproved changes, making sure old, unused systems are controlled and deleted when not in use so they don’t consume resources unnecessarily, and having effective systems management so you know what is out there and can control and monitor it."

In other words, fight fire with fire. If the problem is management, get the right management software to run the new environment, just as you would have for the old one. Look to your internal processes and policies and, if you need to, set different privileges and limit administrators’ rights to create new VMs. Another approach is to classify virtual servers by workload – I/O-, CPU- or memory-heavy – and only allocate VMs to physical machines of the matching class, as the sketch below illustrates.
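
Here is a deliberately simple placement sketch of that workload-class idea; the class names and host pools are assumptions for illustration, and a real scheduler would also weigh live utilisation figures.

```python
import itertools

# Hypothetical host pools tagged by workload class; hostnames are illustrative
HOST_POOLS = {
    "cpu":    itertools.cycle(["host-cpu-01", "host-cpu-02"]),
    "memory": itertools.cycle(["host-mem-01", "host-mem-02"]),
    "io":     itertools.cycle(["host-io-01", "host-io-02"]),
}

def place(vm_name, workload_class):
    """Round-robin a VM onto the next host in the pool matching its class."""
    host = next(HOST_POOLS[workload_class])
    print(f"{vm_name} -> {host}")
    return host

place("db-cluster-01", "io")   # db-cluster-01 -> host-io-01
place("batch-runner", "cpu")   # batch-runner -> host-cpu-01
```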

Majority verdict
For Bob Tarzey, analyst and director at UK analyst house Quocirca, "what really helps is having the tools to manage this and move workload around, from physical to virtual and so on, or indeed to make sure the wrong workloads do not end up in the wrong place for compliance purposes. Tools like Novell PlateSpin and Hyperformix, now part of CA, can help with all this."

"Going to a virtual world means being able to provide just as much governance of who’s using what and where as you had in the physical world where you can point to actual boxes," Dell’s Kuzmack adds. Dell, of course, has its VIS portfolio to push here, which it is positioning as seamlessly integrating into existing systems management strategy and investment, but his point is valid in itself. (There is also the HP Virtual Connect architecture in this space.)

But it seems the majority verdict is that the best way to avoid this problem is to see virtualisation not as an end in itself but as a stepping stone to a private cloud architecture. The idea is that virtualisation can be a great way to start the shift by introducing cloud basics such as shared infrastructures, service catalogues, portals, automation, dynamic/flexible infrastructures and chargeback models. It’s an idea that seems to be gaining ground – market research published by Novell in October claimed that private clouds are the "next logical step for organisations already implementing virtualisation", according to 89% of the sample (200 CIOs).
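
Of those cloud basics, chargeback is the simplest to make concrete. Below is a minimal sketch of a metered model; the unit rates are entirely made up for illustration and would in reality be set by finance against actual infrastructure costs.

```python
# Illustrative monthly unit rates; all figures are assumptions
RATES = {"vcpu": 15.0, "ram_gb": 5.0, "storage_gb": 0.10}

def monthly_charge(vcpus, ram_gb, storage_gb):
    """Simple metered chargeback: units consumed multiplied by unit rates."""
    return (vcpus * RATES["vcpu"]
            + ram_gb * RATES["ram_gb"]
            + storage_gb * RATES["storage_gb"])

# A 4-vCPU, 16GB RAM, 200GB VM: 60 + 80 + 20 = 160 per month
print(monthly_charge(4, 16, 200))  # -> 160.0
```

Even a crude model like this changes behaviour: once a spun-up-and-forgotten VM carries a visible monthly cost, sprawl becomes a line item rather than an invisible drain.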

Says Tarzey: "Virtualisation has many benefits for larger organisations in its own right, but increasingly it makes sense to use it as the basis for building a private cloud, which allows applications to be deployed easily without having to assign them specific physical resources."

Marc Benioff, founder of cloud pioneer salesforce.com, has claimed that the best way to understand the benefits of cloud versus simple virtualisation on its own is to remember that we used to make our own electricity locally at factories, but when the national grid came along, we just plugged in.

So, what is the intelligent way to head off virtual server sprawl? It’s actually all of the above – good supplier choice, proper project planning and effective management – because virtualisation is really just good enterprise ICT, with all the complexity that brings. Maybe we shouldn’t be surprised that, in the end, it isn’t the raw technology itself that is either the problem or the solution, but how we as business-IT facilitators plan, manage, source and run it in our specific environments. Perhaps we should focus less on the specific ‘location’ and more on the ‘destination’.

"Whether virtualisation is the end goal or just a stepping stone to the cloud, to be deployed successfully, organisations must be able to create exceptional applications, manage the traffic to those applications to control operational cost and complexity and deliver those applications successfully to end-users," Owen Garrett, product manager of Zeus Technology, a provider of online traffic management software, reminds us. "It shouldn’t matter whether the applications are located in a physical data centre, on a virtualised platform or in any infrastructure cloud."

The verdict from the virtualised coalface
How do the CIOs themselves feel about the debate? We spoke to two who had recently led a successful virtualisation project – one in financial services, the other at an eCommerce firm – to get their views.

For Joel King, infrastructure architect at the international division of South Africa-based Standard Bank, which worked with thin-client supplier Wyse to move to an enterprise-wide, VMware-based virtual desktop, both better management options and a move to private cloud are what’s needed. "In terms of control, management and capacity planning, there weren’t as many solutions when we started as there are today, and that could have been useful," he told CBR. "A move to private clouds would also make sense, as it would give us, I think, more control, in either a software or workflow-based context."

Walter Sinowski, chief technology officer of pan-European dating site Parship, worked with NTT Europe to consolidate data, applications and web servers into a single virtualised environment that better serves the site’s 10 million online users. "You need good internal capability to get the most out of the technology – I’d say at the enterprise architect level," he says. "We also found having accurate monitoring tools at the service layer, so we knew exactly what was being requested by which element, very useful, too."