In 2012, one trend is highly noticeable: more and more organisations are looking to cloud computing as the next big step in their IT strategies. In Gartner’s most recent worldwide CIO survey, cloud was the third most important technology priority for CIOs in 2012, beaten only by mobile technology and business intelligence, while IDC predicted that businesses would maintain their interest in the cloud despite increasingly conservative IT budgets in 2012.
However, the attractiveness of the cloud for businesses can often cause organisations to take the plunge without first testing the water.
Why, Cloudius?
There are clear benefits driving this adoption: by providing services and resources on-demand, the cloud can greatly increase the efficiency and adaptability of the IT department.
An external cloud service will also allow the IT department to switch the costs of that service from up-front capital expenditure (capex) to ongoing operational expenditure (opex), which is often easier to manage. This can be particularly useful for specialist services such as in-memory computing, data replication and mobile applications, which can be hugely resource-intensive for the IT department to set up and maintain in-house.
However, cloud services are not a ‘get out of jail free’ card: as with any form of outsourcing, if a business cannot effectively manage a service in-house then placing it in the hands of an external supplier will not magically make everything run smoothly.
The success of any cloud project starts in the data centre. The primary job of the IT department is to manage both the resources it controls and the services it offers, regardless of whether that service is then provided by another organisation. If it can’t guarantee this, then it is operating on very shaky foundations.
At the very least, it will find it impossible to truly gauge the success and value of any external cloud project. At worst, it may be inadvertently putting extra pressure on itself and its infrastructure, which could cause the whole edifice to crumble.
Even an internal cloud project can easily become an albatross around the IT department’s neck if it doesn’t already have its house firmly in order.
As a result, the IT department needs to take a close look at exactly how well it is controlling its own IT resources and services before it decides what, if anything, to place in the cloud.
Work with what you have
To begin with, the IT department should look at what it already has in place as, ideally, any cloud project will follow best practices that the department uses as a matter of course.
Tasks such as IT service management (ITSM), capacity planning and automation all need to be well under control before any expansion or migration into the cloud is considered. Automation of standard processes will ensure the department can provide and manage services without having to be hands-on in every aspect.
It needs to have a firm grasp on its IT environment’s capacity before it embarks on any strategy that could change how that capacity is used. And without full control of its IT services, changing the way in which they’re offered will be a recipe for confused and increasingly angry end users.
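As a simple illustration of what having automation and capacity “well under control” can look like, the sketch below shows an automated headroom check that a department might run before provisioning a new service. It is a minimal, hypothetical example: the threshold, host names and function names are invented for illustration and do not refer to any particular tool or product.

```python
# Minimal illustrative sketch: an automated capacity check run before a new
# service is provisioned. All thresholds, hosts and names are hypothetical.

CAPACITY_THRESHOLD = 0.80  # refuse automatic provisioning above 80% utilisation


def current_utilisation(host_metrics):
    """Average utilisation across the hosts in a cluster (0.0 to 1.0)."""
    return sum(host_metrics.values()) / len(host_metrics)


def can_provision(host_metrics, required_headroom=0.10):
    """Return True only if the cluster has room for the new workload."""
    return current_utilisation(host_metrics) + required_headroom <= CAPACITY_THRESHOLD


if __name__ == "__main__":
    metrics = {"host-a": 0.55, "host-b": 0.62, "host-c": 0.48}  # sample data
    if can_provision(metrics):
        print("Capacity available: provisioning can proceed automatically.")
    else:
        print("Insufficient headroom: escalate to capacity planning.")
```

The point is not the specific rule but that the check runs without a person in the loop, so routine requests are handled consistently and exceptions are escalated.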
However, too often this is not the case: automation tools are purchased but sit unused on the shelf; resources are purchased and implemented with no idea of how they affect current capacity; and IT service desks operate at less than peak efficiency. As a result, the IT department needs to identify and address any of these outstanding issues to ensure it has full control over its infrastructure.
If and when the department has all this in place, the next question is: what needs to change? Only services it has full control and visibility over should be outsourced, to ensure that a rigorous level of control is maintained.
If the department isn’t 100% confident in its ability to manage a service, that service should be kept in-house until it is. At the same time, the department’s aim should be to have every single IT service managed to a degree where it could be handed over to the cloud tomorrow with no concerns. To do this, it needs to have a full grasp on its available resources and be working at optimal efficiency.
How to treat your resources
For organisations to truly benefit from the cloud they need to know where their workloads are. Currently, many only think they know where their workloads are, rather than knowing for sure.
At all times the IT department needs to know exactly what its resources are, what applications they are running, which parts of the business they serve and exactly how and why they are used. Only then can it manage its own IT resources to cope with peaks in demand.
Without this knowledge, any planning around IT use will be futile, as the IT department may as well be working blindfolded.
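The sketch below shows one minimal way such a workload inventory could be recorded, assuming the department tracks each resource, the application it runs, the part of the business it serves and when demand peaks. The fields and sample entries are hypothetical examples, not a prescribed schema.

```python
# Illustrative sketch only: a minimal workload inventory. Field names and
# sample values are hypothetical.

from dataclasses import dataclass


@dataclass
class Workload:
    resource: str        # physical or virtual host
    application: str     # what it runs
    business_unit: str   # which part of the business it serves
    peak_window: str     # when demand peaks


inventory = [
    Workload("vm-042", "order-processing", "Sales", "Mon-Fri 09:00-17:00"),
    Workload("vm-107", "nightly-reporting", "Finance", "Daily 01:00-04:00"),
]

# With an inventory like this, peaks in demand can be traced back to the
# workloads and business units that cause them.
for w in inventory:
    print(f"{w.resource}: {w.application} for {w.business_unit} (peak: {w.peak_window})")
```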
Alongside these resources, the data the department holds needs to be carefully controlled. It is relatively easy for information to reach the wrong eyes unless the department pays careful attention. It needs to know exactly what data is stored and where, as well as who has access, where it can be used and how it is protected. This insight will be critical when formulating a cloud strategy as it will inform what services can and should be placed in public or private clouds.
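A data register along the following lines is one hedged way to capture that insight and feed it into cloud decisions. The classifications, access lists and the candidate rule are invented for illustration; a real policy would reflect the organisation’s own governance and regulatory requirements.

```python
# Illustrative sketch, not a prescribed model: a simple data register that
# records where each dataset lives, who can access it and how it is
# protected, then flags which datasets could even be considered for a
# public cloud service. All entries and rules are hypothetical.

data_register = [
    {"dataset": "customer-records", "location": "on-prem SAN",
     "access": ["sales", "support"], "classification": "confidential",
     "encrypted": True},
    {"dataset": "public-website-assets", "location": "on-prem web tier",
     "access": ["marketing"], "classification": "public",
     "encrypted": False},
]


def public_cloud_candidates(register):
    """Example rule: anything classified confidential stays in-house."""
    return [d["dataset"] for d in register if d["classification"] != "confidential"]


print(public_cloud_candidates(data_register))  # ['public-website-assets']
```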
Lastly, the IT department needs to ensure the rest of the business understands both the value and cost of the services it provides. Unless there is an understanding of IT costs across the business there is a danger they will spiral out of control when cloud models are adopted.
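One lightweight way to build that understanding is a simple showback calculation, where the run cost of a shared service is expressed per consuming business unit. The sketch below assumes invented figures and usage shares purely for illustration.

```python
# Illustrative showback sketch: allocate the monthly cost of a shared
# service across the business units that consume it. All figures invented.

service_cost_per_month = 12_000.0  # total run cost of a shared service

usage_share = {          # fraction of the service each unit consumes
    "Sales": 0.50,
    "Finance": 0.30,
    "Operations": 0.20,
}

for unit, share in usage_share.items():
    print(f"{unit}: £{service_cost_per_month * share:,.2f} per month")
```

Even a rough allocation like this makes the cost of a service visible to its consumers before a pay-per-use cloud model makes that cost variable.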
Keeping things in perspective
Regardless of the exact actions an IT department takes when optimising its data centre, it needs to remember its role. If an optimised data centre is a cathedral then the IT department is an architect and foreman, rather than a stonemason. The department is there to manage its infrastructure and ensure the tools and equipment are in place to help its organisation as effectively as possible.
Regardless of an organisation’s ultimate aims for the cloud, taking this level of control of its data centre infrastructure should be of prime importance. Without taking these steps a business will at best be wasting money on inefficient, non-optimised infrastructure, while at worst it could be setting itself up for a very visible, very painful fall.
For more information visit www.2e2.com.