February 19, 2016 (updated 31 Aug 2016, 5:11pm)

The $16m wake-up call for the always-on enterprise? Under-pressure CIOs must fight to close the availability gap

Analysis: CIOs under pressure, boards in shock and enterprises struggling. This is the state of the always-on world today.

By Joao Lima

The always-on enterprise is far from perfect: the gap between the availability users demand and what IT can deliver has widened significantly over the last 24 months, as more people and things connect to the internet.

A report has found that between 2014 and 2016, 84% of businesses worldwide experienced a gap between what IT can deliver and what users demand, a two percentage point increase on 2014.

As a result, the costs associated with downtime have soared from an average of $10 million two years ago, to $16 million this year.

Worldwide, businesses face an average of 15 downtime incidents a year; in the UK the figure drops to 10, according to Veeam’s Availability Report 2016.

Globally, 24% of the 1,140 senior IT decision makers (ITDMs) surveyed across 24 countries ‘strongly agreed’ that despite all the investment in their organisation’s data centre, there is still an availability gap – the gulf between what IT can deliver and what users demand. In the UK the percentage is lower, at 14% (from a sample of 100 British companies); however, 68% of British ITDMs ‘agreed’ that there is an availability gap impacting business.

Richard Agnew, VP North West EMEA at Veeam, told CBR: "I think this is a wake-up call. It could possibly be a shock to the board if they were to read some of these statistics and they were to dig into their environment and understand this sort of ticking time bomb.


"Clearly the always-on enterprise is struggling, the modern data centre is still struggling to deliver the availability.

"[This is due to the] complexity of applications. The number of mission critical applications has increased significantly, and the fact that the modern data centre is always on, there is no downtime associated with it, therefore organisations are struggling to keep up and manage the data centre for the business requirements and to serve customers."

[Chart: What it means to be an always-on enterprise, according to the 1,140 respondents (percentages)]

The average cost associated with downtime has increased by $6 million in the last 12 months alone, according to the report.

This is despite almost all respondents, whose backgrounds varied across industries such as energy, finance or retail, saying that they have implemented stronger measures to reduce availability incidents.

Enterprises failing to address availability needs

The report highlights that as the world gets more connected and more people have access to the internet, the pressure on the enterprise and CIOs continues to grow.

Agnew said: "CIOs are under pressure to reduce cost, under pressure to produce better availability and better business services to turn everything around quicker, to integrate cloud.

"Every CIO I speak to says that they sort of know but quite often [nothing is done] until something goes wrong. They wait for the impact and they said they cannot let that happen again, that is regretfully one sort of things that happens.

"I think as regulation creeps on this, they will be under more pressure to mange this."

With consumers now demanding more from CIOs and businesses, the study found that 63% of users want support for real-time operations, while 59% want 24/7 global access to IT services to support international business.

Agnew said: "There is significantly more data, and more unstructured data out there. Everybody has more intelligent smart phones, the demand for data is just going to increase, which is going to push for enterprises harder to deliver that amount of availability to their customers and the scale of the data they are having to manage is just growing exponentially.

"The gap will continue to grow and the cost associated with it will grow until people realise and do something about it."

[Chart: Major impacts of downtime, by percentage of total respondents]

As the overall cost of downtime has now reached $16 million, hourly costs for mission-critical applications have also risen, to $79,510, a value that in the UK shoots up to $100,266. The UK is also the only country where any ITDMs (1%) reported downtime costing over $500,000 an hour.

The average cost per hour of data loss resulting from downtime of mission-critical applications is $88,564 globally, while in the UK that sum is a little lower at $80,227.

When it comes to non-mission-critical applications, the average cost per hour is $56,176 globally and $42,104 in the UK.

According to Agnew, when boards look at these numbers a lot of it comes down to money; he told CBR that "everybody is driving towards operations savings, and also costs."

"It is also about explaining the cost of downtime. If most organisations knew that on average it costs them $100,000 an hour, they would probably do something about it.

"The average service level agreement [for recovery time objectives] with the business to get those back up again is 1.6 hours, but in reality it actually takes up to five hours [up to eight in the UK].

"It is taking them far too long. 1.6 in the first place is too long but the fact that it is taking them on average five hours to get up and running, I would point to the technology that is being used in data centres, which is still legacy technology."

Agnew said that most businesses are now significantly virtualised, but far too many are running legacy software to manage their virtualised environments, software which does not fit, is too complex and makes recovery take far too long.

"They should be demanding significantly better recovery point objectives and recovery time objectives to get to data backup."

Companies failing to test backups

On top of all the previously mentioned causes of the widening availability gap is the fact that most companies are failing to test their environments.

Agnew said the fact that companies are not testing their backups is worrying; they think it is all about the backup, when "it is not, it is about recovery".

"In too many cases, when they come to do the recovery they fail, so having tested backups is really important."

Another factor to such a high number of downtime occurrences is "probably the complexity of the stack on which they are running".


The Veeam report also asked businesses how often they back up their applications. Globally, companies carry out a backup every four hours on average; UK firms take an extra hour. For non-mission-critical applications, backups happen every 14 hours worldwide, and every 20 hours in the UK.
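
The interval between backups matters because any data written after the last completed backup is what stands to be lost when an incident strikes. A minimal sketch, with invented timestamps (only the intervals come from the report):

from datetime import datetime, timedelta

def data_at_risk_hours(last_backup: datetime, failure: datetime) -> float:
    """Hours of data exposed to loss between the last completed backup and a failure."""
    return (failure - last_backup).total_seconds() / 3600

# Hypothetical example: a failure striking just before the next four-hourly backup runs
last_backup = datetime(2016, 2, 19, 8, 0)
failure = last_backup + timedelta(hours=3, minutes=45)
print(f"{data_at_risk_hours(last_backup, failure):.2f} hours of data at risk")   # 3.75
# With the 14- or 20-hour intervals used for non-mission-critical applications,
# the worst-case window grows accordingly.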

However, organisations have tightened their service-level requirements: 96% of ITDMs said their organisation has to some extent increased its requirements to minimise application downtime over the past two years, and 94% its requirements to guarantee access to data. Yet the availability gap remains.

To address this, respondents said that their organisations are modernising their data centre in some way, or intend to in the near future: 85% plan to use virtualisation and 80% plan to use backups.

Just under half test their backups only on a monthly basis, or even less frequently, according to the report.

Long gaps between tests increase the chance of issues only being found when data needs to be recovered, at which point it may be too late. Of those that do test their backups, just 26% globally and 34% in the UK test more than 5% of them.

Agnew said: "That is a very small number. You are opening yourself to a problem because you would probably expect 100% of companies to test 100% of their backups."

Agnew said that regulation will probably be what presses companies into carrying out such testing.

"[Regulation] is already in some of the financial services and some governments issue penalties for companies who fail to protect and manage consumer data.

"If people cannot get access to data (…) then you will see governments possibly regulating that, and that filters down to the back-end of companies on how they manage it around testing backups, and so on. There is certainly more pressure on businesses to deliver those services, and I think governments will regulate."

With the next report expected in 2018, the prediction is that the availability gap will grow from today’s 84% as the always-on enterprise and the modern data centre fail to cope with the demands of an ever more connected world.

 
