Infrastructure and networking issues are the most common cause of system outages, with server hardware failures close behind, a new report reveals. Some 95 percent of organisations globally still face unexpected downtime, with each outage lasting an average of 117 minutes. (There’s a reason for #hugops.)
Such outages cause not just hair loss for CIOs, heads of IT and their teams, but financial losses too: one hour of downtime for a “high-priority” application is estimated at $67,651 by disaster recovery specialist Veeam, which conducted the report based on a survey of 1,500 senior IT workers globally.
(Clearly where such applications are underpinning retail or financial services, those costs can rapidly spiral significantly higher than this sum.)
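Taken at face value, the report’s two headline figures combine into a simple back-of-envelope estimate of what an average outage costs. (The linear pro-rata assumption here is ours, not Veeam’s, and real costs will vary widely by sector.)

```python
# Back-of-envelope estimate from the two figures quoted above:
# Veeam's $67,651 per hour of "high-priority" application downtime,
# and the reported average unplanned outage length of 117 minutes.

HOURLY_COST_USD = 67_651   # Veeam's per-hour downtime cost estimate
AVG_OUTAGE_MINUTES = 117   # average unplanned outage length from the report

def outage_cost(minutes: float, hourly_cost: float = HOURLY_COST_USD) -> float:
    """Linear pro-rata cost of an outage of the given length (an assumption)."""
    return hourly_cost * minutes / 60

if __name__ == "__main__":
    # An average 117-minute outage works out to roughly $131,919
    print(f"Average outage cost: ${outage_cost(AVG_OUTAGE_MINUTES):,.2f}")
```

On that naive reading, the average outage for a high-priority application runs to roughly $132,000 per incident, before any sector-specific multipliers.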
The survey suggests that almost every company experiences substantial downtime, with one in every 10 servers suffering an unexpected outage each year. (Veeam cites “unreliable, legacy technologies” as hindering digital transformation, or DX, journeys and points to the “urgent need to modernize data protection” and disaster recovery as DX efforts continue.)
Dave Russell, Vice President, Enterprise Strategy, Veeam, told Computer Business Review: “In the industry we like to believe that our systems, infrastructure and networking are becoming more reliable and resilient, but this data shows that the top cause remains our data center equipment.
“Infrastructure still fails and networking issues still happen. These hardware-oriented outages are occurring at a much higher rate than most would think. Of course software in the form of applications and operating systems account for a large number of outages as well. The complexity associated with the typical data center in terms of managing and operating infrastructure and applications continues to be the primary causes of IT outages.”
He added: “Note that cybersecurity threats are a bit lower on this list in terms of what has already caused an outage, but recall earlier data from this report where cyberthreats were the number one concern going forward.
“In the future, cybersecurity and ransomware remediation are going to be significant areas to protect against. Backup is the last line of defense in a cyberthreat. At the point that you revert (restore) your backup data, you have either entirely lost, or lost confidence in the integrity of your primary data. At this point, your backups have become your primary data.”
According to the survey, modern data protection strategies hinge on cloud integrations: organisations’ ability to do DR via a cloud service (54 percent), the ability to move workloads from on-premises to cloud (50 percent), and the ability to move workloads from one cloud to another (48 percent) were all considered highly important.
Veeam CTO Danny Allan said: “The Achilles’ heel still seems to be how to protect and manage data across the hybrid cloud. Data protection must move… to a higher state of intelligence and be able to anticipate needs and meet evolving demands. Based on our data, unless business leaders recognize that – and act on it – real transformation just won’t happen.”
Other key findings of the Veeam 2020 Data Protection Trends Report included a clear emphasis on staff shortages: lack of staff to work on new initiatives (42 percent) was cited as the most impactful data protection challenge, closely followed by lack of budget for new initiatives (40 percent) and lack of visibility into operational performance (40 percent).
Some 23 percent of respondents said their organisation’s data is replicated and made business continuity (BC)/disaster recovery (DR) capable via a cloud provider; 21 percent said their data is not replicated or staged for BC/DR; and 27 percent said their data is backed up to the cloud by a Backup as a Service (BaaS) provider. Asked what their primary backup solution will be by 2022, respondents expect BaaS-managed cloud backups to account for 43 percent of backups, with self-managed backups that use cloud services accounting for a further 34 percent.