CBR: What is backup?
RA: The general concept of backup refers to copying physical or virtual files or databases to a secondary site. A backup captures a snapshot of a point in time, which enables enterprises to return data to a previous state. Enterprises should back up any data they deem valuable, or whose loss could leave them vulnerable to considerable reputational damage should it fall into the wrong hands.
There are a whole host of ways to back up data externally, from system disks and removable hard drives to offline tape devices and cloud backups. But whichever option a business chooses, the backup repository itself must be protected against attack.
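The point-in-time idea described above can be sketched in a few lines. This is a minimal illustration rather than a production tool; the function name and paths are our own, and it assumes a local source directory and a mounted secondary drive.

```python
import shutil
from datetime import datetime
from pathlib import Path


def snapshot_backup(source: str, backup_root: str) -> Path:
    """Copy a directory tree to a timestamped folder on a secondary device.

    The timestamp in the folder name records the point in time the
    snapshot represents, so an earlier state can be restored later.
    """
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"snapshot-{stamp}"
    shutil.copytree(source, dest)  # raises if dest already exists
    return dest


# Hypothetical paths: a data directory backed up to an external drive.
# snapshot_backup("/srv/app/data", "/mnt/backup-drive")
```

Each run produces a separate snapshot folder, so older states remain available alongside newer ones, which is the property that lets a business roll data back after corruption or user error.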
CBR: What does backup protect against?
RA: Backing up data protects against data loss in the case of equipment failure or some other business disaster. It protects enterprises against the consequences of faulty software, data corruption, cyber-attacks and hacking, user error and more. The right backup solution should protect data against unauthorised access, ensure data remains unchanged during storage and guarantee accessibility when and where it’s needed.
CBR: Why is backup important to businesses?
RA: Things can and do go wrong in IT, and human error is inevitable. So when mistakes do happen, it’s vital for enterprises to have the appropriate procedures and processes in place, as well as ensuring they have a robust contingency plan.
However, in this digital age simply backing up is no longer enough. Data downtime costs businesses millions every year, with the hourly cost of a mission-critical app outage now over $100,000. It’s therefore vital that businesses not only back up their data but ensure 24/7/365 access to it by having robust backup and replication systems in place.
CBR: What happens when backup goes wrong?
RA: A prime example of backup going wrong occurred earlier this month, when GitLab suffered a near-crippling data loss as a result of ineffective backup procedures. The site was temporarily taken offline after suffering a major backup restoration failure, caused by a system admin accidentally deleting a directory containing 300GB of live production data during a database replication process.
While the site is thankfully now back up and running as normal, it lost six hours’ worth of database data which, in this digital age, is unacceptable and potentially hugely costly. In GitLab’s case, every layer of contingency appears to have been inadequate or to have failed. It’s vital that businesses learn from this event and understand the dire consequences of not having effective backup technology in place.
CBR: What steps can be taken to ensure backup does not go wrong?
RA: The reality is that what happened to GitLab could happen to so many other businesses. Despite the proliferation of sophisticated cyber breaches in the news today, 24/7/365 availability is not a priority for many IT leaders. In fact, most do not test their backups to check whether they could actually recover from an outage caused by an attack.
Our research of UK CIOs found that less than a quarter (24 percent) are able to recover mission-critical applications, and two-thirds (67 percent) fail to back up their mission-critical data more frequently than every 30 minutes. This tells us that a vast number of businesses remain at risk of suffering a fate worse than GitLab’s.
It’s therefore vital to ensure frequent backup testing, to prevent issues only being discovered when data needs to be recovered – at which point, as GitLab nearly discovered, it is often too late. In this fast-paced digital age companies must guarantee that, in the event of a failure, the recovery of even critical applications will take no longer than 15 minutes.
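One simple, automatable form of the backup testing described above is to verify that a backup copy still holds exactly the same data as the source. The sketch below (function names are our own, and a real drill would also perform a full restore) compares content digests of two directory trees.

```python
import hashlib
from pathlib import Path


def tree_digest(root: str) -> str:
    """Hash every file's relative path and contents under root.

    Two directory trees with the same digest hold identical data, so
    comparing a backup's digest against the source confirms the copy
    is intact rather than assuming it is.
    """
    h = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()


def verify_backup(source: str, backup: str) -> bool:
    """Return True only if the backup matches the source byte for byte."""
    return tree_digest(source) == tree_digest(backup)
```

Running a check like this on a schedule surfaces silent corruption or incomplete copies long before the moment a recovery is actually needed.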
While boards are squeezing budgets and expecting more from their IT deployments, GitLab’s experience highlights the need to ensure data and applications are Always-On. Businesses must have systems in place to ensure their data is always available, and must understand the basic best practices around backup and replication.