Yet anyone running a data infrastructure has to confront death sooner or later. Systems that once cavorted through the dewy, evergreen pastures of their backup, recovery and archiving processes will all inevitably age. They can’t last forever. Death is, to coin a phrase, simply a fact of life.
‘End of life’ is both an objective term given by vendors who no longer wish to support legacy kit, and a subjective one where IT teams must face up to reality and strike the final blow.
With the pace of technology innovation, and migration to new hybrid cloud models, more and more IT pros can’t wait around for someone else to pull the plug. Rather than let it expire without dignity, many are proactively euthanizing their once beloved old kit.
However, when it comes to data management infrastructure, the pronouncement of death can be a tricky business whenever individuals disagree about end-of-life timing. One is reminded of the Monty Python ‘dead parrot’ sketch, when the shopkeeper gamely refuses to accept that which is inescapably true.
“This is an ex-parrot!”
“No, no it’s not – it’s just pining for the fjords…”
Any student of comedy will tell you that a great deal of positive energy can stem from facing up to the honest reality of decay, frailty and death. Python proves the point beyond mere parrots with its seminal film, “Monty Python and the Holy Grail”, the hilarious account of King Arthur’s famous quest through the Dark Ages (and its occasional bouts of Black Death).
Data management infrastructure that can’t cope with the processing and scale demands of digitally-enabled enterprises can have serious consequences for business performance. And that’s no joking matter.
Regardless of its age, any infrastructure that cannot offer simplicity, speed and scale at a sustainable cost could be a threat to revenues, profitability and compliance.
Too often, these legacy systems are unreliable, difficult to manage and prone to failure. And that’s before considering just how many hours backups and other routine processes can take. Legacy systems can also make it very hard to prove transparently to auditors exactly what your data management processes are and how they are performing.
The legacy data management solution at one of London’s biggest museums wasted 15 hours per week on monitoring backups alone, before the IT team put it out of its misery. It was also taking two days to restore a file from tape.
Likewise, the multiple backup platforms at a major regional UK government department ran six times slower than they do now. The radical improvement stemmed from being EOL’d and shifted to a modern cloud data management solution.
An insurance policy that won’t pay out
The whole purpose of old-fashioned backup was as an insurance policy against failure. Well, insurance policies are great when they ‘pay out’ quickly. The problem with legacy technology isn’t just that it is slower and clunkier than modern equivalents (an issue compounded by the sheer weight of growing data demands).
It’s that it doesn’t provide any of the additional ROI that modern approaches do. Enterprises are increasingly innovating their use of data via modern approaches to cloud data management; searching, securing, analysing and monitoring it in ways that transcend basic notions of ‘keeping a copy for safekeeping’.
Another problem is that many backups fail, particularly when (as is frequently the case in the real world) unsupported legacy technology has been crudely integrated into some kind of Heath Robinson architecture.
That’s a serious issue when it comes to expecting your ‘insurance policy’ to pay out, and finding out that – literally and figuratively – you haven’t been paying your premiums properly.
If that sounds far-fetched, then consider the impact of a ransomware attack that renders your live data unusable. What you want is a near-instant restore of all data from the last backup, and for that backup to be recent enough to have no discernible effect whatsoever on your business operations.
Unfortunately, what you might actually get is a nowhere-near-instant restore from a backup that turns out to have failed or become corrupted, forcing you to spend even longer retrieving and restoring still older archives, and leading directly to a material loss of business, customer confidence and market reputation.
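One way to avoid discovering corruption only at restore time is to verify backups routinely rather than waiting for a crisis. As a purely illustrative sketch (not a feature of any product mentioned here — the sidecar-file convention and helper names are assumptions for the example), the snippet below records a SHA-256 checksum alongside each backup file when it is written, then re-hashes the file later so silent corruption surfaces during a scheduled check instead of mid-recovery:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large backups need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_checksum(backup: Path) -> Path:
    """At backup time, write the checksum to a sidecar file next to the backup."""
    sidecar = backup.with_name(backup.name + ".sha256")
    sidecar.write_text(sha256_of(backup))
    return sidecar

def verify_backup(backup: Path) -> bool:
    """Re-hash the backup and compare it against the recorded checksum."""
    sidecar = backup.with_name(backup.name + ".sha256")
    return sidecar.exists() and sidecar.read_text() == sha256_of(backup)
```

A scheduled job that runs `verify_backup` across the archive turns “the backup was corrupt all along” from a restore-day surprise into an ordinary alert — which is, in effect, checking that the insurance premiums have been paid.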
Seek immortality or prepare to bring out your dead
By contrast, cloud data management isn’t constrained by the complexity of legacy approaches. This enables, if not immortality, then at the very least a set of awesome super-powers; resetting the kind of expectations organisations should have about – for example – their data governance, disaster recovery and instant search capabilities.
Time waits for no-one. But how much time does your IT operations team spend managing complex data integrations, waiting for backups to complete, restoring systems or searching for specific archive data?
If that feels a lot like living in the past (in more ways than one), then perhaps it’s finally time to “bring out your dead.”
This article is from the CBROnline archive: some formatting and images may not be present.