UK data centres will be worth $135 billion by 2025.
Let that sink in for a moment.
That’s $135 billion, in the UK alone… an astonishing number.
That’s according to a new report on the UK data economy, which also predicts that business investment in data infrastructure and new technologies will increase by up to 11 percent in the next couple of years. What does that actually mean? Apparently a $10 billion increase in the value of data.
This shouldn’t come as any real surprise. In today’s world – where new technology, increasingly tech-savvy populations, and the digitalisation of everyday life combine to create amounts of data we never dreamed of – it goes without saying that our dependence on data centres to process and store all of this was only ever going to increase. In fact, it’s a pretty safe bet that few industries are as well positioned for growth in the modern world as data centres.
But how do businesses make the most of this growth, and prepare themselves for it? Put simply, organisations need to streamline their management of data centres and address some of the huge inefficiencies that currently exist. Conventional approaches to data centre management are quite labour intensive, with IT professionals spending their days (and sometimes even nights) manually tweaking their infrastructure in order to deal with unexpected issues.
This is a colossal waste of time and resources – Gartner even puts a dollar figure on it, estimating the cost of network downtime for an organisation at roughly $42,000 an hour. With businesses averaging 175 hours of downtime each year, that works out to losses of more than $7 million annually – not to mention the reputational damage and loss of customers that come hand in hand with IT meltdowns.
The Complexity Challenge
IT infrastructures are undeniably complex – and becoming more so every day as the digitisation of applications accelerates. So, what do increasingly complicated infrastructures mean for businesses? A correspondingly greater risk of system disruption as IT teams struggle to manage these applications.
Market research firm Enterprise Strategy Group (ESG) estimates that almost half of large organisations (45%) are running more than 500 business applications. Keeping that many applications running at the required performance levels takes a great deal of time-consuming manual intervention, as well as specialised resources and an equally time-consuming process of trial and error.
So, what can organisations do to try and mitigate this problem?
Previous efforts to achieve reliability, performance and availability across this increasing number of applications have focused on watertight control of IT processes, combined with over-capacity and hardware redundancy. But that tactic has had its day: this level of complexity simply can’t be tackled effectively with conventional data centre management tools, and it only gets harder as data storage technology grows more complex.
What is needed, therefore, is a new generation of management solutions that relieve data centre administrators from arduous day-to-day work through automation and analytics, and free up their valuable time for genuine value-adding activities…
…otherwise known as an autonomous data centre.
A New Generation of Storage has Arrived
A data centre infrastructure powered by Artificial Intelligence (AI) can overcome the limitations of traditional approaches by using intelligent algorithms, fed by sensor data from the systems themselves, to effectively run itself. This intelligent AI engine can automatically detect malfunctions, bottlenecks, or faulty configurations, and has the potential to resolve them autonomously – removing the need for time-consuming human intervention. It can even blacklist problems it has previously detected, to avoid repetition and stop customers hitting issues they’ve experienced before.
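To make the blacklisting idea concrete, here is a minimal sketch in Python – purely illustrative, with invented event fields and a hard-coded fix, not any vendor’s actual engine. It fingerprints a problem the first time it appears and resolves any later recurrence automatically:

```python
import hashlib

known_fixes = {}  # problem signature -> remediation learned from a past incident

def signature(event: dict) -> str:
    """Reduce a telemetry event to a short, order-independent fingerprint."""
    canonical = "|".join(f"{k}={event[k]}" for k in sorted(event))
    return hashlib.sha256(canonical.encode()).hexdigest()[:8]

def handle(event: dict) -> str:
    """Resolve known issues from the blacklist; record new ones for next time."""
    sig = signature(event)
    if sig in known_fixes:
        return f"Known issue {sig}: auto-applying '{known_fixes[sig]}'"
    # First occurrence: the fix (hard-coded here) is recorded so that no system
    # connected to the engine hits the same problem unaided again.
    known_fixes[sig] = "rebalance volumes away from the saturated controller"
    return f"New issue {sig}: escalated, fix recorded for future automation"

if __name__ == "__main__":
    event = {"component": "controller_b", "error": "latency_spike"}
    print(handle(event))  # first time: treated as new
    print(handle(event))  # recurrence: resolved from the blacklist
```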
Not only can AI in the data centre detect and repair issues, it also has the potential to proactively suggest improvements. By leveraging the data and insights generated, it can identify opportunities for system optimisation and better performance, which in turn has a positive impact on business processes, the effectiveness of the IT team, and – ultimately – the customer experience.
How does it do this? Put simply, AI in the data centre allows for simultaneous monitoring of every system in an installed base. This lets the system build an understanding of the ideal operating environment for each workload and application, and then spot abnormal behaviour by recognising the regular, underlying I/O patterns. In other words, as the depth and breadth of data generated within your business increases, so too does the effectiveness of the AI system, because it has more of those regular patterns to learn from. Over time it will continuously look to improve your IT infrastructure – either by patching new problems as they emerge, or by suggesting new ways to optimise and improve processes.
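For illustration only, here is a minimal sketch of that baseline-and-deviation idea in Python – the metric, the sample values and the three-sigma threshold are assumptions made for the example, not the algorithm any particular product uses:

```python
from statistics import mean, stdev

def learn_baseline(samples: list[float]) -> tuple[float, float]:
    """Learn the 'normal' latency profile for a workload from historical samples."""
    return mean(samples), stdev(samples)

def is_anomalous(value: float, baseline: tuple[float, float], k: float = 3.0) -> bool:
    """Flag readings more than k standard deviations away from the learned norm."""
    mu, sigma = baseline
    return abs(value - mu) > k * sigma

if __name__ == "__main__":
    history_ms = [1.1, 0.9, 1.0, 1.2, 1.05, 0.95, 1.15, 1.0]  # typical read latency (ms)
    baseline = learn_baseline(history_ms)
    for reading in (1.1, 1.3, 9.7):
        status = "ANOMALY" if is_anomalous(reading, baseline) else "ok"
        print(f"{reading:>4} ms -> {status}")
```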
The system can then use deep telemetry data to build a shared foundation of knowledge and experience across every system connected to its AI engine globally. Using pattern-matching algorithms, the technology can analyse and predict whether any other system in the installed base will be susceptible to similar issues. The same insight allows application performance to be modelled and tuned for new infrastructure based on historical configurations and workload patterns, reducing risk for new IT deployments and cutting implementation costs.
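As a hedged sketch of that fleet-wide matching, the snippet below simply flags which systems in an installed base share the conditions under which one system hit a problem – the telemetry fields and values are invented for the example:

```python
# Hypothetical telemetry snapshots for systems in the installed base.
fleet = [
    {"id": "sys-001", "firmware": "5.1", "dedupe": True,  "workload": "oltp"},
    {"id": "sys-002", "firmware": "5.2", "dedupe": True,  "workload": "oltp"},
    {"id": "sys-003", "firmware": "5.1", "dedupe": True,  "workload": "backup"},
    {"id": "sys-004", "firmware": "5.1", "dedupe": False, "workload": "oltp"},
]

# Conditions observed on the system that actually hit the issue.
issue_pattern = {"firmware": "5.1", "dedupe": True}

def susceptible(system: dict, pattern: dict) -> bool:
    """A system is at risk if it shares every condition in the issue pattern."""
    return all(system.get(key) == value for key, value in pattern.items())

at_risk = [s["id"] for s in fleet if susceptible(s, issue_pattern)]
print("Systems to patch proactively:", at_risk)  # -> ['sys-001', 'sys-003']
```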
The Autonomous Data Centre: Faster, Better, Stronger
Based on the predictive analytics and the shared ‘knowledge’ of how to optimise system performance, the AI can determine the recommendations needed to ensure the ideal operating environment and apply those changes automatically on behalf of IT administrators. Where automation is not available, specific recommendations can be delivered through support case automation. This frees IT staff from much of the manual work involved in identifying the causes of system glitches and eliminates the guesswork in managing the infrastructure.
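Reduced to its simplest form, that “apply it automatically, otherwise raise a case” decision might look something like the sketch below – the Recommendation type and both handlers are hypothetical stand-ins, not a real product API:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    system_id: str
    action: str
    automatable: bool  # can the change be applied without human intervention?

def apply_change(rec: Recommendation) -> None:
    print(f"[auto] {rec.system_id}: applying '{rec.action}'")

def open_support_case(rec: Recommendation) -> None:
    print(f"[case] {rec.system_id}: ticket raised with recommendation '{rec.action}'")

def dispatch(recommendations: list[Recommendation]) -> None:
    """Apply what can be automated; route everything else to support."""
    for rec in recommendations:
        (apply_change if rec.automatable else open_support_case)(rec)

if __name__ == "__main__":
    dispatch([
        Recommendation("sys-001", "raise read-cache allocation to 40%", True),
        Recommendation("sys-003", "upgrade firmware to 5.2", False),
    ])
```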
Data has shown that by using our predictive analytics engine, customers resolve 86 percent of problems before the business is impacted. For the remaining 14 percent of issues, the user has immediate access to experienced engineers, who can help find a solution as quickly as possible. Similarly, data from ESG shows that 70 percent of our customers using this technology can solve problems or malfunctions in less than an hour and as many as 26 percent of them have resolved issues within 15 minutes.
To put things into perspective, with the conventional approach to data centre management, it takes an average of 84 minutes for a third (32 percent) of the users to get their issue escalated to an engineer with the right level of expertise to be able to resolve the problem.
By putting AI at the heart of data centre infrastructure management, organisations will be able to predict, prevent and resolve issues faster than ever before. This can drive significant efficiency gains and operational improvements, while making the infrastructure smarter and more reliable. Most importantly, businesses will be able to minimise service disruption and speed up the resolution of IT issues, allowing their IT teams to focus on tasks that add value and improve the quality of the customer experience.