It’s clear that corporations want to buy insurance to reduce their exposure to losses from cyber-attacks, and insurers have responded to the need. However, most buyers are dissatisfied – the coverage amounts are low, and the covered events are too narrow. From the insurer’s point of view, it had to be this way, due to historic challenges with visibility into cyber risk and liability.
When everyone wants the same kind of policy, the insurer has to think about the systemic risk, and if that systemic risk is poorly understood, each individual policy has to stay small. Think of everyone in a medieval town wanting to buy fire insurance at the same time – individually, they all want the same thing, but the insurer can’t take on the combined risk without understanding whether the houses are all in the same town, or made of the same flammable material.
Until recently, there has been little research into this systemic risk for cyber insurance. A new report co-authored by Lloyd’s of London aims directly at the nightmare scenario: what if all the insured businesses use the same technologies, and therefore share a common flaw, and we have the equivalent of the Great Fire of London?
The report estimates that direct economic losses from a “cloud service disruption scenario” could reach $121 billion. In the abstract, that could be just another mind-numbing number, but to put it in context, they point out that it’s far higher than the estimated $70bn in costs stemming from 2012’s devastating Superstorm Sandy.
It’s time to wake up to the reality that as our world goes digital, and threats continue to escalate, a major online incident could decimate the nascent cyber insurance industry.
Fortunately, tools exist today which can help underwriters gain a more accurate insight into cyber risk across hybrid cloud networks, ensuring more accurate premiums, fewer claims and stable profits. They can even help to strengthen customer relationships by providing actionable information that IT teams can then use to improve network resilience.
Counting the cost
The Lloyd’s report – Counting the cost: Cyber exposure decoded – makes clear the structural problems with cyber insurance. Compared with other classes of insurance, understanding of risk is “relatively underdeveloped” (as they politely put it). In an industry fuelled by data analysis, it’s a continuing source of frustration that there are so few authoritative sources of data available to work with. Some estimates claim cybercrime alone cost the global economy as much as $450 billion last year, and with globe-spanning attacks like WannaCry and NotPetya unleashed with increasing frequency, it could reach $2 trillion by 2019.
Yet what of individual incidents? The Lloyd’s report, co-developed with risk modelling firm Cyence, paints two scenarios. The more damaging one involves a sophisticated group of hacktivists modifying a key hypervisor at a global cloud provider, causing customer servers to fail and widespread disruption.
The point of this scenario, of course, is that it is a shared weakness – akin to suddenly learning that every building you insure has a shared but previously unknown source of fire risk, and worse, a scenario where a fire in one building is likely to spread across all the others. The direct economic impact of such an event is estimated at $4.6 billion for a large event and $53.1 billion for an “extreme” event, but could be as high as $121.4 billion, depending on the organizations involved and how long the disruption lasts.
Even with coverage levels at an estimated 13-17%, such an incident could prove catastrophic for an industry Lloyd’s estimates is worth just $3.5bn today. The report has the following gloomy prediction:
“When assessing current estimated market premiums against the forecasted cyber scenario insurance loss estimates set out in the report, it is apparent that a single cyber event has the potential to increase industry loss ratios by 19% and 250% for large and extreme loss events, respectively. This illustrates the catastrophe potential of the cyber-risk class.”
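The arithmetic behind those loss-ratio figures can be roughly reconstructed from the numbers quoted elsewhere in the report. Below is a minimal sketch in Python, assuming a mid-range 15% coverage rate (picked from the 13-17% range cited above) and the $3.5bn premium pool; the coverage assumption is ours, not the report’s, so the outputs only approximate the 19% and 250% figures.

```python
# Rough reconstruction of the report's loss-ratio arithmetic.
# Economic-loss figures are from the Lloyd's/Cyence scenarios; the 15%
# coverage rate is an assumed midpoint of the quoted 13-17% range.

PREMIUM_POOL_BN = 3.5   # estimated global cyber premium pool, $bn
COVERAGE_RATE = 0.15    # assumed insured share of economic losses

def loss_ratio_increase(economic_loss_bn: float) -> float:
    """Insured loss expressed as a fraction of annual premiums."""
    insured_loss_bn = economic_loss_bn * COVERAGE_RATE
    return insured_loss_bn / PREMIUM_POOL_BN

for label, loss_bn in [("large", 4.6), ("extreme", 53.1), ("upper bound", 121.4)]:
    print(f"{label}: +{loss_ratio_increase(loss_bn):.0%} on the industry loss ratio")
```

With these assumptions the large event adds roughly 20% to the loss ratio and the extreme event well over 200%, in line with the report’s 19% and 250% figures.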
Given the stakes, insurers must find ways to more accurately quantify and price cyber risk in a world increasingly dominated by shared cloud computing systems.
Resilience to the rescue
Many underwriters today are looking to technology providers to help, by providing cyber risk scores. However, these are usually based around external views of a particular network. That is, before you write a policy for some company, you can take a once-over of their existing cyber footprint – what is the quality of their publicly reachable infrastructure? This is an important element, but by no means tells the full story about risk. It’s akin to making a decision about the fire safety of a building based on a photograph taken from across the street. Such a photo definitely has its uses – you can see, for example, whether the building is already smouldering! That is perhaps a little harsh – more realistically, a photo of the outside of a building can show whether it is derelict, or well maintained.
Likewise, external scans of the online presence of corporations can detect gross negligence – if the externally visible machines can’t pass basic hygiene checks, you have a decent means to guess what the inside looks like. But the reverse isn’t true – just like a photo taken from across the street, an external scan of a company could show a nice enough façade, but a rotting interior. In cyber terms, this raises the question “what about the network?” – how well configured is it? Does it follow industry best practices? Can unqualified users reach critical internal information? Are the near-constant changes it goes through analyzed for risk? Do security management teams even know about and manage all parts of the network?
These are all questions that need answering if insurers want to get better at assessing cyber risk. The best technology solutions in this area will be able to provide a digital resilience score based around these internal network answers, as well as a complete inventory of all assets to be insured, and actionable intelligence to improve resilience going forward.
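One way such internal answers could be folded into a single score is a simple weighted checklist. The sketch below is purely illustrative – the questions mirror those listed above, but the weights and the scoring scheme are assumptions of ours, not any vendor’s actual model.

```python
# A minimal sketch of a "digital resilience score" built from internal
# network checks. Check names and weights are illustrative assumptions.

CHECKS = {
    "configuration_follows_best_practice": 0.30,
    "critical_data_access_restricted":     0.25,
    "changes_analyzed_for_risk":           0.25,
    "full_asset_inventory_managed":        0.20,
}

def resilience_score(answers: dict) -> float:
    """Weighted fraction of checks passed, on a 0-100 scale."""
    return 100 * sum(w for check, w in CHECKS.items() if answers.get(check))

# Example: a network that passes everything except change-risk analysis.
answers = {
    "configuration_follows_best_practice": True,
    "critical_data_access_restricted": True,
    "changes_analyzed_for_risk": False,
    "full_asset_inventory_managed": True,
}
print(round(resilience_score(answers), 1))
```

A real model would of course be far richer – continuous measurements rather than yes/no answers, and weights calibrated against claims data – but the shape of the calculation is the same.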
In a cloud-based world, there are also opportunities for insurers to compare and contrast weaknesses across multiple customers in a way that is much harder to do with on-premise infrastructure. The ability to correlate in this way may even allow insurers to make informed decisions about whether to force customers to diversify their mix of cloud providers.
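As a sketch of what that correlation might look like in practice, an insurer could total insured value by cloud provider across its book and flag any provider that exceeds a risk-appetite limit. The portfolio data, provider names, and the 40% limit below are all hypothetical, invented for illustration.

```python
from collections import defaultdict

# Hypothetical portfolio: (customer, cloud provider, insured value in $m).
# All names and figures are illustrative, not drawn from the report.
portfolio = [
    ("acme",     "provider-a", 40),
    ("globex",   "provider-a", 25),
    ("initech",  "provider-b", 20),
    ("umbrella", "provider-c", 15),
]

MAX_PROVIDER_SHARE = 0.40  # assumed risk-appetite limit per provider

# Aggregate insured value by provider to expose concentration risk.
exposure = defaultdict(float)
for _customer, provider, value in portfolio:
    exposure[provider] += value

total = sum(exposure.values())
for provider, value in sorted(exposure.items()):
    share = value / total
    flag = "  <-- over limit, consider forcing diversification" if share > MAX_PROVIDER_SHARE else ""
    print(f"{provider}: {share:.0%} of insured value{flag}")
```

In this toy book, one provider carries 65% of the insured value – exactly the kind of shared-infrastructure concentration the Lloyd’s scenario warns about, and much easier to spot in cloud estates than across bespoke on-premise networks.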
It’s a win-win for insurers and customers. The former can gain a competitive edge and improve profitability by expanding and tailoring their insurance policies more accurately, being more selective about whom to insure, and at what rates. For customers receiving the detailed assessments from such tools there’s an opportunity to improve baseline security, lower premiums and better mitigate cyber risk. Then there’s the prospect of creating a virtuous circle where the data generated by network resilience tools can be used to make networks more secure, lowering premiums in the process and reducing the chances of being breached. Ultimately, this is how we get away from medieval building practices – all wood, crowded, with no controls – to a modern city with open spaces and reliable buildings.
We’re not there yet. Current tools can only map IaaS hybrid set-ups; ideally we also need the same level of insight into PaaS and SaaS environments. But the good news is that there’s plenty that insurers can already do today to shine a flashlight on the murky world of cyber risk from shared infrastructure, improve the resilience of customers’ networks and drive profits in the process.
The Lloyd’s report is a sobering read, pointing to the online equivalent of a Great Fire of London, if we don’t do something to avoid it.