Connected sensors and other datasets allow insurance providers to tailor their policies to individual customers. Now, insurers are using insights from this data to influence policyholders' behaviour. This combination of granular analysis and behavioural influence changes how insurance providers manage risk, shifting it from being distributed collectively to being assigned to specific individuals. Critics warn this risks entrenching inequalities, and so far it is unclear how hyper-personalisation should be governed.
Since 2008, there have been more networked devices than people. There were 8.7 billion active IoT devices worldwide at the end of 2020, according to Transforma Insights, and this is predicted to reach 25.4 billion by 2030. More than 50% of homes in Britain now contain a smart device, according to a survey by Smart Home Week. Cars, in particular, are quickly becoming saturated with sensors: analyst company Berg Insight predicts that shipments of cars with connected telematics systems will grow from 40 million units in 2019 to more than 70 million in 2025.
The data created by these sensors is revolutionising the insurance industry, experts say. The granularity of insight it can provide on individual customers allows providers to tailor their policies to "a segment of one," says Doug McElhaney, partner at McKinsey Insurance and associate partner at the company's financial services analytics platform Ingenuity. "They can take everything about you and say 'This is exactly the premium I should give you.'"
And it is not just sensor data that is fuelling this hyper-personalisation of insurance, says McElhaney. "There are firms that collect information about what you spend your money on, including where you spend your money. There are firms and organisations that can figure out what you have in your refrigerator. All that data is out there, so the train has left the station."
In the US and other countries, insurers often use credit scores to calculate the risk of an individual policyholder making a claim. But critics argue that this is discriminatory: earlier this year, the US state of Washington banned the use of credit scores by insurance providers on these grounds. If such bans become more widespread, alternative datasets will become all the more compelling, says McElhaney. "If [they] take credit away, the telematics data is going to be awfully attractive [for car insurance providers]. And you can argue it's objectively fair. It's just how you drive."
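As a rough illustration of the kind of pricing logic McElhaney is alluding to, the sketch below scores a driver from telematics readings rather than a credit score. It is a minimal, hypothetical example: the feature names, weights and thresholds are invented for illustration and are not drawn from any insurer's actual model.

```python
from dataclasses import dataclass

@dataclass
class TelematicsSummary:
    """Hypothetical monthly summary of one driver's telematics readings."""
    miles_driven: float
    hard_brakes_per_100_miles: float
    night_driving_share: float      # fraction of miles driven late at night
    average_speeding_margin: float  # average mph over the posted limit

def driving_risk_score(t: TelematicsSummary) -> float:
    """Combine driving behaviour into a 0-1 risk score.

    The weights are illustrative only; a real insurer would calibrate
    them against historical claims data.
    """
    score = (
        0.4 * min(t.hard_brakes_per_100_miles / 10.0, 1.0)
        + 0.3 * t.night_driving_share
        + 0.3 * min(t.average_speeding_margin / 15.0, 1.0)
    )
    return max(0.0, min(score, 1.0))

def usage_based_premium(base_premium: float, t: TelematicsSummary) -> float:
    """Scale a base monthly premium by driving risk and by mileage (exposure)."""
    usage_factor = min(t.miles_driven / 1000.0, 1.5)
    return base_premium * (0.8 + 0.6 * driving_risk_score(t)) * usage_factor

# Two hypothetical drivers with the same base premium end up paying differently.
careful = TelematicsSummary(600, 1.0, 0.02, 0.5)
erratic = TelematicsSummary(600, 9.0, 0.30, 8.0)
print(round(usage_based_premium(50.0, careful), 2))   # lower premium
print(round(usage_based_premium(50.0, erratic), 2))   # higher premium
```

The point of the sketch is simply that every input is behavioural ("it's just how you drive"), which is what makes telematics data an attractive substitute where credit scores are banned.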
How insurers influence their customers' behaviour
Once insurance providers have developed highly granular representations of individual customers, the next step is to influence their behaviour to reduce risks, says McElhaney. Insurance providers may reward customers who drive safely or eat healthily with reduced premiums or more direct rewards such as cash payments or vouchers. "This is behavioural psychology that you're layering into this," he explains.
South Africa's Discovery Insurance, for example, has created what it calls the Vitality Behaviour Change Platform which "guides, incentivises and provides clients with access to a broad range of personal pathways to lessen personal risk". The company prices its services dynamically in response to policyholders' behaviour. "By understanding the correlations between behaviour, cost and outcomes and by leveraging behavioural economics to design a behaviour change model that plugs into insurance and other financial services products, we are able to impact behaviour positively and measure price-related risk dynamically on an ongoing basis," it says.
And insurers are not just rewarding good behaviours. "The industry has started to shift to [a] bimodal approach to pricing where they are actually starting to increase your premium if you are a bad driver," says McElhaney.
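A minimal sketch of the two-sided adjustment McElhaney describes, again with invented thresholds and percentages: the same behaviour-derived score that earns a discount at one end of the range triggers a surcharge at the other.

```python
def bimodal_premium(base_premium: float, risk_score: float) -> float:
    """Adjust a premium in both directions from a behaviour-derived risk score.

    risk_score is assumed to lie in [0, 1]; the thresholds and percentages
    are illustrative, not taken from any real pricing model.
    """
    if risk_score < 0.3:    # consistently low-risk behaviour: reward
        return base_premium * 0.85
    if risk_score > 0.7:    # consistently high-risk behaviour: surcharge
        return base_premium * 1.25
    return base_premium     # middle of the distribution: no adjustment
```

Recomputing an adjustment like this as fresh sensor data arrives is what turns pricing from a one-off underwriting decision into an ongoing behavioural lever.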
How will the hyper-personalisation of insurance be regulated?
This combination of extreme data collection and behavioural influence gives insurance companies not only more insight into policyholders' lives than many may be comfortable with, but also greater control over their actions. This may well change the role the insurance industry plays in society, from collectively pooling risk to accentuating and entrenching inequality.
A report by the World Economic Forum on the future of data-driven healthcare warned last year that misuse of sensor data could lead to "discriminative insurance policies and prices, making it harder for marginalised populations to access basic healthcare, and other types of insurance." More broadly, it could "fragment the solidarity foundation of insurance and shake its socioeconomic and ethical basics".
So far, however, aside from general privacy protections, there have been few concrete measures to regulate hyper-personalisation in insurance. The European Commission's proposed AI rules would impose certain obligations on organisations that operate "high risk" AI systems, such as ensuring data quality and transparency. But European consumer rights group BEUC argues that "this leaves out many uses of AI which affect consumers in their everyday lives, such as smart thermostats or risk assessment for health insurance". In the US, the National Association of Insurance Commissioners (NAIC) has assembled a Big Data Working Group that is examining the need for new governance mechanisms.
In the meantime, then, governance of hyper-personalisation within insurance will depend on existing rules and self-regulation. It may even rely on the social conscience of insurance industry leaders, says McElhaney. "You would hope leadership in these companies are thinking about what the social norm is that we should be adhering to."