Independent report

Snapshot Paper - AI and Personal Insurance

Published 12 September 2019

1. Summary

Artificial intelligence (AI) is expected to alter several dimensions of the personal insurance industry, including customer onboarding (e.g. by powering customer service chatbots), pricing (e.g. by enabling more precise risk assessments) and claims management (e.g. by screening out fraudulent claims).

Looking further ahead, AI could one day enable insurers to offer novel forms of advisory services that help customers to live healthier and safer lives, for example by recommending safer driving routes or by flagging early signs of damage in the home.

The industry is still in the nascent stages of AI adoption. Incumbents have found it challenging to marry this technology with their legacy infrastructure, as well as to find the talent required to take forward innovation programmes. Yet insurance leaders are confident that AI will soon be embedded across their value chains.

Critics say that AI could lead to detrimental outcomes for customers, particularly where it allows or requires:

  1. the collection and sharing of large data troves, which could impinge on privacy if done without the express consent of customers
  2. hyper personalised risk assessments, which could leave some individuals ‘uninsurable’ by revealing previously unseen indicators of risk
  3. new forms of nudging, where insurers use AI to alter the behaviour of customers in a way that could be viewed as intrusive

However, insurers have a strong case for engaging in these activities. Using AI to produce more accurate risk assessments, for example, could make insurance products more accessible to individuals previously deemed too risky.

Over time, the industry will need to engage with the public to reach a consensus on what constitutes a responsible use of AI and data, for example by deciding under what conditions it is acceptable to process data from social media platforms or to use algorithms to predict people’s willingness to pay higher premiums.[footnote 1]

It should also consider whether tighter controls need to be in place on the use of personal characteristics in pricing. If AI allows insurers to identify high risk characteristics they could not detect before (e.g. chronic health conditions), this could leave more people facing unaffordable premiums. Society should have a say in any decision on where to redraw the boundary between acceptable and unacceptable forms of discrimination.

The search for consensus, however, should not stop the industry from intervening today to address obvious harms. More accessible privacy notices, data discrimination audits, and industry-wide registers for third party suppliers of data are all potential measures to ensure AI and data continue to be used for the public good.

Underpinning all of these measures should be a sector-wide commitment to transparency. Without greater disclosure, insurers will struggle to build trust with customers and regulators will lack the information to design proportionate regulatory responses.

2. Introduction

For hundreds of years, the personal insurance industry has used the same practices to help people prepare for unforeseen events. Yet if predictions about the development and adoption of artificial intelligence are correct, tomorrow’s industry could look markedly different from today’s. From refining risk assessments to improving the detection of fraud, new data-driven algorithms could lead to significant changes across the insurance value chain.

Opinion is divided on whether this would be a trend to laud or lament. Some fear the adoption of AI for assessing risks could lead to a spike in prices and create a new class of ‘uninsurables’ in society. Others say AI will open up insurance to those previously locked out of the market, by revealing that they are healthier, safer and more trustworthy than they first appear. Still others worry that expanding the use of data-driven algorithms in the industry will impinge on people’s privacy, particularly where that data is collected without consent.

This paper takes a closer look at these and other claims. It examines the potential use cases of AI across the insurance industry, compares these with the reality of how AI is used today, and explores the arguments for and against such applications, looking in particular at the ethical concerns associated with data collection and sharing (which AI requires), and hyper personalised risk assessments and behavioural nudging (which AI allows).[footnote 2] It finishes by setting out several proposals for how AI could be used more responsibly by insurers.

3. How might AI change insurance?

Artificial intelligence (AI) refers to computing systems that can complete tasks requiring human-level intelligence. This paper predominantly looks at AI in the form of machine learning software, which is trained to make predictions by identifying patterns in historical data. Unlike traditional forms of software, whose rules are painstakingly hand-coded, machine learning software ‘learns’ rules by finding connections between different data points (e.g. learning that certain shapes and shades of colour in an MRI scan indicate the presence of a malignant tumour, based on what has been labelled as cancerous in the past).
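
To make this concrete, here is a minimal sketch of that learning process in Python, using scikit-learn. The features, figures and model choice are entirely synthetic and hypothetical - an illustration of pattern-learning in general, not a description of any insurer’s actual system.

```python
# Illustrative sketch only: a toy model that 'learns' a claims-prediction
# rule from labelled historical data, rather than from hand-coded logic.
# All features and data below are synthetic and purely hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical policyholder features: age, annual mileage, years claim-free.
X = np.column_stack([
    rng.integers(18, 80, n),          # age
    rng.integers(1_000, 30_000, n),   # annual mileage
    rng.integers(0, 20, n),           # years without a claim
])

# Synthetic label: whether a claim was made (the 'historical outcomes').
p = 1 / (1 + np.exp(-(0.00005 * X[:, 1] - 0.1 * X[:, 2] - 1.0)))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The model infers the relationship between features and claims itself;
# no underwriting rule is written by hand.
model = GradientBoostingClassifier().fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```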

The insurance industry has long used algorithmic approaches to price the risk of customers, including Generalised Linear Modelling, which some view as a form of machine learning. However, what has changed in recent years is the sophistication and applicability of these systems, owing to low cost data storage, growing computing resources and a developing market for new types of data. These new AI systems are expected to alter at least four dimensions of the industry:

  • Onboarding – AI is already used to identify new customers and speed up the process of providing quotes. Insurers and price comparison websites can make use of AI-powered online advertising to segment consumers and target adverts at those more likely to be looking for a policy. Insurers have also developed chatbots that use natural language processing and generation to answer customer queries and offer quotes, including via social media platforms like Facebook Messenger. The insurer Lemonade claims its chatbot can provide a personalised policy in just 90 seconds.

  • Pricing – AI can improve pricing by finding new patterns between personal characteristics and specific risks (e.g. between someone’s credit score and the quality of their driving).[footnote 3] Combined with real-time collection of data through sensors, the use of AI opens the door to hyper personalised risk scores, allowing premiums to be based on people’s actual behaviour (e.g. their exercise regime), not just the risk profile of a category to which they belong (e.g. their age group, postcode or family health conditions). A related use of AI is for customer retention, with insurers modelling the minimum benefit it would take for customers to renew their policy. (A minimal, illustrative pricing-model sketch follows this list.)

  • Claims management – AI can improve claims management by identifying fraudulent behaviour or predicting it before a claim is made. Hanzo has created AI tools that can trawl social media sites including Facebook and Twitter for incriminating evidence, such as messages that reveal someone was in a different location from the one they claimed to be in at the time of an accident. AI can also be used to undertake damage assessments. UK-based Tractable has created an AI package that can review pictures taken at the scene of a car crash and provide an instant estimate of repair costs.[footnote 4] At the back-end of insurance firms, AI can be deployed to extract relevant claims information from the bundles of written evidence passed on to insurers, including medical invoices and police reports.

  • Advising – AI can be used to advise customers on how to avoid risks. AXA’s “Xtra” health app includes a chatbot that can suggest ways for policyholders to meet fitness and nutrition goals. US tech company Cape Analytics combines machine learning software with aerial images of people’s houses to analyse the quality of their rooftops - information that can then be channelled to customers to help them spot and repair damage before it worsens. In the future, insurers may be able to use AI to steer the behaviour of policyholders in real time, for example by notifying drivers of different travel routes that are known to be safer. Innovation of this kind promises to alter the underlying business model of insurance companies, such that they generate income not only from rectifying damage but also from preventing it from occurring.
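
As noted above, Generalised Linear Models have long been the industry’s pricing workhorse, and the newer AI systems in this list build on the same idea of linking rating factors to expected claims. Below is a minimal sketch of a Poisson claim-frequency GLM using Python’s statsmodels; the rating factors and figures are synthetic assumptions for illustration only.

```python
# Minimal sketch of a classic Poisson GLM for claim frequency, the style of
# model long used in insurance pricing. Data and factors are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 10_000
df = pd.DataFrame({
    "age_band": rng.choice(["18-25", "26-40", "41-65"], n),
    "annual_mileage": rng.integers(2_000, 25_000, n),
    "exposure": rng.uniform(0.25, 1.0, n),   # policy-years in force
})

# Synthetic outcome: young drivers given a higher underlying claim rate.
base_rate = np.where(df["age_band"] == "18-25", 0.25, 0.10)
df["claims"] = rng.poisson(base_rate * df["exposure"])

# Expected claim counts scale with time on risk, hence the offset term.
model = smf.glm(
    "claims ~ age_band + annual_mileage",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["exposure"]),
).fit()
print(model.summary())
```

Machine learning approaches differ mainly in learning such relationships directly from data, rather than requiring the analyst to specify the formula by hand.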

Not every insurance innovation is driven by AI. Many insurance companies, particularly new entrants to the market, are experimenting with novel product offerings that draw more on developments in user experience (UX) and user interface (UI) design than they do on machine learning software. Several companies now operate usage-based insurance (UBI) models, allowing customers to insure property only as and when it is used (e.g. sports equipment and bicycles).[footnote 5] Similarly, many insurers are disaggregating their policies so that customers can pick and choose individual items to cover rather than be forced to take out catch-all policies.

4. How many insurers are using this technology?

It is difficult to determine the precise level of AI adoption in the industry, partly because definitions of the technology differ between firms and analysts. However, corporate surveys such as those undertaken by consultancies can help to shed some light. A recent C-suite poll from PwC found that 80 percent of global insurance chiefs believe AI is already integrated into their business or will be within the next three years. A similar survey by Accenture showed that 84 percent of insurers believe AI will either ‘significantly change or completely transform’ the industry over the same time period.

The level of investment flowing into insurance technology and ‘insurtech’ start-ups appears to corroborate these survey findings. CB Insights, a leading commercial research agency, estimates that the final quarter of 2018 represented the second-highest ever quarter of global insurtech investment. The first quarter of 2019, meanwhile, saw the highest number of insurtech transactions - 1 in 10 of which occurred in the UK - and the highest volume of Series B and Series C funding rounds since the agency began tracking investment activity. Many AI-driven insurance companies have witnessed significant growth, including Lemonade which recently launched in Europe after raising $300 million in a new funding round.

Insurance is a large and complex industry, however, and not one that will find it easy to integrate AI within its products or backend systems. While there has been an increase in AI-led innovation in recent years, both among incumbents and new market entrants, it is important not to overstate the changes witnessed to date. Irish risk management company Willis Towers Watson believes that few insurers have meaningfully integrated AI within their operations. A 2018 Capgemini survey revealed that only 2 percent of insurers worldwide have seen full-scale implementation of AI in their business, whereas 34 percent are still at the ‘ideation’ stage and 13 percent are at use-case testing.

A closer look at the industry shows there to be multiple barriers to the adoption of AI. Incumbent insurers often struggle to break free of legacy computing infrastructure, which can be difficult to marry with new forms of data-driven technology. Innovation is also made complicated by the number of players in the insurance value chain, including price comparison websites, brokers and reinsurers, each of which has its own operating systems that are difficult to align. At a more basic level, insurers can find it difficult to attract staff with the necessary technical skills to develop and oversee transformation projects.[footnote 6]

Nevertheless, the industry is seeing meaningful experimentation with AI, and it may only be a matter of time before the pilots that are underway today are turned into fully established products. Some insurance markets have already witnessed significant innovation, including the automotive sector where telematics - data tracking via in-car devices or mobile apps - has proven popular with car insurers. It is worth remembering, too, that AI does not have to be adopted at scale and in every dimension of a business for it to have a significant impact on policyholders. The increase in accuracy brought about by the use of a single algorithmic system (e.g. to aid fraud detection) could affect thousands of policyholders in a short space of time.

5. What are the ethical implications of insurers using AI?

Deployed responsibly and in competitive markets, AI could:

  • Reduce prices for policyholders – Automating aspects of onboarding, pricing and claims management could improve productivity, potentially leading to lower premium costs for consumers.

  • Lead to fairer outcomes – Using AI to filter out fraudulent claims would ensure the industry only pays out to those who deserve a settlement. Deloitte estimate that annual fraud-related costs add up to 10 percent of insurers’ overall claims expenditure.

  • Open up insurance to new groups – AI, combined with the collection of data from new sources (e.g. social media and wearables), could reveal that many individuals lead safer and healthier lives than is suggested by traditional methods of risk scoring, making them eligible for more insurance policies.

  • Protect against harm – The use of AI to advise policyholders could reduce damage to people and property. The German reinsurance company Munich Re has used machine learning to create a health-focused ‘adherence support’ service, which helps users to follow medicine schedules.

  • Incentivise take-up of insurance - By making products more useful and customer interactions more seamless, the use of AI could encourage greater take up of insurance products. This would be a welcome development in markets where relatively few people are protected from harm (e.g. income protection insurance).

Not everyone is convinced that AI will be a blessing for policyholders. Some have argued that insurers are pursuing innovation without fully considering the ethical consequences.[footnote 7] Admiral was roundly criticised in 2016 for attempting to use Facebook data to draw patterns between the content of people’s social media posts and the quality of their driving. Elsewhere, a mystery shopping investigation by The Sun newspaper found that insurers had given higher premium quotes to motorists with the name Mohammed, suggesting their underlying pricing algorithms were racially biased.

While such concerns should be taken seriously, it is important to understand the dilemma the industry faces. Insurance companies seeking to abide by legislation can find themselves on the wrong side of what is deemed ethically permissible. A central challenge is that, while insurers have legitimate reasons to use AI in the way they do, many of these behaviours are out of kilter with what some in society find acceptable. In many cases, customers themselves are divided on what they see as a valuable and ethical use of AI and data processing.[footnote 8]

Three activities in particular demonstrate these ethical tensions: i) the collection and sharing of personal data to power AI systems; ii) the use of AI to calculate hyper personalised risk scores; and iii) the use of AI to nudge policyholders to change their behaviour.

5.1 Collecting and sharing personal data to power AI systems

The insurance industry has long collected customer data to inform its decisions. For the most part, this has been provided data, where customers are asked directly for information or where that information is looked up on their behalf (e.g. credit scores). However, with more powerful algorithms at their disposal, today’s insurers are incentivised to collect a wider array of data that could yield new insights about the likelihood of customers making a claim. This includes i) observed data, which is gathered indirectly through the monitoring of customers (e.g. with wearables used to track people’s exercise regimes); and ii) inferred data, where individual characteristics can be inferred from seemingly unrelated data. An inference might be made, for example, that someone is likely to drive more or less safely based on the groups they visit on Facebook.[footnote 9] These three data types – provided, observed and inferred – can then be processed by AI systems to build a richer risk profile of customers.

  • Provided – A life insurer predicts that a person takes part in regular exercise because they have explicitly said so within a policy application form.

  • Observed – A life insurer predicts that a person takes part in regular exercise because they have observed them doing so using a wearable fitness tracker.

  • Inferred – A life insurer predicts that a person takes part in regular exercise on account of what they purchase (tracked through supermarket loyalty cards), which may have no obvious relationship with exercise.

The principal objection to the use of observed and inferred data is that it can be captured without the express consent of individuals (even if the collection of this data is legally permissible).[footnote 10] Whereas provided data is given knowingly to insurers, for example through an online form, observed and inferred data is often taken without the knowledge of customers. Few policyholders, for instance, are likely to know that insurers could be interested not just in what they type into online forms, but how they do so, including the pattern of mouse movements and the time it takes to respond to questions.[footnote 11] Some policyholders will view data collection of this kind as merely creepy. However, others may see it as a credible threat to their privacy, particularly if the data that has been collected is of a sensitive nature (e.g. data about possible medical conditions) and in danger of being leaked.

Further concern arises when insurers purchase data from third parties. The industry relies heavily on externally sourced information to train and run its algorithms. This includes credit scores gathered from credit websites and details of car repairs shared by mechanics. Insurers often need only collect a handful of data points directly from their customers in order to find additional data about them from other sources. Aviva’s Ask It Never initiative was launched to substantially cut the number of questions posed to customers by having a sophisticated system of third party data collection running in the background. While this makes for a smoother onboarding process, it may give customers the false impression that insurers hold little data on them, which in turn prevents them from exercising their right to challenge the use of that data (e.g. to have it rectified if incorrect).

Insurers could respond to these criticisms by making several adjustments, including by being more upfront with customers about the types of data they use to train and run their algorithms.[footnote 12] Yet insurers have grounds to capture data from a variety of sources. The more data they collect, the more accurate their risk assessments are likely to be, meaning premiums will more closely reflect individual risk. Insurers also have a legitimate interest to collect data in order to tackle fraud. This includes gathering information that individuals post on social media, which may be the only place to accurately gauge whether someone has made a fair claim.[footnote 13] From a legal point of view, the General Data Protection Regulation (GDPR) does not oblige insurers to ask for explicit consent from customers to collect their data, so long as there is another legal basis for processing that data.[footnote 14]

Box 1: What types of data should insurers hold onto?

As new sources of data come on stream - including wearables and telematic devices - insurers may find themselves collecting more information about their customers than is necessary to deliver their core services. While insurers may be tempted to store this data, perhaps in the expectation they will be able to put it to use in future, doing so raises several ethical concerns.[footnote 15] One is the threat to people’s privacy, especially where datasets are at risk of a cyber breach. Another relates to fair compensation. If customer data is later sold on to third parties, it raises the question of whether the subjects have been adequately reimbursed for the value they have created for the company. The collection of more data may also increase the chance that algorithms pick up biases during the training phase (see Box 2 for more detail). To limit these harms, the industry could draw up data storage standards, possibly developed by the Association of British Insurers (ABI) or British Standards Institution, that discourage insurers from storing data that is not central to their mission. Such standards could include an expectation for insurers to review their datasets on a regular basis to determine whether they are material to their core business practice, and if not, to eliminate them from company records.

5.2 Using AI to power hyper personalised risk assessments

It is not just the collection and storage of data that raises concerns. It is also the use of that data, through AI systems, to power hyper personalised risk assessments. Insurers have long sought to assess how likely it is that someone will make a claim, whether it be estimating the probability they will fall ill or be burgled while travelling abroad. But the use of new machine learning models promises to bring more precision to this process by detecting new correlations between different characteristics and risks. One insurance company reportedly draws on 1,000 data points to judge the risk of someone making a motor insurance claim, including whether they drink bottled or tap water. Coupled with real-time data collected through sensors, AI allows for people to be assessed based on their actual behaviours and characteristics, not just on what might be expected of them given the abstract group to which they belong (e.g. their age group or postcode).
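
To illustrate the shift from category-based to behaviour-based rating, the sketch below computes a toy driving score from simulated telematics readings and blends it into a premium. Every threshold, weight and figure here is invented for illustration; real scoring systems are proprietary and far more sophisticated.

```python
# Toy illustration of behaviour-based scoring from telematics events,
# as opposed to rating by group membership alone. All numbers are invented.
import numpy as np

rng = np.random.default_rng(3)

# Simulated per-trip accelerometer readings (m/s^2); negative = braking.
trips = [rng.normal(0, 1.2, size=600) for _ in range(20)]

HARSH_BRAKE = -3.0   # hypothetical threshold for a harsh-braking event

def trip_score(accel: np.ndarray) -> float:
    """Fewer harsh-braking events in a trip -> score closer to 1."""
    events = int((accel < HARSH_BRAKE).sum())
    return max(0.0, 1.0 - 0.2 * events)

behaviour_score = float(np.mean([trip_score(t) for t in trips]))

# A usage-based premium might blend a base rate with observed behaviour.
base_premium = 500.0
premium = base_premium * (1.3 - 0.5 * behaviour_score)
print(f"behaviour score {behaviour_score:.2f}, premium £{premium:.0f}")
```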

Commentators fear these granular risk assessments could leave some customers uninsurable by revealing new predictors of risk that were not apparent before (e.g. a previously undiscovered link between someone’s occupational grouping and their likelihood of falling ill at work). It is unlikely that an individual would be denied insurance outright as a result of better risk assessments, not least because these assessments are merely predictions. However, the price of insurance products could increase for some individuals to the point where they effectively become out of reach. Were price rises to affect a large number of people, the customer bases of insurance companies could shrink to such an extent that risk pooling becomes impractical.

Although it is too early to assess the distributional effects of more accurate risk assessments, one consequence could be that vulnerable and less privileged groups are left worse off. They may lack a sufficient understanding of how algorithms and new data sources influence the deals they receive from insurers, denying them the opportunity to mitigate those effects, for example by managing their social media profiles. These groups may also have less capacity to engage in the types of behaviour that insurers increasingly promote in return for discounts, for instance signing up for gym memberships (the section below looks in more detail at the behaviour change schemes of insurers). Citizens Advice recently estimated that UK home insurers make half (51 percent) of their profits from people defined by the market regulator as potentially vulnerable, suggesting that this group already lacks the capacity to seek out affordable deals.

However, just as with the collection and sharing of data, insurers have legitimate reasons to use AI to formulate more precise risk profiles. One is that some individuals will be better off as a result, for example young people who drive safely and homeowners who have taken action to protect their properties from flooding. One could argue that it would be unethical not to seek more accuracy in risk scoring, since it would be unfair to these lower risk customers. (Although note that while some risks are borne from harmful ‘behaviours’ that people can change, others are the result of personal ‘characteristics’ which they may have no control over). Suggesting that insurers hold back from deploying hyper personalised risk scores also calls into question the independence of the industry. Commercial insurers may have historically and unintentionally subsidised riskier prospects, but they are not obliged to do so. If some people become uneconomical to insure, a wider debate is needed on whether the government should be called on to intervene, and if so, on what terms.

Box 2: How can we protect customers from biased algorithms?

Insurers are prohibited by law from basing pricing and claims decisions on certain protected characteristics, including sex and ethnicity. However, other data points could feasibly act as proxies for these traits, for example with postcodes signalling ethnicity or occupation categories signalling gender. This means that AI systems can still be trained on datasets that reflect historic discrimination, which would lead those systems to repeat and entrench biased decision-making. A ProPublica investigation in the US found that people in minority neighbourhoods on average paid higher car insurance premiums than residents of majority-white neighbourhoods, despite having similar accident costs. While the journalists could not confirm the cause of these differences, they suggested biased algorithms may be to blame.[footnote 16]

Like any organisation using algorithms to make significant decisions, insurers must be mindful of the risks of bias in their AI systems and take steps to mitigate unwarranted discrimination. However, there may be some instances where using proxy data is justified. For example, while car engine size may be a proxy for sex, it is also a material factor in determining damage costs, giving insurers more cause to collect and process information related to it. Another complication is that insurers often lack the data to identify where proxies exist. Proxies can in theory be located by checking for correlations between different data points and the protected characteristic in question (e.g. between the colour of a car and ethnicity). Yet insurers are reluctant to collect this sensitive information for fear of customers believing the data will be used to directly discriminate against them.
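
The correlation check described above can be sketched in a few lines. The example below uses Cramér’s V, a standard measure of association between categorical variables, on synthetic data; the column names are hypothetical, and it assumes the insurer can lawfully hold the protected characteristic for audit purposes.

```python
# Hedged sketch of a proxy check: measure the association between each
# rating factor and a protected characteristic. Columns and data are
# hypothetical; a real audit needs the sensitive attribute, which insurers
# are often reluctant to hold.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(a: pd.Series, b: pd.Series) -> float:
    """Association between two categorical variables (0 = none, 1 = perfect)."""
    table = pd.crosstab(a, b)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, k = table.shape
    return float(np.sqrt(chi2 / (n * (min(r, k) - 1))))

rng = np.random.default_rng(2)
n = 5_000
df = pd.DataFrame({
    "protected": rng.choice(["group_a", "group_b"], n),
    "postcode_area": rng.choice(list("ABCDE"), n),
    "car_colour": rng.choice(["red", "blue", "silver"], n),
})

# Make postcode correlate with the protected trait, as a proxy might.
mask = df["protected"] == "group_a"
df.loc[mask, "postcode_area"] = rng.choice(list("AB"), mask.sum())

for col in ["postcode_area", "car_colour"]:
    print(col, round(cramers_v(df[col], df["protected"]), 2))
```

A high association would flag a rating factor for closer scrutiny; it would not, by itself, show that the factor is unjustified, for the engine-size reasons given above.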

5.3 Steering the behaviour of policyholders

A third controversial use of AI is in changing the behaviour of customers, either subtly via nudges, of which customers may be unaware, or overtly by providing advice directly.[footnote 17] Although this practice is not yet widespread in the industry, a number of insurers have begun to experiment with behaviour change schemes in the context of the policies they offer, including life insurers that promote exercise by offering premium discounts to gym goers. Most of the nudges seen to date have drawn on conventional data analytics rather than AI. However, there is clear scope for using machine learning models to give advice to policyholders, for example by informing them about the flood or crime risks of different properties, or by recommending low-risk travel routes to drivers (and rewarding them with lower premiums for following such advice).

Insurers could gain significantly from these practices, as payouts become smaller and less frequent. Yet behaviour change schemes could also pose a threat to the autonomy of policyholders, with insurers gaining the power to influence their lives in multiple ways, from where they live to how they drive to how often they exercise. While one could argue that signing up to behaviour change schemes is a choice, it would be relatively simple for insurers to turn a voluntary scheme into a mandatory one.[footnote 18] John Hancock, a large US life insurance company, decided last year to include digital fitness tracking in every one of its policies. Even if behaviour change schemes were to remain officially voluntary, the costs of not participating (e.g. losing out on premium discounts) could be great enough to make the schemes effectively mandatory. Refusing to participate in such schemes may also signal to insurers that customers are high-risk, since low-risk individuals would have every incentive to be monitored.

Still, for all these objections, it seems unreasonable to ask insurers to entirely avoid using AI to influence the behaviour of their policyholders. Many of the experiments conducted to date have shown promise in improving the lives of participants. US-based Electric Insurance Company claims its customers can save up to 20 percent off their auto insurance premiums by using its Great Driver Programme, which gives feedback on driving habits including acceleration and braking techniques. Behaviour change initiatives appear, on the surface, to be popular with some customers. A Capgemini survey found that 37 percent of consumers would be willing to share additional data with insurers in return for risk control and prevention services, while 35 percent would be willing to pay more. It is also worth noting that not all nudges attempt to change the behaviour of individuals. Some are focused on spotting and addressing physical damage (e.g. leaks in the home), which may pose fewer ethical questions.

6. What would it take for the industry to use AI responsibly?

From hyper personalised risk assessments to the expansive collection of customer data, the use of AI by insurers is allowing for and encouraging new behaviours that are highly contentious.

But are these concerns warranted? There are few excuses for using algorithms that allow for unlawful discrimination (e.g. such that ethnicity or gender become a factor in pricing), nor for collecting and holding onto customer data for longer than is necessary. However, the reality is that AI is allowing for and encouraging new behaviours on which society has no common view. While some may see the industry’s practice of gathering data from social media platforms as invasive, others say it is necessary to clamp down on fraudulent claims. Equally, although many fear that hyper personalised risk assessments will lock people out of insurance, one could argue they will open up insurance to those who were erroneously viewed in the past as too risky, for example some young drivers.

The challenge for the industry will be to find common ground on what constitutes an ethical use of AI. The Chartered Insurance Institute (CII), which has 125,000 professional members, recently took a step in this direction by launching a Digital Companion to its Code of Ethics. Drawn up with the help of several insurers and trade bodies, this code offers industry practitioners clearer guidance on how to deploy AI and data responsibly, including by abiding by the spirit of the law and not just the letter, and anticipating unintended consequences when using data. But the industry also needs to involve the wider public in these debates. This could mean using citizens’ juries and other public engagement exercises to deliberate on unsettled questions, such as the degree to which insurers should be able to nudge policyholders using AI, and when it might be unacceptable for insurers to use AI to infer characteristics about their customers.

The government should also consider whether it needs to intervene through legal and regulatory changes to ensure people have adequate access to insurance. This could mean reconsidering the types of data that should not be used in risk assessments. Lessons can be learned from the Code on Genetic Testing and Insurance, which was established by the ABI in partnership with the government to manage the use of genetic test results to make insurance decisions.[footnote 19] The government could also explore ways to financially incentivise insurers to cover individuals they would otherwise choose not to. While state-led intervention should rarely be the first port of call, the reality is that insurers would face a strong competitive disadvantage in acting alone to adjust their use of AI and data.

These are live debates that will take time to resolve. However, there is much that insurers can do today to address obvious harms, while protecting their commercial interests. While it is not for this paper to make formal recommendations, the following measures are worthy of further consideration:

  • Undertake data discrimination audits - Insurers are prohibited by law from discriminating against customers on the basis of their sex, ethnicity and several other characteristics. Yet the Financial Conduct Authority’s research suggests insurers are at risk of indirectly discriminating along these lines via proxy variables. Insurers could begin auditing their algorithms and training datasets as a matter of course to check for unwarranted bias - before, during and after their deployment.[footnote 20] This has been recommended by Insurance Europe, a federation of national insurance associations from 37 countries.

  • Review third party data and software suppliers - Insurers have a legitimate reason to purchase data from third party providers, including credit score agencies and damage repair companies. However, they should do so with caution and always request assurances from data suppliers that the information they are being given is accurate, unbiased and collected with the knowledge of the data subject. The ABI or FCA could assist due diligence checks by maintaining an industry-wide register that documents complaints and instances of poor standards among data sellers and brokers. Insurers could also review any partner organisations to which they outsource business. Liberty Mutual was fined £5.2m by the FCA in 2018 for a lack of oversight of a third party supplier whose overreliance on voice analytics software led to some claims not being investigated adequately.

  • Make privacy notices more accessible - Privacy notices, including terms and conditions statements, are the standard method by which insurers inform customers about how they use, store and share their data. But these notices are often lengthy and opaque, confusing more than they clarify.[footnote 21] In partnership with user experience designers, insurers could produce ‘key facts’ data statements that convey in straightforward terms how they use customer data and how customers can seek redress (a measure that should be possible without compromising their intellectual property). Insurers could also establish dedicated teams to answer customer queries about their data rights.

  • Comply with data protection standards - Insurers should double down on their efforts to abide by the provisions of the General Data Protection Regulation and the Data Protection Act 2018, including when it comes to storing personal data and being clear on the legal basis for processing data. Industry leaders could work more closely with the Information Commissioner’s Office to better understand their obligations and explore whether current or proposed uses of AI are at risk of breaching GDPR and DPA rules.

  • Give customers the power to port risk profiles - Customer data is regularly traded between different firms in the insurance industry, often without the knowledge or input of data subjects. But under GDPR, customers also have the right to transfer personal data of their own accord. The government’s Smart Data proposals would boost these abilities by requesting that data be transferred immediately and on an ongoing basis rather than as a one-off exchange. Insurers could assist these ambitions by acting now to help customers port their data - at least that which is not commercially sensitive - between different insurance providers. Such a move would mirror the Bank of England’s proposal for banking customers to be able to port their credit files between lenders. Separate CDEI research is exploring the potential for risk porting in different settings.

  • Establish clear lines of accountability - A 2018 study by the FCA found that several insurance companies were unable to name a dedicated member of staff who had ownership over their pricing strategy (which could include how AI-led risk assessments influence premiums).[footnote 22] Insurers should consider whether their organisational structures are fit for purpose, and whether they need to allocate individual Board members responsibility for overseeing uses of AI and other forms of data-driven technology. In doing so, insurers should refer to the FCA’s Senior Managers and Certification Regime, which requires a Statement of Responsibilities for senior managers.

Each of these measures would go some way towards keeping insurers on the right side of the ethical divide in their use of AI and data. Yet oversight will remain patchy until the industry is more transparent about how it uses data-driven technology in day-to-day operations. Do insurers collect data from social media platforms? Do they purchase data from individual sellers or data brokers? How many use AI to predict people’s purchasing power, and thereby their willingness to pay higher premiums? The answers to such questions could be used by regulators and policymakers to develop more effective governance measures. Critically, greater transparency would help to distinguish genuine threats from those that are overstated, and would support the development of interventions that are proportionate to the risk in question, thereby allowing responsible innovation to flourish.

While not quite reaching the optimum level of disclosure, some insurers are becoming more open about how they use algorithms and data. Aviva recently launched a Customer Data Charter that sets out what happens to the information they collect on customers, including whether they sell it (they do not) and who they share it with. Other insurers have established expert panels to shape company policy, including AXA, whose Data Privacy Advisory Panel meets twice a year to consider the firm’s use of data and algorithms. According to AXA, the Panel - made up of privacy experts, academics and former members of regulatory bodies - discusses the firm’s ‘actions and commitments’, and covers topics ranging from the international exchange of data to the impact of a digital single market on AXA customers. More insurers should be encouraged to develop their own ethical panels, being sure that they concentrate on issues that are specific to the firms involved rather than on generic concerns applicable to the entire industry. These could be linked to an industry-wide Code of Conduct, which is encouraged under Article 40 of the GDPR.

Box 3: Key institutions and initiatives governing the insurance industry

  • The Financial Conduct Authority is the chief regulator for 59,000 financial services firms and financial markets in the UK. This includes general insurance companies and insurance intermediaries. Relevant initiatives include the Insurance Distribution Directive, which requires firms to identify customers’ insurance demands and needs, and ensure that the products offered are consistent with them; and a General Insurance Market Study, which was launched in 2018 to study the impacts of pricing practices in the home and motor insurance markets.

  • The Prudential Regulation Authority is responsible for overseeing prudential regulation among 1,500 banks, building societies, credit unions, insurers and major investment firms. This means ensuring that financial firms hold sufficient capital and have adequate risk controls in place. The PRA does not aim to prevent firms from failing but rather to create an environment so that when a firm does fail, it does not lead to significant disruption to critical financial services.

  • The Chartered Insurance Institute is a professional body geared towards building public trust in the insurance and financial planning profession. The CII provides leadership, guidance and learning opportunities to its 125,000 members. It promotes a Code of Ethics, which advises professionals on how to abide by common ethical principles in their day to day practices, and recently published a Code of Digital Ethics as an accompaniment. It also provides qualifications, accreditation and a free online ethics course to its members.

  • The Information Commissioner’s Office is the UK’s principal regulator for upholding information rights in the public interest. The ICO advises organisations on how they can adhere to the Data Protection Act and the GDPR, among other legislation. It cuts across every sector and affects the majority of organisations, including within the insurance industry. Recent and relevant ICO initiatives include Project ExplAIn, which will assist organisations as they attempt to explain the results of AI decision-making; and the development of an Auditing Framework for AI, which will guide the regulator’s efforts in examining algorithms for fairness.

7. Where next for the industry?

This paper began by asking what difference AI might make to the insurance industry. According to its proponents, the future could be one where AI cuts the cost of premiums, reins in the number of fraudulent claims, and allows for insurance to be purchased with greater ease. Combined with the ubiquitous deployment of sensors, AI could even spur the emergence of a new service offering, with insurers not just repairing damage but intervening early on to prevent it from occurring. Yet not everyone agrees with this portrayal of the future. Critics claim the deployment of AI will lead to the most vulnerable people being excluded from insurance, while ushering in excessive levels of customer surveillance.

The reality is likely to be somewhere in between these two extremes. However, the future of the industry is not predetermined. With most insurers still in the midst of scoping out potential applications of AI in their product and service ranges, there is a window of opportunity to develop a blueprint for a credible governance regime - one that sets out a vision for the responsible use of AI but also the practical steps needed to achieve it. Before long, the industry will face fresh challenges, among them the potential entry of large tech companies into the market and the growth of new verticals such as cyber security insurance. It is in the industry’s interests to put in place the necessary safeguards for AI’s deployment before these new players and products materialise.

There is an opportunity for the UK to be a global leader in the deployment of ethical AI for insurance. Given the size of our insurance sector - the fourth largest in the world and the largest in Europe - UK-based firms have the power to influence the terms by which insurers across the world engage with AI and other data-driven technology. It is not just UK customers, therefore, who stand to gain from domestic efforts to improve how AI is deployed and governed.

8. Frequently Asked Questions

How might AI improve the insurance industry?

Artificial intelligence is expected to alter multiple dimensions of the insurance industry. This includes customer onboarding, with chatbots speeding up the time taken to deliver a quote, and pricing, with AI being used to generate more accurate predictions of whether someone will make a claim. In future, AI could be used to advise customers on how to live safer and healthier lives, for instance by suggesting less risky travel routes to drivers or by detecting early signs of damage in the home.

How many insurers use this technology and for what purposes?

Insurers have long used algorithms to inform their underwriting decisions. However, most firms are still in the nascent stages of integrating machine learning software into their operations. Multiple barriers stand in the way of adoption, from the difficulty of marrying new machine learning algorithms with legacy infrastructure, to the challenge of finding staff with the right skills to lead data transformation programmes. Nevertheless, insurers have begun to experiment meaningfully with AI, particularly within back office functions (e.g. by pairing customer correspondence with relevant claims records).

What are the main risks of using AI in insurance?

Critics say the use of AI could lead to detrimental outcomes for customers, particularly where it allows or requires: 1) the collection and sharing of large data troves, which could impinge on privacy if done without the express consent of customers; 2) hyper personalised risk assessments, which could leave some individuals ‘uninsurable’ by revealing previously unseen indicators of risk; and 3) new forms of nudging, where insurers use AI to alter the behaviour of customers in a way that could be viewed as intrusive.

Is the widespread collection and sharing of customer data problematic?

Insurers have an incentive to collect a wide range of data points about their customers. Some of this data is asked for directly, for example via forms that request information about medical issues in the family. Other data is observed or inferred, for instance inferring that someone takes part in regular exercise based on the items they buy. In some cases, customer data is purchased from third parties, including credit scores from credit agencies and repair service information from car mechanics. The principal concern about this widespread data collection is that customers are often unaware that it takes place, denying them the opportunity to address inaccuracies that may affect how the insurer treats them. Insurers, however, could argue that the only way of pricing people accurately and tackling fraud is by collecting data on this scale.

Are AI-powered risk assessments a concern?

AI is set to make risk assessments more accurate by revealing new predictors of risk. This could result in some groups paying more for their insurance premiums, possibly to the point where products become unaffordable. Yet the opposite may also be true, with AI-powered risk assessments showing individuals to be less risky than they first appear (e.g. some young drivers). If large parts of society become uneconomical to insure, a wider debate will be needed on whether the state should intervene, and if so, on what terms.

Should insurers be allowed to suggest lifestyle improvements to their customers?

AI could one day be used by insurers to advise customers on how to avoid risks, for example with chatbots suggesting healthy eating and exercise regimes. Some believe behaviour change initiatives like these would impinge on the autonomy of policyholders, while others say they could result in meaningful improvements in people’s living standards. Each initiative should be judged on its own merits, and much will depend on whether customers can truly opt out of them without penalty.

What can we do now to make sure AI is used responsibly?

A central message of this paper is that more work needs to be done to understand what the public views as an acceptable use of AI within the industry, including the types of data that insurers should be able to make use of. However, that should not stop insurers from taking steps today to address obvious harms. Among other measures, insurers could commit to regularly undertaking discrimination audits on their datasets and algorithms; making privacy notices more accessible so that customers know how their data is being used; and establishing clear lines of accountability within their organisations so that it is apparent who is responsible for overseeing the responsible use of algorithms.

9. About this CDEI Snapshot Paper

The Centre for Data Ethics and Innovation (CDEI) is an advisory body set up by the UK government and led by an independent board of experts. It is tasked with identifying the measures we need to take to maximise the benefits of AI and data-driven technology for our society and economy. The CDEI has a unique mandate to advise government on these issues, drawing on expertise and perspectives from across society.

The CDEI Snapshots are a series of briefing papers that aim to improve public understanding of topical issues related to the development and deployment of AI. These papers are intended to separate fact from fiction, clarify what is known and unknown, and suggest areas for further investigation.

To develop this Snapshot Paper, we undertook a review of academic and grey literature, and spoke with the following experts:

  • Ed Leon-Klinger, Flock
  • Peter Lukacs, Paul Hamalainen, Brian Corr and Joseph Smith, Financial Conduct Authority
  • Matt Cullen, Association of British Insurers
  • Lex Sokolin, Fintech advisor and investor
  • James Lawrence, Behavioural Insights Team
  • Nick Pester, Capital Law
  • Melissa Collett and Ian Simons, Chartered Insurance Institute
  • Andrew Morgan and Chris Mullan, Deloitte
  • Jimmy Hill, independent data scientist

10. About the CDEI

The adoption of data-driven technology affects every aspect of our society and its use is creating opportunities as well as new ethical challenges. The Centre for Data Ethics and Innovation (CDEI) is an independent advisory body, led by a board of experts, set up and tasked by the UK Government to investigate and advise on how we maximise the benefits of these technologies.

The CDEI has a unique mandate to make recommendations to the Government on these issues, drawing on expertise and perspectives from across society, as well as to provide advice for regulators and industry, that supports responsible innovation and helps build a strong, trustworthy system of governance. The Government is required to consider and respond publicly to these recommendations.

We convene and build on the UK’s vast expertise in governing complex technology, innovation-friendly regulation and our global strength in research and academia. We aim to give the public a voice in how new technologies are governed, promoting the trust that’s crucial for the UK to enjoy the full benefits of data-driven technology.

The CDEI analyses and anticipates the opportunities and risks posed by data-driven technology and puts forward practical and evidence-based advice to address them. We do this by taking a broad view of the landscape while also completing policy reviews of particular topics.

More information about the CDEI can be found on our website and you can follow us on Twitter @CDEIUK.

  1. The FCA has developed a framework for assessing when price discrimination may be a cause for concern. 

  2. This study does not comment on systemic risks, such as the danger that algorithmic decision-making leads to wide scale mispricing and market instability. These macroprudential risks should be explored in future investigations. 

  3. There is a distinction between pricing based on the risk of the customer (e.g. that their house will be burgled) and pricing based on their willingness to pay a higher amount (i.e. price discrimination). 

  4. Another example of an AI-driven solution to claims management is BAIL. Created by law firm DAC Beachcroft, BAIL gathers information from insurers about the nature of a car accident, including witness testimonials, and then uses this data to establish liability. 

  5. See for example Cuvva and Slice. 

  6. See for example the panel discussion on insurance at the 2018 CogX Festival. 

  7. See for example Jeong, S. (2019) Insurers want to know how many steps you took today. The New York Times, 10th April 2019. 

  8. A survey undertaken by Deloitte in 2015 found that 40% of customers would allow insurers to track their behaviour for a more accurate healthcare insurance premium, versus 49% who disagreed. The figures for home insurance were 38% and 45% respectively. See Deloitte (2015) Insurance Disrupted. 

  9. Sandra Wachter and Brett Mittelstadt have called for greater attention to be paid to the use of algorithms in making non-intuitive inferences. See Wachter, S. and Mittelstadt, B. (2018) A Right to Reasonable Inferences. Oxford Internet Institute. 

  10. Observed and inferred data still counts as ‘personal data’ under GDPR if it relates to an individual who can be identified. 

  11. See for example Hibbeln et al. (2014) Investigating the Effect of Insurance Fraud on Mouse Usage in Human-Computer Interactions. Thirty Fifth International Conference on Information Systems, Auckland 2014. 

  12. The GDPR gives data subjects a right to be informed about the collection and use of their personal data, including retention periods and who it will be shared with. 

  13. The US state of New York recently gave the green light to life insurance companies using data from customers’ social media profiles. Note that the ICO’s view is that social media data should be subject to the same GDPR provisions as other private data. 

  14. Although they may not be using consent, insurers must still have a legal basis for processing this data, under Article 6(1) of the GDPR. Note that insurers would need to meet extra conditions to process ‘special category’ data. This data is more sensitive, and includes information about a person’s race, ethnicity, political views and health conditions. 

  15. Under GDPR, organisations must only collect personal data which is relevant and limited to what is necessary to enable the purpose of their processing (article 5(1)(c)) and must not keep it for longer than is needed (article 5(1)(e)). 

  16. The CDEI is part way through a year long review looking at algorithmic bias, which is exploring its causes, consequences and potential remedies. For more information see: CDEI (2019) Interim report: Review into bias and algorithmic decision-making. 

  17. US scholars Cass Sunstein and Richard Thaler define nudging as ‘any aspect of choice architecture that alters people’s behaviour in a predictable way without forbidding any options’. Thaler, R. and Sunstein, C. (2008) Nudge: Improving decisions about health, wealth and happiness. Yale University Press. 

  18. Under Article 22 of the GDPR, people have a right not to be subject to automated data processing that has legal or other similarly significant effects. However, this right does not apply if the processing is necessary to fulfil a contract, which will often be the case in the insurance industry. In this circumstance, Article 22(3) requires certain safeguards, including the right to challenge the decision and to receive an explanation. 

  19. Members of the ABI are automatically signed up to the Genetics Code of Practice. For more detail see: www.abi.org.uk/data-and-resources/tools-and-resources/genetics/code-on-genetic-testing-and-insurance/ 

  20. Aided by one or more of the many tools now coming on stream (e.g. Google’s What-If, IBM’s AI Fairness 360, and Accenture’s AI Fairness Tool). 

  21. In a recent analysis of customer policy documents, the University of Nottingham found that every policy they viewed required education to at least A-level (and in most cases Graduate or Post-Graduate) to be meaningfully understood. See University of Nottingham (2018) How clear are your policy wordings? 

  22. Financial Conduct Authority (2018) Pricing practices in the retail general insurance sectors. The FCA expects firms to take reasonable care to organise and control their affairs responsibly and effectively so that the governance, control and oversight of their pricing practices are appropriate.