Credo AI Governance Platform: Reinsurance Provider Algorithmic Bias Assessment and Reporting

A global provider of reinsurance used Credo AI’s platform to produce standardised algorithmic bias reports to meet new regulatory requirements and customer requests.

Background & Description

The reinsurance provider needed a way to streamline and standardise its AI risk and compliance assessment process, with the goal of continuing to demonstrate responsibility and good governance to customers and regulators while significantly reducing the burden of governance reporting on technical teams. With [Credo AI's Responsible AI Platform](https://www.credo.ai/solutions/risk-management), the company found a solution that met its needs.

Relevant Cross-Sectoral Regulatory Principles

Safety, Security & Robustness

By systematically evaluating AI models for biases, the reinsurance provider is able to pre-emptively identify and correct disparities that might lead to unfair outcomes for certain groups of people. This proactive stance on bias detection and mitigation is expected not only to enhance the fairness and reliability of its AI applications but also to build trust among stakeholders and users. By establishing an internal AI risk and compliance assessment process, the reinsurance provider was able to assess its risk prediction models for unintended harmful bias systematically and at scale.
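As an illustrative sketch only, and not the provider's or Credo AI's actual implementation, the snippet below shows the kind of group-disparity check such a systematic bias assessment might run against a risk prediction model, using the open-source fairlearn library. The dataset, column names, and protected attribute are hypothetical.

```python
# Illustrative sketch only: a minimal group-disparity check of the kind a
# systematic bias assessment might run against a risk prediction model.
# The data, column names, and protected attribute are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

# Hypothetical hold-out data: model predictions, true outcomes, and a protected attribute.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "y_true": rng.integers(0, 2, 1000),
    "y_pred": rng.integers(0, 2, 1000),
    "sex": rng.choice(["female", "male"], 1000),
})

# Compare accuracy and selection rate across demographic groups.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=df["y_true"],
    y_pred=df["y_pred"],
    sensitive_features=df["sex"],
)
print(frame.by_group)      # per-group metric values
print(frame.difference())  # largest between-group gap for each metric
```

In practice, the choice of fairness metrics and the groups to compare would be set by the organisation's own assessment policies rather than hard-coded as here.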

Fairness

The reinsurance provider integrated Credo AI into its machine learning operations (MLOps) system. MLOps is a set of practices that aims to deploy and maintain machine learning models in production reliably and efficiently. By incorporating Credo AI into this workflow, the reinsurance provider could automatically run bias and performance checks on its AI models before they are deployed. These checks are built into model development, meaning every new AI model must pass through them.

Once a model is built, data scientists no longer have to remember to run these bias and performance tests manually because the system does so automatically. After the tests are completed, the results are sent back to the Credo AI Platform, which helps ensure that the models meet the required standards for fairness and effectiveness. This setup helped the reinsurance provider maintain high standards for its AI models while adhering to its compliance and risk management protocols.
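A minimal sketch of what such an automated pre-deployment gate could look like is shown below. It is an illustration under stated assumptions rather than Credo AI's actual API: the metric choice, threshold, and example data are hypothetical, and the non-zero exit code stands in for the pipeline step that blocks deployment and reports results back to the governance platform.

```python
# Illustrative sketch only: an automated pre-deployment bias gate of the kind
# an MLOps pipeline might run. The threshold and metric are hypothetical and
# this is not Credo AI's actual API.
import sys
from fairlearn.metrics import demographic_parity_difference

def bias_gate(y_true, y_pred, sensitive_features, max_disparity=0.1):
    """Return the demographic parity gap and whether it is within the agreed threshold."""
    gap = demographic_parity_difference(
        y_true, y_pred, sensitive_features=sensitive_features
    )
    return gap, gap <= max_disparity

if __name__ == "__main__":
    # In a real pipeline these would come from the model's validation run;
    # the results would then be posted to the governance platform for review.
    y_true = [0, 1, 1, 0, 1, 0, 1, 1]
    y_pred = [0, 1, 0, 0, 1, 0, 1, 1]
    sex = ["f", "m", "f", "m", "f", "m", "f", "m"]

    gap, passed = bias_gate(y_true, y_pred, sex)
    print(f"demographic parity difference: {gap:.3f} (pass={passed})")
    sys.exit(0 if passed else 1)  # a non-zero exit code blocks deployment
```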

Accountability & Governance

The provider used the resulting risk and compliance reports to demonstrate to its customers, and to regulators, that it was effectively governing its AI and mitigating potentially harmful bias in its models.

Why we took this approach

The insurance industry, like many regulated industries, is facing increasing scrutiny around its application of AI/ML to sensitive use cases like risk prediction and fraud detection. In particular, policymakers and customers have focused on algorithmic fairness as a critical issue for insurance and reinsurance companies to address as they apply machine learning models to these areas. These concerns are reflected in regulations like Colorado’s SB21-169, which prohibits insurers from using any “algorithm or predictive model” that unfairly discriminates against an individual based on protected attributes like race and sex.

This approach allowed the reinsurance provider to systematically map, measure, and evaluate its AI models for biases based on its internal risk and compliance assessment policies and regulatory requirements.

Benefits to the organisation using the technique

Prior to using Credo AI, the compliance assessment process was managed in Excel, and putting together a risk and compliance report placed a significant burden on technical development teams. By implementing Credo AI, the global reinsurance company was able to reduce the time it takes for an ML model to get through its risk and compliance assessment process, while still producing high-quality risk and compliance reports to share with customers and regulators.

The reinsurance company worked with Credo AI to develop a set of custom Policy Packs that operationalised the company’s internal risk and compliance assessment policies. This allowed the governance team to manage and track progress through the risk and compliance assessment process, rather than having to navigate through many different Excel spreadsheets and Word documents.
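Policy Packs are configured within the Credo AI platform itself. Purely as a hypothetical illustration of what "operationalising" a written policy into machine-checkable requirements can look like, the sketch below encodes two example requirements and evaluates a set of assessment results against them; the metric names and thresholds are invented for illustration and are not the company's actual policy.

```python
# Purely hypothetical illustration of turning a written governance policy into
# machine-checkable requirements; this is not Credo AI's Policy Pack format.
POLICY = {
    "name": "Internal bias and performance policy (illustrative)",
    "requirements": [
        {"metric": "demographic_parity_difference", "max": 0.10},
        {"metric": "accuracy", "min": 0.85},
    ],
}

def evaluate_policy(metrics: dict, policy: dict = POLICY) -> list[str]:
    """Return a list of human-readable failures for the governance report."""
    failures = []
    for req in policy["requirements"]:
        value = metrics[req["metric"]]
        if "max" in req and value > req["max"]:
            failures.append(f"{req['metric']}={value:.3f} exceeds max {req['max']}")
        if "min" in req and value < req["min"]:
            failures.append(f"{req['metric']}={value:.3f} below min {req['min']}")
    return failures

# Example: metric values produced by the automated assessment step.
print(evaluate_policy({"demographic_parity_difference": 0.07, "accuracy": 0.91}))
```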

Technical teams no longer needed to gather assessment requirements from the governance team, nor did they need to write code manually to run standard bias and performance assessments; this approach allowed them to generate technical evidence for governance without manual effort.

Limitations of the approach

Organisations need access to protected demographic data to run bias tests effectively and, ultimately, to comply with certain anti-discrimination regulations, such as New York City’s Local Law 144 (LL-144). The collection, use, and retention of demographic data can, however, be in tension with privacy laws and regulations.

This provider was only able to compile the necessary data on a quarterly basis, so it could only run bias tests on data that was up to a quarter old rather than testing in real time. Approaches that can help overcome these self-reported data limitations include human-annotated demographic data, which relies on a human annotator’s best perception of an individual’s demographic attributes, and machine-inferred demographic data, which relies on algorithmically inferring an individual’s demographic attributes. However, both of these alternatives can present additional risks, including exacerbating biases.

Further AI Assurance Information

Published 9 April 2024