Credo AI Transparency Reports: Facial Recognition Application

A facial recognition service provider used Credo AI’s platform to provide transparency on fairness and performance evaluations of its identity verification service to its customers.

Background & Description

A facial recognition service provider required trustworthy Responsible AI fairness reporting to meet critical customer demand for transparency. To meet this demand, it used Credo AI's platform to share fairness and performance evaluations of its identity verification service with its customers.

Relevant Cross-Sectoral Regulatory Principles

Appropriate Transparency & Explainability

Transparency in the AI governance journey can take many forms. In this case, being able to provide information to downstream users, i.e. customers using the identity verification service, was a central factor for the service provider.

The AI value chain can be understood as an interrelated grouping of entities that provide, develop, deploy, and use AI. Each entity throughout the value chain needs to provide a degree of transparency about the AI model it is providing or developing: how it was trained, its intended purpose, and its intended or unintended risks. This allows third parties and customers to understand and mitigate their use-case-specific risks.

In this context, the facial recognition identity verification provider's customers ranged from early-stage startups to Fortune 500 companies, across industries from transportation to technology. Because the service was used globally for a variety of use cases, such as online proctoring and traveller identity checks, transparency became a critical element of customer trust.

Fairness

Demographic performance parity assessments are an essential tool for aligning AI models with fairness principles. They measure and analyse how individuals from different demographic groups (such as those defined by race, gender, age, and socioeconomic status) are treated by the models.

The service provider conducted a demographic performance parity assessment using Credo AI's Responsible AI Governance platform and open-source framework to measure how the facial recognition technology performed across intersectional gender and skin-tone groups.
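As a rough illustration of what such a disaggregated evaluation involves, the following Python sketch computes per-group accuracy across intersectional gender and skin-tone groups. It is not the Credo AI platform's API; the DataFrame columns and values are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical per-comparison results: the subject's annotated gender and
# skin-tone band, and whether the service's verification decision was correct.
results = pd.DataFrame({
    "gender": ["female", "female", "male", "male", "female", "male"],
    "skin_tone": ["I-II", "V-VI", "I-II", "V-VI", "V-VI", "I-II"],
    "correct": [True, True, True, False, False, True],
})

# Disaggregate accuracy across intersectional gender x skin-tone groups.
by_group = results.groupby(["gender", "skin_tone"])["correct"].mean()
print(by_group)

# A simple parity summary: the gap between the best- and worst-served groups.
print("parity gap:", by_group.max() - by_group.min())
```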

Accountability & Governance

By producing a fairness report derived from the performance parity assessment outcomes, this approach enhanced the identity verification service provider's accountability towards its customers. This level of transparency increased trust in the service and enabled customers to make more informed decisions about the context and scope within which to embed and deploy the service in their downstream products.

Why we took this approach

The service provider was able to take actionable steps to ensure that the service's development and performance were aligned with requirements from regulators, standard-setting bodies, and industry best practices. The Credo AI platform enabled the provider to use the False Non-Match Rate (FNMR) and False Match Rate (FMR) to measure performance and disparities at varying confidence thresholds, based on the guidelines set forth by the National Institute of Standards and Technology's (NIST) Face Recognition Vendor Test (FRVT).
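Under the standard definitions used in NIST's testing, FMR is the fraction of impostor (different-identity) comparisons scoring at or above the decision threshold, and FNMR is the fraction of genuine (same-identity) comparisons scoring below it. The sketch below computes both across a sweep of thresholds; the similarity scores are synthetic stand-ins, not the provider's data.

```python
import numpy as np

def fmr_fnmr(genuine_scores, impostor_scores, threshold):
    """FMR and FNMR at one similarity threshold.

    FMR:  fraction of impostor (different-identity) pairs scoring at or
          above the threshold, i.e. wrongly matched.
    FNMR: fraction of genuine (same-identity) pairs scoring below the
          threshold, i.e. wrongly rejected.
    """
    fmr = np.mean(impostor_scores >= threshold)
    fnmr = np.mean(genuine_scores < threshold)
    return fmr, fnmr

# Illustrative score distributions; a real assessment would use the
# service's actual comparison scores.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 10_000)   # same-identity pair scores
impostor = rng.normal(0.3, 0.1, 10_000)  # different-identity pair scores

for t in (0.4, 0.5, 0.6):
    fmr, fnmr = fmr_fnmr(genuine, impostor, t)
    print(f"threshold={t:.1f}  FMR={fmr:.4f}  FNMR={fnmr:.4f}")
```

Sweeping the threshold in this way exposes the trade-off the provider's customers face: raising the threshold lowers the FMR at the cost of a higher FNMR, and disparities between demographic groups can differ at each operating point.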

Benefits to the organisation using the technique

Credo AI helped the service provider curate a representative image dataset of real subjects. Its diversity of ages, genders, apparent skin tones, and ambient lighting conditions, the reliability of its annotations, and the fact that the dataset had never been used by the service provider made it an effective dataset for the assessment. More than 100 million pairwise identity verification comparisons were performed to ensure the results were statistically significant.
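The case study does not state the dataset's size, but the scale of the comparison set follows from simple combinatorics: n images support up to n(n - 1)/2 all-pairs comparisons, so a dataset in the low tens of thousands of images already yields over 100 million pairs.

```python
from math import comb

# All-pairs comparisons grow quadratically: C(n, 2) = n * (n - 1) / 2.
# The actual dataset size used in this assessment is not stated; these
# values simply illustrate the growth.
for n in (1_000, 10_000, 14_200):
    print(f"{n:>6,} images -> {comb(n, 2):>12,} pairwise comparisons")
```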

A Responsible AI fairness report was generated to provide actionable findings and insights into the performance and fairness of the service. The transparency report communicated disaggregated performance metrics across intersectional demographic groups, undesirable biases that the service may exhibit, and the groups for which mitigation is most needed.
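One simple way such findings can be summarised (the figures below are invented for illustration, not the report's actual results) is to express each group's FNMR as a ratio to the best-performing group, flagging the groups with the highest ratios as those most in need of mitigation.

```python
import pandas as pd

# Invented disaggregated results, one row per intersectional group.
report = pd.DataFrame({
    "group": ["female/I-II", "female/V-VI", "male/I-II", "male/V-VI"],
    "fnmr": [0.010, 0.032, 0.012, 0.021],
})

# Disparity ratio relative to the best-performing (lowest-FNMR) group;
# the groups with the highest ratios are where mitigation is most needed.
report["fnmr_ratio"] = report["fnmr"] / report["fnmr"].min()
print(report.sort_values("fnmr_ratio", ascending=False).to_string(index=False))
```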

This process helped illustrate how transparency and disclosure reporting can encourage responsible practices to be cultivated, engineered, and managed throughout the AI development life cycle.

Limitations of the approach

This approach underscored the significance of data availability and diversity in assessing fairness. The curated dataset did not account for age-related facial changes, limiting the assessment's applicability to practical facial recognition scenarios, and it exhibited insufficient variability in image quality, failing to capture the breadth of real-world usage conditions.

Further AI Assurance Information

Published 9 April 2024