Trilateral Research: Ethical impact assessment, risk assessment, transparency reporting, bias mitigation and co-design of AI used to safeguard children

Case study from Trilateral Research.

Background & Description

This case study focuses on the use of an AI-enabled system called CESIUM to enhance decision making in the safeguarding of children at risk of criminal and sexual exploitation.

Child exploitation is among the most heinous crimes in our society. By leveraging the value of the data held by safeguarding partnerships and securely sharing insights among partners, CESIUM augments multi-agency decision-making, identifying and prioritising vulnerable children. In a 2022 validation exercise, CESIUM identified 16 vulnerable children up to six months before they were referred. Findings from a 2022 validation workshop forecast at least a 400% capacity gain from a multi-agency deployment of CESIUM.

For the CESIUM application, Trilateral implemented AI assurance and ethics-by-design mechanisms and procedures in several different ways. Key steps include:

  • Inclusion of ethics requirements and ethical assessment of business objectives
  • Ethical assessment of data objectives
  • Stakeholder analysis or involvement in the business understanding phase
  • Ethical data collection and assessment
  • Ethical data description, exploration, and verification
  • Ethical assessment of modelling
  • Ethical assessment of outcomes
  • Impact assessment of foreseen deployment
  • Ethical guidelines for monitoring and governance

How this technique applies to the AI White Paper Regulatory Principles

Safety, Security & Robustness

CESIUM is hosted in the cloud and draws on current AWS data-security technologies, applying security measures at different layers of the system: limiting internal access through AWS IAM, enforcing data isolation with AWS RDS role-based access control features, and protecting data through encryption at rest and in transit. Trilateral also runs its ethical AI risk-assessment models on Amazon SageMaker.
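
By way of illustration only, the sketch below shows how two of these layers, encryption at rest and in transit and IAM-based database authentication, might be configured when provisioning an RDS instance with boto3. This is a minimal sketch, not Trilateral's actual infrastructure; identifiers such as cesium-db and cesium-postgres-params are hypothetical.

```python
# Minimal sketch (hypothetical identifiers): provisioning an encrypted,
# IAM-authenticated PostgreSQL instance on AWS RDS with boto3.
import boto3

rds = boto3.client("rds", region_name="eu-west-2")

rds.create_db_instance(
    DBInstanceIdentifier="cesium-db",           # hypothetical name
    Engine="postgres",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="cesium_admin",
    ManageMasterUserPassword=True,              # credentials held in Secrets Manager
    StorageEncrypted=True,                      # encryption at rest (AWS KMS)
    EnableIAMDatabaseAuthentication=True,       # limit access via AWS IAM
    PubliclyAccessible=False,
    DBParameterGroupName="cesium-postgres-params",  # hypothetical custom group
)

# Encryption in transit: require SSL/TLS for all connections via the
# instance's parameter group (rds.force_ssl is dynamic for PostgreSQL).
rds.modify_db_parameter_group(
    DBParameterGroupName="cesium-postgres-params",
    Parameters=[{
        "ParameterName": "rds.force_ssl",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)
```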

Appropriate Transparency & Explainability

To achieve transparency, Trilateral Research are early adopters of the UK Algorithmic Transparency Recording Standard. Moreover, our processes and findings are rigorously documented and, wherever possible (i.e. absent any IP or privacy concerns), disseminated to stakeholders.

Explainability tools should allow end users to understand, scrutinise, question and challenge the results of the algorithm. Combined with their professional judgement, this enables them to assess the applicability and relevance of results and decide whether to discard them or investigate further. To achieve meaningful explainability, we adopted a co-design process to validate and revise frontend features, ensuring that the algorithmic output is not only understandable to end users but can be integrated into their existing decision-making processes. The application includes several different explanations of algorithmic output, so that it is accessible to a variety of end users who may learn or understand information in different ways. It also includes an additional rationale that explains the system's reasoning and enables critical analysis for assured decision making.
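
As an illustration of one such explanation, the sketch below shows how per-feature contributions from a risk model (for example, SHAP-style values) might be rendered as a plain-language rationale that practitioners can scrutinise and challenge. This is a hypothetical sketch, not CESIUM's implementation; the feature names and weights are invented.

```python
# Illustrative sketch only: turning per-feature contributions (e.g.
# SHAP-style values from a risk model) into a plain-language rationale
# that end users can question and challenge. Features are hypothetical.
from typing import Dict, List

def build_rationale(contributions: Dict[str, float], top_k: int = 3) -> List[str]:
    """Rank features by absolute contribution and phrase each as a reason."""
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    rationale = []
    for feature, weight in ranked[:top_k]:
        direction = "increased" if weight > 0 else "decreased"
        rationale.append(
            f"'{feature}' {direction} the assessed risk "
            f"(contribution: {weight:+.2f})."
        )
    return rationale

# Hypothetical contributions for one risk score.
example = {"missing episodes": 0.42, "school exclusions": 0.17,
           "known association": -0.08}
for line in build_rationale(example):
    print(line)
```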

Fairness

To promote fairness and to identify and minimise bias, the multidisciplinary team mapped bias concerns, and mitigation steps for each, to every stage of product development. These concerns align with the stages of the AI lifecycle, from planning and data collection through model building and validation to monitoring and governance.

An important step in this process is agreeing which definition of fairness is most appropriate and relevant for the given use case. Quantitative fairness metrics based on that definition can then be employed and evaluated alongside qualitative assessments of unfairness and bias.
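
By way of example, and assuming a group-fairness definition has been agreed, the sketch below computes two common quantitative metrics, demographic parity difference and equal opportunity difference, over hypothetical predictions. It illustrates the general technique rather than CESIUM's actual metric suite.

```python
# Minimal sketch: two common group-fairness metrics, computed over
# hypothetical predictions for two demographic groups.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Largest gap in positive-prediction rates between groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_diff(y_true, y_pred, group):
    """Largest gap in true-positive rates between groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical labels and predictions across groups A and B.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_diff(y_pred, group))        # 0.0
print(equal_opportunity_diff(y_true, y_pred, group)) # ~0.33
```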

CESIUM contains a transparency insights dashboard that seeks to highlight any potential biases in the training data. The dashboard enables the data to be filtered by potential sources of bias such as race and gender. As an operational mitigation, when analysts identify biases and blind spots, they may implement focus periods: short windows of analysis concentrating on a particular demographic within the population that may be unfairly targeted or overlooked.
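
The sketch below illustrates the kind of disaggregation such a dashboard might surface: flag rates in training data broken down by a demographic attribute. The column names and data are hypothetical, not drawn from CESIUM.

```python
# Illustrative sketch: per-group flag rates of the kind a transparency
# insights dashboard might surface. Column names and data are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "gender":  ["F", "M", "F", "M", "F", "M", "F", "M"],
    "flagged": [1, 0, 1, 1, 0, 0, 1, 0],
})

# Per-group flag rate and share of the dataset; a large disparity here
# would prompt analysts to review for bias or blind spots (and, where
# warranted, to consider a focus period on the affected group).
summary = df.groupby("gender")["flagged"].agg(rate="mean", count="size")
summary["share"] = summary["count"] / len(df)
print(summary)
```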

Accountability & Governance

Trilateral developed a Monitoring and Governance framework for CESIUM. This framework includes decisions on how to monitor the models for degradation in accuracy and fairness, timelines for retraining the models, and clear lines of accountability identifying who is responsible for what and when. It also includes a prospective schedule for continued technical and operational validation exercises.
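
As a minimal sketch of how such monitoring decisions might be encoded, the example below checks one monitoring cycle's metrics against agreed thresholds and returns the actions owed under the policy. The thresholds and metric names are hypothetical assumptions, not values from the CESIUM framework.

```python
# Hypothetical sketch: encoding retraining triggers from a monitoring
# and governance policy. Thresholds and figures are invented.
from dataclasses import dataclass

@dataclass
class MonitoringPolicy:
    min_accuracy: float = 0.85      # retrain trigger: accuracy floor
    max_fairness_gap: float = 0.05  # escalation trigger: fairness-gap ceiling

def review_model(accuracy: float, fairness_gap: float,
                 policy: MonitoringPolicy) -> list:
    """Return the actions owed under the policy for this monitoring cycle."""
    actions = []
    if accuracy < policy.min_accuracy:
        actions.append("Schedule retraining: accuracy below agreed floor.")
    if fairness_gap > policy.max_fairness_gap:
        actions.append("Escalate to governance lead: fairness gap exceeded.")
    return actions or ["No action: model within agreed tolerances."]

# Hypothetical figures from one monitoring cycle.
for action in review_model(accuracy=0.82, fairness_gap=0.03,
                           policy=MonitoringPolicy()):
    print(action)
```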

Contestability & Redress

The Monitoring and Governance framework includes a clear definition of roles, responsibilities and whom to contact for redress and contestability concerns. This information is communicated to the relevant stakeholders.

Why we took this approach

Our goal is to promote and achieve the values that constitute ethical AI: ethics and privacy by design, explainable outputs, compliance with legal and ethical standards, support for human decision making, and accountability through shared responsibility with end users. Our technique has been created and validated through two avenues. The first is expertise in ethical theory and applied ethics, which provides a comprehensive understanding of ethical values and ethical AI. The second is two decades of experience conducting ethical impact assessments on emerging technologies, with the goal of identifying concrete, practical steps to operationalise ethics and create responsible AI tools and services. We have implemented and verified this process in several projects with public sector clients, including several police forces, and in our own AI products. Our approach is an end-to-end ethical AI process covering ethical business objectives; ethical data collection, processing and output; and an ethical assessment of the tool's operation and its impact on social good, societal values and individual rights.

Benefits to the organisation

  • Identified potential sources of bias in the datasets and created mitigations for them.
  • Provided an assessment of whether any special issues (e.g. vulnerable populations, sensitive data such as medical data or biometrics, etc.) are likely to be involved, and if so, established guidelines for addressing these special issues.
  • Included perspectives from direct and indirect stakeholders to assess their values and interests.
  • Allowed for data selection and fair algorithmic design to be coupled with an ongoing ethical need to understand the historical and social contexts into which the system might be deployed.
  • Emphasised transparency, safety and robustness, and included documentation and reporting of findings.

Limitations of the approach

This challenge is not unique to these techniques, but bias, especially implicit bias or bias appearing through proxies, can be difficult to identify in poor-quality datasets.

Further AI Assurance Information

Published 6 June 2023