Alan Turing Institute and University of York: Trustworthy and Ethical Assurance Platform

Case study from the Alan Turing Institute and University of York.

Background & Description

The Trustworthy and Ethical Assurance (TEA) platform is an open-source tool designed and developed by researchers at the Alan Turing Institute, in collaboration with the University of York. The purpose of the tool is to support a process of developing and communicating structured assurance arguments that show how data-driven technologies, such as machine learning or AI, adhere to ethical principles and best practices. The outputs of the tool are known as ‘assurance cases’: structured, graphical representations of an argument about how a project, technology, or system meets a particular principle.
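To illustrate what this structure looks like in practice, the following sketch models the basic anatomy of an assurance case in Python: a top-level goal, supported by property claims, each grounded in evidence. The class names and example content are illustrative assumptions only and do not reflect the platform’s actual schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Evidence:
        """An artefact that grounds a claim, e.g. a test report or audit."""
        name: str
        reference: str  # e.g. a URL or document identifier

    @dataclass
    class Claim:
        """A property claim asserting that some aspect of the goal holds."""
        statement: str
        evidence: List[Evidence] = field(default_factory=list)

    @dataclass
    class Goal:
        """The top-level ethical goal that the assurance case argues for."""
        statement: str
        claims: List[Claim] = field(default_factory=list)

    # A minimal, hypothetical fairness case
    fairness_case = Goal(
        statement="Outputs of the screening model are fair across protected groups.",
        claims=[
            Claim(
                statement="Error rates are comparable across demographic groups.",
                evidence=[Evidence("Bias audit report", "https://example.org/bias-audit")],
            )
        ],
    )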

Assurance cases have been widely used in safety-critical domains, such as health, energy, and transport, for many decades. Traditionally, these have focused on goals related to technical and physical safety. The TEA platform extends this approach to consider a broader range of ethical goals.

Users are expected to have a project or system in mind, ideally at an early stage of design, and to use the platform to iteratively build a structured assurance case. To support this process, the TEA platform guides the user through developing an assurance case step by step. It also provides freely available resources and guidance to support users in identifying claims and evidence that demonstrate the achievement of a particular outcome or goal, and to help build a supportive community of users. For instance, users can share and comment on publicly available assurance cases, access argument patterns that serve as templates for implementing ethical principles throughout a project’s lifecycle, and, more generally, help build best practices and consensus around assurance standards (e.g. determining which evidence supports specific claims).
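The argument patterns mentioned above can be thought of as reusable templates: a goal and a set of placeholder claims that a project team instantiates with its own context and evidence. The following sketch illustrates this idea using the classes defined earlier; the pattern content is a simplified, hypothetical example rather than one of the platform’s published patterns.

    # A hypothetical explainability pattern: placeholder claims that a project
    # team fills in with its own context (reusing the Goal and Claim classes
    # sketched above).
    explainability_pattern = Goal(
        statement="Decisions made by {system} can be meaningfully explained to {stakeholders}.",
        claims=[
            Claim(statement="{system} uses a model family that supports suitable explanation methods."),
            Claim(statement="Explanations produced by {system} have been evaluated with representative {stakeholders}."),
        ],
    )

    def instantiate(pattern: Goal, **context: str) -> Goal:
        """Fill a pattern's placeholders to produce a project-specific case skeleton."""
        return Goal(
            statement=pattern.statement.format(**context),
            claims=[Claim(statement=c.statement.format(**context)) for c in pattern.claims],
        )

    triage_case = instantiate(
        explainability_pattern, system="the triage model", stakeholders="clinicians"
    )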

The assurance cases can be used for a wide range of purposes, including internal quality assurance, reflection, and documentation, as well as external assurance (e.g. compliance or auditing).

Why we took this approach

Our rationale for taking this approach was (a) to enable more diverse users and stakeholders to participate in the co-creation of ethical standards and best practices for a wide range of principles (e.g. fairness, explainability), and (b) to build on a well-established and validated method for safety assurance, with its existing standards, norms, and best practices, while extending the methodology to include ethical goals and practices. In doing so, this tool also supports and aligns with principles-based regulatory frameworks, such as the UK Office for AI’s pro-innovation approach to AI regulation, which outlines the following principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

We also sought to ensure that our platform was easy to use and accessible, recognising the needs and challenges that many sectors and domains face (e.g. low levels of readiness for data-driven technologies). The platform has therefore been designed to be simple and accessible, but also flexible and extensible through additional guidance, freely available on our documentation site.

The open-source nature of the tool also allows for extensibility and community support. For instance, a free-to-access version of the tool is available so that users and organisations can deploy the platform in a local or private environment.

Benefits to the organisation using the technique

  1. Aiding transparent and structured communication within project teams and among stakeholders to help create a more systematic and open approach to AI assurance;
  2. Providing a logical structure that supports the integration of evidence from disparate sources (e.g. model cards, international standards), to help users identify shared best practices and communicate emerging best practices within a single platform;
  3. Making the implicit explicit by helping project teams clearly specify the practical steps and decisions taken over the course of a project’s lifecycle, and linking respective claims together into a unified (and evidence-based) argument;
  4. Aiding project management and governance by providing a flexible tool for transparent documentation of assurance processes;
  5. Supporting ethical reflection and deliberation through complementary resources (e.g. structured bias identification and mitigation activity, templates for assuring general ethical principles); and
  6. Supporting an open-source repository that helps build a shared knowledge base and, through shared feedback, improves the usability of the platform for the wider community.

Limitations of the approach

Ideally, developing an assurance case involves wide-ranging stakeholder engagement, as well as iterative deliberation and input from expertise across a project team. This can demand significant time and organisational capacity, and in large or distributed teams it can present a barrier to effective project governance. However, the methodology is highly flexible, and tiered or proportionate approaches can be followed.

Further AI Assurance Information

Published 19 September 2023