Fairly AI: FAIRLY End-to-End AI Governance Platform

Case study from Fairly AI.

Background & Description

FAIRLY provides an AI governance platform focussed on accelerating the broad use of fair and responsible AI by helping organisations bring safer AI models to market. The system provides end-to-end AI governance, risk and compliance solutions that automate model risk management, applying policies and controls throughout the entire model lifecycle.

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

Safety, Security and Robustness

Asenion is a compliance agent built by FAIRLY through rigorous pilots with a tier-1 bank’s Internal Audit function, a BSI audit in preparation for certification under the EU AI Act, and the Responsible AI Institute’s audit for financial AI models. Asenion is an AI agent focussed on compliance enforcement, ensuring the safety, security and robustness of AI systems.

Each AI system is unique and has unique operational requirements. Asenion provides a platform for configuring an on-demand custom AI agent to a business’s unique needs based on sector, jurisdiction, use case, methods and project lifecycle stage. Asenion checks and verifies operational AI systems against policies through a set of controls. These checks can be repeated, or integrated into your systems for continuous monitoring with a single line of code.

Appropriate Transparency and Explainability

The ICO and Alan Turing Institute’s Project ExplAIn distinguishes two forms of explainability: outcome-based and process-based. The FAIRLY platform provides outcome-based explainability using industry-standard post-hoc explainability methods such as SHAP and LIME. It provides process-based explainability by making it easy to capture the micro-decisions model developers make throughout the model development cycle, with a built-in approval workflow, achieving both transparency and comprehensive explainability.
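To illustrate the intuition behind post-hoc, outcome-based explainability methods such as SHAP, the sketch below attributes a prediction to individual input features by toggling each feature from a baseline value to its actual value and measuring the change in model output. This is a simplified, self-contained illustration; the model, feature names and values are hypothetical, and it is not FAIRLY’s, SHAP’s or LIME’s actual implementation (for an additive model like this one, single-feature toggles happen to coincide with exact Shapley values).

```python
# Hedged sketch of perturbation-based feature attribution, in the spirit
# of post-hoc explainability methods such as SHAP. Hypothetical model.

def credit_model(features):
    # Hypothetical additive scoring model: income, debt, years employed.
    income, debt, years = features
    return 0.5 * income - 0.8 * debt + 0.3 * years

def attribute(model, instance, baseline):
    """Attribute the gap between model(instance) and model(baseline) by
    switching one feature at a time from its baseline value to its actual
    value and recording the change in output."""
    contributions = []
    for i in range(len(instance)):
        perturbed = list(baseline)
        perturbed[i] = instance[i]  # toggle feature i only
        contributions.append(model(perturbed) - model(baseline))
    return contributions

instance = [80.0, 20.0, 5.0]   # the applicant being explained
baseline = [50.0, 30.0, 2.0]   # a reference "average" applicant
print(attribute(credit_model, instance, baseline))
```

For this additive model the three contributions (roughly 15.0, 8.0 and 0.9) sum exactly to the difference between the instance’s score and the baseline’s score, which is the key property such attributions aim for.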

Another explainability challenge the FAIRLY platform addresses is the different levels of explanation required by a diverse audience with varied technical and non-technical backgrounds. By using LLMs combined with qualitative and quantitative controls, we auto-generate this documentation to provide transparency and explainability to all stakeholders.

Fairness

We work with best-in-class third-party providers to create fairness policy packages and controls. For example, we offer SolasAI’s Model Fairness Testing package, which is widely used in the financial services and insurance industries. We are also an official partner of ISO, which enables us to offer standards such as ISO/IEC TR 24027 (Information technology — Artificial intelligence (AI) — Bias in AI systems and AI aided decision making) as a policy package for fairness controls, testing and monitoring.
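To give a flavour of the kind of check a fairness testing control might run, the sketch below computes a disparate impact ratio between two groups’ selection rates and flags it against the common four-fifths rule of thumb used in financial services. The data and threshold here are illustrative assumptions; this is not SolasAI’s or FAIRLY’s actual test suite.

```python
# Hedged sketch of a disparate-impact fairness check (illustrative only;
# hypothetical data, not a vendor's actual fairness testing package).

def selection_rate(outcomes):
    """Fraction of favourable outcomes (1 = approved, 0 = declined)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of group A's selection rate to group B's. Values below 0.8
    flag potential adverse impact under the four-fifths rule of thumb."""
    return selection_rate(group_a) / selection_rate(group_b)

approved_group_a = [1, 0, 1, 1, 0, 1, 0, 1]  # 5 of 8 approved
approved_group_b = [1, 1, 1, 0, 1, 1, 1, 1]  # 7 of 8 approved
ratio = disparate_impact_ratio(approved_group_a, approved_group_b)
print(round(ratio, 3))  # 0.714 — below the 0.8 threshold, so flagged
```

A policy package would pair checks like this with agreed thresholds, monitored continuously rather than computed once.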

Accountability and Governance

The FAIRLY platform was incubated at Accenture’s Innovation Lab in London, UK, where we worked with the Model Risk Management teams of 14 tier-1 banks. The Model Risk Management frameworks that emerged from the 2008-2009 financial crisis have been proven in practice and are credited with keeping financial systems from collapsing during the COVID pandemic. The FAIRLY platform’s governance framework is rooted in the same Model Risk Management concepts: an independent three-lines-of-defence structure is part of the built-in approval workflow, providing an audit trail and accountability tracking.

Why we took this approach

Growing use of AI models leads to an increased need for model risk mitigation. This is a particularly daunting challenge for audit and validation teams faced with AI-based models. Common challenges associated with AI models include a lack of transparency, which makes it difficult to explain the relationship between model inputs and outputs; a lack of model stability under changing conditions; and difficulties in ensuring that training datasets are fair, unbiased and trustworthy. AI models are growing increasingly complex, which further increases model risk. High-stakes AI use cases in particular require end-to-end AI oversight.

Benefits to the organisation

FAIRLY bridges the gap in AI oversight by making it easy to apply policies and controls early in the development process and adhere to them throughout the entire model lifecycle. Our automation platform decreases subjectivity, giving technical and non-technical users the tools they need to meet and audit policy requirements while providing all stakeholders with confidence in model performance.

Limitations of the approach

We are continuing to work with partners to develop thresholds for specific use cases, as well as additional policy packages.

Further AI Assurance Information

Published 6 June 2023