Holistic AI: Audits

Case study from Holistic AI.

Background & Description

Holistic AI’s AI audits are a bespoke, tailor-made solution to identify AI risks that promises deep technical, quantitative analysis. Based on a detailed consultation with users of the Platform during onboarding, the precise requirements of each audit are determined to facilitate a customised experience. In particular, users are able to decide whether to focus on just one or several technical risk verticals, including bias, robustness, privacy, efficacy, and explainability. The depth of analysis can also be tailored as required to meet governance, risk, and compliance needs.

The AI Audits, which are derived from extensive academic research and industry expertise, have four key steps:

  • Triage – Holistic AI’s auditing and assurance team works with the user to document the system and assign a risk level ranging from high to low. The risk level given to a system depends on factors including the context in which it is used, the type of AI it utilises, and its inputs and outputs.
  • Assessment – Depending on the requirements of the user, Holistic AI’s auditing team assesses the system against one or more risk verticals to give a clear evaluation of its current state and residual risk:
      • Bias – the risk that the system treats individuals or groups unfairly.
      • Privacy – the risk that the system is susceptible to leaking personal or critical data.
      • Efficacy – the risk that the system underperforms relative to its use case.
      • Robustness – the risk that the system fails in response to changes or attacks.
      • Explainability – the risk that the system is not understandable to its users and developers.
  • Mitigation – Based on the risk mapping and the information collected about the system, Holistic AI recommends actions to lower its risk. These recommendations can be technical, addressing the system itself, or non-technical, addressing issues such as system governance, accountability, and documentation.
  • Assurance – Contingent on the outcome of the audit and the implementation of appropriate mitigations, Holistic AI’s assurance team declares that a system conforms to predetermined standards, practices, or regulations. For higher-risk processes, assurance can also be given on a conditional basis, with mitigation actions still outstanding.

Based on the outcome of the audit and the associated recommendations, an audit report is produced summarising the main potential risks of the system and how they can be mitigated; the system is then re-evaluated once outstanding risks have been addressed.
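The triage step above maps documented system properties to a coarse risk level. A minimal sketch of how such a step might be structured is below; the factor names, weights, and thresholds are purely illustrative assumptions, not Holistic AI's actual methodology.

```python
# Illustrative triage rubric: each documented property of the system
# contributes a weight to an overall risk score. Names and weights are
# hypothetical, chosen only to mirror the factors the case study mentions
# (context of use, type of AI, inputs and outputs).
RISK_FACTORS = {
    "high_stakes_context": 3,      # e.g. hiring, credit, healthcare
    "processes_personal_data": 2,
    "fully_automated_decision": 2,
    "opaque_model_type": 1,        # e.g. deep neural network vs. linear model
}

def triage(system_profile: dict) -> str:
    """Map documented system properties to a coarse risk level."""
    score = sum(w for factor, w in RISK_FACTORS.items()
                if system_profile.get(factor))
    if score >= 5:
        return "high-risk"
    if score >= 2:
        return "medium-risk"
    return "low-risk"

profile = {"high_stakes_context": True, "fully_automated_decision": True}
print(triage(profile))  # score 3 + 2 = 5 → "high-risk"
```

A real triage would also weigh qualitative evidence gathered in consultation, but an explicit rubric like this keeps the assigned risk level reproducible and auditable in its own right.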

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

Safety, Security & Robustness

System robustness is assessed by examining how well the system handles dataset shifts and adversarial threats. Custom mitigations can be suggested to help the system withstand malicious actors and better handle dynamic environments, ensuring system performance is consistent.
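One simple, quantitative way to probe sensitivity to dataset shift, as described above, is to compare model accuracy on clean inputs against accuracy on perturbed inputs. The sketch below uses a toy stand-in classifier and Gaussian input noise; it is an assumption-laden illustration, not Holistic AI's assessment procedure.

```python
import numpy as np

class ThresholdModel:
    """Toy stand-in for an audited classifier: predicts 1 when the mean
    of the input features is positive."""
    def predict(self, X):
        return (X.mean(axis=1) > 0).astype(int)

def accuracy_drop_under_shift(model, X, y, noise_scale, seed=0):
    """Accuracy on clean inputs minus accuracy under Gaussian input noise:
    a simple proxy for sensitivity to dataset shift."""
    rng = np.random.default_rng(seed)
    clean = (model.predict(X) == y).mean()
    shifted = (model.predict(X + rng.normal(0.0, noise_scale, X.shape)) == y).mean()
    return float(clean - shifted)

# Well-separated toy data: class 1 clusters at +2, class 0 at -2.
X = np.vstack([np.full((50, 3), 2.0), np.full((50, 3), -2.0)])
y = np.array([1] * 50 + [0] * 50)
model = ThresholdModel()

small_drop = accuracy_drop_under_shift(model, X, y, noise_scale=0.1)
large_drop = accuracy_drop_under_shift(model, X, y, noise_scale=5.0)
```

Mild noise leaves this toy model's accuracy intact, while severe noise degrades it; in an audit, the size of that gap across realistic perturbations (or adversarial ones) is what informs the robustness finding and mitigations.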

Appropriate Transparency & Explainability

System transparency and explainability are assessed by examining both documentation and reporting/notification procedures and the technical measures taken to maximise explainability. Custom mitigations can be offered, building on current efforts, to increase transparency and explainability and to give users the information they need to make informed decisions about interacting with a tool.
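One common technical measure of the kind referred to above is feature-importance analysis: checking which inputs a model actually relies on, so its behaviour can be explained to users and developers. A minimal permutation-importance sketch is shown below, using a toy model; this is a generic illustration of the technique, not Holistic AI's tooling.

```python
import numpy as np

class FirstFeatureModel:
    """Toy classifier that uses only feature 0; feature 1 is ignored."""
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

def permutation_importance(model, X, y, seed=0):
    """Accuracy drop when each feature column is independently shuffled.
    Larger drops indicate features the model relies on more heavily."""
    rng = np.random.default_rng(seed)
    base = (model.predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break this feature's link to y
        drops.append(float(base - (model.predict(Xp) == y).mean()))
    return drops

rng = np.random.default_rng(1)
X = np.column_stack([np.repeat([1.0, -1.0], 50), rng.normal(size=100)])
y = (X[:, 0] > 0).astype(int)
drops = permutation_importance(FirstFeatureModel(), X, y)
```

Here shuffling feature 0 hurts accuracy while shuffling feature 1 changes nothing, exposing which input drives the model's decisions. Importance profiles like this feed directly into the documentation users need.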

Fairness

Systems can be audited by Holistic AI’s team for bias using a number of metrics widely used in computer science, psychology, and the social sciences. Metrics and appropriate thresholds are selected based on the system’s outputs and context of use, and can be used to inform mitigation recommendations.
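One widely used metric of this kind is the impact ratio: each group's selection rate divided by that of the most-favoured group, with values below 0.8 flagged under the classic "four-fifths rule". The sketch below is a minimal illustration of that metric, not Holistic AI's audit code, and the example data is invented.

```python
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Per-group selection rate: fraction of positive outcomes in each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for out, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += out
    return {g: positives[g] / totals[g] for g in totals}

def impact_ratios(outcomes, groups):
    """Each group's selection rate divided by the highest group's rate.
    Ratios below 0.8 suggest potential adverse impact (four-fifths rule)."""
    rates = selection_rates(outcomes, groups)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Invented example: group "a" is selected 3/4 of the time, group "b" 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]   # 1 = selected
groups = ["a"] * 4 + ["b"] * 4
ratios = impact_ratios(outcomes, groups)  # {"a": 1.0, "b": 0.333...}
```

In practice the appropriate metric and threshold depend on the system's outputs and context of use, as noted above; the impact ratio suits binary selection decisions such as hiring screens.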

Accountability & Governance

Voluntarily conducting an audit is a key way to demonstrate accountability for AI systems. By opening themselves up to impartial scrutiny, enterprises help ensure that their systems conform to best practices and legal requirements, and that those involved in their design, development, and distribution are held accountable for the system’s use, outputs, and impact.

Why we took this approach

AI audits are emerging as a regulatory requirement in both the United States and the European Union, with New York City Local Law 144 and the EU’s Digital Services Act both requiring independent algorithm audits.

Outside of compliance, audits also provide an opportunity to identify risks and prevent harms before they occur, introducing guardrails and allowing action to be taken early. With the adoption of AI growing around the globe, it is increasingly important that the risks of AI are recognised and mitigated before they have an impact.

Benefits to the organisation using the technique

Audits are highly customisable to the needs of each enterprise and their systems, allowing for anything from a deep, narrow audit that focuses on one type of risk to a wide audit that covers all five technical risk verticals.

Limitations of the approach

The level of access granted to the system can limit how deep the audit goes. However, through an interactive process, Holistic AI’s team of auditors ensures it extracts as much information as possible to maximise the value of the audit.

Published 19 September 2023