Holistic AI: Governance, Risk and Compliance Platform

Case study from Holistic AI.

Background & Description

Holistic AI’s AI Governance, Risk Management and Compliance Platform provides a SaaS one-stop shop for governing enterprise AI systems at scale. The platform utilises proprietary methods grounded in foundational research on trustworthy AI topics such as robustness, bias, privacy, and transparency.

The platform follows the EU’s risk-based approach to AI governance, designating each system as low, medium, or high risk in a single-pane Red-Amber-Green dashboard. The key risk verticals examined are bias, robustness, efficacy, transparency, and privacy, with a separate risk rating produced for each.
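
As a purely illustrative sketch (not Holistic AI’s methodology), a mapping from per-vertical risk scores to a Red-Amber-Green rating could look like the following; the 0 to 1 score scale and the thresholds are assumptions:

```python
# Illustrative sketch only: map hypothetical per-vertical risk scores (0-1)
# to low/medium/high levels and Red-Amber-Green colours. The thresholds and
# the score scale are assumptions, not Holistic AI's methodology.
RISK_VERTICALS = ["bias", "robustness", "efficacy", "transparency", "privacy"]

def rag_rating(score: float) -> tuple[str, str]:
    """Return (risk level, RAG colour) for a risk score in [0, 1]."""
    if score < 0.33:
        return "low", "Green"
    if score < 0.66:
        return "medium", "Amber"
    return "high", "Red"

def rate_system(scores: dict[str, float]) -> dict[str, tuple[str, str]]:
    """Produce a separate rating for each risk vertical."""
    return {vertical: rag_rating(scores[vertical]) for vertical in RISK_VERTICALS}

# Example: a system with elevated bias risk but otherwise lower risk.
print(rate_system({"bias": 0.7, "robustness": 0.2, "efficacy": 0.3,
                   "transparency": 0.1, "privacy": 0.4}))
```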

The platform provides a solution both for internal development and deployment and for procurement (third-party risk management). It examines both the inherent risk of a system, based on the technology it uses, the context in which it is deployed, and how its outputs are used, and the safeguards already in place to reduce and mitigate these risks.
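
A minimal sketch of the inherent-versus-residual distinction, using a common GRC convention in which each safeguard discounts the inherent risk score; the formula and the example values are assumptions, not the platform’s scoring model:

```python
# Illustrative sketch only: derive residual risk from an inherent risk score
# (0-1) and the effectiveness (0-1) of each safeguard already in place.
# This is a common GRC convention, not Holistic AI's formula.
def residual_risk(inherent: float, safeguard_effectiveness: list[float]) -> float:
    """Discount an inherent risk score by each safeguard's effectiveness."""
    risk = inherent
    for effectiveness in safeguard_effectiveness:
        risk *= 1.0 - effectiveness
    return risk

# Example: a high inherent risk partially mitigated by two safeguards.
print(residual_risk(0.8, [0.5, 0.3]))  # 0.8 * 0.5 * 0.7 = 0.28
```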

Risk mapping and verification are used to produce custom reports along with recommendations for how to mitigate risks and introduce relevant safeguards.

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

Safety, Security & Robustness

The Governance, Risk, and Compliance Platform allows users to assess their system’s robustness and security, for example how well it can withstand adversarial attacks and detect and handle dataset shift. Users are also asked to provide information about any relevant system training, safeguards, and tests of adversarial resistance so that robustness risks can be mapped.
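
To make the dataset-shift check concrete, the sketch below flags drifting features with a two-sample Kolmogorov-Smirnov test; the test choice, significance threshold, and synthetic data are assumptions for illustration, not the platform’s implementation:

```python
# Illustrative sketch only: flag dataset shift by comparing each feature's
# distribution in reference (training) data and live (production) data with
# a two-sample Kolmogorov-Smirnov test. The alpha threshold is an assumption.
import numpy as np
from scipy.stats import ks_2samp

def detect_shift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01) -> dict[int, bool]:
    """Return {feature index: True if its distribution appears to have shifted}."""
    return {
        i: ks_2samp(reference[:, i], live[:, i]).pvalue < alpha
        for i in range(reference.shape[1])
    }

# Example: feature 1 drifts upwards in production.
rng = np.random.default_rng(0)
reference = rng.normal(size=(1_000, 3))
live = rng.normal(size=(1_000, 3))
live[:, 1] += 0.75
print(detect_shift(reference, live))  # expected, e.g.: {0: False, 1: True, 2: False}
```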

Appropriate Transparency & Explainability

Users of the GRC platform are asked to provide information about efforts to communicate the system’s use, capabilities, and limitations to relevant stakeholders, including users of the system. They are also asked about processes for dataset documentation and about model cards for model reporting; quantitative assessments can additionally be used to examine or support system explainability.
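
As a hypothetical illustration of the kind of artefact such documentation processes produce (the field names and values are assumptions, not the platform’s schema), a minimal model card record might look like this:

```python
# Illustrative sketch only: a minimal model card record for model reporting.
# Field names and values are hypothetical, not the platform's schema.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    limitations: str
    training_data: str
    evaluation_metrics: dict[str, float] = field(default_factory=dict)

card = ModelCard(
    name="credit-scoring-v2",
    intended_use="Ranking loan applications for manual review",
    limitations="Not validated for applicants outside the UK",
    training_data="2018-2022 historical applications, documented in a datasheet",
    evaluation_metrics={"AUC": 0.91, "disparate_impact_ratio": 0.86},
)
print(card.name, card.evaluation_metrics)
```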

Fairness

Fairness is typically operationalised in terms of bias, comprising equal or equitable treatment and outcomes. To evaluate fairness, users of the Platform are asked for information about the processes in place to identify and mitigate bias as well as any built-in safeguards. Additionally, users can provide datasets for quantitative analyses to examine whether outputs vary for different groups using a range of metrics, for both regression and classification systems. Moreover, users can provide information about steps to ensure equal access to the system and accessibility considerations.
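
A minimal sketch of two such bias metrics, one for classification and one for regression, follows; the metric choices, the group encoding, and the four-fifths threshold mentioned in the comments are illustrative assumptions rather than Holistic AI’s specific metrics:

```python
# Illustrative sketch only: two common bias metrics of the kind such
# quantitative analyses use. The group encoding (1 = protected group) and the
# "four-fifths" threshold are assumptions, not Holistic AI's specific metrics.
import numpy as np

def disparate_impact_ratio(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Classification: ratio of positive-outcome rates, protected vs reference group."""
    return y_pred[group == 1].mean() / y_pred[group == 0].mean()

def mean_outcome_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Regression: difference in mean predicted outcome between the groups."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

# Example: binary predictions with a lower positive rate for the protected group.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 1 = protected group
print(disparate_impact_ratio(y_pred, group))  # 0.25 / 0.75 = 0.33, below the 0.8 rule of thumb
```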

Accountability & Governance

To assess systems’ governance practices, users of the GRC platform are asked to provide information about documentation procedures and internal controls (detective, corrective, and preventative). They can also use the platform as a real-time inventory to track AI use across the business, which can help establish accountability mechanisms where these are not already in place.
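
As a hypothetical sketch of what a real-time inventory entry could capture (the fields and values are assumptions, not the platform’s data model):

```python
# Illustrative sketch only: a minimal AI inventory entry for tracking systems
# across the business. Fields and values are hypothetical assumptions.
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    system_name: str
    business_owner: str
    use_case: str
    risk_level: str       # e.g. "low", "medium", "high"
    controls: list[str]   # detective, corrective, and preventative controls

inventory = [
    InventoryEntry("cv-screening", "HR", "Shortlisting applicants", "high",
                   ["quarterly bias audit", "human review of rejections"]),
    InventoryEntry("demand-forecast", "Operations", "Stock planning", "low",
                   ["monthly accuracy monitoring"]),
]
print([entry.system_name for entry in inventory if entry.risk_level == "high"])
```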

Why we took this approach

The Holistic AI Governance, Risk, and Compliance Platform brings AI law, policy, and engineering together to facilitate an assessment of each system informed by multidisciplinary insights. The platform uniquely interconnects the risks related to bias, efficacy, robustness, privacy, and explainability to generate a complete picture of an enterprise’s AI risk exposure, considering both inherent risks and the residual risks that remain after any mitigations or safeguards.

Governance covers the AI inventory and internal controls; Risk covers risk mapping, mitigation, and ongoing monitoring; and Compliance covers AI regulatory compliance and lawful usage.

Benefits to the organisation using the technique

The Holistic AI Governance, Risk, and Compliance Platform is a single dashboard for risk posture management, role-based reporting, and automated workflows. The platform acts as an inventory of the AI systems used across the business, creating greater visibility into which technologies are being used and how.

Limitations of the approach

While it is not possible to anticipate every single impact of a system, using the Platform can significantly reduce the risk associated with a system, particularly if the appropriate mitigations and safeguards are implemented. Further, accurate risk mapping and mitigation rely on users providing accurate information, which could be challenging for those with a limited technical background or those who were not involved in the design and development of the system. However, through open communication with users, Holistic AI’s team seeks to clarify any inconsistencies and obtain as much information as possible to maximise the accuracy of the assessment.

Published 19 September 2023