RAI Institute: Artificial Intelligence Impact Assessment (AIIA)

A system-level AI Impact Assessment (AIIA), developed by the Responsible Artificial Intelligence Institute (RAI Institute), informs the broader assessment of AI risks and risk management.

Background & Description

This case study focuses on a system-level AI Impact Assessment (AIIA). This tool, developed by the Responsible Artificial Intelligence Institute (RAI Institute), informs the broader assessment of AI risks and risk management. Based on the occurrence of specific events, it allows management and development teams to identify actual and potential impacts at the AI system level through a set of defined controls across stages of the system lifecycle. The impacts identified are categorised in line with generally accepted principles for safe and trustworthy AI, in particular: accountability and transparency, fairness, safety, security and resilience, explainability and interpretability, validity and reliability, and privacy. For assurance purposes, this assessment tool is accompanied by complementary guidance on evidence documentation requirements.
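To make the shape of the assessment concrete, the sketch below models how a single control might be recorded: tied to a lifecycle stage, categorised under one of the principles above, and backed by evidence documentation. This is a minimal illustration only; the class and field names, the stage and principle enumerations, and the `is_met` rule are assumptions for exposition, not the RAI Institute's actual schema.

```python
from dataclasses import dataclass, field
from enum import Enum

class LifecycleStage(Enum):
    # Illustrative lifecycle stages; the AIIA's own stage breakdown may differ.
    DESIGN = "design"
    DATA_COLLECTION = "data collection"
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"

class Principle(Enum):
    # The principle categories named in the case study.
    ACCOUNTABILITY_TRANSPARENCY = "accountability and transparency"
    FAIRNESS = "fairness"
    SAFETY = "safety"
    SECURITY_RESILIENCE = "security and resilience"
    EXPLAINABILITY = "explainability and interpretability"
    VALIDITY_RELIABILITY = "validity and reliability"
    PRIVACY = "privacy"

@dataclass
class Control:
    """One assessment control, tied to a lifecycle stage and a principle."""
    identifier: str
    stage: LifecycleStage
    principle: Principle
    description: str
    evidence: list[str] = field(default_factory=list)  # supporting documentation

    def is_met(self) -> bool:
        # Assumed rule: a control counts as met only if evidence is documented.
        return bool(self.evidence)

# Hypothetical example record.
control = Control(
    identifier="TR-01",
    stage=LifecycleStage.DEPLOYMENT,
    principle=Principle.ACCOUNTABILITY_TRANSPARENCY,
    description="Users are informed that they are interacting with an AI system.",
    evidence=["UI disclosure screenshot", "deployment checklist sign-off"],
)
assert control.is_met()
```

Structuring controls this way makes it straightforward to filter them by stage or principle and to see which ones still lack the documented evidence that the accompanying guidance calls for.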

Relevant Cross-Sectoral Regulatory Principles

Safety, Security & Robustness

Safety, security and robustness controls are tested against a system development use case and throughout the system lifecycle. More concretely, they require organisations to think through their objectives, identify potential areas of misuse, and determine the appropriate training and metrics for the model to ensure safe and secure operation. Additionally, they highlight the importance of human oversight and testing as a means of establishing assurance.

Appropriate Transparency & Explainability

The RAI Institute AIIA puts in place controls to ensure that users of an AI system are aware of the existence and extent of their interaction with it, and understand its intended use(s), functioning, performance, and limitations. These transparency and explainability controls flow through all stages of the system lifecycle and are aligned with AI actor roles, which should be clearly defined and established.

Fairness

The RAI Institute AIIA puts in place controls to assess potential bias and fairness impacts in relation to an AI system throughout all stages of the system lifecycle. This assessment extends to team composition and stakeholder consultation, training data, and model and system development and functioning.

Accountability & Governance

Accountability is a key theme running throughout the assessment tool, prompting organisations to account for and review all the stakeholders involved throughout the system lifecycle and to ensure appropriate controls are put in place, along with relevant monitoring and compliance mechanisms. Procedural governance mechanisms are considered in the broader system-level assessment framework of which the AIIA forms part.

Contestability & Redress

The RAI Institute AIIA addresses contestability and redress through a comprehensive framework that emphasises ongoing public consultation and stakeholder engagement across the system lifecycle. This process is designed to surface and address concerns, risks and potential impacts, particularly by engaging with groups that may be affected and by facilitating a mechanism for receiving and addressing feedback and grievances. At the deployment stage, controls are put in place to ensure accountability and offer remedies for errors or inaccuracies, including robust mechanisms for recourse and remedy.

Why we took this approach

The RAI Institute aims to advance the practice of responsible AI through the development of tools and guidance that put responsible AI principles into practice. The AIIA was developed to further this aim: to ensure adequate oversight and risk management, an organisation must be able to assess the impacts of its AI systems accurately and easily, and to confirm its compliance status against relevant regulations and standards.

Benefits to the organisation using the technique

The RAI Institute AIIA equips organisations with a vital tool for ensuring their AI models and systems comply with pertinent policies and industry benchmarks for responsible AI. It increases visibility and accountability, allows for early risk identification, and fosters stakeholder trust by demonstrating a commitment to safe, secure and trustworthy AI. By streamlining the evaluation process and defining actionable controls for assessment, the AIIA not only helps organisations put responsible AI principles into practice but also strengthens governance, mitigates risks, and supports sustainable innovation.

Limitations of the approach

The RAI Institute AIIA informs the broader risk assessment and risk management scheme in relation to the development and deployment of an AI system. As such, it may function as a stand-alone tool, but for risk assessment purposes the outcomes of the AIIA must be interpreted in relation to the likelihood of identified impacts occurring and the severity of those impacts. Moreover, the results of the AIIA, and the consequent assurance level, will depend on the evidence documentation provided by an organisation to demonstrate the extent to which the controls have been met.
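As an illustration of that point, a common way to carry AIIA outcomes into a downstream risk assessment is to score each identified impact on likelihood and severity and rank impacts by their product. The scales and the multiplicative scoring below are generic risk-matrix conventions assumed for exposition; they are not prescribed by the AIIA.

```python
from dataclasses import dataclass

@dataclass
class IdentifiedImpact:
    """An impact surfaced by an impact assessment, scored for risk assessment."""
    name: str
    likelihood: int  # 1 (rare) to 5 (almost certain) -- assumed generic scale
    severity: int    # 1 (negligible) to 5 (critical) -- assumed generic scale

    @property
    def risk_score(self) -> int:
        # Standard risk-matrix convention: risk = likelihood x severity.
        return self.likelihood * self.severity

# Hypothetical impacts for illustration only.
impacts = [
    IdentifiedImpact("biased outcomes for a user group", likelihood=3, severity=4),
    IdentifiedImpact("AI output misattributed as human", likelihood=2, severity=2),
]

# Prioritise impacts for treatment by descending risk score.
for impact in sorted(impacts, key=lambda i: i.risk_score, reverse=True):
    print(f"{impact.name}: {impact.risk_score}")
```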

The tool is grounded in several regulatory and policy sources, in particular:

  • NIST AI 100-1 Artificial Intelligence Risk Management Framework (AI RMF 1.0);

  • The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (October 30, 2023);

  • Office of Management and Budget (OMB) draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence (November 1, 2023); and

  • ISO/IEC DIS 42005 – Information technology – Artificial intelligence – AI system impact assessment.

Published 9 April 2024