Guidance

Ethics, Transparency and Accountability Framework for Automated Decision-Making

Updated 29 November 2023

What the framework is for

Context

The ethical considerations of artificial intelligence and automated systems are at the centre of technological advancement.

According to a recent EU survey and a British Computer Society survey in the UK, there is a distinct lack of trust in the regulation of advanced technology. A review by the Committee on Standards in Public Life found that the government should produce clearer guidance on using artificial intelligence ethically in the public sector.

Why we need the framework

Current guidance can be lengthy, complex and sometimes overly abstract. This is not just a Digital, Data and Technology issue. We need to improve general literacy around automated and algorithmic decision-making, with clear information and practical steps that help civil servants and ministers support the agenda and provide appropriate challenge.

Decision-makers should not assume that automated or algorithmic decision-making is a ‘fix-all’ solution, particularly for the most complex problems.

What the framework is

This 7 point framework will help government departments with the safe, sustainable and ethical use of automated or algorithmic decision-making systems.

It has been developed in line with guidance from government (such as the Data Ethics Framework) and industry, as well as relevant legislation. It supports the priorities of the Central Digital and Data Office, and aligns with wider cross-government strategies in the digital, data and technology space.

Departments should use the framework with existing organisational guidance and processes.

Understanding automated and algorithmic decision-making

What we mean by automated decision-making

Automated decision-making refers to both solely automated decisions (no human judgement) and automated assisted decision-making (assisting human judgement). There are different legal requirements for the two forms of automated decision-making. This framework should be applied in its entirety to both forms to ensure best practice.

Solely automated decision-making means decisions that are fully automated with no human judgement. This will likely be used in a scenario that is often repetitive and routine in nature.

Automated assisted decision-making is when automated or algorithmic systems assist human judgement and decision-making. These are more complex, often with more serious implications for citizens.

Example of solely automated decision-making

A worker’s pay is linked to their productivity, which is monitored using an automated system. The decision for how much pay the worker receives for each shift they work is made automatically by referring to the data collected about their productivity. (Source: Information Commissioner’s Office)

Example of automated decision-making assisting human judgement

An employee is issued with a warning about late attendance. The warning was issued because the employer’s automated clock-in system highlighted that the employee had been late on a number of occasions. The actual decision to issue a warning was then taken by the employer’s manager after being informed by the automated system. (Source: Information Commissioner’s Office)

Article 22 of the General Data Protection Regulation (GDPR) states that, where a solely automated decision produces legal or similarly significant effects, individuals have the right not to be subject to it. You can only make a solely automated decision that has legal or similarly significant effects on an individual when any of the following apply:

  • necessary for the entry into or performance of a contract
  • authorised by law
  • based on the individual’s explicit consent (caveat: Recital 43 GDPR states that consent is not a legal basis where there is a significant imbalance in the relative power of the parties) (Source: Information Commissioner’s Office)
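The distinction matters in practice. The minimal Python sketch below, an illustration rather than part of the framework, records which form of automated decision-making a process uses and flags whether a solely automated decision with significant effects has one of the Article 22 lawful bases listed above. All class, field and label names are assumptions made for the example, and the check does not model the Recital 43 caveat about imbalances of power.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class DecisionType(Enum):
    """The two forms of automated decision-making described above."""
    SOLELY_AUTOMATED = "solely automated"        # no human judgement
    AUTOMATED_ASSISTED = "automated assisted"    # assists human judgement


# Illustrative labels for the lawful bases Article 22 allows for solely
# automated decisions with legal or similarly significant effects.
ARTICLE_22_BASES = {"contract", "authorised_by_law", "explicit_consent"}


@dataclass
class DecisionRecord:
    decision_type: DecisionType
    significant_effect: bool              # legal or similarly significant effect
    lawful_basis: Optional[str] = None

    def article_22_permitted(self) -> bool:
        """Return True if the decision, as recorded, sits within Article 22.

        Automated assisted decisions (a human makes the final call) are outside
        Article 22's scope; solely automated, significant decisions need one of
        the lawful bases above. The Recital 43 caveat on consent where there is
        an imbalance of power is not modelled here.
        """
        if self.decision_type is DecisionType.AUTOMATED_ASSISTED:
            return True
        if not self.significant_effect:
            return True
        return self.lawful_basis in ARTICLE_22_BASES


# A solely automated pay decision with no recorded lawful basis fails the check.
record = DecisionRecord(DecisionType.SOLELY_AUTOMATED, significant_effect=True)
print(record.article_22_permitted())  # False
```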

Before you use this framework

Algorithms are not the solution to every policy problem

Before using this framework, you should consider whether using an automated or algorithmic system is appropriate in your context.

Scrutiny should be applied to all automated and algorithmic decision-making. These systems should not be the go-to solution for the most complex and difficult issues, because of the high risk associated with them. Read more information about risks.

The risks are dependent on policy areas and context

The risks of using an automated or algorithmic decision-making system differ by policy area and context, and this should be taken into account. Senior owners should conduct a thorough risk assessment, exploring all options.

You should be confident that the policy intent, specification or outcome will be best achieved through an automated or algorithmic decision-making system.

Algorithmic risks include, but are not exclusive to:

  • input data, including biased, outdated datasets
  • algorithm design, including flawed assumptions and biased logic
  • output decisions, including incorrect interpretation
  • technical flaws, including insufficient rigour in development and testing
  • usage flaws, including integration with existing operations
  • security flaws, including deliberately flawed outcomes

Who this framework is for

This framework is for all civil servants, specifically:

  • senior owners of all major processes and services, subject to automation consideration
  • process and service risk owners
  • senior leaders
  • executive leaders
  • operational staff
  • those in digital, data and technology roles
  • policy makers
  • ministers when considering an algorithm or automated system

Responsible data

Everyone must ensure that data is used responsibly. The Data Ethics Framework and data protection law must be followed.

Review the quality and limitations of the datasets used. For example, are they accurate and representative? Have they been assessed for potential bias and discrimination?

When datasets are used for decision-making processes they were not intended for - like proxy or generalised social datasets - caution, human oversight and intervention are required.

Using third parties

When working with, or depending on, third parties, the framework should be adhered to. This requires early engagement with commercial expertise to ensure that the framework is embedded into any commercial arrangements.

How to use this framework

When you use automated decision-making in a service, you should:

  1. Test to avoid any unintended outcomes or consequences.
  2. Deliver fair services for all of our users and citizens.
  3. Be clear who is responsible.
  4. Handle data safely and protect citizens’ interests.
  5. Help users and citizens understand how it impacts them.
  6. Ensure that you are compliant with the law.
  7. Build something that is future proof.

1. Test to avoid any unintended outcomes or consequences

Prototype and test your algorithm or system so that it is fully understood, robust and sustainable, and so that it delivers the intended policy outcomes (and unintended consequences are identified).

Context

Algorithmic and automated decision-making should not be seen as the solution to all issues, particularly for complex and challenging policy areas.

Rigorous, controlled and staged testing should take place before going live. Throughout prototyping and testing, human expertise and oversight are required to ensure systems are technically resilient and secure, as well as accurate and reliable, and that they do not cause unintentional harm, particularly when human life or safety depend on them.

More sustainable solutions should be prioritised to ensure the delivery of intended policy outcomes in an ever-evolving technology landscape.

Practical steps

  • Make the policy specification (the problem you are trying to solve and what you are trying to achieve) the priority in testing phases.
  • Be clear on what you are testing, for example the accuracy, security, reliability, fairness or explainability of your system (see the sketch after this list).
  • Test using high-quality, relevant, accurate, diverse, ethical and appropriately sized datasets to deliver sustainable and intended outcomes - even when training datasets for fully automated decision-making with no human judgement - safeguarding the characteristics of the people impacted.
  • Do regular impact and risk assessments, such as the Data Protection Impact Assessment and Equality Impact Assessment where appropriate.
  • Ensure that testing is done by someone who is properly qualified and, where possible, independent.
  • Do ‘red team testing’, with the presumption that all algorithms and systems are capable of inflicting some degree of harm.
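To make the testing steps above concrete, the short sketch below, a minimal illustration rather than a prescribed method, runs a candidate decision function over a labelled test set and withholds go-live if accuracy or the false positive rate breach pre-agreed thresholds. The function names, thresholds and toy data are assumptions for the example.

```python
def evaluate_before_go_live(decide, test_cases, min_accuracy=0.95, max_false_positive_rate=0.02):
    """Run a candidate decision function over labelled test cases and check
    the results against pre-agreed acceptance thresholds before going live."""
    correct = 0
    false_positives = 0
    negatives = 0
    for features, expected in test_cases:
        outcome = decide(features)
        if outcome == expected:
            correct += 1
        if not expected:                 # cases where the correct outcome is 'no'
            negatives += 1
            if outcome:                  # ...but the system said 'yes'
                false_positives += 1

    accuracy = correct / len(test_cases)
    fp_rate = false_positives / negatives if negatives else 0.0
    passed = accuracy >= min_accuracy and fp_rate <= max_false_positive_rate
    return {"accuracy": accuracy, "false_positive_rate": fp_rate, "go_live": passed}


# Illustrative usage with a toy rule and hand-labelled cases (both hypothetical).
decide = lambda features: features["score"] >= 50
test_cases = [({"score": 72}, True), ({"score": 40}, False), ({"score": 55}, True)]
print(evaluate_before_go_live(decide, test_cases))
```

In practice the test set, metrics and thresholds would be agreed with the senior owner and recorded alongside the impact and risk assessments.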

Relevant resources

2. Deliver fair services for all of our users and citizens

Involve a multidisciplinary and diverse team in the development of the algorithm or system to spot and counter prejudices, bias and discrimination.

Context

Algorithms can be used to identify the inherent biases associated with human judgement, but they can also inherit human and societal biases and forms of prejudice, particularly those related to sensitive characteristics such as: race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.

It should be presumed that the algorithm or system that you are developing is capable of causing harm and injustice.

Human judgement, engagement, and testing with diverse and representative stakeholders are required to avoid unjust impacts on individuals and on the collective. This is integral to the policy specification development phase.

Throughout the entire lifecycle of your algorithm or system, draw on and be led by the expertise from other disciplines (for example policy and operational delivery), to ensure that your algorithm or system has integrity and is developed with an inclusive and collective approach at its core, addressing and answering the desired policy intent.

A multidisciplinary team with diverse roles and skills will help contribute to reducing bias and producing more accurate outcomes.

Practical steps

  • You must do an Equality Impact Assessment to adhere to the Equality Act (2010) and Public Sector Equality Duty.
  • Run ‘bias and safety bounties’, where ‘hackers’ are incentivised to seek out and identify discriminatory elements.
  • Use quality and diverse datasets to spot and counter apparent prejudices, bias and discrimination in the data used (a minimal check is sketched after this list).
  • Engage early with commercial teams to ensure ethical practices are embedded into commercial arrangements with third parties.
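As one way to put the dataset step into practice, the sketch below computes the rate of favourable outcomes for each group of a protected characteristic so that large disparities can be spotted and investigated. It is an illustrative screening heuristic, not a legal test under the Equality Act; the field names, records and 80% threshold are assumptions for the example.

```python
from collections import defaultdict


def selection_rates_by_group(records, protected_attribute):
    """Compute the rate of favourable outcomes for each group of a protected
    characteristic, so large disparities can be spotted and investigated."""
    totals = defaultdict(int)
    favourable = defaultdict(int)
    for record in records:
        group = record[protected_attribute]
        totals[group] += 1
        if record["outcome"]:
            favourable[group] += 1
    return {group: favourable[group] / totals[group] for group in totals}


# Illustrative records: each row is a past decision and the group it concerned.
records = [
    {"sex": "F", "outcome": True}, {"sex": "F", "outcome": False},
    {"sex": "M", "outcome": True}, {"sex": "M", "outcome": True},
]
rates = selection_rates_by_group(records, "sex")

# A common screening heuristic: flag for human review if any group's rate falls
# below 80% of the highest group's rate (one heuristic among many, not a legal test).
highest = max(rates.values())
flagged = {group: rate for group, rate in rates.items() if rate < 0.8 * highest}
print(rates, flagged)
```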

Relevant resources

3. Be clear who is responsible

Work on the assumption that every significant automated decision should be agreed by a minister, and that all major processes and services subject to automation consideration should have a senior owner.

Context

The algorithm or system should be designed to be fully answerable and auditable.

Responsibility and accountability for algorithms and automation, and their associated outcomes should be made clear. Organisations and individuals should be held accountable to ensure the proper functioning of artificial intelligence.

In the public sector there are stricter rules for decisions made, or advised, by a machine. Officials make decisions on a daily basis on behalf of their Secretary of State, who is ultimately accountable for all decision-making in their department. Every process or service that involves an algorithm or system making a significant decision is required to get ministerial sign-off. This should be done in alignment with the Ministerial Code.

Practical steps

  • Give all major processes and services subject to automation consideration an assigned senior owner or senior process owner to drive accountability, with a coherent and whole-system understanding of the entire process or service. They are responsible for ensuring that the necessary mitigation and measures are taken so that the algorithm or system does not cause unintended harm. This should help ministers make an informed decision.
  • Embed into existing governance processes where possible - it is not solely the responsibility of the Government Digital and Data profession.

Relevant resources

4. Handle data safely and protect citizens’ interests

Ensure that the algorithm or system adequately protects and handles data safely, and is fully compliant with Data Protection legislation.

Context

The public sector has a responsibility to lead the way in citizen data handling. Good data can give us insights that help intelligent decision-making. Poor use of data, particularly in algorithmic or automated decision-making, can be damaging.

Implementation should align with the Data Ethics Framework and, by default, the design of the algorithm and system should keep data secure and comply with data protection law. In particular, when datasets are used for decision-making purposes they were not intended for, such as proxy datasets and generalised social datasets (for example, individual decisions based on regional location data from the Census), additional caution and robust human oversight are required.

To build trust, individuals accountable for the risk management and compliance of the algorithm and automated system should create or build on data governance processes that handle and protect data safely while maintaining the quality of the data used.
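As a small illustration of handling data safely in this context, the sketch below replaces a direct identifier with a keyed hash before the record enters the decision-making process. This is one possible data-minimisation technique, not a requirement of the framework; the key handling shown is an assumption made for the example, and pseudonymised data remains personal data under data protection law.

```python
import hashlib
import hmac

# A secret key held by the data governance function, never stored with the data
# (an illustrative assumption; in practice use a managed key store).
PSEUDONYMISATION_KEY = b"replace-with-a-managed-secret"


def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash so records can still be
    linked within the decision-making process without exposing the raw value."""
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()


# Hypothetical record passed to an automated decision-making system.
record = {"national_insurance_number": "QQ123456C", "region": "South West", "score": 0.42}
safe_record = {**record, "national_insurance_number": pseudonymise(record["national_insurance_number"])}
print(safe_record)
```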

Practical steps

Relevant resources

5. Help users and citizens understand how it impacts them

Work on the basis of a ‘presumption of publication’ for all algorithms that enable automated decision-making, notifying citizens in plain English when a process or service uses automated decision-making (with all exceptions to that rule agreed with government legal advisors before ministerial authorisation).

Context

Context is essential to the explainability of an automated decision. Under data protection law, for fully automated processes, you are required to give individuals specific information about the process. Process owners need to introduce simple ways for the impacted person(s) to request human intervention or challenge a decision. When automated or algorithmic systems assist a decision made by an accountable officer, you should be able to explain how the system reached that decision or suggested decision in plain English.

Traceability mechanisms should be in place to help explain how a decision was reached. There are different approaches to explaining how a decision has been made, so you need to understand what explanations are possible. The explanation needs to be appropriate for your audience, expert or non-expert, and should be scrutinised and iterated by a multidisciplinary and diverse team (including end users) to avoid bias and groupthink.

Practical steps

  • As a minimum you should understand the functioning of the model, its individual components and training algorithms, including understanding and documenting the limitations, and asking ‘what if’ questions (‘what would be the outcome if I altered X?’) - a minimal sketch follows this list. End users should be provided with warning labels or disclosure requirements to ensure they understand and consent to the decision process.
  • Invest in education and provide logical information to citizens impacted by the automated decision.
  • Appoint an accountable officer to respond to citizen queries in real time.
  • Provide clear guidance on how to challenge a decision and incorporate accessible end user feedback loops for continuous learning and improvement.
  • Share information about automated decision incidents through collaborative channels to drive wider improvement in the field.
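The ‘what if’ questions mentioned above can be automated in a simple way. The sketch below, a minimal illustration using a hypothetical decision rule rather than any real service, re-runs a decision with one input altered and reports in plain English whether the outcome would change.

```python
def what_if(decide, features, field, new_value):
    """Re-run the decision with one input altered and report whether the
    outcome changes, as a plain-English 'what if' explanation."""
    original = decide(features)
    altered = decide({**features, field: new_value})
    if original == altered:
        return f"Changing {field} to {new_value} would not change the outcome ({original})."
    return f"Changing {field} to {new_value} would change the outcome from {original} to {altered}."


# Illustrative decision rule and query (both are assumptions, not a real service).
decide = lambda f: "eligible" if f["days_late"] <= 3 else "warning issued"
print(what_if(decide, {"days_late": 5}, "days_late", 2))
# -> "Changing days_late to 2 would change the outcome from warning issued to eligible."
```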

Relevant resources

6. Ensure that you are compliant with the law

Ensure that your algorithm or system adheres to the necessary legislation and has full legal sign-off from relevant government legal advisors.

Context

Relevant laws include data protection law (covered in point 5) and the Equality Act (covered in point 2).

Government legal advisors need to be involved from the start, to help you to understand what is possible. They will also be able to advise on any upcoming or current legislation that affects your specific policy intent or automated system.

Human rights and democratic values should also be at the heart of your approach.

Practical steps

Relevant resources

7. Build something that is future proof

Continuously monitor the algorithm or system, and institute formal review points (recommended at least quarterly) and end user challenge, to ensure it delivers the intended outcomes and mitigates against unintended consequences that may develop over time (referring to points 1 to 6 throughout).

Context

Automated systems are tools that can be used to drive a sustainable and inclusive society. There are significant opportunities but they should be approached with awareness, caution and an understanding of the trade-offs.

Testing should continue past the initial development stage with datasets regularly reviewed and evaluated by an established governance mechanism of diverse representation, working on the presumption that your algorithm or system is capable of causing harm to a person(s).

You should test and apply all of the points in this framework throughout the lifecycle of the system and make sure that your system still aligns with the intended policy outcome (which could change over time).  

Practical steps

  • Do performance monitoring (a minimal sketch follows this list).
  • Establish ‘formal review’ points (recommended at least quarterly), reviewing datasets and establishing whether the policy intent remains the same and how robust the governance, transparency and explainability mechanisms are. Incorporate any new risks into your assessments and adapt to any changes in legislation.
  • Continue to follow all of the suggested mechanisms raised in points 1 to 6.
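As a minimal illustration of the monitoring and formal review steps above, the sketch below compares recent live performance against the accuracy recorded at go-live and flags the system for formal review if it has drifted beyond an agreed tolerance. The names, figures and tolerance are assumptions for the example, not prescribed values.

```python
def quarterly_review(baseline_accuracy, recent_outcomes, tolerance=0.05):
    """Compare recent live performance against the accuracy recorded at
    go-live and flag the system for formal review if it has drifted."""
    correct = sum(1 for predicted, actual in recent_outcomes if predicted == actual)
    recent_accuracy = correct / len(recent_outcomes)
    drifted = recent_accuracy < baseline_accuracy - tolerance
    return {
        "baseline_accuracy": baseline_accuracy,
        "recent_accuracy": recent_accuracy,
        "formal_review_required": drifted,
    }


# Illustrative figures: accuracy at go-live was 0.96; the last quarter's
# decisions are checked against later-confirmed outcomes.
recent = [("approve", "approve"), ("refuse", "approve"), ("approve", "approve"), ("refuse", "refuse")]
print(quarterly_review(0.96, recent))
# recent accuracy is 0.75, well below 0.96 - 0.05, so a formal review is flagged
```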

Relevant resources

Case studies

These case studies from the public sector and private sector, both in the UK and abroad, should help teams recognise if their project involves automated assisted decision-making.

Policing case study

Avon and Somerset Constabulary use a tool called Qlik Sense that connects internal databases with local authority databases. Using AI predictive modelling, it produces individual risk assessments and intelligence profiles to assist the decision-making of officers, who handle offenders according to their perceived risk level. The predictive models are validated quarterly to ensure quality and accuracy. Source: Centre for Data Ethics and Innovation (CDEI), ‘Review into bias in algorithmic decision-making’

Financial services case study

Many Fintech companies use AI to enhance services, for example predicting whether people are able to repay personal loans - enabling a decision to be made. Source: The Alan Turing Institute, ‘Artificial intelligence in finance’

Many insurance companies use machine learning and predictive models in the claims process, assessing information to make a judgement impacting customers. Source: CDEI, ‘Review into bias in algorithmic decision-making’

Healthcare case study

AI can be used in mammogram screening. Currently, two radiologists are required to examine the mammogram, with a third sometimes required to resolve disagreement - this is a constraint on resources. AI systems can perform the second mammogram reading, indicating whether a radiologist assessment is needed. This reduces radiologists’ workload. The automated decision here has potentially life-changing implications. Source: Karin Dembrower, ‘Effect of artificial intelligence-based triaging of breast cancer screening mammograms on cancer detection and radiologist workload: a retrospective simulation study’

Border security case study

Canada introduced automated decision-making in its immigration and refugee system. Predictive analytic systems automate activities previously conducted by immigration officials and support the evaluation of some immigrant and visitor applications - this results in a more efficient service, but has life-changing implications. The Canadian government has put in place a series of mitigations to reduce risk, such as algorithmic impact assessments and bias testing. Source: Roxana Akhmetova, ‘How AI is Being Used in Canada’s Immigration Decision-making’

Glossary

Algorithm

A set of step-by-step instructions. In artificial intelligence, the algorithm tells the machine how to go about finding answers to a question or solutions to a problem.

Source: Matthew Hutson (2017), ‘AI Glossary: Artificial intelligence in so many words’

Artificial intelligence

There is no single agreed definition of artificial intelligence, but broadly artificial intelligence is the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence.

Source: Government Digital Service, Office for Artificial Intelligence (2019), ‘A guide to using artificial intelligence in the public sector’

Automated data processing

The creation and implementation of technology that automatically processes data with the purpose of processing large amounts of data efficiently with minimal human contact. In this context it is internal decision-making and would not lead to a ‘decision’ in a public policy context.

Source: OECD (2013), ‘Automated Data Processing’

Data ethics

An emerging branch of applied ethics that studies and evaluates moral problems and describes the value judgements related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms, and corresponding practices, in order to formulate and support morally good solutions.

Source: Luciano Floridi, Mariarosaria Taddeo (2016), ‘What is data ethics?’

Data protection law

The term data protection law is used to encompass all required data-related legislation that must be followed throughout the entire life-cycle of your algorithm or automated system. This includes the Data Protection Act (2018), the EU General Data Protection Regulation (GDPR) and its UK successor (UK GDPR).

Source: GOV.UK ‘Data protection’

Red team testing

Red team testing is a structured effort to find flaws and vulnerabilities in a technical system, often performed by dedicated ‘red teams’. To anticipate the potential risks associated with artificial intelligence systems, a diverse team should seek out these flaws and vulnerabilities.

Source: Miles Brundage et al. (2020) ‘Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims’

Transparency

Actions, processes and data are made open to inspection by publishing information about the project in a complete, open, understandable, easily-accessible and free format.

Source: Government Digital Service (2020) ‘Data Ethics Framework’

Users

A person(s) who is impacted by the service or automated decision. This includes, but is not exclusive to, civil servants, citizens and intermediaries.