Press release

UK government publishes pioneering standard for algorithmic transparency

The Cabinet Office’s Central Digital and Data Office (CDDO) has launched an algorithmic transparency standard for government departments and public sector bodies, delivering on commitments made in the National Data Strategy and National AI Strategy.

  • The Cabinet Office’s Central Digital and Data Office has developed an algorithmic transparency standard for government departments and public sector bodies with the Centre for Data Ethics and Innovation
  • The standard will be piloted by several public sector organisations and further developed based on feedback
  • The move makes the UK one of the first countries in the world to develop a national algorithmic transparency standard, strengthening the UK’s position as a world leader in AI governance

The UK government has today launched one of the world’s first national standards for algorithmic transparency.

This move delivers on commitments made in the National AI Strategy and National Data Strategy, and strengthens the UK’s position as a global leader in trustworthy AI.

In its landmark review into bias in algorithmic decision-making, the Centre for Data Ethics and Innovation (CDEI) recommended that the UK government should place a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals.

This call for transparency around the use of AI systems has been strongly supported domestically and internationally, including by civil society organisations such as The Alan Turing Institute and Ada Lovelace Institute, and international organisations such as the OECD and Open Government Partnership. These renowned organisations have advocated for greater transparency to help manage the risks associated with algorithmic decision-making, bring necessary scrutiny to the role of algorithms in decision-making processes, and help build public trust.

The Cabinet Office’s Central Digital and Data Office (CDDO) has worked closely with the CDEI to design the standard. It also consulted experts from across civil society and academia, as well as the public. The standard is organised into two tiers. The first includes a short description of the algorithmic tool, including how and why it is being used, while the second includes more detailed information about how the tool works, the dataset(s) used to train the model, and the level of human oversight. The standard will help teams be meaningfully transparent about the way in which algorithmic tools are being used to support decisions, especially in cases where they might have a legal or economic impact on individuals.
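
To illustrate the two-tier structure described above, the following is a minimal, purely hypothetical sketch in Python of what a completed transparency record might contain. The field names and example values are assumptions made for this illustration, not the published standard’s schema.

    # Hypothetical example of a two-tier algorithmic transparency record.
    # Field names are illustrative assumptions, not the standard's actual schema.
    record = {
        "tier_1": {
            # Short, plain-English overview aimed at the general public
            "tool_name": "Example triage assistant",
            "how_it_is_used": "Suggests a priority order for incoming casework.",
            "why_it_is_used": "To reduce waiting times for urgent cases.",
        },
        "tier_2": {
            # More detailed information for specialists and external scrutiny
            "how_the_tool_works": "A statistical model scores cases on a small set of features.",
            "training_datasets": ["Anonymised casework records, 2018 to 2020"],
            "human_oversight": "A caseworker reviews and can override every suggestion.",
        },
    }

    # Print the record tier by tier
    for tier, fields in record.items():
        print(tier)
        for key, value in fields.items():
            print(f"  {key}: {value}")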

The standard will be piloted by several government departments and public sector bodies in the coming months. Following the piloting phase, CDDO will review the standard based on feedback gathered and seek formal endorsement from the Data Standards Authority in 2022.

By publishing this information proactively, the UK government is empowering experts and the public to engage with the data and provide external scrutiny. Greater transparency will also promote trustworthy innovation by providing better visibility of the use of algorithms across the public sector, and enabling unintended consequences to be mitigated early on.

Publication of the standard comes after the UK government sought views on a proposal to introduce transparency reporting on public sector use of algorithms in decision-making, as part of its consultation on the future of the UK’s data protection regime. The UK government is currently analysing the feedback received.

Lord Agnew, Minister of State at the Cabinet Office, said:

Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery. However, they must be used in decision-making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact. I’m proud that we have today become one of the first countries in the world to publish a cross-government standard for algorithmic transparency, delivering on commitments made in the National Data Strategy and National AI Strategy, whilst setting an example for organisations across the UK.

Adrian Weller, Programme Director for AI at The Alan Turing Institute and Member of the Centre for Data Ethics and Innovation’s Advisory Board, said:

Organisations are increasingly turning to algorithms to automate or support decision-making. We have a window of opportunity to put the right governance mechanisms in place as adoption increases. This is why I’m delighted to see the UK government publish one of the world’s first national algorithmic transparency standards. This is a pioneering move by the UK government, which will not only help to build appropriate trust in the use of algorithmic decision-making by the public sector, but will also act as a lever to raise transparency standards in the private sector.

Imogen Parker, Associate Director (Policy) at the Ada Lovelace Institute, said:

Meaningful transparency in the use of algorithmic tools in the public sector is an essential part of a trustworthy digital public sector. The Ada Lovelace Institute has called for a transparency register of public sector algorithms to allow the public - and civil society who act on their behalf - to know what systems are in use, where and why. The UK government’s investment in developing this transparency standard is an important step towards achieving this objective, and a valuable contribution to the wider conversation on algorithmic accountability in the public sector. We look forward to seeing trials, tests and iterations, followed by government departments and public sector bodies publishing completed standards to support modelling and development of good practice.

Tabitha Goldstaub, Chair of the UK Government’s AI Council, said:

In the AI Council’s AI Roadmap, we highlighted the need for new transparency mechanisms to ensure accountability and public scrutiny of algorithmic decision-making, and encouraged the UK government to consider analysis and recommendations from the Centre for Data Ethics and Innovation and the Committee on Standards in Public Life. I’m thrilled to see the UK government acting swiftly on this, delivering on a commitment made in the National AI Strategy and strengthening our position as a world leader in trustworthy AI.

Sir Patrick Vallance, UK Government Chief Scientific Adviser and National Technology Adviser, said:

We need democratic standards and good governance for new technologies, such as AI, that will enhance the way we work and benefit society. The launch of this new standard demonstrates this government’s commitment to building public trust and understanding of the application of these technologies, including exploring increased transparency in public sector use of algorithms.

Contact:

Cabinet Office Press Office
020 7276 7545
pressoffice@cabinetoffice.gov.uk

Notes to editors:

  • CDDO was formed in April 2021 as the new strategic centre for digital, data and technology for the UK government. It is part of the Cabinet Office and is responsible for shaping and delivering the UK government’s innovation and transformation strategies to overhaul legacy IT systems, strengthen the UK’s cyber security, and ensure the UK government can better leverage data and emerging technologies. It is charged with delivering Mission 3 of the National Data Strategy: to transform the UK government’s use of data to drive efficiency and improve public services.

  • The CDEI is a government expert body that enables the trustworthy use of data and AI. Its multidisciplinary team of specialists, supported by an advisory board of world-leading experts, work in partnership with organisations to deliver, test and refine trustworthy approaches to data and AI governance. The CDEI is part of the Department for Digital, Culture, Media and Sport (DCMS).

  • An algorithm is a set of step-by-step instructions. In AI, the algorithm tells the machine how to find answers to a question or solutions to a problem. An algorithmic tool is a product, application or device that uses complex algorithms to help address or solve a specific problem. Algorithmic transparency means being open about how algorithmic tools support decisions. The standard establishes a systematised way to present this information.

  • In June 2021, CDDO worked with the CDEI and BritainThinks to conduct a deliberative public engagement exercise to explore public attitudes towards algorithmic transparency in the public sector, the results of which informed development of the standard. CDDO also convened a wide range of stakeholders from within and outside of government. This included working with Reform, Imperial College London’s The Forum and the CDEI to host a policy hackathon, which brought together international experts from government, academia and industry to explore practical solutions to the challenges posed by algorithmic transparency.

  • The algorithmic transparency standard will complement existing guidance available on GOV.UK, such as the Data Ethics Framework and the Guide to Using AI in the Public Sector, which support the trustworthy use of algorithms in the public sector.

  • A few cities, including Amsterdam, Helsinki and New York, have begun to experiment with approaches to increase algorithmic transparency at a local level. France and the Netherlands have also made progress on developing national algorithmic transparency measures.

  • The UK government recently published its National AI Strategy, in which it committed to establishing the most trusted and pro-innovation system for AI governance in the world. In addition to developing a public sector algorithmic transparency standard, it made several other commitments, including publishing a roadmap that outlines how the UK can become a world leader in the emerging AI assurance industry, and releasing a White Paper that sets out a national position on governing and regulating AI.

  • In September 2021, DCMS launched a wide-ranging consultation on the UK’s data protection regime, presenting proposals that build on the key elements of the current UK General Data Protection Regulation (UK GDPR). The consultation closed on 19 November 2021.

  • A recent study on algorithmic accountability in the public sector by the Open Government Partnership, Ada Lovelace Institute, and AI Now Institute, highlighted the role that mandatory public reporting obligations can play in increasing transparency around the implementation of algorithmic accountability policy (2021, p. 48).

  • Guidance on the responsible design and implementation of AI systems in the public sector, produced by The Alan Turing Institute’s Public Policy Programme, states that each AI project should be justifiable by prioritising both the transparency of the process by which the model is designed and implemented, and the transparency and interpretability of its decisions and behaviours (2019, p. 6).

  • The OECD Principles on AI state that there should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.

Published 29 November 2021