Correspondence

Appendix A: Update on algorithmic bias

Published 5 July 2021

Introduction

Advances in the way we use and deploy data and AI are revolutionising almost every aspect of our lives. From faster, more accurate diagnosis of illnesses, to smarter and more sophisticated solutions to energy use and security threats - the use of data and AI has the potential to enhance our lives in unprecedented, powerful and positive ways.

Nevertheless, the use of data and AI is giving rise to complex, fast-moving and far-reaching economic and ethical issues. Increasingly sophisticated algorithms can glean powerful insights, which can be deployed in ways that influence the decisions we make or target the services and resources we receive.

The government recognises the urgent need for the world to do better in using algorithms in the right way: to promote fairness, not undermine it. Algorithms, like all technology, should work for people, and not against them. Significant growth is happening both in data availability and use of algorithmic decision-making across many sectors; we have a window of opportunity to get this right and ensure that these changes serve to promote equality, not to entrench existing biases.

The CDEI review surveyed the issue of bias in algorithmic decision-making, and studied four initial areas of focus to illustrate the range of issues and ethical questions it raises: recruitment, financial services, policing and local government.

The review then identified how some of these challenges can be addressed, the progress made so far, and what needs to happen next.

The review identified three main areas to consider:

  • The enablers needed by organisations building and deploying algorithmic decision-making tools to help them do so in a fair way;
  • The regulatory levers, both formal and informal, needed to incentivise organisations to do this, and create a level playing field for ethical innovation;
  • How the public sector, as a major developer and user of data-driven technology, can show leadership through transparency.

The update below highlights developments in a number of key areas.

Guidance to support local authorities

The Department for Education has commissioned the CDEI to develop guidance for local authorities on using data analytics responsibly in children’s social care. The CDEI is currently developing draft guidance to be tested with local authorities in the autumn.

Enabling fair innovation

The UK Innovation Strategy, to be published later this year, will set out the government’s vision for the UK to be the world’s most innovative economy by 2035. Underpinning this vision are more specific objectives, including investing in people and talent, improving diversity and inclusion, and closing the gap between the skills of the workforce - including within the technology sector - and the skills employers need.

This objective will build on current programmes, such as Innovate UK’s first diversity and inclusion campaign: Women in Innovation. The strategy is a key next step in delivering on the plan for growth, supporting new opportunities in every part of the country and enabling people to acquire the skills to take them up, wherever they live and whatever their stage of life.

The regulatory environment

The AI Roadmap, commissioned by the government and delivered by the independent AI Council, made two recommendations for Data, Infrastructure and Public Trust concerning governance. They are: ‘[The government should] lead the development of data governance options and…should lead in developing appropriate standards to frame the future governance of data’; and ‘Building on its strengths, the UK has a crucial opportunity to become a global lead in good governance, standards and frameworks for AI and enhance bilateral cooperation with key actors.’ These two recommendations are now being taken forward into the new National AI Strategy, which the Office for AI will publish later this year.

Furthermore, the independent Regulatory Horizons Council has been appointed to horizon-scan for new technological innovations – including artificial intelligence – and provide the government with impartial, expert advice on the regulatory reform required to support their rapid and safe introduction, while protecting citizens and the environment.

The CDEI report highlighted a potentially greater role for the EHRC in investigating algorithmic discrimination. Development of the EHRC’s 2022-25 Strategic Plan is underway. In light of the changed world following the pandemic, the EHRC will take a responsive approach to using its powers, carefully considering the equality and human rights issues which are most pressing and where it can have the most impact. One of the emerging areas of focus being considered is artificial intelligence and emerging digital technologies, which the EHRC will continue to explore through internal engagement and as part of its statutory public consultation later this year.

The EHRC is also currently developing new guidance on artificial intelligence and the public sector equality duty, for public authorities and private organisations that carry out public functions. The EHRC is liaising with the CDEI and the Alan Turing Institute on the draft guidance, and intends to publish it in September 2021.

Regulators are increasingly considering algorithmic bias in their research and guidance work. For example, the ICO’s AI and Data Protection guidance and the CMA’s algorithmic harms report both explicitly reference discrimination and equality law.

The Competition and Markets Authority (CMA), the Information Commissioner’s Office (ICO) and Ofcom have together formed the Digital Regulation Cooperation Forum (DRCF) to support regulatory coordination in digital markets and cooperation on areas of mutual importance. The DRCF will examine issues around algorithmic processing as part of its work plan for the coming year, building on work undertaken by the CMA. Further, the ICO, Alan Turing Institute, CDEI and Office for AI have agreed to work together to develop, roll out and monitor training for regulators on issues around AI.

The Office for AI, CDEI, ICO and other regulators also sit on a larger Regulators and AI working group, comprising 32 regulators and other organisations. This forum will be used to discuss how to take forward the recommendations made in the report, through a special sub-group chaired by the ICO with active membership from the CDEI, Office for AI, Alan Turing Institute and key regulators. The sub-group will identify gaps, consider training needs and make recommendations.

Transparency in the public sector

The CDEI report proposed that the government introduce a mandatory transparency obligation on all public sector organisations using algorithms that have a significant influence on significant decisions affecting individuals. Since the report was published, the Cabinet Office has established the Central Digital and Data Office (CDDO) as the new strategic centre for Digital, Data and Technology for the government. The CDDO is responsible for shaping and delivering the government’s innovation and transformation strategies to overhaul legacy IT systems, strengthen our cyber security, improve capability, and ensure the government can better leverage data and emerging technologies.

The CDDO recognises that ensuring fairness in how the public sector uses algorithms in decision-making is crucial for gaining and maintaining public trust. Introducing mechanisms for more transparent use of algorithms within government will encourage responsible public sector innovation and further enhance the UK’s long-standing leadership in the field of transparency and openness. In the National Data Strategy, the government committed to collaborate with leading organisations and academic bodies in the field to scope and pilot methods to enhance algorithmic transparency. I’m pleased that the Centre is working with the CDDO, alongside a number of leading organisations in the field, to support the development and piloting of algorithmic transparency measures later this year, including considering which transparency measures would be most effective at increasing public understanding of the use of algorithms in the public sector.

The CDDO also recently published an Ethics, Transparency and Accountability Framework for Automated Decision Making. This framework, aimed at senior public sector leaders, incorporates case studies from the CDEI’s work and links to the CDEI’s Bias report as an important resource for teams considering using algorithms in their work.

In addition, the Crown Commercial Service has introduced a Dynamic Purchasing System, similar to a framework agreement, for public sector procurement of AI that includes data ethics requirements for suppliers. Suppliers are expected to follow the data ethics framework to mitigate bias, to ensure diversity in the team developing a solution, and to provide transparency, interpretability and explainability of results, including audits. Suppliers will also need to be open about how an AI service was built.