FSA: Developing an AI-based Proof of Concept that prioritises businesses for food hygiene inspections while ensuring the ethical and responsible use of AI

Case study from the Food Standards Agency.

Background & Description

This case study focuses on the use of AI to support hygiene inspections of food establishments by prioritising businesses that are more likely to be at a higher risk of non-compliance with food hygiene regulations. Currently, this process is manual, labour-intensive and inconsistent across local authorities. The AI-enabled tool is expected to benefit local authorities by helping them use their limited resources more efficiently. The tool was developed as a Proof of Concept to explore the art of the possible. Note that the tool was not put into live use, owing to a range of reasons and competing priorities.

Our Approach

The Food Standards Agency’s (FSA) Strategic Surveillance Service is a data science team that strengthens the FSA’s food safety mission by developing tools and techniques to turn data into intelligence, using machine learning and AI. One such tool is the Food Hygiene Rating Scheme – AI (FHRS AI), built as a Proof of Concept in collaboration with the FSA’s supplier, Cognizant Worldwide Limited, to help local authorities manage the hygiene inspection of food establishments more efficiently. The tool helps local authorities prioritise which businesses to inspect first by predicting which establishments might be at a higher risk of non-compliance with food hygiene regulations.
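
As a rough illustration of the prioritisation step (this is not the FSA’s published implementation; the model’s features, algorithm and training data are not described in this case study), a binary classifier can be trained on historical inspection outcomes and used to rank currently registered establishments by predicted risk. The file names and feature names below are hypothetical.

```python
# Illustrative sketch only: the FSA has not published the FHRS AI model's
# features, algorithm or training data. This shows the general pattern of
# scoring establishments for inspection priority with a binary classifier.
# File names and feature names are hypothetical.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical historical inspection records: one row per past inspection,
# labelled 1 where the inspection found non-compliance.
history = pd.read_csv("inspection_history.csv")
features = ["business_type", "months_since_last_inspection", "previous_rating"]
X = pd.get_dummies(history[features])
y = history["non_compliant"]

model = GradientBoostingClassifier().fit(X, y)

# Score currently registered establishments and rank them so that officers
# can review the highest predicted risk first (the human-in-the-loop step).
current = pd.read_csv("registered_establishments.csv")
X_current = pd.get_dummies(current[features]).reindex(columns=X.columns, fill_value=0)
current["risk_score"] = model.predict_proba(X_current)[:, 1]
priority_list = current.sort_values("risk_score", ascending=False)
```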

The FSA has created a Responsible AI (RAI) framework that overlays its 10-week agile sprint methodology. The framework is based on five RAI principles: Fairness, Sustainability, Privacy, Accountability and Transparency. Underpinning the framework is the ‘reflect, act and justify’ approach set out by The Alan Turing Institute in its paper ‘Understanding Artificial Intelligence Ethics and Safety’. Three different risk and impact assessments were conducted during the development of the FHRS AI:

  1. Responsible AI Risk Assessment
  2. Stakeholder Impact Assessment
  3. Privacy Impact Assessment

FSA’s process-based RAI framework has specific responsibilities assigned to various stakeholders, including Business (Business Owner, Business SMEs, Executive Leadership, Steering Committee), FSA Legal and Compliance (Knowledge and Information Management and Security team, Legal team) and FSA Strategic Surveillance (Business Analyst, Change Consultant, Development Lead, Development Team, RAI Lead).

In addition to these impact and risk assessments, the AI model outputs were validated using other empirical methods. FSA also participated in the Central Digital and Data Office’s (CDDO) pilot for the Algorithmic Transparency Standard and published the output.

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles.

Safety, Security & Robustness

  • Our Responsible AI Risk Assessment helped identify potential risks related to the use case, the data and the technology used. Identifying these risks led to the consideration and documentation of potential mitigation techniques. This was conducted iteratively throughout the development of the use case, ensuring that risks were continuously identified, assessed and managed.
  • The FSA considers it good practice to conduct a Privacy Impact Assessment (PIA) when using personal data. We used a structured process to identify and minimise data protection risks, conducted iteratively throughout the design, development and delivery of the use case.
  • The model was designed and developed by adhering to the guidance provided by FSA’s Knowledge and Information Management and Security (KIMS) and Legal teams on regulations, information governance, data protection compliance and security.

Appropriate Transparency & Explainability

  • Information on FSA’s Food Hygiene Rating Scheme’s privacy policy, why we require data, what we do with the data and consumer rights.
  • Information on how FSA handles personal data, consumer rights and privacy notices.
  • Our Responsible AI framework is run alongside the design and development sprint to ensure a robust, structured approach is taken and that all pertinent information is captured. Our methodology and evaluation of the model and associated risks is documented in a way that can be evidenced.
  • The Stakeholder Impact Assessment helped build confidence in the way we designed and deployed the system by bringing to light unseen risks. We used this assessment to demonstrate forethought and due diligence, and to show that the various stakeholders had collaborated to evaluate the social impact and sustainability of the project.
  • All the processing of the data used by the FHRS AI tool is in accordance with FSA’s Public Task to provide advice and assistance to enforcement authorities to keep food and feed safe.
  • Our design and development approach ensures that business and Data Science collaborate to understand feature importance and explainability.
  • From a technical perspective, feature importance is assessed on all our model iterations to ensure the transparency and explainability of the model. We assess the model and its predictions at a local and global level during the training and inference stages (see the illustrative sketch after this list).
  • The tool takes a ‘human-in-the-loop’ approach, i.e. there is a human check of the rating predicted by the tool before any decisions are made.
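
As a minimal sketch of what global and local feature-importance checks can look like (the case study does not name the explainability tooling used, so scikit-learn’s permutation importance and the SHAP library are assumptions here), continuing with the hypothetical model, X and y from the earlier sketch:

```python
# Illustrative only: the tooling shown (scikit-learn permutation importance
# and the SHAP library) is an assumption, not the FSA's documented stack.
# `model`, `X` and `y` are the hypothetical objects from the sketch above.
import shap
from sklearn.inspection import permutation_importance

# Global view: how much does model performance drop when each feature is shuffled?
global_imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, global_imp.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")

# Local view: which features drove the predicted risk for one establishment?
explainer = shap.TreeExplainer(model)
local_contributions = explainer.shap_values(X.iloc[[0]])
```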

Fairness

  • Our model development process includes a Fairness Assessment check, through which we assess attributes for group fairness (accuracy, balanced accuracy, precision, recall) and compare disparities in accuracy and predictions across groups.
  • We ensure collaboration between Business and Data Science to identify whether outcomes are considered fair or whether more in-depth analysis is required.
  • For the FHRS AI model, we have considered economic bias. This is being monitored to ensure that the model does not disproportionately affect any group.
  • We also used the FairLearn tool to monitor the model for bias. FairLearn helped the developers identify bias by showing how the model’s predictions deviate from the true values for different types of input (see the illustrative sketch after this list).
  • The combination of the model predictions with officers’ local knowledge prior to any decision making helps to avoid any unfair decisions.
  • There is also provision for users to feed outcomes back into the model to improve its predictive accuracy.
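
As a minimal sketch of the kind of group-fairness check described above, FairLearn’s MetricFrame can report the listed metrics per group and the disparity between groups. The evaluation data and the grouping attribute (‘region’, standing in for an economic grouping) are hypothetical; the FSA has not published the exact attributes it assessed.

```python
# Illustrative only: y_eval, X_eval and the "region" grouping column are
# hypothetical stand-ins for a held-out evaluation set and an economic
# grouping attribute; they are not taken from the FSA's published work.
from fairlearn.metrics import MetricFrame
from sklearn.metrics import (accuracy_score, balanced_accuracy_score,
                             precision_score, recall_score)

frame = MetricFrame(
    metrics={
        "accuracy": accuracy_score,
        "balanced accuracy": balanced_accuracy_score,
        "precision": precision_score,
        "recall": recall_score,
    },
    y_true=y_eval,
    y_pred=model.predict(X_eval),
    sensitive_features=eval_data["region"],
)
print(frame.by_group)      # each metric broken down by group
print(frame.difference())  # largest between-group disparity for each metric
```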

Accountability & Governance

  • Our Responsible AI framework takes a ‘process-based governance’ approach and is designed to be technology agnostic: it focuses on processes rather than the specific architectures used to enable AI/ML development.
  • We apply tangible processes, artefacts and tooling to the delivery and operationalisation methodology in a way that enables the development of AI/ML aligned with the agreed RAI principles of Fairness, Sustainability, Privacy, Accountability and Transparency. These processes are completed in parallel with existing delivery processes.
  • We have ensured that the right level of authority and control is exercised over the management of each use case. This enables alignment to the accountability principles, as decisions on the use of AI/ML are attributed to responsible stakeholders, with key decisions captured throughout the delivery lifecycle.
  • FSA has procedures in place to ensure that all staff with access to the information have adequate Information Governance and data protection training.

Contestability & Redress

  • This tool was developed as a Proof of Concept and was not put into live use; consequently, we have not fully tested the Contestability & Redress principle.
  • The processes built around the FHRS AI tool ensure that there is always a ‘human-in-the-loop’ expert involved, thus safeguarding against any potential bias or inaccuracies.
  • The FSA only collects and uses information in a manner consistent with data subject rights and its obligations under the law, including the UK General Data Protection Regulation (UK GDPR) and the Data Protection Act 2018 (DPA). FSA’s Personal Information Charter provides further information on data subject rights and how FSA’s Data Protection Officer can be contacted.

Why we took this approach

This approach was based on good-practice approaches to AI ethics and safety proposed by leading academics in the field. It allowed us to exercise due diligence, identify potential risks and put mitigations in place, and build confidence in the FHRS AI tool.

Benefits to the organisation

  • Provided an iterative process ensuring risks were continuously identified, assessed and managed
  • Demonstrated recognised good practice
  • Helped identify unanticipated AI-related risks
  • Identified potential risk mitigation techniques
  • Identified and minimised data protection risks
  • Demonstrated forethought and due diligence
  • Built frontline user confidence in the AI system

Limitations of the approach

  • Some stakeholders lack the necessary readiness for the successful implementation of an AI tool.
  • Different stakeholders follow different methods for evaluating accuracy of AI predictions.

Further AI Assurance Information

Published 6 June 2023