Research and analysis

Briefing note on the ethical issues arising from the public sector use of biometric voice recognition technology (accessible)

Published 9 September 2025

April 2025

Overview

The Biometrics and Forensics Ethics Group (BFEG) have undertaken a scoping exercise examining the technical, ethical, and legal issues arising from the use of biometric voice recognition technology by the public sector. Throughout this process, BFEG have gathered evidence and insight from relevant stakeholders across the public sector, academia, and civil society, with expertise covering the technical, ethical, legal, and governance challenges of this technology. This briefing note summarises key findings, with a focus on possible use-cases of voice recognition technology, including by law enforcement in a surveillance context, and by government user services teams as a means of logging in to a secure portal.

In relation to the latter use case involving a secure portal, voice recognition technology is currently employed within His Majesty’s Revenue and Customs’ (HMRC) Customer Services Group. For the purposes of this report the more general term ‘user’ is used to emphasise that these recommendations apply to a wide range of potential operators across the public sector and beyond.

The scope of the applications examined in this exercise is the biometric and forensic verification and identification of subjects.

During the evidence-gathering process, BFEG heard from the following contributors:

  • Defence Science and Technology Laboratory (Dstl)
  • Home Office National Police Capabilities Unit
  • His Majesty’s Revenue and Customs (HMRC) Customer Services Group
  • Information Commissioner’s Office
  • Metropolitan Police Service
  • Oxford Wave Research
  • Ingenium Biometrics
  • Intelligent Voice
  • Independent Reviewer of Terrorism Legislation
  • Equality and Human Rights Commission
  • A range of academic experts in the fields of computer science, biometrics, forensics, law, and social science

This briefing note outlines a series of recommendations produced by BFEG. These recommendations should be strongly considered for adoption prior to any use of voice recognition technology, to ensure that strong ethical principles are adhered to. This project, its methodology, and findings are informed by the BFEG Principles (BFEG, 2023) (see Annex I for additional information).

Summary

A key takeaway from the evidence-gathering process was that the use of voice recognition technology is likely to increase in future. This is due partly to the rapid development of Artificial Intelligence (AI) and Machine Learning (ML) techniques. In line with the findings of related reports on the use and regulation of biometrics for law enforcement in the UK, BFEG noted that government regulation and oversight are currently sparse in this area, and that work is needed to ensure any use of voice recognition technology adheres to strict ethical principles (Alan Turing Institute, 2024). The specific ethical concerns highlighted by BFEG include:

  • The scientific accuracy and reliability of the technology
  • Inherent bias present in training datasets
  • The collection of data without informed consent
  • A lack of human input and oversight
  • An inability to effectively detect spoofing and deepfakes.

BFEG take the view that government regulation would be the most effective way of ensuring voice recognition technology is used ethically. In the absence of this, the group have provided a set of recommendations that organisations operating the technology should strongly consider adopting, to ensure effective, rigorous and ethical governance.

Introduction and Definitions

Through a series of evidence-gathering exercises, BFEG have established that the use of voice recognition technology is expanding, and this is expected to increase further as advancements to AI and ML algorithms are made. In 2021, BFEG produced a related briefing note on the ethical issues arising from public-private collaboration in the use of live facial recognition (BFEG, 2021). The purpose of this briefing note on voice recognition is to outline the ethical challenges any use of voice recognition technology must meet. However, while there is considerable public and media attention directed towards facial recognition, BFEG believe that awareness of voice recognition is significantly lower, despite many similar ethical and governance concerns. As a result, there are very few media reports or public statements on this technology.

Current evidence nevertheless suggests that voice recognition is already being used in the public sector. For example, HMRC use voice verification to secure call centre interactions. Although it is not possible to envisage the range of possible use-cases that may emerge in future within the field of law enforcement and criminal justice, BFEG believe that some early consideration should be given to the ethical implications that are likely to arise. Significantly, the HMRC use-case has been the subject of past media attention (BBC News, 2019). HMRC failed to obtain proper consent from users to store their biometric data and were subsequently forced to delete large amounts of data. There are also cases of expert witnesses presenting opinions formed using voice recognition evidence in court cases, although at present all examples of this have involved recognition by humans, rather than relying upon automated voice recognition technology (Singh, C., 2014). However, this clearly indicates that voice recognition technology may soon be utilised by experts to inform their opinions.

The Evidence-Gathering Process

BFEG held an evidence-gathering session to collect oral evidence from academics, industry professionals, legal experts, and public servants on all aspects of voice recognition technology. In addition, a range of written submissions were also received on the topic. BFEG heard from representatives in HMRC Customer Services Group on HMRC’s use of voice authentication technology in order to explore a specific use-case of voice recognition technology in the public sector. However, with this exception, BFEG could not identify any examples of current use-cases of this technology in the UK public sector. The use of voice recognition as evidence in the UK criminal justice system also remains limited.

What is Voice Recognition Technology?

For the purpose of this briefing note, voice recognition technology is defined as a form of automated biometric recognition that is able to recognise the identity of an individual based on an audio recording of their voice. In “Ethical Issues Arising from the Police Use of LFR Technology” (BFEG, 2019), BFEG defined automated and biometric recognition as follows:

Biometric recognition is the automated recognition of individuals based on their biological and behavioural characteristics, for example, facial image, DNA, voice, and gait.

Automated recognition implies that a machine-based system is used for the recognition, either for the entire process or assisted by a human being.

This definition aligns with the definition of biometric data as provided under the UK Data Protection Act 2018, as amended (section 205) (GOV.UK, 2018). While BFEG are aware of some work to use voice recognition technology to identify an individual’s emotion or truthfulness, these purposes are beyond the scope of this report.

Voice recognition systems can largely be classified in one of two ways: one-to-one systems or one-to-many systems. One-to-one (‘verification’) systems function by taking an audio sample as an input (known as the probe sample) and comparing it to one or more recordings of a single individual (the enrolment sample(s)). The system then makes a binary choice as to whether the probe sample matches the voice of the individual in the enrolment sample(s). Such systems are often used to perform functions such as authenticating users into a secure portal over the phone.

In contrast, one-to-many (‘identification’) systems also work by taking a probe sample as an input, but this is compared to a range of reference recordings. The system is tasked with determining whether the probe sample corresponds to the voices in any of the reference recordings and identifying the specific match. Example use-cases of one-to-many systems could include identifying a suspect from a recording of multiple voices. The privacy and data protection implications and risks posed by use of these identification systems are more significant than those of verification systems. This necessitates greater consideration of relevant technical and legal safeguards.
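For illustration only, the following minimal sketch shows how a verification decision and an identification search might be framed in code. It assumes that voice recordings have already been reduced to fixed-length embeddings by some speaker model (a placeholder here, not a specific product) and that embeddings are compared using cosine similarity against an operator-chosen threshold; it is not a description of any system considered in this note.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two fixed-length speaker embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolment: np.ndarray, threshold: float = 0.7) -> bool:
    """One-to-one ('verification'): does the probe match this single enrolled speaker?

    `probe` and `enrolment` are embeddings produced by some speaker model
    (a placeholder assumption); `threshold` is an operator-chosen operating point.
    """
    return cosine_similarity(probe, enrolment) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], threshold: float = 0.7):
    """One-to-many ('identification'): which, if any, reference speaker matches?

    Compares the probe against every reference recording in the gallery and
    returns the best-scoring identity above the threshold, or None.
    """
    scores = {identity: cosine_similarity(probe, ref) for identity, ref in gallery.items()}
    best_identity, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best_identity if best_score >= threshold else None
```

In practice, both the embedding model and the threshold would need to be calibrated and tested on representative evaluation data, which is the subject of the recommendations later in this note.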

Use of Voice ID by HMRC – A Case Study

BFEG have considered the specific use-case of voice recognition technology by HMRC as an example of government deployment of voice recognition. This is particularly relevant for organisations in the public sector, to ensure that any adoption of voice recognition technology is implemented in accordance with strong ethics and the necessary technical and legal safeguards.

In 2017, HMRC began using voice recognition to identify users over the phone and automatically authenticate them into their personal accounts. This is a one-to-one (verification) voice validation system. Users are asked to record themselves saying a phrase when they sign up for the service, which they are then asked to repeat each time they wish to log on. The system assesses whether these two voices are the same and, if so, satisfies HMRC’s identification and verification requirements and confirms to the adviser that the user is who they say they are. When the system was first deployed, consent was not explicitly sought for capturing voices, with newly enrolled users being put through to an automated service that asked them to repeat the phrase “my voice is my password” up to five times (BBC News, 2019). This generated a reference/enrolment passphrase that was then stored securely. Each time a user then wished to log in, they would again be asked to repeat this passphrase, and the system would assess a match (HMRC, 2017).

In 2018, the UK Parliament enacted the Data Protection Act 2018, which implemented the EU General Data Protection Regulation (GDPR) (EUR-Lex, 2016a). The GDPR set strict new standards governing how personal data can be collected, used, and stored, including a requirement for users to give explicit and informed consent before their personal or biometric data can be gathered. Following this, a complaint was made to the Information Commissioner’s Office (ICO) in which it was alleged that HMRC had not obtained valid consent from users before collecting and using their biometric data. This complaint was upheld by the ICO. An Enforcement Notice was issued, mandating that HMRC delete all user records obtained without valid consent (ICO, 2019). This was completed in May 2019 and involved the deletion of five million records.

HMRC have since set about developing a new voice biometrics ID process that complies with the Data Protection Act 2018. This process is not yet operational; however, HMRC are considering a pilot of this approach. Under these proposals, instead of relying on a purely automated sign-up system within their call centre helpline, users would be put through to an adviser who would clearly explain: the purpose of the voice ID system, how biometric data will be stored, and the means by which a user may revoke consent. Users would also be made aware of the HMRC Privacy Notice (HMRC, 2018) available online. Users would only be transferred to the voice ID sign-up system if they provide what HMRC deem to be clear and explicit consent for their biometric data to be collected and used in this way.

Ethical Concerns

Based on the HMRC case study, and other evidence presented to BFEG on the use of voice recognition technology across the public sector, BFEG have identified several ethical concerns that need to be addressed. The principal concerns are below.

Poor accuracy and reliability

Voice recognition technology is a form of biometric identity recognition, which aims to identify an individual based upon an audio recording of their voice. Any system which seeks to identify an individual, regardless of the reason, must be accurate and reliable. The most common uses of voice recognition are likely to be one-to-one voice validation or voice log-on authentication systems, such as the HMRC use-case outlined above, or one-to-many voice identification in a surveillance context. In both examples, the consequences of a false positive or false negative could be costly. As a result, voice recognition technology must demonstrate that it can give accurate and reliable identifications. This is crucially important in the field of law enforcement and criminal justice, where miscarriages of justice may result from inaccurate and unreliable voice recognition technology.

In the case of one-to-one voice validation systems, which are often used to enable users to log on to a secure portal as in the HMRC use-case above, false positives could also have negative consequences. False positives involve an individual being incorrectly recognised as a specific user by the voice recognition system when they are not that person. If this were to happen, it would mean an individual could gain entry to someone else’s secure portal, potentially giving them access to personal data, financial records, or other private information. This would represent a serious violation of privacy and data protection. Thus, voice recognition systems must be able to safeguard against this.

False negatives, in which the correct user is trying to gain access to their account, but the voice recognition system does not recognise them as themselves and denies access, do not lead to a privacy breach. However, this may prevent individuals from exercising their right to access their personal data (and other related rights), as provided under the Data Protection Act 2018. Moreover, unjustified exclusion of certain groups or categories of persons from certain services or processes may result from false negatives. In addition, a significant number of false negatives is likely to lead to a deterioration in public confidence, both in the organisation and the technology, as users become frustrated that they are erroneously unable to access their records.

High standards of accuracy and reliability are also essential for one-to-many voice recognition or surveillance systems. Such systems may be used to identify suspects in criminal cases, and so false positives could lead to an innocent person being convicted of a crime. Equally, false negatives could lead to guilty suspects being exonerated, emphasising the need for reliable and accurate voice recognition technology.
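These two error types are in tension: in a score-based system, raising the decision threshold reduces false positives at the cost of more false negatives, and vice versa. The sketch below is illustrative only and assumes labelled ‘genuine’ and ‘impostor’ comparison scores are available from testing (the score distributions shown are synthetic and hypothetical); it shows how false positive and false negative rates could be estimated at different operating points.

```python
import numpy as np

def error_rates(genuine_scores: np.ndarray, impostor_scores: np.ndarray, threshold: float):
    """Estimate false negative and false positive rates at one operating point.

    genuine_scores: comparison scores where probe and enrolment are the same speaker.
    impostor_scores: comparison scores where they are different speakers.
    """
    false_negative_rate = float(np.mean(genuine_scores < threshold))   # genuine users wrongly rejected
    false_positive_rate = float(np.mean(impostor_scores >= threshold)) # impostors wrongly accepted
    return false_negative_rate, false_positive_rate

# Synthetic, illustrative score distributions: genuine pairs tend to score higher.
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 10_000)
impostor = rng.normal(0.4, 0.1, 10_000)

for t in (0.5, 0.6, 0.7):
    fnr, fpr = error_rates(genuine, impostor, t)
    print(f"threshold={t:.1f}  false negative rate={fnr:.3f}  false positive rate={fpr:.3f}")
```

Choosing the operating point is therefore a policy decision as much as a technical one, since the relative cost of each error type depends on the use-case.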

Inherent bias present in model training

With the rapid recent advancement of AI and ML models that employ (deep) neural networks, voice biometric systems are becoming increasingly reliant on this technology. A key aspect of these neural networks is their dependence on a training dataset that is used to build voice recognition models and enables them to distinguish between different voices. As a result, voice recognition systems can only ever be as effective as the data they are trained on. It is vital that this data be representative of the population who will be using the voice recognition system, as any discrepancy here may result in inherent biases or discrimination being present in the technology. In addition, the models that are trained on these datasets must be able to handle the features and characteristics of a wide range of voices, as the features of voices common to one demographic group may differ from those of another (for example, speaker accents).

The Equality Act 2010 (GOV.UK, 2010) outlines a set of characteristics that must be considered, with any implicit or explicit discrimination on these grounds being unlawful. These characteristics are age, disability, gender reassignment, marriage or civil partnership, pregnancy and maternity, race, religion or belief, sex, and sexual orientation. While not all of these characteristics may have an effect on voice, it is possible that certain traits will have an impact, for example sex, gender reassignment and ethnicity. To guarantee that these groups face no implicit discrimination, it is vital that all training datasets contain a representative sample of individuals. Equally, the neural networks of the model must be able to map the features of these different demographic groups.

Research studies indicate that both race and sex disparities exist within current voice biometrics systems (Chen, X. et al., 2022). Central to the findings of that report was the observation that existing neural networks used as part of voice biometrics systems were not able to extract as many features from female and non-white voices as they could from white male voices. This serves as evidence that developers of these systems need to consider these factors when designing a model, in addition to the data it is trained on. As a result, the possibility for bias on these grounds is key among BFEG’s concerns with the use of voice recognition technology.
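One practical way to surface such disparities during testing is to disaggregate a system’s error rates by demographic group. The sketch below is a minimal illustration of that idea; the group labels, scores, and threshold are hypothetical, and a material gap between groups in the resulting rates would indicate the kind of bias described above.

```python
from collections import defaultdict

def per_group_false_negative_rate(trials, threshold):
    """trials: iterable of (group_label, genuine_comparison_score) pairs.

    Returns the false negative (wrongful rejection) rate for each demographic
    group, so that disparities between groups can be compared directly.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [rejections, total trials]
    for group, score in trials:
        counts[group][0] += int(score < threshold)
        counts[group][1] += 1
    return {group: rejections / total for group, (rejections, total) in counts.items()}

# Hypothetical genuine-speaker comparison scores labelled by demographic group.
trials = [("group_a", 0.82), ("group_a", 0.74), ("group_a", 0.69),
          ("group_b", 0.71), ("group_b", 0.58), ("group_b", 0.63)]
print(per_group_false_negative_rate(trials, threshold=0.7))
```

The same disaggregation can be applied to false positive rates, and forms the kind of extensive, inclusive testing referred to in Recommendation 1 below.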

The collection of data without informed consent

An individual’s voice is biometric data, as it constitutes a biological and behavioural characteristic that could be used for identification under the Data Protection Act 2018. As biometric data, it falls within the scope of ‘special category’ data, one of the types of data that require higher protection due to the particular risks posed by their processing. As such, in many use-cases, consent must be given to process this data. The Data Protection Act 2018 states that this consent must be ‘freely given, specific, informed and [an] unambiguous indication of the individual’s wishes.’

Consequently, it is vital that the collection, storage, and use of voice data is done with informed consent from the individuals involved. The HMRC case study documented above highlights the consequences of not seeking informed consent; the organisation could contravene the GDPR and be forced to delete the data they hold. Not only would the organisation suffer reputational damage and a loss of public trust, but the rights of individuals and groups or categories of people may be adversely affected.

Obtaining clear and informed consent from all users is fundamental to the ethical use of voice recognition technology, as well as ensuring compliance with GDPR. Users of the technology must give proper thought to the development and publication of a Privacy Policy and consent process that clearly explains to individuals what the voice recognition system is, why it is being used and how their data will be used. Individuals must be presented with this information clearly and factually, free from undue influence and with the opportunity to opt out at any time.

A lack of human input and oversight

While automated voice recognition systems may be able to function with a high level of automation, it is vital that humans are not removed from the process and continue to provide some oversight. Facial recognition offers a useful comparison here: despite its considerable maturity relative to voice recognition, it, like any technology, has been shown to be less than 100% effective (Cavazos, J. et al., 2021). As a result, many organisations operating facial recognition technology are also incorporating a ‘human-in-the-loop’ into their systems, where human overseers validate the matches made by the facial recognition technology to ensure that false positives are not being recorded. Given voice recognition is also likely to yield false positives, human oversight must be deployed to mitigate the risk of false matches.

A failure to incorporate human oversight within an overarching governance and accountability framework will lead to a less reliable voice recognition system and may also erode public trust in both the technology and organisation.

Inability to detect spoofing

With the capability of AI and deepfake technology on the rise, voice recognition technology faces a significant threat as malicious actors have increasing access to technology that enables them to generate artificial voices. Given this, it is essential that any voice recognition system has the capacity to distinguish a genuine voice from one that has been generated using deepfake technology. It is not sufficient simply to demonstrate this prior to the rollout of a system, as generative AI models that form the backbone of deepfake technology are undergoing rapid development at present. As a result, any voice recognition system must be continually monitored, reviewed, and updated, to ensure that it is able to handle the latest advancements in deepfakes.

As well as being able to identify artificially generated voice recordings, voice recognition systems must be equally adept at distinguishing similar voices from one another. There have been cases of voice recognition systems being fooled by twins who sound alike (BBC News, 2017), which could result in a major privacy violation. In addition, while this case focuses on the similarity of twins’ voice profiles, it is possible that unrelated individuals may also have near-identical voices by pure chance, or that someone trains their voice to mimic another’s, which offers even more potential for misuse of voice recognition systems.

If such systems are unable to detect spoofing attacks, this leaves open the possibility of digital identity theft if malicious parties can imitate an individual’s voice. Any such instances of this would not only result in significant privacy and data protection violations, but also lead to a loss of public confidence, both in the organisation concerned and voice recognition technology more broadly.
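In practice, spoofing is often addressed by running an anti-spoofing (presentation attack detection) check alongside speaker verification, so that a probe is accepted only if it both appears genuine and matches the enrolled voice. The sketch below illustrates that gating logic only; the `spoof_score` and `match_score` functions are placeholders standing in for whatever countermeasure and verification models an operator actually deploys, and, as noted above, any such countermeasure would need continual re-evaluation as generative models improve.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    accepted: bool
    reason: str

def authenticate(audio: bytes,
                 match_score,            # placeholder: speaker-verification model
                 spoof_score,            # placeholder: anti-spoofing countermeasure
                 match_threshold: float = 0.7,
                 spoof_threshold: float = 0.5) -> Decision:
    """Accept a probe only if it appears genuine AND matches the enrolled speaker.

    Checking for spoofing first means a convincing deepfake of the correct voice
    is still rejected; both thresholds are operating points that would need
    regular review as attack techniques evolve.
    """
    if spoof_score(audio) >= spoof_threshold:
        return Decision(False, "rejected: probable synthetic or replayed audio")
    if match_score(audio) < match_threshold:
        return Decision(False, "rejected: voice does not match enrolment")
    return Decision(True, "accepted")
```

The important design point is that the anti-spoofing check is a separate, independently monitored component rather than a property assumed of the verification model itself.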

Recommendations

To ensure the above concerns are addressed effectively and that voice recognition technology is deployed ethically, BFEG make the following recommendations to the Home Office. These recommendations should be carefully considered while the decision whether to implement the technology is being made. The recommendations are broadly applicable to any organisation operating voice recognition technology; however, the duty to ensure public safety and public confidence is much more significant for government departments.

Recommendation 1:

Voice recognition models must be demonstrably free of implicit biases against any groups of the population. This may be demonstrated through an understanding of and insight into the data used to train them, or through extensive and inclusive testing.

Ensuring there is no bias against any of the protected characteristics outlined in the Equality Act 2010 is essential for any voice recognition system. Organisations operating the technology should be open and transparent about the makeup of the data used to train their models, making certain that all groups are fairly represented within this and giving peace of mind to all stakeholders that no implicit bias is present in the system. Alternatively, operating organisations could show that their model(s) are free from bias by demonstrating an extensive testing regime that assesses the model on a diverse and inclusive dataset. If it can be verified that the model performs well under such testing, it would be a strong indicator that no implicit bias is present.

BFEG urge more transparency in this area and encourage any operating organisation to disclose the demographic details of the dataset that has been used to train their model.

Recommendation 2:

All audio recordings that have the potential to be used by voice recognition technology should be treated as containing biometric data and handled accordingly.

To maintain compliance with the EU GDPR, EU Law Enforcement Directive 2016/680 (EUR-Lex, 2016b), and the UK Data Protection Act 2018, or any successor, it is essential that clear and informed consent be gathered from all users, without duress, and with the option to withdraw consent at any time. Please note that BFEG has made reference to the EU GDPR and the EU Law Enforcement Directive; although the UK is no longer part of the EU and the UK data protection regime is set out in the UK GDPR and the Data Protection Act 2018, BFEG believes there needs to be consistency of approach. BFEG also notes that UK data reforms may well be addressed in the upcoming Data (Use and Access) Bill.

All data must then be gathered and stored in a secure way, and a Privacy Notice must be published to ensure transparency as to how personal data is processed. The above should be done in line with the ICO Data Sharing Code of Practice (ICO, 2021).

Recommendation 3:

A clear role must be established for humans to oversee the voice recognition system in its entirety, with the means to intervene easily and immediately if any abnormalities are spotted.

While automated voice recognition systems may be able to speed up the process of identification, evidence demonstrates that these systems are fallible, and mistakes can be made. Consequently, regular review and strong oversight is crucial to ensuring that decisions taken by voice recognition technology can be relied upon. As such, organisations operating the technology must demonstrate that well-trained humans are kept prominently in the decision-making loop, so that any abnormalities can be noted and rectified. Adherence to this recommendation will help in gaining public confidence in the technology and provide an essential oversight mechanism. This demonstration should be a requirement within a more comprehensive governance and accountability framework for the use of voice recognition technology.

Recommendation 4:

Voice recognition use-cases must be underpinned by a clear and effective approach for spotting and excluding any voice samples that are not genuine, i.e. have been generated through spoofing or deepfake technology.

As outlined above, this is essential to mitigate the risk of digital identity theft, privacy violations and a loss of public confidence. The chosen approach must be continuous rather than one-off, to ensure that any risks in future deepfake technology advancements can be mitigated. If it is not possible reliably and consistently to detect illegitimate voices, the use of the voice recognition system should be suspended until this can be rectified.

Recommendation 5:

An independent ethics group should be established to govern the use of voice recognition technology by the operating organisation and provide an open and evidence-based assessment of the accuracy and reliability of the technology.

Allowing a third party to provide an independent assessment of whether the voice recognition system is fit for purpose will eliminate the risk of a conflict of interest and allow a fair review of the technology. This will help in gaining public confidence as it will demonstrate that consideration is being given to the risks of voice recognition systems and will ensure that any limitations in accuracy or reliability can be identified and mitigated.

Recommendation 6:

An independent regulator should be empowered to govern the use of voice recognition technology in all settings and ensure that the technology is deployed ethically and without bias or discrimination, particularly on the basis of protected characteristics.

Organisations operating voice recognition technology may have competing interests or may not have access to the expertise necessary to understand the ethical risks associated with the technology and how to mitigate these. It is recommended that the UK Government empower the ICO to develop a clear set of regulations that govern how voice recognition technology should be used by the public and private sectors, ensuring adherence to strict ethical principles. This will guarantee that any use of the technology is consistent and that a common understanding of the risks and necessary mitigations can be developed industry wide.

Glossary

Operating organisation: An organisation that deploys voice recognition technology for the purpose of identifying users.

User: Any individual who supplies a voice sample and/or whose identity is verified by voice recognition technology.

Authors

This briefing note was created by the members of the Biometrics and Forensics Ethics Group’s Biometric Recognition Technologies working group. The members of this working group were:

Dr Nóra Ni Loideain (Co-Chair)

Dr Peter Waggett (Co-Chair)

Professor Anne-Maree Farrell

Professor Richard Guest

Professor Emeritus Charles Raab

The report was ratified by all members of the Biometrics and Forensics Ethics Group (see Annex II for a list of members).

Acknowledgement is also given to the BFEG Secretariat, in particular Oliver Coughlan who provided dedicated support and administration with this piece of work.

References

Biometrics and Forensics Ethics Group (2023) ‘Ethical principles: Biometrics and Forensics Ethics Group’. Available at: https://www.gov.uk/government/publications/ethical-principles-biometrics-and-forensics-ethics-group [accessed 2 May 2024]

Alan Turing Institute (2024) ‘The Future of Biometric Technology for Policing and Law Enforcement’. Available at: https://cetas.turing.ac.uk/publications/future-biometric-technology-policing-and-law-enforcement [accessed 2 May 2024]

Biometrics and Forensics Ethics Group (2021) ‘Briefing note on the ethical issues arising from public–private collaboration in the use of live facial recognition technology’. Available at: https://assets.publishing.service.gov.uk/media/6005b3c98fa8f55f6156b4ec/LFR_briefing_note_18.1.21.final.pdf [accessed 14 February 2024]

BBC News (2019) ‘HMRC Forced to Delete Five Million Voice Files’. Available at: https://www.bbc.co.uk/news/business-48150575 [accessed 13 February 2024]

Singh C (2014) ‘Quis custodiet ipsos custodes? Should Justice Beware: A Review of Voice Identification Evidence in Light of Advances in Biometric Voice Identification Technology,’ International Commentary on Evidence, vol. 11. Available at: https://www.degruyter.com/document/doi/10.1515/ice-2014-0009/html [accessed 8 March 2024]

Biometrics and Forensics Ethics Group (2019) ‘Ethical issues arising from the police use of live facial recognition technology.’  Available at: https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/781745/Facial_Recognition_Briefing_BFEG_February_2019.pdf [accessed 2 May 2024]

GOV.UK (2018) ‘Data Protection Act 2018’. Available at: https://www.legislation.gov.uk/ukpga/2018/12/ [accessed 02 April 2024]

HMRC (2017) ‘Voice ID Showcases Latest Digital Development for HMRC Customers’. Available at: https://www.gov.uk/government/news/voice-id-showcases-latest-digital-development-for-hmrc-customers [accessed 7 March 2024]

EUR-Lex (2016a) ‘Regulation (EU) 2016/679 of the European Parliament and of the Council’. Available at: https://eur-lex.europa.eu/eli/reg/2016/679/oj [accessed 1 March 2024]

ICO (2019) ‘Her Majesty’s Revenue and Customs data protection and audit report’. Available at: https://ico.org.uk/media/action-weve-taken/audits-and-advisory-visits/audits/2615969/hmrc-final-executive-summary-v1_0.pdf [accessed 2 May 2024]

HMRC (2018) ‘Voice Identification Privacy Notice’. Available at: https://www.gov.uk/government/publications/voice-identification-privacy-notice/voice-identification-privacy-notice [accessed 8 March 2024]

GOV.UK (2010) ‘Equality Act 2010’. Available at: https://www.legislation.gov.uk/ukpga/2010/15/part/2 [accessed 1 March 2024]

Chen X, Li Z, Setlur S, and Xu W (2022) ‘Exploring Racial and Gender Disparities in Voice Biometrics’, Scientific Reports, vol 12. Available at: https://www.nature.com/articles/s41598-022-06673-y [accessed 8 March 2024]

Cavazos J, Phillips P, Castillo C, and O’Toole A (2021) ‘Accuracy comparison across face recognition algorithms: Where are we on measuring race bias?’, IEEE Trans Biom Behav Identity Sci. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7879975/ [accessed 8 March 2024]

BBC News (2017) ‘BBC Fools HMRC Voice Recognition Security System’. Available at: https://www.bbc.co.uk/news/technology-39965545 [accessed 8 March 2024]

EUR-Lex (2016b) ‘Directive (EU) 2016/680 of the European Parliament and of the Council’. Available at: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32016L0680 [accessed 2 May 2024]

ICO (2021) ‘Data Sharing Code of Practice’. Available at: https://ico.org.uk/media/for-organisations/data-sharing-a-code-of-practice-1-0.pdf [accessed 8 March 2024]

Annex I – BFEG Ethical Principles

BFEG have developed a set of governing ethical principles against which BFEG measure the operation, usage, and governance of all projects. These governing principles are as follows:

  • Principle 1: procedures should enhance the public good
  • Principle 2: procedures should respect the dignity of individuals and groups
  • Principle 3: procedures should not selectively disadvantage any group in society, particularly those most vulnerable, as protected under UK law, such as those who have ‘protected characteristics’ as defined in the Equality Act 2010 (age, disability, gender reassignment, marriage or civil partnership, pregnancy and maternity, race, religion or belief, sex, sexual orientation)
  • Principle 4: procedures should respect the processing of sensitive personal data and human rights as guaranteed by UK law, including the Human Rights Act 1998 and the Data Protection Act 2018. Any limitations of non-absolute rights, such as the right to respect for private life and the right to freedom of expression, must be demonstrably lawful, for a legitimate aim, proportionate and necessary
  • Principle 5: scientific and technological developments should be harnessed to advance the process of criminal justice and its governance; promote the swift exoneration of the innocent; and afford protection and redress for victims
  • Principle 6: procedures should be publicly accessible and explainable
  • Principle 7: procedures should be based on robust scientific principles, including evidence and ongoing review of their necessity and ethical robustness
  • Principle 8: procedures should be subject to review by an independent body, both ex ante and ex post where possible

The complete document detailing these principles and how they should be applied can be viewed here: https://assets.publishing.service.gov.uk/media/6414590dd3bf7f79df1aa9fd/BFEG_principles.pdf.

Annex II – BFEG Membership

BFEG is an advisory non-departmental public body, sponsored by the Home Office. The group provides advice on ethical issues in the use of biometric and forensic identification techniques such as DNA, fingerprints, and facial recognition technology. BFEG also advises on ethical considerations in the use of large and complex datasets and projects using explainable data-driven technology.

The following is a list of the membership of BFEG at the time of writing, June 2024:

Professor Mark Watson-Gandy (Chair) – Practising barrister

Professor Niamh Nic Daeid – Professor of Forensic Science and Director of the Leverhulme Research Centre for Forensic Science, University of Dundee

Professor Anne-Maree Farrell – Professor of Medical Jurisprudence, Edinburgh Law School

Professor Richard Guest – Professor of Biometric Technologies, University of Southampton

David Lewis – Former Deputy Chief Constable of Dorset and Devon & Cornwall Police, and previous National Police Chiefs’ Council Lead for Ethics and National Lead for Forensics Performance and Standards

Dr Nóra Ni Loideain – Director of the Information Law and Policy Centre and Senior Lecturer, Institute of Advanced Legal Studies, University of London

Professor Sarah Morris – Professor of Digital Forensics, University of Southampton

Professor Emeritus Charles Raab – School of Social and Political Science, University of Edinburgh

Professor Thomas Edward Sorell – Professor of Politics and Philosophy, University of Warwick

Professor Denise Syndercombe-Court – Professor of Forensic Genetics, Kings College London

Dr Peter Waggett – Director of Research, IBM