Generative AI in animals in science: letter to Lord Hanson (accessible)
Published 9 March 2026
Dr Sally Robinson, Chair of the Animals in Science Committee
5th Floor, 2 Ruskin Square
Broad Green
Croydon
CR0 2WF
Email: asc.secretariat@homeoffice.gov.uk
27 February 2026
Dear Lord Hanson,
Animals in Science Committee: the opportunities and risks of the use of generative AI in licensing, ethical oversight, analysis and insights
You encouraged the Animals in Science Committee to bring to your attention emerging issues that may call for a policy response. The rapid development of artificial intelligence (AI), and its potentially transformative impacts on animals in science, is one such issue. In this letter we highlight opportunities and risks concerning the use of AI in the licensing processes and ethical oversight of animal research.
A crucial point of clarification: the application of AI to ethical review is fundamentally different from ongoing work to replace animal models with “in silico” predictive models in toxicology, drug discovery and biomedical research, even though both use machine learning techniques and may be described as forms of “AI”. That work is also an exciting and fast-moving area. This letter, however, concerns only applications of AI to ethical review, as these issues fall squarely within the remit of the Home Office.
“Generative AI” is a broad term for any AI technology that generates content, such as text or images. The best-known examples are chatbots such as ChatGPT and Claude, which are powered by large language models (LLMs). Although the technology is already widely used, there are no clear rules or norms around the use of generative AI in ethical review documentation. It is not known whether applicants are already using these tools, but there is a high chance that some are, and we can expect usage to increase steadily. Without clear norms, this is likely to lead to problems.
There are various elements of an application where at least some applicants may well be using LLMs and where clear guidance is needed:
- Generating the non-technical summaries in licence applications.
- Generating the primary content of a licence application, such as descriptions of experimental protocols, from notes.
- Reviewing and checking a licence application to highlight potential ethical risks or inconsistencies.
There are also various possible future applications for use by Animal Welfare and Ethical Review Bodies (AWERBs), the inspectorate (ASRU) and the Animals in Science Committee:
- Performing an initial screen of a licence application (e.g. checking it against a list of criteria).
- Searching for, and assessing the relevance of, other similar studies in the scientific literature and other past and ongoing licences.
- Performing an initial harm-benefit analysis.
- Summarizing key ethical issues raised by an application and identifying risks.
- Analysing and triaging licence-holder correspondence, such as SC18 reports.
- Identifying emerging patterns, trends and issues across groups of licences, potentially assisting both ASRU and the Animals in Science Committee (which conducts strategic licence reviews).
- Facilitating deliberation and debate within review bodies (AWERBs).
- Transcribing and minuting oral interviews and discussions.
- Reminding AWERBs and inspectors of relevant past discussions and cases to facilitate consistent decision-making.
We want to emphasize that, in describing these as possible applications, we are not endorsing them. In all cases, further investigation is warranted. At this stage, it is wise to reserve judgement.
On the one hand, the potential benefits are substantial. Britain has a world-leading licensing and ethical review infrastructure, but it is under strain, with many AWERBs reporting high workloads. In addition, AI could assist ASRU with data analysis and insight, with substantial potential benefits for learning and for implementation of the 3Rs. LLMs, used responsibly and with an understanding of their limitations, have strong potential to assist in valuable ways. Britain could lead the way in incorporating the ethically responsible use of AI in this area, improving the quality, capacity, efficiency and transparency of our processes.
On the other hand, the risks require careful and impartial analysis. There are ongoing debates about the quality and reliability of AI-generated content in professional applications with technical elements. Three major areas of concern are immediately apparent: hallucination, data security, and public trust.
Concerning hallucination: LLMs often excel at reviewing documents, summarizing, contextualizing, suggesting, translating and transcribing, but they have a notorious tendency to tell users what they want to hear. This can sometimes involve fabricating plausible references for claims. A single false reference, if used to justify a licence approval, could have disastrous consequences.
Evidence suggests that frontier LLMs vary greatly in their tendency to hallucinate, with the most reliable hallucinating at a rate of 1 in 200 responses and the least reliable hallucinating in almost one third of their responses.[footnote 1] In the US, the FDA recently attracted controversy by deploying an AI tool to accelerate drug approval decisions, leading critics to highlight the special dangers of hallucination in this context.[footnote 2] Ethical review is a similarly high-stakes context where errors must be minimized. It is often possible, however, to reduce the level of hallucination in specific domains through fine-tuning and other techniques.
Concerning data security: documents involved in licensing and ethical review are often highly sensitive. Governments and other institutions will need to work with tech companies to provide ASRU, AWERBs and applicants with access to sufficiently secure tools, comparable to the tools available to government departments.
Concerning public trust: it is not clear whether AI-facilitated oversight can command public confidence or meet public expectations. It will be crucial in the years ahead to resume regular surveys of public attitudes towards animals in science. In these surveys, it will be valuable to explore public attitudes towards various kinds of AI use.
Now is the time to begin working towards a set of clear rules. After all, we can expect to see increasing use of generative AI by applicants regardless of whether a set of rules exists. Without clear rules, there is potential for erosion of public trust in the licensing process. With clear rules that command public confidence, we can take the opportunities presented by AI without falling prey to the risks.
Key recommendation
We recommend that the Minister fully assesses the opportunities and risks of the use of generative AI in licensing, ethical oversight, analysis and insights, leading to clear guidance for applicants, AWERBs, ASRU, and the future work of the Animals in Science Committee itself. The Animals in Science Committee stands ready to assist and would welcome a commission.
Yours sincerely,
Dr Sally Robinson
Chair of the Animals in Science Committee
1. See the Hughes Hallucination Evaluation Model (HHEM) Leaderboard.
2. Owermohle, S. (2025). “FDA’s artificial intelligence is supposed to revolutionize drug approvals. It’s making up studies”, CNN, 23 July 2025.