Notice

Collaboration on the safety of AI: UK-US memorandum of understanding

Published 2 April 2024

This note summarises key elements of the Memorandum of Understanding (MoU) on artificial intelligence (AI) safety agreed by the governments of the UK and the United States on 1 April 2024.

In November 2023, the UK and US governments announced the creation of their respective AI Safety Institutes and confirmed their intention to work together toward the safe, secure, and trustworthy development and use of advanced AI. This MoU provides a more detailed basis for both countries to build upon in realising their shared goals on AI safety, through a partnership on AI safety between the two countries’ Institutes. The partners will continue to identify and develop new opportunities for collaboration, on an ongoing basis, with a view to increasing alignment over time.

Through this MoU, the partners intend to engage in the following activities:  

  • the Institutes intend to work closely to develop an interoperable programme of work and approach to safety research, to achieve their shared objectives on AI safety

Specifically, the Institutes intend to:

  • develop a shared approach to model evaluations, including the underpinning methodologies, infrastructures and processes
  • perform at least one joint testing exercise on a publicly accessible model
  • collaborate on AI safety technical research, to advance international scientific knowledge of frontier AI models and to facilitate sociotechnical policy alignment on AI safety and security
  • explore personnel exchanges between their respective institutes
  • share information with one another across the breadth of their activities, in accordance with national laws and regulations, and contracts

  • the partners remain committed, individually and jointly, to developing similar collaborations with other countries to promote AI safety, manage frontier AI risks, and develop linkages between countries on AI safety
  • to achieve this, the partners intend to work with other governments on international standards for AI safety testing and other standards applicable to the development, deployment, and use of frontier AI models