Press release

OpenAI and Microsoft join UK’s international coalition to safeguard AI development

OpenAI and Microsoft pledge funding to the AI Security Institute’s Alignment Project: an international effort to develop AI systems that are safe, secure and under control.

  • OpenAI and Microsoft pledge new funding to AI Security Institute’s flagship Alignment Project: an international effort to work towards advanced AI systems that are safe, secure and under control
  • AI Alignment – making sure AI acts as intended – is a crucial field of AI research, building public trust in the technologies already reshaping public services and delivering new jobs
  • An additional £5.6 million in backing from OpenAI, plus support from Microsoft and others confirmed at the AI Impact Summit, means over £27 million is now available for AI alignment research, backing some 60 projects

Leading tech firms OpenAI and Microsoft are the latest to join an initiative spearheaded by the UK’s AI Security Institute (AISI), encouraging trust and public confidence in AI as it rewires public services and drives national renewal.

Announced by Deputy Prime Minister David Lammy and AI Minister Kanishka Narayan as the AI Impact Summit in India draws to a close today (Friday 20 February), the news bolsters the work of AISI’s Alignment Project, which was first announced last summer.

Some £27 million will now be made available through the fund, supporting research efforts to ensure AI systems work as they’re supposed to, with £5.6 million coming from OpenAI, and additional support from Microsoft and others.

Cementing the UK’s position as a world leader in frontier AI research, today also sees the first Alignment Project grants awarded to 60 projects from across 8 countries, with a second round due to open this summer.

AI alignment refers to the effort of steering advanced AI systems so that they reliably act as we intend, without unintended or harmful behaviours. It involves developing methods to prevent unsafe behaviour as AI systems become more capable. Progress on alignment will boost confidence and trust in AI, ultimately supporting the adoption of systems that are increasing productivity, slashing medical scan times for patients, and unlocking new jobs for communities up and down the country.

Without continued progress in alignment research, increasingly powerful AI models could act in ways that are difficult to anticipate or control - which could pose challenges for global safety and governance.

UK Deputy Prime Minister, David Lammy, said:

AI offers us huge opportunities, but we will always be clear-eyed on the need to ensure safety is baked into it from the outset. 

We’ve built strong safety foundations which have put us in a position where we can start to realise the benefits of this technology. The support of OpenAI and Microsoft will be invaluable in continuing to progress this effort.

UK AI Minister, Kanishka Narayan, said:

We can only unlock the full power of AI if people trust it – that’s the mission driving all of us. Trust is one of the biggest barriers to AI adoption, and alignment research tackles this head-on.

With fresh backing from OpenAI and Microsoft, we’re supporting work that’s crucial to ensuring AI delivers its huge benefits safely, confidently and for everyone.

Alignment is crucial for the security of advanced AI systems and their long-term adoption across all walks of life. It is about making sure AI models operate as they should, even as their capabilities rapidly evolve. With the rise of AI systems that can perform increasingly complex tasks, there is a growing global consensus that AI alignment is one of the most urgent technical challenges of our era.

Besides OpenAI and Microsoft, AISI’s Alignment Project is supported by an international coalition including the:

  • Canadian Institute for Advanced Research (CIFAR)
  • Australian Department of Industry, Science and Resources’ AI Safety Institute
  • Schmidt Sciences
  • Amazon Web Services (AWS)
  • Anthropic
  • AI Safety Tactical Opportunities Fund
  • Halcyon Futures
  • Safe AI Fund
  • Sympatico Ventures
  • Renaissance Philanthropy
  • UK Research and Innovation (UKRI)
  • Advanced Research and Invention Agency (ARIA)

It is led by a world-class expert advisory board, including Yoshua Bengio, Zico Kolter, Shafi Goldwasser, and Andrea Lincoln.

Mia Glaese, VP of Research at OpenAI, said:

As AI systems become more capable and more autonomous, alignment has to keep pace. The hardest problems won’t be solved by any one organisation working in isolation—we need independent teams testing different assumptions and approaches. Our support for the UK AI Security Institute’s Alignment Project complements our internal alignment work and helps strengthen a broader research ecosystem focused on keeping advanced systems reliable and controllable as they’re deployed in more open-ended settings.

As home to world-leading AI companies and research institutions, and 4 of the world’s top 10 universities, the UK is uniquely positioned to lead global efforts to build AI that we can have confidence in.

The Alignment Project builds on AISI’s international leadership, ensuring leading researchers from the UK and partner countries can shape the direction of the field and drive progress on safe AI that behaves predictably.

The Project combines grant funding for research, access to compute infrastructure, and ongoing academic mentorship from AISI’s own leading scientists to drive progress in alignment research.

Notes to editors

Visit the Alignment Project website for further information.

The Alignment Project advisory board includes: 

  • Yoshua Bengio, Full Professor at Université de Montréal and founder and scientific advisor of Mila - Quebec AI Institute 
  • Zico Kolter, Professor and Head of Machine Learning Department at Carnegie Mellon University 
  • Shafi Goldwasser, Research Director for Resilience, Simons Institute, UC Berkeley 
  • Andrea Lincoln, Assistant Professor of Computer Science, Boston University 
  • Buck Shlegeris, Chief Executive Officer, Redwood Research 
  • Sydney Levine, Research Scientist, Google DeepMind 
  • Marcelo Mattar, Assistant Professor of Psychology and Neural Science at New York University

DSIT media enquiries

Email press@dsit.gov.uk

Monday to Friday, 8:30am to 6pm

Telephone: 020 7215 3000

Published 19 February 2026