Notice

Secure AI infrastructure: call for information

Published 29 January 2026

Background

The UK government is working to ensure the UK can develop and deploy the most advanced AI systems securely, supporting both national security and economic growth.

As AI models become more capable and more valuable, they are increasingly attractive targets for sophisticated attackers. A successful attack could involve:

  • theft of AI models (including model weights, the trained parameters that encode a model’s capabilities)
  • compromise of sensitive data processed by AI systems
  • attempts to modify system behaviour or disrupt service
  • compromise of systems and security measures caused by the actions of deployed autonomous agents

AI deployments introduce novel technical challenges in cyber security, especially where strong protections for AI model weights and other sensitive data must be achieved without undermining performance and availability.

The government has established a joint research programme between the:

  • Department for Science, Innovation and Technology (DSIT)
  • AI Security Institute (AISI)
  • National Cyber Security Centre (NCSC)

to support the development of secure AI infrastructure: the computing environment that enables the development and deployment of advanced AI models. This programme will drive research and development on cutting-edge methods to protect model weights and other sensitive assets.

The UK has a globally leading track record and extensive sovereign capabilities in designing, building and assuring high‑assurance digital systems, including trusted computing and cross‑domain solutions. Through this programme, the UK will build on its capabilities and work with experts across industry and academia to develop, test, and mature the next generation of secure AI infrastructure, helping to ensure the UK is a trusted location for frontier AI development and deployment.

This call for information aims to gather views from the:

  • AI sector
  • cyber security sector
  • wider industry and academia

It is intended to help government gain a broader understanding of the security challenge, current capabilities, practical constraints, and promising directions for research and technical pilots.

This is not a formal procurement, competition, or invitation to tender.

What we want to learn

We are seeking input on 2 areas:

  • the security challenge and current approaches
  • capabilities that could strengthen protection

1. The security challenge and current approaches

How do you assess the risks to the confidentiality and integrity of model weights, sensitive data, and system configurations in AI compute environments? How is your organisation or sector working to address these risks?

We are particularly interested in:

  • the threats and attack vectors you consider most significant
  • current mitigations and their limitations
  • how security approaches are evolving as AI systems scale

2. Capabilities that could strengthen protection

What current or emerging technologies, methods, or architectures could improve the security of AI infrastructure in large-scale compute environments?

For each capability, we would welcome your views on:

  • technical feasibility, maturity, and realistic timelines for deployment
  • performance, cost, and operational trade-offs (including what is practical to retrofit versus build in from the start)
  • where the UK has strong existing capability and where international partnerships may be needed
  • what evidence, assurance approaches, or testing would build confidence in the solution

Views on market viability

We are also interested in views on market viability:

  • who the likely customers for enhanced security solutions are
  • what risk appetites exist
  • what level of investment is appropriate given the threats

We recognise that AI infrastructure presents a high-value target, and that threats are likely to grow in sophistication and persistence over time. We are therefore interested in approaches that offer robust, defence-in-depth protection, not only against known attack patterns, but with resilience to novel and adaptive threats. Responses that address how solutions can be evaluated, compared, and assured to a high standard are particularly welcome.

Initial research areas

The programme may explore the following areas (these may evolve based on responses):

1. Commodity cross-domain solutions for AI
How high-assurance separation and controlled information flows between systems of different trust levels could be adapted to AI workloads, while maintaining performance.

2. Trusted computing foundations for AI
Approaches to secure boot, integrity, identity, and attestation, including formal methods and other high-assurance techniques. Attestation here means evidence that systems are running approved configurations across the full stack, including hardware, firmware, software, model weights, and agent components. We are interested in practical paths to adoption in AI clusters.

3. Digital rights management approaches for protecting models
Mechanisms to reduce the risk of unauthorised copying of model weights and to verify that systems are running exactly the weights that have been approved. This includes cryptographic controls and secure execution patterns that limit how weights can be accessed and confirm their integrity.

4. Verifiable confidential compute
Advances needed for confidential computing (hardware-backed protection of data while in use) that are suitable for AI settings, including improving resilience to side-channels and enabling stronger assurance.

5. Advanced cryptography
Where applicable, how techniques such as privacy-enhancing computation could reduce exposure of sensitive data and model assets, and what practical deployment pathways and trade-offs exist.

6. Protective monitoring for AI system interconnects
How to improve detection and prevention of lateral movement and exfiltration in AI-specific high-speed interconnects and fabrics (for example, between accelerators, nodes, and storage).

7. Observability and telemetry for AI systems
End-to-end visibility across “whole AI systems” to support anomaly detection, compromise assessment, and incident response, including what data is useful and what is feasible at scale.

8. Adversarial machine learning defence
Mitigations for risks where model outputs or responses to queries can reveal information about the model or data, or enable indirect extraction attempts.

Who should respond

We welcome responses from organisations with relevant capability, including:

  • the AI sector, including model developers
  • data centre operators, cloud providers, and managed service providers
  • hardware, systems and networking vendors (including accelerators, servers, interconnects, storage, management)
  • cyber security and monitoring providers (identity, access control, telemetry, and threat detection)
  • the semiconductor industry, including confidential computing specialists
  • cryptography and verification specialists
  • cross-domain solution providers, including those that have implemented the NCSC’s security principles for Cross Domain Solutions (CDS)
  • universities, research institutes, startups, and scaleups working in secure AI infrastructure

What to include in your response

Information sent in response to this call should be unclassified. Please keep submissions high level. Where relevant, include:

  • your view on the risk, and how it is or should be mitigated
  • a short description of your proposed or suggested capability, solution, or research area, and which theme(s) it supports
  • maturity (concept / prototype / deployable today) and indicative timelines to production
  • deployment considerations (integration, performance overheads, operational complexity)
  • key dependencies (for example, silicon/firmware support, vendor collaboration, supply chain constraints)
  • how you would propose to evaluate effectiveness and assurance (evidence, metrics, testing approaches)
  • any barriers to adoption and what would accelerate progress (standards, access to environments, partnerships)

Please also indicate whether you would be willing to be contacted for further discussion and briefly describe where your organisation’s relevant expertise lies.

How to respond

We would like:

  • your response as a Word or PDF document of up to 5 pages (plus optional annexes)
  • a point of contact for follow-up

What happens next

We will use responses to:

  • broaden our understanding of the technical landscape and feasibility of different approaches
  • refine research priorities within the DSIT–AISI–NCSC programme
  • shape near-term technical pilot activity, including structured evaluation of approaches in a controlled environment
  • support ongoing engagement with industry, researchers, and international partners

We may publish a short summary of themes received, without attribution.

Important information

This is not a procurement or funding competition.

Do not include sensitive security details or classified information.

Information may be subject to the Freedom of Information Act 2000. If you consider parts of your response commercially sensitive, clearly mark them and explain why.