Open call for evidence

AI Growth Lab

Published 21 October 2025

Why we are issuing this call for evidence

We are seeking evidence on a pioneering cross-economy sandbox that would oversee the deployment of AI-enabled products and services that current regulation hinders.

The AI Growth Lab would support growth and responsible AI innovation by making targeted regulatory modifications under robust safeguards and with careful monitoring.

This call for evidence seeks views and evidence on the proposed AI Growth Lab to help inform government policy development.

Call for evidence details

Issued: 21 October 2025

Respond by: 11:59pm on 2 January 2026

Enquiries to:

Georgina Stockley
AI Regulation and Readiness
Department for Science, Innovation and Technology (DSIT)
26 Whitehall
Westminster
SW1A 2EG

Email: aigrowthlab-callforevidence@dsit.gov.uk

Call for evidence reference: “AI Growth Lab: Call for Evidence”

Audiences:

We would like to hear from individuals and organisations who:

  • are interested in using the AI Growth Lab
  • may be affected by the AI Growth Lab
  • have expert views on implementing sandboxes

Territorial extent:

We propose the Lab would apply to reserved areas of regulation across all of the UK. The government believes that the benefits of the sandbox should be shared across all nations and looks forward to working with Devolved Administrations on areas of devolved regulation.

How to respond

Respond online at: www.smartsurvey.co.uk.

Alternatively, if you wish to submit information which cannot be provided via a web form or would prefer to respond via email, please send your response to aigrowthlab-callforevidence@dsit.gov.uk.

When responding, please state whether you are responding as an individual or representing the views of an organisation.

Your response will be most useful if it is framed in direct response to the questions posed, though further comments and evidence are also welcome.

Confidentiality and data protection

Information you provide in response to this call for evidence, including personal information, may be disclosed in accordance with UK legislation (the Freedom of Information Act 2000, the Data Protection Act 2018 and the Environmental Information Regulations 2004).

If you want the information that you provide to be treated as confidential, please tell us, but be aware that we cannot guarantee confidentiality in all circumstances. An automatic confidentiality disclaimer generated by your IT system will not be regarded by us as a confidentiality request.

We will process your personal data in accordance with all applicable data protection laws. See our privacy notice.

We will summarise all responses and publish this summary on GOV.UK. The summary will include a list of names or organisations that responded, but not people’s personal names, addresses or other contact details.

The proposal

The adoption of AI is the defining economic opportunity of the coming decade. AI is already reshaping the economy and how we live our lives: from accelerating the discovery of breakthrough medicines, to personalised education for British pupils and better prediction of severe weather events. The OECD estimates that AI could add 0.4 to 1.3 percentage points to the UK’s productivity growth over the next decade – if fully realised, this could be equivalent to adding £55 billion to £140 billion to UK output each year by 2030.[footnote 1]
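
For scale, here is a rough back-of-envelope reading of these figures, assuming annual UK output of roughly £2.7 trillion (an illustrative baseline not stated in the source). Each percentage point of additional productivity growth corresponds to about

$$1\,\text{pp} \times £2.7\,\text{trillion} \approx £27\,\text{billion of extra output per year}$$

so 0.4 to 1.3 percentage points, sustained for roughly four to five years, accumulates to approximately the £55 billion to £140 billion range cited.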

The UK has a strong service sector, but to strengthen our competitive advantage, we must embrace AI quickly and safely. Pharmaceuticals, software, and financial services are already being transformed by AI, but there is much further to go.[footnote 2] Our ambition is for the UK to be the best place to develop and launch innovative AI applications - we risk missing out on transformative new services and products if we don’t accelerate adoption.[footnote 3] Across the economy, only 21% of UK businesses currently use AI.[footnote 4]

AI is the first technology that is truly autonomous and adaptive – but much regulation was designed before these capabilities existed. AI can make automated decisions; perform cognitive work previously exclusive to professionals; and operate in physical environments. Much of our regulation is technology-neutral and can still apply effectively – but not all. Some regulations needlessly assume activity will be undertaken by a person rather than AI. Others assume a product remains static and does not adapt.

To drive trust and innovation, it is crucial that our laws continue to keep pace with rapid AI development. Effective regulatory oversight is critical – but outdated rules could hold back the UK public from accessing the best products and constrain growth across our economy. The UK’s AI leaders report that regulatory barriers are among their top challenges: 60% of businesses responding to a recent call for evidence said that regulation is a barrier to AI adoption.[footnote 5] Flexible, adaptive rules are crucial to maintaining public trust, increasing AI adoption and attracting investment into the UK economy.[footnote 6]

The UK’s rule book must also adapt to wider advances in emerging technologies. This includes quantum, advanced connectivity, advanced manufacturing, clean energy and other technologies.

AI Growth Lab

We are calling for evidence on an AI Growth Lab – a pioneering cross-economy sandbox that would carefully supervise the deployment of AI-enabled products and services that current regulation hinders.

The Lab would drive innovation and growth, positioning the UK as the best place in the world to test, iterate and adopt responsible AI. The Lab would supercharge investment in innovative AI start-ups: participating companies are anticipated to receive 6.6 times more investment than they would without sandboxing.[footnote 7] Preliminary analysis of case studies in legal services and the planning system indicates that, by lifting unnecessary legal barriers to AI in these and other applications, the Lab could unlock billions of pounds in economic value by 2035.[footnote 8]

The Lab would deliver more dynamic UK regulation. Regulatory decisions on novel technologies can often take years, but we know that regulatory sandboxes can accelerate time to market by 40%.[footnote 9] The Lab would enable businesses and regulators to trial novel AI products and generate real-world evidence of their impact, which could speed up regulatory approvals. Successful experiments would give regulators and the public the confidence they need to make reforms permanent.

The AI Growth Lab would maintain public trust by keeping sandbox participants under close and careful supervision as they operate in live market environments with targeted regulatory modifications. The Lab would bring deep AI expertise to supervision, supporting some regulators who currently have skills gaps. This collaborative approach between innovators and government would ensure the UK builds the regulatory frameworks which transformative AI demands.

The Lab will also explore place-based regulatory sandboxes to complement AI Growth Zones. Embodied AI applications could test capabilities in a safe environment and could even be given access to the AI Research Resource (AIRR).

The AI Growth Lab will deliver on the commitment in the AI Opportunities Action Plan to “implement pro-innovation initiatives like regulatory sandboxes”. The UK pioneered the global sandbox model with the launch of the FCA’s fintech sandbox in 2016, inspiring other jurisdictions such as the EU, USA, Japan, Estonia and Singapore to announce or implement some form of regulatory sandbox for AI. With Transformative AI approaching, the UK must stay at the vanguard of international best practice in regulatory innovation – and continue to secure the benefits this brings for UK innovation and jobs.

Many regulators are already innovating to get responsible AI to citizens, faster – the AI Growth Lab will complement these programmes.

Take the MHRA’s AI Airlock, a safe testing space that lets developers explore regulatory frontiers for innovative AI tools.

The MHRA’s AI Airlock is currently exploring how to safely regulate Ambient Voice Technologies (AVT). AVTs use AI to listen to clinician-patient conversations and can then transcribe and summarise clinical notes and assessments, which are reviewed by the clinician before entry into the electronic health record or onward referrals. They can support clinicians with time-consuming administrative tasks so there is more time for patient care.

There are key differences in regulatory requirements for dictation tools that help with notetaking and those that influence clinical decisions or patient care. These tools may be regulated as Software as a Medical Device (SaMD) by the MHRA: the manufacturer will need to evidence that the product is acceptably safe and effective, and that it performs consistently in line with its intended purpose. Products that perform more complex tasks move into a higher risk classification, requiring additional evidence to validate safety and effectiveness.

Phase 2 of the MHRA’s AI Airlock includes TORTUS, a class I AVT. Through close working between the MHRA and developers, the Airlock will explore the range of regulatory issues associated with this new technology. This includes making sure that developers can continue to update and improve the performance of models, at a frequency and pace that is balanced against patient safety, without an excessive or bureaucratic regulatory process.

Dynamic regulation to pilot and test AI products

The AI Growth Lab would enable controlled deployment of cutting-edge AI systems within live market environments, with targeted regulatory modifications where necessary. In partnership with sectoral regulators, the Lab would:

  1. Operate issue-specific sandboxes, focusing on sectors where there is opportunity for innovation and adoption, but where regulatory modification may be needed;[footnote 10]
  2. Grant time-limited regulatory exemptions to innovative firms and products which meet eligibility criteria - ‘sandbox pilots’;
  3. Maintain rigorous oversight of sandbox pilots, designing limits and safeguards for regulatory modifications and carefully monitoring pilots; and
  4. Recommend the conversion of successful pilots into permanent regulatory reforms, through updated guidance, codes of practice, or statutory amendments.

Modifying regulations for closely supervised pilots

The rapid pace of AI development means that regulatory reforms, which are typically measured in years rather than months, could stall AI innovation and adoption in some areas. We want responsible AI applications to make UK citizens’ lives better, faster. Most UK sandboxes are advisory - they enable valuable regulator-innovator dialogue, but they cannot modify the underlying rules.

Evidence demonstrates that sandboxes which make statutory modifications can deliver exceptional economic returns. The FCA’s Innovate Project, which includes a regulatory sandbox, accelerated time-to-market by 40% compared to standard authorisations[footnote 11], while Singapore’s multi-agency Innovation Hub participants reported significant increases in follow-on investment. Our ambition is for the AI Growth Lab to deliver significantly greater impact.

Modification powers would operate with robust safeguards protecting fundamental rights and safety. The Lab’s design must balance the need for rapid reform with the maintenance of public trust and the UK’s high regulatory standards. Legislation would specify which regulations could be modified, and which ‘red lines’ could never be modified, in order to maintain safety and preserve public trust. We are seeking views on which regulations should be ‘red lines’. We intend that ‘red lines’ will include consumer protections, safety provisions, fundamental rights, workers’ protections and intellectual property rights. Within the Growth Lab, pilots would be closely and carefully supervised, with the Lab able to end pilots at any time. Where regulations continue to apply unmodified, the existing regulatory framework would be applied and enforced in the same way as normal. Consumer redress, whether provided for under consumer protection legislation, in contract or in tort, would remain unaffected.

Overseeing the sandbox

We are seeking views on how best to deliver the flexibility necessary for the sandbox to rapidly test and pilot temporary regulatory modifications with appropriate public and Parliamentary scrutiny. Primary legislation could confer on ministers a power to create individual sandboxes focused on a specific innovation (e.g. agentic AI in professional services, AI for planning assessments, or autonomous robots in advanced manufacturing) via secondary legislation. This secondary legislation would enable time-limited, targeted modifications to specified sectoral regulations where they are hindering AI adoption. In turn, licences for participant firms in the sandbox would specify innovation-specific safeguards, monitoring and restrictions.

Lab operating model

We are exploring a range of options for the Lab’s design. Two illustrative models are:

  1. Centrally operated Lab. A single, government-operated Lab would provide a unified entry point and be responsible for establishing and supervising pilots across sectors. It would work with sectoral regulators in an Oversight Committee to design and monitor sandbox pilots. Drawing on AISI intelligence about advanced AI development, we expect this model to be better placed to anticipate AI innovation in emerging sectors, and support products that can be applied in many sectors.
  2. Regulator-operated Labs. A lead regulator would be appointed for each sandbox, based on their existing regulatory remit. For cross-sector AI applications, a regulatory consortium would be formed with a lead regulator operating the sandbox. We expect this model to be better suited to sector-focused AI applications within highly regulated sectors, particularly applications highly dependent on regulatory expertise.

Priority issues and sectors

Transformative AI demands a cross-economy approach, but prioritising some sectors over others will be essential if we are to move quickly. The UK’s window for establishing competitive advantage is narrowing as global competitors accelerate their own AI strategies.

Initial sandboxes would prioritise applications delivering maximum strategic value for the UK. This includes those which:

  1. Align with the UK’s Sovereign AI strategic priorities and the Industrial Strategy;
  2. Have significant potential for innovation and growth; and
  3. Face regulatory barriers that are inhibiting AI adoption or innovation.

The Lab’s cross-economy design will enable innovations which cut across traditional regulatory boundaries to be tested. This is increasingly true of AI applications: for example, multiple regulators oversee AI interfacing with healthcare systems, autonomous vehicles coordinating with smart infrastructure, and AI-augmented professional services.

Where could the AI Growth Lab unlock innovation?

In almost every sector – from professional services to public services, from manufacturing to transport, from banking to education - AI-enabled products have the potential to transform the economy and our everyday lives. In this call for evidence, we are asking where the AI Growth Lab could unlock innovations and speed responsible AI adoption.

Speeding up the planning system:

Planning decisions require the balancing of various competing factors and of private and public interests, with the decision maker weighing different “material considerations”. While planning decisions need a human decision-maker, the AI Growth Lab could help pilot whether AI deployment in planning can support more efficient decision-making and make it easier to navigate and assess the evidence base for planning applications.

For example, there could be scope for AI to support complex elements of the planning process such as Environmental Impact Assessment, where machine-learning and large language models could reduce the burden of assessment and help decision-makers and communities make best use of these important documents. A key test will be ensuring that the use of AI at minimum protects, and ideally enhances, the confidence that the sector and the public more widely have in planning decisions – all while reaching those decisions faster and at less cost.

Improving patient care in the NHS:

The Ionising Radiation (Medical Exposure) Regulations are UK laws designed to protect patients from unnecessary exposure to ionising radiation, such as X-rays and CT scans. Every exposure must be justified by a trained professional who determines that the expected clinical benefit outweighs the radiation risk. The Regulations define three legally accountable roles: each role must be fulfilled by an identifiable, registered healthcare professional. This includes interpreting and reporting the results of scans, a function at which AI models are becoming increasingly adept.

This framework creates a significant barrier to the use of autonomous AI systems in diagnostic imaging. AI models cannot legally act as a practitioner or operator - they are not individuals and cannot hold professional registration or legal accountability.

AI tools in imaging can currently only support human decision-making - for example, by flagging potential abnormalities or drafting preliminary reports. Full automation would breach the requirement for a human to be legally responsible for each exposure and its interpretation. Given the pace of development of AI models in radiology, it is likely that some models will be able to match – or even surpass – human accuracy. In those instances, when properly validated, regulation should support automation and remove the need for over-stretched clinicians to double-check high-performing models.

Patient safety will always be paramount. The Ionising Radiation (Medical Exposure) Regulations were updated in 2024 to enable assistive AI use. But AI in radiology cannot be used autonomously. Given AI’s increasing capabilities, the AI Growth Lab could pilot regulatory modifications under careful supervision to assess potential efficiencies from adopting autonomous AI in radiology, helping to get care to patients more quickly and helping government to understand the impact of regulatory reforms, while safeguarding patient safety.

Enabling responsible robotics:

The laws on what can use our pavements and roads have been developed incrementally over time, with some of them dating back to 1835. Exciting new technologies already exist which use artificial intelligence to make decisions and navigate complex environments. If we can harness these technologies, we have the potential to make time-critical medical deliveries more efficient in our NHS and drive economic growth more widely. However, there is no specific regulatory framework to enable their safe, responsible use, which is holding businesses back.

We have already announced that we will pursue legislative reform for micromobility vehicles, including small delivery devices with automated technologies. The AI Growth Lab could accelerate this by testing and trialling how these devices can be used to drive growth and efficiency across business, the NHS and other public services.

Lab participants

Applications would be sought from start-up innovators, established FTSE companies and global AI developers alike, as well as innovators in the public sector. We are seeking input on how the Lab should be designed to support participants, including the application criteria, the safeguards needed to protect participants’ patents and the length of the sandbox.

Making successful modifications permanent

Piloting regulatory modifications is a controlled and safe way to trial reform. Evidence from pilots will identify regulations that can be safely updated without compromising protections, enabling all businesses in the UK to deploy proven AI. There should be a mechanism to translate this evidence into permanent reforms. This way, participants can avoid regulatory cliff-edges when pilots conclude, while other businesses and the public can benefit from reformed regulatory frameworks. We are therefore considering powers for government to make responsible regulatory modifications, validated through Lab testing, permanent via secondary legislation. This would be subject to appropriate parliamentary scrutiny. This mechanism would mean that the Lab is not only a safe and controlled way to trial responsible AI, but also a mechanism for dynamic regulatory reform.

By trialling responsible AI under careful supervision, the AI Growth Lab will also help innovators build the regulatory evidence-base they need to rapidly scale globally. This will position the UK as the best place to test, iterate and adopt responsible AI – attracting investment and jobs.

Other emerging technologies

We know that the UK’s rule book needs to adapt to technologies beyond just AI. Other emerging technologies are governed by regulatory frameworks conceived before their commercial or technical viability. We are therefore exploring whether sandboxing can help accelerate development in fields such as quantum, advanced connectivity and clean energy – technologies which have significant potential to drive growth. For instance, carbon capture and reuse technologies face significant regulatory barriers because of the classification of carbon dioxide as a hazardous product. Where technologies beyond AI could also benefit from utilising a Growth Lab model to support safe innovation, we invite industry to work with us to develop these options.

Call for evidence questions

About you

1. Are you responding on behalf of an organisation or in a personal capacity?

Please select one option:

  • In a personal capacity - I am responding as an individual and do not represent an organisation
  • On behalf of an organisation

If you selected ‘On behalf of an organisation’, please answer the following 3 questions:

2. Please select the group you most closely belong to:

Please select one option:

  • A business or industry group developing AI products or services
  • A business or industry group using AI services
  • A business or industry group not involved in AI
  • A research organisation, university or think tank
  • A charity, non-profit or community interest organisation, social, civic or activist group
  • A regulator or public sector organisation
  • Other

3. What is the size of your organisation (number of employees)?

Please select one option:

  • 1-9
  • 10-49
  • 50-249
  • 250+
  • Don’t know

4. Which sector do you work in?

Please select the most representative industry or enter under ‘Other’:

  • Primary Industries: Agriculture, Forestry, and Fishing; Mining and Quarrying
  • Manufacturing and Utilities: Manufacturing; Electricity, Gas, Steam and Air Conditioning Supply; Water Supply; Sewerage, Waste Management and Remediation Activities
  • Construction
  • Trade and Transportation: Wholesale and Retail Trade; Repair of Motor Vehicles and Motorcycles; Transportation and Storage
  • Accommodation and Food Service Activities
  • Information and Communication
  • Financial and Insurance Activities
  • Real Estate Activities
  • Professional, Scientific, and Technical Activities
  • Administrative and Support Service Activities
  • Public Administration and Defence
  • Education
  • Human Health and Social Work Activities
  • Arts, Entertainment, and Recreation
  • Other

5. We may want to follow up with you - if you are happy to be contacted, please provide us with a contact name, organisation (if relevant) and email address.

Please note, this question is optional.

Open-ended, word limit: 50 words

AI Growth Lab questions

6. The AI Growth Lab would offer a supervised and time-limited space to modify or disapply certain regulatory requirements. To what extent would an AI Growth Lab make it easier to develop or adopt AI?

It would make it (select one option):

  • Much easier
  • Somewhat easier
  • I don’t think there would be an effect
  • Don’t know

7. What advantages do you see in establishing a cross-economy AI Growth Lab, particularly in comparison with single regulator sandboxes?

Open-ended, word limit: 300 words

8. What disadvantages do you see in establishing a cross-economy AI Growth Lab, particularly in comparison with single regulator sandboxes?

Open-ended, word limit: 300 words

9. What, if any, specific regulatory barriers (particularly provisions of law) should be addressed through the AI Growth Lab? If so, why are these barriers to innovation? Please provide evidence where possible.

Open-ended, word limit: 300 words

10. Which sectors or AI applications should the AI Growth Lab prioritise?

Open-ended, word limit: 300 words

11. What could be the potential impacts of participating in the AI Growth Lab on your company/organisation?

Please select all that apply:

  • It would allow us to test products that we wouldn’t otherwise be able to test
  • It would make our company/organisation more internationally competitive
  • It would allow us to bring our product to market quicker than otherwise
  • Don’t know
  • Other (please specify)

12. Several regulatory and advisory sandboxes have operated in the UK and around the world, for example, the FCA’s Innovate Sandbox, the Bank of England / FCA Digital Securities Sandbox, the MHRA’s AI Airlock, and the ICO’s Data Protection Sandbox. Have you participated in such an initiative?

Please select one option:

  • Yes
  • No

13. What lessons from past sandboxes should inform the design of the AI Growth Lab?

Open-ended, word limit: 300 words

14. What types of regulation (particularly legislative provisions), if any, should be eligible for temporary modification or disapplication within the Lab? Could you give specific examples and why these should be eligible?

Open-ended, word limit: 300 words

15. We propose that certain types of rules and obligations, such as those relating to human rights, consumer rights and redress mechanisms, and workers’ protections and intellectual property rights, could never be modified or disapplied during a pilot. What types of regulation (particularly legislative provisions) should not be eligible for temporary modification or disapplication within the Lab (e.g. to maintain public trust)?

Open-ended, word limit: 300 words

16. What oversight do you think is needed for the Lab?

Please select all that apply:

  • Parliamentary scrutiny when modifying or disapplying regulations within the Lab
  • A Statutory Oversight Committee made up of sectoral regulators and independent experts
  • Public transparency and reporting
  • None
  • Don’t know
  • Other (please specify)

17. How would this oversight work most effectively?

Open-ended, word limit: 300 words

18. What criteria should determine which organisations or projects are eligible to participate in the Lab?

Please select all that apply:

  • You have an innovative product you want to bring to market
  • Your innovation is intended for the UK market, or you are a UK-based firm
  • Your innovation will benefit consumers
  • Your innovation is directly connected to AI
  • There is a regulatory barrier (legislation) which the AI Growth Lab would help overcome
  • There is a significant regulatory compliance resource otherwise needed to test the relevant product, which the AI Growth Lab would help avoid
  • Other (please specify)

19. Which institutional model for operating the Lab is preferable?

Please select one option:

  • AI Growth Lab run by central government, with the support of sectoral regulators
  • AI Growth Lab run by a lead-regulator
  • Don’t know
  • Other (please specify)

20. What is your reason for selecting this institutional model?

Open-ended, word limit: 300 words

21. What supervision, monitoring and controls should there be on companies taking part in the Lab?

Open-ended, word limit: 300 words

22. Do you think a successful pilot in the AI Growth Lab would justify streamlined powers for making changes permanent, as opposed to following existing legislative processes which would take considerably longer?

Please select one option:

  • Yes
  • No
  • Maybe
  • Don’t know

23. If you answered ‘yes’ or ‘maybe’ to question 22, what is the most effective way to achieve streamlined powers to make permanent legislative changes?

Open-ended, word limit: 300 words

24. Would there be value in extending the AI Growth Lab to other high-potential technologies?

Please select one option:

  • Yes
  • No
  • Maybe
  • Don’t know

25. If you answered ‘yes’ or ‘maybe’ to question 24, which technologies would benefit the most?

Open-ended, word limit: 300 words

26. Thank you for taking the time to complete the survey. Is there any other feedback or evidence that you wish to share?

Please select one option:

  • Yes
  • No

27. If you answered ‘yes’ to question 26, please set out your additional feedback or evidence.

Open-ended, word limit: 300 words

Next steps

During the call for evidence response window, the Department for Science, Innovation and Technology (DSIT) will organise roundtables to seek input from UK innovators, regulators, business and the public.

The responses provided via the call for evidence will inform the development of the AI Growth Lab.

  1. OECD (2025), “Macroeconomic productivity gains from Artificial Intelligence in G7 economies.” 

  2. McKinsey (2023), “Economic potential of generative AI” 

  3. Jung, C (2025), “The new politics of AI: Why fast technological change requires bold policy targets”, IPPR 

  4. ONS (2025), “Business insights and impact on the UK economy: 2 October 2025” 

  5. McLean, A & Smith, D (2025), “Technology Adoption Review 2025” 

  6. For example, Independent Financial Advisers (IFAs) must explain their decisions: if an investment recommendation is not “explainable”, they cannot lawfully make that recommendation. Complex or opaque AI models often do not meet the standard of explainability required, limiting the use of AI by IFAs – despite recent research showing that Claude Opus exceeds human-expert personal financial advisers and is significantly cheaper. 

  7. OECD (2023), “Regulatory sandboxes in artificial intelligence” 

  8. This is based on internal DSIT analysis 

  9. OECD (2023), “Regulatory sandboxes in artificial intelligence” 

  10. and existing regulatory sandboxes are not in place 

  11. OECD (2023), “Regulatory sandboxes in artificial intelligence”