AI adoption by the Engineering Biology community: survey findings
Published 29 January 2026
Insights from Artificial Intelligence in Engineering Biology Survey
Research conducted by BMG, an RSK Company
Executive summary
Artificial intelligence (AI) is already advancing Engineering Biology (EB) applications and processes. AI can enable novel design and high-throughput analysis, and can reduce experimental costs. It has already supported highly accurate protein folding predictions through the likes of AlphaFold, and has the potential to bolster vaccine and drug discovery and help tackle global challenges like climate change and food security. The UK government has identified the high cross-sector potential of EB applications and has prioritised EB as one of the 6 frontier technologies in the Industrial Strategy Digital and Technologies Sector Plan published in June 2025.
The UK is in a strong position to leverage opportunities and deliver meaningful impact through EB. It is already harnessing its strengths in R&D, scientific publications, intellectual property and venture capital.[footnote 1] The UK sector has more than 1,000 businesses innovating across different subsectors.[footnote 2] By exploiting the transformative opportunities of the convergence of AI and EB, the UK can create practical solutions to some of society’s biggest challenges. To champion these positive uses, we want to understand how the UK’s EB community (including researchers, academics and commercial enterprises operating at the crossroads of biology, engineering and computation) is adopting AI.
We commissioned a survey-based research project, conducted in June 2025, to explore how AI is currently being used, the barriers to its adoption, and perceptions of its value, utility, accessibility and associated risks within the EB community. This research will inform the development of impactful policies targeted at enhancing technology development and adoption, including bolstering research capability across the country.
This survey received 368 responses (a 1.42% return rate), which limits the generalisability of the results. Furthermore, in the absence of an agreed, data-driven definition of the EB community and its components, this research did not aim to produce a statistically representative sample of the EB community. Nonetheless, the diversity of responses provides confidence that the findings offer valuable insights into current practices and attitudes toward AI adoption in EB.
The insights generated will help DSIT to design and enact impactful policies to unlock AI-enabled EB innovation as it delivers on the government’s ambitious agenda:
-
Industrial Strategy (July 2025): A 10-year plan to make the UK the best country in the world to invest in through a new economic approach and a comprehensive programme of support. The Digital and Technologies Sector Plan is central to this. It focuses on steps to unlock growth in frontier technologies, including EB and AI.
-
The AI Opportunities Action Plan (Jan 2025): An ambitious roadmap for the UK to strengthen its global leadership in AI. It sets a strategic vision to secure the UK as a world leader in both building and using AI. The Plan emphasises the opportunities of AI for science.
-
AI for Science Strategy (Nov 2025): On 20 November, DSIT published the AI for Science Strategy, to ensure the UK maintains its scientific leadership and seizes the opportunity to shape the transformation of science by AI. Our AI for science vision is centred around: developing a data landscape that facilitates transformative research; ensuring that researchers have access to compute resource at sufficient scale; building research communities made up of truly interdisciplinary teams; and ensuring we capitalise on rapid developments in autonomous laboratory infrastructure and general-purpose and specialist AI science tools.
Overview of findings
This research collects insights from the following categories of respondent:
| Base | Group | Definition |
|---|---|---|
| N=368 | All respondents | Everyone who completed the survey, including those who do not currently use AI. |
| N=252 | Users | All users of AI products, services or platforms, including those who identify as vendors. |
| N=205 | Non-vendor users | Users of AI products, services or platforms, not including those who identify as vendors. |
| N=57 | Vendors | Vendors (sellers) of AI products, services or platforms. |
| N=10 | Non-user vendors | Vendors of AI products, services or platforms that do not identify as AI users. |
| N=262 | Vendors and users | All vendors and all users of AI products, services or platforms. |
Who is using which AI models
-
About 7 in 10 (69%) of all respondents use AI tools (n=368). Among users, 47% are academic researchers (n=252).
-
Among all respondents, 52% use AI on an ad‑hoc basis; 7% report fully embedded use.
-
Among all non-vendor users, the most widely adopted tools are ChatGPT (30%) and AlphaFold (20%) (n=205).
-
University-based users stand out for their use of more specialist models such as RFdiffusion (8%), Chai‑1 (5%), and Evolutionary Scale Modelling (ESM) (5%).
What AI is used for
-
The most popular applications among users include conversational assistance (89%), background research (83%) and drafting/writing (73%). Uptake is lower for more technical applications like hypothesis testing (42%) or embedded hardware (26%) (n=252).
-
Among users who engage with specialist tools for hypothesis testing, the most popular applications are protein design (52%) and omics data analysis (48%) (n=105).
Where AI is most helpful
-
Vendors rate their own AI tools as most helpful for designing biological systems (79%), upstream processing (78%) and hypothesis generation (77%) (n=57).
-
Users involved in R&D find AI most helpful for research and literature surveys (92%), data analysis (81%) and iterative learning/optimisation (73%) (n=196).
-
Users involved in R&D estimate that the greatest organisational-level uplift from AI comes from data analysis (85%) and predictive modelling (81%) (n=196).
-
Among users of AI tools for manufacturing biological products, AI is seen as most helpful for testing and validation (53%) and monitoring processes (52%) (n=96).
Barriers and risks
-
Trust in outputs is the most cited barrier to AI adoption by all respondents, both at an organisation level (52%) and an industry-wide level (60%). Capability and skills gaps emerge as the next most cited barrier both at an organisation level (46%) and an industry-wide level (54%) (n=368).
-
Over half of respondents (54%) cannot identify what skills or training are needed for effective AI use.
-
Among all respondents, the greatest perceived risks of AI adoption in EB are reliability and trust in AI outputs (23%). Notably, 38% of respondents are unsure what the key risks are.
-
Over 4 in 10 (42%) of the respondents that are aware of key risks indicate that perceived risks have “somewhat” influenced their adoption of AI (n=227).
Future trajectory
-
Most respondents (89%) expect to use AI more in the future. Nearly 4 in 10 (39%) indicate their organisation is planning to invest more in AI (n=368).
-
Top expected future applications among those increasing AI use are omics data analysis (64%), protein design/optimisation (59%) and genetic engineering (54%) (n=328).
-
Among all respondents, funding and investment (25%) emerges as the key area where DSIT can support the EB community in its adoption of AI. Training and education support is also highlighted (17%).
Glossary of key terms
Ad-hoc use of AI
- Sporadic and informal use of AI within an organisation (as opposed to use of AI tools that are fully embedded into organisational processes).
AI drafting tools
- Tools used to support users to draft documents, for example by generating novel text, identifying errors, finding information or checking citations.
Biomanufacturing
- The use of biological systems to produce commercially important biomaterials and biomolecules.
Biosecurity
- Measures taken to protect against the misuse or accidental release of biological agents, a key concern in AI-enabled EB.
Conversational AI
- A non-technical personal assistant such as ChatGPT.
Diagnostic models
- AI systems designed to identify diseases or conditions based on biological data.
Engineering biology (EB)
- The design, scaling and commercialisation of biology-derived products and services.
Engineering biology (EB) community
-
Individuals and entities whose work enables or is enabled by EB tools and techniques. This includes (but is not limited to) biomanufacturing, bioprocessing and precision fermentation across research, innovation and commercialisation.
-
In the few instances where the report refers to the “EB sector,” the definition of sector is synonymous with community.
General-purpose AI tools
- Computational models for natural language processing, trained on vast amounts of text through a self-supervised training regime. This includes conversational AI tools such as ChatGPT.
Genetic engineering
- The direct manipulation of an organism’s genes using biotechnology, often supported by AI tools.
In-silico experiments
- Computer-simulated experiments used to test hypotheses or model biological processes.
Omics data
- Large-scale datasets related to fields such as genomics, proteomics, and metabolomics, often analysed using AI for insights.
Predictive modelling
- The use of statistical or machine learning techniques to forecast outcomes based on data.
Protein design
- The use of AI to model, predict, or engineer protein structures and functions.
Methodology
Survey design
Figure 1 outlines the structure of the survey, with users and vendors completing bespoke questions depending on their role in the industry.
Figure 1: Overview of questionnaire design
Scoping
A taxonomy, built on DSIT’s definition of EB, was developed to identify the relevant organisations and individuals in the target audience.
Building the sample
Given the absence of a pre-existing sample frame for the EB community, the research team began by collaboratively defining the boundaries of the EB community, including relevant roles and organisation types. This informed the development of a taxonomy, which was used to identify individuals and organisations operating within the community. A population was then built using publicly available sources such as LinkedIn and staff directories, from which email addresses were collected.
Initial survey distribution used unique links sent to approximately 26,000 individuals identified through this crawl of publicly available sources. These links allowed the research team to send 6 separate reminders to complete the survey, and also helped match respondents to known variables such as organisation type, enabling deeper analysis. Due to a lower-than-expected response rate, an open survey link was subsequently created and shared by DSIT through its networks to widen participation. The survey was live from 5 to 27 June 2025. A total of 368 responses were collected.
Generalising findings to the UK EB community
This research did not aim to produce a statistically representative sample of the EB community, given the inherent challenges in reaching all segments of this diverse and emerging field. Instead, the goal was to gather insights from a broad cross-section of organisations and roles. While the sample includes a good spread of regions, organisation types, and EB applications, it is not possible to generalise findings to the entire EB community. Due to the lack of a definitive population frame, we cannot quantify over- or under-representation. However, certain groups, such as academic institutions, may be more represented in the response pool. Notably, there were more CEOs of SMEs and junior staff from universities who completed the survey than other combinations of role and organisation type.
Nonetheless, the diversity of responses provides confidence that the findings offer valuable insights into current practices and attitudes toward AI adoption in EB.
Caveats and limitations
This survey asks vendors and users about their use of AI tools. There are several caveats with this approach. As some respondents may use AI only occasionally, or as part of an existing software package, they may not accurately recall their use of AI.
Identifying and characterising the component parts of the EB community is another limitation of the research. Prior to the survey, a scoping exercise was conducted to estimate the size of the UK EB community and gather information on the actors within it, at a company/organisation and people level. It is important to note that this estimated market size was designed to service a specific need: gathering insights via an online survey. It was not designed to provide a precise sizing of the EB community.
Due to a lack of existing sample frames, the respondents for the sample were identified from their online presence. Steps were taken to quality assure and validate the outputs, in a community and people context. However, there might be instances where:
-
Respondents may not have identified their work as falling within the EB domain, even if it aligns with the EB community’s broader objectives. We included definitions in communications to mitigate this, but this may have affected both participation and how questions were interpreted.
-
Relevant individuals may be missed as a result of their online presence or how they describe themselves.
-
Non-relevant individuals may be included. However, in such instances, individuals will likely have been filtered out on the basis of their survey responses and so will not appear in the final dataset.
Who we spoke to
Approach
We spoke to both vendors and users of AI in the EB community. As the sample only included those in the EB community, vendors of more generic AI products were not included.
We asked respondents about their professional background, organisation, use, adoption and understanding of AI in EB, and their views on the EB community as a whole. Where any “organisation-level” findings are reported, this is based on individual perspectives.
We also asked individuals about their perceptions of the EB community as a whole. Where any “community-level” findings are reported, this is also based on respondents’ individual perspectives.
Geographical distribution of respondents
The largest proportion of all respondents (23%) is based in London (n=368). The overall geographical distribution of respondents is concentrated in the east and south east of England.
Figure 2: Geographical distribution and profile of participants
Notes: (Base: all respondents, n=368).
*Unknown due to respondent company address not being available on database.
Professional background of respondents
Nearly three quarters (72%) of all respondents are based in universities or private companies (n=368).
Figure 3: In which of these ways is your work related to the Engineering Biology sector?
Notes: S01_A. In which of these ways is your work related to the Engineering Biology sector?
S09. What term best describes your organisation?
(For all questions, base: all respondents, n=368).
Among all respondents (n=368):
- 46% identify as having an EB role in the EB sector
- 28% identify as having a non-EB role in the EB sector
- 11% have an EB role in a non-EB sector
- 21% selected ‘other’ but were deemed EB-relevant respondents following review of their profiles and their answers to the rest of the survey
All respondents were asked about their seniority within their organisation, and the size of their organisation. Over half (56%) of all respondents have a specialist role in their organisation (technical/professional/academic/research). Thirty-seven percent identify as working in strategic leadership roles (director or above). Furthermore:
- 31% of people from organisations with 1-10 employees surveyed are CEOs
- 21% of people from organisations with 11-50 employees surveyed are CEOs
- those in a less senior role (professionals or researchers) are more likely to be in universities (56% compared with 39% for the whole sample)
Organisations with 1 to 50 employees are more likely to be looking to expand their AI usage (18% compared with 14% overall).
Figure 4: In which of these ways is your work related to the Engineering Biology sector?
Notes: S01_A. In which of these ways is your work related to the Engineering Biology sector?
S04. Which of the following best describes your role in your organisation?
(For all questions, base: all respondents, n=368)
EB application areas represented in the sample
Respondents from organisations that have applications in health are the most highly represented (71%) in the total sample, but there is a spread across other areas (n=368). Respondents could return multiple answers, reflecting how organisations work across multiple application areas.
Figure 5: Which of these EB/Biotechnology applications is relevant to you?
Notes: S01_C. Which of these EB/Biotechnology applications is relevant to you?
(For all questions, base: all respondents, n=368, single code).
Survey results
AI adoption
Key insights
-
About 7 in 10 (69%) of all respondents use AI tools, yet over half (52%) do so on an ad‑hoc basis rather than having AI fully integrated into core workflows (n=368).
-
Among those that identify as non-vendor users, the highest proportions are academic researchers (47%) and university-based users (45%) (n=205).
-
Vendors (n=57) are more likely to report use of fully-embedded AI tools than the total sample (25% vs 7%).
Full results
Users of AI
About 7 in 10 (69%) of all respondents identify as users of AI tools (n=368). Within this figure there are distinctions:
Of the respondents that identify as vendors (n=57), those that feature prominently include:
- Private companies (56%), including startups (39%).
- Strategic leaders: CEOs (28%), technical leads/directors (26%).
- Those with a health focus (84%).
Of the respondents that identify as non-vendor users (n=205), those that feature prominently include:
- Academic researchers (47%), including those in research intensive roles (38%).
- University-based respondents (45%).
Of the respondents that identify as neither users nor vendors (n=106):
- Almost 1 in 5 (19%) have an EB-related role but not within an EB company, organisation or institution.
Figure 6: Is your organisation a…?
Notes: S16. Is your organisation a…?
(Base: all respondents, n=368).
Definitions of vendors and users were not provided to respondents and use cases were not given as examples for this question.
Nature of AI integration
Just over half of all respondents claim their organisation uses AI on an ad-hoc basis (52%), whereby AI tools are employed informally by staff or researchers (n=368). Ad-hoc use is more frequently cited by university-based respondents (62% vs total sample percentage of 52%).
Vendors of AI tools are more likely to report use of fully-embedded AI tools (25% vs total sample percentage of 7%).
Figure 7: How integrated are AI tools in your company, organisation or institution?
Notes: D1A. How integrated are AI tools in your company, organisation or institution?
(Base: all respondents, n=368).
AI Model use and development
Key insights
- Among non-vendor users ChatGPT (30%) and AlphaFold (20%) are the most widely adopted tools (n=205). However, university-based users report greater adoption of more specialist models such as RFdiffusion (8%), Chai‑1 (5%), and Evolutionary Scale Modelling (5%) compared to the total sample (n=93).
- Vendors primarily offer products that support data analysis (75%). Tools for testing hypotheses in silico are also popular (35%) (n=57).
- Vendors are most active in developing AI for predictive/personalised medicine or disease prediction (72%), followed by fundamental biology (39%), fermentation and bioprocessing (39%) and metabolic pathway engineering (32%) (n=57).
Full results
AI tools in use
Among non-vendor users, ChatGPT emerges as the most widely adopted tool, reported by 30% of respondents (n=205). AlphaFold follows as the next most popular, cited by 20% of respondents. The substantial “other” category (38%) encompasses a range of tools, from genomics-based platforms to solutions for in-house mathematical modelling.
Findings indicate that university-based users (n=93) are more likely to be using specialised AI tools than the broader respondent group. This includes:
-
RFdiffusion (8% vs 4%)
-
Chai-1 (5% vs 2%)
-
Evolutionary Scale Modelling (ESM) (5% vs 2%)
Figure 8: What tools/algorithms/models are you using?
Notes: B3. Please list all the tools/algorithms/models you are using.
(Base: non-vendor users, n=205).
Capabilities offered by vendors
Among vendors surveyed, 75% indicate their tool functionality is concentrated around data analysis and model development (n=57). Thirty-five percent offer tools that support hypothesis testing through in silico experiments. Very few surveyed vendors focus on conversational AI or drafting tools (7% each).
Vendors’ products typically offer multiple functionalities (averaging ~2) suggesting a multi-purpose orientation rather than a narrow specialisation.
Figure 9: What does your AI product, service or platform do?
Notes: A1/C1 What does your AI product, service or platform do?
(Base: all vendors, n=57).
AI vendors – application of capabilities
Among the vendors surveyed, the majority (72%) are developing tools for healthcare, such as predictive and personalised medicine or disease prediction (n=57). Other popular applications include:
-
Understanding fundamental biology e.g. building cellular models (39%).
-
Fermentation and bioprocessing e.g. optimising fermentation parameters (39%).
-
Metabolic pathway engineering e.g. optimising microbial strains (32%).
Figure 10: Which applications are you developing AI for?
Notes: A4/C6. Which applications are you developing AI for?
(Base: vendors of AI tools, n=57).
All AI users and vendors – technologies in development
Among all users and vendors of AI tools, the most common AI-based technologies in development are for analysis of omics data (31%), genetic engineering (26%) and drug discovery and development (24%) (n=262).
Several distinctions arise when looking at technologies in development by application area:
-
Companies in the agriculture and food sector (39%) are more likely than the total sample (31%) to be developing AI-based technologies for omics data analysis.
-
Companies in the waste and environment (42%), and chemicals and materials (38%) sectors are more likely than the total sample (30%) to be developing AI-based technologies for protein design and optimisation.
Figure 11: What type of AI based technologies are you developing?
Notes: A3/B1B/C5. What type of AI based technologies are you developing?
(Base: vendors and users of AI tools, n=262).
AI users vs vendors – technologies in development
Among vendors and users, vendors are more likely than users to be developing tools for genetic engineering (37% vs 22%), drug discovery (39% vs 20%), and diagnostic models (37% vs 8%) (n=262). There is less difference between vendors and users in the development of omics data analysis (39% vs 29%) and protein design and optimisation (33% vs 29%) technologies.
Figure 12: What type of AI based technologies are you developing?
Notes: A3/B1B/C5. What type of AI based technologies are you developing?
(Base: all users or vendors, n=262).
AI applications
Key insights
-
Among users, AI is predominantly used to support general tasks such as conversational assistance (89%) and background research (83%). Uptake is lower for more technical applications like hypothesis testing (42%) or embedded hardware (26%) (n=252).
-
Among users who engage with specialist tools for hypothesis testing, the most common applications are protein design and optimisation (52%) and omics data analysis (48%) (n=105).
-
Examining the use of specialist AI tools across EB application areas reveals variations in sector-specific use cases (n=105). For example, chemicals and materials, and waste and environment sectors lead in protein design and optimisation compared to other application areas (73% vs the total sample percentage of 52%).
Full results
Application of AI – overview
Amongst users, AI use is most prevalent in tasks that are less EB-specialised, such as conversational AI (89%), background research (83%) and drafting or writing (73%) (n=252).
The same users report lower uptake of more technical applications, such as testing hypotheses in silico (42%) or using hardware with AI embedded in it (26%).
Figure 13: How much do you use AI for the following tasks?
Notes: B1/C1. How much do you use AI for the following tasks?
* Percentages of reported use may differ slightly from the combined percentages in the chart due to rounding.
(Base: users of AI tools, n=252).
Application of AI – specialist use
Among users who engage with specialist tools for hypothesis testing, the most common applications are protein design and optimisation (52%) and omics data analysis (48%) (n=105).
Examining the use of specialist AI tools across EB application areas reveals variations in sector-specific use cases:
-
Chemicals and materials, and waste and environment sectors are more likely to use specialist tools for protein design and optimisation in comparison to other sectors (73% vs the total sample percentage of 52%).
-
The health sector is more likely to use specialist tools for diagnostic modelling in comparison to other sectors (18% vs the total sample percentage of 13%).
-
The agriculture and food sector is more likely to use specialised tools for biocomputing in comparison to other sectors (19% vs the total sample percentage of 9%).
Figure 14: You indicated at least some usage of testing hypotheses using specialist tools/ in silico experiments. What kind of tasks are you using these tools for?
Notes: B2/C4. You indicated at least some usage of testing hypotheses using specialist tools/ in silico experiments. What kind of tasks are you using these tools for?
(Base: users of AI tools, n=105, multi code).
Application of AI – personal specialist use
Of those that find AI helpful for building and testing the R&D pipeline, 15% cite enzyme and protein engineering (n=88). Responses point to a wide range of uses, from cancer therapeutics and disease modelling to agricultural innovation and automation in scientific processes. However, a significant proportion (34%) prefer not to answer, and 16% say they do not know.
Application of AI – organisational specialist use
Users who indicate that their organisation finds AI helpful for building and testing the R&D pipeline were later asked to share what their organisation is building/testing with AI. 12% of respondents cite enzyme and protein specialisation, often linked to medical and therapeutic technologies (n=101). This represents the most common response among the group. However, a considerable proportion of respondents (34%) choose not to disclose details.
AI helpfulness
Key insights
-
Vendors rate their tools as most helpful for specialised tasks such as designing biological systems (79%), upstream processing (78%), and hypothesis generation (77%), but less so for research and literature review (45%) or iterative optimisation (42%) (n=57).
-
Users involved in R&D find AI most helpful for research and literature surveys (92%) and data analysis (81%) (n=196). These users also find AI helpful for iterative learning and optimisation (73%).
-
At an organisational level, these users report the greatest benefit in structured data tasks such as analysis (85%) and predictive modelling (81%).
Full results
Perceptions of AI helpfulness—vendors
Vendors perceive their tools as most helpful in specialised applications such as designing biological systems (79%), upstream processing (78%) and hypothesis generation (77%) (n=57). Nearly 8 in 10 vendors rate their tools as helpful in these areas. Vendors consider their products least helpful for tasks like research and literature reviews (45%) and iterative optimisation (42%).
Figure 15: How helpful is your product, service or platform with the following applications?
Notes: (A5/C10) How helpful is your product, service or platform with the following applications?
(Base: all vendors, n=57).
Perceptions of AI helpfulness—users of AI tools who do R&D
Among users involved in R&D, AI is regarded as delivering the greatest benefits in less specialised tasks such as research and literature surveys (92%) and facilitating data analysis (81%) (n=196). These users also find AI helpful for iterative learning and optimisation (73%).
Figure 16: If you had to estimate, how much does the use of AI help with the following steps of the upstream R&D pipeline in your individual work?
Notes: B4B/C7B: If you had to estimate, how much does the use of AI help with the following steps of the upstream R&D pipeline in your individual work?
(Base: all users involved in R&D, n=196).
Perceptions of AI helpfulness – organisations who do R&D
When asked to estimate the extent AI uplifts upstream R&D at an organisational level, users identify data analysis (85%) and predictive modelling (81%) as the areas where AI provides the most significant benefit (n=196). Some respondents perceive AI as ‘not helpful’ for testing (18%), ideation or hypothesis testing (19%), and the design of biological systems (15%).
When comparing perceptions of AI uplift provided within organisations for an individual’s work vs their organisation in general, there is a drop in perceived helpfulness of AI for research and literature surveys:
- 92% perceive AI as helpful for research and literature review tasks in their individual work within their organisation (n=196).
- 74% perceive AI as helpful for research and literature review tasks at a wider organisational level (n=196).
Figure 17: If you had to estimate, how much does the use of AI help with the following steps of the upstream R&D pipeline in your organisation in general?
Notes: B5/C8. If you had to estimate, how much does the use of AI help with the following steps of the upstream R&D pipeline in your organisation in general?
(Base: users of AI tools who do R&D, n=196).
Comparing perceptions of AI helpfulness in R&D pipelines
Perceptions of how AI can help with the R&D pipeline differ between vendors and users, whether at an individual or organisational level. The reason for this discrepancy is unclear.
Figure 18: % of respondents who report “very helpful”.
Notes: A5/C10. How helpful is your product, service or platform with the following applications?
(Base: vendors of AI tools, n=57, single code).
B5/C8. If you had to estimate, how much does the use of AI help with the following steps of the upstream R&D pipeline in your organisation in general?
(Base: users of AI tools who do R&D, n=196, single code).
Perceptions of AI helpfulness—manufacturing biological products
Among users of AI for manufacturing biological products, AI is seen as most helpful for testing and validation (53%) and monitoring processes (52%) (n=96). Post-market surveillance, though cited as comparatively the least helpful, is still seen as helpful by 42% of those surveyed, and very helpful by 15%.
Figure 19: If you had to estimate, how helpful is AI in your organisation for the following steps of the biomanufacturing process?
Notes: B6B/C9B. If you had to estimate, how helpful is AI in your organisation for the following steps of the biomanufacturing process?
(Base: users of AI tools that manufacture anything using EB, n=96).
Barriers to AI development and adoption
Key insights
-
Among all respondents, the most-cited barriers to adoption of AI are low trust in AI outputs (52% at organisational level; 60% at industry-wide level) and capability or skills gaps (46% at organisation level; 54% at industry-wide level) (n=368).
-
Over half of all respondents (54%) cannot identify what skills or training are needed for effective AI use; the needs that are identified are highly varied.
-
Respondents’ views on UK AI regulation maturity are highly varied, with only 5% of all respondents saying it is very or extremely advanced.
Full results
Overview of barriers to AI adoption
Trust in outputs, then capability or skills gaps, are the most cited barriers to AI adoption by all respondents (n=368), both at an organisational level and an industry-wide level. When asked about obstacles to AI adoption within the respondent’s organisation, 52% cite trust in outputs and 46% highlight capability or skills gaps. When asked for reflections on barriers to AI adoption by the EB community as a whole, respondents similarly identify trust in outputs (60%) then capability or skills gaps (54%) as the principal obstacles.
Figure 20: What are the barriers to you or your organisation adopting AI, and barriers to wider industry adopting AI?
Notes: D2/D10 What are the barriers to you or your organisation adopting AI, and barriers to wider industry adopting AI?
(Base: all respondents, n=368).
Skills and training requirements
When asked to identify the specific skills or training required for their organisation’s workforce to effectively utilise AI, 54% of all respondents do not know (n=368). Among those who do specify needs, responses are highly varied. 10% of respondents cite technical skills, such as software development and computational biology. A further 10% cite the requirement for knowledge of AI’s practical application, such as having experienced software developers or those who understand the limitations of AI. Additionally, 5% point to the necessity for AI specific training: for example, to enable individuals to discern which tools are appropriate for specific tasks.
UK AI regulations
When asked about the maturity of UK AI regulations, one in five (21%) of all respondents feel that regulations are not at all advanced, while around a quarter (26%) are unsure (n=368). A very small proportion (5%) of respondents say regulations are extremely or very advanced.
Figure 21: How advanced do you think the state of UK regulations are with regards to AI?
Notes: D7. How advanced do you think the state of UK regulations are with regards to AI?
(All respondents, n=368).
Risks from AI adoption
Key insights
- Among all respondents, the most cited risk is reliability and trust in AI outputs (23%), followed by biosecurity and safety (16%) and loss of human expertise (13%) (n=368).
- Almost 4 in 10 of all respondents (38%) are unsure what the key risks from AI use in EB are.
- 42% of the respondents that are aware of the key risks indicated that perceived risks had “somewhat” influenced their adoption of AI. A further 19% said they were influenced “to a great extent” (n=227).
Full results
Overview of key risks associated with AI in EB
Among all respondents, the greatest risk cited is reliability and trust in AI outputs (23%) (n=368). However, 38% “don’t know” what the key risks are, surpassing all other categories of potential risks. Biosecurity and safety issues are mentioned by 16%, while 13% express apprehension about over-reliance on AI and the diminishing role of human expertise. The “other” category (14%) includes “insufficient data” and “public opinion”. There is no notable difference in responses by organisation type.
Figure 22: What do you view as the key risks associated with the use of AI in Engineering Biology?
Notes: E6. What do you view as the key risks associated with the use of AI in Engineering Biology?
Open text box.
(Base: all respondents, n=368).
Influence of risks on adoption
Of those that are aware of the key risks, 42% indicate that perceived risks “somewhat” influence their adoption of AI (n=227). Only 19% say they are influenced “to a great extent”.
Figure 23: To what extent have these perceived risks influenced your adoption of AI?
Notes: E7. To what extent have these perceived risks influenced your adoption of AI?
(Base: all respondents that did not answer “don’t know” at E6, n=227).
Influence of specific risks on adoption
When mapping specific risks to their influence on adoption, environmental and resource concerns stand out. A small group (n=12) raise environmental concerns as a barrier to adoption: 50% of this group say it influenced their adoption to “a great extent”.
Attitudes, equity and future support
Key insights
- Among all respondents, 89% expect to use AI more in the future (n=368). 19% express certainty around their organisation’s plans to increase investment in AI. 20% say plans to invest in AI are dependent on the success of pilot projects.
- For future use, among all respondents, there is interest in a wide range of emerging AI technologies, with protein design and prediction tools emerging as most popular (14%).
- Among all respondents, funding and investment (25%) is the most cited way that DSIT could support the EB community in its use of AI. However, 44% say they do not know how DSIT can best help.
Full results
Future AI adoption: overview of expectations
Most respondents (89%) anticipate increasing their use of AI in the future (n=368). Only a few report that they either don’t expect to use AI in the future (5%) or expect their usage to remain unchanged (4%). A minority foresee a reduction in their AI use (1%) or are uncertain about their future AI use (1%).
Of those who do not identify as users or vendors of AI technologies, the majority (82%) anticipate increasing their use of AI in the future (n=106). Only 8% indicate that they neither use AI at present nor expect to adopt it going forward.
Future AI adoption: investment expectations
Of all respondents, almost 1 in 5 (19%) express certainty around their organisation’s plans to increase investment in AI, with future activity closely tied to this commitment (n=368). A similar number (20%) say plans to invest in AI are dependent on the success of pilot projects. However, 7% say they have no plans to invest.
Figure 24: Is your organisation planning more investment in this area, for example buying more equipment, hiring more machine learning engineers?
Notes: E3. Is your organisation planning more investment in this area, for example buying more equipment, hiring more machine learning engineers?
(Base: all respondents, n=368).
Among all respondents, differences emerge when comparing anticipated future AI investment by different respondent groups:
- Vendors (n=57) are more likely than users (n=252) to intend to invest more in AI, seeing future investment as crucial (47% vs 21%), or to invest depending on the outcome of pilot projects (32% vs 22%).
- While users are also typically planning to invest, including waiting to see the outcome of the pilot projects, they are more likely than vendors to still be evaluating their options (16% vs 4%).
- Meanwhile non-users (n=106), compared to users and vendors, are more likely to be evaluating their options (27% vs 16%, 4%) or waiting for new evidence (8% vs 3%, 0%).
Figure 25: Is your organisation planning more investment in this area, for example buying more equipment, hiring more machine learning engineers?
Notes: E3. Is your organisation planning more investment in this area, for example buying more equipment, hiring more machine learning engineers?
(Base: all respondents, n=368).
Future AI adoption: tools of interest
For future use, all respondents express interest in a wide range of emerging AI technologies, with protein design and prediction tools emerging as most popular (14%) (n=368). LLMs for text processing (11%) and data integration tools (10%) also feature prominently. 33% of respondents say they “don’t know” which technologies they are most interested in.
Figure 26: What emerging AI technologies are you most interested in, and why?
Notes: E1. What emerging AI technologies are you most interested in, and why?
(All respondents, n=368).
Future AI adoption: applications of interest
Of respondents who expect to use more AI in the future, the primary applications of interest are for analysis of omics data (64%), protein design and optimisation (59%) and genetic engineering (54%) (n=328).
Application area-specific trends in anticipated future AI use emerge:
-
Those working in chemicals and materials (74%), and waste and environment (71%) are more likely to target future AI use towards protein design and optimisation than the total sample percentage of 59%.
-
Those working in low carbon fuels (73%), and agriculture and food (63%) are more likely to target future AI use towards genetic engineering than the total sample percentage of 54%.
-
Those working in agriculture and food (32%), and waste and environment (35%) are more likely to target future AI use towards environmental modelling than the total sample percentage of 24%.
Figure 27: What kind of tasks do you expect to use AI for more in the future?
Notes: E2a. What kind of tasks do you expect to use AI for more in the future?
Respondents could select multiple, so data will not sum to 100.
(Base: all respondents who expect to use more AI in the future, n=328).
AI policies and frameworks
Among all respondents, 45% report their organisation has either formal or informal guidelines for AI use (n=368). This comprises 28% with formal policies and 17% with informal guidelines. Around 1 in 10 (11%) are developing guidelines, while 23% report having no policies or frameworks in place.
Differences emerge when comparing AI policies in place across different organisation types. Universities are more likely to have formal policies and frameworks in place (36% vs total sample of 28%). Smaller organisations are more likely not to have any AI policies in place (46% vs total sample of 23%).
Figure 28: Does your organisation have policies or frameworks in place for AI governance?
Notes: D5. Does your organisation have policies or frameworks in place for AI governance? (All respondents, n=368).
Increasing AI adoption by the UK’s EB industry – DSIT’s role
Funding and investment (25%) emerge as the most popular form of support DSIT can offer to support the EB community to adopt AI (n=368). Training and education (17%) is also commonly cited. Notably, nearly half of all respondents (44%) are unsure how DSIT can best support industry.
Of those who mention regulations and standards (16%), common themes that arise are regulation of intellectual property and the need to develop regulations in collaboration with researchers and innovators.
Preferences for support vary by context:
-
Senior staff and vendors (37% for both) are more likely to prioritise funding and investment vs the total sample percentage of 28%.
-
Those who have been in the industry for over 10 years focus on training and education (23% vs the total sample percentage of 17%).
-
Technical professionals want more emphasis on regulation and standards (32% vs the total sample percentage of 16%).
Figure 29: DSIT aims to support UK industry in its use of AI for Engineering Biology. In your opinion, how can we best help?
Notes: DSIT aims to support UK industry in its use of AI for Engineering Biology. In your opinion, how can we best help?
(Base: all respondents, n=368).
Increasing AI adoption by the UK EB community – advancing regulations
Almost a third (32%) of all respondents are unsure if advanced UK regulation would help internal AI investment cases, while 27% think that it would be extremely or very helpful (n=368). Amongst those who report that advanced regulation would be helpful (n=208), the main reasons given are:
-
Boosting investor confidence through clear regulatory guidelines (59%).
-
Offering better protection of intellectual property (58%).
-
Improving safety and ethical standards (53%).
Figure 30: If UK regulations were more advanced than they are now, would this help your investment case in AI for Engineered Biology?
Notes: D9. If UK regulations were more advanced than they are now, would this help your investment case in AI for Engineered Biology?
(All respondents, n=368, single code)
D9A. For what reasons would more advanced UK regulations help your investment case in AI for Engineered Biology? (Where more advanced UK regulations would at least slightly help, n=208, multi code)
Annex A: Community and population sizing method
This method note provides a summary of the work undertaken to develop a population sample for the EB community. This supported primary research and the gathering of insights from people within the community via the survey. This note sets out the key steps undertaken to generate a targeted dataset at a community and people level.
Rationale:
The aim of this stage of the project was to estimate the market size of the UK’s EB community and gather information on the actors within it, at a company/organisation and people level. This builds on:
-
Existing research and data collection (which provide useful prompts for the scope and scale).
-
Taxonomy.
-
Refinements to the definition and expanding its scope by including organisations beyond the private sector.
This process provided a snapshot of the community, and the basis for administering primary research to relevant people within it.[footnote 3] It is important to note that this data was designed to service a specific need, which was gathering information via an online survey. As such, the approach has been optimised to support this use case. Data collection methods are comprehensive and proportionate to the task, but not equivalent to those that would be deployed for a policy-led market research assignment or a precise sizing of the EB market.
Key principles applied to this exercise:
This research has been directed by the following key principles:
-
The main aim is to identify relevant EB community companies/organisations and employee contacts, using a collectively agreed and directed research approach. This will support an assessment of the size of the target population to establish scale and reach, and to check the representativeness of the survey.
-
Given that the main aim is not to fully size the EB community, the exercise deployed a proportionate methodology that would:
-
Maximise the number of relevant companies/organisations engaged in EB activity, based on open web presence and descriptive content.
-
Maximise the number of relevant individuals in the sample.
-
Minimise the number of irrelevant individuals in the sample.
Limitations:
Steps have been taken to quality assure and validate the outputs, in a community and people context. However, there might be:
-
Instances where relevant individuals were missed as a result of their online presence or how they describe themselves.
-
Instances where non-relevant individuals are included. To further ensure that such individuals are not included in the survey results, survey questions were used to filter out outliers.
Guided by these principles, a methodology has been deployed that:
-
Made the best use of existing research tools and data sources.
-
Leveraged direct inputs and feedback loops from DSIT and the supplier.
-
Shared data excerpts and samples to refine the approach and ensure relevance.
-
Used a variety of approaches to focus on a target population audience.
Community-level data
The method adopted here drew on an established approach to community and market sizing:
-
It confirmed the agreed EB community definition according to DSIT’s understanding and existing resources. It also captured example company data, generated in 2023, which helped to augment language models.
-
It adapted these inputs to support a directed web crawl that would provide a view of the community, inclusive of organisations and institutions, in addition to private companies.
-
It generated an initial data sample to validate initial results and used feedback to optimise the crawl in terms of overall discovery, in-scope determinations and group classifications.
-
It developed a fuller iteration of the community dataset, including company-level records and select data fields to support further review and estimation of community size.
-
It further refined the dataset following Frontier’s review to remove peripheral records, tighten classifications, and update any gaps. Frontier validation checks included:
-
Reviewing the initial data sample to provide detailed feedback on conditions to improve the accuracy of in-scope determinations and group classifications.
-
Comparing Frontier’s sample review with the review done by DSIT for the sample to ensure alignment of the scope.
-
Reviewing an additional random sample of 50 companies that were selected in the fuller iteration of the community dataset.
-
Finalising community data, including enrichment with further fields and lower-level sector classification. This was concluded in parallel with people data enrichment.
Population-level data
The method used here has focused on identifying people within the companies and organisations discovered as being active in the community.
Overarching participants:
-
We have sought to adopt a “dynamic” approach to sizing the population, using a variety of inputs and filtering decisions that have allowed for an estimate to be generated.
-
Key inputs into this process have been the role taxonomy (based on community groups) and feedback from Frontier based on the sharing of data samples and excerpts (see below for details).
-
Glass.AI used automation to further evolve the role taxonomy to a more granular level that could support the discovery of people across open web sources, including LinkedIn and websites (including staff directories).
-
A wide base of people data was gathered that could subsequently be filtered in accordance with the targeting strategy. These people were UK-based employees only, aligned to the primary research requirements.
Filtering of the data:
-
The data was then filtered according to a series of rules, taking specific account of company/organisation size and guarding against the inclusion of peripheral people (an illustrative sketch of these rules follows this list):
-
For larger entities (250 employees or more) – a combination of roles (matched to the taxonomy) and EB key terms was used, to obtain an appropriate balance of people with the requisite skills/focus.
-
For small and medium entities (11-249 employees) – a combination of roles and key terms was used, but weighted less heavily than for larger entities.
-
For micro entities (10 employees or fewer) – matching was focused on target roles only. Where little people data was obtained within a company:
-
Small/micro entities – the nearest role is included based on key term matching or based on seniority, beyond the role taxonomy. As a backstop, a general company email address is provided.
-
For universities/research institutions – applying an approach consistent with the above, factoring in the relevance of associated departments/functions.
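To make the tiering above concrete, the sketch below gives one possible reading of these filtering rules. It is illustrative only and is not the pipeline used in this research: the role taxonomy, EB key terms, field names and the interpretation of roles and key terms being “not weighted as heavily” for smaller entities are hypothetical stand-ins, since the report does not specify them.

```python
# Illustrative sketch only: a simplified reading of the size-dependent filtering
# rules described above. The role taxonomy, key terms and field names are
# hypothetical stand-ins; the actual exercise used a fuller taxonomy and weighting.

EB_KEY_TERMS = {"synthetic biology", "bioprocessing", "precision fermentation", "protein engineering"}
TARGET_ROLES = {"research scientist", "bioinformatician", "head of biology", "cto"}


def matches_role(person: dict) -> bool:
    # Role match against the (hypothetical) role taxonomy.
    return person.get("role", "").lower() in TARGET_ROLES


def matches_key_terms(person: dict) -> bool:
    # Key-term match against open-web profile text.
    text = person.get("profile_text", "").lower()
    return any(term in text for term in EB_KEY_TERMS)


def in_scope(person: dict, org_size: int) -> bool:
    """Apply size-dependent inclusion rules to a candidate contact."""
    if org_size >= 250:
        # Larger entities: require both a taxonomy role match and EB key terms.
        return matches_role(person) and matches_key_terms(person)
    if org_size >= 11:
        # Small/medium entities: roles and key terms, weighted less heavily
        # (read here as either signal being sufficient).
        return matches_role(person) or matches_key_terms(person)
    # Micro entities (10 employees or fewer): match on target roles only.
    return matches_role(person)


def select_contacts(people: list, org_size: int, general_email: str = "") -> list:
    """Filter a company's people; fall back to a general address if nothing matches."""
    selected = [p for p in people if in_scope(p, org_size)]
    if not selected and general_email:
        # Backstop described for small/micro entities with no matched contacts.
        selected = [{"role": "general contact", "email": general_email}]
    return selected
```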
Validation process:
-
This approach has been informed by engagement across the consortium and focused on Frontier’s targeted review of data excerpts and samples. The focus for Frontier’s validation checks included:
-
Reviewing the role title dataset and providing detailed feedback on in-scope determination.
-
Refining in-scope rules, as explained above, to increase the accuracy of the crawl.
-
Identifying role titles that required further investigation, and reviewing in detail the population profiles provided thereafter for the in-scope determination of those roles.
-
The outcomes of this review focused on evolving the dataset to ensure appropriate and proportionate community/people targeting:
-
We identified some roles that should be removed from the population estimates as a result of being peripheral/junior.
-
Some additional roles have been added on account of this review.
-
We have applied some fine-tuning linked to some roles (such as managerial positions) to ensure they are senior in nature, and/or aligned to EB activity.
-
We considered how gaps in people data could be filled where no roles matched in a small-company context, with the objective of including 12-13 contacts per company.
Result
This led to a population estimate, which was adjusted to account for the availability of matched contact information (email addresses).
-
McKinsey, The UK biotech sector (2022) ↩
-
Note – whilst the primary purpose of the data is to develop a population sample for primary research, it may also have wider analytical value. This could be in terms of understanding the composition and scale of the EngBio community, and for other outreach to the community of interest. ↩