Securing converged technologies: insights from subject matter experts
Published 8 August 2025
1. Executive summary
The convergence of existing and emerging technologies is increasingly reshaping the contemporary capabilities of states, organisations, and society, with significant risks and opportunities for cyber security. Based on research with 20 subject matter experts, primarily from within the cyber security sector, this report examines the role of technology convergence, its impact upon cyber security, and potential policy responses for identifying and mitigating technology convergence in the domain. We present a series of findings on the challenges of identifying technology convergences for cyber security, how technology convergence operates across broader technology ecologies, and how engagement with a range of actors to build expert communities is essential to addressing and mitigating the impact of technology convergence for cyber security.
Report by Dr Andrew C Dwyer and Mikaela Brough.
1.1 Key findings
-
Identifying technology convergences and their cyber security impact is challenging: Across 16 interviews and an additional validation workshop, subject matter experts frequently focused on pre-existing strategic documents and well-known technologies (existing and emerging) to identify future convergences with impacts for cyber security. The experts struggled, and were often reluctant, to articulate novel technology convergences outside of those previously identified by others. This suggests that individuals have neither the confidence nor the wide-ranging and systematic knowledge required to identify new forms of technology convergence alone.
-
Technology convergence emerges across technology ecologies: Technologies that reshape capability (e.g., artificial intelligence (AI) applications or quantum engineering) were regarded by experts as essential foundational components for convergence, but experts noted that technologies alone do not determine whether convergence will occur and be impactful. The impact of such convergences is shaped by geopolitics, markets and innovation, as well as everyday socio-technical adoption and use (which this report calls technology ecologies). Identifying individual ‘key’ technologies and their particular technical capabilities for convergence, without understanding how they are integrated and adopted in broader social, economic, and political contexts, was considered unlikely to be beneficial.
-
Cyber security poses novel challenges for addressing and mitigating the impacts of technology convergence: Due to technology ecologies, adversarial behaviour, and the domain’s interdependent complexity, the impact of technology convergence for cyber security is often difficult to assess in advance. Experts disagreed over whether it may be easier in cases of national security and defence, where technology development can be more rigorously modelled, or for everyday socio-technical harms. Yet all agreed that the dynamism of the broader domain and context means that social and technical vulnerabilities, for instance, can manifest in unexpected ways with uneven impacts for cyber security response.
-
Active community engagement is required to manage technology convergence and cyber security for effective policy: Rather than concentrating on early-stage research, experts suggested that efforts to manage cyber security risks and opportunities should focus on higher technology readiness levels (TRLs). This is because many cyber security impacts are only likely to be appropriately identified when technologies and processes are closer to deployment. However, to build an effective and cohesive picture at the tempo required, insights should derive from established relationships between experts from across multiple academic disciplines, industry, applied research and elsewhere; this may be best achieved by a long-term task force dedicated to technology convergence.
2. Introduction
Ongoing scientific advances, ranging from artificial intelligence (AI) applications to quantum engineering and biotechnology, promise to transform scientific and security capabilities for states, organisations, and societies. The integration of novel scientific capabilities with a range of existing technologies, infrastructures, and processes will lead to new forms of technology convergence – creating new threats and vulnerabilities, as well as opportunities to improve cyber security response. New convergences have the potential to perpetuate online harms, redefine national security and defence capabilities, establish innovative markets, and foster new forms of connectivity. This report engages with the complex domain of technology convergence by examining how subject matter experts expect it will shape the future of cyber security. We present research conducted with 20 experts, primarily from the cyber security sector, funded by the Department for Science, Innovation and Technology (DSIT). The research was conducted with three objectives:
-
To understand which factors shape technology convergence in cyber security.
-
To explore the risks and opportunities of technology convergences for cyber security.
-
To develop perspectives on how to respond to technology convergence in the cyber security domain.
Findings from this research show that the impact of technology convergence upon cyber security is not driven by technology alone, but by a wide range of contextual factors, including geopolitics, economic markets, and everyday adoption. Subject matter experts found it difficult to engage with potential future technology convergences – apart from those already popular in the public domain – limiting the identification and assessment of particular risks and opportunities for cyber security. However, through qualitative interviews and a validation workshop, the broader conditions for technology convergence and domain-specific considerations for cyber security were identified, as well as ways to address and mitigate technology convergence’s impacts upon the domain.
The remainder of this report 1) provides a methodological background to the research; 2) explores how technology convergence is identified by participants, distinguishing between the more easily understood technology types and the less frequently identified technology groups; 3) examines what this report calls technology ecologies to explain how subject matter experts consider broader contextual features of convergence; 4) assesses domain-specific aspects of convergence for cyber security; 5) examines how to manage convergence in organisations and public policy; and 6) suggests how to build futures research for technology convergence and recommendations for policy.
3. Methodology
This report presents results from a qualitative study conducted in January and February 2025. The study engaged with 20 subject matter experts, henceforth participants, with 16 individual qualitative interviews and one expert validation workshop involving four participants who were not interviewed.
3.1 Research ethics
This study was approved by Royal Holloway, University of London’s Research Ethics Committee in December 2024. Participants were provided with an information sheet outlining the purposes of the study and informed consent was sought in writing in advance of the interviews and workshop. Interview participants were explicitly asked for permission to audio record. Audio files were transcribed by a GDPR-compliant third-party service where all personally identifiable information (PII) was removed. The anonymised data (each transcript was assigned a number) was analysed by a member of the research team, who did not have access to the original audio recordings. This allowed both for participant privacy and for greater impartiality in the analysis. All personal data associated with this project has been deleted.
3.2 Participants
All but one of the participants were recruited using a convenience sampling method[footnote 1], where participants were selected from the research team’s cyber security network. The participants came from a variety of backgrounds, with seven drawn primarily from cyber-policy, six from computer science, four from cyber defence, two from international relations, and one from futures-related thinking. Some participants had interdisciplinary specialisations that added further depth to the above but are not identified here to limit deanonymisation. All participants are currently working in academia or have done so within the past five years. Each participant has in-depth experience in their respective fields, often mixing industry, academic, and government knowledge. All participants have experience in either cyber security or adjacent knowledge in security and technology (including national security). 15 participants hold doctorate degrees in their field, with the remaining five working on doctorates or holding Master of Science degrees.
3.3 Subject matter expert interviews
16 individual interviews were conducted in January and February 2025 by the lead researcher (Dr Andrew Dwyer). Each interview lasted an average of one hour and was conducted online using Microsoft Teams. These interviews were conducted in a semi-structured style, following a set of open-ended questions developed collaboratively between the lead researcher and DSIT. The major questions posed to participants included:
-
Which emerging technologies do you think are going to impact cyber security most in the next 10-15 years?
-
What do you understand technology convergence to be?
-
What technological convergences are likely to be most important for cyber security in the future?
-
For the identified technological convergences, what are the risks and opportunities for the cyber security of each?
-
Which technological convergences have the greatest likelihood of occurring in the next 10-15 years?
Each thematic question was extended with additional questions to explore emergent participant ideas that involved significant elements of unstructured exploration.
3.4 Expert validation workshop
A three-hour workshop was conducted on 30 January 2025, attended by four subject matter experts in cyber security. This workshop was not audio-visually recorded to encourage an open dialogue between participants, with responses captured by a member of the research team using hand-written notes. Participants were provided with five activities that broadly reflected the structure of the interviews as well as building upon and validating the initial analysis of the interviews:
-
Key technologies for the future of cyber security: Participants were tasked to identify relevant technology groupings.
-
Examining (technology) convergence: Participants were tasked to create a definition of convergence, comparing this to existing definitions.
-
Convergences for future cyber security: Participants were tasked to predict possible future scenarios with horizons of 5, 10, and 15 years.
-
Scanning opportunities and risks: Participants were tasked to identify and discuss the most pressing risks and opportunities of convergence in terms of cyber security.
-
Securing converged technologies in practice: Participants were tasked to brainstorm some key recommendations for policymakers.
Each of the five areas of discussion was accompanied by an online Excalidraw whiteboard, where participants wrote on virtual ‘sticky notes.’ This served as a way of sharing, repurposing, and reordering ideas and acted as a supplementary record of the workshop. The workshop followed a loose structure, allowing participants to lead the discussion in ways that they chose, mimicking the style and tone of the semi-structured interviews.[footnote 2]
3.5 Data analysis
Full interview transcripts, interview notes, the workshop whiteboard, and author notes were analysed together through a process of conventional thematic analysis, following a codebook approach.[footnote 3] The codebook approach to thematic analysis involves the production of discrete ‘codes’ to be applied to sections of text, aiming at constructing more general meanings across the different texts. All codes, categories, and themes can be viewed in Appendix B. The codes were axially grouped together into four themes, informing the findings of this work. The thematic analysis was iterative, meaning that these steps were not completely discrete. The entirety of the analysis was conducted in the qualitative analysis software NVivo 14, which provides a front-end for the construction of a relational database. The four sections around which this report is structured relate to each of the high-level ‘Level 1’ themes identified in the analysis:
-
Methodologies for Thinking about Convergence: Convergence was difficult to define and many participants spoke about the topic through individual technology types or through technology groups.
-
Contextual Aspects of Convergence: Participants framed convergence within broader contextual trends rather than seeing the process as purely technology-driven.
-
Thinking about the Future: Participants reflected on the risks and opportunities of potential convergences for cyber security.
-
Challenges in Addressing Convergence Through Policy: There are pressures and organisational barriers that structure how the public sector can address convergence. Participants outlined their perceptions of bureaucratic processes, ideological fragmentation, and the division of labour within the public sector.
These four themes inform the subsequent sections of this report, in turn 1) Convergence Identification; 2) Technology Ecologies; 3) Cyber Security Convergence(s); and 4) Managing Convergence.
3.6 Study limitations
This research engages with a limited number of participants (20) on the complex and open topic of technology convergence and its impact upon cyber security. This was intended to draw out a rich and detailed understanding of subject matter experts’ views; this study does not seek to be representative and should not be interpreted as generalisable. Participants were recruited using convenience sampling, primarily drawn from known contacts of the research team. This sampling method, while efficient, introduces selection bias and limits the potential diversity of perspectives, as it does not ensure a comprehensive or systematically varied participant pool. As a result, the findings reflect the viewpoints of a relatively narrow group of participants rather than achieving a broader consensus on the topic. In addition, this study exclusively used remote individual interviews and a validation workshop. Online facilitation has been shown to create an emotional and social distance between the interviewer and the interviewee, reducing the potential depth of engagement, even as it enables greater access and speed of interaction.[footnote 4] Further, the wide-ranging subject matter was covered over a short period of time within each interview (averaging one hour), meaning that conversations tended towards shallower engagement with particular technologies for convergence, with significant unstructured questioning required to encourage wider reflections.
Data analysis was conducted manually and the output is therefore marked by a degree of subjectivity, reducing the study’s reproducibility. Some steps were taken to make sure that the analysis was rigorous and systematic. First, all new codes were checked against previous codes to ensure a greater coherence with existing frameworks. In addition, transcripts and other materials were retrospectively re-coded many times upon the discovery of new key open codes. Guidelines for effective and trustworthy analysis were followed.[footnote 5] We sought to take advantage of the openness of the responses provided by participants as an inductive exercise, seeing this as an opportunity to understand the priorities of participants and exploring the phenomenon of technology convergence and its domain-specific attributes in cyber security more widely than initially envisaged.
4. Convergence identification
The interviews and workshop began with consideration of ‘emerging’ technologies that may be important to cyber security. In this exercise, participants were most comfortable discussing the different ways in which technological convergence might emerge and be shaped, rather than identifying which convergences are likely, their scale, and their risk profile. When participants did identify technologies, they discussed these either individually or in pairs. Consequently, technology types were the most common means through which participants reasoned productively and pragmatically about the impact of convergence, both generally and within cyber security. No participant identified technology groups combining three or more technology types. This suggests that participant mental models (internal representations of the world)[footnote 6] were limited. Experts in this research found abstract discussions of technology convergence challenging without clearly articulated parameters of investigation, as fostered by dedicated futures-orientated methodologies.[footnote 7]
The hesitation of participants, both within the interviews and workshop, was particularly pronounced when reflecting on the likely timelines for the impact of technology types and groups upon cyber security, with varying expectations for their impact. The difficulty of assessing time horizons was referenced by participants during interviews, with two participants referring to the late January 2025 publication of a large language model (LLM) by the Chinese hedge fund, DeepSeek[footnote 8], as an example of where futures-orientated thinking is difficult. At the time of this report’s writing, claims of advances in quantum computing by Microsoft challenge some participants’ claims that the impacts will be more than 15 years in the making.[footnote 9] This may be caused by a lack of close domain knowledge within this group of participants, whose primary focus is on cyber security, meaning that this group alone is not the strongest vehicle for understanding future cyber security implications deriving from technology convergence.
Across the identification of technology types and groups, many participants referenced existing strategic documents. This included the eight emerging and disruptive technologies (EDTs) identified by NATO (Data; Artificial Intelligence; Autonomy; Space; Hypersonics; Quantum; Biotechnology; and Materials),[footnote 10] the EU’s identified critical technology areas (Advanced Semiconductor Technologies; Artificial Intelligence Technologies; Quantum Technologies; Biotechnologies; Advanced Connectivity, Navigation and Digital Technologies including 6G telecommunications, Cyber security technologies, and virtual reality among others; Advanced Sensing Technologies; Space & Propulsion Technologies; Energy Technologies; Robotics and Autonomous Systems; and Advanced Material, Manufacturing and Recycling Technologies),[footnote 11] as well as the UK National Quantum Strategy.[footnote 12] Throughout the interviews, these references structured responses to what are likely to be important technologies for future convergence.
4.1 Technology types
Participants were most confident in identifying technology types, even if there was less confidence in assessing their approximate temporal horizon, likelihood, and scale. These predominantly focused on AI, quantum technologies, and biotechnologies. In Table 1 below, we outline the technology types identified by participants mapped to NATO’s EDTs and the EU’s critical technology areas referenced above.
Table 1: A mapping of identified technology types by participants to NATO and EU strategic foci.
Technology type identified by participants | NATO EDTs | EU critical technology areas |
---|---|---|
Artificial Intelligence (AI) (including artificial general intelligence (AGI)) | X | X |
Augmented Reality (AR) and Virtual Reality (VR) | X | X |
Authentication technologies (password-less) | X | |
Automated scientific discovery (including vulnerability discovery) | X | X |
Autonomous vehicles | X | X |
Autonomous weaponry | X | X |
Biotechnology (including drug discovery and neural interfaces) | X | X |
Cloud infrastructure | X | X |
Cryptocurrencies | X | |
Internet of Things (IoT) | X | X |
New propulsion technologies | X | X |
Next generation networks (including 6G technologies) | X | X |
Novel interfaces (including neural interfaces) | X | X |
Quantum technologies (including compute, cryptography, and sensing) | X | X |
Renewables | X | X |
Robotics | X | X |
Social media stack | X | |
Space technologies | X | X |
Synthetic media (including ‘deep’ fakes) | X | |
Wearables | X | X |
Participants did not identify any technology types outside of NATO’s EDTs and most of the EU’s critical technology areas. In this report, we do not assess the relative importance or applicability of the technology types for convergence on cyber security, but note that participants were both influenced by and cited this literature. Participants did not have insight into additional technology types for convergence that they regarded as important.
4.2 Technology groups
Some participants identified two technology types converging as a paired technology group. Below are the groups identified by participants:
-
AI and quantum technologies
-
AI and next generation networks
-
AI and biotechnology
-
AI and Operational Technology (OT)
-
AI and weaponry
-
Big data and blockchain
-
Biotechnology and military processes
-
Novel materials and hardware
-
Quantum technologies and facial recognition
The pairing of AI and quantum technologies was the most frequently referenced technology group, with participants articulating that a profound change is likely, primarily with reference to compute (for more on this connection, see Castelvecchi)[footnote 13]. Participant 12, however, remarked that security stacks would still be isolated even in the case of convergence, and that how these technology types may group remains unclear:
They might be in one thing, but they’re still isolated from each other… If you have this quantum random number generator aided by… AI defensive software that keeps on scanning the phone and that’s then being connected to a smart ring...
The above quote demonstrates the vast range of different intersections and convergences, and the complexity that participants were attempting to parse and model. Despite noting that technology groups are important, participants found it challenging to identify such groups with confidence. This difficulty exposes a fundamental challenge in discussing technology convergence and cyber security: that security concerns and capabilities emerge through complex systems rather than necessarily at the level of technical capabilities alone. When conversations lingered at an abstract level, even for subject matter experts, technology convergences and their potential challenges and opportunities remained conceptually intangible.
5. Technology ecologies
While participants found identifying technology types and groups challenging, they were more confident in discussing the broader factors shaping technology convergence, which we term technology ecologies. Technology ecologies are defined as an assemblage of known, and unknown, objects, processes, and networks that come together to produce socio-technical outcomes that are difficult to assess directly from a focus on isolated technological capabilities alone.
Participants used various analogies to understand how technology convergence emerges and intersects with cyber security, for example by reference to a “stack” of technologies[footnote 14] (Participants 2, 3, 7, 9, and 12) and the “layering” of social, economic, cultural, and political aspects of technologies for convergence (Participants 5, 7, 12). One participant noted how technology convergence builds upon, and is shaped by, “hard baked” technical legacies. Taken together, these represent both a topological understanding of convergence – defined by a particularly hierarchical frame – as well as one that foregrounds political economy and socio-technical practices.
The remainder of this section explores how technology ecologies articulated by participants – through geopolitics, markets and innovation, and everyday practice – shape the capacity and effects of socio-technical convergence on cyber security.
5.1 Geopolitics
Recent technology “arms races” (Participant 5) to achieve technical superiority, primarily driven by national security objectives, have drawn attention to a range of key, ‘critical’ technologies for states. As much as the existence of technical capabilities was recognised by participants as a prerequisite for convergence to occur, states invest in and prioritise certain technology types, and restrict access to technologies. Collectively, participants thought state action was a strong indicator of what capabilities, and thus convergence, are possible.
Many participants likewise noted that geopolitics is a key determinant of whether convergence occurs or not. Participant 4 introduced the concept of ‘technospheres’ as one way of thinking about this idea. In this context, technospheres are geopolitical spaces (consisting of separate but not isolated spaces) where political, economic and cultural factors shape the adoption of technologies. For example, the application of export controls, calls for ‘digital sovereignty,’ and strategic national security investments align with this perspective. Within technospheres, there is however a broader ecology of interconnectedness:
If [a technology] originates in China, rather than the US, I think it will still go global. It won’t be confined to that technosphere. It will be through imitators or competitors, right? Local champions in different areas, because of the geopolitical boundaries, but it won’t mean that it just doesn’t exist elsewhere, because there’s enough interaction and like, like I said, acknowledgement of value. This goes back to the political economy of it, the reason it will exist everywhere, is because people will want to have it there (Participant 4).
This sense of geopolitical competition continued in examining whether it is necessary to weaken other states’ capabilities to have an “edge” (Participant 12) and how transforming geopolitical alliances and trust between allies shapes potential convergence (Participants 5 and 7). This means that some states will be able to develop convergences sooner than others. However, reflecting on technospheres, this is only a temporary situation: using export controls, for instance, to gain an “edge” is likely to provide only a short-lived advantage.
5.2 Markets and innovation
Across all participants, the discussion of geopolitics was interwoven with how markets and innovation within private enterprise supported state capabilities and everyday adoption of technologies (and therefore, which convergences are possible). The private sector, according to participants, drives experimentation and narratives regarding convergence. As Participant 5 stated, it is “[p]rivate companies who are now in the driving seat. If you look at cyber security and the role of Microsoft. But when it comes to AI and quantum, the future generation networks, well we have Nokia and Ericsson.” This suggests that a plurality of different private enterprises are likely to intersect, with various objectives and capabilities, making the assessment of multiple private enterprises’ intersection of goals and incentives difficult to anticipate.
Market forces likewise push companies to integrate new features (e.g., AI integration in a range of cyber security products), though some implementations may be superficial, driven more by marketing than substance – what participants called “industry hype.” For example, two participants speculated on the time frame for artificial general intelligence (AGI), often positioning this against their perception of industry hype. Other participants meanwhile reflected on what they perceived to be ‘snake oil’ in the market, which makes horizon scanning hazy. This makes decisions on cyber security investments hard:
You spend so much time trying to get it through the snake oil, you don’t get a chance to look beyond that. The adversary will always… It appears, for us to remain in our profession, that the adversary will always have the upper hand. We have a limited budget and they have an infinite budget (Participant 6).
The drive for companies to compete accelerates technology convergence. For example, participants explained that the push to implement AI in many technology ecologies has been driven by competition to increase revenue, often with little regard for cyber security implications. In summary, the types of convergences that are likely to emerge in the future, according to participants, will relate to i) profit motives, ii) the need to commoditise and sell data, and iii) the types of innovation demanded by emerging markets and consumer expectations:
I don’t think convergence is a security issue in and of itself. I think when you merge something too quickly, often just to exploit a technology to make money, you often do so with the pound signs in your eyes, rather than the safety or security of that convergence. So, I am absolutely confident that the integration of AI in all of these products… So, for example, commercially we aren’t supposed to be using things like ChatGPT because of the way it handles data and it’s external to the organisation but instead we’ve started to use [Microsoft] Copilot. And I have absolutely no doubt that there will be all sorts of issues about Copilot that no-one foresaw because there wasn’t enough testing because everyone rushed to get their AI integration into the product, whether it was relevant or not to even have AI integrated (Participant 11).
A drive for market innovation was perceived to come into conflict with national security priorities in technology convergence. This is exacerbated as national security policy and industry were thought to operate in separate spheres, where many technologies and products are produced without security and defence as a focus.
When innovation does occur, it does so quickly and is, according to participants, difficult to assess. The most common view was that the rate of innovation will continue to accelerate. Participants agreed that when new, revolutionary technologies break through, they do so very quickly, and this can be hard to predict in advance. This was connected to the idea of evolutionary vs. revolutionary innovation, where participants strongly disagreed over whether recent convergences from AI should be regarded as evolutionary or revolutionary in their implications for cyber security. Related were predictions of how ingrained AI will become in the future, with many participants expressing that they thought the hype of this technology would wear out and its actual usefulness in society would become steadily clearer. For others, technology standardisation represented a form of convergence that enables a transition from innovation to a form of interoperability. However, technology convergence poses problems when cyber security has not had time to develop new standards or adapt existing ones.
5.3 Everyday practice
While participants understood the role of geopolitics and markets and innovation in creating conditions for technical capabilities, they emphasised that technology convergence is especially important in everyday practice and adoption.
Participants referred to different technologies to understand how technology convergence occurs. There were references to niche technology convergence, which was considered easier to map and understand. Two examples emerged from these discussions. The first referred to quantum technologies, which are likely to be deployed by a limited number of actors due to their cost and complexity. The second referred to cyber intrusion tools, including spyware, which require some specialised knowledge and development, with several participants noting ongoing discussions to limit their proliferation through the France and UK-led Pall Mall process.[footnote 15] In these two cases, it was the technical capability limitations that meant that technology convergence was potentially easier to identify. Yet, the growing proliferation of cyber intrusion tools represented an example where this can change and become more accessible, and where unforeseen cyber security impacts can emerge.
Beyond the two examples of quantum technologies and cyber intrusion tools, there were frequent references to technologies that are more embedded in everyday life. Six participants used the analogy of the smart phone to highlight how technology convergence functions and its impact on cyber security. Variously, participants referenced how hardware and software emerged to provide technical capability (e.g., increasing compute and 5G connectivity), new economic models of software production (e.g., agile software development and subscription-based consumer models), the rise of social media, and social acceptance and norms of use. Collectively, the smart phone is – if viewed over a period of 15 or more years – one of the most significant technology convergences of the contemporary period. Alongside the opportunities it provides, it has introduced new threat vectors, requiring new cyber security products, processes, and policies to manage impacts ranging from audio-visual content on social media to secure payments. Smart phones represent a convergence that created a new technology ecology that radically reconfigured cyber security, but one that would have been difficult to manage in advance for either national security or everyday harms.
It is within these everyday settings that, due to technology ecologies, participants found it most vexing to articulate how particular future technical capabilities could converge and have impact.[footnote 16] They noted the significant potential for social harms beyond acknowledged national or organisational security, particularly in terms of privacy violations and disinformation. Identifying what these harms could be was very hard for participants. One example presented how AI had enabled phishing:
I would say the characteristic of generative AI that would be of most concern, is its ability to interact. We’ve been able to put together a phishing email and send it out to a million people and some of them will click on it. What we haven’t been able to do at scale is to be able to join an online forum and have a conversation with them – at scale – because each one of those conversations required a human being and now it doesn’t. So you can do things like warming people up and, if you get them to a certain point, automated systems could hand them over to real human beings to do the last mile or whatever. You might be able to fully automate the conversations that you need; talking to people on LinkedIn and saying that you’re interested in offering them a job (Participant 12).
These fears also extended to targeted attacks on everyday users; for example, domestic abuse could be further facilitated by converged technologies, especially those bridging with biotechnology. However, managing such everyday harms is difficult due to market forces.
In the start-up phase, it’s purely a quest for survival. There’s an existential threat to the company. Normally, it’s a handful of small people, they’re developing core features – security is not what they’re [thinking] about beyond the bare-bones. So when you’re getting a lot of very innovative companies experimenting at the edge of the technological horizon and you’re not having that security baked-in, actually fixing those gaps when you get to kind of year three, year four, year five of the company – especially when some of these technologies can be distributed a lot quicker than previously, I think you could have a large amount of vulnerability there that has real-world implications, such as medical IoT [Internet of Things], such as elective human-based implanted IoT [device], that could cause issues (Participant 4).
Collectively, across geopolitics, markets and innovation, and everyday practice, technology ecologies are crucial to understanding how technology convergence and cyber security interact. Participants were clear that, as prior examples including smart phones and more contemporary AI show, technical capability does not translate neatly into technology convergence. Broader technology ecologies are therefore as important as, if not more important than, technical capability in how technologies become adopted, in which markets, according to which objectives, and the contexts in which technologies are used and integrated.
6. Cyber security convergence(s)
In this section, we explore how technology ecologies have domain-specific features for cyber security, beyond the more general shaping of geopolitics, markets and innovation, and everyday practice discussed in the previous section. Participants, however, challenged cyber security as a cohesive domain in its own right – with some viewing cyber security as closely tied to addressing and mitigating technical vulnerabilities and others considering it as linked to broader information spaces (e.g., social media). This means that, for some participants, convergences that affect the broader ‘information space’ can sometimes be omitted, as government strategies and organisations were perceived to split between the technical and the social. As Participant 1 noted, however, “not everyone is distinguishing between this information space/cyber space”, demonstrating that there were diverse opinions on what the parameters of cyber security should be, and thus on how technology convergence shapes the contested field of cyber security.
6.1 Threats to cyber security
The threats that arise from technology convergence were debated, with differing perspectives on whether past cyber security convergences and their impact could have been predicted: for instance, “I don't think we could have foreseen ChatGPT for security” (Participant 6). Participant 8 explained that the difficulty in foreseeing the kinds of risks that generative AI would come to pose exposes a fundamental difficulty in trying to predict the nature of threats. It was a common line of thought that threats are ever evolving and highly complex, making them hard to evaluate in terms of future convergence. Trying to forecast future threats was considered a complicated and often futile task, especially given the uneven and ever-changing technology ecologies that cyber security practitioners face.
The challenge with things like AI and quantum is that there [is] much more than one [use] case. It’s quite… You cannot map out all the different implications or possible ways which it might be used by the defenders, or your offenders, so you’re kind of still being surprised by things as they go along – sometimes (Participant 1).
The impact of use cases – the everyday practices – is shaped by the relationships between adversarial behaviour and actors in the cyber domain. As Participant 2 noted, “[t]here’s not [an] incentive for attackers to innovate more than they need to and there’s incentives on defenders to catch-up as much as they need to, right? So that’s the basic dynamic.” Adversarial relationships, not dissimilar to competitive relationships in markets and innovation, therefore complicate cyber security assessments of technology convergence. The nature of threats thus emerges not only from technical capabilities but also from adversarial relationships, making impacts upon cyber security uneven and hard to define in advance of their likely application.
6.2 Capabilities
Within the interviews, and the subsequent validation workshop, a clear distinction emerged in how participants discussed the role of cyber security within technology convergence. This was split into two primary forms: first, technology convergence’s impact upon technology ecologies – the broader socio-technical landscape and its potential vulnerabilities – and second, where technology convergence enables the emergence of cyber security capabilities. In the former is the vast, and growing, swathe of new technologies likely to emerge from technology types and groups that require cyber security to be implemented. For example, new biotechnologies, such as neural interfaces, introduce new vulnerabilities and potential exploitation by threat actors in battlefield situations. In contrast, fewer technology convergences are likely to increase the effectiveness of cyber security and defence capabilities. This narrows the range of technology types and groups – primarily AI – that participants considered to offer potential enhancements for cyber security.
6.3 Opportunities and risks
AI was the individual technology type most identified by participants when considering opportunities and risks (with a broad range of subcomponents, including generative large language models (LLMs), reinforcement learning, and AI’s broader capabilities for pattern recognition). AI was frequently cited as a potential threat to national security, with adversaries able to use AI to support their organisational activities. Some participants expressed scepticism that AI is likely to enhance threat actor capability. This is partially supported by research published by Google[footnote 17] and OpenAI[footnote 18], which claims that there has been little substantive development by threat actors towards novel use cases in cyber security, even as AI has enabled greater capability in creating social media posts and conducting open-source research. As Participant 1 reflected, AI is “not necessarily used to write malware at this point – but to help them with some of the organisational aspects.” Others, however, considered AI technologies to lower the barrier to entry for identifying and generating exploits, especially regarding vulnerability discovery and capacity enhancements for threat actors. As Participant 1 further reflected, “[t]he implications of that are, I think, quite seismic… A malicious actor could use these kind of technologies to find vulnerabilities. The barrier to entry is much lower in that sense.” Collectively, participants had significantly divergent views on the likely impact of AI on cyber intrusion and offensive capability.
Despite concerns around the technology, AI was considered by participants to be more useful for defence than for offence. As Participant 11 explained, “from a security defence perspective, they’re more useful than they are from an offence perspective” and will be “in favour of the defender, more than anything. Or, at least, in the next five or 10 years.” The main opportunity participants identified was defensive capability, for both national security and organisations. AI was considered useful for advanced threat detection, but also for automating the first line of defence and intrusion detection. Participants often discussed this in terms of how much human intervention would be required in such systems, pointing to cases in which human intuition has been superior. As Participant 9 stated:
There’s a scalability thing that we’ve got with AI as an opportunity for cyber defence which means that we can have many more artificial eyes on things and it can do a lot of the donkey-work and simulate responses. So if you look at network protection, the main issue is false-positives. People buy an intrusion detection system, and one of the first things they do is they tune it down until it stops bothering them with alerts, when it’s hopeless. Which is why people sold intrusion prevention systems, which wrote new firewall [rules] for you and then broke your system. So then they’re watered down until they’re useless.
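The tuning-down dynamic the participant describes is, at root, a base-rate problem. A minimal sketch with purely illustrative numbers (none of the figures below are from the study):

```python
# Base-rate arithmetic (all figures assumed for illustration): when benign
# events vastly outnumber genuine intrusions, even a low false-positive
# rate means most alerts are false, which is what tempts operators to
# tune detection systems down until they fall silent.

def alert_precision(events: int, intrusions: int, tpr: float, fpr: float) -> float:
    """Fraction of raised alerts that correspond to genuine intrusions."""
    true_alerts = intrusions * tpr                  # intrusions correctly flagged
    false_alerts = (events - intrusions) * fpr      # benign events wrongly flagged
    return true_alerts / (true_alerts + false_alerts)

# 1,000,000 daily events, 10 real intrusions, 99% detection, 0.1% false positives:
precision = alert_precision(1_000_000, 10, 0.99, 0.001)  # roughly 0.01
```

Under these assumed figures, fewer than one alert in a hundred is genuine – illustrating why the socio-technical context of how alerts are consumed, not just raw detection capability, shapes whether AI-driven defence delivers value.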
Therefore, it is not simply the technical capability that is important for realising the opportunity of convergence in cyber security, but a broad range of socio-technical factors including markets, human-computer interactions, and layering of different organisational processes. Or, as Participant 11 reflected, “I think AI’s use in operational technology [OT] is more of a threat in terms of just its capability. I think it’s more likely that AI will cause problems in OT, just because AI is not trustworthy.” This means that the opportunities from AI, in this case, do not simply emerge from the technology, but from a confluence of factors that are highly contextual and dependent on technology ecologies.
Beyond the five participants who identified the potential for vulnerability discovery from AI, many of the opportunities identified for cyber security were generic, or focused on convergences of existing cyber security technologies that have already occurred or are in the process of doing so (e.g., anti-virus technologies becoming part of broader endpoint detection products). No participant highlighted an opportunity for cyber security from quantum technologies. Across a range of technologies, opportunities for cyber security from convergence were limited, whereas a greater – albeit dispersed – view suggested that the risks of technology convergence were more numerous. However, these risks did not emerge from the technology convergence itself, but from its contextual, everyday use by actors. This led to much figurative ‘hand waving’ about the possibilities.
What participants did focus on is how technology convergence will increase the complexity of technology ecologies, making the implementation of cyber security even more difficult. Deep packet inspection, which analyses network traffic for malicious activity, was offered as one example:
So I think one of the converged tech is that all of a sudden what we’ll be finding is that our logs will be overflowing, the parsing that we want to do is not happening, and with um – if we get a bit technical – if you’ve got one protocol inside of another, as we often do on networks nowadays – just ripping-out the various layers until you get to the bit that you’re trying to inspect, it’s often called Deep Package Inspection, will become more and more difficult (Participant 9).
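The “ripping-out” of layers the participant describes can be sketched with a deliberately simplified toy format (the tags, header layout, and names below are illustrative assumptions, not any real network protocol):

```python
# Toy sketch only, not a real DPI engine: each "layer" is a one-byte type
# tag and a one-byte header length, followed by that header and the
# encapsulated inner bytes. A tag of 0x00 marks the raw payload. Deep
# inspection means peeling layers until the innermost payload is reached.

PAYLOAD_TAG = 0x00

def build_layer(tag: int, header: bytes, inner: bytes) -> bytes:
    """Encapsulate `inner` beneath a new outer layer."""
    return bytes([tag, len(header)]) + header + inner

def peel_to_payload(packet: bytes) -> tuple[list[int], bytes]:
    """Strip nested layers until the payload, recording each layer's tag."""
    tags = []
    while packet and packet[0] != PAYLOAD_TAG:
        tag, header_len = packet[0], packet[1]
        tags.append(tag)
        packet = packet[2 + header_len:]  # discard this layer entirely
    return tags, packet[1:]               # drop the payload marker itself

# A payload wrapped in two hypothetical layers, e.g. tunnelled traffic:
packet = build_layer(0x11, b"OUTR",
                     build_layer(0x2F, b"TUN", bytes([PAYLOAD_TAG]) + b"hello"))
tags, payload = peel_to_payload(packet)  # tags: [0x11, 0x2F]; payload: b"hello"
```

The difficulty the participant points to follows directly: every additional encapsulation layer an inspector does not recognise stops the peeling early, leaving the payload opaque to analysis.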
The participant suggested that this change in practices emerging from technology convergence will impact future cyber security workforces. As ‘lower level’ work becomes increasingly automated, the skills acquired from this foundational cyber security training may be lost. This means that as technology ecologies become more complex, the core knowledge and skills of the cyber security workforce may decrease, reducing resiliency in the longer term even as there is a short-term boost to productivity.
Cyber security as a field exhibits both the dynamics of adversary behaviour and the complexity of technology ecologies – making the impact of technology convergence hard to assess. Yet, for cyber security, there is greater risk from expanding technology ecologies and their cyber security needs than from the capability enhancements that AI will offer. Participants identified potential opportunities, such as vulnerability discovery, that could emerge from the integration of AI into cyber security products. Despite this, participants were wary of the potential extent of these opportunities and emphasised risks, particularly a degradation of skills and knowledge that may emerge from greater automation.
7. Managing convergence
Cyber security was considered by participants to be a reactive field, with many current ‘fires’ to attend to. The management of technology convergence and cyber security is then often not the priority for many actors in the sector. As Participant 6 noted, “as cyber security practitioners, we are generally reactive because, thankfully, there are only a few of us and, therefore, our horizon-scanning is limited because we are fighting the current fires, rather than the fires of the future.” This concluding section therefore examines how to manage convergence, particularly through the prism of policy interventions.
Participants reflected on how technology convergence and cyber security become recognised by governments. Some noted that governments pay attention when technologies appear in the popular media, which in turn drives a relatively slow policy response. Attention was, for some, driven by ‘policy hype’, where trends flare up and then die down.
[I]t’s always fascinating to see how certain buzz words emerge or die or never leave the governance and policy conversations. And one of them is policy convergence, finding synergies and, at the same time, for me this is kind of almost a rhetorical exercise. Kind of empty words (Participant 5).
Although not all participants agreed, there was a sense that ‘hypes’ (including some who identified this research as part of contemporary AI-centred ‘hype’) tended to structure policy responses as short-term fixes to technology convergence. This then leads to a focus on particular technology types (e.g., around the significant policy attention on AI) rather than on the broader drivers of technology ecologies and the dynamics of the cyber security domain. Despite the concerns over policy hype, most participants did however recognise and acknowledge that managing technology convergence is complex and offered directions for future policy considerations.
7.1 Responding to convergence
Participants acknowledged that pressures and organisational barriers structure how governments can address convergence. Meaningfully working on the cyber security opportunities and risks posed by technology convergence would require working across ‘silos’ within policy and cyber security. Several participants spoke of fragmented knowledge and capability in cyber security, limiting the capacity of small task forces. This can constrain thinking about technology convergence, since practitioner experience of a limited number of technologies and processes means that technology convergence is considered through a small cone of possibility. As Participant 1 reflected at two points in the interview, “we shoot ourselves in the foot by considering emerging technologies individually” and “thinking about convergence doesn’t happen naturally, unless you have the right leadership that cares about all of it.” This suggests that addressing the impacts of cyber security convergence requires both a broad pool of knowledge and expertise in cyber security and an appreciation of the associated technology ecologies.
Some participants explained that, because of this, echo chambers can form, in which task forces reinforce each other’s views without considering other ways of thinking about technology convergence. Participant 13 explained that different task forces also focus on different time frames, meaning that thinking cohesively about long-term futures is usually considered someone else’s job. Especially with cyber security, there was uncertainty about where these types of responsibilities lie. In trying to bring together ideas across silos, participants described real struggles of knowledge translation, relating to the fact that different government bodies can have different attitudes to technology. For instance, militaries were described as quite “tech-phobic”, since they can view technology through its potential as an attack surface – meaning that many processes are still done in an “old school” way. In the UK, by contrast, the move to promote economic growth by embracing new technology convergences led some participants to worry that this could introduce unforeseen implications for cyber security.
7.2 Beyond “flat pack” futures
As much as responding to convergence and cyber security is affected by limited knowledge, practice, and silos of thinking, there was concern that futures exercises can become overly simplified. One participant noted the danger that technology convergence, and its impact upon cyber security, becomes a “flat pack” futures exercise that follows a basic formula of combining technological capabilities without the broader contextualisation of technology ecologies. Rather, they continued, the future is akin to a “brownfield site”: cyber security is built across the foundations of prior structures and use. Across the study, participants found it difficult to confidently identify technology types and technology groups that will lead to technology convergences with impacts for cyber security, speaking to the limitations of such a technology-centric view of technology convergence.
Policy was considered to lag behind technology convergence trends, with attention to new capabilities generally emerging only after they are highly visible. This is particularly the case in non-national security domains, where technology ecologies make the identification of social harms or complex integrations with national security (e.g., disinformation or the previously discussed smart phone) difficult to anticipate effectively. According to participants, this can lead to an imperative to identify individual technologies and companies that are likely to be important for technology convergence. As Participant 7 remarked:
It also introduces a sense of speed of urgency that you are unable to respond on a timely manner, you know, to something that you might have anticipated, but you could not identify in the timelines, or who would be the one, you know, leading the conversation, or which companies will be leading some of the conversations – let’s be honest.
This means that, for participants, there is a danger that technology convergence becomes a ‘flat pack’ exercise that seeks to reduce the cone of possibility to a limited number of technologies and actors. Some participants acknowledged that policy in this area is exceptionally difficult, and choices must be made over the limited resources of policymakers, which drives this risk. It is within the gaps – represented by technology ecologies – that cyber security was considered to be most important, which in turn requires a more complex and nuanced appreciation of how technology convergence shapes the possible beyond the capabilities of the technologies themselves.
7.3 Recommendations for addressing technology convergence for cyber security
Participants were asked how to better address technology convergence for cyber security. In this, multiple recommendations were made that we summarise below:
1. Back to Basics
Since technology convergence makes systems and technology ecologies more complex, policymakers should first reinforce the importance of fundamental cyber security practices; a ‘back to basics’ approach (Participant 20) that may “cover the bases” of many technology areas. This includes promoting secure-by-design principles across all emerging technologies, ensuring robust authentication methods, and prioritising basic cyber hygiene. However, this does not necessarily work for some socio-technical harms that may emerge beyond insecurities in technologies themselves.
2. Boots on the Ground
Given the challenges identified by participants surrounding knowledge silos, policymakers may consider interdisciplinary task forces that bridge the cyber security, industry, and policy domains. These task forces should include experts from multiple academic disciplines, industry, government, think tanks, and beyond to facilitate knowledge sharing that focuses on the long term, generating insights over time. The findings from this study speak to the pressing need to engage with ‘boots on the ground’ technologists. This could include ongoing engagement with large technology organisations as well as start-ups whose promising technology convergences are at higher technology readiness levels (TRLs). This is important because academic research alone was considered often too foundational to provide a comprehensive appreciation of the cyber security impacts of technology convergence.
3. Continued Defensive Focus
The findings from this study discuss how cyber security is often reactive, responding to new threats rather than anticipating them. The findings also speak to the value of AI for defensive systems. Therefore, policymakers should prioritise funding for research and development of appropriate socio-technical adoption of AI-driven defensive technologies that can automate threat detection and mitigation in technology ecologies, whilst addressing potential changes in the skills and knowledge of cyber security practitioners.
7.4 Technology convergence and cyber security
This research engaged with 20 subject matter experts to examine what factors shape technology convergence, the risks and opportunities for cyber security, and how to manage the relationship between technology convergence and cyber security. Across 16 qualitative interviews and a validation workshop, experts were reluctant, and found it challenging, to identify technology convergence and its opportunities. When experts did, it was primarily with reference to AI and its potential for vulnerability discovery. However, the experts were clear that considering the broader role of technology ecologies, over technical capabilities alone, is key to understanding the impacts for cyber security, using the example of the smart phone to demonstrate this across diverse areas of cyber security and aligned digital policy. To manage technology convergence and cyber security, experts suggested that attending to later-stage technology development and establishing expert contacts are crucial for addressing both short-term and longer-term risk. Alongside a continued focus on defence and an established focus on cyber resilience, we suggest from this research that a focus on technologies close to deployment, together with a proactive socio-technical interest in technology ecologies (through engagement with a pool of experts from a variety of backgrounds and disciplines in academia, industry, and government), is best suited to managing the cyber security impacts of technology convergence.
8. Acknowledgements
We thank all participants who provided their expertise and time to the study. This research was funded by the UK Department for Science, Innovation and Technology (DSIT).
9. Authors
Dr Andrew C Dwyer is a Lecturer in Information Security at Royal Holloway, University of London and the Lead of the UK Offensive Cyber Working Group. His research examines the intersection between geopolitics, cyber policy, and embedded cyber security practices. He is an Associate Fellow at the UK Research Institute for Sociotechnical Cyber Security (RISCS) and holds a DPhil (PhD) from the University of Oxford.
Mikaela Brough is a PhD Candidate in Information Security at Royal Holloway, University of London. Her research explores the social and cultural foundations of information security, with a focus on social movements. She holds an MSc from the University of Oxford and a BA from McGill University.
10. Bibliography
Ball, Linden J., and Bo T. Christensen. ‘Chapter 2 - How Sticky Notes Support Cognitive and Socio-Cognitive Processes in the Generation and Exploration of Creative Ideas’. In Sticky Creativity, edited by Bo T. Christensen, Kim Halskov, and Clemens N. Klokmose, 19–51. Academic Press, 2020.
Bolgar, Catherine. ‘Microsoft’s Majorana 1 Chip Carves New Path for Quantum Computing’. Microsoft (blog), 19 February 2025.
Bratton, Benjamin H. The Stack: On Software and Sovereignty. Software Studies. Cambridge, Mass.: MIT Press, 2015.
Braun, Virginia, and Victoria Clarke. ‘Toward Good Practice in Thematic Analysis: Avoiding Common Problems and Be(Com)Ing a Knowing Researcher’. International Journal of Transgender Health 24, no. 1 (25 January 2023): 1–6.
Castelvecchi, Davide. ‘The AI–Quantum Computing Mash-up: Will It Revolutionize Science?’ Nature (blog), 2 January 2024.
Chen, Jing. ‘Risk Communication in Cyberspace: A Brief Review of the Information-Processing and Mental Models Approaches’. Cyberpsychology 36 (1 December 2020): 135–40.
Department for Science, Innovation and Technology. ‘National Quantum Strategy’. London, UK: UK Government, March 2023.
Dwyer, Andrew C, Lizzie Coles-Kemp, Clara Crivellaro, and Claude PR Heath. ‘Friend or Foe? Navigating and Re-Configuring “Snipers’ Alley”’. Yokohama, Japan, 2025.
Emerson, Robert Wall. ‘Convenience Sampling, Random Sampling, and Snowball Sampling: How Does Sampling Affect the Validity of Research?’ Journal of Visual Impairment & Blindness 109, no. 2 (1 March 2015): 164–68.
Engward, Hilary, Sally Goldspink, Maria Iancu, Thomas Kersey, and Abigail Wood. ‘Togetherness in Separation: Practical Considerations for Doing Remote Qualitative Interviews Ethically’. International Journal of Qualitative Methods 21 (1 April 2022): 16094069211073212.
European Commission. ‘ANNEX to the Commission Recommendation on Critical Technology Areas for the EU’s Economic Security for Further Risk Assessment with Member States’. Strasbourg, France: European Union, 2023.
Foley, John. ‘DeepSeek Changes Rules of AI’s Great Game’. Financial Times, 27 January 2025, sec. Opinion Lex.
Foreign, Commonwealth & Development Office. ‘The Pall Mall Process Declaration: Tackling the Proliferation and Irresponsible Use of Commercial Cyber Intrusion Capabilities’. UK Government, 6 February 2024.
Google Threat Intelligence Group. ‘Adversarial Misuse of Generative AI’. Google, 2025.
Government Office for Science. ‘A Brief Guide to Futures Thinking and Foresight’, 2022.
Nimmo, Ben, and Michael Flossman. ‘Influence and Cyber Operations: An Update’. OpenAI, October 2024.
Nowell, Lorelli S., Jill M. Norris, Deborah E. White, and Nancy J. Moules. ‘Thematic Analysis: Striving to Meet the Trustworthiness Criteria’. International Journal of Qualitative Methods 16, no. 1 (1 December 2017): 1609406917733847.
Reding, D. F., and J Eaton. ‘Science & Technology Trends 2020-2040: Exploring the S&T Edge’. Brussels, Belgium: NATO Science & Technology Organization, March 2020.
11. Appendix
11.1 Appendix A: participant background
Backgrounds of participants are presented below. Most participants were based, or have significant working experience, in the UK.
Participant | Field | Type of engagement |
---|---|---|
1 | Defence | Interview |
2 | Computer Science | Interview |
3 | Computer Science | Interview |
4 | Policy | Interview |
5 | Policy | Interview |
6 | Computer Science | Interview |
7 | Policy | Interview |
8 | Policy | Interview |
9 | Computer Science | Interview |
10 | Defence | Interview |
11 | Computer Science | Interview |
12 | Policy | Interview |
13 | International Relations | Interview |
14 | Computer Science | Interview |
15 | Humanities | Interview |
16 | Policy | Interview |
17 | International Relations | Workshop |
18 | Defence | Workshop |
19 | Defence | Workshop |
20 | Policy | Workshop |
11.2 Appendix B: thematic analysis
Below is the full range of codes from the analysis of the research. This produced four ‘Level 1’ themes (Challenges in addressing convergence through policy; Contextual aspects of convergence; Methodologies for thinking about convergence; and Thinking about the future). Each code is composed of four parts, where each preceding number is the parent of the succeeding number. For example, ‘1.0.0.0’ is a top-level 1 code and ‘1.2.0.0’ is a level 2 code beneath it (‘Timeframes and scope’ sits under the ‘Challenges in addressing convergence through policy’ code).
Level | Name | Description | References |
---|---|---|---|
1.0.0.0 | Challenges in addressing convergence through policy | There are pressures and organisational barriers that structure how the public sector can address convergence as a concept. Participants outline their perceptions of bureaucratic processes, ideological fragmentation, and division of labour within the public sector. | 30 |
1.1.0.0 | Segregation of bureaucratic processes and knowledge | To think about convergence, policy would need to work across 'silos' of knowledge and bring together people from different task forces. | 13 |
1.1.1.0 | Difficulties in determining the ownership of responsibility | With task forces and departments siloed from each other, participants reflect on the challenge of understanding whose job it is to think about certain risks of convergence. | 3 |
1.1.2.0 | Silos as counter-productive | Participants discuss how strict division of responsibility can be a barrier to thinking about convergence from a policy standpoint. | 7 |
1.1.3.0 | Translation and comprehension gaps between technology types | Since different areas can be very technical, it can be difficult for different knowledge areas to communicate with one another. | 1 |
1.1.4.0 | Variable attitudes towards technology | Different areas of the government may have drastically different stances on technology and security. | 2 |
1.2.0.0 | Timeframes and scope | Determining how different government departments and task forces think about timeframes and scope is a challenge, since some departments may have more short-term vision and narrow scope and others may be more future-oriented with broader scope, and vice versa. | 17 |
1.2.1.0 | Recognising and reflecting on policy hype | Participants discuss the prevalence of certain kinds of 'hypes' within government and policy. They discuss the possibility that convergence may or may not fall into a form of 'policy hype'. | 5 |
1.2.2.0 | Security as reactive | Some participants described themselves and other teams as having 'short-term vision', putting out current fires first rather than future fires. | 6 |
1.2.3.0 | Slow institutional decisions | The pace at which government can move to enact policy can create a sense of having to play 'catch up'. | 4 |
1.2.4.0 | Task forces have narrow scopes | Participants describe how they adhere to narrow scopes for their work within different task forces, meaning that generalised interdisciplinary knowledge becomes challenging to obtain. | 2 |
2.0.0.0 | Contextual aspects of convergence | Participants framed convergence within broader contextual trends rather than seeing it as purely technology-driven. | 94 |
2.1.0.0 | Economic pressures | Dynamics surrounding the market drive what types of convergences may come to be in the future. | 26 |
2.1.1.0 | Commercial value of data and profit motives | The types of convergences that will emerge in the future will relate to a) profit motives, b) the need to commodify and sell data, and c) what types of innovations are demanded by emerging markets and the expectations of the consumer (e.g., biohacking). | 9 |
2.1.2.0 | Companies leading the conversation | Narratives surrounding convergence are primarily driven by companies, through both their PR and their key personalities (e.g., CEOs). | 4 |
2.1.3.0 | Company reputation and risk profile | How much risk a company is willing to take regarding their innovation profile will relate to how much the public trusts them. The company’s track record in being able to lose and regain trust and their confidence in this pattern will determine what they do going forward. | 1 |
2.1.4.0 | Free market competition | The need to compete in the free market will drive companies to integrate new functionality into their products, even if their security does not stand up well to these features (e.g., AI). | 4 |
2.1.5.0 | Industry hype | Industry needs to keep consumers excited by manufacturing hype, and therefore convergences may be presented as novel and cutting-edge. | 5 |
2.1.6.0 | Organisational size impacting cyber security | The size of an organisation (start-up, SME, or large corporation) will determine how much security they can afford to do. Small organisations may be developing new and cutting-edge products but without capacity to accommodate security concerns associated with 'paired' technologies they are implementing (e.g., a biotechnology startup using Artificial Intelligence (AI)). | 3 |
2.2.0.0 | International context of convergence | Geo-political factors play a prominent role in what types of convergences may come to be and where they may come to be. | 31 |
2.2.1.0 | Different countries have different priorities | The types of technologies that different nation states prioritise vary, as do their strategies for trying to compete on these fronts. | 4 |
2.2.2.0 | Trickling effect from big to small states | While big states tend to dominate this conversation, technologies do eventually get implemented in smaller states, sometimes in very different contexts from where they were developed. | 3 |
2.2.3.0 | Eastern vs. Western 'Technospheres' | When discussing convergence, it is important to recognise that innovation is not EU-centric per se, and that distinct yet interconnected competitive technospheres exist internationally. | 13 |
2.2.4.0 | International competition as an 'arms race' | State competition is described as a kind of arms race. | 2 |
2.2.5.0 | Recognising what is considered Critical National Infrastructure (CNI) | Thinking broadly about what is considered critical national infrastructure (CNI) from the lens of a nation state adversary is important in prioritising which convergences are important. | 1 |
2.2.6.0 | Unstable allies and networks of trust | Unstable international allies and changing allegiances have had an impact on the security of international supply chains and networks of trust. | 6 |
2.2.7.0 | Use of converged technology for authoritarianism | Converged technologies are thought to be useful for centralising power. | 1 |
2.3.0.0 | Tensions structuring convergence | There are several different conceptual ‘spheres’ that sometimes clash, making the project of convergence hard to understand. | 16 |
2.3.1.0 | Difficulties in demarcating cyber and information space | 'Cyber' is narrowly defined in certain institutional contexts, e.g., NATO. This means that other types of convergences that affect the broader 'information space' can be left out of the remit of bureaucratic efforts focussed on 'cyber'. | 4 |
2.3.2.0 | Innovation and cyber security in contradiction | From a national security perspective, cyber security must always try to play catch-up with innovation and often adheres to the precautionary principle, structuring a tension between innovation and national security. | 3 |
2.3.3.0 | Separate spheres of policy and industry | The people who are best suited to understanding the security implications of a given technology are not in the policy space, but rather industry. | 2 |
2.3.4.0 | Specialised vs. general application spaces | There are different 'scales' of application space that affect convergence, but these scales are not discretely defined and are often in tension with each other. There are general use technologies (e.g., Artificial Intelligence (AI) broadly defined) and more specialised technologies (e.g., spyware). | 5 |
2.3.5.0 | The (in)compatibility of standardisation efforts | Technology standardisation efforts enable convergence of cyber security efforts, but this can take time with differing standards. | 2 |
2.4.0.0 | Everyday convergence | The impacts of convergence can be felt in different interacting 'spheres'. One oft-neglected sphere worthy of separate consideration is the everyday, both on the micro and meso level, that is, convergence can be felt emotionally but also through knock-on social issues. | 21 |
2.4.1.0 | Emotional responses to potential future convergence | Participants describe some of the emotional impacts that potential future technologies have on them, mainly showing fear, stress, and scepticism. | 5 |
2.4.1.1 | Cyber security as scepticism | Participants discuss the tendency for cyber security experts to default to an attitude of scepticism when it comes to the power of new innovations (e.g., being highly critical of AI hype). Following this logic, participants were reticent to make big claims about emerging technologies and tended more to empirical descriptions of existing technologies. | 1 |
2.4.1.2 | Fears of biotechnological innovation | Participants express fear of potential innovations surrounding technology and biology, drawing on examples of implantable technologies and their potential for fatal security issues. | 3 |
2.4.1.3 | Stress | Conversations surrounding potential technological advancements can cause stress and uncertainty about the future. | 1 |
2.4.2.0 | Social harms of convergence | Everyday social issues (e.g. disinformation, phishing) are amplified and changed by the emergence and convergence of technologies. | 16 |
2.4.2.1 | Disinformation and synthetic media | Reference to the proliferation of and advancements in disinformation and synthetic media as the main impacts of convergence in the next 5-10 years. | 1 |
2.4.2.2 | Issues of personal privacy | Some harms that will emerge from converged technologies do not surround national security or organisational security but rather involve violations of personal privacy, e.g. risks posed by wearable tech. | 5 |
2.4.2.3 | Online harms and safety | Emerging technologies are thought to amplify online harms and undermine digital safety for everyday users, through social media but also through the threat of phishing. | 3 |
2.4.2.4 | The 'fashion' of a given attack surface in the everyday | If a given piece of technology is 'in vogue' in terms of the everyday user, then this can translate to increased malicious attention and perhaps escalation to a more national security concern (e.g., if CEOs start wearing Apple AR). | 2 |
2.4.2.5 | Widespread 'social decline' | Participants fear that potential futures could involve 'social decline', that is, cognitively but also in terms of democracy. | 3 |
3.0.0.0 | Methodologies for thinking about convergence | Convergence was difficult to define and many participants spoke about the topic through individual technology types or through technology groups. | 99 |
3.1.0.0 | Aspects of individual technology types | Participants discussed the potential for future convergences through individual technology areas. | 57 |
3.1.1.0 | AI | Artificial Intelligence (AI) was the most discussed technology area. | 13 |
3.1.1.1 | AGI | The time frame for Artificial General Intelligence (AGI) was debated. | 1 |
3.1.1.2 | AI as a threat to national security | AI was commonly thought to be a potential threat to national security, mainly as it lowers the barrier to entry for attackers but also aids their organisations. | 9 |
3.1.1.3 | Benefits to defence outweigh offence | Some participants thought that the benefits that AI brings to defence outweigh the risks that it may bring to national security. | 2 |
3.1.2.0 | AR and VR | The possibility of widespread Augmented Reality (AR) and Virtual Reality (VR) was discussed. | 2 |
3.1.3.0 | Automated scientific discovery | Automated scientific discovery was thought to be an emerging area. | 1 |
3.1.4.0 | Autonomous vehicles | The possibility of autonomous vehicles was discussed through several risks to security. | 4 |
3.1.5.0 | Autonomous weaponry | How autonomous weaponry may become more prevalent in the future was discussed. | 1 |
3.1.6.0 | Biotech | Biotechnology was an oft-cited area of technology to monitor in the future. | 12 |
3.1.7.0 | Cloud infrastructure | The prevalence of cloud infrastructure and the types of risks that this poses was discussed. | 1 |
3.1.8.0 | Future generation networks | 5G and 6G were discussed. | 1 |
3.1.9.0 | Novel Interfaces | Novel kinds of interfaces and their possibilities were discussed. | 4 |
3.1.10.0 | IoT | The future of the Internet of Things (IoT) was mentioned. | 1 |
3.1.11.0 | New propulsion technologies | New kinds of propulsion technologies may be considered when discussing convergence. | 1 |
3.1.12.0 | Quantum | The possibility of quantum computing and other quantum technologies was often discussed, with differing views on whether quantum was a realistic horizon. | 4 |
3.1.13.0 | Renewables | The kinds of technologies enabled by renewable energy were discussed. | 1 |
3.1.14.0 | Robotics | The security risks that robotics brings were mentioned but not elaborated upon. | 2 |
3.1.15.0 | Social media stack | Risks that emerge through the social media stack were outlined, mainly to do with online harms and safety. | 3 |
3.1.16.0 | Space infrastructure | Emerging space technologies were potential spaces of convergence. | 3 |
3.1.17.0 | Synthetic media | Synthetic media (e.g., deepfakes) were considered a threat to social security. | 2 |
3.1.18.0 | Wearables | Wearable technologies were discussed as a potential space for convergence to occur within. | 1 |
3.2.0.0 | Assessing the nature of threats | Part of analysing the potential of future convergences and their impacts on cyber security involves assessing threats and their fundamental nature. | 13 |
3.2.1.0 | 1+1=3 | Whether or not a given convergence produces a new and unforeseen security risk was thought of as '1+1=3', rather than '1+1=2' (the conventional layering of new technologies). | 1 |
3.2.2.0 | Lack of control over threats | Threats emerge organically and outside the control of one policymaker or country. | 4 |
3.2.3.0 | The boring vs. the spectacular | There are different types of threats: some are the mundane threats that we are used to seeing in cyber security, and others are the 'killer robots' type of threats. | 1 |
3.2.4.0 | Understanding the adversary | How the adversary acts and whether they are using converged technologies themselves is central to crafting a realistic and reasonable response. | 7 |
3.3.0.0 | Institutional frameworks | Different institutions and bodies have different formalised ways of thinking about emerging and disruptive technologies. These often involve schemas and lists. | 12 |
3.3.1.0 | EU critical domains | EU critical domains was cited as an institutional way of thinking about which technologies are central to convergence. | 1 |
3.3.2.0 | Futures work | Participants reflect on pre-defined 'Futures' work in government as a way of thinking about the future and informing forecasting strategy. | 6 |
3.3.3.0 | NATO Roadmap for Disruptive Technologies | The NATO Roadmap for Disruptive Technologies is cited to think about convergence and the types of technology groupings involved. | 4 |
3.3.4.0 | UK Quantum Strategy | The UK has devised a strategy when it comes to quantum technology, which informs how participants think about it. | 1 |
3.4.0.0 | Pairing up and then mapping the nexus | One way that participants thought about convergence was as 'pairs': putting two technologies together and then thinking about what may be possible through this nexus. | 17 |
3.4.1.0 | AI and 5G | Artificial Intelligence (AI) within 5G networks was thought to provide new opportunities for attackers. | 1 |
3.4.2.0 | AI and biotech | The combination of Artificial Intelligence (AI) and biotechnology was identified as a very important pairing in terms of security risks. | 2 |
3.4.3.0 | AI and OT | The convergence of Artificial Intelligence (AI) and Operational Technology (OT) was thought to give an advantage to defenders. | 1 |
3.4.4.0 | AI and quantum | This was the most discussed pair and considered to mutually enhance one another. | 6 |
3.4.5.0 | AI and weaponry | The use of Artificial Intelligence (AI) in weaponry design outlined as a potentially dangerous area of convergence. | 1 |
3.4.6.0 | Big data and blockchain | The convergence of big data with blockchain was mentioned but not elaborated upon. | 1 |
3.4.7.0 | Biotech and military | Discussion of the opportunities that biotechnology presents to the military in terms of implantable and health-tracking technologies. | 1 |
3.4.8.0 | Novel materials and AI | Any kind of novel materials and hardware is thought likely to involve Artificial Intelligence (AI) in the future. | 2 |
3.4.9.0 | Quantum and facial recognition | The possibility in the future of quantum computing being used for facial recognition was discussed. | 1 |
4.0.0.0 | Thinking about the future | Participants reflect on risks and opportunities of potential convergences in terms of cyber security but also outline several predictions about the temporality of these dynamics. | 54 |
4.1.0.0 | Enhancing factors | Convergence was thought to be enhanced or accelerated by several factors. | 6 |
4.1.1.0 | Changes in manufacturing materials | Advances to the types of materials being used to manufacture electronics can enhance the capabilities of potential convergences. | 1 |
4.1.2.0 | Novel ways of storing data and advances in computing speed | Broader innovations and improvements in computing speed will increase the rate at which convergence occurs. | 4 |
4.2.0.0 | Opportunities for emerging technology | Participants outline several opportunities that emerging technologies create when it comes to security, namely defensive and offensive cyber security for organisations and national security. | 19 |
4.2.1.0 | AI for advanced threat detection and automation | Artificial Intelligence (AI) may be used to improve advanced threat detection and to automate several aspects of national security defence. | 4 |
4.2.2.0 | AI for offensive cyber | Artificial Intelligence (AI) may be used for offensive cyber purposes. | 1 |
4.2.3.0 | AI to automate defensive systems for organisations | Artificial Intelligence (AI) may be used as a first line of defence in terms of intrusion detection, for example. | 9 |
4.2.4.0 | Coding of applications | Artificial Intelligence (AI) provides companies an opportunity to code applications more effectively and easily than ever before. | 5 |
4.3.0.0 | Risks created by convergence | There are several challenges that converged technologies would pose in the future, both organisationally and in terms of how cyber security is currently done. | 8 |
4.3.1.0 | Advances for terrorism | Convergence poses many new opportunities for terrorism. | 1 |
4.3.2.0 | Changing the nature of work for employees | More complex systems can lead to increasing burdens on employees if something in the system goes wrong (e.g. an autonomous system needs to be inspected), exacerbating issues surrounding skill, competency, and lack of transparency in this area. Some participants argue that human intuition is superior to that of machines, and that this may be lost through automation. | 3 |
4.3.3.0 | Technical tasks made harder by convergence | Certain technical tasks (e.g. deep packet inspection) will be made more challenging if the complexity of the system increases or automation becomes more prevalent. | 3 |
4.4.0.0 | Temporal predictions of innovation | Participants discuss the temporality of emerging technologies, that is, whether they will be revolutionary or evolutionary. The ability to horizon scan is also analysed through the same lens. | 20 |
4.4.1.0 | Incremental vs. revolutionary change | Whether a technology constitutes revolutionary change was debated, with Artificial Intelligence (AI) discussed through this lens. | 2 |
4.4.2.0 | Learning from the past | When thinking about convergence, it is easier and more productive to think about past events. The emergence of Artificial Intelligence (AI) as a dominant social force is a common example of how participants reflect on possibilities for the future. | 5 |
4.4.3.0 | Normalisation of emerging technologies | It was predicted that several technologies (mainly Artificial Intelligence (AI)) would become ingrained quickly and normalised over the coming years. | 5 |
4.4.4.0 | The project of horizon scanning | Difficulties in horizon scanning are discussed in relation to emerging technologies and convergence. 5 years was thought to be foreseeable, but any time past this was thought to be too far off to be meaningfully predicted. | 2 |
4.4.5.0 | Unfettered acceleration | Technological advancements are predicted to continue to accelerate quickly, given the trajectory of the past few years. | 5 |
- Emerson, ‘Convenience Sampling, Random Sampling, and Snowball Sampling: How Does Sampling Affect the Validity of Research?’
- Ball and Christensen, ‘Chapter 2 - How Sticky Notes Support Cognitive and Socio-Cognitive Processes in the Generation and Exploration of Creative Ideas’.
- Braun and Clarke, ‘Toward Good Practice in Thematic Analysis: Avoiding Common Problems and Be(Com)Ing a Knowing Researcher’.
- Engward et al., ‘Togetherness in Separation: Practical Considerations for Doing Remote Qualitative Interviews Ethically’.
- Nowell et al., ‘Thematic Analysis: Striving to Meet the Trustworthiness Criteria’.
- Chen, ‘Risk Communication in Cyberspace: A Brief Review of the Information-Processing and Mental Models Approaches’.
- For example, see Government Office for Science, ‘A Brief Guide to Futures Thinking and Foresight’, which provides examples of how to consider futures-based thinking.
- Foley, ‘DeepSeek Changes Rules of AI’s Great Game’.
- Bolgar, ‘Microsoft’s Majorana 1 Chip Carves New Path for Quantum Computing’.
- Reding and Eaton, ‘Science & Technology Trends 2020-2040: Exploring the S&T Edge’.
- European Commission, ‘ANNEX to the Commission Recommendation on Critical Technology Areas for the EU’s Economic Security for Further Risk Assessment with Member States’.
- Department for Science, Innovation and Technology, ‘National Quantum Strategy’.
- Castelvecchi, ‘The AI–Quantum Computing Mash-up: Will It Revolutionize Science?’
- Bratton, The Stack: On Software and Sovereignty.
- Foreign, Commonwealth & Development Office, ‘The Pall Mall Process Declaration: Tackling the Proliferation and Irresponsible Use of Commercial Cyber Intrusion Capabilities’.
- Dwyer et al., ‘Friend or Foe? Navigating and Re-Configuring “Snipers’ Alley”’.
- Google Threat Intelligence Group, ‘Adversarial Misuse of Generative AI’.
- Nimmo and Flossman, ‘Influence and Cyber Operations: An Update’.