Research and analysis

Commercial offensive cyber capabilities: red team subsector focus

Published 8 August 2025

1. Introduction

In December 2024, Prism Infosec were appointed by the Department for Science, Innovation and Technology (DSIT) to conduct research into how the commercial offensive cyber sector is integrating emerging technologies into its commercial offerings, and the implications of this integration.

The sector comprises entities which deliver legal and ethical security testing (commercial red teams); developers who produce the tools and capabilities for the industry; groups who conduct offensive cyber operations on behalf of third parties as a service, usually without their targets’ consent; and groups who are contracted by states, entities or individuals to conduct operations on their behalf. Identifying relevant individuals from each of these groups, and obtaining their contact details, was not possible within the available timescales.

As a result, the decision was taken to focus the study on the most readily available segment of the sector: the commercial red teams, who simulate threat actor attacks on clients using similar tools, tactics, methods and procedures. This report therefore reflects how this segment of the sector is integrating emerging technologies and adapting its business models to them.

Prism Infosec were tasked with identifying and approaching suitable entities to achieve this, with a goal of interviewing between 25 and 30 companies and concluding reporting by 14 March 2025. In total, 294 entities were approached, of which 18 were willing and available to be interviewed within the available period. This paper documents the approach taken to conduct this research and the results obtained.

2. Executive summary

Whilst this study did not achieve the intended number or diversity of entities for the desired coverage, it was still possible to obtain valuable insights. The study identified a range of opinions, attitudes, predictions and insights into how the ‘red team’ element of the commercial offensive cyber sector, and the clients of interviewees, are adapting to and integrating recent and emerging technologies into their security offerings.

This particular subsector of the commercial offensive cyber industry seeks to emulate what other threat actor groups are doing. They invest heavily in research and development, making use of threat intelligence to ensure that their tools and methods accurately reflect, as much as possible within commercial costs, the capabilities and methodologies of other actors in the sector. This is to help their clients defend against those other actors and threats. Therefore, whilst insights from the other groups are lacking, there will likely be strong parallels for the integration of new technologies into the business of these other segments of the sector.

One of the more surprising results was the lack of discussion around technologies such as blockchain or cryptocurrencies. Whilst there was significant discussion around the adoption of, use of and hopes for AI in the industry, interviewees were keen to share insights not just into how their businesses were adapting but also into the impact these technologies are having on recruitment, workforce attitudes and planned future expenditure. Regulation’s impact on the delivery of services by this element of the sector also featured heavily in discussions, with some focus on how regulations will need to adapt to these emerging technologies.

Whilst AI has generated the most interest and has by far the most investment and expectation for future innovation, recent technology adoption and migration into cloud-based architecture has had a larger impact on the services being offered by commercial red teams. It has changed traditional infrastructure and forced the development of new tooling and practices, as the sector has adapted to client organisations’ migration into the cloud following the Covid-19 pandemic, to advancements in detection and response capabilities, and to changes in real-world threat actor behaviours. Examples include the increased use of ransomware attacks (such as the 2017 WannaCry attack), and attacks on digital supply chains by suspected nation states during espionage operations and military conflicts (such as the 2020 SolarWinds breach and the 2017 NotPetya attack).

Interestingly, topics such as quantum computing were still considered too abstract and only viable in laboratory settings, and therefore the impact on the industry has yet to be fully, or even partially, realised. Instead, more effort was being focused on exploring environments previously considered too risky to test, such as those containing operational technology or automated vehicles, which include land, air and sea assets alongside unmanned drones.

Another area of interest was the lack of capability for alternative operating system environments. It was felt that investment into developing offensive cyber tools and capabilities for MacOS, Linux, Unix, Android, iOS, etc. had lagged significantly behind Microsoft Windows estates, in part due to the prevalence of that operating system in wider society. As a result, the lack of published research and tooling for these platforms was seen as hampering the use of technologies like AI to help develop new capabilities.

Overwhelmingly, our interviews demonstrated that the sector remains deeply sceptical of the promises of AI, considering many of its capabilities overstated and overused in products, which creates a confused environment as to its true potential. It was perceived that the most common use of AI by threat actors at this time was delivering more sophisticated social engineering attacks. Aside from the ethical issues of such use, interviewees highlighted data privacy risks, large costs, and the security of public models as factors hampering widescale adoption of the technology in their current offerings.

There was optimism that in time these factors would be addressed by more accessible models which can be hosted and tuned privately by cyber security firms, and then used for a variety of commercial offerings from attack surface monitoring through to vulnerability research and prioritisation. Until the technology reaches this level of maturity, however, the red team element of the sector will continue to rely on manual, specialised human effort for the delivery of commercial offensive cyber services.

3. Requirements

DSIT’s goal with this research was to develop a comprehensive understanding of the commercial offensive cyber sector by addressing the following key areas:

Market Dynamics:

  • Identifying the key players in the sector, the size of the market, and its overall composition.

  • Understanding whether organisations are predominantly SMEs, UK-based entities, or large international companies with UK operations.

  • Determining what proportion of their activities is dedicated to offensive cyber capabilities versus other sectors.

Adoption of Critical and Emerging Technologies:

  • Assessing how commercial cyber intrusion companies are incorporating technologies such as Artificial Intelligence (AI) and quantum computing.

  • Evaluating their awareness of and response to sectoral trends.

Investment and Acquisition:

  • Analysing levels of investment into critical and emerging technologies.

  • Investigating whether companies are acquiring technology-focused entities to enhance their capabilities.

Recruitment of Expertise:

  • Understanding how organisations are recruiting talent with expertise in critical and emerging technologies.

Focus Areas in AI:

  • Examining whether AI efforts are geared towards improving intrusion capabilities, data analysis, or both.

  • Determining the extent to which companies use existing AI models or develop bespoke, narrow, task-specific AI solutions.

Industry Convergence:

  • Exploring how the integration of emerging technologies is blurring boundaries between the commercial cyber intrusion market and other sectors.

To achieve this, DSIT requested that Prism Infosec attempt to identify and contact entities for interview who belonged to at least one of the following groups:

  • Hacking-as-a-service companies, which are companies providing the capability and often the supporting infrastructure for computer system penetration as a service. The customers usually identify requirements, such as target selection, and consume the resulting information. This does not include consensual access, such as security testing.

  • Companies developing and selling cyber intrusion products, which are companies who develop tooling and exploits for use by legal and illegal entities who conduct offensive cyber operations.

  • Hackers-for-hire, which are unaffiliated individuals or groups of actors that are hired by States, entities or even individuals to conduct computer system penetration to meet customer requirements. They use their own tools and techniques and are aware of, and in some cases may select, who they are targeting.

  • Red team organisations / vulnerability research teams, which are companies providing the capability and often the supporting infrastructure for computer system penetration; operating only in environments in which the owners of such systems have requested and authorised their services, and operating under ethical and responsible disclosure standards.

These entities were considered to be sub-groups of the sector identified under the Pall Mall Process. The Pall Mall Process was the outcome of a joint UK and France initiative which brought together international partners and stakeholders for an ongoing and globally inclusive dialogue to address the proliferation and irresponsible use of commercial cyber intrusion tools and services at a UK conference in February 2024. It recognised that, across the breadth of this market, many of the tools and services could be used for legitimate purposes, but resolved that they should not be developed or used in ways that threaten the stability of cyberspace or human rights and fundamental freedoms, or in a manner inconsistent with applicable international law, including international humanitarian law and international human rights law.

Furthermore, it resolved that they should not be used without appropriate safeguards and oversight in place. The partners and stakeholders resolved to explore the parameters of both legitimate and responsible use, by State, civil society, legitimate cyber security, and industry actors alike.

During the course of the study, it quickly became apparent that identifying and contacting members of the Hackers-for-hire and Hacking-as-a-service elements of the sector for inclusion in interviews was not possible within the available timeframe. A decision was made to focus the study instead on the more readily accessible elements of the sector, the red team organisations.

4. Limitations

This study, awarded in mid-December 2024 for completion in mid-March 2025, requested that between 25 and 30 organisations be interviewed. Every attempt was made to meet this requirement. The challenge of identifying, recruiting and interviewing suitable organisations, and then analysing and reporting on the findings, was compressed into this timescale and was a significant factor in not achieving this goal. Had additional time been available, Prism Infosec were confident that additional companies could have been recruited and interviewed to reach these figures.

Within the available period Prism Infosec independently identified and approached twenty-six entities to interview across the pilot and main phases. Professional membership with CREST was used to reach an additional 252 global entities, of which only two responded indicating they would be willing to participate. Law enforcement contacts were also approached in an attempt to reach potential additional entities meeting DSIT’s original requirements which Prism Infosec had no other way of contacting. A further nine entities were identified by DSIT, and contact details were provided for one of those, who agreed to interview. In total, 294 entities were approached and asked to participate in the study.

Twenty-two entities agreed to be interviewed, one of which withdrew after agreeing a date. Of the entities which did not agree, three declined outright and the remainder did not respond to the request for interview.

No hacking-as-a-service companies or hackers-for-hire were successfully identified and approached for interview during the study. As such, trends within this area of the sector were not directly observable. This means this study was not able to gain a direct understanding of how such entities are recruiting experts and embedding emerging technologies into their offerings.

Whilst not discussed as part of this paper, Prism Infosec researched and identified significant third-party research papers which discussed these entities’ use of emerging technologies such as AI. These were shared with DSIT for internal consumption and used to help inform approaches for this study.

Given the above limitations, readers of this study should note that this research will not provide a fully representative view of the sector due to the low numbers of participants and the missing segments of the market. Instead, this study will provide readers with insights into how the interviewed organisations within the specific subsector view the main topics of the study.

5. Methodology

To deliver this research, Prism Infosec used the following methodology to ensure a consistent approach was taken to the study.

5.1 Semi-structured interviews

Prism Infosec developed a bank of questions which would be used as the basis of a semi-structured interview intended to last no more than an hour. These were grouped to address the key areas identified by DSIT’s requirements. Questions were not shared with interviewees in advance unless expressly requested, which occurred in only a minority of cases; most interviewees agreed to be interviewed with only the broad understanding of the topic that was shared when the identified companies were approached.

The questions were designed to elicit relevant responses but, in most cases, did not direct interviewees down specific avenues of interest or towards specific technologies. This was done to avoid introducing bias, to help identify whether recurring themes emerged, and to permit interviewees to organically expand on or focus on areas rather than being forced into specific areas of inquiry. It should be acknowledged, however, that specific technologies were called out as examples to add context, which could have influenced answers to some questions. A copy of the questions asked of interviewees is available in Annex A of this document.

5.2 Pilot phase

A pilot phase was conducted in January 2025 in which five entities from the sector were interviewed with a proposed set of questions. These interviews were concluded by 17 January, after which the questions used, and improvements which could be made, were reviewed with DSIT. No analysis of the pilot interview responses was conducted until the entire study was complete, to avoid introducing bias into future interviews.

5.3 Main phase

Due to interviewee availability, interviews could not resume until 6 February 2025. Despite approaches having been made to over 290 entities, only eighteen were able to accommodate an interview within the available timeline. These were completed by 1 March 2025, and the remaining time was dedicated to analysis and reporting of the results.

5.4 Interviews

Prism Infosec arranged interviews with the senior leadership of the following commercial offensive cyber industry teams:

Red Team Organisations / Vulnerability research teams:

  • LRQA
  • Starling Bank
  • Sodium Cyber Ltd
  • Experian
  • Cyndicate Labs
  • AmberWolf
  • Resillion
  • QinetiQ
  • PWC
  • NetSpi
  • CovertSwarm
  • Cyberis
  • Accenture
  • JumpSec
  • Grupo Santander
  • Hargreaves Lansdown
  • Red Hunt Labs

Companies developing and selling cyber intrusion products:

  • Balliskit

Attempts were made to approach more companies developing and selling cyber intrusion products; however, they declined to be interviewed. Attempts to identify and contact hacker-for-hire and hacking-as-a-service groups were also unsuccessful. Instead, attempts were made to speak with threat intelligence providers who may have insights into those groups; unfortunately, none of these responded to requests for interview.

Each interview was recorded with the permission of the interviewee, and the same base set of questions was used to provide an initial structure. Interviewees were encouraged to expand on areas as topics and answers arose; where a question was not relevant, or had been answered as part of an expanded answer to another question, it was skipped to maintain the flow of the interview. Each interview was then transcribed. To encourage candid responses, it was agreed beforehand that transcripts would be anonymised for the study. Therefore, the views presented here are an amalgamation and summary of the points raised and do not reflect the views of specific entities who were interviewed.

5.5 Coding

An initial batch of codes was created by Prism Infosec and presented to DSIT. These initial codes were based on the key topic areas provided by DSIT and on Prism Infosec’s own research into which topics would likely be discussed. One member of the team developed the codes whilst a second member conducted the interviews, to ensure that the initial codes were not influenced by knowledge of the interviews.

Once the interviews were transcribed and reviewed, anonymised, and reformatted, the transcripts were subject to coding using the initial codes authored by Prism Infosec. Where interviewee responses were identified as being relevant to a code, they were marked with the code so that they could be grouped for further qualitative trend analysis. New codes were added over time, as interviewees raised new topics which Prism Infosec had not considered prior to the coding phase.
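The coding workflow described above can be sketched as a simple data-processing step: tag transcript excerpts with codes, group them for qualitative review, and count mentions per code. The interview identifiers and excerpts below are invented for illustration; only the code labels follow the scheme defined in this section.

```python
from collections import defaultdict, Counter

# Illustrative coded transcript segments: (interview_id, code, excerpt).
# Excerpts and interview identifiers are hypothetical, not real transcript data.
segments = [
    ("interview-01", "TE-GENAI", "We trialled generative AI for report drafting."),
    ("interview-01", "MT-AUT",   "Automation frees consultants for analysis."),
    ("interview-02", "TE-GENAI", "Clients ask about AI but adoption is limited."),
    ("interview-03", "MT-REG",   "Regulation shapes which services we can offer."),
]

# Group excerpts under each code so they can be reviewed together
# for qualitative trend analysis.
by_code = defaultdict(list)
for interview_id, code, excerpt in segments:
    by_code[code].append((interview_id, excerpt))

# Count segments per code, the kind of tally behind the per-topic
# breakdowns presented in the thematic findings.
mentions = Counter(code for _, code, _ in segments)

for code, count in mentions.most_common():
    print(f"{code}: discussed in {count} segment(s)")
```

New codes raised by interviewees would simply be appended to the scheme and applied in a further coding pass over the transcripts.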

This paper identified and documented the following codes and topics.

Area Code Topic Definition
Market Dynamics & Trend MT-ALT Developing for alternative systems Shift in traditional tooling for additional environments such as Linux, MacOS, mobiles, etc.
  MT-ATT Society Attitudes to cyber How society has adapted to offensive cyber.
  MT-AUT Automation & Efficiencies Use of automation to drive services.
  MT-CHAN Changes in the Sector How the sector has been forced to change and adapt over time.
  MT-COMP Competition vs Collaboration How much entities collaborate or compete on offensive cyber capabilities
  MT-GEO Geopolitical Impact How Wars, Laws, Politics are influencing the sector.
  MT-INF Changes to Infrastructure on-premises vs cloud-based infrastructure
  MT-MOD Shift to Continuous Security Models/Attack Surface Monitoring Use of different service delivery models – moving from one off engagements to continuous delivery.
  MT-OUTS Outsourcing Use of contractors or third parties to deliver commercial offensive cyber services
  MT-REG Regulatory & Compliance Influence Regulation and Compliance’s role in the use of Offensive Cyber in the market
  MT-ROI Return on Investment Getting value out of the service offerings.
  MT-TRAD Traditional Markets How the client market has changed over the last two decades
Emerging Technologies TE-ASM Attack Surface Monitoring/Mapping/Management Emerging technologies used for continues attack surface monitoring to identify weaknesses in organisations for exploitation/changes in security postures.
  TE-BIG Big Data Management Interrogation/utilisation of large data sets.
  TE-BLCK Blockchain/Crypto Use of blockchain backed databases for immutability and/or use of cryptocurrencies in commercial offensive cyber operations
  TE-CLOUD Cloud Adoption & Infrastructure as Code Use of Cloud based environments and capabilities for offensive cyber.
  TE-EDR Defensive Tooling Testing Use of emerging technologies which is driving defensive capabilities, specifically if they are having an impact on offensive tooling.
  TE-GENAI Generative AI Impact Use of GenAI in offensive cyber offerings
  TE-IOT Internet Connected Technology Covers both new products and older Operational Technology systems which are being connected to the internet.
  TE-MIMIC Mimicking Real-World Threat Actors Use of emerging technologies to keep pace and mimic real world threat actors.
  TE-QUAN Quantum Computing Developments in Quantum computing which can drive better encryption
  TE-SE Social Engineering Use of emerging technologies to support social engineering attacks. (Deepfakes, phishing, vishing, smishing)
  TE-VEH Automation in Vehicles Cars, aircraft, boats and their use of emerging technologies which could be attacked.
Investment & Acquisitions INV-ADOP Adoption of technologies for the entire business Investments in emerging technologies which are improving efficiencies in additional business areas.
  INV-CUST Customer Expectations for Emerging Technology How customers are expecting to interact with emerging technologies from offensive cyber offerings.
  INV-EFF Improving efficiency through investment Investments in emerging technologies which are improving efficiencies in commercial offensive cyber capabilities
  INV-RD R&D Investment vs Acquisitions Entity use of Research and Development or Acquisitions to gain access to new capabilities in offensive cyber.
Talent & Recruitment TR-KEYS Key skills wanted from recruitment What skills and traits are being sought after through recruitment
  TR-RET Retention Through Innovation & Culture How offensive cyber organisations retain talent and skills.
  TR-SKILL Training vs Direct Hire Approaches to gaining offensive cyber skills through recruitment or training.
Future Implications & Risks FUT-BAL Balancing Innovation & Security Issues around development of new offensive cyber capabilities and how they may be misused.
  FUT-COM Commoditisation & Market Disruption How the market is expected to adapt to emerging technologies.
  FUT-INNO Undefined Expected Innovation Areas where specific innovation has not been defined but is expected.
  FUT-REG Future Regulation Expectations around future regulation that will impact offensive cyber.
  FUT-SEC Security Risks & Concerns Security risks and concerns of emerging technologies in use for commercial offensive cyber.
  FUT-SERV Service Delivery Benefits Service delivery benefits through the adoption of emerging technologies.
  FUT-VULN Vulnerability Research How emerging technologies are influencing vulnerability research and new tool development.
  FUT-WORK Workforce Evolution How the offensive cyber workforce is adapting to new technologies

5.6 Analysis & reporting

Once the coding was complete, Prism Infosec conducted qualitative analysis of the material under each code. These were then used to identify trends under the topic which were summarised and documented in this paper.

As a result of this process, Prism Infosec were able to identify topics which interviewees regularly discussed, as well as topic areas which Prism Infosec had initially thought would be raised but were not mentioned or were mentioned by few interviewees.

6. Thematic findings

This section details the outcome of the analysis from the coding phase. It documents each of the areas of study, drawing out trends and divergences of opinions and activities. Each section is accompanied by bar graphs which document the breakdown of topics within each subject area discussed by interviewees. Each section also contains some of the anonymised quotes from the interviews that reflect the analysis.

6.1 Market dynamics & trend

Figure 1: Breakdown of amount of discussion on topics under Market Dynamics & Trends

Code Topic
MT-ALT Developing for alternative systems
MT-ATT Society Attitudes to cyber
MT-AUT Automation & Efficiencies
MT-CHAN Changes in the Sector
MT-COMP Competition vs Collaboration
MT-GEO Geopolitical Impact
MT-INF Changes to Infrastructure
MT-MOD Shift to Continuous Security Models/Attack Surface Monitoring
MT-OUTS Outsourcing
MT-REG Regulatory & Compliance Influence
MT-ROI Return on Investment
MT-TRAD Traditional Markets

6.2 Developing for alternative systems

We have seen more and more people talk about Mac-OS in general. There’s more research done [but] nowhere near as much as Windows.

Interviewees were keen to point out that, traditionally, much of the effort in commercial offensive cyber red teaming was focused on Microsoft Windows environments. However, in recent years the clients engaging red team services have developed new business models; these include a new generation of entirely cloud-based banking businesses, such as Monzo and Starling Bank, which eschew traditional infrastructure.

As a result, there is a growing recognition and trend of developing for Linux- and MacOS-heavy technology stacks, with significantly more effort being made by commercial offensive companies in research and tool development for these ecosystems. This was particularly true for teams which delivered internal security testing for dedicated clients. Mobile operating systems such as iOS and Android also fall into this category, though there has been more research into attacking these sorts of devices, again due to their prevalence. Furthermore, this trend is also expanding outside traditional Information Technology (IT) systems to look more closely at legacy Operational Technology (OT) as those systems are modernised and adopted into critical infrastructure.

Despite interviewees’ increased awareness of the effort required for alternative environments, there is recognition that investment and research on Android, iOS, Linux (and the variants of Unix), and MacOS still lag significantly behind the effort placed in developing capabilities for Microsoft operating systems, mostly due to a lack of client demand for testing in these environments. Development for OT is particularly slow due to the real-world risks of damaging those systems through offensive cyber activity. This means that using emerging technologies such as AI to help develop new tooling and capabilities is particularly limited, as the models have very little material and few examples to ingest to help generate new or alternative techniques.

6.3 Clients’ attitudes to cyber security

Society’s dependence on digital technology, and the raft of cyber-attacks that are becoming an increasing nuisance to society at large has made it a matter of public consciousness.

Interviewees were united in their assessment that their clients’ awareness of cybersecurity risks had increased significantly in recent years, with particular focus on rising cybercrime and data breaches. Increasing media coverage of personal information security and cyberattacks, and greater regulator interest had also pushed more of a spotlight onto the security of supply chains given their role in major cyber incidents.

Interviewees felt that this had forced their clients to shift from short-term reactive defence measures towards longer-term cybersecurity planning, with investment prioritised towards sustainability and resilience against evolving threats, and stricter security policies and risk assessments for third-party vendors. More than one interviewee stated that they had been hired to conduct cyber security assessments, including offensive cyber engagements, focused on their client’s third-party vendors, following authorisation to do so.

Interviewees felt that, despite increased awareness amongst their clients, cost and technology remained significant limiting factors. They noted that clients of red team companies recognised threats to their supply chains, but not all had the resources or in-house expertise to enforce expected security standards. Service cost was also one of the reasons interviewees gave for the slow transition from traditional point-in-time assessments towards continuous security monitoring supported by threat intelligence.

Of particular note, generative AI products and their role in offensive and defensive cyber security were raised many times during this topic; however, the overarching opinion from interviewees was that society was overestimating its capabilities and reliability at this stage in its development.

6.4 Automation & efficiencies

I would love to automate so I can focus on the harder problems and the interesting things.

Analysis showed there was a strong push towards automation by interviewees for new tools and services; specifically, interviewees were looking to automation to streamline processes and reduce costs. AI was seen as a major game-changer in the sector towards achieving this, particularly in automating security testing and code reviews. Notably, the industry would like to use automation for repetitive security tasks such as attack surface management, penetration testing, and vulnerability assessment. This is to free up expensive human resource to focus on analysis.

However, while AI is being discussed widely, many interviewees stated that they rarely use AI beyond basic tasks due to a lack of trust in the products, a topic covered in more depth later in this paper. Automated security solutions, such as breach and attack simulations and automated red teaming, were often seen to fail to meet expectations, leading companies to return to traditional consultancy-based solutions; many experts stress that consultants are still required for critical tasks, such as evaluating vulnerabilities and interpreting security findings. Interviewees noted that client organisations which had invested in automation tools designed to replicate elements of the commercial offensive cyber sector had later elected to abandon them due to overhyped promises and poor real-world performance.

6.5 Changes in the sector

I think for me it’s that realization that’s changed from the client side over the last five years that a simple vulnerability scan is not answering the questions that they need. They need answers too, because that’s not how they’re being attacked.

Interviewees were asked to give their perception of how the commercial offensive cyber market was changing. These views are specific to the red team subsector of the market, and therefore may not be representative of the market as a whole.

Amongst interviewees there was an almost equal split between those that felt small, specialised firms were emerging as strong competitors challenging the position of large-scale providers, and those that felt large cybersecurity firms were acquiring smaller firms to expand their capabilities. Whilst not drawn out in the interviews, anecdotal evidence suggests the growth of smaller specialised firms is likely not driven by any specific factor or gap in the market; rather, it is possibly caused by the general churn of senior consultants seeking more independence, increased income, and less bureaucracy than in large organisations, whilst capitalising on a sector undergoing significant growth and demand.

A common theme on this topic was that, whilst the number of boutique firms is increasing, clients of interviewees appear to still show a preference for working with larger, more established security providers; the exact reasons why were not shared by interviewees. As a result, price-driven competition at the lower end of the market is increasing, which impacts service differentiation.

Professionalisation of the industry was seen as increasing. The expansion of available certifications and courses, however, is viewed with some scepticism. This is due to a perceived lack of quality in how these courses teach offensive cyber techniques, and in how they prepare practitioners to deliver offensive cyber operations ethically and legally across global regions which operate under different legal frameworks. This makes finding individuals with comparable skills more difficult in a global market and exacerbates the perceived differences in standards of delivered testing.

As identified in other topic areas, increases in automation are supporting a shift in services being delivered by many red team security testing organisations, driving a move away from traditional testing models. However, use of AI to support this continues to be a concern as interviewees had not seen enough evidence to convince them that these products were currently capable of fully replacing human expertise.

6.6 Competition vs. collaboration

I would say it’s very competitive and very difficult for these people making these decisions to pick for their tenders as they don’t necessarily know what good looks like.

The dominant view was that the market for red team services and capabilities is extremely competitive. Many companies compete aggressively on price, particularly in lower-end services like commodity penetration testing. Eleven of the interviewees specifically mentioned or discussed the use of undisclosed proprietary capabilities and methodologies as a tool for competitive advantage. As a result, clients of interviewees, especially government agencies, often struggled to differentiate between high-quality and low-quality providers, leading interviewees to perceive that their clients were choosing cheaper options over more competent firms.

There are threat intelligence groups in US specifically where top ten companies including Cisco, Fortinet, etcetera, [who] have collaborated and created a group and agreed to share intelligence. But that is something which is missing for smaller companies or small startup or boutique firms. But they do release interesting blogs every now and then, which is interesting to see.

Five interviewees mentioned that informal collaborations on techniques and tool development do occur through industry groups, red teamer communities, and platforms like open-source tool sharing. Some organisations share intelligence and tools with trusted peers, though usually after internal utilisation to maintain competitive advantage. Non-profit groups like CREST promote collaboration, and some major companies (e.g. Cisco, Fortinet) engage in threat intelligence sharing in order to lower the risk presented by threat actors and the commercial offensive cyber market.

6.7 Geopolitical impact

We’ve had a shift in geopolitics. It’s having a big effect … I would say so, Brexit and our relationship with Europe has changed the way we in the UK do work, and also in a way forced us away from Europe. So, there’s more work being done, more collaboration with Latin America and with the Asia Pacific region for example, like Australia, Philippines, Singapore, for example.

Analysis of interviewee responses showed there was a view that geopolitical tensions were influencing cybersecurity strategies, with many of the clients of interviewees reassessing their security postures in response to global conflicts. Interviewees also highlighted a growing client awareness of increased frequency and sophistication of nation-state backed cyberattacks, and increased government investments in cyber defences.

Interviewees repeatedly highlighted that developing countries are investing significant sums in cybersecurity talent and infrastructure to establish their own footholds in the market. Specifically, interviewees focused on the Middle East, where large datacentres are being constructed and large SOC services established. Growth in demand for cybersecurity services, and in cheaper, rapidly upskilled cybersecurity labour, was identified by interviewees as occurring in the Philippines and Taiwan. This increasing demand for talent is also forcing the market to adapt and expand services into these new geographies, which can come with legal risks.

OK, we are not in good terms with XYZ, so you should not be providing services to these countries at all. But they’re your clients, right? They have been your clients for many years. So what do you do? … . It’s a difficult choice to make.

Talent recruitment and investment by interviewees’ firms was tempered by compliance with increasingly complex cross-border data regulations and international compliance regimes. Interviewees held that countries lack a unified approach to cybersecurity laws and enforcement, and this fractured approach leaves significant obstacles to delivering legitimate red team services globally to the same standard. Furthermore, interviewees felt that trade restrictions and sanctions heavily affect cybersecurity product availability, with businesses forced to seek alternative solutions due to global tensions.

Interviewee opinion on the impact of export controls varied, with companies in the sector seeking to help enforce them by going through the process of seeking export licences, or reconsidering for whom they would perform work due to a client’s ideologies or activities. For example, in one interview, the interviewee suggested that companies had considered working with clients based in Hong Kong due to its perceived independence from China, but that this perception had changed, causing companies to reconsider the ethics of collaboration. One interviewee suggested red team companies could seek alternative solutions, such as routing the transfer through a less restricted country or performing the work from a country which does not enforce the same restrictions.

In the longer-term, the impact of geopolitics on cybersecurity was viewed with some uncertainty. Some organisations are unsure how long geopolitical factors will influence cybersecurity policies, whilst others believe geopolitical risks are being overstated in certain regions.

6.8 Changes to infrastructure

Cloud is the obvious one and has been for a while and that’s continuing.

Interviewees expressed the view that both they and their clients had made a massive shift away from traditional on-premises infrastructure over the last few years, particularly in response to the Covid-19 pandemic. As such, traditional offensive red team capabilities were becoming less effective, and research and development of new capabilities is being forced to pivot to cloud environments, leading to new and alternative testing methodologies. Whilst this has limited the impact of traditional red team operations, the prevalent opinion is that such work remains valuable, with automation and continuous testing seen as feasible alternative approaches.

We know full well that … Microsoft has threat actors targeting it. It’s got… nation states targeting it continually and we know it’s had breaches and we know other large organisations have had breaches by nation states, and if there is a very large platform breach at Microsoft, the data of, again pretty much most companies in the UK and the personal data that they handle, would then be at risk. And I don’t really think anyone’s got a plan for that, because if that happens, like, who do you move to?

An additional outcome of this move was that interviewees’ clients were investing more heavily in cloud security. Interviewees also expressed that their clients were becoming more aware of the security risks posed by major cloud providers, specifically the number of entities a breach or vulnerability in a major provider’s service could affect. This has led some client organisations to struggle to define security responsibilities between cloud and Software as a Service (SaaS) providers and their users, and has increased scrutiny of secure cloud configurations and compliance standards. As a result, there is a growing view amongst interviewees, and reportedly amongst their clients, that providers should bear more responsibility for cloud security risks rather than sharing them equally with their customers.

6.9 Outsourcing

Outsourcing was mentioned by a small minority of the interviewees. When it was raised, it was generally in the context of firms who wanted to compete in the red team element of the sector without the relevant skills or resources, and would therefore seek to win red team contracts and then subcontract that work out to boutique specialised firms. The lack of deep discussion of this topic was surprising, given the views of increasing numbers of boutique offerings and limited availability of skills in the sector.

Interviewees also mentioned outsourcing in relation to their clients making more use of managed service providers for security and organisational defence. Whilst not an area of study in this paper, it was pointed out that this model reduced overheads for security solutions, and that external providers could offer better, more scalable solutions more cheaply than in-house security teams. It should be noted that whilst these positives were discussed, the negatives of such a trend were not.

6.10 Shift to continuous security models

I think the market is going much more towards that continuous model. That’s what we’re seeing with clients that are signing up for us.

Multiple interviewees suggested the offensive cyber market, for both providers and their clients, was moving away from point-in-time transactional assessments towards continuous security evaluations. This is being supported by improved automation and a recognition that security issues can evolve rapidly due to changes in infrastructure and vulnerability research.

Long-term partnerships, and therefore regular income for commercial security testers, were highlighted as a specific benefit driving revenue generation in this direction. Whilst this is seen as more beneficial for the defence side of the sector, it also benefits the offence side by allowing testers to build up a picture of evolving infrastructure over time, identify common technologies which would benefit from further offensive cyber research, and, with improved automation, free up human resource for more in-depth analysis.

Whilst this trend has generally been seen in a positive light, there is still some uncertainty as to whether security integration in this manner is genuinely new or just an evolution of threat modelling. Furthermore, implementation strategies for continuous testing are fragmented, with clients of interviewees lacking certainty about how to integrate it effectively.

6.11 Regulatory & compliance influence

I never thought I’d be calling for this, but I do feel like we need to have some form of regulation or control in place.

Discussion of regulation featured prominently in almost every interview. There was an expressed opinion that the UK has some of the strictest cybersecurity regulations, with interviewees citing testing frameworks such as the Bank of England’s CBEST or the UK government’s GBEST scheme. Interviewees stated that the primary driver for legal commercial offensive security testing is regulatory compliance, with organisations engaging such services to satisfy oversight bodies that they can defend against and recover from illicit attacks. This has had the benefit of improving defence capabilities, which in turn forces the offensive side of the industry to invest in further research and development to develop new evasions and capabilities.

The Digital Operational Resilience Act (DORA) is a major regulatory development in Europe, with expectations that regulations will move towards global standardization in the next decade. It was felt that regulatory-driven testing was slowly expanding beyond banks into sectors like telecommunications, national infrastructure, and even automotive cybersecurity, which was seen as a good thing. Furthermore, regulations were viewed as helpful in preventing unethical cybersecurity practices, such as overly aggressive red teaming and personal targeting.

Despite the positive elements raised by interviewees, there were concerns. It was expressed that different countries and industries interpret regulations differently, which leads to inconsistencies in enforcement and implementation. There was also a concern that overly strict regulations can create standardised testing frameworks, pushing cybersecurity services towards commodity pricing and reducing creativity in offensive security. Another opinion was that the high bar set by regulations leads to a shortage of qualified staff, as many professionals prefer to avoid the repetitive certification processes, valid for only 2-3 years, which are required to deliver regulated testing; certification itself, however, was generally seen as a good thing.

6.12 Return on investment

A large, complex red team is going to cost a substantial amount of money, I don’t think the clients that we work with are surprised by that. I think the market kind of understands it because you either pay a very low rate for what is effectively automated vulnerability assessment with a different badge on it, or you actually get a proper team coming at you like an adversary would.

Interviewees shared the view that their clients were seeking a greater return on investment (ROI) from their security budgets, meaning that security leaders were required to justify spending by demonstrating the business value of red team services. Clients of interviewees would frequently raise that security is viewed as a cost centre, not an investment, which, even with growing recognition of its importance, makes competing for investment difficult.

Many clients of interviewees were reporting budget reductions in their security programmes and a requirement to deliver more with fewer resources. While ROI is becoming a priority, it was expressed that many client organisations struggle to quantify the financial impact of security investments, and some companies lack clear metrics to justify security spending.

The move to the cloud by both interviewees and their clients however has been viewed as a cost-efficient alternative to on-premises assets, with organisations looking more widely at cloud-native commercial offerings for improved efficiency and scalability.

This drive for ROI has had a knock-on effect in the industry. Use of AI and increased automation is being investigated as a more cost-efficient alternative to traditional security testing (such as penetration testing) and red team offerings. Interviewee scepticism was widespread, with a number of red team organisations expressing the view that the immaturity of such technologies, and the ongoing requirement for human intervention, will limit cost savings. As a result, in the view of interviewees, there is an increasing divide between organisations employing red team testing services, mostly driven by regulation and compliance, and those unwilling or unable to afford them.

6.13 Traditional markets

During the interviews, a recurring subject was the increased maturity of cyber defences. Interviewees also felt that the investment needed to break into traditional cyber security testing had lowered, as tools that were once innovative had become more widespread. This has had the knock-on effect of oversaturating the security testing market, making those services more standardised and price-driven, resulting in lower margins and increasing difficulty for clients of interviewees in differentiating between levels of service and capability.

The introduction of recent technologies such as AI is viewed as likely to make this situation even more challenging. A likely consequence is the deterioration of the necessary skills that consultants gain early in their careers in the commercial offensive cyber sector.

You’ll have people who don’t have the years behind them to push back when a client says we’re not doing that.

Interviewees stated that traditionally, a red team consultant would gain experience by conducting security testing before specialising into offensive cyber research and delivery; they felt this would become more difficult. With the changes to the traditional testing markets, there was concern over how the red teams would adapt and be able to develop capable and skilled testers and consultants for the more mature markets.

Adding to the concern that consultant skill levels would drop was the view that red team engagements were becoming more challenging due to enhancements in detection and defensive capabilities amongst clients of interviewees, who are benefiting from advancements in security technologies. Without the traditional grounding, consultants would lack the fundamental knowledge and experience needed to fully test an environment and provide reasoned advice based on the context of the client’s ecosystem and security posture.

7. Emerging technologies

Figure 2: Breakdown of amount of discussion on topics under Emerging Technologies

Code Topic
TE-ASM Attack Surface Monitoring/Mapping/Management
TE-BIG Big Data Management
TE-BLCK Blockchain/Crypto
TE-CLOUD Cloud Adoption & Infrastructure as Code
TE-EDR Defensive Tooling Testing
TE-GENAI Generative AI Impact
TE-IOT Internet Connected Technology
TE-MIMIC Mimicking Real-World Threat Actors
TE-QUAN Quantum Computing
TE-SE Social Engineering
TE-VEH Automation in Vehicles

7.1 Attack surface monitoring/mapping/management

Chains of attacks is really useful. Again, as I said, I don’t believe it should be called automated red team. I don’t believe anything like that exists, but definitely attack chains, attack paths, cyber kill chains or vectors which can be sequentially based on what happened in the last one. That’s definitely helpful for companies who are moving from traditional pen test or traditional vulnerability scanning, where it will scan you, give you some payload.

To capitalise more on automation, interviewees expressed that more effort was being made to develop Attack Surface Management (ASM) tooling. This was deemed a benefit both for interviewees and for their clients, incorporating automated discovery, human-led intelligence, and threat intelligence feeds combined with vulnerability reports to refine attack strategies. These platforms are still quite immature in terms of their capability, but interviewees hoped that, by linking in other products such as custom AI models, they will eventually support deeper capabilities for red team operations and provide added benefits to clients of interviewees.

Specifically, client organisations claim to want it for automation of attack path analysis and breach simulations. This permits organisations to develop graph-based models to connect security tests dynamically and link attack chains into physical, social and digital attack vectors. This is then combined with real-time dashboards and intelligence driven reporting, rather than static point in time reports from traditional security testing.
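The graph-based modelling described above can be sketched simply: assets become nodes, feasible techniques become edges, and a path search links individual findings into candidate attack chains. The sketch below is a minimal illustration of the concept only; all asset names and edges are hypothetical and do not come from any interviewee's platform.

```python
from collections import deque

# Hypothetical attack graph: node -> list of (next_node, technique) edges.
attack_graph = {
    "internet": [("mail-gateway", "phishing"), ("vpn", "password-spray")],
    "mail-gateway": [("workstation", "malicious-attachment")],
    "vpn": [("workstation", "stolen-credentials")],
    "workstation": [("file-server", "smb-relay"), ("cloud-idp", "token-theft")],
    "cloud-idp": [("crown-jewels", "role-assumption")],
    "file-server": [],
    "crown-jewels": [],
}

def attack_paths(graph, start, target):
    """Breadth-first search returning every technique chain from start to target."""
    paths = []
    queue = deque([(start, [])])
    while queue:
        node, chain = queue.popleft()
        if node == target:
            paths.append(chain)
            continue
        for nxt, technique in graph.get(node, []):
            # Avoid revisiting a node already on this chain (no cycles).
            if all(nxt != step[0] for step in chain):
                queue.append((nxt, chain + [(nxt, technique)]))
    return paths

for path in attack_paths(attack_graph, "internet", "crown-jewels"):
    print(" -> ".join(f"{node} ({tech})" for node, tech in path))
```

In a real ASM platform the edges would be populated dynamically from discovery scans and threat intelligence feeds rather than hand-written, which is where the convergence challenges described by interviewees arise.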

Development of these platforms and their integration into red teams remains an aspiration for many organisations, due to the complexity of converging and automating multiple toolsets into a single offering. This means there is still significant manual effort, not to mention security concerns over the data should threat actors breach the platform.

7.2 Big data management

Data is massive and gains more and more importance, and therefore the security and use of that data is getting more and more important.

Interviewees felt that the analysis and processing of “big data” was becoming more important, though acknowledged it was more relevant for the defence side of the house in analysing traffic patterns, logging, and alerts to identify offensive cyber operations. This information, however, is also useful for identifying how defence platforms detect offensive cyber techniques, helping to develop new evasion methods. Eventually, interviewees would like this to be combined with scanning results and vulnerability data to help identify new attack paths, with links into Attack Surface Management tooling.

A number of challenges to widespread development and adoption still exist; the primary bottlenecks have always been the storage of big data and the analytics necessary for processing it. Interviewees suggested that cloud has helped address the first of these issues, as there was no realistic option to host that material internally to their organisations, and it is hoped that AI will improve the ability of data scientists to analyse large data sets faster.

Interviewees stated that whilst they expected significant benefits from more capable data processing, the security of large data repositories was becoming a top priority for both clients of interviewees and red team firms as they accumulate and process more sensitive information. As a result, a few firms are considering moving data back out of the cloud to secure datacentres. This has been exacerbated by the impact of breaches of large cloud providers in recent years.

Additionally, interviewees identified a significant concern over using commercial AI products to support the automation and analysis of this data, given the technology’s relative immaturity. To partially address this, some interviewees suggested that R&D effort is being spent on training local AI models on private data to maintain security and avoid external dependencies. Meanwhile, analysis of the data still relies on manual, human-driven effort.

7.3 Blockchain/crypto

Despite cryptocurrencies remaining prevalent in news articles, and crypto exchanges being a target for a number of offensive cyber threat actors, no mention of this technology was made by any of the interviewees. It is possible that this is because many of the organisations who agreed to interview do not have the same reliance on, or interest in, cryptocurrencies, and therefore do not factor them into their business models.

Interestingly, blockchain in general was also not discussed by interviewees. The benefits of blockchain for decentralisation, immutable transactions, resiliency and transparency in data management make its lack of discussion from interviewees surprising. Due to the lack of discussion on this topic, it is not clear why this is the case; however, it is possible that it is viewed as of greater benefit to the defence side of the industry than the offence. This is because it could support the development of tamperproof logs (supporting audit and compliance), offer security benefits for internet-connected devices by helping prevent unauthorised access, and potentially improve security for DNS.
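The tamper-evident property underpinning such logs can be illustrated with a simple hash chain, where each entry commits to the hash of its predecessor so that any retrospective edit breaks verification from that point onward. This is a minimal sketch of the principle only, not any specific blockchain or logging product:

```python
import hashlib

def append_entry(log, message):
    """Append a log entry whose hash commits to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry_hash = hashlib.sha256((prev_hash + message).encode()).hexdigest()
    log.append({"message": message, "prev": prev_hash, "hash": entry_hash})

def verify_chain(log):
    """Recompute every hash; any edited entry breaks the chain from that point."""
    prev_hash = "0" * 64
    for entry in log:
        expected = hashlib.sha256((prev_hash + entry["message"]).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "user alice logged in")
append_entry(log, "alice accessed payroll share")
print(verify_chain(log))            # chain intact: True
log[0]["message"] = "nothing happened"
print(verify_chain(log))            # tampering detected: False
```

A full blockchain adds distributed consensus on top of this chaining, which is what provides the decentralisation and resiliency benefits noted above.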

7.4 Cloud adoption & infrastructure as code

We have seen a heavy adoption to cloud, right by the client, and by the attacker and the offensive industry, right, everyone

Interviewees were keen to discuss the adoption of cloud and cloud-related technologies such as Infrastructure as a Service (IaaS) and Software as a Service (SaaS), adoption of which does not appear to be slowing down. This is due to the benefits of Infrastructure as Code (IaC) toolkits such as Terraform, which deploy cloud environments efficiently, reduce operational overhead, and enforce security-by-design principles. This has the added benefit of being able to rapidly scale resources up and down based on demand, not just for clients of interviewees but also for red team operations, permitting the industry to adjust to demand for services.

The prevalence of SaaS did raise concerns for the interviewees. Clients of interviewees had frequently raised with them that the uncontrolled adoption of SaaS applications creates data leakage risks, as employees often sign up for cloud services without security oversight. This led to further concerns over supply chain vulnerabilities, such as exposed code repositories or mismanaged access permissions. As a result, clients of interviewees were investing more effort into improving visibility of who is accessing what data across cloud services. Although unstated, this would likely also require red team firms to consider how best to evade or capitalise on these advancements.

Interviewees expressed that, with so many organisations pivoting to the cloud, many providers had re-orientated to develop and research capabilities to target and attack cloud environments. As a result, securing cloud environments, especially hybrid environments, has become significantly more important, as compromise of a single cloud identity can give attackers significant access to corporate environments. Interviewees acknowledged the need for stronger cloud security measures, including conditional access controls and phishing-resistant MFA, to help address this.

Interestingly, the benefits of using services such as cloud platform Content Distribution Networks (CDNs), or high-reputation email services such as o365, to obfuscate red team operations was not a topic brought up by interviewees, who instead chose to focus on the cost savings and efficiency gains from using cloud for infrastructure, and on advancements in attacking clients’ cloud deployments.

7.5 Defensive tooling capabilities

I used to work in defensive security and for example, [I don’t know] how many times I saw a decision people took … to buy a tool just because they saw [it] on my slide, you know, and they never have to test because if unless you are under attack, you don’t know if your defensive tool works or not.

Offensive security is the opposite. The customer who buys it, they will test it immediately. They will know immediately if it works or not.

Interviewees acknowledged that defence tooling had advanced significantly over the last few years, and this had a significant impact on the red team industry. As defences improved, offensive cyber service offerings were forced to develop new tools and techniques to evade them. The capabilities of offensive cyber tools and methodologies thereby managed to outpace those of defensive technologies, which remained reliant on a human to review alerts and manage investigations into breaches. This introduced a time lag within which red teams could operate and potentially achieve their objectives.

Interviewees expressed the opinion that, with potential advances in AI and machine learning, plus improved access to large data repositories and cloud-based outsourcing of security products permitting defence at scale, defensive capabilities may begin to outstrip offensive ones, degrading service offerings for more mature environments. Techniques developed in house as proprietary offerings could be identified and exposed far more quickly, resulting in more failed offensive cyber operations against those clients of interviewees willing to embrace such efforts.

Furthermore, the development of new tooling and techniques would be equally hampered by the inability to test their efficacy against defence products without risking third-party defence firms gaining early sight of them during development. Interviewees offered no exact timescale for when this flip will occur, but it was expected within the next few years.

Despite the concerns raised, interviewees were also quick to point out that not all defence products are the same: poor implementation and deployment mean many outsourced services fail to meet expectations. Limited commercial cyber defence personnel remain a bottleneck, and “alert fatigue” remains a significant reason why so many attacks succeed despite improved tooling and visibility. Furthermore, it was acknowledged that threat actors such as cyber criminals, and therefore also the red team sector, are starting to use techniques borrowed from defenders to fool threat intelligence analysts. This can lead to misattribution, which has the added knock-on effect of possible geopolitical impacts.

7.6 Generative AI impact

AI is changing the way we look at a lot of things from an offensive and defensive perspective.

Possibly unsurprisingly, given the global awareness of the topic, the discussion of Generative AI products was prevalent throughout the interviews.

Interviewees were explicit that they could see significant benefits from this technology in enhancing productivity, with significantly improved efficiencies for code generation, data analysis and documentation tasks. However, interviewees also felt the technology was currently incapable of the contextual understanding necessary for risk management in the highly dynamic and complex environments in which the red team element of the commercial offensive cyber sector operates.

As a result, almost every respondent was highly sceptical of the technology for delivering red team operations. Interviewees felt it was not yet mature enough to replace the human element; instead, the technology was viewed as a co-pilot for a human operator, able to offer advice or identify alternative attack paths rather than being permitted to act.

In terms of like development and stuff, it’s skyrocketed someone’s ability to code. I would say that’s … the biggest thing.

One of the more common areas cited by interviewees where AI was having a bigger impact was its ability to help with coding. However, it was expressed that AI’s ability to help develop new techniques and tools for red team operations was extremely limited. This is because the current generation of models only has access to public repositories of data, whose techniques are already widely signatured and known to the defence industry. Its attempts to generate tools are therefore already heavily signatured and offer no advantage over human efforts, other than lowering the barrier for less sophisticated threat actors who lack an understanding of what the tooling is doing.

Furthermore, most of the models do not have access to data showing how attacks are identified and prevented, which leads to gaps in their ability to support the development of capabilities to evade such security products. Given the sensitivity of such data, including custom detection rules and the output of security systems, there was no expectation that public models would be given access to it any time soon. Instead, interviewees expressed interest in developing internal, private models to assist their own proprietary research; however, details of these efforts remain scant and interviewees were reluctant to discuss the topic further.

People have invested so much in AI, they just rammed it down our throats. It is useful. But that’s it. It’s just useful. It’s not greedy. It’s not the world’s changing, you know.

Whilst many interviewees were optimistic about the technology, scepticism of AI was also very apparent in the interviews. There was significant concern that AI’s true capabilities had been exaggerated by marketing and media. Concerns were raised repeatedly over the security and misuse of data provided to public models, and over hallucination: the presentation of false information that ultimately requires knowledgeable human expertise to correct, thereby undoing any efficiency savings of using the tool.

I’m an AI optimist, but I’m also a sceptic in the form of the attack paths that are going to be created in reality off the back of it.

There was significant discussion of the hopes and fears surrounding this technology. Interviewees from the red team element of the sector broadly agreed that they would like to see further limited adoption of the technology to support commercial offensive cyber operations, but at present do not trust that it will be capable of replacing human operators for a considerable time to come.

7.7 Internet connected technologies

From the emerging technologies point of view, that we’re seeing, that’s mapping a lot into the OT space at the moment as well.

It should be noted that up until recently, the risk and complexity of conducting offensive cyber security in Operational Technology (OT) environments was considered overwhelming and effective operation in the space was limited to certain nation-state backed threat actor groups.

Whilst the risks remain high, interviewees expressed the opinion that emerging technologies are now enabling more comprehensive security coverage for Internet of Things (IoT) and OT systems, addressing previously unmanageable risks. As a result, the industry is doing more in this space. The rise of regulatory frameworks and compliance mandates is also pushing companies to improve their IoT security postures, and more cybersecurity businesses are consequently entering the IoT/OT security market, developing solutions to monitor and manage security risks in connected devices.

Interviewees noted that their clients had begun to recognise that IoT and OT systems are becoming major attack vectors that need proactive security measures. Media-sensationalised breaches of OT systems, such as Colonial Pipeline (even though the OT systems there were not directly affected), have raised significant interest in how to protect these systems better, requiring more testing from the red team element of the commercial offensive cyber sector. This is driving a requirement to develop new tools and techniques, and fuelling research into the protocols used by many legacy systems that are now more interconnected.

Challenges in this area still exist, however. Among those who discussed the topic, there was an opinion that some organisations still see IoT/OT security as too complex or risky, leading them to delay investment. Furthermore, not all firms have integrated IoT security solutions, with some waiting for further technological advancements before committing resources. It was also felt that certain industries remain hesitant to adopt IoT security measures, due either to legacy infrastructure challenges or a lack of skilled personnel, and that some clients of interviewees still prioritised IT security over OT security, leaving industrial and IoT environments less protected despite known risks.

7.8 Mimicking real-world threat actors

There will be a big adoption like I said before of AI, but in the intrusion space, in the adversarial simulation space, I still think that the most value comes from a human, you know, attempting to emulate another human.

This topic focused more on methodology than technology; however, technology did play a part in enabling red teams to mimic threat actors, because the industry must adapt to the new technologies that clients of interviewees are using and that threat actors are therefore abusing or attacking.

Those interviewed expressed that organisations are prioritising threat actor emulation in red teaming and adversarial simulation exercises, with security teams wanting to replicate attacker behaviours to test defensive capabilities under real-world conditions. As a result, manual simulation was still preferred over automation for emulating real adversaries, as attackers continue to operate manually.

Specifically, clients of interviewees expected tailored engagements, preferring customised red team exercises rather than large-scale automated assessments. This means security firms are required to take a premium approach, using best-of-breed tools and methodologies similar to those used by elite threat actors. Consequently, adversarial simulation remains highly specialised, requiring human expertise rather than full automation.

Given the client requirements for greater simulation and alignment of attack techniques to real world threat actors, the cybersecurity industry has increased its focus on attributing cyber threats, leading to more threat intelligence-driven security operations. Interviewees felt that attackers were adapting to this trend by intentionally mimicking other threat groups to evade detection and mislead attribution efforts.

This in turn was leading security teams (both red team companies and commercial cyber defence companies) to refine their techniques to differentiate between real and deceptive attack patterns. AI was seen as a potential game changer in this field but is still considered too immature to add any real insight at this time. As a result, technology advancements in this field remain fragmented and little information was shared in terms of how this would be addressed.

7.9 Quantum computing

As far as quantum and stuff, that’s something I’ve been hearing about for years. But … that’s in university somewhere. It’s got no benefit to me at the moment.

The topic of quantum computing was used as an example of emerging technology during the interviews, which likely prompted interviewees to discuss it. It is therefore unknown whether they would have raised this topic unprompted.

Interviewees stated that they were interested in quantum computing but as yet saw no direct application for the sector beyond its impact on cryptography, which was an area they believed would see the most impact once the technology advances. Discussions around quantum instead focused on future possibilities rather than current implementations. While some of the largest firms interviewed have quantum computing expertise, it is not actively tied to offensive cybersecurity efforts.

Furthermore, interviewees that do have quantum computing talent expressed that they struggle to retain specialists, as startups offer higher-paying opportunities. This is exacerbated by a shortage of practical quantum cybersecurity experts, leading many firms to rely on academic discussions rather than internal expertise.

Several interviewees believe that quantum computing is in an overhyped phase, similar to previous emerging technologies. There was scepticism about its immediate impact, with most believing practical quantum threats are still over ten years away. As a result, quantum was discussed more in academic terms rather than in practical cybersecurity implementations.

7.10 Social engineering & AI

You go from having broad, badly spelled phishing emails which are lower [sophistication] and probably and may potentially have technical failings that stop them from working, to having an AI or something in the mix that can then translate it to perfect English, but also add the context [to] that e-mail that’s relevant to the target as well

Despite generative AI being a focus topic for emerging technology elsewhere in this paper, there was sufficient divergence on the topic for this theme to emerge as a standalone area.

Interviewees felt that one of the biggest impacts generative AI was having was in the development of more sophisticated social engineering attacks. Specifically, generative AI was now widely used to craft more convincing phishing emails and social engineering messages. This means that both legal and illegal commercial offensive cyber entities can generate well-written, highly personalised emails, improving their success rates. Additionally, AI-powered phishing and chatbot-based pretexts were being explored as a way to automate the initial engagement phase of social engineering attacks.

Anecdotal evidence, drawn from third-party studies, was raised by a number of interviewees concerning the use of generative AI for other social engineering approaches, such as deepfakes that mimic real-world individuals on video and telephone calls, the latter technique covered by the term ‘vishing’.

Some of the deep fake stuff, there’s a significant nervousness about using those techniques in a red teaming setting.

One interviewee expressed that they deliberately avoid deepfake technology in social engineering engagements due to ethical and legal concerns. This appeared to be due to unclear legal boundaries and reputational risks. Furthermore, ethical guidelines for AI-based social engineering were felt to be still evolving, leading to caution in corporate security testing environments.

7.11 Automated vehicles

Drones also came up, so use of drones for collisions. So it’s part of full spectrum red teaming. That is something that we have I think we’ve actually used them in engagements. And I can imagine you could use them for a number of things. So reconnaissance of physical sites. Also peering through windows. I’m not necessarily saying that we’ve done any of these things, but drones certainly seem to be on the list of technologies, so.

Red team involvement in this topic area was considered by interviewees to be niche and highly specialised. As a result, very few firms raised it, but there was still sufficient discussion to warrant a standalone topic.

Interviewees felt that modern vehicles were increasingly seen as operational technology (OT) due to their high levels of connectivity and automation. As a result, red team security experts were closely monitoring developments in automotive, maritime, and aerospace technologies, as similar security risks are expected to emerge across industries. Given the impact attacks on such systems could have, especially with the rise of drone technologies in numerous civil and military environments, vehicles and IoT-integrated transport systems are becoming key cybersecurity areas of interest.

Investigation into use of AI/automation in vehicles was also raised on this topic, with interviewees expressing the opinions that self-driving vehicles operated well in structured, controlled environments, such as motorways with clear lane markings. However, when unexpected conditions arise, automated systems struggle to adapt, highlighting potential security risks. As a result, automated decision-making in transport was seen as still limited to predefined scenarios, leaving room for exploitation or failures in edge cases.

This trend was seen as expanding beyond considering how to attack such vehicles into how to incorporate them into offensive cyber operations. Drones are being considered in full-spectrum red teaming engagements, particularly for reconnaissance and physical security testing, with security professionals recognising the potential for drones to gather intelligence, such as peering into windows or scouting locations before an attack.

8. Investments & acquisitions

Figure 3: Breakdown of amount of discussion on topics under Investments & Acquisitions

Code Topic
INV-ADOP Adoption of technologies for the entire business
INV-CUST Customer Expectations for Emerging Technology
INV-EFF Improving efficiency through investment
INV-RD R&D Investment vs Acquisitions

8.1 Adoption of technologies for the entire business

We have different parts of the business that are involved in that and we’re looking at how AI can improve everything we do from like, you know, a customer experience.

Out of all the emerging technologies, AI attracted the most attention, with a sizeable number of red team services investing in AI-driven projects as a key enabler for digital transformation. Cloud investment was not discussed as heavily as AI on this topic during the study, despite much discussion of it in market trends and emerging technologies. It was not clear if this was because cloud was viewed as an older investment, the benefits of which have already been realised, or if it was due to the market hype of AI being the next big thing. Regardless, interviewees claimed this trend was echoed amongst their clients as well as themselves, with AI viewed as a tool for cutting costs and optimising commercial operations.

It should be noted that this investment was still viewed as being in its exploratory phase, with businesses testing the water across multiple business applications rather than offensive cyber capabilities. Quantification of the benefits of AI was debated, as the cost-saving benefits have yet to be realised and require significant initial capital investment. The most common adoption of AI appears to be in client-facing chatbots for customer service, automation and operational efficiency.

Interviewees made it clear that when it came to business service adoption of technologies such as AI, they were far more likely to purchase commercial offerings rather than develop an in-house solution. This was likely due to the prevalence of such commercial offerings rather than the more specialised options that would be unique to the commercial offensive cyber market.

8.2 Customer expectations for AI

I’ve not seen any customer go ‘We expect your tech to be more advanced because now you have AI to help you.’

Interviewees were asked if their clients had an increased expectation that they would be using AI in their commercial offensive security offerings, either in their products or in their services. The overwhelming opinion was that clients did not currently have strong expectations that AI would drastically improve services, though interviewees acknowledged that their clients were being influenced by AI marketing claims.

The influence was strongest in smaller businesses, which lacked firsthand understanding of the technology and had unrealistic expectations of what it could achieve based on marketing claims. This influence was greatly diminished, however, among the more technically knowledgeable clients of interviewees.

There was an expectation that over the next few years demand for AI enhanced commercial offensive cyber services will increase, and may eventually replace some manual efforts. However, at present, clients of interviewees did not expect AI to have a direct hand in the delivery of services or even AI authored reports due to concerns over AI hallucinations and concerns over data security and unauthorised AI processing in commercial offerings.

Overall, scepticism and security concerns are dominant, but marketing influence and automation trends suggest there will be a gradual shift in expectations.

8.3 Improving efficiency through investment

We are investing in much more time in people capability building as well, and getting towards the point of more automation and augmentation through taking tooling, etcetera, that people are using individually, and making that more available through our portal.

Interviewees were asked about how they were improving their capabilities and service offerings through investment. Whilst exact expenditure amounts were never shared during the interviews, one interviewee estimated their annual spending on tooling, research and development, and training at between £750,000 and £1,000,000.

Where tooling was developed, it would often remain proprietary for a strategic period in order to maintain a competitive edge, before sometimes being released to the open-source community. This means a red team’s capabilities can remain diverse and, in many cases, unique, as firms maintain capabilities in specialised areas of interest that are not initially publicly available whilst still offering broad operational capacity.

Acquisitions and mergers were also raised as a method for improving organisational efficiency, with firms acquiring boutique cybersecurity companies to gain access to specific expertise. Through the acquisition of many such boutiques, they gained access to a large pool of experts and were able to work on bigger and more complex projects.

When it came to research using products such as AI, interviewees expressed significant security concerns over the use of public models. As such, a sizeable number of interviewees expressed that, for cybersecurity testing, they were developing and investing in private, internal models customised for specific use cases. Details of the development of those models were not shared by interviewees.

8.4 R&D investment over acquisitions

Our investment is purely R&D. There’s no magic products that we can buy that will help us be better.

Interviewees expressed that they would almost always invest in research and development over acquisition, as this was seen as far more cost effective for them in the long run. Exact levels of investment were not shared; however, for some large companies it ranged between 10 and 20% of their annual budget, whilst for smaller companies the figure was close to 90%. The main reason cited was that custom tooling is preferred for offensive security in order to maintain an edge over competitors. This has the added benefit of being more likely to evade evolving security products, as custom tooling is not as widely used, and therefore signatured, as off-the-shelf solutions.

Research was heavily guided by client and market demands rather than speculative development; until there is client demand, firms will invest minimally in a given product or research area. This is borne out by the prevalence of research into exploiting or conducting offensive cyber operations against Microsoft Windows environments as opposed to macOS, Linux, mobile device or OT environments.

In a minority of cases, research and development was also viewed as an investment in branding and credibility, with firms choosing to publish their materials to establish industry expertise rather than keeping them internal for use on engagements and sharing them only after a strategic period of time. In this manner it was also used to support recruitment, as professionals were more aware of the brand due to its published capabilities and research.

In a couple of interviews, interviewees suggested that their firm would acquire smaller companies to enhance expertise, but were keen to emphasise that they did not rely heavily on acquisitions for innovation, and that when acquisitions did occur, the focus was on capabilities rather than just technology.

Whilst not a particular focus for this question, the topic of AI was repeatedly brought up. There was an acknowledgement amongst interviewees that AI development was expensive, and the number of interviewees interested in this area was therefore limited. Furthermore, AI adoption was cautious, with many firms opting for private AI models rather than public ones. Overall, the current direction of AI research and development treats it as a tool for augmentation rather than full automation in cybersecurity assessments.

9. Talent & recruitment

Figure 4: Breakdown of amount of discussion on topics under Talent & Recruitment

Code Topic
TR-KEYS Key skills wanted from recruitment
TR-RET Retention Through Innovation & Culture
TR-SKILL Training vs Direct Hire

9.1 Key skills wanted from recruitment

Instead of their educational background, we focus more on their exposure, in terms of have they written any blog post, have they created any CVEs (Common Vulnerabilities and Exposures). Have they created any open-source tools? [No matter] how small or insignificant it might be in the scheme of a full engagement, but that shows that how curious they are.

Interviewees were emphatic that their organisations do not prioritise cyber security degrees from potential applicants. Instead, firms were seeking demonstrable skills and hands-on experience, with particular value placed on GitHub projects, blog posts, open-source contributions and demonstrations of real-world problem solving. Passion, curiosity, initiative and self-driven learning were seen as more important traits than formal education. Interviewees explained that this was because they could teach technical skills through internal training programmes, but the personality traits and enthusiasm needed were not something that could be trained.

Use of universities to create pipelines for new talent was mentioned in a minority of interviews; however, apprenticeships and early-career programmes such as CyberFirst were more common and used to train talent in-house.

In at least one interview, it was made clear that rather than cybersecurity degrees, they would instead look towards those who had degrees in Chemistry or Engineering, or had backgrounds as photographers. This was due to those individuals’ abilities to understand how to examine a system and understand how the constituent components would interact to make the system work as a whole. This in turn would make it easier to understand ways to attack the systems or make them behave in manners not planned by their creators.

When it came to more experienced candidates, qualifications and certifications from bodies such as CREST, the Cyber Scheme, or the Cyber Security Council’s chartership schemes were valued, but only for regulatory-related testing. Experience specifically in red teaming and penetration testing was prioritised, with a caveat of at least 6 years’ experience in the industry. Where that was not available, expertise in networking, system administration, or reverse engineering was considered highly beneficial for delivering engagements and developing new capabilities. Specialisations in cloud security, malware development, EDR evasion, and cryptography were identified as increasingly valued but extremely difficult to find and attract due to high salary demands.

Skills in AI or quantum cryptography were mentioned, but only in the context that there was no present demand for them in the industry; as such, they were not considered priority hires or actively sought after.

9.2 Retention through innovation & culture

Salary is a big one, obviously. But work life balance and company culture are just as big for people that come to us and very important for the right type of talent. Benefits that are intangible, like sort of flexibility and you know, having some control of your own destiny and working in a pleasant environment where people like each other. It’s these things are actually really important for staff retention, for attraction and retention.

Interviewees identified that demand for cybersecurity experts massively exceeds supply, which makes retention a significant challenge. Many firms interviewed admitted that they struggle to compete with higher salaries offered by companies in the U.S. and other high paying geographies. Furthermore, it was identified that cybersecurity professionals, in particular, prefer companies where they can work on cutting-edge problems rather than repetitive tasks.

To address these challenges, interviewees explained that whilst salary is always a key factor, additional non-monetary benefits such as autonomy, interesting projects, professional growth opportunities, employee support systems, enforced work-life balance through a hard cap on billable hours, and remote working were all seen as key to retaining staff. Company-sponsored attendance at international conferences, internal knowledge-sharing workshops, and bonuses for innovative discoveries were also raised as priority offerings by interviewees. A number of interviewees were keen to emphasise that they would rather prioritise flexible schedules and respect for personal time, to prevent burnout and increase job satisfaction, than compete on salaries.

Anecdotally, some of the interviewees expressed the opinion that retention levels improved when staff were given opportunities to switch between projects and engagements, and that internal career progression was encouraged through continuous learning and skill development rather than achieved from job hopping.

9.3 Cross-skilling vs. direct hire

So, we kind, we kind of try to crowdsource things specifically. So, we don’t want just one person in the know of things like AI, and I think that’s actually a downfall of a business.

Interviewees were asked specifically about their approach to obtaining specific skills, whether that would occur from direct hiring, or from cross skilling existing staff.

The prevalent view was that direct hiring only occurs when there is sufficient demand from client work. Certifications, degrees and academic partnerships played only a minor role in recruitment. When direct recruitment did occur, it was usually from industry connections, personal networks or internal referrals rather than from job postings. The opinion that trusted recommendations from recognised industry professionals were routinely valued more than formal recruitment processes was a recurring one, and use of internal bonuses for referrals was considered common for attracting top-tier talent.

The main reason cited for this approach was that niche skills, such as red teaming, offensive security capability development, or even AI expertise, command high salaries. This makes such specialists difficult to attract, and many offensive cyber security firms cannot compete with top-paying global companies that also seek those skills internally.

We do recruit apprentices every year and we have done, kind of stays for two years, put [them in the] program and then they basically rotate around various different cyber security teams to learn from.

Instead of hiring fully trained experts, companies in the red team element of the commercial offensive cyber sector expressed a preference to recruit talented junior individuals with the right mindset, curiosity, and problem-solving abilities and train them in-house. This is achieved through internal apprenticeships, cross-training and hands-on research projects, with internal academy and apprenticeship programs playing a significant role in training junior staff. These training programs would be adapted to provide skills in niche areas as demand and planning requires. This means that although there is not a current demand for things like AI experts, firms are already planning for such a demand and training paths in this field are already being explored.

10. Future implications & risks

Figure 5: Breakdown of amount of discussion on topics under Future Implications & Risks

Code Topic
FUT-BAL Balancing Innovation & Security
FUT-COM Commoditisation & Market Disruption
FUT-INNO Undefined Expected Innovation
FUT-REG Future Regulation
FUT-SEC Security Risks & Concerns
FUT-SERV Service Delivery Benefits
FUT-VULN Vulnerability Research
FUT-WORK Workforce Evolution

10.1 Balancing innovation & security

The defensive piece is just [as] important, if not more important than, than this, and how AI and emerging technologies are getting, you know, are helping defenders and what they’re using, to what they’re going to be using to protect organisations. Because I think that’s just as important as the simulated events.

The overall trend in this topic was the opinion that the security landscape was more balanced than before, forcing offensive security professionals to refine their tactics, tools and commercial offerings. This perceived increased speed of defensive adaptation was ultimately leading to a more cautious approach to knowledge sharing among offensive security practitioners so as to avoid techniques being burned prematurely and inhibiting operations.

Analysis of this topic from the interviews revealed that the effective capability bar for offensive security professionals was rising, requiring deeper coding knowledge, automation expertise, and adaptability. Traditional offensive techniques were becoming less effective, forcing red teams to find short-term gaps in defences rather than relying on old exploits. Interviewees concluded that offensive cyber operators need to constantly evolve as security solutions become more sophisticated and harder to bypass.

This means knowledge of innovative tools and techniques may become more restricted, and only become public once they have been effectively neutralised by defences.

10.2 AI commoditisation & market disruption

It’s really impacting I guess, someone’s ability to buy and actually know what they’re buying from, because you can use ChatGPT to create the best answers.

Interviewees were asked explicitly about the commoditisation of AI into the commercial offensive cyber sector, and how they felt it would impact the delivery of their services. Opinions expressed on this topic were generally varied.

On the commercial front, and in use by the red teams themselves, AI was frequently seen to be making proposal writing and bid responses easier, which would reduce differentiation between vendors’ offerings in terms of experience and skill. The concern that AI and increasing automation of security assessments posed risks to traditional consulting services was, however, considered remote: there was little evidence that clients of interviewees were willing to prioritise cost-effective AI solutions over human expertise in the very near future.

No, I think it’s been shoehorned into products to make them seem better than it is, I think a lot of people are saying it’s using AI but often it doesn’t really make a difference to the products.

There was a concern that AI-driven security products were being marketed as superior when many lack real differentiation. This was seen as contributing to market saturation and commoditisation, with ‘AI’ becoming a marketing term rather than a description of core product functionality that, in reality, is often closer to traditional automation. Interviewees were concerned that this left clients struggling to understand what they were actually buying, as vendors inflated the AI capabilities of their products, and could lead to AI-powered security tools being blindly trusted even when their capabilities were unproven or exaggerated.

In the longer term, a number of interviewees felt there was a risk that, due to the costs involved in AI, larger firms would be able to acquire top talent more effectively and invest in long-term product development, making it more difficult for smaller firms to compete. This would lead to many mid-sized and boutique firms being overshadowed by larger competitors that bundle AI-driven services with their security offerings, at the expense of more specialised, human-driven expertise firms. Generally, it was felt that, given the option, clients of interviewees would in time be increasingly attracted to bundled solutions that “check more boxes”, even if they do not necessarily provide better security outcomes.

10.3 Undefined expected innovation

I’m expecting to see some, you know, new shiny tools and some pretty good stuff come to the market. But so far, it’s been very slow.

Interviewees were of the opinion that more innovation in tools and capabilities should be expected, but struggled to express what form those innovations would take.

Despite elevated expectations, new security tools and major breakthroughs were thought to have been slow to emerge in the offensive cyber security sector; many toolsets are several years old, with only minor iterative enhancements added over time. A number of interviewees expressed hope that AI and automation would play a key role in accelerating innovation in the field in the coming years.

We’ve seen all the very recent noise coming out of China about AI being run at a much cheaper cost with much lower processing power, I think. Understanding the evolution of those market elements will be quite interesting because if we can suddenly take and, or, back that, you have the cost of doing AI at the compute level. That will actually have a transformative effect on what can be done. So that that’s what I’ll be watching myself over the next couple of years.

There was a feeling that China and other international markets are driving AI innovation. This was hoped to reduce the cost of AI products, improving access to them for smaller firms. Capitalising on this, some firms theorised using AI to explore post-exploitation and lateral movement tactics and to manage large datasets, which would help improve contextualisation, prioritisation and vulnerability management.

Interviewees noted that larger firms are trying to break into this area by developing internal private AI models, training them on internal security assessments, allowing them to fine-tune their models for specific threat environments in the hopes it will lead to unexpected innovation for their organisations. Meanwhile smaller firms are closely monitoring AI-driven innovations, and are yet to decide whether to adopt an external solution or develop an internal one.

Whilst major innovations in the field are widely hoped for, a number of interviewees expressed doubts that security testing will ever be fully automated, arguing that human intuition and experience could not be replaced, especially when taking into account individual client environments and risk appetites.

10.4 Future regulation

I worry about the regulatory environment keeping up.

Throughout the interviews, regulation emerged as a recurring topic, given its impact on clients of interviewees and its role as a key driver of demand for red team services. Interviewees were therefore keen to also discuss their speculation on future regulation which could affect the industry.

A few interviewees raised the concern that cybersecurity compliance would always lag behind threats, making constant adaptation more critical than strict regulations. Given the pace of change, interviewees hoped there would eventually be international alignment on cybersecurity laws, especially given the drive of globalised business to simplify and streamline the ability for companies to operate in multiple jurisdictions while staying compliant with local laws.

A concern was voiced by some interviewees that AI-driven vulnerability scanning and penetration testing may be wrongly accepted as a regulatory standard, bypassing human expertise. To address this, interviewees suggested that regulators may eventually require human verification of AI-driven security assessments, ensuring quality control and validation.

There’s also the taxation element. There are people talking about taxing AI usage, etcetera, etcetera. The societal impact. And you can go very much fearing into the future. But there will be adverse reaction at government levels. That’s probably the bit that we’ve not called out

The general consensus was that governments were likely eventually to introduce AI regulation, and potentially taxation, but the timeline and scope of such measures remained unclear. There was a concern that cross-border AI regulations could also create conflicts between different legal frameworks. This risk would require a joined-up approach to ensure that no country could abuse the technology and create market imbalances in the development and deployment of AI – essentially avoiding the creation of AI-rich and AI-poor regions which authorise different uses of the technology.

There was considerable concern that less ethically constrained countries would seek to ignore international consensus, leading to widespread abuse of the technology once it becomes more commonplace (for example, some countries permitting its use in developing and deploying offensive operations, whilst others would only permit its use for business applications).

10.5 Security risks & concerns

We are very, very reluctant because obviously we deal with quite confidential information.

Throughout every interview, security concerns and scepticism about the adoption of new technologies remained high, especially in relation to AI.

Many interviewees stated they were reluctant to use AI in critical security tasks due to concerns about data privacy, hallucinations, and lack of transparency. There was widespread distrust of AI-generated reports and analyses, and because security teams required manual verification of AI outputs, enterprise adoption of AI within the sector was slow. Overall, the integration of AI into cybersecurity operations was being considered, but uncertainty over its full impact, combined with fragmented implementations, means it is currently constrained to experimental, non-production environments.

The interviewees voiced concerns over how AI was already being abused by threat actors. Examples included how attackers were leveraging AI tools, including generative AI for phishing and automated reconnaissance, and how AI-driven automation was making it easier for less skilled attackers (e.g., script kiddies) to generate semi-effective payloads, increasing the volume of attacks.

The largest security concern raised by the interviewees, and reportedly by their clients, was confidential data being exposed to AI models, leading to data leaks and regulatory violations. The lack of clarity on how AI models handle sensitive data raised particular concerns about whether models train on proprietary or customer information. As a result, it was felt that organisations were keen to investigate deploying private AI models to mitigate these risks, but many remained cautious about integrating AI into security workflows.

10.6 Service delivery benefits

Long term, I think we’ll get more efficient at testing and I think we’ll get more efficient at reporting. And generally I would hope that that will mean that our services will be, you know, continue to be delivered [at] a competitive price point for our customers.

Interviewees were keen to describe how they felt their businesses would benefit from the new technologies. AI and automation were expected to significantly improve efficiency in security testing and reporting, specifically around manual documentation and repetitive tasks, allowing security professionals to focus on high-value tasks and service delivery. It was widely recognised that AI’s ability to accelerate data analysis would greatly enhance service offerings, making it easier to identify threats faster and more accurately.

Beyond reporting and documentation, it was hoped that automation and AI-powered tools would continue to improve. A number of interviewees hoped this would lower the barrier to entry for offensive security professionals, replacing the traditional route of gaining experience over many years, reducing the need for highly specialised expertise and expanding the number of individuals employed in the sector. This would permit greater competition between existing firms in terms of capability and capacity, allowing services to be offered more widely and cheaply. Ultimately, this would lead to red team testing and vulnerability assessments becoming more accessible due to advanced tooling.

In spite of these expressed hopes, opinion was not uniform on this topic. Some security professionals believed automation would not be able to replace human expertise, arguing that manual testing and critical thinking remain essential, as not all clients of interviewees would trust AI-driven assessments. Furthermore, some expressed doubt that it would meaningfully reduce experience requirements, as context and intuition remain crucial in cybersecurity.

10.7 AI in vulnerability prioritisation

Certain individuals, who are developing AI in stove pipes in-situ, and they are doing some fairly scary things. Feed it a binary and it comes out with, you know, half a dozen modules [that are] critical risks and work every time. However, they are not the common, and that stuff is not going to be publicly available. You know. It does have that capability, if you are a highly talented genius level AI programmer and engineer, it can be done.

It is perhaps unsurprising that AI’s use in identifying new vulnerabilities was discussed. The industry in general has been very interested in using AI for code reviews, reverse engineering and the analysis of large-scale data. Analysis of security patches to identify mitigated vulnerabilities falls into both of these categories, making it a prime candidate for determining whether AI can be used to identify additional software flaws that offensive cyber actors could exploit.

Some interviewees admitted to experimenting with AI-driven code review solutions to detect vulnerabilities earlier in the development lifecycle, whilst others were investigating AI-driven fuzzing (the process of varying inputs to a system in the hope of triggering unintended behaviours which could lead to the identification of exploitable vulnerabilities) and automated testing to help security teams analyse millions of attack scenarios faster than before. The results of this research were not shared by the interviewees, but it was clear that a high degree of enthusiasm and hope was attached to these projects.
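For readers unfamiliar with the technique, the fuzzing process described above can be illustrated with a minimal sketch. This is not any interviewee’s tooling: the `parse_record` target is a deliberately naive toy parser invented for illustration, and the mutation strategy is the simplest possible, but the loop shows the essential idea of feeding randomly mutated inputs to a target and recording the cases that crash it for later triage.

```python
import random

def mutate(data: bytes, rng: random.Random) -> bytes:
    """Apply a handful of random byte-level mutations: bit flips, inserts, deletes."""
    buf = bytearray(data)
    for _ in range(rng.randint(1, 4)):
        choice = rng.random()
        if choice < 0.5 and buf:
            buf[rng.randrange(len(buf))] ^= 1 << rng.randrange(8)  # flip one bit
        elif choice < 0.8:
            buf.insert(rng.randrange(len(buf) + 1), rng.randrange(256))  # insert a byte
        elif buf:
            del buf[rng.randrange(len(buf))]  # drop a byte
    return bytes(buf)

def parse_record(data: bytes) -> bytes:
    """Toy target: a length-prefixed record. Deliberately trusts the length byte."""
    length = data[0]  # raises IndexError on empty input
    payload = data[1:1 + length]
    if len(payload) != length:
        raise ValueError("truncated record")
    return payload

def fuzz(seed_input: bytes, iterations: int = 1000, seed: int = 1) -> list:
    """Feed mutated variants of a valid seed input to the parser; collect crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(iterations):
        candidate = mutate(seed_input, rng)
        try:
            parse_record(candidate)
        except Exception as exc:
            crashes.append((candidate, type(exc).__name__))
    return crashes

# b"\x05hello" is a valid record: length byte 5, five-byte payload.
crashes = fuzz(b"\x05hello", iterations=500, seed=7)
```

Real fuzzers add coverage feedback, corpus management and crash deduplication on top of this loop; the AI-driven variants discussed by interviewees aim to guide the mutation step more intelligently than pure randomness.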

Overall, it was felt that AI-driven automation allowed security test cases that were previously impractical due to resource constraints; machine learning was enabling complex security simulations once achievable only by nation-state-level actors, although its effectiveness is heavily dependent on large datasets, which limits its adoption in parts of the sector. Concern about AI hallucinations in this work also remains high, as these introduce false positives and unreliable results which could waste research time and effort.

10.8 Workforce evolution with AI

Long term, I think it’ll make it harder for us to gain, to skill up, people from a position of being a semi-competent person to somebody who’s effective at their job, whilst having the technical knowledge … to perform adequately.

Interviewees also wanted to raise challenges and benefits they were facing with workforce adoption of AI.

It was acknowledged that AI-driven automation was streamlining repetitive tasks, reducing turnaround times on work such as coding, documentation and reporting. This was attributed to staff making more use of tools such as Copilot and ChatGPT in their day-to-day work, despite concerns about privacy and security. Alongside the efficiency gains the industry was experiencing, however, there was a negative impact: such tools were being misused by staff to bluff knowledge in areas where they lacked experience, raising concerns about genuine skill levels and causing issues during performance reviews and in job interviews linked to salary increases.

These concerns centred on the thought that, as AI adoption becomes more commonplace in the workplace, fewer manual operators will be required and more decision makers will be needed instead. It was felt this would lead to a general degradation in skill levels and experience, ultimately widening the gap between current-generation and next-generation cybersecurity professionals even further.

In two interviews this conjecture was taken a step further: it was felt that a lack of experience would ultimately lead to a weakening of cyber security postures among clients of interviewees, because cybersecurity professionals would not have the experience and business context to challenge clients over poor security practices.

11. Annex: interview & acronyms

11.1 Semi-structured interview

The following are the interview questions which were used to compile the results for this study.

11.2 Section 1 - Introduction (EST 5 Mins)

Overview & Objectives

Explain the purpose: The intention of this call is to understand how the commercial offensive cyber sector and market is evolving with the adoption of emerging technologies and the strategic implications of that.

Emphasise that the discussion is focused on market trends and organisational dynamics, not the specific technical details of technologies.

Consent & Confidentiality

Clarify how the information is going to be used (e.g. anonymised, aggregated insights)

Obtain consent to record the conversation if needed.

11.3 Section 2 - Market dynamics (EST 10 Mins)

Objective - The aim of this section is to understand how market forces are driving change and influencing the adoption of emerging technologies.

  • How would you describe the current state of the commercial offensive cyber sector?

  • What are the most significant changes you’ve observed in the past 3-5 years?

  • How competitive or collaborative is the market currently?

  • What emerging trends do you think are shaping the sector’s evolution?

  • Are there specific technologies, strategies, or market forces driving these trends?

  • What roles do regulatory, ethical, or geopolitical considerations play in shaping market dynamics?

11.4 Section 3 - Integration of emerging technologies (EST 10 Mins)

Objective - The objective of this section is to explore how companies are adopting and incorporating critical and emerging technologies into their offerings.

  • How is your company adopting and incorporating critical and emerging technologies into its offerings?

  • Are these technologies being used to enhance intrusion capabilities, improve data analysis, or both?

  • Are there any other technologies that are relevant in addition to these already mentioned?

  • To what extent do you see your company developing bespoke technologies versus leveraging off-the-shelf solutions? (e.g. pre-existing AI models)

  • Have you noticed any trends in how emerging technologies are being integrated into the offerings of cyber intrusion companies?

  • What is your assessment of how the integration of critical and emerging technologies is affecting competition or differentiation in the market?

  • Do you think the integration of these technologies is influencing customer expectations or market demand?

  • What changes have you noticed?

  • Has the emergence of AI models or bespoke AI-cyber capabilities increased expectations from customers in terms of access?

11.5 Section 4 - Investment and acquisitions (EST 10 Mins)

Objective - The objective of this section is to examine the financial and strategic commitment of companies to critical and emerging technologies.

  • What levels of investment is your organisation making in critical and emerging technologies?

  • Are these investments primarily in research & development, acquisitions, or other areas?

  • To what extent is your company acquiring startups or specialised entities to gain expertise or technology in this space?

  • What are acquisitions focused on? For example, technology, talent or other areas?

  • How are these investments influencing your competitive positioning in the market?

11.6 Section 5 - Talent and expertise recruitment (EST 10 Mins)

Objective - The objective of this section is to understand how companies are building internal capabilities around critical and emerging technologies.

  • How is your company attracting and retaining experts in emerging technologies?

  • Are there notable shifts in your recruitment strategies or partnerships with academic/innovation hubs?

  • What is your focus for recruiting experts in critical and emerging technology?

  • Are you looking for specific skills or disciplines in particular?

  • What challenges does your company face in recruiting expertise in areas like AI or quantum computing?

  • How is recruitment shaping innovation or research & development efforts for your company?

11.7 Section 6 - Blurring of sector lines and future implications (EST 10 Mins)

Objective - The objective of this section is to explore how the integration of emerging technologies is influencing the sector’s boundaries and its intersection with other industries.

  • Do you see the integration of emerging technologies blurring the lines between the offensive cyber sector and other industries? For example, overlaps with data analytics, AI/ML Development, or cloud services?

  • How might these overlaps create new challenges or opportunities for your company in the commercial cyber intrusion market?

  • What do you see as the long-term implications of this technological integration for your company?

  • Are there risks of commoditisation or increased competition from adjacent industries?

11.8 Section 7 - Conclusion (EST 5 Mins)

  • Do you have any additional insights on how the market, or your organisation, is evolving with emerging technologies?

  • Are there other aspects we haven’t covered which you think are critical to understanding this topic?

Follow up & Next Steps

Confirm willingness to participate in follow-up discussions if needed.

Thank participant for their time and insights.

11.9 Acronyms

  • AI: Artificial Intelligence
  • ASM: Attack Surface Monitoring
  • DNS: Domain Name System
  • DORA: Digital Operational Resilience Act
  • DSIT: Department for Science, Innovation & Technology
  • EDR: Endpoint Detection & Response
  • IT: Information Technology
  • MFA: Multi-Factor Authentication
  • OS: Open Source
  • OT: Operational Technology
  • ROI: Return on Investment
  • SAAS: Software As A Service
  • SOC: Security Operations Centre
  • UK: United Kingdom