Cyber Security Advisory: Managing tensions between security, safety and human factors requirements analyses
Published 4 November 2025
Executive summary
Balancing security, safety, and how people interact with systems has always been a struggle. If unmanaged, this tension can cause conflicts and compromises in a capability’s design, leading to unintended consequences that introduce more safety and security failure modes. To date, however, there have been no tool or practice exemplars for integrating safety and human factors data and activities with Secure by Design. Such exemplars could provide important lessons about how different types of non-functional requirements come together at the earliest stages of a capability’s life.
To build and evaluate a suitable exemplar, the Defence Science and Technology Laboratory (Dstl) contracted a supplier to develop a Minimum Viable Product (MVP) illustrating how safety, security, and human factors analyses could be aligned with Secure by Design. The supplier evaluated the MVP by creating a User Requirements Document (URD) for a Dstl-provided case study.
Based on building and evaluating the MVP, this advisory makes 5 recommendations for managing tensions between security, safety, and human factors requirements analyses.
| Recommendation | Summary |
|---|---|
| 1. Escalate out of scope requirements | Make provisions to escalate requirements and supporting analysis outside of a capability’s scope. |
| 2. Make solutioneering an opportunity for knowledge exchange | When validating security requirements, use solutioneering as an opportunity for knowledge exchange with problem owners and other domain experts. |
| 3. Implement traceability consistently | Traceability should be implemented consistently in both software and documented specifications, and for different requirements stakeholders. |
| 4. Specify cohered Secure by Design competence | The requisite competence needed to apply a cohered Secure by Design process, or consume its outputs, should be specified. |
| 5. Bake in First Line Assurance (1LA) | 1LA should be baked into tools and practices for cohering safety, security, and human factors requirements with Secure by Design. |
These recommendations will be of benefit to delivery team leads, requirements managers, 1LA and Second Line Assurance (2LA) assessors, and suppliers delivering Secure by Design and Security Engineering suitably qualified and experienced personnel (SQEP) to delivery teams.
1. Introduction
Motivation
Requirements are the foundation of resilient software design in defence, and security requirements are an important element of the Secure by Design approach adopted by the Ministry of Defence (MOD) [footnote 1]. Failing to adequately elicit, specify, and validate requirements increases a system’s attack surface and leads to other unintended consequences.
For example, the threat of information disclosure between 2 elements of a capability might warrant user requirements for some form of secure channel. However, inaccurate or ambiguous requirements for securing this communication could be refined to inaccurate or ambiguous cryptography and access control system requirements which, when implemented, could:
- impact performance (if inappropriate algorithms are selected given the operational context and broader platform considerations)
- increase the workload on those using interfaces that encrypt or decrypt communications [footnote 2]
- indirectly use services that, due to inappropriate security policies, lead to unexpected access control issues [footnote 3]
In addition to tensions between security and functionality, there are also tensions between security and safety. For example, within the rail sector, attacks on infrastructure used by rail signallers are comparatively inexpensive, but have safety implications in 2 areas:
- shutting down or degrading services targeted by an attack also denies or degrades safety-critical information needed by signallers or train drivers, thereby introducing potential hazards
- changes to tasks or workarounds carried out by humans directly or indirectly linked with the targeted infrastructure might lead to increased cognitive burden, leading to errors and violations that also make hazards more likely [footnote 4]
Such tensions are also present within military systems, but with the additional tension of survivability.
For example, in air platforms, degrading the information available to a pilot could both endanger the pilot’s life and increase the chance of the aircraft being shot down. While survivability is beyond the scope of this advisory, these examples illustrate that security requirements do not live in isolation. Security, safety, and human factors analyses need to be integrated into the tools and techniques used to engineer software.
Bi-directional traceability between requirements and supporting security, safety, and human factors analyses is important for meeting the Secure by Design ‘Assure, Verify and Test’ principle. Such traceability is, however, necessary but not sufficient for cohering requirements. Both the volume and range of data associated with security, safety, and human factors analyses is such that, without assistance, engineers may become subject to the bounded rationality bias [footnote 5]. This bias limits the ability to comprehend different design options due to incomplete or inaccurate information, and limitations in resources and time.
There are currently no exemplars for how safety, security, and human factors analyses might cohere to support Secure by Design. Previous work in the academic community has explored the conceptual foundations of integrated security, safety, and usability requirements, and how software tools can exploit them, for example [footnote 6].
There has also been some industrial take-up of methodologies and tooling building on these foundations, and growing interest in co-assurance between non-functional concerns like safety and security, for example by the Safety-Critical Systems Club [footnote 7]. Unfortunately, this interest is not yet manifest as useful tools and practice within UK defence to support Secure by Design.
During the course of integrating and operating such tools, important lessons could be learned about how different types of non-functional requirements could be cohered at the earliest stages of a capability’s life. This would make Secure by Design a partner, not an inhibitor, to complementary security, safety, and human factors analyses.
Objectives
The objective of this guidance is to issue recommendations for managing tensions between security, safety, and human factors requirements analyses. The recommendations are grounded in the experiences building and evaluating a MVP aligning safety, security, and human factors analyses, requirements management, and Secure by Design.
This guidance is based on results that show the potential of using software for bringing safety, security, and human factors requirements together.
This guidance helps those responsible for selecting software to support Secure by Design, but makes no recommendation for any particular tool. It provides advice for addressing intrinsic challenges associated with cohering different types of analysis with Secure by Design, particularly at the earliest stages of a capability’s life.
Guidance outline
In Section 2, the approach for building and evaluating the MVP is presented.
In Section 3, the results of building and evaluating the MVP are described.
In Section 4, recommendations are given for teams wishing to initiate Secure by Design activities with complementary safety, security, and human factors requirements analyses.
2. Approach
Dstl ran a competition to deliver and evaluate a MVP aligning safety, security, and human factors analyses, requirements management, and Secure by Design. In developing and evaluating the MVP, the selected supplier had to satisfy the requirements in the following sub-sections.
Supplier expertise
To build and evaluate the MVP, the supplier needed to have working security, safety, and human factors knowledge to complete the necessary research and development.
Technology constraints
Three technology constraints were applied to the MVP.
First, suppliers were instructed to build upon existing open source or commercial solutions. This would elevate an existing solution’s technology readiness for UK defence. Where open source solutions were proposed, suppliers had to indicate their level of expertise in using, extending, and maintaining them.
Second, the MVP was also required to be accessible to users as a web application, but no internet access could be assumed for any client or server-side component.
Finally, the MVP should expose appropriate services or Application Programming Interfaces (APIs) facilitating run-time interoperability with complementary tools and Continuous Integration / Continuous Deployment (CI/CD) pipelines. Demonstration of the MVP within a CI/CD pipeline would not be required.
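As an illustration of this constraint, a CI/CD pipeline step might consume such an API to gate on requirements quality before allowing a change to progress. The sketch below is illustrative only: the payload shape and function name are hypothetical, not part of the MVP or any particular tool.

```python
# Sketch of a CI/CD quality gate consuming a requirements-management
# tool's REST API. The JSON shape and function name are hypothetical,
# illustrating the kind of run-time interoperability the MVP had to expose.
import json

def unvalidated_requirements(payload: str) -> list[str]:
    """Return IDs of requirements with no validation criteria,
    as a pipeline step might flag before allowing a merge."""
    requirements = json.loads(payload)["requirements"]
    return [r["id"] for r in requirements if not r.get("validation")]

# In a pipeline, `payload` would come from a GET against the tool's API.
payload = json.dumps({"requirements": [
    {"id": "UR-001", "validation": "Demonstrated in acceptance trial"},
    {"id": "UR-002", "validation": ""},
]})
print(unvalidated_requirements(payload))  # ['UR-002']
```

A step of this kind could run on every commit, failing the build when requirements lack validation criteria, without the MVP itself needing to be demonstrated inside a pipeline.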
Security, safety, and human factors analysis
The MVP had to demonstrably support Secure by Design stakeholders in the ongoing identification and articulation of risk, resulting both from security requirements and their supporting security, safety, and human factors analyses. It also had to support complementary threat modelling, risk analysis, human factors, and safety analysis, where the human factors and safety analysis techniques selected were compatible with DEFSTAN 00-251 [footnote 8] and DEFSTAN 00-56 [footnote 9] respectively.
The MVP had to demonstrably support the application of System-Theoretic Process Analysis (STPA). STPA is a hazard analysis approach used to identify Unsafe Control Actions (UCAs) and causal factors for hazards in system models [footnote 10]. It is also useful for framing risk appetite statements necessary for supporting Secure by Design [footnote 11].
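For readers unfamiliar with STPA, the UCA step can be sketched as follows. STPA considers four generic ways a control action can become unsafe; analysts then keep only the combinations that lead to a hazard in context. The control action name below is illustrative, not drawn from the case study.

```python
# Awareness-level sketch of the STPA step the MVP supported: enumerating
# candidate Unsafe Control Actions (UCAs). The four UCA types are the
# standard ones from STPA; the control action name is illustrative.
UCA_TYPES = [
    "not provided when needed",
    "provided when unsafe",
    "provided too early, too late, or out of order",
    "stopped too soon or applied too long",
]

def candidate_ucas(control_actions):
    """Return (control action, UCA type) pairs for analyst triage."""
    return [(ca, t) for ca in control_actions for t in UCA_TYPES]

ucas = candidate_ucas(["Release aid package"])
print(len(ucas))  # 4
```

In practice a tool would link each retained UCA to the hazards and losses it contributes to, which is the traceability the MVP had to maintain.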
Documentation
The MVP had to provide installation instructions, and guidance for on-boarding new users, where users have at least an awareness-level of expertise with risk analysis, STPA, human factors and safety analyses, and threat modelling, for example the use of Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege (STRIDE) and Data Flow Diagrams (DFDs).
Demonstration
Once the MVP was delivered, the supplier had to deliver an in-person demonstration of the MVP using a suitable case study. This would provide confidence that the MVP could be applied to a problem understood by Dstl, and not something convenient to the supplier. Any security, safety, and human factors analysis carried out during the demonstration had to be based on pre-existing open source data.
The collection of human participant data for the demonstration was prohibited.
The supplier had to use the MVP deliverable to create a URD following the instructions specified in appendix A based on the capability described in appendix B. The ‘capability’ was a software plugin used to interact with a parent capability: a UAV - Pandora - for delivering lethal and non-lethal aid to UK and coalition partner units engaged in Counter-Insurgency (COIN) operations. Dstl captured the nuances of the capability, and Defence Lines of Development (DLOD) considerations encapsulated within it.
The URD template was loosely based on that commonly used within MOD. The supplier had to deliver a draft URD on or before the day of the demonstration.
3. Results
Supplier expertise
The selected supplier had expertise in design for security, human factors, and safety assessment, and was supported by 2 full-stack software developers.
Technology constraints
The MVP was a fork of the open-source Computer Aided Integration of Requirements and Information Security (CAIRIS) platform. CAIRIS is a cloud-based platform, accessible through a web browser. It exposes a Representational State Transfer (REST) API, which is used by the standard user interface and is available to client software wishing to consume or contribute to CAIRIS system models.
The MVP was successfully deployed to Dstl’s cloud infrastructure, and was accessible from web browsers on its corporate network.
Figure 1. MVP process framework
Figure 1 describes the cycle of security, safety, and human factor analysis activities supported by the MVP. The cycle begins by identifying a capability’s concept of use, key goals, assets, and user roles. It then considers the characteristics of the capability’s users and the tasks they carry out, before considering data and information flows across the capability. The process then considers security and safety concerns, how these contribute to risks, and the requirements that address them. Some activities of this process are iterative, and can be repeated as required.
To support the ongoing identification and articulation of risk, resulting both from security requirements and their supporting security, safety, and human factors analyses, the MVP was built on the process framework shown in figure 1. This indicates the activities a delivery team would need to complete to specify requirements, and the security, safety, and usability elements associated with them. While the process could be followed stepwise, many steps are iterative, and (as indicated in figure 2) should be carried out by different subject-matter experts at the same time.
The MVP supported an articulation of risk and requirements aligned with both Secure by Design and STPA. In addition, it extended CAIRIS in 2 areas to complement DEFSTAN 00-251 and DEFSTAN 00-56 respectively.
First, support was added for Hierarchical Task Analysis (HTA) and Performance Shaping Factors (PSFs).
HTA entails representing tasks hierarchically in terms of tasks and sub-tasks, where tasks have a purpose, and sub-tasks, when organised as part of a plan, need to be completed to meet this purpose [footnote 12].
PSFs are factors that influence human performance [footnote 13]. These are associated with sub-tasks, and indicate causal factors and the potential for failure they introduce.
Second, the concepts of Event, Failure, and Hazard were incorporated into the MVP. Events are typically undesired or unplanned occurrences that can result in accidents. Failures are associated with failure modes and sub-tasks. Hazards contribute to both events and failures, and may also be associated with security risks.
These extensions facilitate the use of STPA with elements aligned with human reliability analysis, and also showed how outputs aligned with DEFSTAN 00-251, DEFSTAN 00-56, and Secure by Design can be self-reinforcing.
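The HTA and PSF concepts described above can be sketched as a minimal data model. The class and attribute names below are illustrative, not the MVP’s actual schema: tasks decompose into sub-tasks organised by a plan, and PSFs attach to sub-tasks to flag the potential for human failure.

```python
# Minimal sketch of the Hierarchical Task Analysis (HTA) concepts the MVP
# added: tasks decompose into sub-tasks, and Performance Shaping Factors
# (PSFs) attach to sub-tasks to flag potential for human failure.
# Class and attribute names are illustrative, not the MVP's data model.
from dataclasses import dataclass, field

@dataclass
class SubTask:
    name: str
    psfs: list[str] = field(default_factory=list)  # e.g. "time pressure"

@dataclass
class Task:
    purpose: str
    plan: str  # how the sub-tasks are organised to meet the purpose
    sub_tasks: list[SubTask] = field(default_factory=list)

    def at_risk(self) -> list[str]:
        """Sub-tasks with at least one PSF: candidate failure points."""
        return [s.name for s in self.sub_tasks if s.psfs]

track = Task(
    purpose="Track aid delivery to partner unit",
    plan="Do 1 then 2; repeat 2 until arrival is confirmed",
    sub_tasks=[SubTask("Select delivery", []),
               SubTask("Monitor alerts", ["time pressure", "alarm clash"])],
)
print(track.at_risk())  # ['Monitor alerts']
```

Structuring the data this way is what lets sub-task PSFs be traced through to the Failures and Hazards described above.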
Figure 2. Stakeholder input
Figure 2 considers the same activities as figure 1, but indicates which security, safety, human factors, and other subject matter experts are associated with each activity. In most cases, a single subject matter expert holds primary responsibility for providing input, although others could also provide input.
Documentation
The installation instructions provided with the MVP were a modest extension of CAIRIS’ installation guide, with an additional script to ease the installation process. Similarly, the user manual was based on the CAIRIS manual, with some updates for the new concepts mentioned in the previous section.
On-boarding guidance took the form of a process document which, when used in conjunction with the manual, provided guidance on how the different system, security, safety, and human factor analysis techniques can be used in a clear and organised way.
Demonstration
The case study was issued to the supplier on delivery of the MVP. The MVP was successfully demonstrated to a panel of system, security, safety, human factors and military subject matter experts at a Dstl site approximately 1 month after this delivery. Although a month was given for completion of the case study, due to other commitments, the supplier spent approximately a week using the MVP to elicit and specify the requirements within the URD.
The URD contained:
- 10 key user requirements
- 16 user requirements
- 23 human factors and safety requirements
- 8 security requirements
These requirements were reinforced by 7 tasks, 9 risks, 7 failures, and their supporting security, safety, and human factors analyses.
For each DLOD consideration specified by Dstl, the URD requirements were inspected to determine how fully these considerations had been incorporated. The coverage of requirements against DLOD considerations is summarised in table 1 and detailed in the sub-sections that follow. As table 1 suggests, coverage of the user requirements against the pre-specified DLOD considerations was mixed. However, this coverage could have been significantly increased with more access to UK defence expertise, better time management, and quality assurance of results.
| DLOD | Coverage (%) |
|---|---|
| Training | 50 |
| Equipment | 69 |
| Personnel | 50 |
| Information | 56 |
| Concepts and Doctrine | 0 |
| Organisation | 100 |
| Infrastructure | N/A |
| Logistics | 50 |
Table 1: coverage of pre-specified DLOD considerations in URD.
Training
The specification exemplar assumed any training burden should be negligible beyond the existing Pandora training package, and the plugin should be incorporated into UK and coalition partner trials and exercises.
The need for learning outcomes associated with key tasks was only implied, although user requirements for training did account for missing or inconsistent information. Similarly, although there was no indication that the plugin should be incorporated into trials and exercises, requirements for training to identify faults were identified.
Equipment
The specification exemplar implies that procurement of the plugin, and its co-ordination with coalition partners should be considered. However, no requirements associated with procurement were elicited.
Augmentations to the Standard Operating Procedures (SOPs) for Pandora were also expected to be captured to account for the plugin. The plugin should not overly constrain the mobile device’s freedom of action, and should be as secure by design as its parent capability. Changes to the SOP were not accounted for in the user requirements, but there was a human factors and safety requirement indicating that mobile systems should be evaluated to ensure operator workload is not exacerbated. However, while security and human factors requirements were captured, none specified that the plugin should allow normal use of the mobile device.
Personnel
While the plugin had no clear implications for personnel matters, a general awareness of UAV operations on mobile devices should be considered for those involved in COIN operations. This explicit level of awareness was not captured, but a key user requirement was captured for operators to identify and respond to security vulnerabilities.
Information
Given the importance of lethal aid losses, the plugin would need to distinguish between different types of aid. This was not captured as a user requirement, but the models of assets forming the basis of the requirements did distinguish between types of aid.
Because of the simultaneous use of multiple ATAK plugins, accounting for potential symbol clashes was necessary. No user requirement was captured to account explicitly for symbol clashes, but a human factors and safety requirement was captured, indicating that tracking and delivery alarms and alerts should be distinct.
Because plugin users may interact with information about other units or known insurgent locations, there were access control needs to account for the loss or compromise of a mobile device (for instance, to ensure such information is not disclosed).
A security requirement for role-based access control was included within the URD, together with a security requirement specific to Pandora for automatically wiping sensitive data shared with ATAK upon physical capture or tamper detection. However, that requirement was outside the specification exemplar’s scope.
Concepts and doctrine
Because there is likely to be new and extant doctrine and concepts associated with both Pandora and the loss of lethal aid to adversaries, the plugin’s use would need to correspond with them. However, no requirements were specified corresponding with either.
Organisation
There would need to be some consideration for units contributing information needed by the plugin, specifically for the delivery of lethal aid to coalition partner units. A human factors and safety requirement was elicited for visible and accessible tracking, delivery, and arrival information to operators. The URD did not indicate whether these are UK or coalition partner operators.
Infrastructure
The supplier did not ask for or include any requirements for infrastructure within the URD. However, as the plugin is a software-based capability, there were no obvious infrastructure considerations to capture.
Logistics
The plugin needed some consideration for maintaining its software environment, as well as a strategy and procedures for tracking its use. A human factors and safety requirement was captured indicating that maintenance activities were required to align with relevant human factors standards and guidelines.
A security requirement was also captured to align configuration changes and mission uploads to an identified user. With some abstraction, this would have yielded an appropriate user requirement associated with authorised and unauthorised use of the plugin.
4. Recommendations
Drawing from the results of building and evaluating the MVP in Section 3, 5 recommendations are issued for managing tensions between security, safety, and human factors requirements analyses.
This advisory has 4 beneficiaries:
- delivery team leads, who are responsible for ensuring team members are suitably qualified and experienced (SQEP) for activities they need to perform, and sourcing any supporting software tools
- requirements managers, who need to cohere safety, security, and human factors analyses with requirements activities, particularly during the early stages of capability acquisition
- 1LA and 2LA assessors, who seek to make sense of security, safety, human factors requirements and supporting evidence
- suppliers who deliver Secure by Design and Security Engineering SQEP to delivery teams
Escalate out of scope requirements
We recommend that management activities for data and process requirements assurance include provisions for escalating requirements and supporting analysis outside of a capability’s scope.
The scope for the capability described in the Single Statement of User Need (SSUN) was intentionally small. This was a single software plugin based on a known software framework. However, requesting and analysing security, safety, and human factors data would require exploration of this broader context (for instance, to identify threats and vulnerabilities that could impact the capability).
Despite the modest scope, many of the requirements specified were for the broader system context the capability was part of, rather than the capability itself. For example, the requirement ‘UAV Anti-GPS Spoofing Logic’, with the text ‘UAV must include GPS spoofing detection and route validation logic to detect anomalies’, exceeded the scope of the Pandora Delivery Tracking capability.
Some requirements appeared to go unnoticed as different team members were contributing to the MVP at the same time. The suppliers acknowledged that the ‘management activities’ supporting the process framework could have been clearer.
However, both the requirement and its underpinning analysis are an exploitable opportunity for Secure by Design - an opportunity that the stove-piped accreditation schemes preceding it could not offer. This is because both the out-of-scope requirement and its supporting analysis are likely within the scope of another capability associated with Pandora.
If the scope of this and related capabilities is still developing, such requirements might be lost. Any needs that fall just outside a capability’s scope should be actively passed on to the wider team or organisation, so better decisions can be made about scope coverage.
Even if these requirements are redundant when captured elsewhere, their supporting analysis still helps bolster them. The resources these activities might save are likely to be greater than the cost of any duplicated effort.
Make solutioneering an opportunity for knowledge exchange
When validating security requirements, we recommend using solutioneering as an opportunity to share knowledge with problem owners and domain experts. Bringing the problem to the table before the solution should provide a more beneficial outcome.
Many of the security requirements captured for the capability were system requirements, rather than user requirements. This meant they made assumptions about the solution that went beyond the context in the specification exemplar. For example, the ‘Secure Communication Encryption ATAK’ requirement, with the text ‘Communications in ATAK must be encrypted using AES-256 or equivalent encryption protocol’, is an inappropriate user requirement. It assumes architectural decisions have already been made about how such a secure channel will be implemented, for instance the use of a particular cipher and its key size.
More appropriate user requirements could have been identified by carefully abstracting these requirements (for example, the need for a symmetric cipher).
These requirements are also symptoms of solutioneering. This occurs when engineers try to solve their own problems, rather than those associated with the user [footnote 14].
Solutioneering security requirements can be easy because of the low level of abstraction at which some vulnerabilities might be exploited. Given uneven distributions of knowledge that span organisational and disciplinary boundaries [footnote 15], abstracting these requirements can also be hard.
To overcome these difficulties, requirements managers should proactively identify solutioneering, and be prepared to draw on knowledge from problem owners and those with relevant domain expertise. Doing so might not only improve the resulting user requirement, but also stimulate insights that result in new, more innovative requirements.
Implement traceability consistently
We recommend that traceability is implemented consistently in both software and documented specifications, and for different requirements stakeholders.
The MVP was effective at visualising the links between different system, security, safety, and human factors concepts, and maintaining these links. Unfortunately, the traceability evident within the URD had 2 weaknesses.
First, the requirement traceability matrix contained both inaccurate and superfluous information. For example, failures associated with each requirement did not correspond with named failures within the system model, and detail about PSFs and results of failures offered little traceability value.
This weakness was a symptom of requirements being contributed by an engineer who was not co-located and was unable to use the MVP; due to time pressures, these contributions were incorporated directly into the URD without first being incorporated into the system model in the MVP.
Second, some graphical models were difficult to read due to the presence of traceability links which, while filterable within the MVP, were not filterable in the URD. Because the figures were not vector-based, it was also difficult to read the figure text when zoomed in.
Suggestions for addressing such weaknesses include:
- consistently name objects across both software tools and generated documentation
- hyperlink object names to definitions, which themselves could contain tables of traceability links to related objects
- remove non-essential elements and links from model figures
- embed object names in model figures to improve searchability
- consider alternative metaphors for visualising groups of related concepts, for example tree structures to show elements contributing to important concepts like risks and losses
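The first suggestion above, consistent object naming, is also checkable automatically. The sketch below illustrates one way to do so: compare the object names used in generated documentation against those in the tool’s system model, flagging citations that would dangle. The function and example names are illustrative, not drawn from the MVP.

```python
# Sketch of an automated check for the naming suggestion above: object
# names cited in a generated document should match those in the tool's
# system model, so traceability links do not dangle. Names are illustrative.
def dangling_references(model_names: set[str],
                        document_names: set[str]) -> list[str]:
    """Return names cited in the document but absent from the model."""
    return sorted(document_names - model_names)

model = {"Loss of GPS", "Operator overload"}
document = {"Loss of GPS", "GPS loss", "Operator overload"}
print(dangling_references(model, document))  # ['GPS loss']
```

Run as part of document generation, a check like this would have caught the failure-name mismatches described above before the URD was issued.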
Figure 2 acknowledges the input that different stakeholders might be expected to provide, but the MVP did not consider the traceability needs other users of the specification might have. For example, an Independent Safety Assessor will be concerned about how safety requirements are allocated to refined requirements and software, and vice-versa. A 2LA assessor also needs to be confident that security requirements are effective at sufficiently reducing risks.
As a consequence, stakeholder-friendly metaphors for visualising traceability should be investigated, to see how software tools and generated documentation can best support them.
Specify cohered Secure by Design competence
We recommend clearly stating the skills needed to either follow a Secure by Design process or make use of its results.
Delivery team (DT) leads are responsible for ensuring security leads and the wider security team are appropriately SQEP [footnote 1]. This is difficult when knowledge about the necessary expertise is incomplete. DT leads need to draw on at least awareness-level expertise in security, safety, and human factors when recruiting specialist support for a joined-up Secure by Design process. This is because not all specialists are competent in all design techniques within their disciplinary area. For example, most human factors practitioners will be aware of what personas are, but the skills needed to conduct the user research that generates personas are less common.
Software tools can support competent practitioners, but cannot replace them. For example, those using the MVP as part of a cohered Secure by Design process were assumed to have awareness-level expertise with threat modelling, STPA, and human factors and safety analyses. When applying the MVP though, the supplier team encountered occasional confusion around how human factors and security experts would interpret certain terms such as ‘risk’. And so while awareness-levels of expertise may be sufficient for browsing a URD, greater expertise is necessary for contributing to, or validating its contents. If the required competencies are not made explicit, the expertise needed to quality assure URDs may be underestimated.
Bake in First Line Assurance
We recommend that First Line Assurance (1LA) is embedded into tools and practices for bringing together safety, security, and human factors requirements with Secure by Design.
As section 3 indicates, complementary tools and practice can facilitate an agile but structured approach to Secure by Design. This enables professionals from a range of disciplines to come together, and for their contributions to support the delivery of well-structured user requirements at pace. The tools and practices described support 1LA, but do not replace it.
Secure by Design is not a box-ticking exercise. Attempting to use figure 2 in this way would fail to appreciate the importance of data and process assurance at each stage, and the impact inappropriate data in one step could have on all subsequent steps. Safety, security, and human factors practices therefore need to align with the management processes necessary for 1LA, and software tools need to provide the automation necessary to ease a delivery team’s 1LA activities, for example through integration with the Cyber Assurance and Activity Tracker (CAAT).
Although effort is needed to orchestrate these activities, the outcome will be a more logical and sound URD that makes a stronger case for the specified capability. This result validates Secure by Design’s applicability to the earliest stages of capability acquisition, as well as the potential gains in productivity and cost efficiency.
Appendix A: Pandora Delivery Tracking: User Requirements Document (URD)
Based on the specification exemplar provided (Pandora Delivery Tracking), please use your MVP to create a URD for the capability described in the exemplar.
As well as the requirements for the capability, the URD should also capture security, safety, and human factor requirements based on appropriate analysis you have carried out with your MVP and process documentation.
The URD should incorporate the following sections:
Section 1: general description
This should include all of the following:
- Single Statement of User Need (SSUN)
- a figure illustrating the boundary of the capability in context
- a set of scenarios that bound the circumstances in which the capability must be effective
- any constraints that will have a significant or abnormal impact on the capability
- a list of users whose needs section 3 records
- a list of stakeholders whose needs section 3 records
Section 2: key user requirements
These should be drawn from section 3 and documented in a table.
Section 3: individual capability requirements and constraints
Requirements should express the services users need to be able to deliver, or the outcomes and effects that the users need to be able to achieve, by deploying or exercising the capability.
They should demonstrate that the SSUN is fully described to the satisfaction of stakeholders, and account for the various Defence Lines of Development (DLODs).
Constraints should express what the URD owner needs to impose on the solution to the SSUN.
Requirements should be organised in a hierarchy, but the hierarchical breakdown must remain within the problem space and not the solution space.
Requirements should be presented in a table format, with each requirement described using the following attributes:
- unique identifier
- hierarchical user requirements number
- requirement text
- measure of effectiveness
- justification
- validation criteria
- priority
- supplemental information
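The attribute list above can be mirrored as a simple record type, with a helper that flags empty attributes for 1LA review. This is an illustrative sketch only: the field names are paraphrases of the attributes above, not a mandated schema.

```python
from dataclasses import dataclass, fields

# Hypothetical record mirroring the requirement attributes listed above.

@dataclass
class UserRequirement:
    unique_id: str
    hierarchical_number: str
    requirement_text: str
    measure_of_effectiveness: str
    justification: str
    validation_criteria: str
    priority: str
    supplemental_information: str

def incomplete_attributes(req: UserRequirement) -> list:
    """Return the names of attributes left empty, for 1LA review."""
    return [f.name for f in fields(req) if not getattr(req, f.name).strip()]

example = UserRequirement(
    unique_id="UR-001",
    hierarchical_number="1.2.1",
    requirement_text="The user shall be able to view delivery status.",
    measure_of_effectiveness="",  # left empty: flagged below
    justification="Supports delivery tracking in contested environments.",
    validation_criteria="Demonstration during user trials.",
    priority="High",
    supplemental_information="Illustrative entry only.",
)
# incomplete_attributes(example) == ["measure_of_effectiveness"]
```

Representing requirements this way also makes the traceability and escalation recommendations easier to automate, since each attribute becomes a checkable field rather than free text in a document.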
Section 4: context documents
This should include any supporting analysis and evidence reinforcing the safety, security, and human factors capability requirements.
Section 5: glossary
This should include definitions and explanations for all terms that could cause confusion, as well as references and acronyms.
Appendix B: Pandora delivery tracking
Background
Pandora is a planned UAV-based capability for delivering lethal and non-lethal aid to UK and coalition partner units engaged in Counter-Insurgency (COIN) operations.
Single Statement of User Need
UK defence requires the capability supplier to track Pandora deliveries of lethal and non-lethal aid to Pandora users across contested Cyber, Electronic Warfare (EW), and kinetic environments worldwide.
Customer context
Planning for Pandora was initiated because of perceived capability gaps associated with its predecessor system, Athena.
Athena was recently used to support the delivery of rations to remotely located UK and coalition partner units during COIN operations in support of the island nation of Isla Piraña.
Based on these experiences, it became clear that Athena would need to operate in significantly more contested environments, where both insurgent and organised crime groups might seek to attack deployed units and employ a range of Cyber, EW and kinetic effects to disrupt or intercept UAV deliveries. As a result, units are likely to require greater logistical support in future operations, and more resilient delivery options than Athena can currently support.
Athena deliveries were tracked using bespoke tablet devices, but because regular mobile devices are so common, the customer prefers to use them to track Pandora deliveries. Following a recent Initial Look Request (ILR), the customer found the Android Tactical Assault Kit (ATAK) a useful option.
Additional considerations
Other things to consider include:
- scope of lethal aid is currently limited to small-arms ammunition
- scope of non-lethal aid is limited to batteries and medicine
- early decision has been made to implement the tracking solution as an ATAK plugin
- capability sponsor has indicated the following as unacceptable losses:
  - L1: death or human injury occurs
  - L2: a UAV is lost or destroyed
  - L3: unauthorised information about lethal aid deliveries is disclosed
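One way to keep these unacceptable losses visible throughout requirements analysis is a coverage check: every loss the capability sponsor has identified should be addressed by at least one safety or security requirement in the URD. The loss texts below are from the exemplar; the requirement identifiers and mappings are purely illustrative.

```python
# Hypothetical coverage check for the capability sponsor's unacceptable losses.

LOSSES = {
    "L1": "death or human injury occurs",
    "L2": "a UAV is lost or destroyed",
    "L3": "unauthorised information about lethal aid deliveries is disclosed",
}

def uncovered_losses(requirement_loss_links):
    """Return the losses that no requirement currently addresses."""
    covered = {loss for losses in requirement_loss_links.values() for loss in losses}
    return sorted(set(LOSSES) - covered)

links = {"UR-010": ["L1"], "UR-011": ["L3"]}  # illustrative mappings
# uncovered_losses(links) == ["L2"]: L2 still needs a mitigating requirement
```

In an STPA-style analysis these loss identifiers would also anchor the derivation of hazards and Unsafe Control Actions, so the same mapping supports traceability in both directions.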
List of abbreviations
| Acronym | Definition |
|---|---|
| 1LA | First Line Assurance |
| 2LA | Second Line Assurance |
| APIs | Application Programming Interfaces |
| ATAK | Android Tactical Assault Kit |
| CAAT | Cyber Assurance and Activity Tracker |
| CAIRIS | Computer Aided Integration of Requirements and Information Security |
| CI/CD | Continuous Integration / Continuous Deployment |
| COIN | Counter-Insurgency |
| DFDs | Data Flow Diagrams |
| DT | delivery team |
| DLOD | Defence Lines of Development |
| Dstl | Defence Science and Technology Laboratory |
| EW | Electronic Warfare |
| HTA | Hierarchical Task Analysis |
| ILR | Initial Look Request |
| MOD | Ministry of Defence |
| MVP | Minimum Viable Product |
| PSFs | Performance Shaping Factors |
| REST | Representational State Transfer |
| SOPs | Standard Operating Procedures |
| SQEP | suitably qualified and experienced person |
| SSUN | Single Statement of User Need |
| STRIDE | Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege |
| STPA | System-Theoretic Process Analysis |
| UCAs | Unsafe Control Actions |
| URD | User Requirements Document |
References
1. MINISTRY OF DEFENCE, Secure by Design: Design for Security from the start, 2025.
2. WHITTEN, Alma, TYGAR, J. D., Why Johnny can’t encrypt: a usability evaluation of PGP 5.0, In: Proceedings of the 8th Conference on USENIX Security Symposium - Volume 8, USENIX Association, 1999, SSYM’99.
3. REEDER, Robert W., BAUER, Lujo, CRANOR, Lorrie F., REITER, Michael K., VANIEA, Kami, More than skin deep: measuring effects of the underlying model on access-control system usability, In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2011, CHI ’11, 2065–2074.
4. THRON, Eylem, FAILY, Shamal, DOGAN, Huseyin, FREER, Martin, Human factors and cyber-security risks on the railway – the critical role played by signalling operations, Information and Computer Security, January 2024, 32(2), 236–263.
5. SIMON, Herbert A., Rational decision making in business organizations, The American Economic Review, September 1979, 69(4), 493–513.
6. FAILY, Shamal, Designing Usable and Secure Software with IRIS and CAIRIS, Springer, 2018.
7. SAFETY CRITICAL SYSTEMS CLUB, Security Informed Safety Working Group, https://scsc.uk/siswg, 2025.
8. MINISTRY OF DEFENCE, Human Factors Integration for Defence Systems, December 2021, (Defence Standard 00-251 Issue No 2).
9. MINISTRY OF DEFENCE, Safety Management Requirements for Defence Systems: Part 01: Requirements and Guidance, October 2023, (Defence Standard 00-056 Part 01 Issue No 8).
10. LEVESON, Nancy G., Engineering a Safer World: Systems Thinking Applied to Safety, MIT Press, 2017.
11. NCSC, Are you hungry? A two-part blog about risk appetites, https://www.ncsc.gov.uk/blog-post/a-two-part-blog-about-risk-appetites, September 2021.
12. SHEPHERD, Andrew, Hierarchical Task Analysis, Taylor & Francis, 2001.
13. KIRWAN, Barry, A Guide to Practical Human Reliability Assessment, Routledge, 1994.
14. THIMBLEBY, Harold, THIMBLEBY, Will, Solutioneering in user interface design, Behaviour & Information Technology, 1993, 12(3), 190–193.
15. Dstl, Guidance: Secure by Design Problem Book, https://www.gov.uk/government/publications/secure-by-design-problem-book, 2025.