Guidance

Critical factors for success

Published 23 October 2020

The next three Building Blocks represent critical factors we must understand at the appropriate stages of development and deployment. They aren’t part of the system, but are critical to its success.

1. Advantage

It’s a good idea to know why you’re doing what you’re doing, and in particular what the intended business or operational advantage (benefit) is. This includes understanding the benefits we hope to achieve, the user needs we are trying to satisfy (recognising sometimes we need to experiment to understand these), and the cost implications of our potential solutions across the wider Defence and Security enterprise.

Here are some advantageous (sorry) questions to ask yourself:

1.1 Benefit

  • Are we clear on what we are trying to achieve, and what benefits we expect? It’s always a good idea to know this.
  • Are we confident that the benefits are achievable? Will partially achieving the benefits still be worth doing?
  • Is this a precursor to gaining more significant benefit in the future?
  • How will this capability compare with our competitors? Even if we achieve what we set out to, if it’s not at least as good as what others are doing, is it worth doing?

1.2 User needs

  • Do we have a clear understanding of the user needs we are seeking to address?
  • Do we have users involved in the project? Here involved means not only turning up to the kick-off meeting but actually being part of the team with a voice that will be listened to.

1.3 Cost

  • Is there a financial or other cost imposed elsewhere on the organisation, and is the benefit still worthwhile given that cost?
  • Are all of those who may be affected involved in the debate?

1.4 Protecting advantage

  • It’s a fact of life that someone will try to disrupt or remove the advantage your system provides. So it’s critical to think about how you can stop them and protect your advantage.
  • What are the threats to your system? What are the vulnerabilities? How can you protect your system to mitigate these? This might include cyber security, protecting your data and hardening against adversarial AI. You need to think about this for all the Building Blocks!

1.5 Intellectual Property and Sovereignty

  • If we gain significant advantage from AI and Autonomy, have we considered whether and how we should own or protect the resulting Intellectual Property?
  • Should we maintain control over key components?

2. Consent

We must have consent for the idea and the associated capability. “Consent” is used broadly to include legal and regulatory constraints imposed upon us, as well as satisfying our own policy, ethics and risk appetite, and the willingness of suppliers and partners to support where required.

Here are some questions you should ask yourself:

2.1 Legal and regulatory

  • Are there any externally imposed constraints on our capability, such as legal and regulatory frameworks that we need to follow?
  • Have we checked the international position as well as domestic?
  • What do we need to do to stay within these constraints?
  • Is the legal position clear or ambiguous? Do we need to get advice to ensure we comply?
  • Is it possible to influence those constraints if we can’t operate within them?
  • Note that anything involving legal matters will take longer than you can possibly imagine, so factor this in.

2.2 Policy and risk appetite

  • Is the enterprise (including partners, suppliers and collaborators) likely to be willing to pursue this capability, based on its own internal policies and risk appetite?
  • What are the existing policy and risk positions of our organisations?
  • Are there international policies to consider?
  • What do we need to do to stay within these constraints?
  • Is the policy position clear or ambiguous? Do we need to get advice to ensure we comply?
  • Is it possible to influence the policy if we can’t operate within it?
  • Should we try to influence this? For example, what are the risks of not developing the capability?

2.3 Ethics

  • Fundamentally, should we pursue this capability?
  • Have we considered the ethics of doing so, and equally the ethics of not?
  • What is our organisation’s existing ethical position?
  • Does this capability operate within that position?
  • Do our ethics align with those of our partners, and will these partners support and engage in our work?
  • Are systems fair and equitable?

3. Confidence

We must have confidence in our AI and Autonomous Systems, and be able to satisfy others of that. “Confidence” is used broadly to include:

  • Satisfying regulatory and safety requirements;
  • Inspiring trust through assurance, explainability and effective exercising;
  • Being aware of the risks through an understanding of threats, vulnerabilities, means of failure and wider resilience.

Relevant questions are:

3.1 Assurance

  • Will we be able to certify that the system satisfies all relevant regulations, including safety and security standards?
  • Will all of the functions that the system performs work reliably, as expected, and for as long as they need to? The latter is an important point if you have a learning system where the performance could change over time – how do you understand and maintain performance?
  • Do we have an understanding of behaviours the system must not have (e.g. harming people – this is generally considered to be a bad thing) and how they can be prevented?
  • Do we understand what level of assurance is required?
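
The point about maintaining the performance of a learning system over time can be sketched in code. The following is a minimal, hypothetical monitor – the `make_drift_monitor` name, window size and threshold are illustrative assumptions, not from this guidance – that tracks the rolling accuracy of a deployed classifier and flags degradation:

```python
from collections import deque

def make_drift_monitor(window: int = 100, threshold: float = 0.9):
    """Track rolling accuracy over the last `window` predictions and
    flag when it drops below `threshold` (values are illustrative)."""
    outcomes = deque(maxlen=window)

    def record(prediction, actual) -> bool:
        """Record one prediction; return True while performance is acceptable."""
        outcomes.append(prediction == actual)
        # Only judge once the window is full, to avoid noisy early alerts.
        if len(outcomes) < window:
            return True
        return sum(outcomes) / window >= threshold

    return record

# Example: a system that starts accurate, then degrades.
monitor = make_drift_monitor(window=10, threshold=0.8)
ok = True
for i in range(20):
    actual = i % 2
    predicted = actual if i < 12 else 1 - actual  # performance drops after step 12
    ok = monitor(predicted, actual)
print(ok)  # False: rolling accuracy has fallen below the threshold
```

In practice the signal would come from whatever ground truth or proxy measure is available in operation, but the principle is the same: define acceptable performance up front, then measure against it continuously rather than only at acceptance.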

3.2 Trust

  • Who needs to trust the system, what do they need to understand and what do you need to provide to obtain this trust?
  • This sounds like a simple question but can have many facets – there will be different trust considerations for the direct users, those making decisions based on its outputs, the regulators and the general public.

3.3 Explainability

Do we need to be able to explain why the AI made a particular decision, both at the time and in retrospect? If so, how can we do this? This is another question that may affect your algorithm selection: if you really need to know why the system produced a certain output, some types of algorithm will be more suitable than others.
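
As an illustration of that trade-off, here is a hedged sketch of an approach that is explainable by construction: a rule-based screener whose explanation is simply the list of rules that fired. The rules, field names and `decide` function are invented for illustration, not taken from any real system:

```python
# Hypothetical screening rules: each is a (description, predicate) pair,
# so every decision automatically carries its reasons.
RULES = [
    ("confidence below minimum", lambda x: x["confidence"] < 0.5),
    ("input outside training range", lambda x: not 0.0 <= x["value"] <= 1.0),
]

def decide(sample: dict):
    """Return (accepted, reasons). Transparent by construction: the
    explanation is exactly the list of rules that rejected the sample."""
    reasons = [desc for desc, rule in RULES if rule(sample)]
    return (len(reasons) == 0, reasons)

accepted, reasons = decide({"confidence": 0.3, "value": 1.5})
print(accepted, reasons)
# False ['confidence below minimum', 'input outside training range']
```

A deep neural network answering the same question might be more accurate, but could not produce this kind of direct, auditable explanation without additional (and approximate) interpretability tooling – which is precisely why explainability requirements should be known before the algorithm is chosen.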

3.4 Resilience

  • Do we understand the vulnerabilities in the system, and the risks it might introduce to our operations or business? Will the system fail gracefully if it encounters situations beyond its design parameters?
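
One common way to fail gracefully is to wrap the model in a check against its design envelope and fall back to a safe default outside it. The sketch below assumes this pattern; the wrapper, the envelope bounds and the fallback value are all illustrative:

```python
def make_guarded(model, in_envelope, fallback):
    """Wrap `model` so that inputs outside the design envelope trigger a
    safe fallback rather than an unpredictable extrapolation."""
    def guarded(x):
        if not in_envelope(x):
            return fallback, "outside design envelope: fell back to safe default"
        return model(x), "ok"
    return guarded

# Hypothetical estimator designed and tested only for inputs in [0, 100].
estimate = make_guarded(
    model=lambda x: x * 0.9,
    in_envelope=lambda x: 0 <= x <= 100,
    fallback=0.0,  # safe default: report zero rather than extrapolate
)
print(estimate(50))   # (45.0, 'ok')
print(estimate(500))  # (0.0, 'outside design envelope: fell back to safe default')
```

Returning a status alongside the result means downstream components, and human operators, can tell a genuine answer from a fallback – part of understanding the risks the system might introduce to wider operations.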

3.5 Experimentation

  • How suited is the system for experimentation, to build experience and confidence before it is used in a live environment?