Advai: Assurance of Computer Vision AI in the Security Industry

Case study from Advai.

Background & Description

Advai’s toolkit can be applied to assess the performance, security and robustness of an AI model used for object detection. Such systems require validation to ensure they can reliably detect objects within challenging visual environments. Our technology identifies natural (‘human-meaningful’) and adversarial vulnerabilities in computer vision (CV) models using an extensive library of stress-testing tools.

The natural vulnerabilities involve semantically meaningful image manipulations (such as camera noise, lighting changes and rotation). These probe the CV system’s sensitivity to image distortions that occur naturally, but rarely. Such inputs are called near out-of-distribution (OOD) or out-of-sample inputs and are, in essence, mathematically unrecognisable to a system not trained on equivalent data. For example, foggy Californian days are rare, but they happen; their rarity leaves AI models ill-equipped to handle these inputs accurately. Our approach methodically reveals these weaknesses and can recommend, for example, synthetic data generation to compensate (to continue the example, a foggy overlay applied to a Californian scene).
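
The sketch below illustrates this kind of stress test under stated assumptions: a handful of natural corruptions (noise, fog, dimmed lighting, rotation) are applied to a validation set and the accuracy drop per corruption is measured. It assumes a PyTorch classifier for brevity (a detector would be scored with mAP rather than top-1 accuracy), and the corruption functions, `model` and `loader` are illustrative stand-ins, not Advai’s toolkit.

```python
# Minimal natural-corruption stress test. `model` and `loader` are
# hypothetical stand-ins for the system and data under test.
import torch
import torchvision.transforms.functional as TF

def add_gaussian_noise(img, std=0.05):
    """Simulate camera/sensor noise with additive Gaussian noise."""
    return (img + std * torch.randn_like(img)).clamp(0.0, 1.0)

def add_fog(img, density=0.4):
    """Crude fog overlay: alpha-blend the image towards uniform white."""
    return (1.0 - density) * img + density * torch.ones_like(img)

CORRUPTIONS = {
    "noise":     add_gaussian_noise,
    "fog":       add_fog,
    "dim_light": lambda img: TF.adjust_brightness(img, 0.4),
    "rotate_15": lambda img: TF.rotate(img, 15.0),
}

@torch.no_grad()
def corruption_accuracy(model, loader, corrupt, device="cpu"):
    """Top-1 accuracy of `model` on images transformed by `corrupt`."""
    model.eval()
    correct = total = 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        preds = model(corrupt(images)).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# A large drop versus clean accuracy flags a near-OOD weakness:
# for name, fn in CORRUPTIONS.items():
#     print(name, corruption_accuracy(model, val_loader, fn))
```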

To assess the adversarial vulnerabilities, we inject adversarial perturbations into trusted image data to understand how vulnerable the system is to subtle manipulations designed to cause the greatest deleterious effect. This approach tests not only the system’s vulnerability to a deliberate adversary, but is also a reliable way of assessing general robustness to natural vulnerabilities, because of the constraints that can be applied when optimising the perturbation.
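
For illustration, a minimal sketch of one public perturbation family, the fast gradient sign method (FGSM), is given below. It is shown for a classifier for brevity and is not Advai’s proprietary attack suite; `model`, `images` and `labels` are hypothetical stand-ins.

```python
# FGSM-style adversarial perturbation: one signed-gradient step inside
# an L-infinity ball of radius epsilon. Illustrative only.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """Return inputs perturbed to maximally increase the model's loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # The epsilon constraint keeps the manipulation subtle: the smaller
    # the budget needed to flip predictions, the less robust the model.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```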

This toolkit is applied throughout the MLOps lifecycle, divided into Data Analysis, Pre-Deployment and Post-Deployment stages. This ensures that robustness is not just assessed at the end, but that the AI is robust by design.
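
As an illustration of how such checks might be wired into a pipeline, the sketch below gates each lifecycle stage on a set of robustness metrics. The stage names follow the text, but the metric names and thresholds are hypothetical placeholders, not Advai’s actual criteria.

```python
# Hypothetical robustness gates per MLOps stage. Each check is
# (metric, mode, threshold): "max" metrics must not exceed the
# threshold, "min" metrics must reach it. Values are placeholders.
STAGE_CHECKS = {
    "data_analysis":   [("label_error_rate", "max", 0.02),
                        ("ood_fraction", "max", 0.05)],
    "pre_deployment":  [("clean_accuracy", "min", 0.90),
                        ("corruption_accuracy", "min", 0.75)],
    "post_deployment": [("drift_score", "max", 0.10)],
}

def gate(stage: str, metrics: dict) -> bool:
    """Return True only if every metric for `stage` meets its threshold."""
    for name, mode, threshold in STAGE_CHECKS[stage]:
        value = metrics[name]
        passed = value <= threshold if mode == "max" else value >= threshold
        if not passed:
            return False
    return True

# Example: block promotion to deployment if robustness regressed.
# gate("pre_deployment", {"clean_accuracy": 0.93, "corruption_accuracy": 0.78})
```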

How this technique applies to the AI White Paper Regulatory Principles

More information on the AI White Paper Regulatory Principles

Safety, Security & Robustness

The rigorous testing of data, models, and API packaging directly addresses the safety, security, and robustness of the AI system, ensuring that it is resistant to both inadvertent errors and intentional attacks.

Appropriate Transparency & Explainability

Analysis of the data, including assessments of labelling quality, data poisoning and OOD detection, improves the transparency and explainability of the model’s decision-making process by ensuring that the system’s judgements can be traced back to clear and unbiased data inputs.
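
By way of illustration, one common public baseline for OOD detection is the maximum softmax probability (MSP) score sketched below (Hendrycks & Gimpel, 2017); it is not necessarily the detector used in this work.

```python
# MSP OOD score: a standard public baseline, shown purely to make the
# idea concrete. `model` is a hypothetical classifier stand-in.
import torch
import torch.nn.functional as F

@torch.no_grad()
def msp_ood_score(model, images):
    """Higher score = less confident prediction = more OOD-like input."""
    probs = F.softmax(model(images), dim=1)
    return 1.0 - probs.max(dim=1).values

# The flagging threshold is a tunable assumption, not a fixed rule:
# suspect = msp_ood_score(model, batch) > 0.5
```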

Fairness

Data screening to detect and correct imbalances in the training data addresses the potential for bias in the model, which contributes to the fairness of the system’s object detection capabilities.
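
A minimal sketch of such a screen follows, assuming nothing about Advai’s tooling: count per-class label frequencies and flag classes that dwarf the rarest one. The 10:1 red-flag ratio is an arbitrary illustrative threshold.

```python
# Simple class-balance screen over dataset labels; `max_ratio` is an
# illustrative placeholder, not a recommended value.
from collections import Counter

def imbalanced_classes(labels, max_ratio=10.0):
    """Map each over-represented class to its ratio vs the rarest class."""
    counts = Counter(labels)
    rarest = min(counts.values())
    return {cls: n / rarest for cls, n in counts.items()
            if n / rarest > max_ratio}

# imbalanced_classes(["car"] * 500 + ["person"] * 40)  # -> {"car": 12.5}
```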

Accountability & Governance

By identifying vulnerabilities and providing technical recommendations, Advai promotes accountability and contributes to the governance of the AI system’s use within the security industry.

Why we took this approach

This approach was selected to provide a comprehensive assessment of the AI system’s ability to perform under significant duress, and therefore an indication of its reliability in the real world, and to immunise the system against sophisticated AI-specific threats.

Benefits to the organisation using the technique

  • Increased confidence in the AI system’s ability to accurately detect objects in complex visual environments.

  • Enhanced security against adversarial attacks through a thorough examination of data, models, and APIs.

  • An improved understanding of the AI model’s limitations and performance boundaries.

  • A more robust and reliable AI system that stakeholders can trust.

Limitations of the approach

  • The approach cannot cover all possible adversarial attacks, especially new or unforeseen ones; however, we track (and develop internally) a large number of adversarial methods.

  • Improving resilience metrics may come at the cost of accuracy scores. This is a trade-off that we look to optimise with clients.

  • Reassessment is required when the model is updated or when new data is introduced, to ensure robustness has not been compromised.

  • The recommendations may increase computational costs; however, development costs could also fall if the CV systems have a higher success rate on deployment.

Further AI Assurance Information

Published 12 December 2023