ISACA: Digital Trust Ecosystem Framework (DTEF) Beta application to assure AI environments

Case study from ISACA.

Background & Description

ISACA’s Digital Trust Ecosystem Framework (DTEF) offers enterprises a holistic framework that applies systems thinking – the notion that a change in one area can have an impact on another – across an entire organisation. The Framework encompasses six domains: Culture; Emergence; Human Factors; Direct and Monitor; Architecture; and Enabling and Support.

The Framework is suitable for mid- to senior-level executives and practitioners who are either developing an AI strategy or implementing AI tools, and who are seeking guidance on techniques to establish trust and trustworthiness in AI.

The Framework is not prescriptive or narrow: it includes detailed practices, activities, outcomes, controls, key performance indicators (KPIs) and key risk indicators (KRIs) that practitioners can use to implement the Framework and assess themselves against. Additionally, it is aligned with many existing frameworks on the market, so an enterprise that has already adopted a framework such as ISO 27001 or the NIST CSF is already performing many of the tasks outlined in the DTEF.

This use case has been built on the Beta version of the Framework. ISACA expects to release the full version of the DTEF in early 2024.

Relevant Cross-Sectoral Regulatory Principles

Safety, Security & Robustness

This Framework encourages adopters of AI to consider risks and mitigations throughout implementation. For example, it suggests organisations should consider what they wish to achieve, where misuse could occur, and how the model should be trained in order to perform safely and securely. It also addresses the elements of human interaction and verification of results needed to build assurance.

All six interdependent domains should be considered collectively: Culture; Emergence; Human Factors; Direct and Monitor; Architecture; and Enabling and Support.

Appropriate Transparency & Explainability

The DTEF directs organisations to make the introduction of AI understandable by communicating with users so that they are aware of the function it is performing. The role of AI actors within processes and the service portfolio should be clearly acknowledged to avoid misunderstanding.

This is similar in nature to the GDPR requirements on personal data, where an organisation must be transparent about how it uses such data and explain this clearly to the data subject. An AI-related implementation following the DTEF should adopt a similar approach with all stakeholders. Selecting, establishing, and maintaining digital relationships requires confidence and transparency from all parties involved.

Most relevant domains: Culture; Human Factors; Direct and Monitor; and Enabling and Support.

Accountability & Governance

Governance is a key theme running throughout the Framework, particularly as there is often a perception that AI, by its nature, can easily get “out of control”. The Framework encourages organisations to account for and review all of the stakeholders involved in an AI lifecycle and to ensure appropriate controls are put in place, along with relevant monitoring and wider governance, risk and compliance (GRC) functions.

Most relevant domains: Emergence; Human Factors; and Direct and Monitor.

Why we took this approach

The Framework is designed to build assurance across a range of emerging technology systems. It is particularly pertinent to AI, which is likely to be applied beyond an organisation’s technology or security departments and will therefore have implications that cut across departments and business units.

Much like the principles-based approach to AI safety set out in the UK’s AI White Paper, ISACA’s Framework reflects the fluidity of AI systems and encourages organisations to examine proposals from a broad range of perspectives. The Framework’s breadth means organisations can assess security questions that are technical, practical and ethical, as well as manage and review the business and financial case for their AI use. The DTEF encourages organisations to revisit the metrics and outputs produced as it is applied, and to continually review their assurance using compatible maturity assessment frameworks.

Benefits to the organisation using the technique

The DTEF enables organisations to take a strategic view of a potential AI deployment. It encourages consideration of the target culture for the use and deployment of AI (and thus has the potential to illuminate cultural inhibitors). Furthermore, it helps organisations define the expected boundaries of AI actors, control the input variables and establish the controls that will support the user experience, ultimately helping them determine the resource requirements to run, control and manage an AI system.

Using the Framework enables organisations to think holistically about the business and financial case for AI use, and then decide whether it is appropriate to embed AI within their service value chain. This overtly strategic approach is more likely to surface risks that might not otherwise be identified by solely tactical or technical teams, and it increases the likelihood of realising the expected benefits of the implementation.

Limitations of the approach

While the DTEF provides a foundational starting point, to get the most value from this approach organisations will need to tailor activities, outcomes and controls to their specific business and industry. The approach also requires organisations to have appropriate skillsets, not only technical skills but also skills in risk, security, business change management and project management.

ISACA Digital Trust Ecosystem Framework Beta

Understanding the Full Digital Trust Ecosystem

ISACA’s Digital Trust Mission

Further AI Assurance Information

Published 12 December 2023