A behavioural science and systems thinking approach to assess and enable AI readiness in DfT
Published 19 June 2025
Executive summary
The Department for Transport (DfT) has launched a research programme to support the successful use of AI in the department.
The programme uses social and behavioural research methods to generate evidence on opportunities for AI, potential risks and areas for improvement.
This report represents a flagship project from the programme – A behavioural science and systems thinking approach to assess and enable AI readiness in DfT (BeST).
BeST aimed to identify the behaviours related to AI, and the relationships between them, that different actors in DfT should be performing when the department has successfully adopted AI.
DfT is using this evidence to understand existing strengths and areas that need improvement and to develop targeted interventions that enable safe and effective AI use in the department.
This project combined behavioural science and systems thinking to develop a behavioural systems map of AI-related behaviours in DfT. Data were collected through workshops with DfT AI experts and actor groups and supplemented with information from the Generative AI Framework for HMG.
Results
The behavioural systems map contains 48 behaviours contributing to AI use in DfT. The research team classified behaviours into 6 subsystems.
Identification – focuses on establishing the foundations for the ethical and effective development or procurement of AI tools, by identifying and prioritising potential areas for the application of AI and understanding requirements.
Data management – focuses on preparing, quality assuring and stewarding data.
Development and deployment – focuses on developing AI tools and testing, improving and deploying them for use in the department.
Training and support – focuses on promoting AI literacy, facilitating continuous learning and ensuring that staff possess the necessary skills and knowledge to effectively leverage AI technologies.
Governance and strategic oversight – focuses on AI readiness, risk management, project oversight and on aligning strategies and standards for responsible AI integration.
Use – focuses on using AI safely, responsibly and effectively in everyday work processes.
Within the subsystems, different actor groups are or should be performing the identified behaviours. These actors were grouped into the following categories:
- communications
- commercial
- data
- data science
- digital
- governance
- legal
- skills and capabilities
- social and user research
- users
Explore the AI behavioural systems map
Senior civil servants and teams working on or with AI at DfT can use this map to identify and prioritise behaviours that need to be improved or implemented. Other government departments may find this map beneficial and could use it as a basis for developing their own.
Explore the behavioural systems map to discover where it can support your work or inspire your own approach.
After selecting the target behaviour(s), structured approaches like the ‘behaviour change wheel’ can be used to design interventions aimed at increasing, improving or implementing (encouraging people to start performing) the chosen behaviour.
Background
Artificial intelligence (AI) has the potential to transform how the government functions and delivers public services, making it more responsive and efficient. AI can improve public service productivity by automating repetitive bureaucratic tasks, thus allowing skilled staff to focus on more important work.
However, as advised by the National Audit Office, effective AI adoption in government requires a deep understanding of the business needs and capabilities, as well as careful consideration of complexities and interdependencies early on in the AI implementation process. This approach increases the likelihood that chosen AI solutions are feasible, deployed responsibly and closely aligned with the goals and requirements of government departments.
To aid AI adoption, the Department for Transport (DfT) has launched a Social and Behavioural Sciences for AI research programme. The programme aims to support the effective and sustained use of AI in the department by generating evidence on AI opportunities, assessing business and user needs and identifying risks and areas for improvement.
This report presents a flagship project from the programme – A behavioural science and systems thinking approach to assess and enable AI readiness in DfT (BeST).
BeST aimed to identify the behaviours related to AI, and the relationships between them, that different actors in DfT should be performing when the department has successfully adopted AI. DfT is using this evidence to understand existing strengths and areas that need improvement, enabling the development of targeted interventions that facilitate safe and effective AI use in the department.
This research covers central DfT and does not include its agencies or public bodies.
Methods
This project combined behavioural science and systems thinking to develop a behavioural systems map of AI readiness in DfT.
A systems map is a simplified representation of a system, in the form of a diagram. A behavioural systems map usually shows actors, behaviours and influences on behaviour.
For this project, an actor was defined as an individual or team in DfT and a behaviour was defined as an action contributing to AI use in DfT that is directly or indirectly observable. For example, DfT staff (actor) uses a bespoke AI tool relevant to their role (behaviour). Influences on behaviour are the factors that make a behaviour easier or more difficult to perform.
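A behavioural systems map of this kind can be thought of as a directed graph of actors, behaviours and influences. The following Python sketch is purely illustrative: the actor, behaviour and influence names are hypothetical examples, not entries from the DfT map.

```python
from dataclasses import dataclass, field

@dataclass
class Behaviour:
    """A directly or indirectly observable action contributing to AI use."""
    name: str
    actors: set[str]  # actor groups performing the behaviour
    influences: list[str] = field(default_factory=list)  # barriers or enablers

# Hypothetical fragment of a map; names are placeholders only
behaviours = {
    "use_ai_tool": Behaviour("Use a bespoke AI tool", {"users"},
                             ["access to tools and software"]),
    "deliver_training": Behaviour("Deliver AI training",
                                  {"skills and capabilities"}),
}

# Directed links: performing one behaviour supports another
links = [("deliver_training", "use_ai_tool")]

for src, dst in links:
    print(f"'{behaviours[src].name}' supports '{behaviours[dst].name}'")
```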
This project was carried out in 3 stages.
Stage 1 – Developing a preliminary behavioural systems map
Two DfT social researchers developed a draft behavioural systems map on Mural, using existing knowledge of DfT and its AI programme. This map depicted the key actors and the behaviours they should perform in DfT as an AI-ready organisation.
The researchers presented this map to 2 DfT senior AI governance actors for feedback, which was integrated into the map.
Developing the draft behavioural systems map enabled the researchers to capture existing knowledge within the team and understand which actor groups they needed to consult in the next stage. They used the draft map as a starting point when developing the ‘final’ behavioural systems map.
Stage 2 – Gathering internal insights
In this stage, representatives from DfT actor groups were invited to 1 of 6 online workshops. The actor groups represented were:
- communications
- commercial
- data
- data science
- digital
- governance
- skills and capabilities
- social and user research
- users
The purpose of the workshops was to use participant expertise and experience to identify behaviours that will contribute to the responsible and effective use of AI in DfT and to explore influences on these behaviours. Each workshop had 2 facilitators and was conducted using MS Teams and Mural.
Workshops were structured as follows.
Introduction – Overview of the DfT AI readiness programme, behavioural systems mapping and session activities.
Identification of behaviours – Participants were asked to write down the behaviours they expected their actor group to perform when DfT has effectively and safely adopted AI.
Identification of barriers and enablers (influences) to the identified behaviours – Participants were asked to consider which behaviours might be difficult to perform and write down the reasons why. They were also asked to consider what might make the behaviour easier to perform.
Whilst this project sought to develop a comprehensive map of the actors and their behaviours, due to time restrictions, it only identified some influences for particularly challenging behaviours. Future work will explore influences in more detail before developing interventions.
At the end of each workshop, 2 DfT social researchers extracted the behaviours, barriers and enablers from Mural and saved them in an MS Excel sheet. The Mural boards for each workshop were kept for future reference.
Stage 3 – Finalising the behavioural systems map
The research team then used the insights gathered from the workshops to build the final behavioural systems map, as follows.
Insights were translated into behaviours. Similar behaviours were grouped to create higher-level behaviours.
A description for each higher-level behaviour was developed using the original insights and information from the Generative AI Framework for HMG.
Behaviours were then connected with arrows indicating the directionality of the relationship. This formed a draft map.
The draft map was reviewed by representatives of each actor group and feedback was incorporated into the map.
To better understand how different actors and their behaviours interact with each other, behaviours were organised into subsystems.
The identified influences were connected to their respective behaviours and then categorised according to the COM-B model.
A representative of each actor group reviewed the final map and their feedback was included. The ‘legal’ group was added at a later stage, with behaviours associated with this group identified based on guidance from senior AI experts.
Limitations
While the behavioural systems map provides a valuable overview of the behaviours that should exist in an AI-ready DfT, it likely does not encompass all possible behaviours that may be relevant. In addition, due to time constraints, BeST only identified a subset of influences on behaviours that were deemed more challenging by the actor groups. Consequently, some factors that affect AI-related behaviours in DfT may not have been fully explored or understood.
Future work should focus on further validating the map and exploring influences on behaviour in more detail before developing interventions.
Results
Overview
This section provides an overview of the content in the behavioural systems map.
The behavioural systems map contains 48 behaviours contributing to AI adoption in DfT. The research team classified behaviours into 6 interrelated subsystems. These are:
- identification
- data management
- development and deployment
- training and support
- governance and strategic oversight
- use
Figure 1 – Subsystems in the behavioural systems map
This figure provides a high-level overview of the relationships between subsystems and the direction of these relationships. Each connecting arrow represents one or more links between behaviours or influences on behaviours across 2 subsystems.
Within the subsystems, there are different groups of actors who should be performing the identified behaviours. These include:
Communications
This group focuses on internal communications. They engage staff in delivering DfT’s AI strategic aims and supporting the integration of AI technologies. They do this by delivering clear and relevant communications to target audiences and continuously developing and improving the department’s channels based on audience insight.
Commercial
This group delivers and supports the procurement and contract management of both direct and indirect AI solutions, analyses the AI market and develops commercial strategies to support responsible and effective adoption. They work with colleagues across DfT to ensure procured AI products and services are safe, deliver high-quality outputs and provide the best value.
Data
This group is responsible for acquiring, curating and managing the data that underpins AI development and use. They ensure data is accurate, secure and accessible, enabling effective decision-making and the responsible deployment of AI. They also handle data governance and reporting and lead on data ethical standards.
Data science
This group provides internal capability to develop bespoke AI products and explore frontier AI applications. They also advise teams across the department on which tools are most suitable and how to use AI safely and effectively, delivering efficiencies through automation and making best use of public money.
Digital
This group leads on delivering the digital infrastructure and technology solutions that enable the safe and effective use of AI. They provide fit-for-purpose, user-focused digital tools, ensure secure and ethical data use and drive innovation to support the development, deployment and scaling of AI technologies across the department. They are also responsible for ensuring compliance with government digital standards and data protection regulations.
Governance
This group comprises individuals responsible for effective strategic oversight activities across each actor group. They are responsible for shepherding the internal governance processes to support the roll-out of AI.
Legal
This group provides legal expertise across the department. They advise on the legal implications of AI use and on compliance with relevant legal and regulatory requirements.
Skills and capabilities
This group focuses on providing staff with AI-related skills. They offer learning opportunities across different areas, ensuring that staff at all levels have the knowledge to work with AI technologies effectively. The team supports continuous development, helping staff stay up-to-date with AI advancements and fostering a culture of innovation and responsible AI use.
Social and user research
This group plays an important role in enabling the successful adoption of AI. They:
- identify barriers to adoption
- guide the development of interventions to address these barriers
- ensure tools are designed around user needs
They also assess the usability of AI tools and measure their impact on productivity. Their insights help inform ethical, effective and human-centred AI implementation in the department.
User
As the final point of interaction with AI systems, users are the custodians of how AI is utilised in the department. They are responsible for using AI tools effectively, responsibly and ethically, interpreting outputs, reporting issues and knowing when human oversight is required. Their confidence and judgement are essential to driving value and maintaining public trust.
The influences identified are categorised according to the COM-B model[footnote 1], which stands for Capability, Opportunity, Motivation-Behaviour. The influences include:
- capability – for example, knowledge and skills
- opportunity – for example, pace of change, access to tools and software or time to perform the behaviour
- motivation – for example, perceptions of AI, perceptions of risk or belief that performing the behaviour will make a difference
Refer to the behavioural systems map to view all the influences identified.
The following sections describe each subsystem, detailing the actions within each behaviour and the actors responsible for those actions. It is important to note that actors may not perform every action listed in the description of each behaviour.
Identification
This subsystem focuses on laying the groundwork for the ethical and effective development or procurement of AI tools by identifying and prioritising potential application areas and understanding the associated requirements.
Table 1 – Behaviours in the identification subsystem
Behaviour | Description | Actors |
---|---|---|
Decide on AI projects to take forward | • Convening an AI board that provides oversight, accountability and strategic guidance to make informed decisions about AI adoption and use. • Convening an ethics committee comprised of internal and cross-government stakeholders, sector experts and external stakeholders, to assess the ethical implications of various actions, projects and decisions within the department. • Checking that AI use cases are aligned with departmental and government objectives. • Ascertaining the advantages and disadvantages of potential AI use cases. • Communicating the risks and benefits of use cases to the relevant business unit, to enable informed decisions on trade-offs. • Advising on the prioritisation of bespoke AI tools. | Data, Data science, Digital, Governance, Legal, Users |
Design evaluation plan | • Establishing metrics for assessing the performance, usability and acceptability of AI tools. • Developing evidence-based evaluation frameworks for AI. | Data science, Social and user research |
Explore existing internal and external tools | • Evaluating the suitability of off-the-shelf AI technologies to help the department become an intelligent customer, mitigating risks and maximising potential value. • Assisting in selecting appropriate tools and resources for data scientists and general staff. • Adopting new cutting-edge AI technologies quickly and responsibly. | Data, Data science, Digital |
Identify AI tool requirements | • Involving key stakeholders to gather requirements. • Clearly outlining the objectives and challenges the AI tool should address. • Checking the current technical environment for compatibility with the AI tool. • Advising on the design of bespoke AI tools. | Data science |
Identify data sources | • Identifying suitable data sources for AI. | Data, Data science |
Identify user needs and opportunities for AI | • Actively seeking opportunities to optimise processes as AI knowledge and capabilities advance, while staying informed about existing use cases and innovations within the department. • Prioritising the most suitable and impactful applications of AI. • Guiding and encouraging teams to deliver on the department’s ambition. • Working closely with policymakers, domain experts and other stakeholders to understand their requirements and objectives. • Collecting data on user needs and attitudes towards AI. • Promoting non-AI solutions for problems that need deterministic solutions. | Data science, Digital, Governance, Social and user research, Users |
Figure 2 – Identification subsystem
This figure presents a snapshot of the map, illustrating the actors, their behaviours and the factors influencing these behaviours, as well as the relationships between them, within the identification subsystem.
View the behavioural systems map to see this section in detail.
Data management
This subsystem is responsible for preparing, quality assuring and managing data.
Table 2 – Behaviours in the data management subsystem
Behaviour | Description | Actors |
---|---|---|
Advise on data standards | • Reviewing existing data standards. • Keeping up to date with industry’s development of data standards to meet emerging requirements. • Communicating best practice and expectations around data standards. | Data |
Advise on infrastructure | • Creating and giving guidance on data pipelines and architecture for new AI requirements. | Data |
Maintain data catalogue | • Cataloguing DfT data assets to aid accessibility. | Data |
Make data available | • Gathering and processing data to meet data standards. • Determining privacy and security needs to identify appropriate accessibility restrictions. • Adding data assets to DfT data catalogue. | Data |
Put in place and maintain infrastructure | • Identifying infrastructure needs, ensuring the smooth operation, security and efficiency of IT systems and AI tools. • Providing access to the required IT systems and AI tools. • Assessing IT systems and AI tools for performance and vulnerabilities, implementing robust security measures against cyber threats and ensuring the right IT infrastructure with appropriate hardware and software is in place. • Managing data effectively for efficiency and accessibility, providing adequate data storage solutions and safeguarding data to ensure privacy and security. | Data, Digital, Governance |
Quality assure data and adhere to data standards | • Ensuring data is reviewed by an independent internal specialist for accuracy and completeness and is fit for its purpose. • Ensuring data meets departmental standards. | Data |
Store data | • Choosing the appropriate storage solution that meets efficiency, privacy, security and accessibility needs. • Storing data effectively, meeting efficiency, privacy, security and accessibility needs. | Data |
Figure 3 – Data management subsystem
This figure illustrates the actors, their behaviours and the factors influencing these behaviours, as well as the relationships between them, within the data management subsystem.
View the behavioural systems map to see this section in detail.
Development and deployment
This subsystem focuses on developing AI tools and evaluating, improving and deploying them across the department.
Table 3 – Behaviours in the development and deployment subsystem
Behaviour | Description | Actors |
---|---|---|
Assure AI tools | • Rigorously evaluating AI tools to ensure they meet organisational needs, prioritising safety over rapid deployment. • Maintaining thorough documentation of AI models, methodologies and project workflows for transparency. • Safeguarding against biases in both bespoke and off-the-shelf AI. • Effectively evaluating third-party AI components. | Data science, Digital, Governance |
Decide on AI tools to take forward | • Determining which AI tools should proceed to development based on performance, relevance and alignment with project objectives. | Data, Data science, Governance, Legal |
Deploy AI tools | • Incorporating AI tools into existing systems or workflows, ensuring compatibility and ease of use. • Providing training for end-users to ensure they understand how to use tools effectively. • Setting out how AI tools will be maintained over time and developing a comprehensive plan for knowledge transfer and training to ensure the tools’ sustainable management. • Establishing clear roles and responsibilities to ensure accountability within teams for AI systems, including who has the authority to change and modify the code of AI models. | Data science |
Develop or adapt AI tools | • Choosing an AI solution that fits the requirements of the business area, whether it is an off-the-shelf product or a custom-built model. • Gathering, cleaning and preprocessing the necessary data for training and testing the AI model. • Developing and training the model using appropriate algorithms and techniques for bespoke solutions. • Rigorously testing models to ensure their accuracy and reliability, validating results against benchmarks. | Data science |
Document best practice, AI use cases and AI impacts | • Identifying stakeholders and gathering input. • Researching best practices from literature and experts. • Compiling use cases alongside qualitative and quantitative success measures. • Regularly reviewing and updating documentation as needed. | Data science, Governance, Users |
Improve AI tools | • Updating and making improvements to AI tools based on user feedback, tool performance and technological advancements. | Data science |
Monitor and evaluate AI tools and user needs | • Continuously monitoring AI models to ensure accurate and reliable results, making updates as necessary. • Carrying out user testing and gathering feedback to identify areas for improvement and refine tools over time. • Assessing AI elements in third-party SaaS offerings (software on a subscription basis using external servers). • Evaluating the impact of introducing different AI tools. | Data, Data science, Digital, Governance, Skills and capabilities, Social and user research, Users |
Obtain permissions and sign-offs | • Obtaining sign-offs and permissions from relevant groups, for example, the AI board, data protection, digital architecture, information assurance, etc. | Data science, Digital, Legal, Governance |
Pilot AI tools | • Conducting initial tests of AI tools to understand functionality, potential benefits and challenges. • Identifying any biases. • Gathering feedback for improvement. | Data science, Governance, Legal |
Procure AI tools | • Following procurement processes to acquire new AI tools. • Notifying the AI projects oversight group about AI use in procured tools or services. • Working closely with procurement to assess external AI solutions efficiently. | Commercial, Data science, Digital, Governance, Users |
Promote AI tools | • Promoting the AI tools available for staff use across various professions. | Communications, Data science, Governance |
Provide feedback and participate in research on AI tools | • Providing feedback informally by contacting the data science team. • Participating in research through pilots, beta testing, improvement workshops, etc. | Users |
Redesign processes with AI in mind | • Supporting the redesign of processes and teams to incorporate AI effectively. | Governance |
Review AI tools | • Evaluating the suitability of bespoke or off-the-shelf AI tools and aiding the department in becoming an informed consumer. • Safeguarding against unnoticed or unexpected AI bias from bespoke models developed internally. | Data science, Digital |
Review processes | • Verifying that processes remain effective upon AI implementation. | Governance, Users |
Share AI adoption benefits with the public | • Making AI applications transparent and effectively communicating AI benefits to the public. | Communications, Data science, Governance |
Share best practice and AI use cases internally and externally | • Identifying target audiences. • Sharing best practice and AI use cases with internal and external stakeholders using effective communication approaches and channels. | Communications, Data, Data science, Governance, Skills and capabilities, Users |
Figure 4 – Development and deployment subsystem
This figure illustrates the actors, their behaviours and the factors influencing these behaviours, as well as the relationships between them, within the development and deployment subsystem.
View the behavioural systems map to see this section in detail.
Training and support
This subsystem is dedicated to enhancing AI literacy, promoting continuous learning and ensuring that staff acquire the necessary skills and knowledge to effectively use AI technologies.
Table 4 – Behaviours in the training and support subsystem
Behaviour | Description | Actors |
---|---|---|
Deliver training, events, resources and support | • Delivering effective and tailored AI training, events and resources. • Providing expert AI advice. • Offering clear guidance on AI tools and projects. • Supporting colleagues on data requirements, ethics and security. | Data, Data science, Digital, Governance, Legal, Skills and capabilities |
Develop, procure or update training, events, resources and support | • Developing training, events, resources and support for different learner groups to address skills gaps. • Procuring training, events, resources and support for different learner groups to address skills gaps. • Adjusting learning pathways and user profiles. • Updating training, events, resources and support. | Commercial, Data science, Skills and capabilities |
Engage in training | • Engaging in regular study or practice of AI tools and concepts through courses, training sessions, events, hands-on projects or own reading. | Users |
Monitor and evaluate training, events, resources and support | • Regularly assessing training effectiveness by tracking participation, adoption and skills gaps. • Gathering user feedback on the training provision to identify issues and opportunities for development. | Skills and capabilities, Social and user research |
Promote training, events, resources and support | • Systematically promoting AI training, events and support to staff and stakeholders. • Ensuring visibility by clearly signposting learning opportunities to relevant groups, promoting AI literacy and seeking endorsements from senior stakeholders. | Communications, Data science, Governance, Skills and capabilities |
Provide feedback and participate in research on training | • Providing feedback on training, events, resources and support. | Users |
Research trends and developments | • Keeping up to date with emerging AI technologies, tools and best practices to stay informed about advancements and their potential impact on the department’s training needs and provision. | Skills and capabilities |
Review training offer | • Determining whether the training offer is adequate using insights gained from evaluation, user needs and AI trends and developments. • If necessary, recommending changes to the training offer for each learner group. | Skills and capabilities |
Figure 5 – Training and support subsystem
This figure illustrates the actors, their behaviours and the factors influencing these behaviours, as well as the relationships between them, within the training and support subsystem.
View the behavioural systems map to see this section in detail.
Governance and strategic oversight
This subsystem is concerned with AI readiness, risk management, project oversight and aligning strategies and standards for responsible AI integration.
Table 5 – Behaviours in the governance and strategic oversight subsystem
Behaviour | Description | Actors |
---|---|---|
Assess AI readiness using evidence-based approaches | • Gaining a thorough understanding of the current state of AI adoption within the department, including opportunities for automation. This involves identifying key cultural and structural factors that influence AI integration. • Leveraging these insights to identify areas for improvement, thereby facilitating the development of targeted interventions to enhance the safe and effective adoption of AI. • Analysing internal data on barriers to AI adoption to gauge user confidence and competence. | Governance, Skills and capabilities, Social and user research |
Identify AI risks and opportunities | • Regularly reviewing the high-level objectives, opportunities and potential risks associated with the department’s AI programme. | Governance |
Maintain AI project database | • Maintaining a centralised database of AI projects and initiatives within the department. • Implementing reporting mechanisms to track progress and outcomes. | Governance, Users |
Manage risks | • Ensuring AI is being used safely and effectively. • Minimising bias and discrimination in AI models. | Data science, Digital, Governance |
Oversee AI programme | • Integrating oversight of AI into the department’s governance processes. • Convening an ethics committee comprised of internal and cross-government stakeholders, sector experts and external stakeholders, to assess the ethical implications of various actions, projects and decisions within the department. • Ensuring programme teams have clear governance structures in place. • Creating a review process to ensure compliance and accountability. • Evaluating the effectiveness of the AI programme, governance groups, strategies and policies. • Supporting the development of AI governance in teams. | Governance |
Review AI strategy | • Reviewing and updating the vision and goals for AI integration in the department. • Identifying key stakeholders and their roles. | Governance |
Review standards, policies, procedures and frameworks | • Reviewing and updating relevant AI and data standards, policies, procedures and frameworks. | Data, Digital, Governance, Legal |
Figure 6 – Governance and strategic oversight subsystem
This figure illustrates the actors, their behaviours and the factors influencing these behaviours, as well as the relationships between them, within the governance and strategic oversight subsystem.
View the behavioural systems map to see this section in detail.
Use
This subsystem focuses on using AI safely, responsibly and effectively in routine work processes.
Table 6 – Behaviours in the use subsystem
Behaviour | Description | Actors |
---|---|---|
Encourage teams to use AI | • Encouraging team collaboration and AI adoption by sharing AI use cases, championing and normalising the use of AI. • Addressing scepticism but also uncritical enthusiasm towards AI within teams. • Using different media and forums to raise awareness and generate enthusiasm for AI. | Users |
Report issues | • Reporting any issues experienced with AI that require immediate action. • Adhering to protocols for any potential breaches or misuse. | Users |
Use AI safely, responsibly and effectively in everyday work processes | • Deciding whether AI should be used, considering the risks. • Choosing a suitable AI tool for the task. • Putting the necessary oversight and human-in-the-loop processes in place to use AI. • Following ethical and security standards, especially regarding data sharing. • Checking that AI outputs are factual, truthful, non-discriminatory and non-harmful and do not violate existing legal provisions, guidelines, policies or the providers’ terms of use. • Clearly disclosing when AI has been used to generate an output. • Reverting to non-AI methods if AI proves unsuitable. | Users |
Behaviours in the table are listed in alphabetical order.
Figure 7 – Use subsystem
This figure illustrates the actors, their behaviours and the factors influencing these behaviours, as well as the relationships between them, within the use subsystem.
View the behavioural systems map to see this section in detail.
What can this map be used for?
Identify and prioritise behaviours for intervention
Senior civil servants and teams working on or with AI at DfT can use the behavioural systems map to identify and prioritise behaviours for improvement or implementation (encouraging staff to start performing a behaviour). Other government departments might find this map beneficial and could use it as a basis for developing their own.
Different sources of evidence and information can facilitate the identification of behaviours that require improvement, including:
- local knowledge – insights from staff and stakeholders
- internal research – collecting data from staff through more formal mechanisms such as workshops and surveys
- academic and grey literature – peer-reviewed studies, organisational reports, government publications, websites, forums and other online information
- quantitative metrics – for example, the number of connections associated with each behaviour
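As a rough illustration of the quantitative-metrics approach, the sketch below counts the connections associated with each behaviour from a small edge list. The behaviour names and links here are illustrative assumptions, not the actual DfT map data.

```python
# Hypothetical sketch: ranking behaviours in a behavioural systems map by
# their number of connections (degree). The links below are made up for
# illustration and do not reflect the real map.
from collections import Counter

# Each tuple is a directed influence link: (from_behaviour, to_behaviour).
links = [
    ("Report issues", "Oversee AI programme"),
    ("Oversee AI programme", "Review AI strategy"),
    ("Review AI strategy", "Review standards, policies, procedures and frameworks"),
    ("Encourage teams to use AI", "Use AI safely, responsibly and effectively"),
    ("Report issues", "Review standards, policies, procedures and frameworks"),
]

# Count how many links each behaviour participates in, in either direction.
degree = Counter()
for source, target in links:
    degree[source] += 1
    degree[target] += 1

# Behaviours with the most connections are candidates for closer attention.
for behaviour, count in degree.most_common():
    print(f"{behaviour}: {count}")
```

In practice this metric would be computed over the full map and read alongside the qualitative sources above, since a highly connected behaviour is not automatically the best intervention target.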
Once behaviours needing improvement have been identified, the following criteria, adapted from the behaviour change wheel guidance for developing behaviour change interventions, can be used to select which behaviours should be prioritised.
Impact of behaviour change
Assess the potential impact and magnitude of change.
Ease of behaviour change
Think about how feasible it is to change the behaviour within the given context. Factors such as availability of resources and acceptability of the behaviour can influence ease of change.
Effects on the broader system
Consider the role of the behaviour in creating a positive ‘spillover’ effect or negative, unwanted consequences for other behaviours in the system.
Ease of measurement
Determine whether the behaviour can be easily monitored and evaluated. This can help determine whether an intervention has changed the target behaviour.
Not all criteria will be relevant in every situation and other criteria may be more helpful. Consider those that align with your priorities, resources and specific needs.
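Under stated assumptions – the scores and weights below are illustrative, not DfT figures – the prioritisation step could be sketched as a simple weighted scoring exercise across the 4 criteria:

```python
# Hypothetical sketch: scoring candidate behaviours against the four
# prioritisation criteria (1 = low, 5 = high) and ranking by a weighted sum.
# All scores, weights and behaviour names are illustrative assumptions.

criteria_weights = {
    "impact": 0.4,       # impact of behaviour change
    "ease": 0.3,         # ease of behaviour change
    "spillover": 0.2,    # effects on the broader system
    "measurement": 0.1,  # ease of measurement
}

candidate_scores = {
    "Procure AI training": {"impact": 5, "ease": 3, "spillover": 4, "measurement": 4},
    "Report issues": {"impact": 3, "ease": 4, "spillover": 3, "measurement": 5},
}

def priority(scores: dict) -> float:
    """Weighted sum of criterion scores for one behaviour."""
    return sum(criteria_weights[c] * s for c, s in scores.items())

# Rank behaviours from highest to lowest priority score.
ranked = sorted(candidate_scores, key=lambda b: priority(candidate_scores[b]), reverse=True)
for behaviour in ranked:
    print(f"{behaviour}: {priority(candidate_scores[behaviour]):.2f}")
```

The weights encode which criteria matter most in a given context; as the surrounding text notes, not every criterion will apply in every situation, so irrelevant ones can simply be given a weight of zero.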
After selecting the target behaviour(s), structured approaches like the behaviour change wheel can be used to design interventions aimed at increasing, improving or implementing the chosen behaviour. The behaviour change wheel involves 3 stages:
- developing a detailed understanding of the target behaviour, including its barriers and facilitators, using the COM-B model
- identifying appropriate intervention options
- defining the content and implementation strategies for the intervention
Case study: Behavioural intervention using COM-B
This real case study describes one approach to developing an intervention to change a behaviour identified through the behavioural systems map described in this report. For more information about designing behaviour change interventions using the behaviour change wheel see Behaviour change: guides for national and local government and partners.
Target behaviour: Developing, updating or procuring training, events, resources and support, with a specific focus on procuring AI training.
Rationale: Training is essential for AI integration, boosting user confidence, ensuring effective use and promoting responsible practices to reduce risks like misinterpretation or bias.
The target behaviour was fundamental to the AI training and support subsystem, ensuring the relevance, engagement and effectiveness of AI training within the department. DfT was in the early stages of AI adoption and did not yet offer a formal AI training programme to its staff.
The learning and capability lead, responsible for capability-building across various domains and areas, was assigned the task of developing a comprehensive training plan and procuring modules from government or external providers to address the needs of foundational, implementer and senior leader knowledge levels. While progress had been made, it was essential to accelerate efforts to offer staff a training programme that would enable them to adopt AI safely and effectively.
COM-B analysis
Capability
The lead did not have the required expertise in AI training programmes to evaluate different options effectively. This was an important gap, as understanding the differences between types of AI training requires a certain level of AI knowledge.
Opportunity
The lead had limited time and access to resources, such as recommendations, reliable information or access to professional networks that could help them find quality AI training options. Moreover, with AI still in its early adoption phase, the market for AI training programmes was nascent. This restricted the lead’s opportunity to procure the right programme. While there was a supportive environment that allowed the lead to seek advice from AI experts or knowledgeable stakeholders, these individuals were often busy. More formal and streamlined mechanisms to seek and deliver guidance were needed.
Motivation
The lead was motivated to understand and assume responsibility for this task, but they were overextended. Given the complexity and technical nature of the required training programme, they may also have felt apprehensive about taking on such a responsibility, especially considering their level of expertise in curating and producing this type of training.
Intervention description
The intervention function for the proposed intervention was enablement – increasing means/reducing barriers to increase capability (beyond education and training) or opportunity (beyond environmental restructuring).
The chosen behaviour change techniques[footnote 2] were:
- practical social support – advising on, arranging or providing practical help for performance of the behaviour
- instruction on how to perform the behaviour – advising or agreeing on how to perform the behaviour
The intervention provided:
- access to external consultants who produced a comprehensive learning needs analysis with essential AI competencies for foundational, implementer and senior leader levels (capability)
- dedicated funding for the lead to hire specialised external providers capable of developing a tailored AI training programme (opportunity)
- a streamlined approval pathway, minimising administrative delays and enabling the lead to progress through procurement stages smoothly (opportunity)
- internal advisory resources on AI across various disciplines who gave advice on consultant procurement, research methods, stakeholder engagement and technical AI aspects (capability)
- regular check-ins with colleagues working in AI to support the lead’s progress and enhance their confidence in making decisions (motivation)
- clear leadership communication on the importance of AI in driving innovation and productivity, which reinforced the significance of the lead’s role in enabling AI adoption and, in turn, motivated them to prioritise the task and focus on identifying high-quality external training providers (motivation)
Effectively convey AI readiness to stakeholders
This behavioural systems map is also an effective tool for articulating the department’s current AI readiness state versus the desired future state. The map presents clear evidence of the complexity of the programme in terms stakeholders can understand, while enabling them to focus on specific components. It highlights interdependencies and provides information about what is influencing behaviours and the COM-B domain they relate to.
The information provided by the map, along with its interactive features, allows the department to identify and eliminate obstacles, seize opportunities while mitigating risks, foster cross-departmental collaboration and build a compelling case for securing the necessary funding and resources to achieve AI readiness.
Acknowledgements
This report and the accompanying map were produced by Dr Rebecca Nourse and Dr Ana Wheelock Zalaquett of the Social and Behavioural Sciences for AI team, part of the Advanced Analytics division at DfT.
We would like to thank our colleagues across the different actor groups within DfT, as well as the Systems Thinking team (Advanced Analytics division) and the Social and Behavioural Research division, for their valuable contributions.
References
- Michie, S., Van Stralen, M.M. and West, R., 2011. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implementation Science, 6, pp. 1-12. ↩
- Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., Eccles, M.P., Cane, J. and Wood, C.E., 2013. The behaviour change technique taxonomy (v1) of 93 hierarchically clustered techniques: building an international consensus for the reporting of behaviour change interventions. Annals of Behavioral Medicine, 46(1), pp. 81-95. ↩