The People Factor: A human-centred approach to scaling AI tools
Published 4 June 2025
Delivering the UK Government’s first generative AI tool to be approved for cross-government use
Government Communications has pioneered Assist, the first general purpose AI tool approved for use across the UK Government[footnote 1]. Developed in-house by the Cabinet Office’s multidisciplinary Applied Data and Insight Team, this secure and bespoke generative artificial intelligence tool (GenAI) is transforming how government communicators work.
Since the launch of the Assist pilot in November 2023, Assist has unlocked greater productivity, saving thousands of hours of communicators’ time, whilst enabling better integration of communications best practice by embedding core Government Communications frameworks, policies and documents into the tool’s responses. As a result, Assist has already supported the rapid delivery of efficient, consistent and high-quality public sector communications across more than 200 government organisations.
Through developing and scaling Assist across government, we’ve learnt many lessons and want to share our insight with those facing similar challenges implementing GenAI tools across other organisations. As a result, we have chosen to publish this guide and wider resources, including our Mitigating Hidden AI Risks Toolkit, to support other teams to successfully and safely scale GenAI in their organisations.
By using the evidence-based frameworks and toolkits outlined in these resources, we have scaled Assist to 200+ government organisations in less than a year since cross-government launch, achieving a 70% adoption rate which is increasing every week (as of May 2025). Specific interventions developed based on these frameworks have led to a 180% increase in the completion of AI training, a 23% improvement in users’ confidence using AI at work and the de-risking of over 50 uses for Assist with a range of evidence-based mitigations.
By sharing the methods which have led to these successful outcomes, we hope to contribute to the impactful and ethical use of AI for public good.
Feedback and collaboration
We welcome feedback on the guide as well as opportunities to collaborate with other teams, particularly if you have experience of rolling out AI tools and services.
Get in touch with us by email: gcs@cabinetoffice.gov.uk.
Learn more about how Government Communications is responsibly harnessing innovations including AI to transform government communications.
Foreword
In 2023, Government Communications started the development of Assist, a private and secure AI-powered tool designed to help government communications professionals to harness the benefits of generative AI (GenAI) in their daily tasks. Developed in-house, I am proud that Assist was the first generative AI tool approved for use across the UK government[footnote 2].
As we rolled out Assist, it became clear that successful AI deployment involves more than just the best and brightest technical experts building a good tool. Harnessing AI’s potential requires not only a technical transformation, but a social transition. Successful adoption of AI requires working environments where teams can confidently and safely embrace these powerful new tools. This ‘people’ element – encompassing cultural shifts, skill development and organisational change – is as vital to delivering the benefits of AI as the technology is.
Achieving this requires a thoughtful, ethical and user-centred approach that aligns with our values as public servants. The interdisciplinary team behind Assist comprised data scientists, behavioural scientists, digital specialists, user researchers and evaluation experts.
This guide reflects our learnings so far from scaling Assist[footnote 3]. It provides a robust, evidence-based framework for organisations seeking to scale AI tools safely and successfully. By sharing our experiences and insights, we aim to support other teams to effectively and responsibly implement AI in their organisations and contribute to the use of AI for public good.
This is just the beginning of our journey; as AI technologies continue to evolve, so too will our approach to AI adoption. While we have made significant progress with Assist, we are committed to further pushing the boundaries of responsible innovation to deliver better outcomes for the public we serve.
Simon Baugh
Chief Executive, Government Communications
About this guide
This guide is designed for people involved in the development, delivery or procurement of GenAI tools within organisations. It offers practical advice, useful tools and interdisciplinary insights to help you unlock the benefits of AI tools for your organisation, avoid the common pitfalls of designing technological solutions that go unused and minimise risks associated with AI roll outs.
Intended to complement the AI Playbook for the UK Government and supplement the UK Government’s Service Standard[footnote 4], this guide operates from the assumption that you and your team have identified a business challenge or a problem faced by your target users, and have concluded that a GenAI tool is a good solution to this challenge or problem when assessed against other options for solutions.
By following the simple three-stage framework outlined in this guide – Adopt, Sustain, Optimise (ASO) – you can fast-track and de-risk your organisation’s AI journey, ensuring a smoother transition to AI-enhanced operations. This approach has been informed by the real-world experience of implementing a generative AI tool to thousands of users across over 200 government organisations.
You can use this guide to:
- Plan engagement and communications to drive and maintain AI uptake
- Inform the design of effective AI training programmes for your users
- Develop a robust risk management approach to identify and mitigate potential risks associated with AI rollout using a novel approach we have developed, the Mitigating Hidden AI Risks Toolkit (published separately alongside this guide)
- Create user journey maps[footnote 5] to better understand how your users may interact with your GenAI solutions, and potential barriers to use they may experience
- Design strategies to enable users to embed your solutions within their workflows
- Establish metrics for measuring the success and impact of your GenAI solutions within your organisation
Introduction
Tools powered by Artificial Intelligence (AI) have the potential to transform organisational efficiency and, in government, deliver improved outcomes for the public. These opportunities are clearly outlined in the Government’s recently published AI Opportunities Action Plan[footnote 6]. However, AI transformation will be a huge change for any organisation. Developing a new technology is just the start – the key is then to ensure people use it. These tools can only deliver benefits if members of organisations adopt and use them – not just logging in once or a handful of “superusers”, but many people using them effectively to support their tasks on a regular and consistent basis. This means that implementing any form of user-operated AI tooling is not just a technical transformation, but also a social transition.
Although AI researchers talk of “AI alignment”[footnote 7], the current landscape of AI implementation largely neglects the human element of AI transformation, treating the challenge as predominantly a technological or economic pursuit[footnote 8]. This oversight risks undermining the very efficiency and impact that GenAI promises to deliver.
As a result, when we developed our in-house generative AI tool Assist[footnote 9], we found that there was no guidance on how to practically deliver a successful AI roll out given that:
- People do not necessarily adopt tools that you offer them. There has been very little practical guidance for organisations struggling to bridge the gap between making AI tools available to their employees and regular and high impact use of the tools provided. We need to bridge the gap between technological innovation and human adoption.
- Whilst considerable attention has focused on technical fixes to technical risks (for example, better quality training data to mitigate hallucinations), there was no guidance on how to anticipate and mitigate risks related to the way people, teams and organisations interact with and use AI tools[footnote 10]. We call these ‘hidden’ risks, as often they can be subtle and easy to miss. To deliver positive outcomes, we need to mitigate the ‘hidden’ risks that can’t be resolved through guardrails in tools or naive applications of human-in-the-loop.
Bridging the gap between technological innovation and human adoption
Experience of past technology roll-outs shows that change doesn’t always come easily. AI implementation can’t be solely techno-centric – it must consider the people involved, their needs, and the barriers they may experience to adopting and using AI effectively and safely, to ensure that the benefits can be realised. For example:
- 50% of UK adults say they do not use AI at all in their day-to-day life, with only 5% saying they use it a lot[footnote 11]
- Only 15% of public servants globally have received training on how to leverage AI in their work[footnote 12]
- Workers across sectors have concerns about using AI-based technologies in their organisations, such as how it will impact the quality and security of their jobs or the quality of the services they provide to the public[footnote 13]. Without being addressed, these concerns impact job satisfaction, team morale and staff wellbeing, which will ultimately impact organisational performance[footnote 14][footnote 15].
Overcoming these challenges to implementation goes far beyond doing user research. It requires leaders and interdisciplinary AI delivery teams to work in partnership with people across all levels of an organisation: leaders setting strategic direction, managers implementing new processes, employees adapting to new ways of working, and end-users engaging with AI-powered solutions. AI tools can help enhance workplace productivity and deliver wider benefits, but only if they are embedded effectively into existing daily routines, workflows and team processes. This guide aims to demystify the process of scaling and embedding AI tools within organisations, demonstrating how you can do this effectively by drawing on skills, experiences and perspectives from behavioural and social science, user research, change management and digital design.
Successful AI implementation requires a wide range of skills, experiences and perspectives
Economy and society
Sociological research methods can help to understand how wider societal factors impact on AI adoption within organisations. For example, capturing public attitudes and acceptance of AI applications (e.g. public trust in AI), the role of AI regulation, the impact of media coverage and monitoring industry-wide adoption patterns.
Organisational leaders
Change management frameworks can be used to help establish clear governance, address data protection, security and privacy concerns, and ensure AI implementation aligns with the organisation’s existing frameworks, for example its organisational values.
Managers and teams
Social and behavioural science can help to understand how barriers including and beyond the tool itself make it difficult for end-users to take up an AI tool and continue to use it well. For example, how established team dynamics and culture may impact individual uptake or responsible use and how you can build AI literacy effectively to enable high-quality use of your tools.
Individual end-users
User research can help to understand end-users’ attitudes, perceptions and experiences of using a tool. For example, whether a user finds a tool and its interface useful or easy to use, their perceptions of the quality of a tool’s output, and their assessments about the tool’s benefits (e.g. productivity).
Mitigating ‘hidden’ risks that can’t be resolved through guardrails in tools or naive application of “human in the loop”
AI development and risk management is primarily led by experts focused on risks like algorithm bias, hallucinations, and disinformation. Whilst deepfakes and algorithm mishaps make headlines, the use of technology in fields such as aviation and healthcare has taught us that some of the most significant risks will emerge from more mundane sources; just as an overworked nurse or doctor’s minor data entry error can lead to serious adverse medical consequences, well-intentioned professionals using AI tools for routine tasks could inadvertently create significant risks. Just as most aviation accidents result from factors like miscommunication or maintenance lapses rather than storms or hijackings, AI’s greatest risks will likely come from the build up of seemingly small things we could overlook, not the dramatic scenarios that make headlines.
Since risks arising from the well-intentioned operational use of AI have been comparatively neglected in AI safety discourse, they are also much more likely to fit under the category of risks known as “unknown unknowns”. As a result, they are much less likely to have credible and effective safeguarding strategies to mitigate them. For instance, the most commonly cited mitigation is keeping “a human in the loop”. Whilst human oversight can be effective under the right conditions, there’s little guidance on how to train, support and empower people to be effective AI monitors. Humans, much like machines, are also fallible, albeit for different reasons – for example, they get tired or they experience “off days”.
The path forward therefore requires a fundamental shift in approach. Instead of treating AI safety and AI alignment as purely a technical, or indeed, ethical challenge, we need to understand how these tools interact with organisational systems and human psychology. This requires input from interdisciplinary colleagues who can help us understand not just how to build better tools, but how to implement them effectively and safely within complex organisational environments.
Addressing the gaps – this guide
The guide aims to help AI delivery teams avoid the common pitfalls of designing or procuring tools that go unused or which create negative unintended consequences, which is explored in further depth in this guide’s sister publication, The Mitigating Hidden AI Risks Toolkit.
The framework identifies three key phases that are key for delivering impact: Adopt, Sustain and Optimise (which we will refer to as ASO). This approach goes beyond theory – it is based on proven experience from successfully scaling AI in complex organisations through our work on Assist, which now serves thousands of users across more than 200 government organisations.
In less than a year since Assist’s launch across government, interventions for Assist designed on the basis of the ASO framework have led to (as of May 2025):
- Over 50% of all government communicators across 200+ government organisations have used Assist, compared to a workplace average of 34% according to a recent Google report (PDF, 29.2MB) of AI adoption
- 70% of all government communicators have completed our AI onboarding training, strengthening their AI literacy
- 180% increase in AI training completion as a result of targeted interventions
- Over 30% of our users are logging in to use Assist weekly, with this continuing to increase week-on-week, establishing the habits needed for AI tools to have genuine impact
- 50+ use cases for Assist de-risked with evidence-based behavioural, technical and governance mitigations to enable responsible use
- 23% improvement in users’ confidence using AI at work
A three-phased framework: Adopt, Sustain, Optimise
Overview of the framework
The Adopt, Sustain, Optimise framework equips organisations with an evidence-backed way to help them successfully and safely implement, scale and iterate GenAI tools they are developing and/or deploying. We define successful implementation as the sustained, high-quality use of these tools by the groups you want to use them[footnote 16].
Accordingly, the framework outlines three key phases: enabling the adoption of AI tools, sustaining their usage, and optimising how they are used by individuals, teams and organisations. These phases are related and interdependent. For example, using the tool regularly means that people have more opportunities to learn how to use a tool effectively and build relevant skills in using it (e.g. prompt engineering). This can increase the quality of outputs, and in turn may increase motivation to continue using the tool in future.
Intended to supplement the Service Standard[footnote 17], this novel framework operates from the assumption that a team or organisation has:
- Identified a business challenge or a problem that users (e.g. staff members) are experiencing which can be improved and;
- Concluded that a GenAI tool is a good solution to this challenge when assessed against other possible solutions, such as organisational or resource changes[footnote 18].
Phase 1: Adopt
Aim
To encourage uptake of AI solutions by your people and teams.
Types of actions which may be required by users
Signing up for access to an AI solution; completing onboarding information requests; undertaking mandatory AI training; navigating to and accessing an AI solution for the first time (e.g. logging in).
Why Adopt matters
Many factors shape whether people adopt AI tools[footnote 19] including whether a person thinks a tool will be useful for their specific needs and easy to use[footnote 20]. Some wider factors that impact GenAI adoption include whether a person feels as though they will be competent in using it, whether others they know are using it, whether they think they would enjoy using it, their trust in AI and their AI literacy[footnote 21].
As a result, you shouldn’t assume that the reason people are not taking up your tool is simply that they don’t know about it. Organisations can increase the adoption of their AI solutions by identifying whether factors such as these pose barriers to uptake for the people they want to use the tool, and addressing these in turn.
For example:
- Do your target users think the tool will be relevant or useful for them specifically?
- Do your target users think they have the skills or knowledge to use it well?
- What are their attitudes towards AI[footnote 22], or to the idea of using and integrating AI within their or their team’s workflow?
- Are line managers, senior leaders and other people across your organisation fostering an environment which promotes using your tool?
- Are you providing (or planning to provide) access to support, to help your target users adopt the tool? Do your target users know about the support available?
Phase 2: Sustain
Aim
To ensure that AI applications are used routinely and embedded into people’s everyday tasks, where applicable and relevant, to maximise the impact they can provide.
Types of actions which may be required by users
Regular and routine usage of the tool to support job tasks. For example, logging in, submitting requests to the tool, completing tasks with the tool, using outputs from a tool.
Why Sustain matters
AI-powered tools cannot deliver impact if they are not used. To ensure an AI-powered tool delivers impact, it is important in most cases that a person:
- Uses the tool regularly or consistently. For example, if a tool is designed to be used at a particular step or task within a process, that they use it consistently for that step.
- Uses the tool for a variety of their tasks, where the tool is appropriate or relevant.
Building regular use of an AI-powered tool will require people, teams and organisations to create a new routine or habit for using it and embedding it within their workflow. A habit is a behaviour repeated regularly without conscious planning. Every habit starts with a three-part psychological pattern called a “habit loop”[footnote 23]. This is particularly relevant to internal-facing, “back office” tools, and tools which are designed for general-purpose use.
In order for someone to develop a habit for using tools they need to:
- Be prompted. Habits need to be prompted; an effective prompt needs to be salient and consistent to trigger a person’s memory of using an AI solution. For example, strategies which help individuals build knowledge of an AI tool (e.g. showcasing its use cases) or which remind people to use it (e.g. bookmarking it on your browser).
- Be rewarded or experience relief. This includes positive experiences with using an AI solution (e.g. relief when a task has been made easier), or being rewarded (i.e. by others) for using it for a particular task, which signals to people that using an AI tool has positive outcomes. Research shows that an organisation’s internal support for using AI tools is a critical driver of their continued use[footnote 24].
- Repeat the behaviour. Repeated use helps to form the routine of using the AI solution as part of this “loop”.
Phase 3: Optimise
Aim
To ensure high quality and safe use, to maximise the organisational benefits and minimise risks.
Types of actions which may be required by users
Using the tool effectively and safely; assessing tasks for AI use and disregarding tasks it is not suitable for; using AI for suitable tasks frequently or consistently; quality-assuring accuracy of outputs before using them; developing prompt engineering skills; developing AI literacy skills.
Why Optimise matters
Giving people access to new tools and guidance on how to use them does not mean that they will use them well. Even when well-intentioned, organisations, teams and people can use AI tools in ways which bring about risks: we call these ‘hidden’ risks because they are likely to appear more mundane than the big, salient AI risks highlighted in the media, such as deepfakes and jailbreaking[footnote 25][footnote 26].
Think of aviation safety. While many fear dramatic scenarios like storms or engine failures, research shows that the overwhelming majority (60–80%) of aviation accidents actually stem from ordinary human factors such as stress, fatigue, inadequate training and poor part maintenance[footnote 27][footnote 28]. Similarly, AI’s greatest risks could arise from deceptively routine actions, such as managers not having the time to effectively quality assure outputs before incorporating them into major decisions.
To de-risk GenAI tools we must consider the wider system involved in shaping how people use them. For example:
- Does the “human in the loop” have the right skills, knowledge or expertise to use or oversee it effectively and responsibly?
- Are they time-poor or pressurised, making it more likely they will not be able to quality-assure outputs appropriately?
- Does the organisation provide the psychological safety[footnote 29] to enable people to escalate concerns about the tool (e.g. if they believe that the technology is not being used appropriately)?
- Do you have frameworks in place to anticipate “hidden” risks that aren’t immediately obvious before they have happened?
To be able to de-risk your tools, you need to be able to anticipate the kinds of ‘hidden’ risks that may arise in the first place, so that you can proactively mitigate them. Given that there is a gap in tools available to help organisations to anticipate these ‘hidden’ behavioural and organisational risks and unintended consequences, we have developed a novel AI risk identification and mitigation framework, the Mitigating ‘Hidden’ AI Risks Toolkit, which is co-published in complement to this guide. Using this toolkit, you can:
- Anticipate what ‘hidden’ behavioural and organisational risks could emerge when implementing your AI tools
- Consider how these risks could threaten the success of your AI tool
- Identify steps or solutions you could take to mitigate the risk, and thus enhance the impact of your tool
- Get ideas for how to embed the risk-based approach into your AI team’s roll out procedures based on our experience of doing it in Government Communications
This work builds on existing resources such as MIT’s AI Risk Repository[footnote 30] (a database of AI risks categorised by their cause and type of risk) by providing an approach for AI delivery teams to identify and monitor potential risks in their own AI roll outs, as well as practical strategies for mitigating those risks.
Common barriers and enablers to adopting and using GenAI tools
Understanding the factors which may shape uptake and high-quality use of your GenAI tool is vital, as this will impact the success and safety of your tool’s scaling. Below is a visual user journey highlighting the steps an individual user needs to go through to adopt and use an AI tool well.
Examples of the types of barriers a person may experience at each step are provided. Barriers are obstacles which make adopting and using tools effectively harder, such as if a person thinks a tool will be difficult to learn how to use. This diagram is not exhaustive – you need to do research with your people and teams to understand the individual, team and organisational barriers they could face so that you can develop solutions at each of these levels to help them access and get the most out of your tool. You can also identify ‘enablers’, which are the opposite of barriers: positive factors which make use of your tool more likely (e.g. if your user has already used other GenAI tools, they may be more likely to use yours more easily). To scale your AI tool successfully and safely, you need to remove barriers and take advantage of any enablers.
Solutions can then be developed by considering what needs to be true for the barrier to be removed or for the enabler to be taken advantage of[footnote 31]. For example, do you need to improve the tool, the user, and/or your organisation’s environment, processes or approach? Do not assume that one solution, such as one-off training alone, will address these barriers – often it is a mixture of strategies across these different elements of the system which are most effective for supporting AI adoption and usage. For example, research shows that advocacy from an organisation’s leaders can be a key driver of AI uptake and usage in the public sector[footnote 32].

Illustration outlining the user journey that a person would need to go through to learn about, get access to and use a generative AI tool. For each step of the user journey, example barriers are identified, alongside example solutions to those barriers.
Step one of the journey is for the person to learn about the AI-powered tool. One barrier is they may be concerned about the impact of the tool on their job quality or security. One example solution is to consistently communicate with target users how the tool will support (not impact or replace) them, if this is true to the ambitions of the tool.
Step two of the journey is when a person decides to access (or request access, if necessary) to the tool. One barrier is they don’t think it will be useful for them in their role. One example solution is to create role-specific use cases, examples and testimonials.
Step three of the journey is for the person to complete mandatory requirements for access (if applicable). One barrier is they think the mandatory steps will take too long. One example solution is to provide clear information about time they need to set aside and the benefits of doing so.
Step four of the journey is when a person uses the tool for the first time. One barrier is they do not know where the tool is located. One example solution is to create shortcuts for people to access the tool (e.g. newsletters).
Step five of the journey is when a person identifies a work task AI can help with. One barrier is they do not know what kinds of tasks AI is good for. One example solution is to share resources (e.g. case studies, tips) exemplifying a range of use cases.
Step six of the journey is when a person decides to use the AI-powered tool for the task. One barrier is they think other tools will be better to use. One example solution is to highlight what makes your tool unique to your organisation.
Step seven of the journey is the person inputs prompt(s) into the tool. One barrier is they lack the prompt engineering skills to prompt well. One example solution is to provide high-quality training on how to write prompts effectively.
Step eight of the journey is when a person uses the tool’s output to complete a task. One barrier is they do not believe the output is high quality/useful. One example solution is to show users how they can improve the initial outputs through prompting.
Step nine of the journey is when a person uses the tool routinely for similar tasks. One barrier is they forget to use the tool for the task next time. One example solution is to work with teams to embed tools within task/team workflows.
Step ten of the journey is for the person to extend the use of the tool to other tasks. One barrier is they do not know the full breadth of tasks it can be used for. One example solution is to encourage teams to share how they’re using the tool to inspire use.
The three-stage framework: step-by-step checklist
This section provides an overview of the steps required for each stage of the Adopt, Sustain, Optimise framework, and then provides further detail about each phase in turn. The first part lets you know what activities to consider undertaking in each stage, and the second part gives some practical examples of how following this process helped Government Communications to roll out Assist. These steps complement and extend best practice in user-centred design adopted by digital teams across HM Government using interdisciplinary methods[footnote 34].
Adopt
- Define and monitor the user journey
- Understand the profile of who is and isn’t adopting
- Investigate barriers to adoption
- Test strategies to close adoption gaps
Sustain
- Define successful sustained usage
- Identify barriers to routine use of your tool
- Test strategies to increase sustained use
- Adopt a robust impact measurement approach
Optimise
- Understand how your tool is being used
- Adopt a risk identification, monitoring and mitigation approach
- Develop an effective training, support and feedback offer
- Support leaders across your organisation
Adopt
Tools cannot deliver impact if they are not used. Adopt focuses on developing a thorough understanding of the people you want to use your tool, so that you can support them to adopt it.
Step 1. Define and monitor the user adoption journey
This step aims to establish a clear pathway for users to adopt the tool, ensuring that the process is straightforward and measurable. This extends existing best practice by starting from the moment target users learn about the tool, rather than the moment they access the digital tool.
Activities
- Clearly outline steps users must take to adopt the tool (e.g. become aware, sign up if necessary, undertake any required training) into a journey. There are many ways to approach user journey mapping[footnote 35]. For Assist, we used guidance in Section 1.3 ‘Mapping out behaviours with user journeys’ from the Government Communications guide ‘The Principles of Behaviour Change Communications’.
- Identify ways to simplify steps in the adoption journey to mitigate drop-off
- Establish a metric to define adoption success (e.g. logging into or using the tool as part of a workflow for the first time)
- Track user actions to identify any drop-off
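To make the monitoring activities above concrete, below is a minimal sketch of how drop-off between consecutive adoption steps could be calculated. The step names and counts are hypothetical illustrations, not Assist’s actual figures.

```python
# Minimal sketch: drop-off between consecutive steps of an adoption funnel.
# Step names and counts are hypothetical, not real Assist telemetry.

funnel = [
    ("Invited to get access", 1000),
    ("Completed mandatory training", 620),
    ("Logged in for the first time", 540),
    ("Had two or more chats", 410),
    ("Logged in within the last week", 280),
]

for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    drop_off = 1 - n / prev_n   # loss relative to the previous step
    overall = n / funnel[0][1]  # share of everyone invited
    print(f"{step}: {n} users ({drop_off:.0%} drop-off from '{prev_step}', "
          f"{overall:.0%} of those invited)")
```

Plotting figures like these across the journey produces a funnel similar to Figure 1 below, making points of disengagement easier to spot.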
How we applied this to Assist:
First, our team decided how we would collectively define “adoption”. For example, would we define someone as having adopted the tool if they log in, or only if they log in and send a prompt? There may be no right or wrong way to define it; the important thing is that you agree and are consistent.
We then created a system for measuring and monitoring adoption, mapping the user journey for adopting Assist, highlighting key steps and associated drop-off rates (see Figure 1). This visualisation enabled us to identify points of disengagement and target solutions at these touchpoints to improve adoption rates.
Figure 1. Monitored steps in the Assist adoption journey and associated drop-off rates

Line graph illustrating steps monitored along the Assist AI tool user journey, and demonstrating drop-off rates and attrition along different steps. Steps include being invited to get access to Assist, completing mandatory training, logging in for the first time, having two chats or more and logging in within the last week. The line graph trends downwards.
This approach extends established standards[footnote 36] by zooming out to look at the bigger picture of adoption from beginning to end (e.g. users learning Assist exists, completing our sign up and training processes), rather than just barriers to accessing the tool itself.
Step 2. Understand the profile of who is and isn’t adopting the tool
Identifying who is and is not engaging within your organisation is crucial for addressing inequalities in adoption.
Activities
- Collect a range of demographic and organisational data on adopters, where feasible and considerate of ethical and/or privacy implications
- Monitor wider metrics to build a picture of your users, such as AI literacy and experience
- Identify disparities in adoption rates to address any inequalities (for example, by comparing uptake with any available wider demographic data for your potential user base)
How we applied this to Assist:
To monitor Assist uptake, we put in place an onboarding form which captures details such as Government Communications members’ specialism, digital and AI experience, AI confidence, AI literacy and demographics for equalities monitoring (you can find the questions we developed and used on GOV.UK, published alongside this guide). With this data, we identified a range of insights, including the fact that a large proportion of those signing up had prior experience and confidence with AI tools, suggesting those with low AI confidence were less likely to sign up. This helped us to target our engagement and support at this less experienced group, to provide them with the support to adopt Assist. We are also able to monitor whether uptake varies depending on gender, ethnicity and other characteristics so that we can try to address any imbalances.
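As a rough illustration of how onboarding data like this can be used to spot adoption gaps, the sketch below compares adoption rates across self-reported AI confidence bands. The column names and records are hypothetical, and any real analysis should follow your organisation’s privacy and equalities monitoring rules.

```python
import pandas as pd

# Hypothetical onboarding records: one row per potential user, with a
# self-reported AI confidence band and whether they have adopted the tool.
records = pd.DataFrame({
    "ai_confidence": ["low", "low", "medium", "high", "high", "medium", "low", "high"],
    "adopted":       [False, False, True, True, True, True, False, True],
})

# Adoption rate per confidence band; lower rates flag groups that may need
# targeted engagement and support.
adoption_by_group = records.groupby("ai_confidence")["adopted"].mean().sort_values()
print(adoption_by_group)
```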
Step 3. Investigate barriers to adoption
Understanding the barriers preventing some users from adopting your tool will help to develop targeted strategies to boost uptake. This should consider a range of different groups, for example, those who have not adopted your tool, or those who report having less experience or confidence in using your type of tool.
Activities
- Conduct research (for example, interviews or surveys) with people who have not adopted your tool to explore their reasons for not doing so
- Include a range of groups in this research, such as those with less experience or confidence in using your type of tool
- Repeat this research as adoption grows to capture how barriers evolve and whether your strategies for removing them are working
How we applied this to Assist:
We conducted interviews with people who hadn’t adopted the tool (‘non-adopters’) to explore their reasons for not signing up for Assist after receiving an invitation. While many recognised the value of Assist for Government Communications, this process revealed some barriers to uptake which we have then been able to reduce and/or remove. In particular, one barrier that emerged was that some government communicators perceived a lack of direct relevance of Assist to their roles. As a result, we ensured our communications with government communicators during the ‘Adopt’ phase highlighted the tool’s relevance and practical benefits for a range of tasks and roles.
As Assist’s adoption increased across Government Communications, we used research with newer users to capture how barriers to adoption evolved, and whether our strategies for removing them were working.
Step 4. Test strategies to close adoption gaps
Designing communications and interventions that address identified adoption barriers will help to maximise uptake.
Activities
- Generate and prioritise targeted solutions in your team (and by consulting others, including people who haven’t adopted your tool to see if they think it would work) to close adoption gaps
- Test changes to user communications, the onboarding process, the design of the tool itself, policies and processes, and evaluate if this leads to improvements in adoption rates, to help you adapt your approach – often making things as easy and simple as possible for users can help (e.g. mandatory training is kept short, or signing up is not effortful)
- Ensure you have a robust product roadmap, grounded in feedback from both users and non-users. Often this can be taken for granted, but prioritising features based not only on user needs but also user ‘wants’ can be a key driver for adoption, given that evidence shows people need to feel the tool will be useful for them in order to adopt it.
- Develop a plan for how you will measure and report on whether these solutions are working
How we applied this to Assist:
We tested a range of strategies to boost uptake of Assist, including tailored communications, direct engagement with and provision of early ‘VIP’ tool access for Government Communications leaders and improvements which simplified the adoption process. These strategies led to:
- More than 50% of all government communicators across 200+ organisations having used Assist, compared to a workplace average of 34% according to a recent Google report (PDF, 29.3MB) of AI adoption (as of May 2025)
- A 636% increase in the proportion of government communicators signing up for access to Assist[footnote 37];
- Onboarding rates nearly doubled from 25% to 44%[footnote 38];
- Completion rates of our Assist mandatory training improved by 180%[footnote 39], with 70% of all government communicators having completed our AI onboarding training, strengthening their AI literacy (as of May 2025)
Testing and evaluating uptake strategies provided us with valuable insight about our target users, and informed our wider strategy for communicating about Assist as a tool.
Sustain
For AI tools to deliver their potential impact, users must use them in a routine and embedded way in their day-to-day job tasks, not just once or twice and never again. Steps within Sustain are critical for facilitating the right structures and environment within your organisation to help users build the habit of using your AI solution in their workflow.
Step 1. Define successful sustained usage
This step establishes clear metrics to evaluate users’ routine use of the tool.
Activities
- Identify metrics that will define a ‘use’ of the tool, such as log-ins or messages sent
- Define what will be accepted as ‘sustained’ usage for your tool – this can evolve over time as your tool/service matures and it becomes more embedded within the workflows of your target users
- Set up an analytics dashboard to monitor usage metrics comprehensively
How we applied this to Assist:
To maximise Assist’s potential, we established clear metrics for monitoring sustained usage. Our team defined “use” and “sustained use”, agreeing on a combination of metrics, including log-ins, sessions, and prompts sent to the tool, to provide a comprehensive and nuanced view of how Government Communications professionals are using the tool.
We defined sustained use as logging in at least once per week, aggregated monthly, to reflect the regular use required to build the routine of using Assist while acknowledging that usage will vary depending on members’ roles and tasks.
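One possible way to operationalise this definition is sketched below: a user counts as sustaining use in a given month if they logged in during four or more distinct weeks of that month. The login events are invented, and the exact rule and threshold should reflect your own definition.

```python
import pandas as pd

# Hypothetical login events: one row per login, per user.
logins = pd.DataFrame({
    "user": ["a", "a", "a", "a", "b", "b"],
    "timestamp": pd.to_datetime([
        "2025-05-01", "2025-05-08", "2025-05-15", "2025-05-22",
        "2025-05-02", "2025-05-20",
    ]),
})

logins["month"] = logins["timestamp"].dt.to_period("M")
logins["week"] = logins["timestamp"].dt.to_period("W")

# Number of distinct weeks with at least one login, per user per month.
weeks_active = logins.groupby(["user", "month"])["week"].nunique()

# Flag sustained use: logged in during at least four distinct weeks that month.
sustained = (weeks_active >= 4).rename("sustained_use")
print(pd.concat([weeks_active.rename("weeks_with_login"), sustained], axis=1))
```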
Step 2. Identify barriers to routine use of your tool
Understanding the challenges users face in maintaining regular, routine use of the tool is important.
Activities
- Conduct regular primary research and feedback sessions with users
- Analyse usage data to understand how and why people are using the tool; this may be more relevant if your tool has a wider variety of use cases versus if your tool is more narrowly focused on a specific task within a workflow
- Develop a user journey map[footnote 40] to identify where further support may be needed
How we applied this to Assist:
To maintain regular, routine use of Assist, it was crucial we understood users’ ongoing experience of the tool. Our team conducts a regular cycle of focus groups, user experience interviews and feedback surveys to gather insight into users’ experiences and suggestions for improvement which could support their regular use.
We developed a user journey map outlining the steps required for users to embed Assist into their day-to-day work, and then identified potential barriers to each of these steps (See Figure 2 below). This helped us to understand what needed to be true from a user’s perspective in order for them to use Assist regularly. For example, in order for a person to identify a work task that AI tools could help with, that person needs to have the knowledge of what LLMs can be used for. We then mapped our existing support activities, such as introductory webinars, onto this journey to both identify where our existing efforts are removing these barriers, and where there are gaps in our provision, where we could target support further to help users maintain regular, routine use of Assist.
Figure 2. User journey for sustained use of Assist, with potential barriers mapped.

User journey for sustained use of Assist, with potential barriers mapped. For example, one barrier to a person identifying a work task AI can help with is that they need to know which types of tasks different generative AI models can be used for, and what their limitations are.
Step 3. Test strategies to increase sustained use
This step involves implementing targeted strategies to address identified barriers to routine use.
Activities
- Develop, deploy and iterate solutions such as reminders, resources and support based on user feedback
- Select appropriate evaluation methods to learn what works and iterate your approach to driving sustained use, acknowledging that a variety of strategies may be required.
- Develop a plan for how you will measure and report on whether these solutions are working
How we applied this to Assist:
As a team, we identified a range of strategies to reduce the barriers Government Communications professionals face to using Assist regularly, and are evaluating their impact. While we have tested and implemented a range of strategies to drive Assist’s regular usage, some key initiatives include:
- Early access for leaders: Alongside giving a range of Government Communications professionals access to the tool as part of the pilot, we also gave senior Government Communications leaders, including Directors of Communications, early Private Beta access to Assist to enable their active engagement and advocacy. Feedback showed that this enabled them to get familiar with the tool as well as plan how they could embed Assist in their teams’ workflows when the tool was rolled out more widely.
- Mini Assist webinars: Hosting short, interactive sessions that highlight best practices for using Assist with Government Communication-specific tasks, helping users learn how to use Assist across their job and providing a personal touchpoint for users to get support.
As a result of our Sustain interventions, 30% of Assist users are logging in to use Assist on a weekly basis, with this continuing to increase week-on-week, establishing the habits needed for AI tools to have genuine impact (as of May 2025).
Step 4. Adopt a robust impact measurement approach
This focuses on developing a robust approach to evaluating the benefits the tool brings for your organisation.
Activities
- Develop a bespoke, mixed methods approach to measuring efficiency benefits, based on:
- The nature of your tool
- The kind of work the tool is supporting
- Your organisation’s AI needs
The Evaluation Task Force’s recent guidance on evaluating the impact of AI tools can help with this.
How we applied this to Assist:
To measure Assist’s impact, in line with the UK Government Efficiency Framework[footnote 41], our team defines efficiency as using fewer resources (in the context of using Assist) to achieve the same or greater outputs, or achieving greater outputs while using the same amount of resource. Productivity is thus measured, at a high level, as how many units of output are produced from one unit of input. As such, improved productivity is a means to achieve greater efficiency. Wider government frameworks define productivity as effectively using resources to achieve relevant outputs and outcomes[footnote 42][footnote 43].
There are known limitations with using self-report questions to measure impact alone, such as asking people how much time they’ve saved by using a tool.
For example, it is cognitively challenging for people to estimate time savings; doing so requires them to anticipate how long a task would usually take them, and then consider how a tool helped them cut that time down. Research finds that people often overestimate time savings. We therefore chose to use multiple methods to build a more robust and holistic picture of how Assist is providing value to our membership. Our evaluation approach aligns with recent guidance issued by the Evaluation Task Force for evaluating the impact of AI tools[footnote 44].
Our team’s strategy includes:
- Quantitative surveys measuring Government Communications professionals’ self-reported efficiency savings, with estimates of how long tasks would have taken them without using Assist versus how long they perceived it took them with Assist.
- Quasi-experimental exercises comparing specific task completion times between a control group and a test group using Assist.
- Qualitative primary research with Government Communications professionals using Assist to understand their use of the tool for their role, their experiences of using it and how it is adding value (e.g. how are they making use of any efficiencies), in order to help us build a theory of change for how Assist is delivering its outcomes, and inform how we iterate our delivery of Assist to ensure we are meeting its objectives.
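For illustration only, the sketch below shows how self-reported estimates from such a survey might be aggregated into a headline time saving. The tasks and minutes are invented and, as noted above, self-reported figures should be triangulated with the other methods in this strategy.

```python
# Hypothetical survey responses: estimated minutes a task would have taken
# without Assist versus the respondent's estimate of time taken with it.
responses = [
    {"task": "summarise report",   "minutes_without": 90, "minutes_with": 25},
    {"task": "draft press notice", "minutes_without": 60, "minutes_with": 30},
    {"task": "summarise report",   "minutes_without": 45, "minutes_with": 20},
]

total_without = sum(r["minutes_without"] for r in responses)
total_with = sum(r["minutes_with"] for r in responses)
saved = total_without - total_with

print(f"Self-reported time saved: {saved} minutes "
      f"({saved / total_without:.0%} reduction across {len(responses)} tasks)")
```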
Optimise
While promoting the adoption and regular use of AI tools is essential, it is equally crucial that users use these tools effectively and safely. Effective use of AI enables users to harness its advantages while minimising potential risks. As such, the Optimise phase centres on two interrelated objectives: reducing the ‘hidden’ behavioural and organisational risks associated with people using AI tools, and maximising the benefits these tools can offer.
Although strategies such as “human in the loop” and reliance on terms and conditions have been popularised as an approach to AI risk mitigation, these are insufficient alone without supporting interventions as users may lack the necessary expertise, time or authority to critically assess AI outputs or challenge their use. As a result, it is important that you proactively anticipate, monitor and mitigate potential risks associated with deploying your AI tools through a bespoke optimisation strategy tailored to the nature of your tool(s).
Step 1. Understand how your tool is being used
This step involves using data and research to understand whether your tool is being used as intended.
Activities
- Analyse user behaviour and use cases to understand how and why your users are using your tool
- Identify instances where the tool is used beyond what you define as its intended scope. This will help you to identify whether this reflects users being responsibly experimental and innovative, or if they are using your tool for tasks it is not optimised or appropriate for
- Conduct user research to understand how usage aligns with the tool’s intended purpose
How we applied this to Assist:
To understand how Government Communications professionals were using Assist, we started by analysing a randomly selected, anonymous subset of messages inputted into the tool (users’ prompts are monitored anonymously and are only visible to six named members of the team delivering Assist to inform the future development of the tool and mitigate against misuse).
From this analysis, we identified the most common uses for Assist. This included tasks such as summarising lengthy documents, policies and reports and helping to develop first drafts of communications and marketing content which could be further honed.
This enabled us to understand what people wanted to use Assist for, and we then tailored our training offer to this, so that we could support Government Communications professionals to use Assist for these tasks as effectively as possible.
Informed by this initial work, our data scientist built our classification capabilities, enabling the team to monitor messages inputted into the tool for common use cases and misuse.
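The classification capability itself is not described in this guide, so the sketch below is purely illustrative: a naive keyword-based tagger showing the general idea of labelling anonymised prompts with candidate use cases (and flag terms) for monitoring. The labels and keywords are hypothetical, and a production system would need a far more robust approach.

```python
# Illustrative only: naive keyword matching to tag anonymised prompts with
# candidate use-case labels. Labels and keywords are hypothetical examples.
USE_CASE_KEYWORDS = {
    "summarisation": ["summarise", "summary", "key points"],
    "drafting": ["draft", "write", "press release"],
    "flag_for_review": ["password", "personal data"],  # hypothetical flag terms
}

def classify(prompt: str) -> list[str]:
    """Return the labels whose keywords appear in the prompt text."""
    text = prompt.lower()
    labels = [
        label
        for label, keywords in USE_CASE_KEYWORDS.items()
        if any(keyword in text for keyword in keywords)
    ]
    return labels or ["unclassified"]

print(classify("Please summarise this 40-page policy report into key points"))
print(classify("Draft a press release announcing the new campaign"))
```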
Step 2. Adopt a risk identification, monitoring and mitigation approach
This step focuses on identifying potential risks to high-quality use, and embedding monitoring and mitigation as part of the tool’s deployment.
Activities
- Identify possible rollout risks using a risk identification tool, such as Mitigating Hidden AI Risks Toolkit (published separately) which is designed to help AI delivery teams identify and mitigate AI risks that could arise from their tools and services
- Work with your team to identify practical ways to embed the approach within team processes. This is critical, as if it is not pragmatic and becomes too onerous, the initial work will be done but will soon be forgotten about, creating delivery risks.
- Ensure risks beyond data security and GDPR compliance are anticipated, monitored and mitigated
- Apply an ethical framework to prioritise which risks your team should address first
- Identify ways to monitor and take ownership for risks within existing team processes to make it feasible to deliver and share responsibility
- Test and evaluate strategies to optimise use
How we applied this to Assist:
We looked at how the use cases identified for Assist could backfire and then took steps to mitigate these possible backfire effects. We found that there were no frameworks to help us anticipate what could go wrong, so we designed our own based on the analysis we did. This is our Mitigating Hidden AI Risks Toolkit, which has been published separately. This process enabled us to surface over 90 hypothetical risks across six themes, enabling us to proactively consider each in turn and identify monitoring and mitigation strategies as part of our Assist risk management approach.
For example, we identified one category of risks related to how AI tools may not be effectively embedded into teams or processes which we called ‘Workflow and Organisational Challenges’. Leaders were identified as an important group to engage in order to mitigate these risks, so we delivered bespoke sessions designed for those with team management responsibilities (see more information below, under point 4).
To embed our risk management approach effectively within team processes, we worked with the project team to assign members to the six different risk themes based on their areas of interest and nature of their role (for example, our AI Engineer and behavioural scientist co-led on risks related to ‘Quality Assurance’ as this is both a tool and a human challenge). This not only spread the resource required to effectively monitor and mitigate risks, but also empowered everyone in the team to embrace mitigation and embed it within their day-to-day work as part of delivering the Assist tool, making it more practical, less onerous, and enabling us to identify a greater variety of innovative strategies we could implement to de-risk Assist and drive its high-quality use.
By using our Mitigating Hidden AI Risks Toolkit to identify risks and implement mitigations, we have:
- De-risked 50+ use cases with evidence-based behavioural, technical and governance mitigations to enable responsible use
- Improved users’ confidence in using AI in their work by 23% as a result of our training and support offer and the design of Assist as an easy-to-use tool.
Step 3. Develop an effective training, support and feedback offer
This step aims to provide users with the skills and knowledge to use tools to best effect and to mitigate harm.
Activities
- Develop continuous training that offers actionable guidance beyond basic warning messages
- Offer tailored support to address specific user needs and challenges
- If you are using human oversight to maintain output quality – known as “human in the loop” – make sure the people providing that oversight are equipped to do this effectively by ensuring you give them:
- Relevant expertise, knowledge and skills to critically evaluate an output
- Adequate time to review the output
- Authority to challenge outputs or how AI-generated outputs are being used[footnote 45]
- Provide opportunities for users to give feedback on tool improvements
How we applied this to Assist:
We recognised that many of the risks we identified, such as using Assist for tasks it is not appropriate for, could be mitigated by providing users with regular and consistent high-quality training and support.
As a result, we were able to use the risks we identified to design our mandatory training so that it had the right content in it. For example, we made it really easy to understand how the technology works (i.e. “like an enhanced form of predictive text”) so that users are better able to assess what tasks it is appropriate for (i.e. prompting a user to consider “maybe it won’t be good for maths then”). This meant that training wasn’t just a tick-box exercise, but genuinely supported users to use Assist effectively.
To support ongoing training and AI skill development, we also launched a series of short webinars focused on providing Assist users with best-practice guidance for using the tool for specific Government Communications tasks.
Step 4. Support leaders across your organisation
Ensure leaders understand the tool’s capabilities and limitations to facilitate effective deployment across teams.
Activities
- Provide bespoke training to leaders on AI’s capabilities, limitations and risks
- Develop resources to aid leaders in making informed decisions about how the tool is used in their teams and what professional expertise and experience they should be looking for when assembling AI delivery teams (in line with guidance from the AI Framework for Government[footnote 46] and Government Digital Service (GDS)[footnote 47])
How we applied this to Assist:
To support leaders across Government Communications, we conducted bespoke sessions for those with team management responsibilities, where we:
- Provided leaders with a clear, easy-to-understand overview of how LLMs work to build foundational knowledge
- Highlighted the uses, benefits and constraints of Assist, enabling them to set realistic expectations about how it could be used appropriately in their teams
- Increased awareness of the influential role they play as a leader in ensuring effective and responsible use of Assist
- Provided them with strategies for embedding Assist effectively in their teams (see the section on Tips for Leaders)
- Provided an opportunity for questions: this enabled them to voice concerns and get answers to questions they anticipated their teams would ask.
Tips for Leaders
Leaders in organisations who want to roll out AI-powered tools are critical to the success of these initiatives. Leaders must cultivate an organisational environment that provides the right culture and conditions for innovating with AI tools while doing so responsibly. As a result, successful AI adoption requires leaders to strike the right balance between innovation and robust risk management – getting this balance right can mean the difference between transformative success and costly failure.
1. Lead by example
Get familiar with the tool and use it in your own work, where possible. Your active engagement and role modelling of the benefits will encourage others to adopt and use it. Using the tool is also critical to enable you to identify how the tool can best support the work your team or organisation delivers, as well as to spot any organisational risks. It is important you understand the capabilities and limits of your tool, so that you can ensure it is deployed for the right organisational tasks.
2. Foster a culture of responsible innovation and continuous learning
Encourage exploration and experimentation with your AI tool, where it is appropriate and responsible to do so given the nature of the tool. Clearly outline what your AI-powered tool should and shouldn’t be used for, as this will give your teams the reassurance and safety to experiment within its defined scope. Ensure your teams know that responsible use is a priority for you. Provide training and resources, and create opportunities for your staff to share their experiences and best practices, promoting a collaborative learning environment.
3. Recognise and reward use
Acknowledge teams and individuals who use the AI tool effectively during your interactions with them. Spotlight where they have used it appropriately and safely, for example where they have undertaken a quality-assurance or review process to fact-check outputs.
4. Invest in AI literacy and usage training
Equip your teams with the skills they need to use your tool effectively. For example, ensure that user training provides specific guidance and advice on how to mitigate the risks associated with specific tasks or job roles (rather than simply flagging risks such as algorithmic bias without concrete instruction on whether and how to prevent them causing harm). Continuous training, delivered through a variety of formats, will help to mitigate risks and empower staff to use the tool where it can have the most impact for your organisation.
5. Allocate adequate resources to your delivery teams
Ensure that your AI delivery teams are adequately resourced, so that they can balance ongoing delivery tasks with essential activities such as risk assessment, impact measurement and user feedback analysis. In line with the AI Framework for Government[footnote 48], you should ensure that the AI team is diverse, comprising interdisciplinary skills and mindsets, including data scientists, machine learning experts, social and behavioural scientists, and user researchers, to guarantee a well-rounded approach to AI implementation.
6. Implement robust monitoring, evaluation and risk management methods
Establish robust monitoring systems so that you know who is and is not using your tool, how it is being used, what your users’ experiences of it are, and whether its usage is driving the clearly defined business outcome you have set for it. This needs to be cyclical, with insights from evaluation used to iterate and improve service delivery. For this to be robust, your AI delivery team should measure success not only in terms of the business outcome, but also in terms of wider dependencies, including the potential efficiency and quality impacts that AI adoption may bring about. Best practice for conducting this evaluation is available in this guide, as well as within the Evaluation Task Force’s guidance on evaluating the impact of AI tools[footnote 49]. Having this data will help you to identify and direct areas for improvement and iteration, so that your tool meets your organisation’s needs and delivers impact.
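As an illustration of the kind of monitoring this involves, the minimal sketch below shows one way a delivery team might begin to track adoption, recent active use and dormancy. It assumes a hypothetical in-memory usage log (`sessions`) and onboarding list (`onboarded`); it is not the method used for Assist, just a starting point under those assumptions.

```python
# Minimal sketch: adoption and active-use metrics from hypothetical data.
# 'onboarded' and 'sessions' are illustrative stand-ins for whatever user
# directory and usage log your own tool actually produces.
from datetime import date, timedelta

onboarded = {"user_a", "user_b", "user_c", "user_d"}   # everyone given access
eligible_population = 6                                 # total target users in scope
sessions = [                                            # (user, date of use)
    ("user_a", date(2025, 5, 1)),
    ("user_a", date(2025, 5, 20)),
    ("user_b", date(2025, 5, 22)),
]

# Share of the eligible population who have been onboarded
adoption_rate = len(onboarded) / eligible_population

# Users with at least one session in the last 7 days, and users who never used the tool
week_ago = date(2025, 5, 26) - timedelta(days=7)
weekly_active = {user for user, day in sessions if day >= week_ago}
dormant = onboarded - {user for user, _ in sessions}

print(f"Adoption rate: {adoption_rate:.0%}")
print(f"Active in the last 7 days: {len(weekly_active)}")
print(f"Onboarded but never used the tool: {sorted(dormant)}")
```

Even a simple summary like this, refreshed regularly, gives you the who, how often and where-to-intervene view that the evaluation cycle described above depends on.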
7. Establish a robust risk management strategy for your tools
If providing a general-purpose tool, ensure you know how and for what your tool is being used, so that you can steer people away from poor uses and towards better ones. This can provide an early warning signal if staff are using the tool in ways it wasn’t designed for and isn’t appropriate for. Ensure risks beyond data security and GDPR compliance are anticipated, monitored and mitigated through a comprehensive risk management strategy for AI implementation. Your AI delivery teams can use our Mitigating Hidden AI Risks toolkit, published alongside this guide, to do this.
8. Maintain and enhance organisational safeguards
Ensure robust organisational oversight is in place to mitigate the likelihood of unintended impacts. For example, maintain sign-off processes so that the quality of work is upheld. Regularly review and update safeguards to ensure responsible use across your organisation.
9. Ensure feedback is actually feeding back to your AI strategy
Establish clear channels for staff to feed back on your provisions and use this to inform your AI strategy. Be explicit that constructive feedback is encouraged and proactively demonstrate how you are acting on it. Ensure that inclusive feedback mechanisms gather input from both AI tool users and non-users, avoiding selection bias and helping you design tools that cater to a broad range of preferences beyond early adopters.
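As a simple illustration of gathering input beyond early adopters, the sketch below draws a feedback sample from both users and non-users. The staff lists and sample sizes are hypothetical, and this is not a description of the process used for Assist, only one way to avoid hearing solely from enthusiastic users.

```python
# Minimal sketch: invite feedback from users AND non-users of the tool,
# so that responses are not limited to early adopters. Names are hypothetical.
import random

all_staff = {f"staff_{i}" for i in range(1, 101)}
tool_users = {f"staff_{i}" for i in range(1, 41)}   # illustrative: 40 have used the tool
non_users = all_staff - tool_users

random.seed(42)  # reproducible example
invitees = random.sample(sorted(tool_users), 10) + random.sample(sorted(non_users), 10)

print(f"Inviting {len(invitees)} people: "
      f"{sum(i in tool_users for i in invitees)} users, "
      f"{sum(i in non_users for i in invitees)} non-users")
```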
Scope and limits of this guide
This guide focuses primarily on the human aspects of implementing and scaling generative AI (GenAI) tools within public sector organisations. Below, we outline the scope and limits of this guide to support teams to apply it in practice to their own generative AI tool roll-outs.
What do you mean by a GenAI tool?
Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on[footnote 50].
Accordingly, while we do not provide a precise definition of a ‘GenAI tool’, we use the term to refer broadly to the class of end-user software applications that use these generative artificial intelligence algorithms to provide users with generated content for a specific task, based in part on the inputs the user has provided. Others define these as an “end user tool…whose technical implementation includes a generative model based on deep learning”[footnote 51]. Well-known examples of GenAI tools within this scope include ChatGPT[footnote 52], Google Gemini[footnote 53] and Perplexity[footnote 54]. In this guide, we use terms such as ‘tools’, ‘services’ and ‘technologies’ interchangeably to refer to these forms of generative AI-powered, user-operated applications.
What kinds of GenAI tools is this guide most relevant for?
Due to our focus on the human aspects of implementing and scaling GenAI tools within organisations, this guide will be most relevant for teams implementing AI tools that are intended to be used directly by people. This is in contrast with more automated, backend AI systems which, while oversight is still important, typically involve less direct human access.
Is this guide applicable to tools which use wider types of AI, beyond GenAI?
The insight and framework outlined in this guide are based on our experience of rolling out one tool which uses generative AI, and have been shaped by insight and input from those across wider organisations working in this space. Until the framework has been tried and tested across wider types of AI tools, we have kept the scope of this guide to tools which specifically use generative AI, as the insights we have gathered may not fully apply to all forms of AI.
However, we believe this guide could also be useful for teams developing tools which utilise wider forms of AI. As a result, we welcome feedback from teams developing wider AI tooling to help us validate and refine the framework to make it as useful as possible.
Is the scaling framework applicable to organisations beyond the public sector?
This guide focuses on supporting the successful and safe scaling of AI tools within a public sector context. Accordingly, not all aspects or challenges identified in this guide may be applicable to the private sector.
However, our engagement with private sector partners suggests that some of the challenges and barriers faced in rolling out new GenAI tools and services within large public sector organisations are likely to be common to many large organisations, whether public or private. As such, we believe the general framework outlined could be useful to teams within the private sector.
Acknowledgements
This guide was written by Holly Marquez and Dr Moira Nicolson of the Behavioural Science Team in Applied Data and Insight (Government Communications, Cabinet Office).
To make this guide as applicable and useful as possible for people developing, implementing and scaling GenAI-powered tools across the public and private sectors, we engaged and consulted a diverse range of stakeholders.
We are grateful for the contributions of these individuals and organisations. Their insight and expertise were instrumental in shaping the Adopt, Sustain, Optimise framework and this accompanying guide.
Government Communications
Conrad Bird, Director, Strategy and Campaigns
Dr Amanda Svensson, Deputy Director of Applied Data and Insight
Robin Attwood-Martin, Head of Applied Innovation
Marcus Melton, Applied Innovation Lead and Assist Product Manager
Kiran Chahal, Applied Innovation Lead
Dr Ashley Poole, Assist Lead Developer and AI Engineer
Rishi Moulton, Insight and Evaluation Manager
Abby Wade, Applied Innovation Manager
Hayley Higgins, Head of Member Strategy and Services, and the Member Strategy and Services Team
Steven Pirrie, Visual Designer
Kayleigh Lewis, Lead Content Designer
Lisa Sutherland, Applied Innovation Lead
Wider contributors
Professor Susan Michie, Director of the Centre for Behaviour Change, University College London (UCL) and Co-Director of Behavioural Research UK (BR-UK)
Deborah Morgan, PhD Researcher and AI Engagement and Analysis, Technology and Strategy Insights, Government Office for Science
Ellie Haberlin-Chambers, Implementation Advisor, Department for Health and Social Care (DHSC)
Katie French and Charlotte Ryall, Senior User Researchers, Incubator for Artificial Intelligence (i.AI, Department for Science, Innovation and Technology)
Ann Borda, Ethics Fellow, The Alan Turing Institute
Professor David Leslie, Head of Ethics and Responsible Innovation, The Alan Turing Institute
Antonella Maia Perini, Research Associate, The Alan Turing Institute
Robecca Hogg, Senior Responsible AI Advisor, Defence Science and Technology Laboratory
Ed Butcher, Senior Principal Analyst, AI Concepts and Exploitation Team, Defence Science and Technology Laboratory (DSTL)
Stuart Hossack, Behavioural Insight Lead, HM Courts and Tribunal Service
Rebecca Furlong, Media Specialist, Construct Education
Ethan McQuaid, User Experience Analyst and Researcher, Cabinet Office
Feedback and collaboration
We welcome you and your teams’ thoughts and feedback on this guide, particularly your experience applying this guide to help with scaling your own GenAI tools. We also welcome opportunities to collaborate with other teams.
Input from others beyond Government Communications and the Cabinet Office has been and will continue to be invaluable to the project’s continuous development.
Get in touch with us by email: gcs@cabinetoffice.gov.uk.
Footnotes

-
Learn more about how Government Communications is responsibly harnessing innovations including AI to transform government communication: Innovating with Impact Strategy, Generative AI policy and Framework for Ethical Innovation ↩
-
The AI Playbook for Government provides the public sector with guidance on the effective and responsible use of a wide range of AI technologies, whilst the Service Standard outlines a set of 14 points that guide the development and delivery of great public services. ↩
-
There are many ways to approach user journey mapping. You can choose the approach which you feel suits your project best. For Government Communications’ work on Assist, we followed the approach to user journey mapping outlined within the Government Communications publication The Principles of Behaviour Change Communications in Section 1.3 ‘Mapping out behaviours with user journeys’. The Government Digital Service (GDS) also outlines other useful approaches to user journey mapping. ↩
-
Department for Science, Innovation and Technology (2025) AI Opportunities Action Plan ↩
-
See, for example, this blog from IBM and this academic paper from researchers at MIT, UC Berkeley and the University of Cambridge. ↩
-
This is evident in the way that mainstream AI research defines “AI alignment”, namely as “a field, focused on the technical project of ensuring an AI system acts reliably in accordance with the values of one or more humans”. Xuan et al (2024). See, for example, this blog from IBM and this academic paper from researchers at MIT, UC Berkeley and the University of Cambridge. ↩
-
The Government Communications-bespoke conversational AI tool. Learn more in our Innovating with Impact Strategy ↩
-
For example, evidence has found that when using generative AI to develop ideas for a story, use of AI-powered tools enhanced individual creativity, but the stories developed using generative AI were more similar to each other than human-generated stories. This has implications for innovation, and highlights the risks of over-reliance on AI-powered tools. Doshi & Hauser (2023) Generative artificial intelligence enhances creativity but reduces the diversity of novel content ↩
-
ONS (2023) Understanding AI uptake and sentiment among people and businesses in the UK: June 2023 ↩
-
Apolitical (2024) Building the AI-Ready Government (PDF, 28MB) ↩
-
ONS (2023) Understanding AI uptake and sentiment among people and businesses in the UK: June 2023 ↩
-
Ai (2024) The impact of perceived AI replacement on employee job satisfaction: exploring the mediating role of self-esteem ↩
-
Nazareno & Schiff (2021) The impact of automation and artificial intelligence on worker well-being ↩
-
Sustained and/or high-quality use of GenAI tools will vary subjectively depending on the tool, the task it is being used for and the context in which a person is operating when using it. We consider high-quality use to include both effective prompt engineering and low-risk use cases/applications of generated content (e.g. behaviours such as verifying facts before using them). ↩
-
This scoping and analysis of the existing context is essential. In line with the Service Standard, it is vital that end users in the system are consulted as early as possible, well before the Adopt phase, to robustly understand the problem faced by users and ensure that the solution you’ve identified is the right one and has the greatest chance of addressing the problem, before resources are invested. ↩
-
Bianco (2021)(PDF, 4.6mb) Overcoming social barriers of AI adoption ↩
-
Davis (1989) as found in Marangunic & Granic (2014): Technology acceptance model: a literature review from 1986 to 2013 ↩
-
Dahri et al (2024): Extended TAM based acceptance of AI-Powered ChatGPT for supporting metacognitive self-regulated learning in education: A mixed-methods study ↩
-
It is worth noting that AI-hesitant attitudes alone may not itself prevent someone from taking up or advocating for their team using a GenAI tool – it may just make them less likely to do so. For example, an organisational leader may still direct their team to use an available tool if they are being asked to promote it by their seniors, even if their own attitudes towards AI are hesitant or sceptical. ↩
-
Neal, Wood, Labrecque & Lally (2012) How do habits guide behaviour? Perceived and actual triggers of habits in daily life. ↩
-
Bianco (2021)(PDF, 4.6MB) Overcoming social barriers of AI adoption ↩
-
FOIS (2024) Generative AI Models: Opportunities and Risks for Industry and Authorities (PDF, 1,146KB) ↩
-
Capraro et al (2023) The Impact of Generative AI on Socioeconomic Inequalities and Policy Making ↩
-
Shappell et al (2006) Human error and commercial aviation accidents: A comprehensive, fine-grained analysis using HFACS ↩
-
Bruno, Walker and Abujudeh (2015) Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction ↩
-
Psychological safety is defined as “the belief that the workplace is safe for interpersonal risk taking”. For more information, see Frazier (2017) Psychological safety: a meta-analytic review and extension (PDF, 294KB) ↩
-
Solutions can be diverse. You can use publicly available tools such as the Behaviour Change Theory and Techniques Tool and the EAST framework to help you identify solutions to remove barriers. As an example, improving the tool may look like introducing (or removing) features or redesigning interface elements based on user feedback on what is and is not working, where the goal is to enable users to use the tool effectively and safely. Improving the user often needs to go beyond one-off training - for example, providing demonstration opportunities or putting in place a strong support offer to help people learn how to use your tools effectively in manageable chunks for specific tasks. Improving the organisation’s environment, processes or approach could include ensuring organisational leaders are informed enough to accurately steer their teams in using your tool effectively and safely, or ensuring your communications about the tool are clear, reaching your users and are well-received (for example, your target users do not feel the tool is designed to “replace” them, which is an identified risk) ↩
-
Valle-Cruz, Garcia-Contreras & Munoz-Chavez (2024) Leadership and transformation in the public sector: an empirical exploration of AI adoption and efficiency during the fourth industrial revolution ↩
-
The ten steps of the user journey, each with an example barrier and an example solution:
1. The person learns about the AI-powered tool. Barrier: they may be concerned about the impact of the tool on their job quality or security. Example solution: consistently communicate to target users how the tool will support (not impact or replace) them, if this is true to the ambitions of the tool.
2. The person decides to access (or request access to, if necessary) the tool. Barrier: they don’t think it will be useful for them in their role. Example solution: create role-specific use cases, examples and testimonials.
3. The person completes mandatory requirements for access (if applicable). Barrier: they think the mandatory steps will take too long. Example solution: provide clear information about the time they need to set aside and the benefits of doing so.
4. The person uses the tool for the first time. Barrier: they do not know where the tool is located. Example solution: create shortcuts for people to access the tool (e.g. newsletters).
5. The person identifies a work task AI can help with. Barrier: they do not know what kinds of tasks AI is good for. Example solution: share resources (e.g. case studies, tips) exemplifying a range of use cases.
6. The person decides to use the AI-powered tool for the task. Barrier: they think other tools will be better to use. Example solution: highlight what makes your tool unique to your organisation.
7. The person inputs prompt(s) into the tool. Barrier: they lack the prompt engineering skills to prompt well. Example solution: provide high-quality training on how to write prompts effectively.
8. The person uses the tool output to complete a task. Barrier: they do not believe the output is high quality or useful. Example solution: show users how they can improve the initial outputs through prompting.
9. The person uses the tool routinely for similar tasks. Barrier: they forget to use the tool for the task next time. Example solution: work with teams to embed tools within task/team workflows.
10. The person extends their use of the tool to other tasks. Barrier: they do not know the full breadth of tasks it can be used for. Example solution: encourage teams to share how they’re using the tool to inspire use. ↩
-
In line with HMG Service Standard ↩
-
There are many ways to approach user journey mapping. You can choose the approach which you feel suits your project best. For Government Communications’ work on Assist, we followed the approach to user journey mapping outlined within the Government Communications publication The Principles of Behaviour Change Communications in Section 1.3 ‘Mapping out behaviours with user journeys’. The Government Digital Service (GDS) also outlines other useful approaches to user journey mapping. ↩
-
For example, as per the Service Standard ↩
-
Measured over two weeks following the delivery of emails to Government Communications members signed up to the Government Communications newsletter inviting them to get access to Assist. For the benefit of rapid test and learn, this analysis was crude, whereby we compared the proportion of members who signed up for access to Assist in the two weeks prior to the emails being delivered versus the proportion in the two weeks following the email (i.e. pre vs post, expressed as relative change). While we intentionally limited wider activities with target users (e.g. talks, demonstrations) during the measurement period to reduce confounding factors, this specific analysis is unable to isolate the impact of the email from wider factors (e.g. word of mouth across teams driving sign-ups to Assist). Notably, this activity was part of a randomised control trial (RCT) testing different emails (through the use of bespoke UTM tracking), with each focusing on different communication styles designed to encourage adoption by removing identified barriers or leveraging identified enablers (e.g. one email focused on the opportunity to build their skills in using AI, while another focused on the benefits of becoming more efficient in their work). At the time of publication, we are still conducting our evaluation of how the different emails compared in terms of driving sign-ups, but are happy to share insights from this in future with interested parties. Please find our contact details within the Acknowledgements section. ↩
-
Measured over one week following changes to onboarding communications sent to those who had completed an initial sign-up form. Onboarding is defined as target users both completing a sign up form and undertaking a mandatory training module, after which they were given access to Assist. We acknowledge that this analysis is crude, where we compare the proportion of members who signed up and completed their mandatory training in the week prior to changes to an email communication sent during the onboarding process versus the proportion in the week following the changes (i.e. pre vs post, expressed as an absolute change). ↩
-
Measured over one week following delivery of a reminder communication to complete mandatory training, which featured behaviourally-informed wording changes focused on highlighting the ease of the training and the benefits. For the benefit of rapid test and learn, this analysis was crude, whereby we compared the proportion of members who completed their mandatory training in the week prior to the reminder email versus the proportion in the week following the reminder email (i.e. pre vs post, expressed as a relative change). ↩
-
There are many ways to approach user journey mapping. You can choose the approach which you feel suits your project best. For Government Communications’ work on Assist, we followed the approach to user journey mapping outlined within the Government Communications publication The Principles of Behaviour Change Communications in Section 1.3 ‘Mapping out behaviours with user journeys’. The Government Digital Service (GDS) also outlines other useful approaches to user journey mapping. ↩
-
Institute for Government (2024) Public service productivity: April 2024 ↩
-
Office for National Statistics (2024) Public Services Productivity Review progress report: February 2024 ↩
-
These are our three principles for optimising “human in the loop” as an approach to risk mitigation. See page 31, where this is discussed further, alongside wider strategies for AI risk mitigation. ↩
-
AI Framework for Government (2024) ↩
-
GDS (2021) Service Manual: Set up a service team at each phase ↩
-
AI Framework for Government (2024) ↩
-
The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers (PDF, 999KB) ↩