Using moderated usability testing
Moderated usability testing is where you watch participants try to complete specific tasks using your service.
Asking them to ‘think aloud’ as they move through the service helps you understand what they are doing, thinking and feeling.
Meeting the Digital Service Standard
You must carry out user research as part of meeting the Digital Service Standard.
You’ll have to explain how you researched with different user groups in your service assessments.
When to use moderated usability testing
Moderated usability testing is most useful in the alpha, beta and live phases to test prototypes or the service you’ve built. You can also use it in the discovery phase to learn about problems with an existing service.
Doing it helps you to:
- see if users understand what they need to do and can complete all relevant tasks
- identify specific usability issues - for example, problems with the language or layout
- generate ideas for how to improve your service
Steps to follow
Plan your moderated usability testing carefully so you learn things that can help improve your service.
Plan the sessions
Usability test sessions usually take between 30 and 60 minutes, depending on the number and complexity of the tasks you want users to attempt. Plan for no more than 6 one-hour sessions a day and allow at least 15 minutes between sessions, plus additional time for lunch.
Before planning any sessions, work with your team to agree the research questions, types of users and parts of your prototype or service you want to focus on. Once you know this:
- recruit research participants - these need to be actual or likely users of your service
- choose a location for the test sessions - research labs are best, but you can also use meeting rooms or run pop-up sessions
- make sure the venue is accessible to the people you want to see
- arrange for interpreters or assistants to help participants who need them
- decide if and how you want to record the sessions
- invite observers and arrange a note-taker for each session
Design the tasks
You need to design test tasks carefully to make sure they answer your research questions. Good test tasks:
- set a clear goal for participants to try to achieve
- are relevant and believable to participants
- are challenging enough to uncover usability issues
- don’t give away ‘the answer’ or hint at how a participant might complete them
You may have one long or complex task that you want to research, but it’s more common to give users several smaller tasks. When you have several:
- arrange them in a logical order and work through them one at a time
- use the time between them to set up different parts of the prototype or service - for example, you may have to switch from a live service to a prototype if you haven’t got a working end-to-end product
- bring a selection to each session and choose the tasks that are most relevant to the participant
Once you’re happy with the tasks, create a ‘discussion guide’. This should include:
- your introduction script - this tells the participant who you are, explains the research and reminds them about things like recording
- descriptions of each test task, along with any instructions
- a planning checklist to make sure you’ll have everything you need
You can use your discussion guide to:
- try out the test tasks and instructions with a colleague
- stay on track during test sessions
- make sure participants are given tasks in a consistent way
- maintain a record of what you do in this round of research
Run a session
Participants are often nervous and worried about making mistakes. Before you ask them to do any tasks:
- give them time to relax
- run through your introduction script to explain what’s going to happen
- let them know that you’re testing the service, not them - reassure them that they aren’t being judged or assessed
- ask a few friendly questions to learn more about them - you can use this information later to make tasks more relevant to the participant
When you introduce a task, explain what you want the participant to do using clear, neutral instructions. You should also:
- personalise the task if you can - for example ‘you told me your daughter is ready for nursery, can you choose a nursery that would be right for her?’
- ask the participant to tell you their thoughts as they run through the task
- try to stay quiet - mostly just watch and listen
Occasionally, you may want to interrupt a participant. For example, you can:
- ask the participant about anything really interesting that you see or hear so you can understand what’s happening
- help the participant get back on track if they get completely stuck - giving them a chance to recover means you can continue learning
- ask the participant about any opinions or suggestions they give - ask open-ended questions like ‘what makes you say that?’ or ‘how would that help?’
Once you’ve finished:
- thank the participant for their time and what they’ve helped you learn
- ask them if they have any final thoughts about the service or the session
If you’ve finished for the day, pack away your equipment (use your planning checklist).
Testing with personal data
Whenever possible, ask participants to carry out tasks using their own data and documents. They’re likely to be more engaged in the task and you’ll probably learn more than if you use dummy data.
Using real data is only possible if:
- your service can access and process that data - prototypes won’t always be able to do this
- you can keep personal data secure
Using dummy data
If you can’t use real data or don’t have enough time to set up appropriate test conditions, you should set up dummy data. You’ll need to create a character for the participant to play and mock up documents like driving licences, letters or credit cards with that character’s name and details on them.
Participants using dummy data provide useful insights. However, they are likely to be less engaged than if they were using their own data and will probably uncover fewer contextual issues.
Examples and case studies
To learn more about moderated usability testing:
- read how the GOV.UK Verify team used lab sessions to assess the quality of newly certified companies
- find out why the GOV.UK Verify team has carried out more than 100 rounds of usability testing
Published by: User research community
Last update: guidance first published