Using moderated usability testing
Moderated usability testing is where you watch participants try to complete specific tasks using your service.
Asking them to ‘think aloud’ as they move through the service helps you understand what they are doing, thinking and feeling.
Meeting the Digital Service Standard
You must carry out user research as part of meeting the Digital Service Standard.
You’ll have to explain how you researched with different user groups, including people with disabilities, in your service assessments.
When to use moderated usability testing
Moderated usability testing is most useful in the alpha, beta and live phases to test prototypes or the service you’ve built. You can also use it in the discovery phase to learn about problems with an existing service.
Doing it helps you to:
- see if users understand what they need to do and can complete all relevant tasks
- identify specific usability issues - for example, problems with the language or layout
- generate ideas for how to improve your service
Steps to follow
Plan your moderated usability testing carefully so you learn things that can help improve your service.
Plan the sessions
Usability test sessions usually take between 30 and 60 minutes, depending on the number and complexity of the tasks you want users to attempt. Plan for no more than 6 one-hour sessions a day and allow at least 15 minutes between sessions, plus additional time for lunch.
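The scheduling guidance above can be sketched as a quick timetable calculation. This is an illustrative sketch only: the start time, lunch length and lunch position are assumptions, not part of the guidance, which only specifies up to 6 one-hour sessions a day with at least 15 minutes between them.

```python
from datetime import datetime, timedelta

# Assumed values: only the session length (1 hour), maximum sessions (6)
# and minimum break (15 minutes) come from the guidance above.
SESSION = timedelta(hours=1)
BREAK = timedelta(minutes=15)
LUNCH = timedelta(minutes=45)  # assumption: extra time added for lunch

def day_plan(start="09:00", sessions=6, lunch_after=3):
    """Return (start, end) time strings for each session in a test day."""
    t = datetime.strptime(start, "%H:%M")
    plan = []
    for i in range(sessions):
        plan.append((t.strftime("%H:%M"), (t + SESSION).strftime("%H:%M")))
        t += SESSION + BREAK
        if i + 1 == lunch_after:
            t += LUNCH  # longer gap after the chosen session for lunch
    return plan

for start_time, end_time in day_plan():
    print(f"{start_time} to {end_time}")
```

With the assumed 09:00 start, this gives 3 morning sessions, lunch, and 3 afternoon sessions ending at 17:00 - a useful sanity check that a 6-session day fits within normal working hours.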
Before planning any sessions, work with your team to agree the research questions, types of users and parts of your prototype or service you want to focus on. Once you know this:
- recruit research participants - these need to be actual or likely users of your service
- choose a location for the test sessions - research labs are best, but you can also use meeting rooms, run pop-up sessions or test remotely
- make sure the venue is accessible to the people you want to see
- arrange for interpreters or assistants to help participants who need them
- decide if and how you want to record the sessions
- invite observers and arrange a note-taker for each session
Design the tasks
You need to design test tasks carefully to make sure they answer your research questions. Good test tasks:
- set a clear goal for participants to try to achieve
- are relevant and believable to participants
- are challenging enough to uncover usability issues
- don’t give away ‘the answer’ or hint at how a participant might complete them
You may have one long or complex task that you want to research, but it’s more common to give users several smaller tasks. When you have several:
- arrange them in a logical order and work through them one at a time
- use the time between them to set up different parts of the prototype or service - for example, you may have to switch from a live service to a prototype if you haven’t got a working end-to-end product
- bring a selection to each session and choose the tasks that are most relevant to the participant
Once you’re happy with the tasks, create a ‘discussion guide’. This should include:
- your introduction script - this tells the participant who you are, explains the research and reminds them about things like recording
- descriptions of each test task, along with any instructions
- a planning checklist to make sure you’ll have everything you need
You can use your discussion guide to:
- try out the test tasks and instructions with a colleague
- stay on track during test sessions
- make sure participants are given tasks in a consistent way
- maintain a record of what you do in this round of research
Run a session
Participants are often nervous and worried about making mistakes. Before you ask them to do any tasks:
- give them time to relax
- run through your introduction script to explain what’s going to happen
- let them know that you’re testing the service, not them - reassure them that they aren’t being judged or assessed
- ask a few friendly questions to learn more about them - you can use this information later to make tasks more relevant to the participant
When you introduce a task, explain what you want the participant to do using clear, neutral instructions. You should also:
- personalise the task if you can - for example, ‘you told me your daughter is ready for nursery, can you choose a nursery that would be right for her?’
- ask the participant to tell you their thoughts as they run through the task
- try to stay quiet - mostly just watch and listen
Occasionally, you may want to interrupt a participant. For example, you can:
- ask the participant about anything really interesting that you see or hear so you can understand what’s happening
- help the participant get back on track if they get completely stuck - giving them a chance to recover means you can continue learning
- ask the participant about any opinions or suggestions they give - ask open-ended questions like ‘what makes you say that?’ or ‘how would that help?’
Reserve some time at the end of the session to:
- ask follow-up questions about the things you observed but didn’t clearly understand
- check if the participant has any final thoughts about the things they’ve seen
Once you’ve finished:
- thank the participant for their time and what they’ve helped you learn
- explain what will happen with your research
- ask the participant what they thought of the session, so you can improve next time
If you’ve finished for the day:
- make sure any personal data you’ve collected (on paper or in recordings) is stored securely
- pack away your equipment (use your planning checklist)
Testing with personal data
Whenever possible, ask participants to carry out tasks using their own data and documents. They’re likely to be more engaged in the task and you’ll probably learn more than if you use dummy data.
Using real data is only possible if:
- your service can access and process data - prototypes won’t always be able to do this
- you are able to keep personal data secure
Using dummy data
If you can’t use real data or don’t have enough time to set up appropriate test conditions, you should create dummy data. You’ll need to create a character for the participant to play and mock up documents like driving licences, letters or credit cards with that character’s name and details on them.
Participants using dummy data provide useful insights. However, they are likely to be less engaged than if they were using their own data and will probably uncover fewer contextual issues.
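If your prototype accepts typed input, a seeded generator keeps the same fictional character consistent across mocked-up documents and pre-filled forms. This is a hedged sketch: the field names and sample values are invented for illustration, and real dummy data should mirror whatever documents your service actually asks for.

```python
import random

# All names and formats below are made-up examples, not real data.
FIRST_NAMES = ["Sam", "Priya", "Lena", "Marcus"]
LAST_NAMES = ["Taylor", "Okafor", "Nowak", "Hughes"]

def make_test_character(seed):
    """Create a consistent fictional character for one test session.

    Using a fixed seed means the same character (name, date of birth,
    reference numbers) can be reproduced on every mocked-up document.
    """
    rng = random.Random(seed)
    first = rng.choice(FIRST_NAMES)
    last = rng.choice(LAST_NAMES)
    return {
        "name": f"{first} {last}",
        "date_of_birth": f"19{rng.randint(60, 99)}-{rng.randint(1, 12):02d}-{rng.randint(1, 28):02d}",
        "reference_number": f"{last[:3].upper()}{rng.randint(100000, 999999)}",
    }

character = make_test_character(seed=7)
print(character["name"])
```

Passing a different seed per session gives each participant a distinct character while keeping every document for that session internally consistent.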
Researching with assistive technologies
Users of assistive technologies often have personal set-ups that are hard to replicate - for example, speech recognition software needs to be trained to a user’s voice. This means it’s usually best if participants can use their own devices.
Participants may be able to bring portable devices to a lab. Sometimes, however, you may need to visit someone at home or work.
Understanding assistive technologies
It’s helpful to understand how assistive technologies (like screen readers or screen magnifiers) work before you run any sessions with a participant who uses them. If a user has any problems completing a task, you’ll be able to understand whether it’s down to the service or the technology they’re using.
You can familiarise yourself with assistive technologies by:
- watching online demonstrations - for example, videos of how assistive technologies work from the Digital Accessibility Centre
- watching sessions run by other user researchers
- trying out software yourself - ask the developers or testers in your team if they have any you can use
Examples and case studies
You may find the following blog posts useful:
- Choosing the best methods to answer user research questions
- 100 rounds of user research on GOV.UK Verify
You can also:
- find out how the Home Office built a low-cost user research lab
- download GDS’s ‘user research tips’ observation room poster
- download the Home Office’s poster explaining how to observe a user research session