Measuring success

Usability benchmarking a website or whole service

You can usually see how well a transaction is working for users by combining performance metrics with user research methods like usability testing.

You might need more than just these methods if you need to measure:

  • end-to-end user journeys or whole services comprising several pieces of GOV.UK guidance and transactions
  • non-transactional services, like websites

This is because it’s hard to design metrics that give you an idea of the user experience across several tasks. It’s easier to gauge this by simply watching users try to complete a set of tasks and seeing whether they’re successful or not.

This process is known as ‘usability benchmarking’ and is usually owned by the user researcher in your team.

You should repeat the process periodically and compare results to see whether your service is getting easier to use over time.

Choosing tasks to test

It’s important to design good tasks so they give you accurate data about the overall usability of your service. It’s worth testing tasks that:

  • are relevant and believable for participants
  • the majority of your users need to complete - this helps you optimise your highest traffic journeys
  • have a clear correct answer - this makes it easier for you to work out whether the user has successfully completed the task
  • are likely to remain consistent over time - this is especially important if you’re looking to run the exercise multiple times
  • don’t, where possible, require users to go through login pages or enter details they won’t have access to

You can use analytics data to establish your most common tasks.

Don’t overwhelm participants with too many tasks. A good rule of thumb is no more than 5 tasks per participant, with up to 10 minutes per task.

Recruiting participants

You’ll need to recruit people to take part in the sessions. Aim to recruit 30 to 60 actual or likely users of your service.

Ask participants for their ‘informed consent’.

If your benchmarking software allows you to do so, you might want to consider segmenting your participants by things like age, experience of your service or use of assistive technology. This can help you spot patterns between specific groups.

How to run the sessions

It’s usually best to run usability benchmarking remotely. This allows you to test with a large number of users in a short space of time.

To help with this, there are a number of tools available that let you do things like:

  • record a user’s screen
  • produce clickstreams and heatmaps
  • ask questions to establish a user’s age, digital skills and experience, and whether they’re using any assistive technology

Make sure you follow the rules on getting informed consent and protecting participant privacy.

You can use Digital Marketplace frameworks if you need help finding a usability tool supplier.

What to measure

For each task, you should measure:

  • whether the user completed the task successfully - this could mean reaching a specific page, or finding a certain piece of information
  • the time it takes the user to complete a task
  • whether they abandon the task, or think they’ve completed it successfully when they haven’t

At the end of each task, it’s also useful to get users to say on a scale of 1 to 5:

  • how easy or difficult the task was
  • how confident they are that they’ve got the right answer
  • whether it took more or less time than expected

You can also give the user the chance to leave any comments they have.

Analyse the data

Once you’ve collected all your data, look to draw out things like:

  • the average completion rate of each task
  • the average time it takes to complete each task
  • any gap between how difficult or time consuming users perceived a task to be and how they actually performed

If you segmented your participants at the start of the study, you can also look for common patterns between specific groups.
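As a sketch of how this analysis might work, assuming you’ve exported your session results as a list of records (the field names, tasks and numbers below are invented for illustration):

```python
from statistics import mean

# Hypothetical exported session results - one record per participant per task
sessions = [
    {"task": "renew licence", "segment": "assistive tech", "completed": True,  "time_s": 140, "perceived_ease": 4},
    {"task": "renew licence", "segment": "general",        "completed": False, "time_s": 300, "perceived_ease": 2},
    {"task": "renew licence", "segment": "general",        "completed": True,  "time_s": 95,  "perceived_ease": 5},
    {"task": "find fee info", "segment": "general",        "completed": True,  "time_s": 60,  "perceived_ease": 5},
]

def summarise(records):
    """Average completion rate, time and perceived ease for a group of sessions."""
    return {
        "completion_rate": mean(1 if r["completed"] else 0 for r in records),
        "avg_time_s": mean(r["time_s"] for r in records),
        "avg_perceived_ease": mean(r["perceived_ease"] for r in records),
    }

# Headline metrics per task
tasks = {t: summarise([r for r in sessions if r["task"] == t])
         for t in {r["task"] for r in sessions}}

# The same breakdown per segment, to spot patterns between specific groups
segments = {s: summarise([r for r in sessions if r["segment"] == s])
            for s in {r["segment"] for r in sessions}}
```

Comparing `avg_perceived_ease` against `completion_rate` per task is one simple way to surface the perception-versus-reality gap described above.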

It’s also important to look for the reasons why people are failing. You can use click path data and review screen recordings to look for common failure patterns.

You can also compare the data with your digital analytics. Do the analytics show the same issues occurring for all users, or are they telling a different story?

Make sure to share any findings with the rest of your team and any relevant stakeholders.

Use the data to improve your service

Once you’ve analysed your data, you can come up with some hypotheses for how you might make your service simpler to complete. You might identify things you could apply to other user journeys, too.

You can use data from the next round of benchmarking to determine whether the changes you’ve made improve things for users.
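To illustrate that comparison, a minimal sketch - the task names and completion rates here are invented:

```python
# Hypothetical per-task completion rates from two rounds of benchmarking
round_1 = {"renew licence": 0.62, "find fee info": 0.90}
round_2 = {"renew licence": 0.78, "find fee info": 0.88}

# Change in completion rate for each task between rounds;
# a positive value suggests the task got easier to complete
change = {task: round(round_2[task] - round_1[task], 2) for task in round_1}
```

Keeping the tasks identical between rounds is what makes a direct subtraction like this meaningful.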

Running future rounds of benchmarking

Running several rounds of benchmarking helps you work out whether your service has improved over time.

To make sure you can do this, you should:

  • review your benchmarking tasks regularly - for instance, it’s useful to know about any design changes or improvements so you can factor them into your analysis
  • maintain consistency - it’s easier to compare results if you keep the tasks and questions the same for each round
  • stay up to date with changes in content or user behaviour that might affect benchmarking results