
Measuring user satisfaction

You must plan to continually improve the user experience of your service. By tracking user satisfaction you can find out what users think about the service and which parts of it cause them problems. This will help you decide what to improve.

Meeting the Digital Service Standard

To pass point 16 (identify performance indicators) in your service assessments, you must be able to show that:

  • you’re measuring user satisfaction
  • you have a plan to improve it

Set up your service to measure user satisfaction

You can capture user satisfaction with:

  • a GOV.UK feedback page
  • an in-service feedback page

GOV.UK feedback page

You must allow users to tell you what they think of your service once they’ve finished using it.

You must collect this feedback through the satisfaction survey on a GOV.UK feedback page (sometimes called a ‘done page’). The survey asks users to rate their experience of using the service on a 5-point scale, from ‘very satisfied’ to ‘very dissatisfied’.

It also includes a final open-ended question for users to say whatever they think of the service.

The image below shows how a GOV.UK feedback page displays these questions.

[Image: survey questions on a GOV.UK feedback page]
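If it helps to see the arithmetic, below is a minimal sketch (in Python) of one common way to turn 5-point responses into a single satisfaction figure: the proportion of users who answered ‘satisfied’ or ‘very satisfied’. This is an illustrative convention, not an official GOV.UK formula, so agree the exact definition with your performance analyst.

    from collections import Counter

    # 5-point scale used on the GOV.UK feedback page, from
    # 'very dissatisfied' (1) to 'very satisfied' (5)
    def satisfaction_rate(responses):
        """Proportion of users answering 'satisfied' (4) or 'very satisfied' (5).

        One common aggregation - check the definition your team has agreed
        before reporting it anywhere.
        """
        if not responses:
            return None
        counts = Counter(responses)
        return (counts[4] + counts[5]) / len(responses)

    # Example: one month of survey responses
    month = [5, 4, 4, 2, 5, 3, 4, 1, 5, 4]
    print(f"User satisfaction: {satisfaction_rate(month):.0%}")  # 70%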

The GOV.UK content team will create your feedback page. They set this up at the same time as setting up your service’s ‘start page’.

Ask the digital publishing team in your department or agency to contact the GOV.UK content team.

Get in touch with the content team to discuss your plans before your beta assessment. Your service’s GOV.UK start and done pages usually won’t be published until your service is in public beta.

In-service feedback page

You’ll also want users to be able to give feedback from any page in your service. To do this, you can create an in-service feedback page and link to it from:

  • the footer on every page of your service
  • your alpha or beta banner

There is no formal guidance on what questions you must ask. You should at least have an open-ended question about how to improve the service, similar to the one on the GOV.UK feedback page.
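As an illustration only, here is a minimal sketch of an in-service feedback endpoint using Flask. The question wording, field names and the store_feedback helper are all assumptions standing in for your own form and persistence layer.

    from datetime import datetime, timezone

    from flask import Flask, request

    app = Flask(__name__)

    def store_feedback(entry):
        # Placeholder: write to your own database or feedback tool here
        print(entry)

    @app.route("/feedback", methods=["POST"])
    def feedback():
        # Record which page the user followed the feedback link from,
        # so drop-out points can be analysed later
        entry = {
            "submitted_at": datetime.now(timezone.utc).isoformat(),
            "source_page": request.form.get("source_page", "unknown"),
            # Open-ended question, similar to the one on the
            # GOV.UK feedback page
            "comment": request.form.get("comment", "").strip(),
        }
        store_feedback(entry)
        return "Thank you for your feedback", 200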

Measuring users’ satisfaction with the whole service

Sometimes the end of a transaction isn’t the end of a user’s experience with the service. For example, if a user claims Carer’s Allowance, the end of the transaction is when they’ve finished submitting their claim. But their experience with the service isn’t over until they get a decision.

You must still prompt users to give feedback when they finish the digital part of the service and report this to the Performance Platform. However, to measure satisfaction with the whole service, make sure you have a way to collect feedback at the very end of a user’s involvement with it (the ‘end point’).

For example, for users claiming Carer’s Allowance, you could set up a system where you email them when they get a decision.

Services often have many different end points for users. Make sure there’s a way to collect feedback at all of them.
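As one concrete way to implement the Carer’s Allowance example, the sketch below uses Python’s standard smtplib to email a feedback invitation once a decision is recorded. The addresses, survey link and function name are all hypothetical.

    import smtplib
    from email.message import EmailMessage

    SURVEY_URL = "https://www.example.gov.uk/feedback"  # hypothetical survey link

    def send_decision_feedback_request(user_email):
        """Invite feedback at the true end point of the service -
        when the user receives their decision."""
        msg = EmailMessage()
        msg["Subject"] = "Tell us what you thought of the service"
        msg["From"] = "no-reply@example.gov.uk"  # hypothetical sender
        msg["To"] = user_email
        msg.set_content(
            "You've now had a decision on your claim. Please tell us "
            f"what you thought of the whole service: {SURVEY_URL}"
        )
        with smtplib.SMTP("localhost") as smtp:  # assumes a local mail relay
            smtp.send_message(msg)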

Capturing feedback when users drop out

For various reasons, users can drop out of a service at any point. You should try to get feedback from these users as they’ll likely have important insights about how you can improve your service.

To do this:

  • set up your in-service feedback page as suggested above
  • set up your web analytics to record the point in the service where your users submit feedback and also any feedback scores they give
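A rough sketch of the second point, using only Python’s standard library: it records where in the service feedback was submitted, plus any score given. The collector endpoint and payload shape are placeholders for whatever your analytics tool actually accepts, so check its documentation for the real contract.

    import json
    from urllib import request

    ANALYTICS_ENDPOINT = "https://analytics.example.gov.uk/collect"  # hypothetical

    def record_feedback_event(page_path, score=None):
        """Send a custom event noting the point in the service where
        feedback was submitted - useful for spotting drop-out points."""
        event = {
            "event": "feedback_submitted",
            "page": page_path,  # for example "/apply/step-3"
            "score": score,     # 1 to 5, or None for free-text-only feedback
        }
        req = request.Request(
            ANALYTICS_ENDPOINT,
            data=json.dumps(event).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        request.urlopen(req)  # fire and forget; add error handling in production

    # Example: a score of 2 submitted from step 3 of the form
    # record_feedback_event("/apply/step-3", score=2)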

Include assisted digital support

Some of your users will need help using your digital service, whether over the phone or face to face. This is called ‘assisted digital’ support. You must measure user satisfaction for your assisted digital support.

The way you should measure these users’ satisfaction differs depending on their circumstances and the support they’ve chosen.

For example, if the user gets support over the phone, you could use interactive voice response (IVR) technology to run a short survey at the end of the call.

If the user needs face-to-face support because they can’t get online easily, you may have to send out a follow-up survey by post.

User satisfaction through each service phase


Discovery

You can only measure user satisfaction in discovery if there is an existing service. Use the results as a baseline to see if the new service improves the user experience.


Alpha

In alpha, find out how satisfied users are with early prototypes of the new service by doing remote usability testing or surveys.

See: Usability testing on Wikipedia.


Beta

In beta, continue to measure user satisfaction by doing remote usability testing or satisfaction surveys.

You should also identify where dissatisfaction with parts of the service may be causing people to drop out, then make changes to improve user satisfaction at those points.



Live

Once the service is live, measure user satisfaction continually and publish results at least once a month on the Performance Platform. You should get feedback from other sources besides your online surveys, for example:

  • helpdesks - if your service has one, users might tell the people who work on it what they think of the service
  • social media and discussion forums - users might talk about what they think of your service online

Planning to increase user satisfaction

Once live, you must take the following steps to continually improve user satisfaction:

  1. Identify statistically significant patterns in satisfaction data and user feedback (one way to test this is sketched after this list).

  2. Use this data to choose which parts of the service to change.

  3. Test these changes on real users using a prototype of the service.

  4. Implement any changes that test well.

  5. Repeat this process regularly.

  6. Continually monitor user satisfaction ratings to make sure that changes have the effect you anticipated.
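For step 1, a performance analyst might check whether a shift in ratings is statistically significant using a chi-squared test. Below is a minimal sketch in Python using scipy; the counts are invented for illustration.

    from scipy.stats import chi2_contingency

    # Counts of responses on the 5-point scale, from
    # 'very dissatisfied' to 'very satisfied' (illustrative data)
    before = [30, 45, 60, 210, 155]  # month before the change
    after = [18, 30, 55, 240, 190]   # month after the change

    chi2, p_value, dof, expected = chi2_contingency([before, after])

    # A small p-value suggests the shift is unlikely to be noise alone -
    # it does not by itself prove the change caused it
    print(f"chi-squared = {chi2:.2f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Satisfaction distribution has shifted - investigate further")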

Sharing data with the Performance Platform

Once live, you must collect user satisfaction data at least once a month and share it through your dashboard on the Performance Platform.

See: Sharing your data with the Performance Platform.
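For orientation only, a monthly submission might look something like the sketch below (Python with the requests library), assuming a bearer-token JSON endpoint. The URL, token and field names are all assumptions, so follow the guidance linked above for the actual contract.

    import requests

    # Assumed endpoint shape - confirm against the Performance Platform guidance
    URL = "https://www.performance.service.gov.uk/data/your-service/user-satisfaction"
    TOKEN = "your-write-token"  # placeholder

    payload = [{
        "_timestamp": "2016-06-01T00:00:00+00:00",  # start of the period
        "period": "month",
        "rating_1": 18, "rating_2": 30, "rating_3": 55,
        "rating_4": 240, "rating_5": 190,
    }]

    response = requests.post(
        URL,
        json=payload,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()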

Future changes to this guidance

The performance analysis, user research and design communities are reviewing guidance on how you should measure user satisfaction.

Follow the latest discussions on the design community hackpad.
