Measuring user satisfaction
You must plan to continually improve the user experience of your service. By tracking user satisfaction you can find out what users think about the service and which parts of it cause them problems. This will help you decide what to improve.
Set up your service to measure user satisfaction
You should allow users to give feedback about your service at various stages of using it.
At the end of your online service
You must allow users to tell you what they think of your service once they’ve finished using it.
You can create your own feedback page to do this while your service is in alpha or private beta.
Before you go into public beta, you must either:
- request a standard GOV.UK feedback page
- continue using your own feedback page if you need to ask specific questions about your service
If you use your own page, you’ll need to arrange a way of forwarding the data to the Performance Platform.
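If you're forwarding data yourself, you'll need to map each feedback submission into whatever record format your dashboard expects. The sketch below is illustrative only: the field names, rating scale and service identifier are assumptions, not the Performance Platform's actual data format, so check the platform documentation for the real schema.

```python
import json
from datetime import datetime, timezone

def build_satisfaction_payload(rating, comment, service_id):
    """Map one feedback submission to a flat record for forwarding.

    Field names and the 1-5 rating scale are hypothetical - replace
    them with the format your Performance Platform dashboard expects.
    """
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    return {
        "_timestamp": datetime.now(timezone.utc).isoformat(),
        "service": service_id,
        "rating": rating,          # 1 (very dissatisfied) to 5 (very satisfied)
        "comment": comment or "",  # free-text answer to the open-ended question
    }

# Example submission from a hypothetical service
payload = build_satisfaction_payload(4, "Clear, but the address step was slow",
                                     "carers-allowance")
print(json.dumps(payload, indent=2))
```

Validating the rating at the point of forwarding keeps malformed submissions out of your satisfaction figures.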
Within your service
You’ll also want users to be able to give feedback from any page in your service. To do this, you can create an in-service feedback page and link to it from:
- the footer on every page of your service
- your alpha or beta banner
There is no formal guidance on what questions you must ask. You should at least have an open-ended question about how to improve the service, similar to the one on the GOV.UK feedback page.
When users drop out
Users can drop out of a service at any point for various reasons. You should try to get feedback from these users as they’ll likely have important insights about how you can improve your service.
To do this:
- use your in-service feedback page
- set up your web analytics to record the point in the service where your users submit feedback and also any feedback scores they give
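Once your analytics records which page a user was on when they submitted feedback, you can see where dissatisfaction clusters. This is a minimal sketch with made-up event records and page paths; in practice the events would come from your analytics tool.

```python
from collections import defaultdict

# Hypothetical feedback events, tagged with the page the user was on
events = [
    {"page": "/apply/your-details", "score": 2},
    {"page": "/apply/your-details", "score": 3},
    {"page": "/apply/payment", "score": 5},
]

def satisfaction_by_page(events):
    """Average feedback score per page, to show where problems cluster."""
    totals = defaultdict(lambda: [0, 0])  # page -> [sum of scores, count]
    for e in events:
        totals[e["page"]][0] += e["score"]
        totals[e["page"]][1] += 1
    return {page: s / n for page, (s, n) in totals.items()}

print(satisfaction_by_page(events))
```

A low average on one page is a prompt for usability testing on that step, not proof of the cause on its own.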
Measuring users’ satisfaction with the whole service
Sometimes the end of a transaction isn’t the end of a user’s experience with the service. For example, if a user claims Carer’s Allowance, the end of the transaction is when they’ve finished submitting their claim. But their experience with the service isn’t over until they get a decision.
You must still prompt users to give feedback when they finish the digital part of the service and report this to the Performance Platform. However, to measure satisfaction with the whole service, make sure you have a way to collect feedback at the very end of a user’s involvement with it (the ‘end point’).
For example, for users claiming Carer’s Allowance, you could set up a system where you email them when they get a decision.
Services often have many different end points for users. Make sure there’s a way to collect feedback at all of them.
Include assisted digital support
Some of your users will need help using your digital service, whether over the phone or face to face. This is called ‘assisted digital’ support. You must measure user satisfaction for your assisted digital support.
The way you should measure these users’ satisfaction differs depending on their circumstances and the support they’ve chosen.
For example, if the user gets support over the phone, you could use interactive voice response (IVR) technology for a survey after they’re finished.
If the user needs face-to-face support because they can’t get online easily, you may have to send out a follow-up survey by post.
User satisfaction through each service phase
You can only measure user satisfaction in discovery if there is an existing service. Use the results as a baseline to see if the new service improves the user experience.
In alpha, find out how satisfied users are with early prototypes of the new service by doing remote usability testing or surveys.
See: Usability testing on Wikipedia.
In beta, continue to measure user satisfaction by doing remote usability testing or satisfaction surveys.
You should also identify whether dissatisfaction with parts of the service is causing people to drop out, then make changes to the service to improve user satisfaction.
Once the service is live, measure user satisfaction continually and publish results at least once a month on the Performance Platform. You should get feedback from other sources besides your online surveys, for example:
- helpdesks - if your service has one, users might tell the people who work on it what they think of the service
- social media or discussion forums - users might talk about what they think of your service online
Planning to increase user satisfaction
Once live, you must take the following steps to continually improve user satisfaction:
1. Identify statistically significant patterns in satisfaction data and user feedback.
2. Use this data to choose which parts of the service to change.
3. Test these changes with real users using a prototype of the service.
4. Implement any changes that test well.
5. Repeat this process regularly.
Continually monitor user satisfaction ratings to make sure that changes have the effect you anticipated.
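One common way to check whether a difference in satisfaction is statistically significant, rather than noise, is a two-proportion z-test comparing the satisfaction rate before and after a change. This is a sketch using only the Python standard library; the sample figures are invented for illustration, and your analysts may prefer a different test depending on how your data is collected.

```python
from math import sqrt, erf

def two_proportion_z(sat_a, n_a, sat_b, n_b):
    """Two-sided z-test for a difference in satisfaction rates.

    sat_a / n_a: satisfied users and total responses before a change;
    sat_b / n_b: the same after the change. Returns (z, p_value).
    """
    p_a, p_b = sat_a / n_a, sat_b / n_b
    pooled = (sat_a + sat_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers only: 720/1000 satisfied before, 780/1000 after
z, p = two_proportion_z(720, 1000, 780, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (commonly 0.05) suggests the change in satisfaction is unlikely to be random variation, so it's worth acting on.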
Sharing data with the Performance Platform
Once live, you must measure user satisfaction data at least once a month and share it through your dashboard on the Performance Platform.
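Monthly reporting means grouping individual responses by calendar month before sharing them. The sketch below shows one way to do that aggregation; the record shape and summary fields are assumptions, not the dashboard's actual input format.

```python
from collections import defaultdict
from datetime import date

# Hypothetical raw responses - your dashboard's format may differ
responses = [
    {"date": date(2016, 3, 2), "rating": 5},
    {"date": date(2016, 3, 20), "rating": 3},
    {"date": date(2016, 4, 1), "rating": 4},
]

def monthly_summary(responses):
    """Group ratings by calendar month, ready to share via a dashboard."""
    by_month = defaultdict(list)
    for r in responses:
        by_month[r["date"].strftime("%Y-%m")].append(r["rating"])
    return {m: {"responses": len(rs), "mean_rating": sum(rs) / len(rs)}
            for m, rs in sorted(by_month.items())}

print(monthly_summary(responses))
```

Publishing the response count alongside the mean lets readers judge how much weight to give each month's figure.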
Future changes to this guidance
The performance analysis, user research and design communities are reviewing guidance on how you should measure user satisfaction.
Follow the latest discussions on the design community hackpad.
Published by: Performance analysis community
Last update: Guidance first published