Visa web messenger
HO's Visa web messenger alpha assessment report
| | |
| --- | --- |
| Assessment date | 10/02/2026 |
| Assessment stage | Alpha |
| Assessment type | Assessment |
| Service provider | Home Office |
| Result | Amber |
Previous assessment reports
N/A
Service description
With the introduction of digital eVisa and Electronic Travel Authorisation (ETA) processes, an additional 30 million customers could be contacting the Home Office. There is a business need to manage this flow of contact, and web messenger is seen as a key tool for reducing customer contact to the contact centres.
Virtual agent - Handles all initial web messenger queries. The conversation follows a predefined script, and a prewritten, policy-approved answer is provided from a knowledge base, based on a keyword identified in the user’s question (an illustrative sketch of this lookup follows this description).
Live agent - Within office hours, customers can escalate their enquiry to a live agent. Call agents can handle up to three enquiries at a time.
Web form - Outside of office hours, escalation is via a web form, which can then be picked up and responded to within office hours.
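For illustration only, the sketch below shows the kind of keyword-to-answer lookup the virtual agent description implies. The service's actual implementation runs on the Genesys platform and is not shown here; the knowledge base entries, keywords and fallback wording are hypothetical.

```python
# Illustrative sketch only: a minimal keyword-to-answer lookup of the kind
# described above. The real service runs on the Genesys platform; the
# knowledge base entries, keywords and wording here are hypothetical.
KNOWLEDGE_BASE = {
    "evisa": "An eVisa is an online record of your immigration status.",
    "eta": "An Electronic Travel Authorisation (ETA) gives you permission to travel to the UK.",
}

FALLBACK = "Sorry, I could not find an answer. Would you like to speak to a live agent?"

def answer(question: str) -> str:
    """Return the first policy-approved answer whose keyword appears in the question."""
    text = question.lower()
    for keyword, approved_answer in KNOWLEDGE_BASE.items():
        if keyword in text:
            return approved_answer
    # No keyword matched: the real service would offer escalation to a
    # live agent (in hours) or a web form (out of hours).
    return FALLBACK

print(answer("How do I view my eVisa?"))  # prints the eVisa answer
```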
Service users
This new service will be for UK Visas and Immigration (UKVI) customers, across multiple service lines covering different visa types, including:
- Work
- Study
- Visits
- Sponsorship
- Marriage
- Family
- Graduate
- Settlement
- British National Overseas (BNO)
- Ukraine Extensions
- Homes for Ukraine
Things the service team has done well:
- The team demonstrated a pragmatic approach to designing the messenger in a space where established design patterns and guidance are still emerging. They recognised limitations and took proactive steps to address known usability issues, showing good cross-government engagement, which helped inform their reskinned design approach.
1. Understand users and their needs
Decision
The service was rated amber for point 1 of the Standard.
This is amber because:
- The team did not demonstrate an in-depth understanding of the different user groups who would use their service. Whilst a good understanding of the common problems and challenges experienced by users was demonstrated, specific insights or details about the users were missing from the assessment and the evidence shared with the panel.
- The user group ‘sponsor’ seemed significantly underdeveloped, with no user needs developed for this group and the team stating they do not know much about it. The team needs to demonstrate and evidence an increased understanding of their user groups by developing high-level user needs for both groups and providing data, insight or information about these users.
- There seemed to be an assumption that because both user groups had the same goal (to get information or a response to a query), there was no significant need to consider the users as individual groups. This assumption needs rigorous testing: the different groups may have different needs, characteristics, positionalities and demographics, which may result in them engaging and interacting with the service in different ways.
2. Solve a whole problem for users
Decision
The service was rated amber for point 2 of the Standard.
This is amber because:
- The team had chosen to prioritise ‘capable navigators’ for their alpha and demonstrated intent to continue to prioritise these users for beta. It was unclear how the team had reached and validated this decision, as the information presented at the assessment centred primarily on the problems experienced by people with English as a second language, who struggle to articulate themselves or need support when using HO services. The team also identified that one of their riskiest assumptions to test in alpha was that the messenger’s responses can be clearly understood by users who may face barriers to access (e.g. language, disability, emotional state, trust). Capable navigators, as a group with high English fluency, the ability to communicate with the HO, good digital skills and a stable support network, appear to be the user type least likely to experience these problems and barriers. The choice to prioritise this user type above others therefore seemed inconsistent with the information presented at the assessment, and the team needs to fully consider how prioritising capable navigators in beta will affect the problems raised at the assessment.
- The team has not fully demonstrated how the web messenger fits coherently into the broader contact system, or how it helps users solve their end-to-end problem without needing to rely on additional channels. While the intent of the service is clear, there is limited evidence showing how users will discover the solution, understand when it is the most appropriate route for their enquiry, or how it sits alongside existing channels.
- A key risky assumption is that users will behave in a similar way to users of existing web messengers. This assumption has not been clearly validated, and these users may have more complex needs. The alpha has not shown evidence of behavioural differences, or demonstrated how any differences are reflected in the content structure.
- It has not been clearly demonstrated that the service will resolve users’ problems first time or prevent onward contact. Existing performance data shows that a significant proportion of enquiries were not answered on first interaction. Further testing is needed to show that the service provides sufficient and actionable information to allow users to progress their tasks without escalation.
Optional advice to help the service team continually improve the service:
- Strengthen the mapping between digital and non-digital elements of the user journey to ensure the messenger genuinely resolves the whole problem and does not simply deflect contact to another channel.
- Provide clear evidence of how users will find the messenger and why they should choose to use it over existing routes.
- Model the full end-to-end journey, including edge cases, escalation flows and complex scenarios.
3. Provide a joined-up experience across all channels
Decision
The service was rated amber for point 3 of the Standard.
This is amber because:
- The team has not demonstrated how the messenger avoids disconnection and duplication across the different contact channels. While the intent of the service is understood, there is limited evidence showing how the messenger will join up with other contact methods.
- The escalation journey is unclear, especially how information will be retained so users do not repeat themselves when moving from virtual agent to live agent, telephone or web form.
- It’s not clear how user feedback from all channels will be brought together to improve the service as a whole, risking isolated improvements rather than addressing the wider end-to-end experience for users.
Optional advice to help the service team continually improve the service:
- Define and test when users should use the web messenger and when they should use other routes, providing evidence that these entry points and signposting are working in practice, not just described in principle.
4. Make the service simple to use
Decision
The service was rated green for point 4 of the Standard.
Optional advice to help the service team continually improve the service:
- The team could strengthen this further by ensuring that design histories are aligned to ongoing user testing and performance insight, particularly for users who may have different needs and behaviours. Capturing how design decisions evolve in response to user interaction will support a focus on user needs and therefore outcomes.
- The team may benefit from formalising how adapted or bespoke patterns are documented, reviewed and maintained, and when patterns should be reused or adapted, to help maintain consistency and avoid disconnection.
- Continue to test the design and content structure for readability, especially where long explanations are required, and test the impact of timeout behaviour.
- The team could consider how adapted patterns might be shared more widely across the organisation and with other government departments, to help contribute back to emerging best practice in this space.
5. Make sure everyone can use the service
Decision
The service was rated amber for point 5 of the Standard.
This is amber because:
- The decision to prioritise capable navigators resulted in insufficient evidence of how the team ensured the service would work for everyone, particularly users who experience barriers due to low digital literacy or confidence, low English literacy (including, but not limited to, EAL speakers) and access needs (including, but not limited to, users with neurodiverse or psychological conditions which impact information processing and communication). A helpline is available to users, so this is not currently critical (not a red for this standard point); however, the team needs to test the messenger with users with low digital literacy and low English literacy in beta in order to evidence how they are ensuring everyone can use the service. Users who sit outside the capable navigator user type will use the messenger, and the team needs to consider their needs and user experience.
- A limited amount of research was conducted with users with access needs. This needs to be increased, with both assistive technology users and users with access needs, to understand and address the barriers these users face when using web messenger tools.
6. Have a multidisciplinary team
Decision
The service was rated amber for point 6 of the Standard.
This is amber because:
- The team is missing a Performance Analyst, who would be able to formally establish and co-ordinate service performance ways of working, making sure the correct quantitative and qualitative measures are in place to surface insight, identify pain points and provide feedback to the rest of the delivery team to explore continuous improvement. These skills exist elsewhere in the Home Office, but disparate ways of working need to be improved (see point 10).
- The team that will design the beta service will only have a technical architect; all developers will be external. This is a concern, as none of the actual software engineering will happen within the team, and close collaboration between designers and developers will be difficult, which may lead to slow iteration.
7. Use agile ways of working
Decision
The service was rated green for point 7 of the Standard.
8. Iterate and improve frequently
Decision
The service was rated green for point 8 of the Standard.
Optional advice to help the service team continually improve the service:
- The web messenger and live agent chat follow established patterns used in other government departments.
- There need to be mechanisms in place to pivot content responsively and iterate based on known user behaviour throughout the year and during unplanned events.
- Operationally, the web messenger channel needs to respond to contact demand in a similar way to the call centres, ensuring there is scalability to cope with potential contact volumes.
- Do multiple rounds of user research, iterating between each round, to demonstrate how research with your users is informing the design and development of the messenger service.
9. Create a secure service which protects users’ privacy
Decision
The service was rated green for point 9 of the Standard.
Advice to help the service team continually improve the service:
- Make sure the DPIA remains current with any change during private beta that might impact users’ privacy, such as log retention or the process for handling unwanted PII.
- Make sure that the detection of PII remains accurate. Hopefully the Genesys platform will make that easier, but see point 11 below. A simplified illustration of this kind of check follows below.
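As a minimal sketch only, the snippet below shows the general shape of a regex-based PII check. The service's actual detection runs within the Genesys platform and is not shown here; these patterns are hypothetical, deliberately simplified and far from exhaustive.

```python
import re

# Illustrative sketch only: simplified regular-expression checks for common
# PII in chat transcripts. The service's actual detection runs within the
# Genesys platform; these patterns are hypothetical and far from exhaustive.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b0\d{9,10}\b"),  # simplified UK number starting 0
}

def redact(text: str) -> str:
    """Replace anything matching a PII pattern with a labelled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(redact("Email me at jo@example.com or call 07700900123"))
# -> "Email me at [email removed] or call [phone removed]"
```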
10. Define what success looks like and publish performance data
Decision
The service was rated amber for point 10 of the Standard.
This is amber because:
- Create a structured performance framework as an artefact that can be referred to. This document should include the 4 mandatory KPIs, each user need, hypotheses, and Management Information (MI). Be very specific about the measurement method, the metrics, data sources, tools used, success thresholds, and how performance is going to be reported. Having this in place will help articulate how you are going to demonstrate service success and that users’ needs are being met. The performance framework is expected to be a living artefact, iterated over time.
- Define how data can be used in a meaningful way. For example, the team needs to establish segmented baselines from existing contact centre data, based on the waves of different user groups that contact the Home Office throughout the year. Use this data as a baseline to measure digital take-up of the new web messenger; this will identify which user groups are interacting with the web messenger and which are not (a simplified illustration follows this list). Add these baselines to your performance framework artefact.
- Make data insight visible to the team to embed continuous improvement throughout the delivery lifecycle.
- Understand how the 4 mandatory KPIs can be published for the service.
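As an illustrative sketch only, the snippet below shows one way segmented digital take-up could be computed against a contact-centre baseline, as described above. The segment names and counts are hypothetical placeholder figures, not real Home Office data.

```python
# Illustrative sketch only: measuring digital take-up of the web messenger
# against a segmented contact-centre baseline, as described above. The
# segment names and counts are hypothetical placeholder figures.
baseline_contacts = {   # contact-centre enquiries per user group, per period
    "Work": 12000,
    "Study": 9000,
    "Settlement": 4000,
}
messenger_contacts = {  # web messenger enquiries per user group, same period
    "Work": 3000,
    "Study": 500,
    "Settlement": 0,
}

for group, phone in baseline_contacts.items():
    digital = messenger_contacts.get(group, 0)
    take_up = digital / (digital + phone)  # share of contact handled digitally
    print(f"{group}: {take_up:.1%} digital take-up")
```

A breakdown like this makes it immediately visible which user groups are not yet engaging with the messenger (here, the hypothetical "Settlement" segment at 0%), which is the insight the baseline is meant to provide.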
11. Choose the right tools and technology
Decision
The service was rated amber for point 11 of the Standard.
- Even though the choice of the Genesys platform was made for the team, the team didn’t demonstrate a good understanding of the risks of having a black box at the heart of the service; in particular, whether some operations (like updating the new front-end UI) will require a service request. This could make iterating the beta service longer and more complicated than it could be. At least the team can learn from other existing HO services built around Genesys.
- As mentioned above, while the beta service has not been built yet and its operational model is not defined, the panel is concerned that the service will be created by engineers outside of this team (the HOF team and V&V) who might not have the full picture of the idiosyncrasies of this service. The team’s technical architect may find it difficult to synchronise how UCD people in the team will work with engineers outside of the team.
12. Make new source code open
Decision
The service was rated green for point 12 of the Standard.
Advice to help the service team continually improve the service:
- Make sure the public code is of good quality, commit messages are clear, and the whole repository demonstrates HO’s use of good software engineering.
13. Use and contribute to open standards, common components and patterns
Decision
The service was rated green for point 13 of the Standard.
Optional advice to help the service team continually improve the service:
- Maintain the relationship with other government departments using the same patterns, so that if any improvements are made they can take them into consideration.
- Specifically, contact the GOV.UK Design System team to share your valuable insight into designing components for accessible web messengers.
14. Operate a reliable service
Decision
The service was rated green for point 14 of the Standard.
Tech: the team has demonstrated good knowledge of service reliability and will benefit from the experience of other teams that built the components used by this service.