Authorisation service - agent services alpha assessment

The report from the alpha assessment of HMRC's authorisation service - agent services on 8 May 2018.

From: Central Digital and Data Office
Assessment date: 8 May 2018
Stage: Alpha
Result: Not met
Service provider: HMRC

To meet the Standard the service team should:

  • Look outside the scope of this service and consider the end-to-end journey for the user. This journey should start with what the user is actually trying to do, for example file their client’s self-assessment.
  • Do the hard work to make the journey simple for users who need to authorise agents for multiple services, particularly by challenging the business requirements of other services.
  • Demonstrate the ability and desire to radically iterate the service design, thoroughly challenging hypotheses and assumptions.

About the service

Description

The service allows agents to request authorisation to interact with specific HMRC services on behalf of their client. The client is then able to accept or reject this request.

Service users

The users of this service are business taxpayers and tax agents, such as accountants.

Detail

Authorisation is a particularly difficult problem and the team should not be discouraged by this report, which primarily reflects the difficult landscape.

The panel understands that an interim version of this service has recently been put into production with the approval of HMRC’s internal service assessment team - a small number of users are likely to use this service from May. This should not have happened without first consulting with the GDS service standards team.

The service team plans to entirely replace the live service with their new service once they have approval to move to private beta.

User needs

The team has built up a good understanding of the range of professionals who act as agents, how they work with clients, their problems with existing services, and their needs from new services.

The team has some understanding of likely client users, but it is less complete and will need more focus during beta.

The team has done a good amount of research and used a good range of appropriate user research methods, including contextual research in workplaces and homes.

The team has also done good research to understand users’ support needs and to test elements of their support model.

The team has produced two sets of personas to represent its understanding of the different kinds of users among both clients and agents. The personas for agents describe different kinds of financial professionals. They provide a good representation of what the team has learned about agents.

However, the client personas do not clearly represent relevant clusters of circumstances, behaviours and motivations.

For example, during the assessment the team talked about people:

  • starting a business and setting up business taxes for the first time
  • moving to a new agent
  • with different levels of trust in agents
  • with different levels of desired control over the authorisation process
  • who have had a Government Gateway ID created by their agent
  • who already use their own Government Gateway ID to access their business tax account
  • who give their agent their Government Gateway login details

The team should improve its client personas to more clearly bring out these important circumstances, behaviours and motivations. This will help to create a clearer justification for, and link to, the needs that each of the client personas represents.

The team has a reasonable plan to continue learning during private beta, including continuing to test their support model.

For their private beta, the team plans to recruit agents who are already using services that are part of Making Tax Digital. This should be an effective approach.

But we know that less experienced and less confident users can be reluctant to take part in betas of services like this, so the team will need to work hard to recruit agents and clients who are more likely to experience problems and need support.

This will be important for properly testing their support model and making sure they are building a service that will work for all users.

The team has reasonable confidence that the core transactional parts of their service - agents requesting authorisation and clients granting authorisation - work well for most users. But the team presented little evidence of testing those transactions in realistic contexts - eg setting up a new business, moving to a new agent, authorising an agent for some aspects of tax while doing some yourself. The team must do more to test their transactions and support services in the context of realistic end-to-end journeys for users.

Team

The team are led by an engaged and well-informed service manager and most of the necessary roles are filled. They work in an agile way and have recently made a positive change to their processes, dividing into two smaller teams to help make ceremonies more constructive. (The ‘not met’ on point 4 results from the lack of discovery and could be resolved by demonstrating a more significant challenging of assumptions at reassessment.)

The panel understands that this team is dependent on several other teams, many of which seem less agile. If possible, the Enterprise Tax Management Platform's release windows should be significantly shortened and formal change requests minimised.

The panel were concerned that the service manager’s influence seems to be restricted to a relatively small portion of the overall journey on the tax platform. Some of the design issues raised by the panel (particularly those around sending a link to the client) stemmed from an inability to affect tax-platform decisions. It would also have been good to see the team challenging the need for connecting services to hold different ‘known facts’ from each other.

Technology

The team have so far built only an HTML prototype, although the architecture for the beta service has been designed. The design shows a sensible separation of frontend and backend services, which will allow subsequent development of an API layer.

The beta will be based on the common HMRC tax platform, using its standard technology stack of Scala with the Play framework, deployed via Docker. It communicates via API with the Enterprise Tax Management Platform, an SAP-based product which is responsible for all data storage; the service does not store any data itself. The team are aware of issues around security and fraud, and plan to carry out a full security audit and penetration test.
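
To illustrate the stateless pattern described above, the sketch below shows what a Play controller that delegates all storage to the Enterprise Tax Management Platform could look like. This is a minimal illustration only, assuming a hypothetical EtmpConnector and route names; none of these identifiers are taken from the team's codebase.

```scala
import javax.inject.Inject
import scala.concurrent.{ExecutionContext, Future}
import play.api.libs.json.JsValue
import play.api.mvc.{AbstractController, Action, ControllerComponents}

// Hypothetical connector: the real API contract sits with ETMP, which owns
// all data storage. Nothing here is persisted by the service itself.
trait EtmpConnector {
  def createAuthorisationRequest(payload: JsValue): Future[Either[String, JsValue]]
}

class AuthorisationController @Inject() (
    cc: ControllerComponents,
    etmp: EtmpConnector
)(implicit ec: ExecutionContext) extends AbstractController(cc) {

  // An agent asks to act for a client: require a JSON body, pass it
  // straight through to ETMP, and return ETMP's answer unchanged.
  def requestAuthorisation(): Action[JsValue] = Action.async(parse.json) { request =>
    etmp.createAuthorisationRequest(request.body).map {
      case Right(reference) => Created(reference)
      case Left(error)      => BadGateway(error)
    }
  }
}
```

Keeping the frontend thin in this way is also what makes the later addition of an API layer straightforward: the same backend endpoint can serve both the web journey and third-party software.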

The tax platform provides separate environments for development, staging and QA. The team are following good development processes, with automated functional and regression testing. Logging and metrics will be available via Kibana and Grafana, and PagerDuty is used for alerts. The code is being developed in the open and is available via GitHub.
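
As a brief illustration of the kind of automated functional test mentioned above, a scalatestplus-play spec could exercise a route end to end. The route and assertion below are assumptions for illustration, not the team's actual tests.

```scala
import org.scalatestplus.play.PlaySpec
import org.scalatestplus.play.guice.GuiceOneAppPerSuite
import play.api.test.FakeRequest
import play.api.test.Helpers._

class AuthorisationRoutesSpec extends PlaySpec with GuiceOneAppPerSuite {

  "the authorisation request endpoint" should {
    "not accept a request without a JSON body" in {
      // Hypothetical route; with a JSON body parser, a request with no
      // body should never produce a Created response.
      val result = route(app, FakeRequest(POST, "/authorisation-request")).get
      status(result) must not be CREATED
    }
  }
}
```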

It was concerning that the team were not able to clearly explain the relationship between the code used in the existing live service and the new one, whether there will be any reuse, or what code has already been written for the beta. The team also need to be clearer about how they will iterate the service based on the results of user research during the beta phase.

Design

The biggest issue with this service is that the core concept seems to have been decided before any kind of Discovery phase took place. This also happened before any of the current team were involved in the project, but the team has not challenged the core assumption that this service needs to exist and so has ended up iterating on work that was built on untested assumptions.

The team needs to be really clear about what problem this service solves, both for users and for government. The service being assessed is really a component of other services, but beyond the two planned connections it’s not clear what those other services will be. Even with the planned connected services, it wasn’t clear that the end-to-end journey had been tested and understood by users.

The team have highlighted that many users will need to manage multiple authorisations for multiple services using this component, but the current journey for managing multiple authorisations seemed confusing and possibly repetitive. This journey also has not been tested, and should be before a beta phase can be started with full confidence.

Proof of identity is needed in this service, so the team should engage with the Verify team to understand the impact on their service when Verify becomes part of the HMRC identity solution next year. It was noted in the assessment that it hadn’t been easy to find who to talk to in Verify, and this will be fed back to the Verify team as well.

Analytics

The team understood the mandatory KPIs and had engaged with the Performance Platform. A performance analyst is assigned to the team. They have a convincing plan to monitor analytics and customer feedback in beta, using Google Analytics, Splunk and DeskPro.

There are plans to phase out non-digital methods of authorising agents over the next year before this service is mandated in April 2019.

Recommendations

To pass the reassessment, the service team must:

  • Prototype and test a client journey that starts within the Making Tax Digital portal, rather than relying on the agent to copy and paste a link into an email. The team should also have gained agreement from HMRC to implement this journey for private beta.
  • Test the core transactions in the context of realistic end-to-end journeys for users, and be able to demonstrate this at assessment.
  • Do further testing to prove or disprove the hypothesis that agents who do not use the API will only need to gain authorisation for a single service at a time. If this is disproved, the team should prototype a multiple-authorisation journey.
  • Prototype and test an agent dashboard that shows the status of sent requests.

The service team should also:

  • Engage with the Verify team to explore how Verify could be appropriate for this service.
  • Try to standardise and consolidate which ‘known facts’ are needed to authenticate a user for each service, and standardise the terms and conditions the user needs to sign. This is outside the team’s immediate scope, but would be valuable for users.
  • Challenge their own assumptions about whether requests should originate with agents rather than clients (this meets the needs of agents, but is open to risk of abuse).
  • Add additional context making it clear that non-digital authorisation still exists for other services, for example on the ‘Manage agents’ page.
  • Make it clear to users that if they authorise a new agent for the same task, they will effectively de-authorise their previous agent.
  • Improve its client personas to more clearly represent important circumstances, behaviours and motivations, and the needs that come from them.

Next steps

In order for the service to continue to the next phase of development it must meet the Standard. The service must be re-assessed against the criteria not met at this assessment.

Please contact the Service Assessment team at least 4 weeks before the date you’d like to hold a reassessment.

Digital Service Standard points

Point | Description | Result
1 | Understanding user needs | Met
2 | Do ongoing user research | Met
3 | Having a sustainable, multidisciplinary team in place | Not met
4 | Building using agile, iterative and user-centred methods | Not met
5 | Iterating and improving the service on a frequent basis | Not met
6 | Evaluating tools, systems, and ways of procuring them | Met
7 | Managing data, security level, legal responsibilities, privacy issues and risks | Met
8 | Making code available as open source | Met
9 | Using open standards and common government platforms | Met
10 | Testing the end-to-end service, and browser and device testing | Not met
11 | Planning for the service being taken temporarily offline | Met
12 | Make sure users succeed first time | Not met
13 | Ensuring consistency with the design and style of GOV.UK | Met
14 | Encouraging digital take-up | Met
15 | Using analytics tools to collect and act on performance data | Met
16 | Defining KPIs and establishing performance benchmarks | Met
17 | Reporting performance data on the Performance Platform | Met
18 | Testing the service with the minister responsible for it | N/A
Published 6 August 2018