Tell us about a death alpha assessment

The report from the alpha assessment for DWP's tell us about a death service on 10 May 2018.

From: Government Digital Service
Assessment date: 10 May 2018
Stage: Alpha
Result: Met
Service provider: Department for Work and Pensions

The service met the Standard because:

  • It will improve the lives of bereaved citizens
  • The service team is performing well, crossing organisational boundaries to work with others in the same department and across government
  • The panel was impressed by the work done to improve the interface for agents - it has been optimised for the organic nature of a phone call, avoiding asking end users to answer questions in a particular order or repeat information at what is a very stressful time for them

The panel would like to commend the service team on an Alpha that has been run well and will help to improve the lives of bereaved citizens.

About the service

Description

Tell Us About a Death (TUAaD) is a service that enables bereaved citizens to inform DWP that someone has died, prior to formal death registration. This information is then shared with other government departments and services, and the bereaved citizen is directed to other services that can support them.

Service users

Bereaved citizens: these can range from family members to care home workers - anyone who is able to register a death.

Detail

User needs

The team has conducted an impressive amount of research with users of the service: bereaved people, registrars, telephone agents, the Tell Us Once and bereavement service teams, hospices and funeral directors. It was clear that they were aware of the pain points of the current services. The team were also aware of how backend processes impacted on users’ experience. They had taken into account that users could be satisfied with the service when they used it but not realise there might be problems further down the line. Consideration of users who might need assisted digital support was evident throughout the discovery and alpha phases and, more importantly, was considered in the context of the service.

The team had a good plan in place for beta, with a focus on testing the riskiest assumptions. They had an impressive plan to have call agents embedded within the team during this phase. This will allow for fast feedback and iteration. There was also a plan to continue work on a public digital interface.

Team

The core service team had the multidisciplinary skills needed for Alpha and will bring on new skills for Beta - the team may wish to consider improving the ratio of permanent to interim team members in Beta.

The core team also collaborates with colleagues in other areas of the end-to-end service, and receives support from senior stakeholders. There are also strong links with other government departments. The service team works with agility and is working well together.

Technology

  • What technical choices have you made and why - language, framework, deployment, integration, 3rd parties?

Source code will be held in GitHub, and the service will expose RESTful APIs. No code has been written for this solution yet, as it is at an early stage. JIRA is used to store the user stories in the backlog. Confluence will be used for document storage and management.

The service will be hosted in AWS (to be consistent with other applications that DWP hosts in AWS), in the London region in a single Availability Zone. The S3 bucket is replicated across Availability Zones for robustness. SFT will be used for transferring data out to ATAS2.

Dev and QA environments are being set up; staging and production environments are still to be done.

The MVP (private beta) will not integrate with Verify; instead, the agent will follow DWP guidelines when recording the death over the phone with the informant. The service will be restricted to internal use by DWP agents. Further research is needed before the service can be made publicly available, but this can be done at a later phase.

The technical team includes a technical architect and a developer, who were present during alpha.

  • How do you plan to make the service open - open source, open standards, open data, common platforms?

Today’s version of the service shares data with 8 different departments (GDPR and other implications are to be considered and validated).

Various emails will be sent to the various departments, e.g. HMRC to cancel the passport, DVLA to cancel the driving licence, etc.

Production code will be in GitHub; the project team is working with another internal team, Fit Note, to get the code into GitHub.

  • How will you ensure the service is safe for users - data privacy, security threats, fraud vectors?

ADFS will be used to authenticate and authorise users. Security groups within the AWS subnets will be used to manage access control. Key Management Service (KMS) will be used to encrypt and decrypt data at the database level (role-based access control within the AWS VPC). Information is passed within the subnets and security groups (with an ELB between each security group).

Security threats: a token-based approach will be used for passing data between the front end and the data store. CloudWatch will be run on AWS to monitor activity within the instances.

Fraud: the unique code from the registrar’s letter may help prevent people from making fraudulent reports.

  • What other distinct options have you tried/tested?

    Tested with the agent and informant (role play) and produced a video.

Further testing in next stage - TBC.

  • What are your technology plans for private beta?

The service will be hosted in AWS and use ADFS to obtain the list of authorised users. There are a number of roles that will control agents’ access to the service. A PostgreSQL database will be used, with Node.js running on Red Hat. The MVP (private beta) will not integrate with Verify; instead, the agent will follow DWP guidelines when recording the death over the phone with the informant.

The product will be developed by the in-house development team, and AWS is the only vendor that will provide the infrastructure.
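
The report says only that “a number of roles” will control agents’ access; it does not name them. A minimal Node.js sketch of a role-based check, with role names and permissions invented purely for illustration, might look like:

```javascript
// Sketch of role-based access control for agents. The roles and the
// permission mapping below are assumptions for illustration only.
const PERMISSIONS = {
  agent: ['create-report', 'view-own-reports'],
  'team-leader': ['create-report', 'view-own-reports', 'view-all-reports'],
};

// In this sketch, user.roles would be derived from ADFS group claims
// after the agent has authenticated.
function canPerform(user, action) {
  return (user.roles || []).some(
    (role) => (PERMISSIONS[role] || []).includes(action)
  );
}
```

A check like this would typically sit in middleware in front of each route, so an unauthorised action is rejected before any data is touched.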

  • What plans do you have if your service is unavailable for any length of time?

If the service is down, the agent will need to take manual notes. When the service is live it will sit behind a load balancer on AWS with Auto Scaling groups: if a server fails it will be restarted. Logs of the issues that agents face during the submission of data are captured using Prometheus.

Service desk / support-type issues: the TechNow team will pick up the tickets. Triaging and first-line support will be done by TechNow.
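
The fallback described above - keep trying, then have the agent take manual notes - can be sketched in Node.js as follows. The function name, retry counts and result shape are illustrative assumptions, not the team’s implementation:

```javascript
// Sketch: retry a submission a few times with increasing delay; if the
// service stays down, return a signal so the UI can prompt the agent to
// take manual notes instead. All names here are illustrative.
async function submitWithRetry(submit, attempts = 3, delayMs = 100) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return { ok: true, result: await submit() };
    } catch (err) {
      lastError = err;
      // Simple linear backoff between attempts
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  // Service still unavailable: fall back to the manual-notes process
  return { ok: false, fallback: 'manual-notes', error: lastError };
}
```

The captured submissions would then be entered into the service once it recovers, matching the manual process the report describes.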

  • Internal / external teams (perm/contractors/vendors/etc.)

Design

The service team has iterated a number of prototypes to research both the citizen- and agent-facing aspects of the service. Based on research findings they made the decision to focus on a phone service for their MVP. This will be complemented by a digital service for the agents receiving the calls, which will allow them to capture information more easily and keep track of their cases.

The panel was impressed by the work done to improve the interface for agents. It has been optimised for the organic nature of a phone call, avoiding asking end users to answer questions in a particular order or repeat information at what is a very stressful time for them. During private beta, the agents will be co-located with the service team. This can only help further improvements to the service.

The interaction design has been informed by collaboration with designers of other case-working systems within DWP and across government. Whilst this ensures a level of consistency, the panel recommends that the team has another look at the ‘tabs within tabs’ pattern used. This could cause problems for agents working on multiple things at once, as well as being difficult to implement for users with certain access needs.

A lot of work has also been done on the paper ‘bookends’ of the service: a leaflet handed out to end users by registrars after a death, and the letter they receive after using the service. The team is well aware that one of their biggest design challenges is getting users to the service in the first place, and it is good to see them focusing on this.

A citizen-facing online service has been prototyped too, although it will not be the initial focus for private beta. The panel is concerned that the team will take a long time to learn whether an online channel is sufficiently viable for this service to justify its cost. Although the research indicates that users would use an online service, it isn’t the same as finding out whether they actually do. The panel recommends finding a way to quickly test this assumption with real users in a private beta context.

Analytics

The service team will measure the mandatory KPIs and has also identified a range of additional KPIs by which to measure the value of their service, and has spoken with the GOV.UK Performance Platform team. The service team outlined the challenges of measuring some of these KPIs - the panel highlighted the importance of benchmarking ‘as is’ performance at the beginning of beta, even if this requires assumptions and confidence measures, so that there can be an objective assessment of the improvement in value at the end of beta.

Recommendations

To pass the next assessment, the service team must:

  • Benchmark current performance against agreed KPIs at the beginning of Beta.

The service team should also:

  • consider improving the ratio of permanent to interim team members in Beta.
  • find a way to quickly test the assumption that citizens will use an online service with real users in a private beta context
  • ensure that enough research time is in place to manage both the integration of the beta telephone service and the research to understand the digital service.

Next Steps

You should follow the recommendations made in this report and adhere to guidance on engaging with GOV.UK before arranging your next assessment.

Get advice and guidance

The team can get advice and guidance on the next stage of development by:

Digital Service Standard points

Point Description Result
1 Understanding user needs Met
2 Improving the service based on user research and usability testing Met
3 Having a sustainable, multidisciplinary team in place Met
4 Building using agile, iterative and user-centred methods Met
5 Iterating and improving the service on a frequent basis Met
6 Evaluating tools, systems, and ways of procuring them Met
7 Managing data, security level, legal responsibilities, privacy issues and risks Met
8 Making code available as open source Met
9 Using open standards and common government platforms Met
10 Testing the end-to-end service, and browser and device testing Met
11 Planning for the service being taken temporarily offline Met
12 Creating a simple and intuitive service Met
13 Ensuring consistency with the design and style of GOV.UK Met
14 Encouraging digital take-up Met
15 Using analytics tools to collect and act on performance data Met
16 Defining KPIs and establishing performance benchmarks Met
17 Reporting performance data on the Performance Platform Met
18 Testing the service with the minister responsible for it Met
Published 6 August 2018