Get Flood Warnings

The report for the Get Flood Warnings alpha assessment on 17 November 2021

Service Standard assessment report

Get Flood Warnings

From: Central Digital & Data Office (CDDO)
Assessment date: 17/11/2021
Stage: Alpha
Result: Met
Service provider: Environment Agency

Service description

The service provides flood warnings by phone, text or email to citizens, businesses and organisations when the Environment Agency forecasts flooding from rivers, the sea or groundwater to their location of interest.

Users can sign up to receive flood warnings for locations of interest, such as their home, their place of work or an area they drive through.

Service users

This service is for:

External users:

  • Current flood warning system users
  • Opted-in users - automatically opted into receiving flood warning information via the Extended Direct Warnings (EDW) service
  • Unregistered users - these users are not registered to receive any kind of flood warning information, but could be at risk, or may travel in areas at risk of flooding

The following users sign up for flood warnings via a different system, the Targeted Flood Warning System (TFWS), but XWS will send messages to them via TFWS:

  • Professional Partners
  • Multi-site Owners

Internal EA users:

  • Flood Warning Duty Officers (FWDO) and Assistant Flood Warning Duty Officers (AFDO)
  • Flood Resilience teams
  • IM&R Digital Services
  • Other Environment Agency teams

1. Understand users and their needs

Decision

The service met point 1 of the Standard.

What the team has done well

The panel was impressed that:

  • the team has worked with a wide variety of internal and external user groups, all with different relationships to the flood warning service, and carefully mapped them
  • the team has a good understanding of a cross-section of users and the context of receiving flood warnings; in particular a deeper understanding of rural users
  • the team is clearly working with a great many user groups, and multiple stakeholders within the confines and constraints of legacy systems and operational models, as they design for a complex, safety-critical service
  • the team is reaching out to harder-to-reach user groups via panels, specialist charities and interest groups
  • the team has engaged with their multiple stakeholders within the Environment Agency and beyond, to bring them along
  • needs of internal users appear very well understood, and the team has done a considerable amount of work to understand and improve the message creation journey for duty officers
  • accessibility needs of users are being considered
  • the service is researching and iterating in response to design and content issues

What the team needs to explore

Before the next assessment, the team needs to:

  • use mixed methods and, in particular, where appropriate, try to gather feedback from a larger sample of users
  • think about how to reach out to offline/limited online users
  • do more work directly with users who have accessibility needs, including those with a hearing impairment
  • where appropriate, test on mobile, tablet and desktop, to understand design and content needs in relation to different platforms
  • recognise that there is a large amount of user research to be done on this service - the panel recommends that the team considers increasing research resource and also seeking out mentoring from other government departments
  • ensure research meaningfully represents the breadth of private citizen users (including opted-in users, registered users and potential users, but focusing on the needs and contexts rather than the technical relationship to the flood warning service)
  • continue to consider a wider, more dynamic range of situations - there is, understandably, a great focus on property flooding, but people whose homes are not at risk may still be severely affected, for example, flooded roads cutting off access to home/schools/hospital and making driving dangerous; floodwaters affecting people, animals and property at a distance, such as the elderly relative in a flood zone, horses grazing in a riverside field, boat moorings and such like. It would also be useful to consider the ‘life cycle’ of flood warnings of different types, from early alerts to severe flood warnings, and how users may respond during a rapidly changing situation (there is some helpful work highlighted in the earlier Discovery research)
  • continue to explore the experience of those who receive automatic messages without signing up (80% of users), given that many messages are sent out in this way and drive some of the recipients to call the Floodline number
  • build on research with users who are offline, those who are online but with low digital skills, and those who might reject (or not use) online systems for this particular purpose
  • draw out general access barriers, including disability - for example, how does an elderly user with a moderate hearing impairment, who is not online, engage with the service? (Floodline appears to use Type Talk)
  • explore users’ mental models further when they set up alerts and look at their location and mapping results
  • review the separation of ‘high’ and ‘medium’ risk areas in the current journey, which felt clunky - it was not clear how users understood this distinction or indeed what the consequences were
  • recognise that maps make a great deal of sense to some users (including professional users) but, in an age of satnav, could be very confusing to others
  • consider what metrics to use in evaluating the quality of the service, from sign-up to receipt of messages, going beyond user satisfaction and number of messages sent as these can be unreliable indicators
  • examine the user feedback ecosystem, to understand how best to integrate insight from data (messages sent, opt-outs, customer service stats, online feedback etc) with other feedback

2. Solve a whole problem for users

Decision

The service did not meet point 2 of the Service Standard.

What the team has done well

The panel was impressed that:

  • the internal processes have been well considered
  • given the nature, number and complexity of overlapping systems and services, there was an organisational drive and ambition to simplify the end to end journey

What the team needs to explore

Before their next assessment, the team needs to:

  • consider a simpler non-account route for the 80% of users who want one alert for a postcode. This might not involve GOV.UK
  • work as directly and closely as possible across the department and across products, including as part of the Access to Flood Services discovery (started in September 2021), to ensure the team is building around users’ whole journeys and problems, rather than around the way things have historically been done or around current technological systems and constraints
  • organise and orientate around end users’ problems and needs and an aimed-for service blueprint, rather than around the as-is setup
  • refine the scope of the service team’s work. Consider if, for example, EA users managing geographical locations (‘target areas’) should be in or out of scope at this stage
  • maintain focus on the ‘Get Flood Warnings’ theme, of the four currently being worked upon, with Create a warning, Manage target areas and Manage my service being secondary at this time
  • identify and seek to overcome operational constraints, contributing to the access to flood warnings discovery, including overlaps with other flood related services to create a more consistent service for end users
  • review whether users’ need to understand when there’s a new risk of flood damage might be met by status change alerts from the Check flood risk service
  • map the existing end to end journey of end users, including all touch points, in relation to achieving this goal. Draft a blueprint of what the ideal service would look like to match those journeys
  • address the separation of ‘high’ and ‘medium’ risk areas in the current journey, which felt very clunky - it was not clear how users understood this distinction or indeed what the consequences were

3. Provide a joined-up experience across all channels

Decision

The service met point 3 of the Standard.

What the team has done well

The panel was impressed that:

  • the online, phone and SMS based services are well joined up
  • the needs of front line operations staff had been considered

What the team needs to explore

Before their next assessment, the team needs to:

  • further explore accessibility requirements, particularly for hearing impaired users
  • consider further how the service aligns with other Environment Agency flood risk services as part of the Access to Flood Services discovery (started in September 2021), to ensure it is building around users’ whole journeys and problems
  • consider how the alert notification could be better aligned with online information; user research tells us that users receive a text, then go and find more information elsewhere

4. Make the service simple to use

Decision

The service met point 4 of the Standard.

What the team has done well

The panel was impressed that:

  • the team had made many changes to the prototype in response to user testing and feedback, such as moving sign up to after users had found out if alerts were an option; simplifying the results by showing different types of alerts on different maps; removing confusing icons; and leading with clearly written explanations instead of a more confusing map
  • content had been refined in response to user testing around language confidence and terminology (including the service name), and broken into chunks for easier reuse across the service

What the team needs to explore

Before their next assessment, the team needs to:

  • review the naming of the service and alert levels within it, to be sure they work for users, including potential users. Reconsider how the team describes and approaches the service, ensuring it’s user-centred rather than delivery-, process- or legacy-centred
  • do more research to confirm whether account creation is a necessary part of this service. Identify whether any alert management needs might be met by an opt-out option within alerts, particularly as most users have just one alert set up, and most users are added by default and so won’t have a chance at that point to create an account
  • check the assumption that 27% of calls to the flood helpline being related to alert sign-up or management equates to a user need for an account - do users call because it’s hard to do?

5. Make sure everyone can use the service

Decision

The service met point 5 of the Standard.

What the team has done well

The panel was impressed that:

  • users are offered the alternative offline ‘Floodline’ helpline; the service is set up to help create the user registration and, going forward, will help to create the subscription on users’ behalf (assisted digital)
  • the team is planning an accessibility audit and access needs testing through the independent DAC audit centre in Swansea

What the team needs to explore

Before their next assessment, the team needs to:

  • understand how users are generally finding and navigating to the service, and the extent to which all potential users would be able to find it if necessary
  • ensure that all aspects of assisted digital and accessibility requirements are considered as part of the user stories

6. Have a multidisciplinary team

Decision

The service met point 6 of the Standard.

What the team has done well

The panel was impressed that:

  • the team consisted of the key roles the panel would expect to see in a multidisciplinary team
  • all members of the team had taken part in user research sessions and were able to explain the problem and scope for this work
  • the team worked closely with subject matter experts (SMEs) in the operations space to ensure alignment of thinking

What the team needs to explore

Before their next assessment, the team needs to:

  • ensure that succession planning is in place for any off-rolling of staff. This team is completely different to the discovery team, which has meant a loss of knowledge between the two phases of development and has impacted delivery of the alpha
  • ensure departmental support is in place so that a sustainable and multidisciplinary team will be working on this service throughout its lifetime, and align service development to further avoid developing in silos
  • consider how policy colleagues can link in with this work to ensure alignment across services
  • increase research capacity in the short term, along with adding an interaction designer to map the online/offline services and a service designer to look at the programme of work as a whole

7. Use agile ways of working

Decision

The service met point 7 of the Standard.

What the team has done well

The panel was impressed that:

  • the team has adopted an iterative approach to building and releasing the service and its features
  • the team has adapted its ways of working to meet the needs of the development team, taking a flexible approach to stand-up format and timing
  • the team has iterated its UX and elements of its technology
  • the team is familiar with their product backlog and work together to prioritise features, using Jira to manage their service

What the team needs to explore

Before their next assessment, the team needs to:

  • consider the purpose of ceremonies, especially show and tells. The team has technical and user-centred design show and tells but would benefit from including whole-service show and tells
  • consider structuring development around defined user stories, over existing processes and legacy services. As recommended in the user needs section, the team would benefit from looking at the journeys anew, instead of a lift and shift of process from old to new
  • adopt an agile development mindset, recognising that development is more than sprint ceremonies; the focus should be on individuals and interactions over processes and tools. For example, the team should consider user journeys separately from existing processes in order to identify pain points for users

8. Iterate and improve frequently

Decision

The service met point 8 of the Standard.

What the team has done well

The panel was impressed that:

  • the team has made changes to the prototype based on user testing and feedback, such as simplifying the map view, how the service is displayed and the language used

What the team needs to explore

Before their next assessment, the team needs to:

  • ensure that there is sufficient time/resource available to consider the wider user journey work recommended and the need for iteration that this may present
  • consider the current internal process for notification. Is there a way that this could be streamlined or automated?

9. Create a secure service which protects users’ privacy

Decision

The service met point 9 of the Standard.

What the team has done well

The panel was impressed that:

  • the service uses AWS, which, if used well, will almost certainly provide more protection for users than self-hosting
  • the planned architecture lends itself to using AWS’s security tools
  • the team are aware of the need to make a secure system that protects user data

What the team needs to explore

Before their next assessment, the team needs to:

  • evaluate the risks around the service. Threat modelling should be performed to justify the team’s decisions about security. It is not enough to be aware of possible attacks the service may experience based on previous ones on other services. It is also important to first figure out why the service would be attacked, by whom, how, etc, and then use that model to make security design decisions
  • make sure IT health checks and privacy impact assessments are carried out early
  • make full use of AWS’s data security features (TLS, database encryption, etc) but also code the application itself using modern security practices. See OWASP for guidance; a minimal sketch of encrypting data at rest follows this list
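
To illustrate the last point, here is a minimal sketch (in TypeScript, matching the team’s NodeJS stack) of writing subscriber data with TLS in transit and encryption at rest via the AWS SDK; the bucket, key and function names are hypothetical, not taken from the team’s architecture:

    // Minimal sketch: TLS in transit plus encryption at rest with AWS SDK v3.
    // Bucket and key names are hypothetical illustrations only.
    import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

    const s3 = new S3Client({ region: "eu-west-2" }); // SDK requests use HTTPS (TLS) by default

    export async function storeSubscription(id: string, record: unknown): Promise<void> {
      await s3.send(
        new PutObjectCommand({
          Bucket: "flood-warning-subscriptions", // hypothetical bucket
          Key: `subscriptions/${id}.json`,
          Body: JSON.stringify(record),
          ServerSideEncryption: "aws:kms", // encrypt at rest with a KMS-managed key
        })
      );
    }

Encrypting at the storage layer like this complements, rather than replaces, application-level practices such as input validation and least-privilege IAM roles.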

10. Define what success looks like and publish performance data

Decision

The service met point 10 of the Standard.

What the team has done well

The panel was impressed that:

  • the team has baselined current KPIs and was ready to report on the four mandatory KPIs
  • current user journey drop out points have been identified as a future KPI for the new service
  • the team has developed a transaction model for the existing service

What the team needs to explore

Before their next assessment, the team needs to:

  • consider how much impact they can have on some of the identified KPIs. For example, the panel understood the rationale for including “number of warnings issued” but to what extent is that in the team’s control given the increase in flood events?
  • ensure they have benchmarked the offline indicators, such as the percentage of calls to Floodline about a certain query
  • consider why they are collecting the KPIs and how that data will inform future development of the service. Are there any KPIs linked specifically to user stories? If so, it would be good to see those joined up
  • consider how these KPIs align to other services within the programme to ensure alignment across the piece
  • consider embedding analytics and metrics within the service (a sketch follows this list)
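
As an illustration of the final point, a minimal sketch of embedding a service metric in a NodeJS application using the prom-client library; the metric name, labels and routes are hypothetical assumptions, not taken from the team’s service:

    // Hypothetical sketch: counting warning messages sent, by channel,
    // and exposing the figures for a dashboard or scraper to collect.
    import express from "express";
    import { Counter, register } from "prom-client";

    const warningsSent = new Counter({
      name: "flood_warnings_sent_total", // hypothetical metric name
      help: "Total flood warning messages sent, by channel",
      labelNames: ["channel"], // e.g. sms, email, phone
    });

    const app = express();

    // Increment wherever the service actually dispatches a message
    app.post("/send", (_req, res) => {
      warningsSent.inc({ channel: "sms" });
      res.sendStatus(202);
    });

    // Expose current metric values in Prometheus text format
    app.get("/metrics", async (_req, res) => {
      res.set("Content-Type", register.contentType);
      res.send(await register.metrics());
    });

    app.listen(3000);

A counter like this could back KPIs such as “number of warnings issued” without manual collation, and the same pattern extends to sign-up completions and opt-outs.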

11. Choose the right tools and technology

Decision

The service met point 11 of the Standard.

What the team has done well

The panel was impressed that:

  • the team has only just started implementing the private beta service, keeping in mind that there is a danger in wanting to build too much too early, especially when the service is only just coming out of alpha and is likely to change substantially in the course of private beta
  • the tools and technology the team have chosen are modern, widely used and well supported: NodeJS, nginx, and so on
  • the team is making use of many third party services and libraries, such as Mapbox and OS Places

What the team needs to explore

Before their next assessment, the team needs to:

  • make sure open technology is considered first when selecting software or libraries. See the comment on AWS Connect below for an example

12. Make new source code open

Decision

The service met point 12 of the Standard.

What the team has done well

The panel was impressed that:

  • the team has published its source code on GitHub (links below)
  • the team is conducting its software development in the open

Links to source code:

What the team needs to explore

Before their next assessment, the team needs to:

  • better describe the service’s repositories by explaining what service they belong to and how they link to each other
  • make full use of GitHub’s features (pull requests, etc) to demonstrate that the team follows modern software development practices

13. Use and contribute to open standards, common components and patterns

Decision

The service met point 13 of the Standard.

What the team has done well

The panel was impressed that:

  • the team used the GOV.UK Prototype Kit to build the prototype

What the team needs to explore

Before their next assessment, the team needs to:

  • check if any part of the service could be useful to the open source community, and if so package and publish it so it can be reused by others
  • check if any part of the service could be offered as a public API, and if so open and document it so it can be used by others
  • make sure that open standards support is considered when selecting specific components. For instance, AWS Connect doesn’t seem to support open standards for IVR such as VoiceXML. Supporting VoiceXML would allow the service to be more portable across IVR platforms and would also let the team publish more of its source code

14. Operate a reliable service

Decision

The service met point 14 of the Standard.

What the team has done well

The panel was impressed that:

  • the service will be running on AWS and will benefit from the quality of service and reliability that comes with it
  • the team has demonstrated good knowledge of how to use AWS to make the service robust

What the team needs to explore

Before their next assessment, the team needs to:

  • make sure that the service isn’t locked into the AWS ecosystem. There might be reasons (contractual or technical) for the service (or a part of it) to be migrated elsewhere. For instance, AWS Connect may prove not to meet user needs in a way that another IVR provider would
  • do more than trust its cloud service to handle reliability. It’s important not to take infrastructure reliability for granted, but instead have a plan for any incident that could happen. This includes:

    • preparing for possible attacks
    • monitoring the service using a third party service (Pingdom, Sentry, etc) - a sketch of a health-check endpoint that such a monitor could poll follows this list
    • planning for application or platform failure: who gets notified, who can make decisions to turn things off, how to rebuild the service from scratch, etc
  • consider how they might deal with any service outages: how they will learn about them and respond to them (continuity arrangements/disaster recovery), and prepare this information in advance
  • mitigate any single points of failure, particularly in the operations/duty officer space
  • consult the Service Standard for further details
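
As a sketch of the monitoring point above, a minimal NodeJS/TypeScript health-check endpoint that an external monitor such as Pingdom could poll; the route and response shape are illustrative assumptions, not the team’s actual code:

    // Hypothetical health-check endpoint for external uptime monitoring.
    import express from "express";

    const app = express();

    app.get("/healthcheck", (_req, res) => {
      // A fuller check might also verify the database and downstream
      // dependencies (for example the SMS gateway) before reporting healthy.
      res.status(200).json({ status: "ok", uptimeSeconds: process.uptime() });
    });

    app.listen(3000);

Polling an endpoint like this from outside AWS gives the team an independent signal of availability, separate from the platform’s own monitoring.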
Published 25 August 2022