Resilience Direct alpha assessment report
Service Standard assessment report
Resilience Direct
| | |
| --- | --- |
| From: | GDS |
| Assessment date: | 23/01/2024 |
| Stage: | Alpha |
| Result: | Red |
| Service provider: | Cabinet Office |
Service description
ResilienceDirect allows authorised users to create and share maps and mapping data, create informational pages, and upload and download documents, facilitating collaboration across organisational boundaries for organisations and individuals who plan for, exercise, respond to and recover from emergencies across the UK, the devolved governments and UK overseas territories.
Note: as a project centred on the modernisation of a live system, the effort to secure adequate funding for its delivery is ongoing. The following report reflects this constraint.
Service users
This service is for users from:
- Category One and Two emergency responders as defined by the Civil Contingencies Act 2004: police, fire and ambulance services, local authorities and the NHS
- organisations that support the planning for, exercising for, response to and recovery from emergencies: government departments, the voluntary and charity sector, private organisations and academia
The main user types are:
- Site Admin - full administrative access across the service, including managing organisations, sub-organisations (SO), and user roles. Typically, RD Team or Service Desk personnel.
- Organisation Admin - admin for a specific organisation, managing users and viewing organisation-wide statistics; effectively a Standard User with elevated permissions
- Standard User - general user with permissions to log in, create, edit, and upload maps, pages, and files based on organisational permissions.
- Dashboard User - limited access; can log in but is restricted to viewing dashboard data only.
- Contact - not a user with login rights, but acts as a recipient for notifications and has no access to the service directly. Can include shared/group email addresses.
Things the service team has done well:
- it was obvious that the team is genuinely passionate about the service and rightfully proud of everything they’ve managed to achieve despite an incredibly tight timeframe and significantly difficult user recruitment circumstances
- the team completed a very extensive technology options analysis and provided comprehensive documentation considering salient points for each of the platforms they reviewed before making a final technology choice, including careful consideration of common languages and tooling within the Cabinet Office
- extensive thought has been put into the Role Based Access Control proposal and details like ensuring documents do not get orphaned when parties are removed from the platform
- consideration has been made around the accessibility of the mapping element of the application
- the team has adopted common components such as the Design System Components and GOV.UK Notify. They have also used standardised geo data formats and common nomenclature related to the M/ETHANE model, a common model for passing incident information between services
1. Understand users and their needs
Decision
The service was rated red for point 1 of the Standard.
During the assessment, we didn’t see evidence of:
- a clear problem statement and user needs that represented what people need to use the service for. Those presented were low level, functional needs. This has led the team to focus on what should be improved from the current system, rather than the purpose of the new one.
- a clearly defined MVP based on user needs. The team is planning to build and launch the whole service in private beta. This seems unrealistic in the context of the constraints the team are facing. A smaller MVP with features prioritised around user needs gives the team a better chance of delivering value quickly.
- the number of potential users and an understanding of what stops them using the current service. Without this, the team can’t work towards meeting their needs and increasing uptake.
- evidence of sufficient research for an alpha stage. There are gaps in the research, many of which the team recognise. For example, they’ve only done remote research with people in offices, there’s been no co-ordinated research around different device usage and the five ‘non-users’ who tested the prototype all worked for organisations that are engaged with the service.
- understanding of users’ offline journey, including the extent of duplication built into the service.
- a support journey based on user needs that has been tested and shown to be effective.
- a detailed research plan for beta, although the panel acknowledge this is in part due to timelines, scope and the team for beta not being clear.
2. Solve a whole problem for users
Decision
The service was rated red for point 2 of the Standard.
During the assessment, we didn’t see evidence of:
- specific prioritisation of which areas of the service needed the most urgent improvement - the service offers a very wide array of features and use cases, many of which remain unexplored because user research could only be conducted with predominantly one type of user, limiting the range of use cases that emerged
- planning or consideration for the offline journey or the experiences of users in the field, especially during in-the-moment crisis or incident response
- outcomes-focused design or development measured against clear baseline performance metrics
3. Provide a joined-up experience across all channels
Decision
The service was rated red for point 3 of the Standard.
During the assessment, we didn’t see evidence of:
- looking at the full end-to-end service experience for different types of user
- exploration of offline channels or access
- baseline measurement to understand whether the various organisations that use the service would report an improvement in their ability to work together, internally or with other organisations, as a result of the proposed changes to the design of the service
4. Make the service simple to use
Decision
The service was rated amber for point 4 of the Standard.
During the assessment, we didn’t see evidence of:
- exploration of alternative Plain English options for many areas of content design; while there was good use of components from the GOV.UK Design System, content design was deprioritised and treated as functional only, with a number of design decisions made to match what existing users were already familiar with and expected or wanted to see, rather than following good principles of content design - for new users with no previous experience of the service, a more focused exploration of this may have resulted in a simpler, more intuitive user experience
- exploration of whether any aspects of the existing service could be removed from a future version to simplify the service and narrow its focus to meet specific user needs, rather than trying to offer a large range of under-utilised features
5. Make sure everyone can use the service
Decision
The service was rated amber for point 5 of the Standard.
During the assessment, we didn’t see evidence of:
- sufficient testing on different types of device, especially mobile devices or in scenarios with poor signal
- testing with users in the field, or during crisis response or a fast-moving incident management situation
- exploration of the offline user journey
- plans for assisted digital support for users who are unable to use or access the service, particularly those with accessibility needs
6. Have a multidisciplinary team
Decision
The service was rated amber for point 6 of the Standard.
During the assessment, we didn’t see evidence of:
- an adequately resourced team, especially with content, interaction and service design all being done by one person - this risks the work being unsustainable and unmanageable, and significantly reduces the opportunity for constructive challenge to design thinking and decision-making
- sufficient tech resource to deliver beta
7. Use agile ways of working
Decision
The service was rated green for point 7 of the Standard.
Optional advice to help the service team continually improve the service
- we saw evidence of psychological safety and mutual respect within the team, including hearing from lots of different voices
- there is a real sense of pride and collaboration from both CO and CGI colleagues
- the team has iterated their ceremonies and ways of working based on learnings
- the team has had good access to, and engagement from stakeholders
- it would be nice to see some co-location where possible - the panel appreciates the budget constraints and that December/January is not the easiest time to travel
8. Iterate and improve frequently
Decision
The service was rated red for point 8 of the Standard.
During the assessment, we didn’t see evidence of:
- funding to continue iterating the service in the short-medium term
- continuity of resource for beta. The panel understands the entire CGI team is due to roll off, and beta may be delayed by 12 months or more, leading to a lack of handover to a future team and potential changes to the service landscape
9. Create a secure service which protects users’ privacy
Decision
The service was rated amber for point 9 of the Standard.
During the assessment, we didn’t see evidence of:
- sufficient documentation of threat modelling plans for the service. Threat modelling was discussed during the assessment and some assurances were given, but documented evidence was not provided. All departments must implement the Secure by Design principles and adopt processes for carrying out threat modelling, and given the critical nature of the service it is suggested that the team engage in threat modelling early, building on experience from the previous product to create a robust framework and documentation for assessing threats and reviewing them regularly. The sensitive nature of the documentation held within the service suggests the risk of being targeted in a cyber-attack is elevated.
- Web Application Firewall (WAF) provision in the diagrams in the high-level design (HLD) document. The service is fronted by the Cabinet Office Login application, which is expected to sit behind a WAF, but the application itself should also be protected. As the proposed system is to be hosted on AWS (Amazon Web Services), this is easily achieved with AWS WAF; DDoS (Distributed Denial of Service) and other advanced protections should also be considered. It is recommended the team document this before beta (a minimal sketch of attaching AWS WAF is included after this list).
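For illustration only, the sketch below shows one way the WAF recommendation could be implemented, assuming a Python/boto3 workflow and an application load balancer; the web ACL name, ARNs and the choice of managed rule group are hypothetical and are not taken from the team's design.

```python
# Minimal sketch: attaching an AWS WAF web ACL to the application's load balancer.
# All names and ARNs below are illustrative placeholders, not the team's resources.
import boto3

wafv2 = boto3.client("wafv2", region_name="eu-west-2")

# Create a regional web ACL that allows traffic by default but applies the
# AWS-managed common rule set (generic protections such as SQLi and XSS rules).
acl = wafv2.create_web_acl(
    Name="resilience-direct-web-acl",   # hypothetical name
    Scope="REGIONAL",                   # REGIONAL covers ALB / API Gateway
    DefaultAction={"Allow": {}},
    Rules=[
        {
            "Name": "aws-common-rules",
            "Priority": 0,
            "Statement": {
                "ManagedRuleGroupStatement": {
                    "VendorName": "AWS",
                    "Name": "AWSManagedRulesCommonRuleSet",
                }
            },
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "aws-common-rules",
            },
        }
    ],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "resilience-direct-web-acl",
    },
)

# Associate the web ACL with the (hypothetical) application load balancer.
# Note: DDoS protection beyond AWS Shield Standard (for example Shield Advanced)
# is a separate consideration and is not covered by this web ACL.
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:eu-west-2:123456789012:loadbalancer/app/rd-alb/abc123",
)
```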
10. Define what success looks like and publish performance data
Decision
The service was rated amber for point 10 of the Standard.
During the assessment, we didn’t see evidence of:
- sufficient baselining of the as-is to be able to demonstrate an improvement delivered by the new service in beta and beyond
- wider focus on outcomes outside of the mandatory KPIs - performance targets were primarily based on non-functional system performance, rather than success of the service in terms of user outcomes
- factoring the cost of the team and architecture into the overall cost per transaction. The panel appreciates this is not a traditional transactional system and that estimates are still useful, but believes current costs are significantly under-represented
11. Choose the right tools and technology
Decision
The service was rated amber for point 11 of the Standard.
During the assessment, we didn’t see evidence of:
- incorporating caching into the solution. Given the application will be used “in the field” on multiple devices, where bandwidth and functionality may be limited, it is suggested that the team, as they build and approach private beta, performance test the solution and optimise page load times, data retrieval and overall performance for slow network conditions (a minimal caching sketch follows this list)
- enough prioritisation or consideration of overall system availability. The team should critically evaluate all aspects of the application and design availability and disaster recovery goals that reflect the needs of users. As a system that handles crisis management, matters other than cost should be considered when evaluating a multi-region architecture, and the user community should be consulted to establish robust RTO (recovery time objective) and RPO (recovery point objective) figures. As the system is adopted, an active/active or active/passive architecture should be explored to ensure no downtime for the user base if they are managing a crisis in the same region where the application is hosted.
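As a purely illustrative sketch of the caching point above, the example below shows HTTP response caching with ETag revalidation so that devices on slow connections avoid re-downloading unchanged data; Flask and the /map-tiles endpoint are assumptions made for illustration and do not reflect the team's chosen stack.

```python
# Minimal sketch: HTTP caching and ETag revalidation for slow network conditions.
# Flask and the /map-tiles endpoint are illustrative assumptions only.
import hashlib
from flask import Flask, make_response, request

app = Flask(__name__)

def load_tile(tile_id: str) -> bytes:
    # Placeholder: in a real service this would come from the mapping backend.
    return f"tile:{tile_id}".encode()

@app.get("/map-tiles/<tile_id>")
def map_tile(tile_id: str):
    payload = load_tile(tile_id)
    etag = hashlib.sha256(payload).hexdigest()

    # If the client already holds this version, skip resending it over a slow link.
    if etag in request.if_none_match:
        return "", 304

    response = make_response(payload)
    response.set_etag(etag)
    # Let devices and intermediaries reuse the tile for 5 minutes, and serve a
    # stale copy briefly while revalidating, so pages stay usable on poor signal.
    response.headers["Cache-Control"] = "public, max-age=300, stale-while-revalidate=60"
    return response
```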
12. Make new source code open
Decision
The service was rated green for point 12 of the Standard.
Optional advice to help the service team continually improve the service
- during the assessment the team had not created any new open source assets, given the focus of the alpha was to investigate technical options. They see no reason why custom functionality developed on open source components (for example, some of the custom mapping functionality such as the grid tool and sector tool) could not be fed back into the open source community. It would be good to ensure the team have set appropriate standards to code in the open and have the full support of the Cabinet Office to complete repository setup under their organisation.
13. Use and contribute to open standards, common components and patterns
Decision
The service was rated green for point 13 of the Standard.
Optional advice to help the service team continually improve the service
- reduce the use of the green ‘call to action’ (CTA) button design by using:
  - descriptive link text for dashboard tiles (such as making the word ‘Governance’ a text-based link called ‘Governance documents’ rather than using a green ‘Open’ or ‘View’ button) to improve accessibility for screen reader users
  - more instances of the normal grey button state to help actual CTAs (such as ‘Add new’) stand out better
- test content with users who have no experience at all with the current service - this will help to give better insight on whether content design decisions such as ‘child pages’ need to be reworked to better align with content design best practice and principles
14. Operate a reliable service
Decision
The service was rated red for point 14 of the Standard.
During the assessment, we didn’t see evidence of:
- thorough exploration of the impact on, or potential alternative routes into the service for, users in the field whose signal or internet connection is poor or non-existent but who still need urgent access to information or reporting contained within the service
- planning for sustainable ongoing design support rather than all aspects of content, interaction and service design being allocated to just one person
- regional resilience of the service. Given the criticality of the service, the panel discussed availability (see the notes under point 11 on system availability). It is recommended that the team investigate and are confident the system will have high availability commensurate with its importance. It was not clear whether this service falls under Critical National Infrastructure. What is clear is that a crisis being managed may involve physical or cyber impacts in the same areas where the application is hosted, so appropriate and very quickly accessible redundancy should be in place (a minimal failover sketch follows this list).
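As one illustration of quickly accessible redundancy, the sketch below shows DNS failover between two regions using Amazon Route 53 via boto3, assuming an active/passive setup; the hosted zone ID, domain names and endpoints are hypothetical placeholders, and this is only one of several possible approaches.

```python
# Minimal sketch: Route 53 health-check-based failover between two regions.
# The hosted zone ID, domain names and endpoints are illustrative placeholders.
import boto3

route53 = boto3.client("route53")

# Health check that probes the primary region's endpoint frequently,
# so failover can happen within tens of seconds of an outage.
health = route53.create_health_check(
    CallerReference="rd-primary-check-001",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.resilience.example",  # hypothetical
        "ResourcePath": "/healthcheck",
        "RequestInterval": 10,
        "FailureThreshold": 2,
    },
)

def failover_record(set_id, role, target, health_check_id=None):
    """Build a failover CNAME record set; a short TTL keeps failover fast."""
    record = {
        "Name": "app.resilience.example",   # hypothetical service domain
        "Type": "CNAME",
        "SetIdentifier": set_id,
        "Failover": role,                   # "PRIMARY" or "SECONDARY"
        "TTL": 30,
        "ResourceRecords": [{"Value": target}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return record

# Point traffic at the primary region while it is healthy, and fail over to the
# secondary region automatically when the health check reports an outage.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",         # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            {"Action": "UPSERT",
             "ResourceRecordSet": failover_record(
                 "primary", "PRIMARY", "primary.resilience.example",
                 health["HealthCheck"]["Id"])},
            {"Action": "UPSERT",
             "ResourceRecordSet": failover_record(
                 "secondary", "SECONDARY", "secondary.resilience.example")},
        ]
    },
)
```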
Next steps
In order for the service to continue to the next phase of development, it must meet the Standard and get spend approvals. The service must be reassessed against the points of the Standard that are rated red at this assessment.