Integrated Data Service (ONS)
Service Standard report for ONS's Integrated Data Service beta reassessment
Service Standard assessment report
Integrated Data Service
From: Assurance team
Assessment date: 18/03/2024
Stage: Beta Reassessment
Result: Amber
Service provider: Office for National Statistics
- JISC approval confirming acceptance by the Naming and Approval Committee for the service’s domain (Domain Registry Service: Request for integrateddataservice.gov.uk – 107015)
Service description
This service replaces the Secure Research Service (SRS) with a new Integrated Data Service (IDS). The IDS is intended to provide a single unified facility from which accredited analysts can access data to support research and decision-making.
To use the IDS, the user needs to be an accredited researcher. To achieve this accreditation, the user needs to register using the ONS Research Accreditation Service (RAS): https://researchaccreditationservice.ons.gov.uk/ons/ONS_Registration.ofml
Service users
This service is for:
- government data analysts
- government data scientists
To access data in the Integrated Data Service (IDS), you must be an accredited researcher.
Things the service team have done well:
On user research and design standards
- the team demonstrated how they’re tracking outcomes and impacts using an impact tracker so they understand how data analysts’ outputs are disseminated and used by decision makers.
- the team demonstrated an understanding of the barriers to data sharing by organisations. They are overcoming these barriers through proactive engagement and by building reputational trust as a benchmark initiative. They provided examples of concerns and described a robust process for managing relationships, which was evidenced in the ‘Data provider to be’ blueprint. They are aware of the pain points still to address, including human factors, and have a plan for how they’ll look to resolve these.
- the team described how they’ve developed guidance for users to refer to if they have an issue when they aren’t logged into the service. They’re also developing a more structured Contact the service team process, shifting from email to a categorised form. They are developing frontline support based on analytics about technical problems users encounter.
- the team shared their user research and feedback plan, which covers user research and testing plans for issues they’ve identified at key points in the end to end user journey. The plan’s timeline runs from the current period to June 2025.
- the team shared some user research findings on the tactical version of the IDS Data Catalogue, which outline their methods and test sample. They have themed the findings and have described how these insights will be used to inform design iteration.
- the team described the patterns and principles they’ve taken from the GOV.UK and ONS Design Systems. They described the limitations of using third party platforms, such as Google BigQuery. They raise tickets about problems with third party systems, particularly in relation to accessibility but also usability. They described using test environments to host dummy data for usability testing.
- the team described design exploration they’ve carried out around in-context help associated with data fields, and developing a robust rationale for the content they include at different points in the journey, including what information they request from users.
- the team has used the DAC accessibility reports to guide and prioritise the things they need to fix or improve in the different elements that make up the service. They’ve focused initially on the ‘as is’ RAS and Hub accessibility fails.
- the team is using ONS Design System pre-tested components where possible to ensure they’re meeting accessibility requirements.
On performance standards
- the team has established a clear set of KPIs and defined what good looks like for each KPI by defining critical success factors.
- the team provided visibility of how performance data is being used to support user research.
- the team has provided details of data sources and frequency of collection for the identified key performance indicators.
- dashboards are in place for tracking and forecasting purposes. The team have been able to add measures that are useful for tracking progress, such as the total number of data sets available and the number of data sets by provider.
- the team provided evidence of tracking user activity via the Google Cloud Platform, including notebook interactions by time and unique researchers by organisation.
- the team evidenced the use of MI for service improvement and provided good examples of issues identified via MI and the outcomes of this. For example, the user experience survey and MI from RAS identified issues with slower than expected turnaround times and a lack of progress updates. Improvements have been made to picklists and data validation, and automated emails are now sent at important milestones. This addressed the recommendation to show how findings are being used to iterate on the service.
On technology standards
- good use of Cloudflare’s Browser Isolation to increase data protection.
- data governance processes and data anonymisation techniques that have been developed over time by ONS have been successfully integrated into the IDS service.
On Agile delivery and the team
- the team have worked to bring the right skill mix into the core team and, where there are gaps, have utilised the wider ONS programme to support them.
- the team works with the central programme team on the backlog and conducts weekly delivery checkpoints, making sure that communications are clear and timely, and escalations for governance and reporting are in place.
- the team are continuing to access SME, operational and programme support from ONS as the service develops.
- the team continue to work together to create user guides and are now providing 1st line and 2nd line service support to handle user queries and support calls.
1. Understand users and their needs
This was not in the reassessment as previously met.
2. Solve a whole problem for users
Decision
The service was rated amber for point 2 of the Standard.
During the assessment, we didn’t see evidence of:
- a ‘to be’ data ecosystem that includes decision makers’ route to the information produced as an output of access to data in the IDS.
- a ‘to be’ data ecosystem of organisations (government and others) that will provide data to IDS and what type of data they’ll provide, although this was described at a high level.
- a list of which users need which data sets and why, to demonstrate their dataset rationale and strategy.
- experiments and/or proof-of-concept tests to confirm the extent that the service will give decision-makers the insights they need.
- the team must use the needs and key scenarios identified in their user research to demonstrate that the service is viable in the context of decision makers and suitable for their needs.
3. Provide a joined-up experience across all channels
Decision
The service was rated amber for point 3 of the Standard.
During the assessment, we didn’t see evidence of:
- end-to-end usability testing. They haven’t prototyped and tested all parts of the journey and weren’t able to demonstrate how they’d used the findings to iterate.
- the team must prototype and test content and design concepts at service touchpoints across the whole journey, not only the interactions within the IDS website or the data catalogue. They have shared a prototype to demonstrate their progress towards the concept they described of the IDS Hub as a single point for users to access information, tools and data outputs. They should provide evidence of the research and usability testing they are doing to show their concept aligns with the steps in their users’ end to end journey.
- how they’ve made iterations to the start points of the service in line with user research and usability testing. The team described their current process of ‘handholding’ users with accessing the website. While there are instructions on the IDS website about the steps you need to go through, and the RAS step prior to using IDS, the team didn’t demonstrate that they’ve designed, tested and iterated these steps and instructions in line with any research findings.
- a consistent user experience across the user journey. The current tactical solution uses some patterns from the GOV.UK and ONS Design Systems, but the journey between platforms has resulted in a fragmented user experience and complex navigation/wayfinding. As the team continues to design and test the ‘to be’ IDS Hub concept, they should ensure GOV.UK Design System patterns are fully exploited. This will help to create an experience that feels reliable and trustworthy. For example:
- following GOV.UK Frontend styles. GOV.UK Frontend includes lots of components that are worth using in your service, even if parts of it look nothing like GOV.UK (a minimal sketch of wiring these components up follows this list)
- ensure the user experience is accessible to users with disabilities by using design principles that mean all users, regardless of their abilities or the devices they use, can interact with your service effectively
- responsive design to create layouts that automatically adjust
- consistent branding and design elements throughout the journey
- optimising loading times to ensure good user experience
- design for ease of interaction - make sure buttons and interactive elements are large enough to be easily tapped with fingers.
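As an illustration only, and not something evidenced at the assessment, adopting GOV.UK Frontend in a service front end typically means importing the published package and initialising its component JavaScript once per page. The sketch below assumes the govuk-frontend npm package and TypeScript; it is a minimal example, not the team’s implementation.

```typescript
// Minimal sketch, assuming the govuk-frontend npm package is installed and the
// page markup uses GOV.UK Frontend component classes (e.g. govuk-button).
// Illustrative only; not the team's implementation.
import { initAll } from 'govuk-frontend';

// Wires up the JavaScript behaviour for any GOV.UK Frontend components found
// in the rendered page (error summaries, accordions, character counts, etc.).
initAll();
```

Using the published components, rather than restyled custom ones, helps keep focus states, error handling and interaction patterns consistent with other government services.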
4. Make the service simple to use
Decision
The service was rated amber for point 4 of the Standard.
During the assessment, we didn’t see evidence of:
- rationale and findings to support deviations from GOV.UK and ONS patterns, where platform constraints have meant they cannot be easily applied.
- while they shared user research findings from usability testing the data catalogue with a small sample of users, they did not demonstrate the iterations they’ve made or are planning to make based on what they’ve learned.
- what they’ve learned about usability pain points across the current journey in terms of consistency and clarity of navigation between platforms. The team have a plan to improve usability by bringing the existing steps into a single Hub. They should be able to share evidence of what they know from the usability testing they’ve done already around users navigating between different platforms, as even when consolidated into a single hub some of the routes will remain the same (e.g. from the IDS Hub to BigQuery Studio, to GitHub, etc.). This feels like an important area to understand, particularly when considering ease of use for new users, who will be relying on guidance and instructions at the point of need.
- usability testing on different devices and browsers. The team must demonstrate they understand how their users access the service (desktop, mobile, tablet) so they are designing content, interactions, navigation and handoffs between platforms with this in mind.
- an end-to-end prototype that has been tested not only with their main user groups (data analysts and data providers), but also with their product, policy and admin users. The team must test the service across the different touchpoints in the journey, covering a range of data provider and accessor scenarios.
5. Make sure everyone can use the service
Decision
The service was rated amber for point 5 of the Standard.
During the assessment, we didn’t see evidence of:
- the team addressing accessibility issues with the third-party platforms that are used within the end-to-end service journey.
- the team have advised that a new solution is being delivered, and this is to be shared with the panel as it is developed.
- testing with users who have a broad range of accessibility needs, and feeding back issues to third party providers to fix. In addition, the GDS accessibility personas can be used for a cognitive walkthrough of the service (see Using persona profiles to test accessibility: https://accessibility.blog.gov.uk/2019/02/11/using-persona-profiles-to-test-accessibility/). It’s positive that the team are using Google’s Accessibility Conformance Reports to learn about how their products work for people with disabilities.
- testing for common accessibility problems via usability testing, recruiting users who have a range of access needs to build the accessibility strategy and complement the feedback and support channels to highlight issues (see https://www.gov.uk/service-manual/helping-people-to-use-your-service/testing-for-accessibility, and the automated-check sketch after this list).
- consider adding events tracking and analyse frequency of error states and drop-off points, as these could indicate usability and accessibility issues to investigate in addition to the high levels of support provided to IDS users.
- end to end usability testing to identify accessibility blockers or pain points, and how they’ve iterated to resolve any issues identified. For example:
- data being clearly labelled (e.g. active titles that explain what the data shows)
- (perceivable) making sure there’s sufficient contrast between the graph colours so they can be seen by people with colour blindness
- (operable) making sure data can be navigated by people who only use a keyboard rather than a mouse, or who use screen readers (e.g. complementing graphs with data in tables for screen reader users, including when compressed to mobile devices)
- (understandable) using plain English, or identifying errors and helping people correct them if any service touchpoints involve user inputs
- (robust) being compatible with assistive technologies and using mark-up correctly.
- the team must be clear in their accessibility statement about the plans they have to improve accessibility of the service, and make sure their statement includes how to report problems
- the team should explore best practices for making data accessible to help guide their approach. Some example resources:
- https://dataingovernment.blog.gov.uk/
- https://dataviztraining.dwpdata.info/index.html
- https://analysisfunction.civilservice.gov.uk/policy-store/writing-about-statistics-2/
- account teams have been put in place to deal with user support issues through the end-to-end service journey.
- the team should now explore how users journey through the content touchpoints to make sure they’ve captured how users accomplish goals and what support they need during the journey.
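As a hedged illustration, automated checks can complement (not replace) usability testing with users who have access needs. The sketch below assumes the axe-core npm package and TypeScript; the rule tags and logging shown are assumptions for illustration, not the team’s existing tooling.

```typescript
// Minimal sketch, not the team's test suite: an automated accessibility scan
// using axe-core (default import assumes esModuleInterop). Automated tools
// catch only a subset of WCAG issues, so this complements testing with users
// who have a range of access needs.
import axe from 'axe-core';

async function runAccessibilityScan(): Promise<void> {
  // Analyse the current document against WCAG 2.1 A/AA rule tags.
  const results = await axe.run(document, {
    runOnly: { type: 'tag', values: ['wcag2a', 'wcag2aa', 'wcag21aa'] },
  });

  // Log each violation with the affected elements, so issues traced to
  // third-party components can be raised with the relevant provider.
  for (const violation of results.violations) {
    console.log(`${violation.id} (${violation.impact ?? 'unknown'}): ${violation.help}`);
    for (const node of violation.nodes) {
      console.log(`  affects: ${node.target.join(', ')}`);
    }
  }
}

runAccessibilityScan();
```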
6. Have a multidisciplinary team
Previously met so not reassessed.
7. Use agile ways of working
Decision
The service was rated amber for point 7 of the Standard.
During the assessment, we didn’t see evidence of:
- how agile ways of working are being iterated, and how the team is being coached to improve collaborative working, including treating user research as a whole-team activity.
- how the team has maintained involvement in user testing and iterated on designs to meet new user needs as they emerge.
8. Iterate and improve frequently
Decision
The service was rated amber for point 8 of the Standard.
During the assessment, we didn’t see evidence of:
- iterating on designs with user feedback and demonstrating that the team is responding to changing user needs.
9. Create a secure service which protects users’ privacy
Decision
The service was rated green for point 9 of the Standard.
10. Define what success looks like and publish performance data
Decision
The service has moved to amber for point 10 of the Standard.
During the assessment, we didn’t see evidence of:
- using digital analytics (Google Analytics) to measure how users interact with web-based touch points (for example link clicks, radio buttons and validation errors); an illustrative event-tracking sketch follows this list
- using digital analytics will also enable tracking of measures such as form completion rate
- by using Google Analytics and Tag Manager to measure the web-based touch points, the team will be better placed to identify pain points via drop-off points within journeys, by tracking measures such as exits and error counts
- the MI is in a strong place, but combining it with digital analytics will provide a stronger picture of the user journey across web-based touch points when moving into beta
- when in beta, the team must use Google Analytics to drive iterative improvements to the web-based touch points of the service and use data to assist in driving decisions around iteration
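As an illustrative sketch only, the snippet below shows how web-based touch points could send custom events to Google Analytics 4 via gtag.js so that validation errors and form completions become measurable. The event and parameter names are hypothetical placeholders, not ones the team has defined.

```typescript
// Minimal sketch assuming GA4 (gtag.js) is already loaded on the page.
// Event and parameter names below are illustrative placeholders.
declare function gtag(
  command: 'event',
  eventName: string,
  params?: Record<string, unknown>
): void;

// Record a validation error so error counts and drop-off points can be
// analysed alongside the existing MI.
function trackValidationError(fieldName: string, message: string): void {
  gtag('event', 'validation_error', {
    field_name: fieldName,
    error_message: message,
  });
}

// Record a successful form submission, enabling a form completion rate
// (completions divided by form starts) to be reported.
function trackFormCompleted(formName: string): void {
  gtag('event', 'form_completed', { form_name: formName });
}

// Hypothetical usage on a registration page.
trackValidationError('organisation-name', 'Enter your organisation name');
trackFormCompleted('registration-form');
```

Events like these, surfaced through Tag Manager or GA4 reports, would make drop-off points and error counts visible alongside the existing MI.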
11. Choose the right tools and technology
This was not in the reassessment as previously met.
12. Make new source code open
This was not in the reassessment as previously met.
13. Use and contribute to open standards, common components and patterns
This was not in the reassessment as previously met.
14. Operate a reliable service
This was not in the reassessment as previously met.
Next Steps
Amber
‘Amber’ means that the standard has not yet been met, but the issues are not critical. The service can continue to the next phase of delivery while you work on the amber issues.
You need to address the amber issues within a reasonable period of time, usually up to 3 months. You must record your progress in a live ‘tracking amber evidence’ document which will be visible to the assessments team.