Development of the patient safety incident management system (DPSIMS) alpha assessment

The report from the alpha assessment for NHS Improvement's development of the patient safety incident management system (DPSIMS) on 27 March 2018.

From: Government Digital Service
Assessment date: 27 March 2018
Stage: Alpha
Result: Met
Service provider: NHS Improvement

The service met the Standard because:

  • They have designed and built a prototype for the reporting service, testing and iterating it throughout the alpha.
  • The team have given a lot of thought to their KPIs and what they will be measuring in beta.
  • The team have conducted a substantial amount of research with users, and clearly showed how the reporting journey has been changed in response to emerging findings.

About the service


The service will enable providers of healthcare, and members of the public, to record details of patient safety incidents for the purposes of learning and improvement.

Service users

The users of this service are front line clinical and non-clinical staff within the healthcare profession.

Eventually this will expand to members of the public.


Detail of the assessment

This assessment report is for the reporting service only.

The Patient Safety Incident Management service will be used to report incidents or situations that could have led to incidents. The users of the service will be nationwide and from multiple professions within healthcare. The team were clear on why they were building the service and what the benefits would be to the NHS and the wider public.

The team have so far treated the reporting of incidents and the reviewing of those incidents as one service; the panel feels it would be better to consider them as two separate services. The panel was impressed with the team’s work on the reporting aspect and feels that the team have met the required standard to move into beta. However, the panel did not feel that the team had given enough thought to, or carried out enough research into, the reviewing aspect for that to move into beta. As such, this report is entirely based on the reporting service the team demonstrated to the panel.

The panel felt that a content designer being involved from the beginning would have enabled the team to get more out of their prototype and the usability testing. Although the journey was coherent, the questions relied on additional hint text which did not take into account the varying experience levels of the users. A content designer should have been able to make the questions easier to understand, even taking into account the need for some medical terminology.

The team indicated that due to the number of different professions using the service, there will need to be a lot of variations to the questions asked during the journey. The panel felt that this could become unmanageable. More research should be carried out with the varying professions to assess common needs and terminology. A content designer should be able to reduce the need for separate questions.

If the service is successful then there will be an increase in the amount of reports that require reviewing. The panel were concerned that the team had not given enough consideration to the business impact this could have. If there is a substantial increase in the number of reports but not enough reviewers to deal with them, this would have a detrimental effect on the overall service and undo all the good work the team have done. The team should begin engaging early to ensure the relevant teams are aware of the expected increase and can begin to prepare.

User needs

The alpha team has taken on an ambitious, wide-ranging service with a range of user groups who have differing needs and priorities, and reported on an extensive programme of user research and workshops. Elsewhere we recommend narrowing the scope by breaking the service into smaller parts.

There are two principal user groups, the frontline staff who log incident reports, and the groups who review, investigate and analyse reports. At times it felt as though the needs of reporters were somewhat lost in the drive to maintain continuity with existing systems and meet incident analysts’ desires. For example, the team have developed 25 different user personas; in the presentation, only one frontline persona was shown, and it would have been useful to see more evidence of the thinking around frontline staff needs.

The team have conducted a substantial amount of research with users, and clearly showed how the reporting journey has been changed in response to emerging findings. A variety of methods was employed (one-to-one research, remote testing and workshops); the panel noted that the team had faced difficulties in researching the service in situ (i.e. in clinical setting). This limitation may result in highly motivated, informed users being somewhat over-represented in research.

The panel recommends conducting more research in clinical settings/frontline settings, and taking care to sample a wide cross-section of NHS settings, job roles and working patterns, with users who have varying access to IT systems.

Although a substantial amount of research has been conducted, the user research assessor was concerned that recommendations for design improvement came across as somewhat blind: there was limited insight into users’ mental models of reporting, or expectations and preferences around incident recording. It appeared that the scenarios used in research were generated by users themselves, based on personal experience; it is recommended that the team also consider generating some simple, realistic incident scenarios (drawn from real-life data) which span a range of complexity and severity. This would allow the team to assess the reporting journey in a controlled way.


Team

The alpha team appears to be a good size and mixture of disciplines. It includes members from NHS Improvement, the Clinical Review team, and NHS Policy areas. However, the alpha would have benefited from having a content designer and an interaction designer in the team from the beginning.

They are using agile methodology for the first time and can clearly show that they understand how it works and what the benefits are. The panel was particularly impressed with the efforts they’ve made to introduce agile to their extended teams and stakeholders, holding an introduction-to-agile session for up to 80 people to share their learnings and gain their support.

The amount of senior engagement was clearly shown throughout the assessment. Due to the availability issues of the intended users, this engagement will be crucial to the success of the project and to the amount of research that can be undertaken.

The panel was pleased with the amount of engagement with senior policy stakeholders. As a lot of the requirements stem from policy and legislation, this engagement will hopefully enable the team to influence future policy changes where appropriate.

The team have also already engaged with external bodies who utilise the Patient Safety Incident data to begin understanding their needs.

The team plan on expanding during the beta to include a content designer and a performance analyst. The panel recommends that they include an interaction designer in their beta team.


Technology

The team has been using the GOV.UK Prototype Kit in Node.js for the alpha prototype. The choice of technology is heavily influenced by NHS Improvement's in-house architecture and development ecosystem, which is based around Microsoft software and Azure. It is reassuring to hear that the team have adopted some open source technology, such as spaCy for natural language processing in Python.
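As an illustration of the kind of free-text processing spaCy enables, the sketch below tokenises an incident description and pulls out its content words. This is a minimal, hypothetical example; the team's actual pipeline was not shown at assessment, and a trained model (for example en_core_web_sm) would add entity recognition and tagging on top of the blank pipeline used here.

```python
import spacy

# A blank English pipeline provides tokenisation and lexical attributes
# (stop words, alphabetic check) without downloading a trained model.
# Illustrative assumption: the service's real pipeline is richer.
nlp = spacy.blank("en")

def extract_keywords(description):
    """Return lower-cased content words from an incident description."""
    doc = nlp(description)
    return [t.text.lower() for t in doc if t.is_alpha and not t.is_stop]

print(extract_keywords("The patient fell while transferring from the bed."))
```

A sketch like this hints at how reports could be categorised or clustered for reviewers, even when reporters use varied terminology.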

The team is planning on open sourcing the source code for PSIMS in private beta (excluding any sensitive data and configuration details). The code will be publicly accessible in a GitHub repository and released periodically.

There is a good integration, testing and deployment pipeline in place, and the team are using industry-standard tools such as NUnit and Selenium. The team is using Azure Application Insights for monitoring the platform.

The panel encourages the team to identify fraud vectors and security threats to the service and come up with a mitigation plan where necessary.

The team is considering the use of GOV.UK Notify for communicating with public users. GOV.UK Verify was initially considered, but the team opted for Okta due to the large number of existing registered users currently on that platform.

The team showed a good understanding of the maximum outage time and its impact on its users. The team had a general idea of the service’s working hours and of alternatives if an outage exceeds the critical threshold.

The team are using the volumetric model from the existing service. The team demonstrated they had a plan to handle unanticipated traffic and are able to scale to the demand.

The PSIMS service will eventually replace the legacy NRLS system. The service is looking to integrate with multiple Local Risk Management Systems in private beta. The panel encourages the team to continue working with the software vendors to integrate with their LRMS and identify any support needs. The panel also encourages the team to come up with a transition plan to support the decommissioning process of the legacy systems.


Design

The team demonstrated the journey for an internal user reporting an incident. The journey had been iterated and updated based on usability testing and other research. The journey for reporting an incident seemed coherent, though it could likely be simplified and improved over the course of beta. The initial journey would greatly benefit from being separated from the wider service of viewing incident reports. The panel were impressed by the team’s ability to speak to iteration and the reasons they’d tried different approaches.

The service needs to cope with a variety of users completing reports with varying levels of detail. Some reporters will know very precise details of the incident, whilst others will only know general information. Whilst the initial target user base is medical professionals, the later ambition is that members of the public will be able to report too - so the service will need to deal with users who are unfamiliar with technical medical terms. The alpha team would have benefited from an interaction designer who could consider how to support both types of user, and the challenge of accepting both ambiguous and specific details.

Having an interaction designer on staff will also help as the team prepares to support incident reports for multiple people and to update reports after they’ve been submitted.

The panel recommends reviewing the pages that use multi-selects and autocompletes to collect categories. The components used likely have accessibility issues, and as the users of the system may not know the categories, the pages would likely be better as a series of checkboxes or radio buttons. The panel understands the team was concerned about the number of options, but this emphasises that the options need to be chosen carefully, so that there are not so many that users are unable to pick correctly.

At times the service relies heavily on hint text or caveated questions. It is essential that the service adds a content designer to the team and reviews all content used. The team should use plain English where possible, and avoid technical terms where they are not needed.

The panel believes that much of the hint text could be reduced or removed with careful rewording of the questions. Simplifying the questions would also help prepare the service for the eventual use by members of the public. The panel recommends avoiding hint text where possible - and much of that used in the service could likely be body copy to improve readability.

The team mentioned that in research some users needed additional help or had specific questions. The panel recommends the team consider the use of expanding hidden text for cases where most users don’t need additional help, but some do.

The panel had some concerns about the single free-text box for incident description. The team suggested they were considering splitting this into several more structured free-text boxes, which the panel fully supports.
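To illustrate what splitting the single free-text box might look like as a data model, the sketch below uses hypothetical field names; these are illustrative assumptions only, not the service's actual schema or the structure the team proposed.

```python
from dataclasses import dataclass

# Hypothetical sketch of "several more structured free-text boxes".
# Field names are illustrative assumptions, not the real data model.
@dataclass
class IncidentDescription:
    what_happened: str = ""
    immediate_actions_taken: str = ""
    contributing_factors: str = ""

    def is_complete(self) -> bool:
        # Only the main narrative is treated as mandatory here; the
        # other prompts help less experienced reporters structure
        # their account without blocking submission.
        return bool(self.what_happened.strip())

report = IncidentDescription(what_happened="Patient given incorrect dose.")
print(report.is_complete())  # True
```

Separate prompts like these could guide reporters towards the detail reviewers need, while keeping each box free-text rather than forcing rigid categories.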

The service doesn’t currently provide any kind of receipt or notification that an incident report has been submitted. The panel recommend the team does research into this area to find out what user needs exist for receipts - something that might be particularly useful for the anonymous submission route.

The service is NHS branded rather than GOV.UK. The team has yet to engage with the wider NHS community to agree styling. The panel recommend the team engage with other NHS digital services to ensure there is a coordinated approach to design across NHS services.


Analytics

The panel were impressed with the amount of thought that the team had put into their KPIs. The team are planning on tracking cultural changes through the expected increase in new reporters using the service to report Patient Safety Incidents. They are also expecting to be able to see the impact of their service through a decrease in the levels of harm being reported.

The team are also hoping to see an improvement in the quality of the data coming in from the LRMS. The plan is for the LRMS to alter their taxonomy to be more aligned with each other to make national statistics easier to collect and more reliable. The panel thought this was a great idea but are unclear on what the incentive will be to the providers of the LRMS to make these changes and who will potentially be covering the costs.

While the team know which KPIs are relevant, it would be good if they began having conversations about what success will look like. Indicative benchmarking early will enable them to have a better idea of what targets they should be aiming for once in beta. This will also help them with stakeholder engagement going forward.

The team are expanding in beta to include a Performance Analyst.


Recommendations

To pass the [next assessment / reassessment], the service team must:

  • Split the Patient Safety Incident Management service into at least two services: one focusing on the reporting of incidents, and another enabling the reviewing of these reports. The data sharing aspect could also be a separate service.
  • Have a content designer and an interaction designer on the service for at least 3 days per week.
  • Conduct more one-to-one research in clinical/frontline settings, taking care to sample a cross-section of NHS settings, job roles and working patterns.

The service team should also:

  • For reporting incidents, ensure there is greater focus on the needs of reporters as well as on the needs of reviewers, investigators and analysts.
  • Review all content and questions in the service, simplifying where possible and reducing the reliance on hint text.
  • Review the use of multi selects and autocompletes. If they continue to be used, provide evidence that they work better for users than other solutions.
  • Run workshops with the frontline staff who will be using the service, rather than with the incident reviewers.
  • Identify fraud vectors and security threats to the service and come up with a mitigation plan where necessary.
  • Create a high level transition plan to support the parallel running and finally the decommissioning process of the legacy systems.

Next steps

You should follow the recommendations made in this report before arranging your next assessment.

Get advice and guidance

The team can get advice and guidance on the next stage of development by:

Digital Service Standard points

Point Description Result
1 Understanding user needs Met
2 Improving the service based on user research and usability testing Met
3 Having a sustainable, multidisciplinary team in place Met
4 Building using agile, iterative and user-centred methods Met
5 Iterating and improving the service on a frequent basis Met
6 Evaluating tools, systems, and ways of procuring them Met
7 Managing data, security level, legal responsibilities, privacy issues and risks Met
8 Making code available as open source Met
9 Using open standards and common government platforms Met
10 Testing the end-to-end service, and browser and device testing Met
11 Planning for the service being taken temporarily offline Met
12 Creating a simple and intuitive service Met
13 Ensuring consistency with the design and style of GOV.UK N/A
14 Encouraging digital take-up Met
15 Using analytics tools to collect and act on performance data N/A
16 Defining KPIs and establishing performance benchmarks Met
17 Reporting performance data on the Performance Platform N/A
18 Testing the service with the minister responsible for it N/A
Published 6 August 2018