© Crown copyright 2018
This publication is licensed under the terms of the Open Government Licence v3.0 except where otherwise stated. To view this licence, visit nationalarchives.gov.uk/doc/open-government-licence/version/3 or write to the Information Policy Team, The National Archives, Kew, London TW9 4DU, or email: firstname.lastname@example.org.
Where we have identified any third party copyright information you will need to obtain permission from the copyright holders concerned.
This publication is available at https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology
Today we have some truly remarkable data-driven innovations: apps, clinical decision support tools powered by intelligent algorithms, and the widespread adoption of electronic health records. In parallel, we are seeing advances in technology and, in particular, artificial intelligence (AI) techniques. AI is being applied to this data to generate novel insights, to improve operational efficiency, and to drive machine learning algorithms and clinical decision support tools that provide better and safer care.
This presents a great opportunity, but these techniques are reliant on the use of data that the NHS and central government have strong duties to steward responsibly. Data-driven technologies must be harnessed in a safe, evidenced and transparent way. We must engage with patients and the public on how to do this in a way that maintains trust.
Our responsibility as an internationally trusted health and social care system is to use all the tools at our disposal to improve the quality and safety of care. We are working to build new ways to support the development of new data-driven innovations through programmes such as Local Health and Care Record Exemplars, Digital Innovation Hubs and NHS Test Beds. At the same time, we have collated a set of principles outlined in this code of conduct to ensure that this opportunity can be combined with responsibility, to result in partnerships that deliver benefits to patients, clinicians, industry and the health and care system as a whole.
To achieve these ends, the code will:
- provide clarification of:
- what we expect from suppliers of data-driven technologies
- what the government will do to support and encourage innovators in health and care, including the development of trusted approval systems and a coherent pathway for suppliers to enter the market
- provide the basis for ongoing engagement and conversation on how we should use new technology to provide better and more sustainable services, with:
- our partners in academia, industry and the health and care system
- the wider public
- provide the basis for the health and care system and suppliers of digital technology to enter into commercial terms in which the benefits of the partnerships between technology companies and health and care providers are shared fairly
We are launching the code in an initial form and encourage and welcome comment and feedback to improve and strengthen its contents. We also encourage its use – it provides principles that we want our suppliers to live by and that those involved in the commissioning and procurement of innovative, digital technologies and services should look for. Above all, it seeks to provide a basis to deepen the trust between citizen, clinician and the wider health and care system. We should not wait to take this opportunity.
Please note that this code does not replace or change any existing regulatory requirements. We expect developers of data-driven technologies to work closely with health and care providers to ensure the safe implementation of technology and manage risks to the safety and quality of care in accordance with the expectations set out by the health and care system regulators. Health and care data must be used in accordance with the common law duty of confidence.
The way ahead
As of now, the code is voluntary, but any organisation that wishes to sign up to the code immediately can, and is encouraged to. We want to ensure that we give our stakeholders the opportunity to co-design the final version with us. We want the code to be something that technology providers want to sign up to, to demonstrate their world-leading approach.
Alongside this, we are conducting a formal review of the regulatory framework and assessing commercial models used in technology partnerships. This is to ensure that issues including, but not limited to, bias, transparency, liability and accountability are appropriately accounted for.
After the conclusion of these exercises, we aim to have identified how to constructively enforce these standards, as well as provide commercial support for trusts that wish to enter into partnerships with industry. We have therefore published the code online with the ability for you to feed back to us via a questionnaire. We will republish this document in December, at which point the code will become a collaboratively agreed standard for technology partnerships.
We are considering how best to develop the code over time, including the setting up of a partnership support service and development of a Kitemark scheme for the code, underpinned by a robust application and evaluation process. We welcome comment from all stakeholders on these ideas.
The code outlines 10 key principles for safe and effective digital innovations, and 5 commitments from the government to ensure that the health and care system is ready and able to adopt new and innovative technology at scale. These are based on a significant amount of engagement with industry, research and academia, and are designed to create a trusted environment for data-driven technologies.
This is the initial version of the code. We want to work openly and in partnership to develop it, so that by following the principles it outlines, we in the health and care sector can collectively:
- help build a safe ecosystem for digital health and intelligent algorithms
- demonstrate that we have appropriate safeguards
- raise standards by establishing best practice
- show evidence of transparency and accountability
- create a competitive advantage for the NHS and UK health and care market
- build partnerships between health and care providers, their patients, service users and staff, industry and academia to ensure that all benefit from the enormous potential of digital innovation to improve lives and grow the economy
The code aims to meet the most important need for each of 4 key groups of stakeholders:
- patients and citizens will experience improved care through technology
- health and care professionals will be able to deliver better care
- commissioners will commission services that better meet the needs of their population
- innovators will choose the UK as a great place to do business in technology
The Department for Digital, Culture, Media and Sport has published the Data Ethics Framework, which guides the design of appropriate data use in government and the wider public sector. It is aimed at anyone working directly or indirectly with data in the public sector.
This code of conduct is designed to be complementary to and build on the Data Ethics Framework to set out what central government expects from industry engaging with the NHS and the health and care system, and how we want the health system to engage in return.
Several issues have emerged around AI and other sophisticated algorithmic tools (and which are applicable to a range of digital technologies) from early deployment in health and care, and from experience in other sectors that are further ahead. These affect commissioners, health and care providers and suppliers to varying degrees. We are developing the code of conduct to clarify what central government is doing to help navigate and manage these issues, what we expect of technology suppliers and how we will manage liability and accountability in a changing world.
Commercial arrangements for partnerships developing, testing and implementing new technologies need to be mutually beneficial. This is complicated and requires consideration of a wide range of factors, listed below. The aim of this code is to provide guidance on how to think about these factors critically and on the impact that any commercial arrangements you enter into will have on the NHS and the health and care system, including:
- value added for health and/or social care
- scope, terms and value of arrangements
- compliance with laws and rules, for example around data
- ownership of intellectual property
- transparency, quality assurance and audit
- liability and accountability
Market, procurement, contracting and commissioning
Health and care contracting, procurement and commissioning processes are not always well-suited to the adoption of rapidly developing and evolving innovation:
- multiple regulators mean that the pathway for products across their lifecycle is unclear. There are overlaps and some products may need to be registered with multiple regulators – there is a need for clarity as to where this is the case and why
- the large number of health and care providers, particularly when considering primary care and social care, results in a fragmented market
- providers are not always equipped to manage the change associated with innovation
- support for innovators is unbalanced, with a heavy focus on research, development and testing but little or no support for scaling up successful products and businesses
- as a result, suppliers may not receive a coherent demand signal. This could make it difficult for UK-based suppliers to secure private investment
Need for data
When collected and used properly, data relevant to people’s health and care has the potential to be transformative. Sharing data offers immense promise for improving the NHS and the social care system, as well as benefiting individuals through unlocking new treatments and medical breakthroughs. Getting information-sharing right can create a better experience for people using services and make care more efficient.
This government is committed to championing the safe use of data, enabling data to flow in a lawful, secure and appropriate way to improve outcomes for patients. There are several important safeguards in place, including the National Data Guardian’s 10 data security standards and the national data opt-out, which provides an opportunity for people to opt out of their data being used for purposes beyond their individual care. We have been clear that data must be shared across the system in a safe, secure and legal way, and that, where possible, data should be anonymised.
AI and other complex algorithms require large quantities of data to be able to function. This leads to 2 central challenges:
- from a technical standpoint, data must be defined and structured in accordance with agreed interoperable standards wherever possible
- from an ethical and legal standpoint, people must be able to trust that data is used appropriately, safely and securely – for example, by exercising choice about whether their confidential patient information is shared – and must understand the consequences of their choice
People want to know that their privacy and rights are safeguarded and to understand how and when data about them is shared, so that they can feel reassured that their data is being used for public good, fairly and equitably.
Our ability to unlock the benefits of data-sharing relies on the public having confidence in the health and care system’s appropriate and effective use of data.
To ensure that the code can deliver the outcomes we have set out, we need to provide:
- a clear pathway for market entry and for wide adoption of successful innovations to support growth of UK SMEs so that all those developing, testing and implementing data-driven technologies have an opportunity to deliver benefits for patients, clinicians and the health and care system
- a clear regulatory regime to support this
- a level of trusted approval – whether suppliers can demonstrate that they meet the requirements of the code
- understanding of what evidence is required for particular applications of particular technologies, so that we can ensure that regulation and approvals are proportionate
- an agreed standard that complements others already in existence and supports the creation of an environment that is driven by ethics and the ambition to create societal value
Success in this will require some longer-term work to be started, including:
- ensuring regulation is fit for purpose
- ensuring that commissioners can assess which technologies can best meet the needs of their populations
- ensuring that our procurement frameworks enable providers to adopt the right innovations quickly so that patients and service users can benefit
- ensuring that there is an open market in which all providers of innovative data-driven technologies have an equal opportunity to thrive
1 Define the user
Understand and show who specifically your product is for, what problem you are trying to solve for them, what benefits they can expect and, if relevant, how AI can offer a more efficient or quality-based solution. Ensure you have done adequate background research into the nature of their needs, including any co-morbidities and socio-cultural influences that might affect uptake, adoption and ongoing use.
2 Define the value proposition
Show why the innovation or technology has been developed or is in development, with a clear business case highlighting outputs, outcomes, benefits and performance indicators, and how exactly the product will result in better provision and/or outcomes for people and the health and care system.
3 Be fair, transparent and accountable about what data you are using
Show you have utilised privacy-by-design principles with data-sharing agreements, data flow maps and data protection impact assessments. Ensure all aspects of GDPR have been considered (legal basis for processing, information asset ownership, system level security policy, duty of transparency notice, unified register of assets completion and data privacy impact assessments).
4 Use data that is proportionate to the identified user need (data minimisation principle of GDPR)
Show that you have used the minimum personal data necessary to achieve the desired outcomes of the user need identified in 1.
5 Make use of open standards
Utilise and build into your product or innovation, current data and interoperability standards to ensure you can communicate easily with existing national systems. Programmatically build data quality evaluation into AI development so that harm does not occur if poor data quality creeps in.
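For illustration, an interoperable exchange built on the FHIR open standard represents each resource as plain JSON with a declared resource type. The minimal Patient resource below is a hedged sketch only: the identifier system URI and all values are illustrative, and a real implementation should follow the published NHS FHIR profiles.

```python
import json

# A minimal FHIR-style Patient resource (illustrative values only).
patient = {
    "resourceType": "Patient",
    "id": "example",
    "identifier": [{
        "system": "https://fhir.nhs.uk/Id/nhs-number",  # NHS number naming system
        "value": "9000000009",                          # illustrative value
    }],
    "name": [{"family": "Smith", "given": ["Jane"]}],
    "birthDate": "1970-01-01",
}

# Exchanging the resource as JSON keeps it readable by any
# standards-conformant system.
payload = json.dumps(patient)
```

Because the structure follows a published open standard rather than a proprietary schema, any conformant national system can parse and validate the same payload.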
6 Be transparent about the limitations of the data used and algorithms deployed
Show you understand the quality of the data and have considered its limitations when assessing whether it is appropriate to use for the defined user need. When building an algorithm, be clear about its strengths and limitations, and be transparent about whether it is your training or your deployment algorithms that you have published.
7 Make security integral to the design
Keep systems safe by integrating appropriate levels of security and safeguarding data.
8 Define the commercial strategy
Purchasing strategies should show consideration of commercial and technology aspects and contractual limitations. You should only enter into commercial terms in which the benefits of the partnerships between technology companies and health and care providers are shared fairly.
9 Show evidence of effectiveness for the intended use
You should provide evidence of how effective your product or innovation is for its intended use. If you are unable to show evidence, you should draw up a plan that addresses the minimum required level of evidence, given the functions performed by your technology.
10 Show what type of algorithm you are building, the evidence base for choosing that algorithm, how you plan to monitor its performance on an ongoing basis and how you are validating its performance.
Show the learning methodology of the algorithm you are building, if it falls into a higher-risk categorisation as shown in principle 9. Aim to show in a clear and transparent way how you are validating the outcomes.
As the government we are committed to doing the following:
1 Simplifying the regulatory and funding landscape
We are reviewing the existing health and care regulatory landscape for data-driven technologies to ensure that it neither stifles innovation nor risks patient safety.
We are making it easier for the market to provide innovative solutions to the demands of the health and care system by creating a simplified and streamlined means of engagement.
2 Creating an environment that enables experimentation
Through initiatives, including the Local Health and Care Record Exemplars and Digital Innovation Hubs, we are improving safe and secure access to NHS data for research and development purposes, and creating environments to enable the analysis and sharing of clinical, genomic, biological and other multi-dimensional data.
3 Encouraging the system to adopt innovation
We are reviewing our contracting, procurement and commercial arrangements with technology partners to ensure that all parties are fairly compensated for the time and resource inputs and outputs in existing and future partnerships.
Through the Topol review, we are evaluating the impact of future technologies on the training requirements for the health and care workforce to ensure that individuals are properly equipped to act as intelligent and willing customers for technology and innovation.
We are working with the NHS to build on progress made by the approvals process created for the NHS Apps Library to create a trusted approval scheme Kitemark for digital health and care products, so that commissioners and those making procurement decisions can do so in an informed way.
This is to ensure that where innovation is shown to deliver genuine benefits to people and the health and care system, it is given the best opportunity to do so.
4 Improving interoperability and openness
We are working with citizens, industry, healthcare professionals and the research community to enable innovators to develop fully interoperable products through the establishment of clear, open and public data standards and application programming interfaces (APIs) that also give people more control of their own data and the ability to engage more proactively in managing their health.
5 Listening to our users
We are opening up for discussion our principles, commitments and plans for delivering this. This is the beginning of a new era for the health and care system and we want there to be an open, honest and joined-up conversation with the widest possible audience about the benefits of data and the role of data-driven technology in the system.
Principle 1: Define the user
Understanding the precise user you are targeting and their specific needs will help with uptake and adoption of the technology or innovation being built, as well as clearly showing a commissioner/buyer the problem being solved.
If you are conducting healthcare research, either to develop a proof of concept or eventually to submit an application for a Health Technology Assessment, the user need must conform to the UK Policy Framework for Health and Social Care Research. You may need to seek NHS approval to carry out this research, which can be facilitated through the Integrated Research Application System.
If you are unsure what user need is being solved, contact the NHS England strategy research team to see what user needs have been identified in line with national health strategies. If you are unsure how to carry out good user research, follow the Government Digital Service manual on user research and the NHS Digital design service manual.
Principle 2: Define the value proposition
Demonstrate how and where the product will add value to people and the health and care system. This will help in the uptake and commissioning phase of the product. Clearly define KPIs and where the product will result in better provision and/or outcomes for people, in addition to outlining where and how cost savings or reductions are likely to be made.
Principle 3: Be fair, transparent and accountable about what data is being used
Citizens hold strong views on how their data is held and what it is used for. Information governance describes how confidential information is managed. It covers the requirements and standards that organisations and their suppliers need to fulfil in order to meet obligations that information is handled legally, securely, efficiently, effectively and in a way that maintains public trust. It seeks to achieve a balance between privacy and sharing of personal confidential data.
GDPR and incorporating privacy by design
The GDPR came into full effect on 25 May 2018 and, coupled with the new UK Data Protection Act 2018, replaced the existing data protection law. From a practical perspective, the important documents underpinning a privacy-by-design approach are the data flow maps and data protection impact assessments (DPIA).
Have a data flow map to help complete the required DPIA
A good data flow map identifies the data assets (data at rest) and data flows (exchanges of data) that enable the relevant objective or initiative to be delivered.
Two processes need to be recorded in data flow maps:
- the actual exchanges of data – physically or digitally – between technologies and people through which roles (including the status of data controller and data processor) are defined
- the legal basis for each transfer of data between parties
A data controller is the person or agency who determines the purposes and means of the data processing activity. A data processor is the person or body that processes the data on behalf of the controller. An issue to address when mapping data flows is identifying and assigning controller and processor relationships for each processing activity. Guidelines on how to do this are being drafted by the Information Governance Alliance for future publication.
Where data flow mapping identifies instances where data is processed by a data processor on behalf of a data controller, a legally binding written data processing contract is required. This should include clauses appropriate to the processing risks identified (highlighted in the DPIA), as well as mandatory clauses for all data processing contracts.
Once complete, data flow maps will have different characteristics – each can be used to clearly assess and attribute the different levels of risk and should be maintained and updated throughout the life of the project.
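One practical way to keep such a map usable throughout the life of a project is to record each flow as structured data rather than only as a diagram. The sketch below (Python, with an illustrative schema: the field names and the example flow are assumptions, not a mandated format) records the parties, their controller/processor roles and the legal basis for each transfer.

```python
from dataclasses import dataclass

@dataclass
class DataFlow:
    """One exchange of data between two parties (illustrative schema)."""
    source: str          # party sending the data
    recipient: str       # party receiving the data
    asset: str           # description of the data asset exchanged
    source_role: str     # 'controller' or 'processor'
    recipient_role: str  # 'controller' or 'processor'
    legal_basis: str     # legal basis recorded for this transfer

flows = [
    DataFlow("NHS Trust A", "Analytics Supplier Ltd",
             "pseudonymised admissions extract",
             source_role="controller", recipient_role="processor",
             legal_basis="GDPR Art. 6(1)(e) / Art. 9(2)(h)"),
]

# Recording roles explicitly makes it simple to list the flows that
# require a written data processing contract (controller -> processor).
needs_contract = [f for f in flows if f.recipient_role == "processor"]
```

Keeping the map machine-readable also makes it easier to feed directly into the DPIA and to re-check it whenever flows change.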
Completing a DPIA
The data flow map will then influence the DPIA as the vehicle by which proposed flows of personal identifiable data are governed, and the controls developed to ensure lawful processing. There are 2 stages:
- an initial evaluation of the degree of risk, informed by data flow mapping, to determine the depth to which a DPIA must be executed
- where the proposed processing is likely to result in a ‘high risk to the rights and freedoms of individuals’, a full assessment of privacy risks and mitigating activity
The vast majority of data processing in a health and social care context will involve special categories of data and it is therefore recommended that a full DPIA is carried out. A DPIA is intended to be a ‘living document’ and should be regularly reviewed and updated by programmes.
Alongside data flow mapping, DPIAs must be carried out at the earliest possible opportunity and can be used to identify information governance capacity and capability requirements. DPIAs identify risks, and risk-mitigating controls should be proportionate to the risk identified. If a risk is identified that cannot be mitigated, the ICO must be consulted before processing commences. They will normally provide advice within 8 weeks, or 14 weeks in complex cases. For further information on when permission needs to be sought from the Health Research Authority, please see principle 6.
Data sharing agreements are only valid between data controllers (those who determine the purposes and means by which personal data can be processed). They should not be confused with data processing contracts, which govern relationships between controllers and processors (those who undertake the processing of data on behalf of a controller).
Data sharing agreements are strongly recommended, although they are not a legal requirement. They set out specific concerns relating to the data to be shared, as identified through data flow mapping and DPIA exercises.
The requirement for legally binding contracts typically invokes a related requirement for advice from qualified information law practitioners. Where the data flow mapping and DPIA process identifies the need for a data processing contract, projects need to provide sufficient resources and allow lead times for drafting and agreeing contracts.
Principle 4: Use data that is proportionate to the identified user need
Be able to explain to a lay member of the public why the data used was needed and how it is meeting the user need. Be able to explain the proportionality of the data and that the questions under principle 3 of the government data ethics principles have been answered.
Be aware that since May 2018 there has been a new national data opt-out policy, which allows people to say if they do not want their confidential patient data to be used beyond their direct care. The opt-out does not apply to data that is anonymised in line with the ICO’s Code of Practice on Anonymisation. If there is any doubt about whether to seek explicit consent, guidance should be sought from the ICO or the National Data Guardian’s Office.
Principle 5: Make use of open standards
Within the NHS health and social care system, information standards cover the specifications used to collect and extract data from information technology systems. Collections and extractions are defined data sets that can then be used to measure or conduct analysis of particular areas of interest.
NHS Digital currently hosts a range of data, clinical and interoperability standards for the health and social care network.
Other data, clinical and interoperability standards:
- NHS England interoperability standards
- InterOpen current FHIR, Care Connect, HL7 and PRSB standards
- UK government open standards
Principle 6: Be transparent about the limitations of the data used
Having complete and accurate data improves the data quality
The data used must be well understood and reviewed for accuracy and completeness. Accuracy is the closeness of agreement between a data value and its true value. Completeness is the presence of the necessary data. NHS Digital publishes a quarterly data quality and maturity index, which provides data submitters with transparent information.
A 2-staged approach is suggested when applying analytics to any data. Algorithms should first be trained to understand the level of data quality, and then achieve their objective using the variables given. This 2-stage approach should be built in so that sharp fluctuations in data quality are handled appropriately.
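The two-stage idea can be sketched as a simple quality gate run before the modelling objective. The completeness metric and the 95% threshold below are illustrative assumptions, not prescribed values.

```python
def quality_metrics(records, required_fields):
    """Stage 1: score the completeness of an incoming batch of records."""
    total = len(records) * len(required_fields)
    present = sum(1 for r in records for f in required_fields
                  if r.get(f) is not None)
    return present / total if total else 0.0

def train_if_quality_ok(records, required_fields, train_fn,
                        min_completeness=0.95):
    """Stage 2: only proceed to the modelling objective if the batch
    passes the quality gate (threshold is illustrative)."""
    score = quality_metrics(records, required_fields)
    if score < min_completeness:
        raise ValueError(f"batch completeness {score:.2%} below gate")
    return train_fn(records)
```

A batch with missing required values then fails loudly at the gate instead of silently degrading the trained model.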
Assessment of data quality should not be a one-off check – continuous anomaly detection should be in place to provide alerts to changes in a data source. NHS England and the UK Statistics Authority have produced guidance on data quality, which should be referred to, as has the National Institutes of Health (US).
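As one minimal form of continuous anomaly detection, a rolling check can flag days on which a quality metric (for example, batch completeness) departs sharply from its recent history. The window size and thresholds below are illustrative assumptions.

```python
import statistics

def quality_alerts(history, window=10, z_threshold=3.0):
    """Flag indices where a quality metric departs sharply from its
    recent history (window and thresholds are illustrative)."""
    alerts = []
    for i in range(window, len(history)):
        recent = history[i - window:i]
        mean = statistics.fmean(recent)
        sd = statistics.pstdev(recent)
        if sd == 0:
            # Flat baseline: any material shift is anomalous.
            if abs(history[i] - mean) > 0.01:
                alerts.append(i)
        elif abs(history[i] - mean) / sd > z_threshold:
            alerts.append(i)
    return alerts
```

Wiring such a check into the ingestion pipeline turns data quality assessment from a one-off exercise into an ongoing alerting process.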
Good data linkage will avoid reducing data quality
There is a range of approaches for linking data, which can provide differing levels of accuracy and data loss. It is often necessary to strike a balance between good matching accuracy and loss of too many records. Consideration should be given to the effects of a selected linkage procedure on data quality. In particular, if the process could cause systematic loss or mismatching of a particular type of record, this could have downstream implications in model assumptions and algorithm training.
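The accuracy-versus-loss trade-off can be made explicit by sweeping the match-score threshold used to accept candidate links. A minimal sketch, assuming match scores have already been computed by some linkage method (the scoring itself is out of scope here):

```python
def link_records(pairs, threshold):
    """pairs: list of (record_id_a, record_id_b, match_score in [0, 1]).

    A higher threshold accepts fewer, more certain links (better matching
    accuracy but more records lost); a lower threshold keeps more records
    but risks mismatches. The threshold is a study-specific choice.
    """
    links = [(a, b) for a, b, score in pairs if score >= threshold]
    loss_rate = 1 - len(links) / len(pairs) if pairs else 0.0
    return links, loss_rate
```

Reporting the loss rate alongside the accepted links for several candidate thresholds makes any systematic loss of a particular record type easier to spot before model training.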
Where you can access data sets
There is a range of sources of health data:
- Public Health England collects a range of data, made available in different formats, for example their fingertips tool
- the Office for National Statistics collects a range of health-related microdata at their ONS virtual microdata lab
- UCI have built an open source training dataset for machine learning (Health Data Research UK are in the process of building further training datasets)
- Health Data Finder
- NHS Digital
Access must be requested for data that is not already in the public domain. The process for this varies depending on the organisation providing the data, and this should be detailed on the organisation’s website. NHS Digital holds the responsibility for standardising, collecting and publishing data and information from across the health and social care system in England. It is in the process of establishing a remote data access environment. Access to restricted data and information is via the Data Access Request Service.
If the approach uses data newly donated by individuals, there will be a range of additional considerations, including appropriate consent and ethical approval processes; approval can be obtained from the Health Research Authority. Appropriate consideration should also be given to privately accessed training and synthetic datasets.
Training vs deployment
Be clear on the strengths and limitations of the training versus deployment data set. If the algorithm has been built on a training set and not yet deployed in a real-world clinical implementation, transparency should be shown to that effect. Demonstrate whether the algorithm is published in a real-world deployed environment or a training environment.
Principle 7: Make security integral to the design
A core element of at-scale adoption and uptake is to ensure that security and data protection methodology have been incorporated as set out in principle 1. NHS Digital has launched a new Data Security and Protection Toolkit to replace the previous Information Governance Toolkit to ensure that patient information is kept safe. All organisations that have access to NHS patient data and systems – including NHS trusts, primary care and social care providers, and commercial third parties – must complete the toolkit to provide assurance that they are practising good data security and that personal information is handled correctly.
When developing an application, ensure the product meets the OWASP Application Security Verification Standard, which is used to establish a level of confidence in the security of web applications.
Principle 8: Define the commercial strategy
From a commissioner’s perspective, the commercial strategy will define the engagement with data-driven technology providers:
- the scope of the engagement
- the value added
- the responsibilities of organisations involved
- the restrictions on the activities
- the liabilities and the period for which those arrangements apply
Developing a clear idea of the vision before engaging with industry is beneficial. Whatever happens, the relationship with developers will be dictated by a binding legal contract, and this will determine the interaction and the end points. Once the data implications of the technology being considered have been fully understood, the implications of the contract terms also need to be fully understood, which will require legal advice.
Before engaging with the legal teams, the following should be considered:
- scope: does the intended arrangement provide the developer with exclusive access to the data for whatever purposes (which is one model), or is the intention to restrict their work with the data to a very limited area or issue? Issues here are exclusivity and scope
- term: how long does the engagement last? What happens when it is over?
- value: how is value created and realised? There are numerous models: simple royalties, reduced payments for products, equity shares in the business, improved data sets (where curation and labelling are provided by the technology provider in return for access to the resulting clean data). There is no simple or single answer
- compliance with laws and rules around data: is there clarity on what will happen to the data (including for patients)? A point of particular interest is the use of data by AI as ‘training sets’: an algorithm can learn from the data (increasing the algorithm’s value) without retaining it
- ownership of intellectual property: there are numerous models in use across this area, broader medical and pharmaceutical research, and across academia. Which is most applicable will depend on a host of factors. This is important where the same algorithm is used in multiple applications – the participant (or data source) in each application cannot hope to own the underlying algorithm but might hope to share in the increase in value
- liability: is the intended application such that someone (an individual or an institution) can suffer loss or harm if something goes wrong? If so, where should liability lie?
- transparency, quality assurance, audit: is there clarity on the extent to which the product will be transparent and can be audited or tested with test data? Is there clarity on whether there is capacity and resource to implement that audit, and if not, what is the point of having the right to do so?
- bias: the single greatest threat to reliance on data-driven technology is the actual or possible presence of bias. Any commercial arrangement will need to identify where bias might manifest, how it will be managed, by whom, and at whose cost
- NHS value added: data-driven technology providers/developers want to work with NHS bodies because of the unique data set, but also because of many other value-adding characteristics. A commercial arrangement should ensure that these characteristics are fully recognised and compensated in any value-sharing arrangement
- roles: traditional research collaborations (of which there are many examples across the NHS) have an established operating model around what each party contributes. Partnerships for the development of data-driven technologies may require completely new roles. Achieving clarity in advance of what contribution is expected of each party is key
This is by no means a complete list – the range of engagements with data-driven technology providers is such that there is no single set of commercial terms that need consideration. As with choosing the right legal support, there is no definitive set of commercial terms available from NHS England, but colleagues across the NHS will have views.
The final element of a complete commercial strategy is to make sure that systems are in place to ensure that the institution (for example, the organisation’s governance team) fully understands the technological and commercial/legal implications of the engagement. This should be more than just including monthly reports to the board. It is about developing systems for reporting, reviewing and challenging technology, which is ever-evolving.
Principle 9: Show evidence of effectiveness for the intended use
Standards that define what evidence should be required to demonstrate the effectiveness and economic impact of digital health innovations are currently being developed. As a principle, the standard of evidence expected of digital tools will be proportionate to the function of the tool being assessed. Different functions will require different types and levels of evidence.
When building or developing the technology, consider what function the product delivers; this will guide what sort of evidence generation plan should be put in place. The standards will also help the intended evaluation body review the package of evidence provided to them and identify gaps in the information received from innovators.
A functional categorisation of digital tools is emerging (see table below). The functions can be graded in terms of clinical impact and risk of harm. As such, the evidence standards will increase as the risk associated with the technology grows. For those products that perform more than one function, the evidence requirements for the function that carries the highest potential impact and highest potential harm should be met.
| Evidence tier 3a | Evidence tier 3b | Evidence tier 2 | Evidence tier 1 |
| --- | --- | --- | --- |
| Public health: tools for the public to change behaviour around smoking, eating and so on | Treat: provides treatment, guides treatment | Inform: provides information to citizens, patients or clinicians | System services: no measurable patient outcomes, but provides services to the health and social care system |
| Self-manage: allows people to self-manage a specified condition. May include behaviour change techniques | Active monitoring: tracking patient location, using wearables to monitor a specific condition | Simple monitoring: includes general health monitoring using fitness wearables and simple symptom diaries | - |
| - | Calculate: a calculator that impacts on treatment, diagnosis or care | Communicate: allows people, patients or clinicians to communicate | - |
| - | Diagnose: diagnoses a specific condition, guides diagnosis | - | - |
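The classification above can be sketched as a simple lookup that applies the ‘highest potential impact and highest potential harm’ rule for multi-function products. The function names are taken from the table; the tier ranking 1 < 2 < 3a < 3b is an assumption inferred from the statement that evidence standards increase as risk grows.

```python
# Illustrative sketch of the functional classification: map each function
# to its evidence tier, and for a multi-function product apply the tier
# of the highest-risk function present.
TIER_BY_FUNCTION = {
    "public health": "3a", "self-manage": "3a",
    "treat": "3b", "active monitoring": "3b",
    "calculate": "3b", "diagnose": "3b",
    "inform": "2", "simple monitoring": "2", "communicate": "2",
    "system services": "1",
}

# Assumed risk ordering: tier 1 lowest, tier 3b highest.
TIER_RANK = {"1": 0, "2": 1, "3a": 2, "3b": 3}


def required_tier(functions: list[str]) -> str:
    """Return the evidence tier for the highest-risk function present."""
    tiers = [TIER_BY_FUNCTION[f] for f in functions]
    return max(tiers, key=lambda t: TIER_RANK[t])
```

For example, a product that both communicates and diagnoses would be held to the tier 3b evidence requirements of its diagnostic function.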
More detail on this is being developed under the Evidence for Effectiveness project, in partnership with NICE, Public Health England, Academic Health Science Networks, academia and industry. This project is being led by NHS England with executive sponsors in NHS England and NICE. The classification will be further iterated as part of this work.
Principle 10: Show the type of algorithm being developed or deployed, the evidence base for using that algorithm, how performance will be monitored on an ongoing basis and how performance will be validated
When building an algorithm, be it a standalone product or integrated within a system, be clear and transparent about the learning methodology (if any) that the algorithm is using. Transparency about algorithms that have a higher potential for harm or unintended decision-making, as detailed by the data-driven technologies classification (see above), helps to meet the transparency requirements of article 12 of the GDPR (often discussed as the ‘right to explanation’) and, importantly, builds users’ trust, enabling better adoption and uptake.
Work collaboratively with partners, provide the context of the algorithm and be transparent on whether the model is based on active, supervised or unsupervised learning. Again, be clear on the reasons for the choice, the identified limitations and the assurance and evaluation surrounding the approach. This will help ensure that the most appropriate skillset is trained and utilised. It will also inform the potential resource implications as the model develops and is refined. Consider a clear standard operating procedure (SOP) for how the algorithm will be implemented, which will include:
- information governance
- data flow maps
- ongoing regulation
- lines of responsibility for the different parts of the algorithm – for example, technical, clinical
- handling of data quality entering the algorithm
- decision-making or decision support
- intended use of the output
In a clear and transparent SOP, show:
- the strengths and limitations of the algorithm
- its learning methodology
- whether it is ready for deployment or still in training
- where the bar of acceptable use is for the algorithm
- the potential resource implications
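As a hypothetical illustration, the SOP fields listed above could be captured as a structured record, so that each deployment documents them explicitly before go-live. The field names and the completeness check are illustrative assumptions, not a mandated schema.

```python
# Hypothetical sketch: an SOP for an algorithm recorded as structured data.
from dataclasses import dataclass, field


@dataclass
class AlgorithmSOP:
    name: str
    learning_methodology: str      # e.g. "active", "supervised", "unsupervised", "none"
    deployment_status: str         # "in training" or "deployed"
    strengths: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)
    acceptable_use: str = ""       # where the bar of acceptable use sits
    responsible_roles: dict[str, str] = field(default_factory=dict)  # e.g. {"clinical": "..."}
    data_flow_map: str = ""        # reference to data flow documentation
    intended_output_use: str = "decision support"  # or "decision-making"

    def is_deployable(self) -> bool:
        """Minimal completeness check before the algorithm is put into use."""
        return bool(
            self.limitations
            and self.acceptable_use
            and self.deployment_status == "deployed"
        )
```

Recording the SOP in a structured form makes it easier to review, audit and challenge as the algorithm evolves.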
Additionally, a clear methodology, a clearly stated intended use and transparency about the algorithm will build trust in incorporating machine-led decision-making into clinical care, improve accuracy and increase the chances of adoption.
Understanding why the decision was made or not made by the clinical decision support system/algorithm, the level of clinical and model evaluation, the accreditation of the algorithm, why an error may occur, and when to trust the output will help build public and clinician trust, help train the workforce and enable the proper scale-up and adoption of machine learning clinical decision-making.
By learning collaboratively in this intelligent algorithmic space, we can build better policy, deliver more affordable and sustainable healthcare, enable clinicians to focus on complex care delivery, create more appropriate commercial frameworks and change current regulatory and legislative practices to allow for a future where intelligent machine-led decision-making is part of health and care.
It must be emphasised that this is a working document. The NHS will continue to develop and iterate it, working with all relevant partners who sign up to the code to improve it continually.
Commitment 1: Simplifying the regulatory and funding landscape
We recognise that the regulatory environment for health and care technology is complex and not always conducive to identifying the best data-driven innovations or ensuring that they are adopted widely across the system once they are shown to be safe and effective. We further recognise that the support offered to innovators is skewed towards research and development. This is not enough to support the spread of innovation or to help innovators, particularly in the SME space, scale up.
This section of the code outlines current or planned activity that we will undertake to:
- improve and better target the support available for digital innovation in health and care
- make it easier to demonstrate alignment with the principles of the code, and thus simplify the process of bringing products to the health and social care marketplace
We will build on the code of conduct as our longer-term work on regulation, procurement and commercial arrangements progresses.
Innovations need to be demonstrably safe and clinically effective. We recognise that currently both our regulatory framework and the way it is applied do not always make this easy.
The 10 principles set out in detail what is required in terms of evidence from different classes of digital health tools. The trusted approval scheme, which forms part of objective 2, provides a streamlined means to demonstrate that a product has been developed and tested in accordance with these requirements. This will help to address the challenges that innovators can experience in getting their products to decision-makers.
Alongside this, we are reviewing the current regulatory framework, beginning with a ‘gap analysis’ to identify any problem areas and develop solutions. This work will develop over time and may require a long-term focus.
The current landscape
The current health and care innovation landscape can be overly complex and confusing to innovators, and is not always consistently aligned with health and care priorities. Analysis from leading think tanks, such as The King’s Fund and The Health Foundation, and feedback from stakeholders across the system, suggest that adoption and spread of innovations is lower than it should be.
There is a lack of coherence across funding schemes, meaning that innovators find it difficult to identify the right scheme, and we do not effectively track products that have been funded or support their pull through the system, which leads to missed opportunities for the NHS, social care, patients and the economy.
We recognise the need to streamline, simplify and maximise the impact of public funding and support for innovation in health and care.
Our vision for supporting innovation
We will provide a joined-up system of support so that innovators with clinically and cost-effective technologies that meet NHS priorities can rapidly introduce and scale their products within the health service. This support system will reinforce the UK’s position as a global leader in health innovation and deliver better outcomes for patients.
Our objectives for supporting innovation
- Simplification of the regulatory and funding landscape, creating a simpler way for innovators to engage with it, to provide clarity around the route to market for their products and to help innovators understand the needs of the health and care system
- Testing and evidence to enable the health and care system to become a test bed for new innovations, allowing the true value of innovations to be demonstrated and laying a path for wider expansion
- Adoption and spread of innovation to create an environment where the health and care system becomes willing and able to adopt proven innovations, resulting in innovation becoming a routine part of practice
Commitment 2: Creating an environment that enables experimentation
Our world-class patient data also makes the UK an attractive place for industry to invest and carry out research. Ensuring that, within appropriate legal frameworks, our data is accessible for research will incentivise the life sciences sector to locate in the UK, bringing with it investment, jobs and, crucially, future access to the most cutting-edge treatments for UK patients.
Beyond direct patient benefit and the ability to attract research investment, we cannot ignore the fact that UK patient data also has the potential to be a valuable national asset. AI technology is potentially very profitable for the life sciences sector and, in general, applications for the health and care system cannot be developed without access to patient data.
Access to this data will only ever be given where the end goal is an innovation that benefits patients and the public, and when the data in question can be kept safe and secure. Patient data will only ever be used within the legal frameworks, the strict parameters of the codes of practice and the standards set out by the National Data Guardian and regulatory bodies. However, assuming these criteria are met, it is appropriate to consider whether the UK and in particular the NHS should also seek to benefit when the data that it has collected is used to develop valuable new technology.
Commitment 3: Encouraging the system to adopt innovation
The UK currently does not have a trusted approval (Kitemark) scheme for digital health and care products. The NHS Apps Library represented a first attempt to apply this type of approach to a subset of technology. However, we recognise the need for a more rigorous and sophisticated approach.
Successful introduction of trusted approval will enable suppliers to demonstrate that their product complies with the code of conduct. This in turn will provide a level of assurance to commissioners and purchasers in health and social care, providing an easier and better-defined route to market.
We undertake to introduce a trusted approval scheme for digital technology in health and care that will be:
- proportionate to the level of risk inherent in different classes of product
- robust enough to safeguard patients and assure commissioners and purchasers considering a given product
- as straightforward as possible for a compliant product to obtain, to avoid delaying products getting to market
Clinicians and professionals need to buy in to a different way of delivering care, and we cannot achieve transformation without this. We will build a joined-up narrative around work on benefits of data, consent and the role of data-driven technology to support our engagement with care professionals and commissioners.
This will align with our engagement with the Topol review and Health Education England’s workforce strategy – we need to ensure that the health and care workforce is properly equipped to act as an intelligent customer for technology and innovation.
If health and care providers and their suppliers demonstrate adherence to the code of conduct, this will provide an important level of reassurance for commissioners, who increasingly need to put innovative services in place to ensure that population needs are met.
Commitment 4: Improving interoperability and openness
The breadth and depth of NHS data is unique and the potential benefits from this to patients and the public are huge. Access to this, and other relevant data, can improve the delivery of joined-up individual care and experience, and empower people to manage their own health.
The health and care system can also use this data to manage and plan services more efficiently and help clinicians’ workload and effectiveness. In research, access to high-quality data is fundamental across the pathway, from running clinical trials to developing diagnostic algorithms.
We therefore commit to the introduction of open data and interoperability standards for health and care, building on the work already done to develop Local Health and Care Record Exemplars. The first round of these recently launched in Greater Manchester, London, Wessex, Yorkshire and Humber, and Thames Valley and Surrey to create complete, electronic patient records. Together the 5 areas will cover 23.5 million people.
In developing and introducing standards, NHS England, the Department of Health and Social Care and the Office for Life Sciences will be working with representatives of the public, industry, healthcare professionals and the research community to explore how to maximise the benefits of health and care data for patients and taxpayers. We will:
- empower people to take control of their own data and to act as the gatekeeper for information about them
- establish clear, open and public standards for health and care data, and develop open APIs to support development of innovative tools
- establish clear ethical, commercial and technical ‘rules of engagement’ for access to de-identified clinical data sets to assure mutual benefit
- ensure that current legal, ethical and privacy standards continue to be rigorously applied for any applications requiring access to identifiable data
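As a hypothetical illustration of what data exchanged over such open APIs might look like, the sketch below builds a minimal patient resource in the style of HL7 FHIR, a widely adopted open healthcare data standard. FHIR is not named in this document, and the resource content here is invented for illustration.

```python
# Hypothetical sketch: a minimal FHIR-style Patient resource, of the kind
# an open, standards-based API might exchange as JSON.
import json


def make_patient(patient_id: str, family: str, given: list[str]) -> dict:
    """Build a minimal FHIR-style Patient resource as a plain dict."""
    return {
        "resourceType": "Patient",
        "id": patient_id,
        "name": [{"family": family, "given": given}],
    }


# Serialise to JSON, as an open API would when exchanging the resource.
resource = make_patient("example-001", "Smith", ["Jane"])
payload = json.dumps(resource)
```

Publishing payload shapes like this against an open specification is what allows independently developed tools to interoperate without bespoke integration work.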
Commitment 5: Listening to our users
We are developing communications and engagement plans to ensure an open, honest and joined-up conversation around the benefits of data, consent and the role of data-driven technology with the widest possible audience. We need to position patients as owners of their own data, actively engaged in managing their own health and wellbeing.
Taken together with our work to realise the benefits of health and care data, the outcome of this work will be an ecosystem for innovation in which:
- patients can be the gatekeepers for their own data, and can make use of this to be more engaged in their own care
- commissioners and those making procurement decisions can do so in an informed way
- clinicians, care professionals, patients and service users can access the best, technology-enabled care and treatment more quickly
With a clear demand signal and a single pipeline for innovation funding and support, industry can plan with greater certainty and scale up commercialisation of products much more rapidly.
Annex: further information on data standards
The Data Ethics Framework, published by the Department for Digital, Culture, Media and Sport, sets out clear principles for how data should be used in the public sector. It is designed to ensure that the public sector can maximise the value of data while also setting the highest standards for transparency and accountability when building or buying new data technology.
The National Data Guardian’s 10 data security standards are in place and form part of the standard contract that goes out to all providers. Providers must demonstrate that they are compliant with these. Since September 2017 the 10 data security standards have been part of CQC’s leadership assessment of well-led NHS trusts. Primary care and adult social care providers followed in November 2017. In April 2017 the NHS Standard Contract 2017/18 included new requirements to adhere to the 10 security standards. The standards are set out in full in the National Data Guardian’s Review of Data Security, Consent and Opt-Outs.
The national data opt-out is a service that allows people to opt out of their confidential patient information being used for research and planning. It was introduced on 25 May 2018 in line with the recommendations of the National Data Guardian in the above review.
The Data Security and Protection Toolkit covers the data security and data protection requirements for health and social care organisations, including the 10 data security standards and GDPR. Organisations can self-assess against this.