Guidance

The Government Data Quality Framework: case studies

Published 3 December 2020

These case studies show how public bodies have applied parts of the Government Data Quality Framework.

You can link to each case study from the relevant section of the framework.

1. Home Office: Developing a data quality culture within policing organisations

Year of quality

This case study has been developed to highlight the important steps required within a policing organisation to develop a culture of high data quality. Evidence suggests that high data quality is of primary importance to policing, and that a data quality culture does not exist consistently and effectively across the service.

All organisations have a culture: a set of norms and values which collectively guide the behaviour of their employees. Culture is neither good nor bad, but it can nurture values and behaviours which either support or prevent organisational development. Cultures which inhibit high data quality can be a significant barrier to creating and leveraging knowledge assets and to realising the value of technological investments. The principal aim is to support forces in developing an ethos amongst all employees that poor data standards are unacceptable, and to commit each force to a programme of continuous improvement.

Steps to consider

1. Leadership

This does not only mean the most senior person in the organisation. Of course, you want your Chief Constable or Senior Information Risk Owner (SIRO) to set the strategic direction, but it is the network of informal leaders, who command the respect of their peers, that will embed the concept of high data quality across all ranks and roles within the organisation.

Identifying and engaging with these people at the outset of any planned programme of action will ensure that:

  • you understand the data and information needs of your organisation at an organic level
  • your message will be communicated in a way that resonates with the audience
  • this is not a one-off exercise but becomes business as usual

2. Organisational commitment

It is not enough to tell people what is expected; the organisation must make a real and tangible commitment to supporting high data quality:

  • A corporate framework for the management of, and accountability for, data quality is in place with a commitment to secure a culture of high data quality throughout the service and with partners. This accountability is at both an organisational and individual level.
  • Policies and procedures are in place to ensure the quality of the data recorded and used. The organisation must ensure that policies and procedures are up to date and reflective of how the organisation works.
  • Systems and processes are in place which ensure the quality of data as part of the normal business of the force, focussing on the data that matters.
  • Arrangements are in place to ensure that staff have the appropriate knowledge, competencies and capacity in their roles in relation to data quality and that this is embedded in all training and learning material.
  • Arrangements are in place that are focussed on ensuring that data is actively used in decision making, resource allocation and planning processes and is subject to internal control and validation. ‘Sell’ your data – restore trust in what the data shows.
  • Create a culture where a commitment to data quality is considered part of what makes a good police officer.

3. Baseline your maturity

To target efforts effectively it is important to baseline your force’s maturity in its relationship with data.

A maturity model is simply a framework that is used as a benchmark for comparison when looking at an organisation’s processes. In this instance it is specifically used when evaluating the capability to implement data management strategies and the level at which the force could be at risk.

The Information Assets Maturity Model is designed to provide the police service with a comprehensive reference tool for driving improvements in data. This tool will allow organisations to assess themselves against recognised best practice and develop detailed data management improvement plans which support the maximisation of the benefits data presents.

There are four pillars under which a force should be assessed:

  1. Process – the ability of an organisation to understand its capabilities to use data to drive business processes and innovation.

  2. People – the ability of an organisation to recognise the skills and capabilities needed to create and use good quality data.

  3. Technology – the ability of an organisation to understand the technical solutions needed to support the optimisation of data.

  4. Data – the ability of an organisation to understand the data and information it creates and uses.

This must encompass the entire organisation – a force will not have a holistic view of its maturity if all areas of the business are not represented.
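
As an illustration, the sketch below (in Python) records a hypothetical baseline assessment against the four pillars and identifies the weakest pillar to target first. The 1 to 5 scoring scale, the business areas and the simple averaging are illustrative assumptions, not part of the Information Assets Maturity Model itself.

```python
# A minimal sketch of recording a maturity baseline against the four pillars.
# The 1-5 scale, business areas and aggregation below are illustrative assumptions,
# not part of the Information Assets Maturity Model itself.

PILLARS = ["process", "people", "technology", "data"]

# Hypothetical self-assessment scores (1 = initial, 5 = optimised) per business area
assessments = {
    "Response policing": {"process": 2, "people": 3, "technology": 2, "data": 2},
    "Criminal justice":  {"process": 3, "people": 3, "technology": 3, "data": 2},
    "Intelligence":      {"process": 4, "people": 3, "technology": 3, "data": 3},
}

def pillar_averages(assessments):
    """Average score per pillar across all assessed business areas."""
    return {
        pillar: sum(scores[pillar] for scores in assessments.values()) / len(assessments)
        for pillar in PILLARS
    }

averages = pillar_averages(assessments)
weakest = min(averages, key=averages.get)
for pillar, score in averages.items():
    print(f"{pillar:<11} {score:.1f}")
print(f"Lowest-scoring pillar to target first: {weakest}")
```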

4. Make data quality matter on a personal level – articulate the why

Communicating the ‘why’ is critical to the success of any data quality improvement programme. An organisation should consider the best way to deliver these messages, including a range of non-standard communication tools.

Road shows

Staff from the Data Quality Team (or an equivalent) will carry out a series of road shows across the force to introduce the main messages around data quality.

Intranet

The main messages of the campaign and a guidance document will be made available via the force intranet on a regular basis.

Desktop wallpaper

A desktop wallpaper will be produced and updated on a regular basis to reflect key messages of the campaign.

PowerPoint briefing

A PowerPoint briefing document will be produced by the Data Quality Team which will be circulated to all Sergeants and other supervisors to deliver as part of the staff briefing process.

Induction pack

The force will produce a comprehensive pack of information and guidance for all new members of staff – detailing their responsibilities upon joining the force.

Integrated training

The Data Quality Team will provide core training material to the learning and development lead to ensure that the main messages are woven into all training programmes. Messages will be tailored to suit specific business functions.

Data quality electronic newsletter

A quarterly data quality newsletter will be produced by the Data Quality Team. The newsletter will be available electronically and will include topical information in relation to data quality. Case study information will be included to help the reader focus on the main messages of the campaign; case studies will be operational in focus.

Data quality information sheet (aide-mémoire)

A data quality information sheet which will act as a briefing sheet for staff will be produced and disseminated via the force intranet. This briefing sheet will be tailored to specific business functions.

Posters

A series of impactful, operationally focussed posters will highlight the ‘why’ of this campaign. Examples will include:

  • ‘What happens to the data you create?’
  • ‘Data sharing with partners’
  • ‘What’s in a name?’

Data quality commanders’ packs

A quarterly report will go to all commanders or information asset owners (IAOs) to highlight the quality of the data assets their staff are responsible for. A summary of the packs will be presented to the Information Management Board (or equivalent) for action and escalation as required.

5. Take positive action – task force

To make best use of what is likely to be a small internal resource, the force should target its efforts on the data that matters most to it. This will likely be similar in each force, but there will be nuances that need to be considered.

This stage links in with existing workstreams, for example General Data Protection Regulation (GDPR) implementation and the roll-out of information asset registers and IAOs. Forces should already know their data assets; however, there are some important questions:

  1. What is the quality of your data?
    a. What assessment criteria should you use?
    b. How will this assessment be recorded?
    c. How is this data used?
    d. Is it live or legacy data?
  2. What impact does this have on your force?
    a. What ‘hidden’ corrective actions already take place?
    b. What are the estimated costs of these actions?
    c. What do the customers of the data feel?
    d. How is the impact recorded and understood?
  3. What are the potential solutions to managing this data? Cleansing data may not be the only option.
    a. Forces could consider the following to allow risk-based decisions to be made about the retention of the data (see the sketch below):
      i. What was the purpose of the data and is the purpose still current?
      ii. When was the data live and are any elements within it still being updated?
      iii. Is the data searchable?
      iv. Does data need to be retained in relation to any ongoing public inquiry?
    b. If cleansing is the preferred option, any improvement plan needs to be based on the needs of the organisation:
      i. Identify and eliminate the root causes of error.
      ii. Build a continuous improvement programme to minimise errors going forward.

Whatever action is completed, it must focus on the needs of the users.
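
As an illustration of question 3a above, the sketch below (Python) records the four retention questions against a data asset and suggests a next step. The fields and the decision rule are illustrative assumptions, not force policy.

```python
# A minimal sketch of recording risk-based retention decisions for data assets.
# The fields and the decision rule are illustrative assumptions, not force policy.

from dataclasses import dataclass

@dataclass
class DataAsset:
    name: str
    purpose_still_current: bool      # i.   is the original purpose still current?
    still_being_updated: bool        # ii.  are any elements still being updated?
    searchable: bool                 # iii. is the data searchable?
    needed_for_public_inquiry: bool  # iv.  retention required for an ongoing inquiry?

def recommend_action(asset: DataAsset) -> str:
    """Suggest a next step for an asset based on the questions above."""
    if asset.needed_for_public_inquiry:
        return "retain (inquiry hold)"
    if asset.purpose_still_current or asset.still_being_updated:
        return "retain and cleanse"
    if not asset.searchable:
        return "review for disposal (cannot be used as-is)"
    return "review retention against force policy"

legacy_system = DataAsset("Legacy custody records", False, False, True, False)
print(recommend_action(legacy_system))  # review retention against force policy
```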

In conclusion, developing a culture of high data quality requires an organisation-wide programme of action – with clear leadership, clarity about the needs and expectations of the organisation, and a realisation of the benefits good data can bring.

2. Cabinet Office: Establishing a data quality culture through Canvass Reform

What does this case study show?

This case study outlines a new process being used to keep our electoral registers accurate and complete that has inter-governmental data sharing at its heart. This process, known as ‘Canvass Reform’, shows how the Cabinet Office and its main partners have brought data into an outdated auditing process to reduce costs to the taxpayer and the administrative burden on local government officials, while ultimately improving the experience for citizens.

What is Canvass Reform and how does it use data?

Under section 9D of the Representation of the People Act 1983, Electoral Registration Officers (EROs) across Great Britain (statutory officers appointed by their council to prepare and maintain the Register of Electors) are obliged to undertake an annual canvass. The canvass is effectively an audit of the electoral registers: a prescribed process EROs use to confirm the details of electors who should be added to or removed from the registers. For citizens to vote in an election and have their say, they must be on the electoral register.

The pre-reformed canvass is widely recognised as being outdated and cumbersome, due to its reliance on paper-based communication and its one-size-fits-all approach.

We have reformed the annual canvass from 2020 so that it is simpler for citizens and EROs, allowing local authorities to put their resources to best use. Canvass Reform introduces a new ‘data matching step’ at the outset of the canvass process in which EROs compare their electoral registers against data from the Department for Work and Pensions (DWP), and where possible, locally held data sources.

The results of the data matching step will inform the ERO whether the information on their electoral register is likely to be correct. The results will tell them which properties are likely to have the same people residing in them and which properties are likely to have had a change in residents. Based on these results, the ERO has the discretion to canvass properties through a less expensive and less resource-intensive canvassing process where they do not think there will be changes. This allows them to concentrate their resources on a fuller canvassing process for properties that need it.
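
As a minimal illustration of this kind of data matching step, the sketch below (Python) compares register entries with an external data set per property and routes each property accordingly. The field names, the normalisation and the routing rule are illustrative assumptions rather than the prescribed process.

```python
# A minimal sketch of a data matching step: comparing register entries against an
# external data set per property. The field names, normalisation and routing rule
# are illustrative assumptions, not the prescribed Canvass Reform process.

def normalise(name: str) -> str:
    return " ".join(name.lower().split())

register = {
    "10 High Street": {"Ann Smith", "Tom Smith"},
    "12 High Street": {"Priya Patel"},
}

external_data = {  # e.g. DWP or locally held records, keyed by property
    "10 High Street": {"ann smith", "tom smith"},
    "12 High Street": {"james o'neill"},
}

for address, electors in register.items():
    register_names = {normalise(n) for n in electors}
    matched = register_names == external_data.get(address, set())
    route = "lighter-touch canvass" if matched else "fuller canvass"
    print(f"{address}: {route}")
```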

By introducing new communication methods and a less prescriptive canvassing process, the intention of Canvass Reform is to safeguard the completeness and accuracy of the registers, whilst reducing the administrative burden on EROs and the financial burden on taxpayers.

What is the data challenge?

As part of the reformed canvass it is mandatory for EROs to match their electoral registers against data from the DWP at the outset of the canvass process. On top of this, EROs can also match local data, such as council tax records, against their electoral registers so that the results of local and national data matching collectively inform which route each property goes down. As a result, local data matching can be key in helping EROs to save time and resources.

However, EROs face a number of challenges related to using local data including the following:

  • access to local data sources that they might use for local data matching, which may require establishing relationships with the relevant data controllers and setting up data sharing agreements within their authority
  • ensuring the quality of local data sets
  • formatting the data correctly so that it can be matched against the electoral register

What does the law say?

The EROs have powers to acquire data for the purposes of electoral registration. These provisions can be found in the Representation of the People Regulations 2001, regulations 23 and 35:

Regulation 23

A registration officer may require any person to give information required for the purposes of that officer’s duties in maintaining registers of parliamentary and local government electors.

Regulation 35

A registration officer is authorised to inspect, for the purpose of his registration duties, records kept (in whatever form) by:

  • the council by which they are appointed
  • any registrar of births and deaths
  • any person providing services to, or authorised to exercise any function of, any such authority

In addition, data must be processed on a lawful basis as set out in Article 6 of the General Data Protection Regulation (GDPR) and the Data Protection Act 2018. Lawful bases include legal obligation, public task, legitimate interests and consent.

Data protection legislation does not override requirements to gather and process information as set out in existing electoral law, but there will be an impact on how this information is processed and on the responsibilities of EROs and ROs to keep data subjects informed. The Information Commissioner’s Office publishes detailed information about GDPR and the Data Protection Act 2018 on its website.

Glasgow City Council Case Study

This case study shows some of these challenges and demonstrates how establishing a data quality culture in and across organisations helped one ERO to achieve their cost-saving objectives.

The Canvass Reform policy outlined above was informed and shaped by several pilots before it was rolled out across Great Britain in its final form. In 2017, Glasgow City Council’s ERO and Electoral Services Team took part in a pilot which tested the benefits of using a (local) data matching step in their canvass.

During this pilot, they intended to use council tax data to conduct local data matching, comparing the information on their electoral register against council tax records to indicate whether the residents in each property had changed or stayed the same.

As with the national data matching step brought in through Canvass Reform, this would allow them to canvass some properties using a less expensive and less resource-intensive route, if the data matching results indicated that no change at a property was likely. However, in order to use council tax data, the council’s ERO needed to be confident in this data set’s reliability and accuracy.

Challenges

Council tax data provided the team with a large data set which was trusted and could provide evidence for properties which were empty. However, comparing or ‘matching’ this data set against the electoral register was not without its challenges:

  1. The formatting of citizens’ names in council tax data is often different from that on the electoral register.
  2. The property address was not always consistent between council tax data and the electoral register.
  3. Council tax data only held details of the liable council taxpayer(s), which can limit its usefulness as a data set, as there may be other residents.

Solutions

The first two challenges concerned inconsistencies in the formatting of property addresses and citizens’ names. The team identified the Unique Property Reference Number (UPRN) - a unique numeric identifier for every spatial address in Great Britain - as a common format to use across both data sets.

The council’s Registration Services Team worked with their Council Tax Data Team to improve coverage of UPRNs, using Geographic Information System (GIS) software to match addresses to UPRNs. This resulted in 95% coverage of UPRNs for properties in the area across both data sets. This considerably increased the number of properties in council tax data which could reliably be matched with the electoral register.

If EROs don’t have a UPRN for each address, they can liaise with the Local Land and Property Gazetteer (for England and Wales) or Corporate Address Gazetteer (for Scotland) to ensure that UPRNs are attached to each property.

To help increase matching on names, the team used the previous year’s information as an additional data source, which could identify people who had changed their name (for example those who got married) but whose change of name was reflected in only one of the data sets.

The final challenge was that council tax data only holds details of the liable taxpayer(s). As a result, the Registration Services Team found that as the number of residents within a household increased, the ability of the council tax data to match residents with the electoral register decreased. They realised that council tax data was most effective at matching properties with the electoral register when the properties were either empty or had one or two residents. As such, resources were only committed to data matching these types of households.
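
The sketch below (Python) illustrates the approach described above: matching council tax records to the electoral register on UPRN, and only attempting matches for properties with one or two registered electors. The data layout and match rule are illustrative assumptions.

```python
# A minimal sketch of matching council tax records to the electoral register on UPRN,
# only attempting matches where the property has one or two registered electors.
# The data layout and match rule are illustrative assumptions.

register = [
    {"uprn": "906700001", "electors": ["ann smith"]},
    {"uprn": "906700002", "electors": ["priya patel", "dev patel"]},
    {"uprn": "906700003", "electors": ["a", "b", "c", "d"]},  # larger household
]

council_tax = {
    "906700001": ["ann smith"],          # liable taxpayer(s) only
    "906700002": ["priya patel"],
    "906700003": ["someone else"],
}

for entry in register:
    if len(entry["electors"]) > 2:
        continue  # council tax data is least reliable for larger households
    liable = set(council_tax.get(entry["uprn"], []))
    matched = bool(liable) and liable.issubset(set(entry["electors"]))
    print(entry["uprn"], "match" if matched else "no match")
```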

Main takeaways

The main takeaways from this case study are:

  • the Electoral Services Team worked closely with their local council tax team to encourage the sharing of local data
  • joined-up working ensured that the data was formatted consistently and that both data sets were kept up to date
  • understanding the limitations of council tax data meant resources were focused only on property types which had a good chance of matching with the electoral register
  • establishing this data quality culture in and across their organisation helped the Registration Services Team meet their cost-saving objectives

Conclusion

Overall, the Canvass Reform policy shows how data sharing and comparison can be brought into an outdated legislative process to help reduce costs to the taxpayer and the administrative burden on local government officials, while ultimately improving the experience for citizens. The example from the Glasgow pilot underlines how collaborative working between different parts of local government can make data sharing an effective tool to meet these same objectives.

3. Office for National Statistics: Looking to the future - a review of data linking methods

The Office for National Statistics (ONS) conducts data and analysis reviews which take topics of interest and innovation in data, review state-of-the-art methods, and make recommendations aiming to improve government work in the chosen topic areas. These cross-government reviews are future-facing, ensuring that methods used by government keep pace with changing theories, data sources and technologies.

The most recent Data and Analysis review, Joined up data in government: the future of data linking methods, contains a series of articles on state-of-the-art linkage methods and aims to address challenges and make recommendations for improving government data linkage work. It is the second review of its kind; the first was on privacy and confidentiality.

Background

Data linkage is the process of joining data sets by deciding whether two records, in the same or different data sets, belong to the same entity. Records refer to units such as events, people, addresses, households or businesses. Data linkage is crucial for government operations as well as for statistics and research.

However, data linkage is not without its quality challenges. As a country that does not have a single unique identifier for all citizens, the UK is often faced with the challenge of joining up data sets containing different identifiers (such as national insurance and passport numbers). This means performing linkage with other data, such as name, address and date of birth, which can be of varying quality and completeness. Poor quality linkage can lead to poor quality data.
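
As a simple illustration, the sketch below (Python) performs deterministic linkage on a key built from normalised name and date of birth, the kind of matching used when no single unique identifier is shared between data sets. The normalisation rules and field names are illustrative assumptions.

```python
# A minimal sketch of deterministic linkage on normalised name and date of birth,
# as used when no single unique identifier is shared between data sets.
# The normalisation rules and field names are illustrative assumptions.

def link_key(record):
    surname = record["surname"].strip().lower()
    forename_initial = record["forename"].strip().lower()[:1]
    return (surname, forename_initial, record["dob"])

dataset_a = [{"forename": "Anne", "surname": "Smith", "dob": "1980-04-12", "id_a": 1}]
dataset_b = [{"forename": "A.",   "surname": "SMITH", "dob": "1980-04-12", "id_b": 7}]

index_b = {link_key(r): r for r in dataset_b}
links = [(a["id_a"], index_b[link_key(a)]["id_b"]) for a in dataset_a if link_key(a) in index_b]
print(links)  # [(1, 7)]
```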

The process poses various challenges, including but not limited to:

  • how do we know the matches made in linkage are correct?
  • have any matches been missed in the process?
  • how can the privacy of individuals in a data set be protected?
  • how can we ensure government is joined-up in its approach to linkage?

These challenges highlighted that data linkage was the ideal topic for the second in a series of reviews, coordinated by the Government Data Quality Hub in ONS.

The review process

The review was conducted over a number of months focusing on:

  • a period of stakeholder engagement to understand government linkage work and challenges in the area
  • working with subject matter experts to identify quality methods for inclusion in the review and agree contributed articles
  • peer review of contributed articles
  • drawing out key themes and writing a report
  • making recommendations to improve government linkage work

An important part of this review was the peer review sessions. These meetings brought together experts from academia and government to give feedback to the authors, who presented their contributed articles. They also gave input on the recommendations for linkage work; when creating these recommendations it was helpful to have an academic perspective on government data as well as an understanding of the challenges faced by government departments.

The final product includes expert-contributed articles, as well as a digestible summary of the key themes and a list of recommendations for improving data linkage in the future.

Review outcomes

Recommendations were developed from the review which focus on improving data linkage methods and capability across government, ensuring there is strong investment in the field and that methods are keeping pace with changing theories, data sources and technologies.

Developing cross-government data linkage networks and increasing collaboration with academia form a large part of the recommendations. This includes better coordination of linkage projects, avoiding duplication of work, and sharing best practice for data linkage methods.

The review also highlighted new research areas where more research into data linkage methods is needed to understand their utility. If successful, these new methods will improve data linkage and thus enhance the quality of government data. Collaboration on this research, sharing of results and upskilling on new methods is required across government to ensure benefits for all.

Work is now taking place to ensure that these recommendations are implemented, that our methods keep pace with changing sources and technology, and that we continue to make improvements to data linkage work across government.

4. Office for National Statistics: The ONS Data Service Lifecycle

The Office for National Statistics Data Service manages a data lifecycle that takes data from its ingestion into the ONS data management system, the Data Access Platform (DAP), to its export.

What is the Data Access Platform?

The Data Access Platform (DAP) is a set of tools and technologies that enable users to securely access data for exploration and analysis purposes. It also supports users in creating data processing pipelines to automate the processing of data on a regular basis. It is the primary platform for storage, analysis and processing of data in the ONS.

Stages in the ONS Data Service Lifecycle

Data Acquisition

This is the stage where acquisition and collection of all data types take place, including administrative, survey and web-scraped data.

At this stage, users of DAP request different data sets from external suppliers to use in DAP, and survey data is collected using the Survey Data Collection (SDC) platform.

Data Ingestion

At this stage the secure process for getting data into DAP from different data sources takes place. This includes providing a flexible ingestion process for different shapes, sizes and frequencies of data.

Any type of data can be ingested into the DAP.

Data Transformation (1)

This is the stage where security compliance and audit take place, and controlled access to data held in DAP is granted. This stage includes granting permissions to users and teams aligned with security principles and individual conditions for use of data.

Data Transformation (2)

At this stage raw data is prepared and made ready for use. This results in better consistency of data across the ONS. This stage includes applying consistent standards, anonymisation, linking and matching data, and integration into the ONS Data Catalogue.
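
As an illustration of the kind of preparation applied at this stage, the sketch below (Python) applies a consistent date standard and pseudonymises a direct identifier before the data would be catalogued. The column names, hashing approach and salt are illustrative assumptions, not the DAP implementation.

```python
# A minimal sketch of preparing raw data for use: applying a consistent date standard
# and pseudonymising a direct identifier. Column names and methods are illustrative
# assumptions, not the DAP implementation.

import hashlib
from datetime import datetime

raw_rows = [
    {"person_id": "9434765919", "event_date": "12/04/2021"},
    {"person_id": "9434765870", "event_date": "2021-04-13"},
]

def standardise_date(value: str) -> str:
    for fmt in ("%d/%m/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(value, fmt).strftime("%Y-%m-%d")
        except ValueError:
            continue
    raise ValueError(f"Unrecognised date format: {value}")

def pseudonymise(value: str, salt: str = "example-salt") -> str:
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

prepared = [
    {"person_key": pseudonymise(r["person_id"]), "event_date": standardise_date(r["event_date"])}
    for r in raw_rows
]
print(prepared)
```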

Data Integration – Analysis (1)

Data at this stage is accessed, explored and analysed in DAP. Users exploring DAP have access to a range of tools and technologies to support exploration and analysis, available according to permissions granted.

Data Integration – Analysis (2)

At this stage users produce statistical outputs, using tools, technology and processes in DAP. This stage involves the design and testing of processes and pipelines before automation, as well as business process automation.

Data Sharing

Data is exported out of DAP at the sharing stage. Users of DAP can export data to other systems, for example to the ONS Digital Publishing team.

Support for the ONS Data Service Lifecycle

The stages of the ONS Data Service lifecycle are enabled by effective communication, training and support.

Communications

An information ‘hub’ is available for all updates on the service as it develops, including a clear and visible roadmap for delivery across services.

Training and support

Users of the DAP can access an information and support service, clarifying what they need to know about DAP, why, and when. Skills and training support are available to help DAP users.

5. NHS Digital: Developing the Data Quality Maturity Index

What data does NHS Digital collect?

NHS Digital is a non-departmental body created by statute. Under Section 250 of the Health and Social Care Act 2012, NHS Digital has a statutory obligation to collect data as directed by the Secretary of State for Health and Social Care. Currently NHS Digital collects over 200 separate sets of data, ranging from small aggregate returns to large record-level data sets. Data comes from providers across Primary Care, Secondary Care, Community, Maternity and Mental Health Services.

The methods of collection range from bespoke spreadsheets submitted through a secure file transfer mechanism, to complex multi-table relational data sets submitted through a cloud-based portal. The frequency of collection varies from daily aggregate situation reports, through weekly and monthly record level activity-based data sets, to annual data set refreshes and surveys.

How does NHS Digital define the data it collects?

The data NHS Digital collects is for secondary use purposes. This means that it is not directly used in the provision of care. Each secondary use is defined within a Direction or Mandatory Requirement. These are specified on behalf of the Secretary of State by the Department of Health and Social Care or its nominated representative.

For each national data collection request an Information Standard is developed that defines the collection at a data item level. Each data item is defined within the NHS Data Model & Dictionary, describing the format and structure, national codes to be used and when the data is mandated. This is then supported by a Technical Output Specification that further describes each data item. The Technical Output Specification includes the business rules and data validations which will be applied to the submitted data. The final document to support a new data set is the Data Provision Notice. This document provides formal notification to data submitters and their system suppliers of the need to submit the data.
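
As an illustration, the sketch below (Python) checks a submitted value for a data item against a permitted format and a national code list, the kind of business rule and validation a Technical Output Specification describes. The data item, codes and rule are hypothetical examples, not taken from any NHS specification.

```python
# A minimal sketch of validating a submitted data item against a permitted format and
# a national code list. The codes and rules below are hypothetical examples.

import re

NATIONAL_CODES = {"01", "02", "03", "99"}   # hypothetical permitted values
FORMAT = re.compile(r"^\d{2}$")             # hypothetical two-digit code format

def validate(value: str) -> list:
    errors = []
    if not FORMAT.match(value):
        errors.append("does not match the two-digit format")
    elif value not in NATIONAL_CODES:
        errors.append("not a permitted national code")
    return errors

for submitted in ("02", "2", "47"):
    problems = validate(submitted)
    print(submitted, "valid" if not problems else problems)
```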

Why does NHS Digital need to measure data quality?

Under a specific section of the Health and Social Care Act 2012, NHS Digital is required to regularly review and report on the quality of the data it collects. This was previously done in isolation, on a per data set or publication basis, in the form of an accompanying data quality note. However, there was no method for measuring data quality at a data set level, or across data sets by data submitter.

How did NHS Digital develop the Data Quality Maturity Index?

To meet these statutory obligations, there had to be a method of measuring and reporting on data quality at a data item level, by data set and by data submitter. The first consideration was to select the data quality dimensions. After some research, five dimensions drawing on the Data Management Association (DAMA) dimensions were selected:

  • Coverage - the degree to which data has been received from all expected providers
  • Consistency - the degree to which data conforms to an equivalent set of data produced by the same process over time
  • Completeness - the degree to which data items include all expected values
  • Validity - the degree to which data collected meets the set of standards and business rules that govern the permitted values and formats at data item level (excluding defaults)
  • Default - the degree to which the default values specified in applicable standards and business rules have been used in the data collected

The second consideration was to select the individual data items within each data set against which the data quality dimensions would be applied. The approach taken identified data items that were both common across the eight selected data sets and core to data linkage between data sets. The resulting metric was named the Data Quality Maturity Index (DQMI) and was initially published on a quarterly basis. Publication frequency was later increased to monthly to support its wider use across the health system.
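
As an illustration, the sketch below (Python) scores a single data item against three of the dimensions above: completeness, validity (excluding defaults) and default use. The example values, the code list and the simple percentage calculations are illustrative assumptions, not the published DQMI methodology.

```python
# A minimal sketch of scoring one data item against three dimensions (completeness,
# validity excluding defaults, and default use). The values, code list and simple
# percentage calculations are illustrative assumptions, not the DQMI methodology.

VALID_CODES = {"1", "2", "9"}   # hypothetical permitted values for the item
DEFAULT_CODE = "9"              # hypothetical "not known" default

submitted = ["1", "2", "", "9", "2", "X", "1", None, "9", "2"]

populated = [v for v in submitted if v not in (None, "")]
non_default = [v for v in populated if v != DEFAULT_CODE]

completeness = len(populated) / len(submitted)
validity = sum(v in VALID_CODES for v in non_default) / len(non_default)
default_rate = sum(v == DEFAULT_CODE for v in populated) / len(populated)

print(f"completeness: {completeness:.0%}")   # 80% of records have a value
print(f"validity:     {validity:.0%}")       # 83% of non-default values are valid
print(f"default use:  {default_rate:.0%}")   # 25% of populated values use the default
```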

How does NHS Digital use the DQMI to improve data quality?

NHS Digital does not hold any statutory or regulatory powers to force data submitters to improve their data quality. It must therefore work with other health bodies to use their respective powers to develop incentives and levers within existing contractual and performance frameworks. These other health bodies include NHS England, NHS Improvement and the Care Quality Commission (CQC).

The use of the DQMI by data submitters to improve data quality was written into a Service Condition within the NHS Standard Contract. Responsibility was placed with the commissioner of the data submitter’s services to monitor performance through established contract management mechanisms.

In another instance, providers of Mental Health Services were subject to a Commissioning for Quality and Innovation (CQUIN) metric offering a financial incentive to those that achieved a DQMI score of 95% across the contract year. The DQMI has also been incorporated into various NHS frameworks. These include NHS Improvement’s Model Hospital Dashboard, the Single Oversight Framework and the Commissioner’s Improvement and Assessment Framework, and the CQC’s Well-led Domain within their inspection regime.

What has NHS Digital learned from the development of DQMI?

Three main pieces of learning have been taken from the implementation of the DQMI. These will be used to improve its utility and effectiveness going forward:

Selection of data items

The original selection of data items focussed on common / core data items within each data set. This did not provide sufficient coverage, however, and was not representative of the overall data quality of the data sets. In future, data item selection for the DQMI will be based on those critical to the intended use of the data. This could include policy support and calculation of underlying metrics.

Frequency of reporting

Initially, disclosure rules governing the source data set dictated the frequency of the DQMI publication. In some cases, this created a lag of six months between submission and publication, which reduced the effectiveness of the metric. Future development of the DQMI will see it produced at the point of submission and reported to data submitters in near real-time. This will support immediate improvement action.

Use of incentives and levers

Publishing the DQMI without suitable mechanisms for promoting its availability and embedding its use within existing data submitter frameworks significantly reduced its effectiveness. Whilst linking the DQMI to financial incentives delivered the most significant improvements in data quality, funding for this type of incentive cannot be guaranteed across all data sets or sustained year-on-year. Future developments will need to make use of non-financial levers, for example established contractual and performance frameworks. This will ensure those who commission and purchase services have clearly defined responsibilities for data quality and are themselves monitored and performance managed in this area.

6. Government Digital Service: Improving pipeline data quality

The digital and technology spend controls pipeline process aims to capture all technology and digital spending activity across government. To date, the process has helped government save over £1 billion.

The problem

The Government Digital Service (GDS) had been receiving pipeline data from departments in different formats. Problems included:

  • mixed data types
  • different naming conventions for reference data, such as project stages and assurance levels
  • a lack of standards for column names, dates and currencies
  • no identifiers, making future record keeping difficult

These problems came about because the spend control data collection had evolved quickly and organically across government.

How we improved the pipeline data quality

GDS worked with teams across government departments to help solve this data quality problem.

  1. We started by collecting and analysing spreadsheets from different departments. We graded them for maturity to see what kind of change would be required by each department and where their gaps were.

  2. We began looking at simple fixes and decided that columns with numerical, currency or date elements should have a standard format, with explanatory text placed in a separate comment column (a sketch of this kind of standardisation follows this list). These issues should be fixed in sheet production and not by the team receiving them. This is a key data quality principle: fix as close to source as possible.

  3. We discussed potential changes to the spreadsheets with the team receiving them, GDS Senior Technical Architects (STAs), and the government departments who generated them. This follows a core principle of data quality: discuss broadly to agree changes as there are often reasons why the data appears wrong but suits a particular process. Improving data quality is a collaborative process.

  4. We merged the individual spreadsheets delivered by the departments to show what a combined single spreadsheet could look like.

  5. The final list of columns went through 20 iterations following feedback from STAs and government departments.

  6. We created sample reporting demos, based on a proposed list of future columns, to show how the data could be visualised and presented, demonstrating the value to both departments and GDS. Assistance was provided to departments to build reports if they required it.

  7. A final review was conducted with departments to finalise columns and sample values.

  8. A technical writer reviewed the new spreadsheet guidance and it was published on GOV.UK.

  9. Six of the most involved departments said the new column list would be adopted and met their requirements.
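
As a minimal illustration of the column standardisation in step 2, the sketch below (Python) parses a date and a currency value into standard formats and moves explanatory text into a separate comment column. The column names and formats are hypothetical; they are not the published column list.

```python
# A minimal sketch of standardising spreadsheet columns: parsing dates and currency
# values into one standard format and moving free text into a comment column.
# The column names and formats are hypothetical, not the published column list.

from datetime import datetime

raw_row = {"project_id": "DEP-001", "start_date": "1 April 2021", "value": "£1.2m (estimate)"}

def standardise_date(value: str) -> str:
    return datetime.strptime(value.strip(), "%d %B %Y").strftime("%Y-%m-%d")

def standardise_currency(value: str):
    """Return (amount in GBP, leftover text for the comment column)."""
    amount_text, _, comment = value.partition("(")
    amount_text = amount_text.replace("£", "").strip().lower()
    multiplier = 1_000_000 if amount_text.endswith("m") else 1
    amount = float(amount_text.rstrip("mk")) * multiplier
    return amount, comment.rstrip(")").strip()

amount, comment = standardise_currency(raw_row["value"])
clean_row = {
    "project_id": raw_row["project_id"],
    "start_date": standardise_date(raw_row["start_date"]),
    "value_gbp": amount,
    "comment": comment,
}
print(clean_row)  # {'project_id': 'DEP-001', 'start_date': '2021-04-01', 'value_gbp': 1200000.0, 'comment': 'estimate'}
```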

This should improve the end-to-end pipeline process, help to generate high value reporting, increase potential savings and improve efficiency.