Official Statistics

Quality report: child and working tax credits error and fraud statistics

Updated 29 July 2021

1. Contact

  • Organisation unit - Knowledge, Analysis and Intelligence (KAI)
  • Name - J Bradley
  • Function - Statistician, Benefits and Credits
  • Mail address - 7th Floor Imperial Court, 2 - 24 Exchange Street East, Liverpool, L2 3PQ
  • Email - benefitsandcredits.analysis@hmrc.gov.uk

2. Statistical presentation

2.1 Data description

This report presents results from the tax credits Error and Fraud Analytical Programme (EFAP), which is designed to measure error and fraud (E&F) in finalised awards across the tax credits population.

2.2 Classification system

Estimates of the number of awards and value of error and fraud in the claimant’s favour and error in HMRC’s favour are presented as totals and as a proportion of overall tax credits entitlement.

The estimate is based on a sample of 4,000 tax credits awards, aggregated by National Insurance Number and application ID.

2.3 Sector coverage

Child Tax Credit (CTC) and Working Tax Credit (WTC) were introduced in April 2003. They are flexible systems of financial support designed to deliver support as and when a family needs it, tailored to their specific circumstances. They are part of wider government policy to provide support to parents returning to work, reduce child poverty and increase financial support for all families.

The introduction of Universal Credit has meant that since 1 February 2019, new claims to tax credits are no longer accepted, except in a limited number of specific circumstances.

2.4 Statistical concepts and definitions

Error and fraud

Error and fraud favouring the claimant refers to cases where the claimant has been found to be non-compliant in a way that has led HMRC to pay them more tax credits than they were entitled to for the year; in other words, there was a monetary gain for the claimant and a monetary loss for HMRC.

Error favouring HMRC refers to cases where the claimant has been found to be non-compliant in a way that has led HMRC to pay them less in tax credits than they were entitled to for the year; in other words, there was a monetary gain for HMRC and a monetary loss for the claimant.

When Claimant Compliance Officers identify non-compliance, they are required to indicate whether they believe it was due to genuine error or fraud. To be classified as fraud, a caseworker needs to have found evidence that the claimant deliberately set out to misrepresent their circumstances to get money to which they were not entitled (for example, claiming for a child that does not exist).

Error covers instances where there is no evidence of the claimant deliberately trying to deceive HMRC. It covers a range of situations, including cases where a claimant inadvertently over-claims because they simply provided HMRC with the wrong information. It could also cover a situation where the correct information has been provided but this information has been incorrectly processed by HMRC.

Financial year

The statistics are aggregated into financial years (also known as tax years). A financial year stretches from 6 April until 5 April the following calendar year.

Number of instances

The number of awards in the tax credits population that are estimated to contain error and fraud.

Amount of error and fraud

The estimated monetary value of error and fraud in the tax credits population.

Error and fraud rate

The value of error and fraud as a proportion of total tax credits entitlement.
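As a simple illustration of this definition, the sketch below computes the rate from an error and fraud value and a total entitlement figure; the numbers used are hypothetical and are not published estimates.

```python
# Illustrative sketch only: the figures below are hypothetical, not published estimates.
def error_and_fraud_rate(ef_value: float, total_entitlement: float) -> float:
    """Error and fraud value as a proportion of total tax credits entitlement."""
    return ef_value / total_entitlement

# Example with made-up values: £1.0 billion of error and fraud against £19.0 billion entitlement.
rate = error_and_fraud_rate(1.0e9, 19.0e9)
print(f"Error and fraud rate: {rate:.1%}")  # -> 5.3%
```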

Risk group

Error and fraud can enter the system due to a range of circumstances being incorrectly reported. At a high level there are 7 key risk categories. These are:

  • Income – inaccurately reporting income
  • Undeclared Partner – making a single claim instead of a joint claim
  • Childcare Costs – incorrectly reporting childcare costs
  • Children – incorrectly including or excluding children or young persons on a claim
  • Work and Hours – overstating/understating hours worked
  • Disability – incorrectly reporting disability status
  • Other – risks that cannot be assigned to one of the other high level categories. This category includes residency and situations where a partner has been declared but is not present

2.5 Statistical unit

The unit in the statistics is individual tax credits awards. It is important to note that our sample base is awards and not families; the two differ because a family can have a number of awards during a year. Take the following example: a lone parent family is initially in award; a new household is then formed when a partner moves in; later in the year the partner moves out (the household breaks down) and they become a lone parent again. In total, the family has had three separate awards during the year. We follow awards because this is the unit that the tax credits system is based around, and it is therefore the most suitable unit from which to construct a representative sample.

2.6 Statistical population

The statistics cover all live tax credits awards. The sample base contains all positive awards present on the HMRC tax credit system at the end of the first week of August of the reporting year (August 2019 for tax year 2019 to 2020 statistics). An award may last for a period of anywhere between one day and the whole year.

2.7 Reference area

The geographic region covered by the data is the United Kingdom (UK).

2.8 Time coverage

The statistics cover the time period from tax year 2006 to 2007 until the latest tax year for which finalised entitlement is available.

3. Statistical processing

3.1 Source data

Tax credits population data

The EFAP sample and total tax credits entitlement are taken from a scan of all positive awards on the HMRC tax credits system (NTC).

Error and fraud data

The exercise takes a stratified random sample of 4,000 cases, selected to be representative of the tax credits population. These cases are taken up for examination by claimant compliance officers, who work the cases as they would any other enquiry. The value of error and fraud and the corresponding risk group are recorded on a bespoke database for the purpose of producing the estimates.
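As a rough illustration of stratified selection (not the actual EFAP selection procedure, which is not published in code form), the sketch below draws a fixed number of cases from each stratum of a hypothetical population; the strata, counts, and column names are assumptions.

```python
import pandas as pd

# Hypothetical population of awards, each labelled with a sampling stratum.
population = pd.DataFrame({
    "award_id": range(1, 100001),
    "stratum": ["low_value"] * 60000 + ["medium_value"] * 30000 + ["high_value"] * 10000,
})

# Hypothetical allocation of the 4,000 sample cases across the strata.
allocation = {"low_value": 1500, "medium_value": 1500, "high_value": 1000}

# Draw a simple random sample of the required size within each stratum.
sample = (
    population.groupby("stratum", group_keys=False)
    .apply(lambda g: g.sample(n=allocation[g.name], random_state=1))
)
print(sample["stratum"].value_counts())  # 1,500 / 1,500 / 1,000 cases
```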

3.2 Frequency of data collection

The EFAP exercise takes place annually between September and May.

The tax credits population scan is supplied on a monthly basis.

3.3 Data collection

Tax credits population data is taken from the HMRC tax credits administrative data system (NTC).

Error and fraud data from sample cases is taken from a bespoke Microsoft Access database.

3.4 Data validation

The underlying data are recorded by the compliance officers who carried out the enquiries. The data then undergo a number of checking and processing steps before they are used to calculate the figures in this publication. Compliance officer decisions are checked at the case closure stage by reviewing all supporting evidence used to make the decision, both that supplied by the caseworker and that contained in HMRC systems. All calculations are also checked for financial accuracy at the case closure stage.

The final data are created by cross-checking the information held in our compliance management information system against that held in the main tax credits computer system and against information recorded about the case by the compliance officer who worked it. Where there is a discrepancy between the systems, the data are corrected to ensure they are accurate before the analysis is completed.

Each award has a number of entitlement sub-periods, and some of these sub-periods cannot be associated with certain types of error or fraud. For example, if 25 per cent of an award’s time is spent in a WTC-only sub-period and 75 per cent in sub-periods relating to CTC, then a claimant favour error or fraud relating to a child could only have occurred in the latter 75 per cent of the award. We therefore allocate the error to the sub-periods with which it could be associated, so in this example the child error would be allocated to the 75 per cent of the award spent in sub-periods relating to CTC. Error favouring HMRC is reallocated between sub-periods based on the proportion of the award spent in each sub-period.
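The allocation described above can be illustrated with a short sketch. The sub-period structure, field names, and values below are hypothetical; the sketch simply shows a child-related claimant favour error being assigned only to sub-periods that include CTC, and an HMRC favour error being spread across all sub-periods in proportion to the time spent in each.

```python
# Hypothetical sub-periods for one award: 25% of the year WTC-only, 75% including CTC.
sub_periods = [
    {"id": 1, "share_of_year": 0.25, "includes_ctc": False},  # WTC-only sub-period
    {"id": 2, "share_of_year": 0.75, "includes_ctc": True},   # sub-period relating to CTC
]

def allocate_claimant_favour_child_error(error_value, sub_periods):
    """Assign a child-related error only to sub-periods where CTC was in payment."""
    eligible = [sp for sp in sub_periods if sp["includes_ctc"]]
    total_share = sum(sp["share_of_year"] for sp in eligible)
    return {sp["id"]: error_value * sp["share_of_year"] / total_share for sp in eligible}

def allocate_hmrc_favour_error(error_value, sub_periods):
    """Spread an HMRC favour error across all sub-periods by time spent in each."""
    return {sp["id"]: error_value * sp["share_of_year"] for sp in sub_periods}

print(allocate_claimant_favour_child_error(1000.0, sub_periods))  # {2: 1000.0}
print(allocate_hmrc_favour_error(400.0, sub_periods))             # {1: 100.0, 2: 300.0}
```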

3.5 Data compilation

Non-response

Approximately 25 per cent of claimants in the sample used to compile this estimate do not respond to HMRC’s investigations. The issue of non-response is monitored in several ways, including ensuring that compliance officers are in a position to make a valid decision without a response, completing extensive quality checks of compliance officers’ decisions, and monitoring the outcome of non-response cases against those where claimants do respond.

Follow-up analysis has shown that non-response cases are no more or less likely to contain error and fraud favouring the claimant than cases where the claimant does respond. Consequently, we are satisfied that compliance officers are able to make a valid decision on non-response cases by using information held by HMRC. No adjustment is made to the estimate of error and fraud favouring the claimant to account for non-response.

Error favouring HMRC is more likely to be identified in cases where the claimant does respond. It is not possible to determine whether the non-response cases do in fact contain higher levels of error and fraud than we have identified but we hold no evidence to suggest that they do. No adjustment is made to the estimate of error favouring HMRC to account for non-response.

Not taken up cases

In each EFAP exercise, around 100 to 200 cases are not taken up for enquiry for reasons including death or other exceptional circumstances. These cases are excluded from the results, implicitly assuming that if they had been worked they would have the same incidence of error and fraud as the cases that have been successfully completed.

Cases are also not taken up if they fall under the special customer records policy. These cases are deemed to require additional protection, and as a result neither EFAP caseworkers nor analysts have the required permissions to access the customer information. These cases are therefore removed from the sample. Types of special customer records include: members of the Royal Household, members of UK legislative bodies including the devolved legislatures, VIPs and those in high-risk employment, victims of domestic violence, and other high-risk individuals.

Open cases

Each year, there are around 200 cases which have been opened but not completed when the first estimate is made. A projection is made to cover the estimated additional error and fraud these cases would provide. When these cases have been closed, the projection is replaced with actual values for the finalised estimate.

It is assumed in this analysis that these incomplete cases exhibit, on average, the same characteristics as those that were settled most recently, and that the cases left to work to the end will on average exhibit this level of non-compliance. Where there is only a small number of recently settled sample cases, the average level over a longer time period is used.
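A minimal sketch of this kind of projection is shown below, assuming hypothetical per-case results and an illustrative threshold for when the recently settled average is considered reliable; the actual EFAP calculation is not published in code form.

```python
# Hypothetical error and fraud values found per closed case (most values are zero).
recently_settled = [0.0, 0.0, 850.0, 0.0, 1200.0, 0.0, 0.0, 430.0]   # latest tranche closed
all_settled = recently_settled + [0.0] * 40 + [600.0, 0.0, 0.0, 975.0]  # longer period

open_cases = 200          # cases still being worked at first publication
MIN_RECENT_CASES = 30     # hypothetical threshold for relying on the recent average

# Use the average of recently settled cases if there are enough of them,
# otherwise fall back to the average over a longer time period.
if len(recently_settled) >= MIN_RECENT_CASES:
    average_per_case = sum(recently_settled) / len(recently_settled)
else:
    average_per_case = sum(all_settled) / len(all_settled)

projected_extra_ef = average_per_case * open_cases
print(f"Projected error and fraud from open cases: £{projected_extra_ef:,.0f}")
```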

Projections for mandatory reconsiderations

Claimants who have been found to be in error and fraud are able to appeal the decision within 30 days of receiving the award notice, unless there are exceptional circumstances. These appeals are known as Mandatory Reconsiderations (MRs) and can change the estimated levels of error and fraud by removing amounts of error and fraud from closed cases.

Any MRs that are known before the results are estimated are incorporated into the analysis. To ensure the estimate in this publication is central, a projection is made to take into account MRs that are likely to be received after the publication of the results. When the value of all MRs is known, this data is included in the final estimate.

Grossing

The sample results of the cases that have been worked to completion, plus the projected results from the cases still being worked, are grossed up to produce population estimates. Grossing factors are applied depending on the value of the finalised award and the characteristics of the claimant during the year.

Sample results are grossed to the total of entitlement sub-periods for the population over the whole year rather than to the single entitlement sub-period present at the end of the year.

The sub-periods are grossed up to the position of the award in each of the sample strata, which gives increased accuracy across groups with potentially differing rates of error and fraud.
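The grossing step can be sketched as a weighted sum: each stratum’s sample results are scaled by a grossing factor equal to the number of entitlement sub-periods in the population for that stratum divided by the number sampled. The strata, counts, and values below are hypothetical.

```python
# Hypothetical counts of entitlement sub-periods by stratum.
population_subperiods = {"stratum_a": 500000, "stratum_b": 250000, "stratum_c": 50000}
sample_subperiods = {"stratum_a": 2000, "stratum_b": 1500, "stratum_c": 500}

# Hypothetical error and fraud found in the sampled sub-periods, by stratum.
sample_ef_value = {"stratum_a": 150000.0, "stratum_b": 90000.0, "stratum_c": 60000.0}

grossed_total = 0.0
for stratum, pop_count in population_subperiods.items():
    grossing_factor = pop_count / sample_subperiods[stratum]
    grossed_total += sample_ef_value[stratum] * grossing_factor

print(f"Grossed error and fraud estimate: £{grossed_total:,.0f}")
```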

Aggregating data

Data are aggregated using customer National Insurance Number and application number.
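A minimal illustration of this aggregation step, assuming record-level data held in a pandas DataFrame with hypothetical column names:

```python
import pandas as pd

# Hypothetical record-level data: column names and values are illustrative only.
records = pd.DataFrame({
    "nino": ["AB123456C", "AB123456C", "CD789012E"],
    "application_id": [1, 1, 2],
    "ef_value": [250.0, 100.0, 0.0],
})

# Aggregate to one row per award, keyed on National Insurance Number and application number.
awards = records.groupby(["nino", "application_id"], as_index=False)["ef_value"].sum()
print(awards)
```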

4. Quality Management

4.1 Quality assurance

All official statistics produced by KAI must meet the standards in the Code of Practice for Statistics produced by the UK Statistics Authority, and all analysts adhere to best practice as set out in the ‘Quality’ pillar.

Analytical Quality Assurance describes the arrangements and procedures put in place to ensure analytical outputs are error free and fit for purpose. It is an essential part of KAI’s way of working, as the complexity of our work and the speed at which we are asked to provide advice mean there is a high risk of error, which can have serious consequences for KAI’s and HMRC’s reputation and decisions, and in turn for people’s lives.

Every piece of analysis is unique, and as a result there is no single quality assurance (QA) checklist that contains all the QA tasks needed for every project. Nonetheless, analysts in KAI use a checklist that summarises the key QA tasks, and is used as a starting point for teams when they are considering what QA actions to undertake.

Teams amend and adapt it as they see fit, to take account of the level of risk associated with their analysis, and the different QA tasks that are relevant to the work.

At the start of a project, during the planning stage, analysts and managers make a risk-based decision on what level of QA is required.

Analysts and managers construct a plan for all the QA tasks that will need to be completed, along with documentation on how each of those tasks are to be carried out, and turn this list into a QA checklist specific to the project.

Analysts carry out the QA tasks, update the checklist, and pass it to the Senior Responsible Officer for review and eventual sign-off.

4.2 Quality assessment

The QA for this project adhered to the framework described in section 4.1 and the specific procedures undertaken were as follows:

Stage 1 - Specifying the question

Up-to-date documentation was agreed with stakeholders, setting out the outputs needed and by when, how the outputs would be used, and all the parameters required for the analysis.

Stage 2 - Developing the methodology

Methodology was agreed and developed in collaboration with stakeholders and others with relevant expertise, ensuring it was fit for purpose and would deliver the required outputs.

Stage 3 - Building and populating a model/piece of code
  • Analysis was produced using the most appropriate software and in line with good practice guidance.
  • Data inputs were checked to ensure they were fit-for-purpose by reviewing available documentation and, where possible, through direct contact with data suppliers.
  • QA of the input data was carried out.
  • The analysis was audited by someone other than the lead analyst - checking code and methodology.
Stage 4 - Running and testing the model/code
  • Results were compared with those produced in previous years and differences understood and determined to be genuine.
  • Results were compared with comparable independent estimates, and differences understood. For example, total tax credits entitlement was checked for consistency with the latest HMRC National Statistics.
  • Results were determined to be explainable and in line with expectations.
Stage 5 - Drafting the final output
  • Checks were completed to ensure internal consistency (e.g. totals equal the sum of the components).
  • The final outputs were independently proofread and checked.

5. Relevance

5.1 User needs

This analysis is likely to be of interest to users under the following broad headings:

  • national government - policy makers and MPs
  • academia and research bodies
  • media
  • general public

5.2 User satisfaction

Formal investigations into user satisfaction have not been undertaken since a formal review of our National and Official Statistics publications was held between May and August 2011. KAI undertake continuous review of outputs with internal stakeholders and are open to feedback to meet changing user requirements.

5.3 Completeness

The EFAP sample is designed to be representative of the full tax credits population, and is selected from a scan of all awards with positive entitlement on the HMRC tax credits system.

6. Accuracy and reliability

6.1 Overall accuracy

This analysis is based on administrative and sample data. Accuracy is addressed by eliminating errors as much as possible through adherence to the quality assurance framework.

The potential sources of error include:

  • Human or software error when entering the customer data into the NTC system or EFAP database.
  • Mistakes in the programming code used to analyse the data and produce the statistics.
  • Human error in the modelling process, for example copy and paste errors.

6.2 Sampling error

Estimates are rounded to the nearest £10 million or 10,000 in the headline tables and for the overall totals in the other tables. Lower level breakdowns are rounded to the nearest £5 million or 5,000, and error and fraud rates are rounded to the nearest 0.1%.

The estimates presented are the central estimates derived from the sample and estimation methodology. Since these estimates are based on a sample, they are subject to sampling error. This uncertainty is expressed as a 95 per cent confidence interval around each estimate, and these intervals are shown in the headline tables.

Confidence intervals are calculated using the variance of the values in the closed case data. The uncertainty around the open case projections is assumed to be the same as the closed cases.
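As a simplified illustration, the sketch below calculates a 95 per cent confidence interval for a mean per-case value from the variance of hypothetical closed-case data; it ignores the stratification and grossing applied in the real calculation.

```python
import math

# Hypothetical per-case error and fraud values from closed sample cases.
closed_case_values = [0.0] * 3600 + [500.0] * 250 + [1500.0] * 120 + [4000.0] * 30

n = len(closed_case_values)
mean = sum(closed_case_values) / n
variance = sum((x - mean) ** 2 for x in closed_case_values) / (n - 1)
standard_error = math.sqrt(variance / n)

# 95 per cent confidence interval for the mean error and fraud per case.
z = 1.96
lower, upper = mean - z * standard_error, mean + z * standard_error
print(f"Mean per case: £{mean:.0f} (95% CI: £{lower:.0f} to £{upper:.0f})")
```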

6.3 Non-sampling error

Coverage error

From the tax year 2018 to 2019 EFAP onwards, awards with nil entitlement were removed from the sample, as these have been found to contain negligible amounts of error and fraud. This has increased the number of cases selected in the other strata, improving the precision of the outputs. As nil awards have no positive entitlement, they cannot contain claimant favour error and fraud.

Measurement error

The main source of measurement error is incorrect entry of error and fraud amounts into the EFAP database. This is mitigated by the data validation approach described in section 3.4.

Non-response error

The non-response approach for the EFAP exercise is described in section 3.5.

Processing error

It is possible that errors can exist in the programming code used to analyse the data and produce the statistics. This risk is mitigated through developing a good understanding of the tax credits system, and thoroughly reviewing and testing the programs that are used in line with the KAI QA framework.

6.4 Data revision

Data revision - policy

The United Kingdom Statistics Authority (UKSA) Code of Practice for Official Statistics requires all producers of Official Statistics to publish transparent guidance on the policy for revisions.

Data revision - practice

This analysis is published annually and includes an estimate of error and fraud for the latest financial year. As detailed in section 3.5, a projection is made for the value of error and fraud in cases not closed at the time of publication. Any revisions to the estimate necessary after these cases are closed will be published alongside the following year’s statistics.

6.5 Seasonal adjustment

Seasonal adjustment is not applicable for this analysis.

7. Timeliness and punctuality

7.1 Timeliness

There is a lag of around 15 months before error and fraud estimates for a given tax year can be produced. A claimant’s entitlement can change throughout the year, which could lead to over or underpayments depending on when the claimant tells HMRC about the change, either in year or at finalisation. Error and fraud can therefore only be found after the claim has been finalised with the actual circumstances of the tax year, which means that compliance officers are unable to start work on some cases until after 31 January of the following year.

7.2 Punctuality

In accordance with the Code of Practice for Statistics, the exact date of publication will be given not less than one calendar month before publication on both the Schedule of updates for HMRC’s statistics and the Research and statistics calendar on GOV.UK.

Any delays to the publication date will be announced on the HMRC National Statistics website.

8. Coherence and comparability

8.1 Geographical comparability

This analysis is presented for a single region - the United Kingdom.

8.2 Comparability over time

The main commentary contains data for the reference tax year and comparisons to the previous tax year. The supplementary data tables contain a time series of error and fraud rates from tax year 2006 to 2007.

The EFAP sampling and methodology have remained consistent to allow comparison to previous years. Any changes for specific years are clearly referenced in the commentary.

8.3 Coherence - cross domain

There are no coherence issues between statistical domains or data sources.

8.4 Coherence - internal

Rounding of numbers may cause some minor internal coherence issues as the figures within a table may not sum to the total displayed. For the risk group breakdowns, some claimants will have more than one risk identified in their claim so the numbers will not sum to the total number of awards presented in the other tables. This is clearly indicated in the commentary.

9. Accessibility and clarity

9.1 News release

There haven’t been any press releases linked to this data over the past year.

9.2 Publication

The tables and associated commentary are published on the Personal tax credits statistics webpage of GOV.UK.

Tables are published in the OpenDocument format, and the associated commentary in HTML format.

Both documents comply with the accessibility regulations set out in the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018.

Further information can be found in HMRC’s accessible documents policy.

9.3 Online databases

This analysis is not used in any online databases.

9.4 Micro-data access

Access to this data is not possible in micro-data form, due to HMRC’s responsibilities around maintaining confidentiality of taxpayer information.

9.5 Other

There aren’t any other dissemination formats available for this analysis.

9.6 Documentation on methodology

A methodological annex is published alongside the statistical commentary.

9.7 Quality documentation

All official statistics produced by KAI must meet the standards in the Code of Practice for Statistics produced by the UK Statistics Authority, and all analysts adhere to best practice as set out in the ‘Quality’ pillar.

Information about quality procedures for this analysis can be found in section 4 of this document.

10. Cost and burden

The EFAP exercise is run annually, and requires around 120 full-time equivalent (FTE) operational staff to work the sample cases between September and May.

EFAP enquiries are made under Section 19 of the Tax Credits Act 2002 (the “power to enquire”), which requires the person, or either or both of the persons, to provide any information or evidence which HMRC consider they may need for the purposes of the enquiry. To reduce the burden on the customer, an opening letter is issued detailing the specific evidence being requested for the enquiry.

11. Confidentiality

11.1 Confidentiality - policy

HMRC has a legal duty to maintain the confidentiality of taxpayer information.

Section 18(1) of the Commissioners for Revenue and Customs Act 2005 (CRCA) sets out our duty of confidentiality.

This analysis complies with this requirement.

11.2 Confidentiality - data treatment

The statistics in these tables are presented at an aggregate weighted level so identification of individuals is not possible.

Further information on anonymisation and data confidentiality best practice can be found on the Government Statistical Service’s website.