Economic Crime Survey 2024: Technical report
Updated 5 November 2025
1. Introduction
The Home Office commissioned Ipsos, supported by Professor Michael Levi from Cardiff University, to design and undertake a new study of the prevalence and impact of economic crime among UK businesses with one or more employees (referred to simply as ‘businesses’ across this report). The study focused on 3 types of economic crime – namely fraud, corruption, and money laundering – as well as covering breaches of financial sanctions.
This is a new study with its own methodology. It follows a similarly named 2020 Home Office survey of UK businesses that also looked at the prevalence and impact of fraud and corruption, but deploys a new questionnaire and qualitative materials based on best practice from other UK and non-UK surveys where available, and developed through consultation with experts in academia, industry and the UK government. The earlier study covered 7 industry sectors, whereas this latest study covers the entire private sector. It also predominantly focused on economic crimes experienced over a 3-year period, rather than in the past 12 months (as this latest study has done). Therefore, the 2 studies are not directly comparable.
This report provides the technical details for all strands of the research, and copies of the main survey instruments to help interpret the findings are also available (Research materials). The Home Office has published a separate report of the main findings from the research.
1.1 Research objectives
This study aimed to:
- estimate, across businesses with employees, the prevalence and incidence of fraud, bribery and money laundering
- measure the economic cost of fraud to businesses
- understand the impact of fraud, bribery and corruption, and money laundering incidents
- quantify and understand business responses to fraud, bribery and corruption, and money laundering incidents
- quantify and understand awareness of the current UK financial sanctions regime
- quantify and understand the processes that businesses have in place to minimise or deal with fraud, bribery and corruption, and money laundering, as well as the risk of breaching financial sanctions
1.2 Summary of methodology
There were 2 major strands to the research:
Quantitative survey: Ipsos designed and conducted a telephone and online survey with 3,477 UK businesses with one or more employees, with question modules on fraud, corruption, money laundering and financial sanctions. Fieldwork, incorporating a pilot phase, took place between 22 February and 30 August 2024, with findings reflecting the 12 months prior to this period. The data have been weighted by size and sector to be representative of the UK business population.
Qualitative interviews: Ipsos carried out 38 in-depth interviews recruited from businesses that had taken part in the quantitative survey, focusing primarily on those that had encountered bribery (a form of corruption) or money laundering incidents in the survey questions. These took place from 12 July to 27 September 2024. Conducting qualitative interviews allowed us to explore these complex and often nuanced incidents in more detail, including their impacts and the business response, which were more difficult to explore in the survey. The interviews also explored businesses’ understanding of, and response to, financial sanctions.
Businesses without employees, charities (which were not also registered as businesses) and public sector organisations were outside the scope of this study.
1.3 Strengths and limitations of the study
1.3.1 Overall strengths
This study was intended to be the most comprehensive examination of economic crime against businesses in the UK to date. As such, several measures were taken to ensure a high level of rigour in the results. The various strengths of the study included:
An extensive development phase
This involved a rapid evidence review, workshops with academic, government and industry stakeholders, cognitive testing and piloting to help develop the definitions and questions asked in the survey. This helped to ensure that, to the extent that it was feasible to do so, the questions only picked up incidents that constituted crimes in UK law (avoiding false positives), provided appropriate reassurances so that respondents would not be put off mentioning incidents (avoiding false negatives), and included clear and jargon-free wording so there would be a consistent response across businesses.
A survey that focused on measurable incidents, rather than solely on perceptions or subjective assessments of economic crime
This reflected the findings of the rapid evidence review, which highlighted that much of the previous research in this area was based on broad perceptions of crime or indirect measurement, and was therefore more subjective. We also opted not to use the terms “corruption” or “money laundering” in the initial questions in the survey, given that these terms tend to be subjective or poorly understood. The survey instead focuses on measuring specific incidents that would count as corruption (in this case opting to focus on bribery – see below) or money laundering. The same approach is taken for fraud, in terms of describing specific incidents, but the survey uses the term fraud more freely, as we expected this term to be better understood.
A primary focus on bribery over more subjective forms of corruption in the survey, in order to improve accuracy
Corruption is an abuse of entrusted power for private benefit that usually breaches laws, regulations, or standards of integrity or professional behaviour. Bribery is a type of corruption, but not the only type – other types of corruption may include influence peddling or embezzlement. During the questionnaire development, the Home Office and Ipsos agreed that other forms of corruption beyond bribery would be too challenging to accurately measure in the survey, due to low awareness and understanding of corruption, and a lack of accepted definitions of other forms of corruption. Instead, the survey focuses on measuring bribery as accurately as possible, such as by following the best practice guidelines set out in the United Nations Office on Drugs and Crime’s (UNODC’s) Manual on Corruption Surveys.
This was done by not using the word “bribe” in the initial questions (which would have also been poorly understood, or have invoked social desirability bias), but instead asking about “a gift, favour, or extra money other than the official fee, in order to secure a business transaction, or to get you/others to perform a service”. The latter parts of the survey and the qualitative research still allowed us to explore other forms of corruption, in terms of how well these were understood, and the experience and impacts of these forms of corruption.
Alignment with the Home Office Counting Rules for fraud
The fraud section of this survey was designed to align with the Home Office Counting Rules, which state that recorded crimes must have a specific intended victim. Within this section of the questionnaire, question routing was used to filter out nuisance fake invoices or investment scams that did not constitute fraud. We specifically only included cases where the respondent business or its staff were mentioned by name, and/or where they engaged with the fake invoice or scam investment opportunity. This can be seen in the questions FRAUD_INVOICENAME, FRAUD_INVOICEREPLY, FRAUD_INVESTNAME and FRAUD_INVESTREPLY.
The use of random probability sampling, interviewing and weighting
This is an established method to avoid selection bias in the results, where those who have had more serious incidents may be less likely to take part in the survey. See Section 2.3.3 for details.
A representative sample, including all business sizes for businesses with one or more employees, and covering all economic sectors
This ensured that the findings were not disproportionately skewed towards larger organisations, which could lead to overstated findings around typical costs (of fraud). It also meant that the findings provide an economy-wide picture, rather than a snapshot of the sectors deemed to be more at risk from economic crime. This addressed the limitations of previous studies included in the literature review, which often had selective samples. However, for the rationale noted below, zero-employee businesses were excluded from the sample.
A comprehensive attempt to obtain accurate and full cost and spending data (for fraud) from respondents, based on best practice from other surveys
Generally, it is challenging to survey businesses about their costs and spending. The individual respondent from the business may not have complete information during the interview, either because the business does not systematically collect information on costs, or because this knowledge is dispersed across the business (for example, an individual in a finance role may know more about spending on insurance against frauds, while someone in a senior operational role may have a better grasp of the costs incurred in the aftermath of a fraud). In addition, the respondent may have problems recalling the incident in question if it occurred far in the past, or if they were not in post when the incident occurred.
This study followed the good practice set by other UK business surveys (such as the Cyber Security Breaches Survey, co-funded by the Home Office) to minimise the impact of these challenges. It gave respondents flexibility in how they could answer (for example, allowing numeric and banded amounts). It broke down costs into constituent parts (including short-term and medium-term direct costs, staff costs, and other indirect costs resulting from fraud), and took a similar approach for spending (broken down into spending on training, digital software, risk assessments, insurance policies, and monitoring and investigation), rather than asking for one overall cost or spending figure. There were also checks on outliers in the data, both during and after fieldwork (see Section 2.4.5).
It is worth noting that there were relatively few “don’t know” and “prefer not to say” responses at these fraud cost and spending questions. For example, across FRAUD_DIRECT and FRAUD_DIRECT_DK (questions asking about the direct cost of any fraud incidents experienced in the last 12 months), there were 5 “don’t know” responses and one “prefer not to say” response, out of a total of 533 responses.
1.3.2 Overall limitations
The topic of economic crime is inherently challenging to research. There is a lack of business awareness and conceptual knowledge about these types of crimes, and a potential unwillingness to divulge information on the topic. There were also more practical limitations, based on the kinds of questions that businesses could reasonably be expected to answer with accuracy, and the volume of data that it was possible to collect. The main limitations of the research were as follows:
Exclusion of zero-employee businesses
The Home Office and Ipsos agreed to exclude these from the study for practical reasons. These businesses also face economic crime and make up around 74% of all UK business units.[footnote 1] However, their inclusion in the research would have crowded out the sample allocated to businesses with one or more employees, making it unfeasible to report any findings specifically for small, medium and large businesses. In addition, by their nature, these businesses tend to make a much smaller contribution to economic output, so were deemed to be less of a priority for inclusion relative to the other size bands.
A potential lack of willingness to disclose experiences and social desirability bias
The study collected data on very sensitive crimes. Businesses and individuals may not have wanted to admit to having been victims, witnesses or collaborators in these types of crimes, due to a perceived lack of confidentiality or a desire to give a socially desirable response. The study followed good practice to minimise the impact of this. There were clear reassurances around confidentiality and anonymity at the beginning of all quantitative and qualitative interviews. The quantitative questionnaire was also thoroughly tested and piloted, with cognitive testing and pilot respondents having the opportunity to flag overly sensitive questions that they were unwilling to answer. As noted above, the use of the “prefer not to say” codes was negligible, across the multiple questions where this was an option. For example, at a particularly sensitive set of questions asking whether businesses had to give (PRIVATE_HAD), or were asked to give (PRIVATE_ASKED) a bribe to another UK business or individual, there were only 2 “prefer not to say” responses (out of the entire sample of 3,477).
Only covering incidents that businesses were able to identify
It may be that there were a wider range of incidents than the ones businesses were able to identify in the survey. This could result from a mixture of a lack of awareness of the types of crimes being discussed, a lack of monitoring and logging of incidents, and staff turnover (if staff who recalled a particular incident had left the business). This means that the results showing the number of fraud, bribery and money laundering incidents experienced, the number of businesses affected, and the cost of fraud incidents, are likely to be underestimates. The survey counteracted this through very specific wording of the incidents in scope, the recruitment of senior individuals in the business to answer the survey questions, and restricting the timeframe of incidents captured to the 12 months prior to the survey in order to aid recall.
Small sample sizes for bribery and money laundering incidents place limitations on analysis
A total of 3% of businesses experienced any bribery incidents in the 12 months prior to the survey and 2% experienced any money laundering incidents in this period, making these cases very rare across the business population. The study approach mitigated this by aiming for a very high sample size compared to other UK business surveys, and by disproportionately stratifying the sample to increase responses among larger businesses, as well as the sectors where corruption and money laundering were thought to be bigger issues (according to the 2020 Home Office study, the rapid evidence review, and the views of experts in the survey development workshops – see Section 2.2.3).
However, the achieved sample sizes (96 businesses that either were offered a bribe from another UK business or individual, or had to or were asked to give a bribe to another UK business, and 100 businesses experiencing money laundering incidents) were still very low relative to the overall sample size, with associated higher margins of error. This meant it was not possible to break down the follow-up questions associated with these incidents by subgroup (for example, size, sector or region). Moreover, there were too few sampled cases of domestic public sector bribery incidents (one) and overseas bribery incidents (6) to report any of the follow-up questions associated with these incidents. A further mitigation was to focus solely on corruption, money laundering and financial sanctions in the qualitative interviews, to deepen our knowledge of this small number of cases.
Economic costs were only captured for fraud
During the survey development, the Home Office and Ipsos agreed that it would not be feasible to quantify the economic costs of corruption or money laundering. This was both because of the expected low volume of cases in the sample, and because expert feedback in the survey development workshops suggested that businesses could not be reasonably expected to answer these questions accurately. This was considered to be a unique challenge for corruption and money laundering, given that fraud is more widespread and better understood in the business population than these other 2 crime types.
Potential misunderstanding of money laundering emerging during fieldwork
The qualitative recruitment started while quantitative data collection was ongoing. During the recruitment, we found that some businesses had misidentified money laundering incidents in the survey. There were also systematic issues uncovered with 2 codes at the ML_TYPES question, and the Home Office and Ipsos agreed to make further changes to the questionnaire on 17 July 2024 to minimise any possible misunderstanding from respondents (see Section 2.1.6). The Home Office also reviewed and revalidated all cases of “other” money laundering (the descriptive answers provided at ML_OTHER) both before and after the questionnaire changes, to judge whether these should be included or not. Any misunderstandings that were identified from the survey “other – specify” responses, and from the qualitative recruitment, have been edited out of the final survey data for money laundering.
Ultimately, 18 cases where respondents initially identified money laundering (out of the original 118) were edited to not count as money laundering in the final data, after Home Office review and revalidation. This meant that 100 cases were included in the final data. Within these 100, there were 16 cases where respondents identified money laundering in the survey before the questionnaire changes were made on 17 July, and which related to the 2 codes that may have been misunderstood (and not to any other types of money laundering captured in the survey). Had it been possible to follow up on these 16 cases, we may have found that a proportion of these were misclassified, potentially resulting in an overcount of money laundering instances.
On the other hand, the potential issues with respondents’ lack of willingness to disclose involvement and social desirability bias described above could have equally led to an underestimation of money laundering in the survey. Therefore, the survey estimates are not necessarily underestimates or overestimates, and the work undertaken to revalidate the vast majority of the data should provide confidence in the reliability of the findings.
Focusing on senior staff rather than junior staff
In this study, businesses were represented by a senior individual with an overview of major finance, legal or compliance matters affecting the business. In smaller businesses, this was typically the business owner or a director. This approach ensured that we captured a business-wide picture. However, the rapid evidence review highlighted that economic crime often has more specific and nuanced impacts (for example, emotional impact) on individual victims, who might be less senior staff within the business. In addition, some incidents may simply not be reported to senior staff. This study may therefore miss some of these ground-level incidents and impacts.
1.3.3 Specific limitations of the fraud cost and spending data
The survey collected various data on the costs and spending associated with fraud against UK businesses with employees. This included:
- the estimated short-term direct cost (the amount stolen by, or paid to, the fraudsters) of all frauds experienced in the last 12 months (FRAUD_DIRECT_DUM)
- the estimated amount of the short-term direct cost of all frauds experienced in the last 12 months that was recovered (FRAUD_RECOVER_VALUE_DUM)
- the estimated medium-term direct cost (the amount of any external payments made in the aftermath) of all frauds or attempted frauds experienced in the last 12 months (FRAUD_AFTERMATH_DUM)
- the estimated cost of any staff time dealing with all frauds or attempted frauds experienced in the last 12 months (FRAUD_STAFF_DUM)
- the estimated other indirect costs (for example, on share value, customer complaints or loss of investors) of dealing with frauds or attempted frauds in the last 12 months (FRAUD_INDIRECT_DUM)
- the total cost of all frauds or attempted frauds experienced in the last 12 months – summing the individual measures 1, 3, 4 and 5 above (FRAUD_ANYCOST_DUM – see the sketch after this list)
- spending on fraud training or awareness raising activities (FRAUD_COSTTRAIN_DUM_REBASED)
- spending on digital software to prevent or detect fraud (FRAUD_COSTSOFT_DUM_REBASED)
- spending on fraud risk assessments (FRAUD_COSTRISK_DUM_REBASED)
- spending on insurance policies that cover fraud (FRAUD_COSTINS_DUM_REBASED)
- spending on monitoring or investigating fraud risks (FRAUD_COSTSTAFF_DUM_REBASED)
- total spending on fraud risk monitoring and management – summing the individual measures 7 to 11 above (FRAUD_ALLSPEND_DUM_REBASED)
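To make the summed total concrete, the following is a minimal illustrative sketch in Python. The DUM variable names mirror the report, but the input values are invented, and the treatment of banded, “don’t know” and “prefer not to say” responses shown here is an assumption for illustration, not a description of the actual data processing.

```python
import numpy as np
import pandas as pd

# Invented example responses (£); NaN stands in for "don't know" /
# "prefer not to say" after any banded amounts have been converted.
df = pd.DataFrame({
    "FRAUD_DIRECT_DUM":    [12000, 0, np.nan],   # short-term direct cost
    "FRAUD_AFTERMATH_DUM": [3000, 500, 200],     # medium-term direct cost
    "FRAUD_STAFF_DUM":     [800, 1200, 0],       # staff time cost
    "FRAUD_INDIRECT_DUM":  [0, np.nan, 100],     # other indirect costs
})

# Total cost sums measures 1, 3, 4 and 5; min_count=1 keeps a respondent
# with no usable component values as missing rather than zero.
components = ["FRAUD_DIRECT_DUM", "FRAUD_AFTERMATH_DUM",
              "FRAUD_STAFF_DUM", "FRAUD_INDIRECT_DUM"]
df["FRAUD_ANYCOST_DUM"] = df[components].sum(axis=1, min_count=1)
print(df["FRAUD_ANYCOST_DUM"])  # 15800.0, 1700.0, 300.0
```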
As noted in Section 1.3.1, these measures were intended to capture cost and spending data comprehensively, both by splitting it across multiple questions capturing specific aspects, and by providing flexibility in the way businesses could respond. The low proportion of “don’t know” or “prefer not to say” responses to the cost questions reflects the success of these measures (for example, for FRAUD_DIRECT_DUM, there were 527 numeric or banded responses across 533 respondents). Moreover, since the survey was designed to be representative of the overall business population, it is feasible to extrapolate from the cost and spending statistics for the average business facing fraud, to produce overall business fraud cost and spending estimates for all UK businesses with employees. This is covered in Section 1.5.
However, data users should be particularly cautious with these economy-wide fraud cost and spending estimates. These are likely to be underestimates of the total economic and social cost of fraud to UK businesses.
There are several potential reasons why these may be underestimates. We cannot say for sure which of these is the most likely reason, but it is probable that each of these factors contributed in part:
Exclusion of zero-employee businesses
The survey did not cover fraud among zero-employee businesses, even though fraud against these businesses that is reported to Action Fraud is counted as business fraud. According to the Department for Business and Trade (DBT) business population estimates 2024, there are approximately 4.07 million zero-employee businesses, compared with approximately 1.43 million businesses with employees. Although zero-employee businesses are expected to have lower fraud-related costs on average (matching the broad pattern seen in the survey data, where larger businesses have higher costs), the cumulative impact of their exclusion could be substantial.
Cost and spending data being subject to higher margins of error
The lower and upper bounds of the 95% confidence interval for the short-term direct cost estimate of £486 million are £278 million and £693 million respectively. This means that, while the best estimate based on the survey sample is £486 million, the true cost for this population (businesses with employees) could feasibly be anywhere between these lower and upper bounds.
This relatively high margin of error is driven by a few factors. Firstly, as discussed in Section 1.5.2, the margins of error in general for the numeric data captured in the survey (covering the incidence of different crime types, and the cost and spending estimates for fraud) are higher than for the other survey results, because of the nature of the statistical calculations involved. In addition, these estimates are not based on the full sample. Instead they are based only on the subgroup of businesses in the sample that experienced any fraud, and were able to provide an associated cost estimate. Furthermore, in order to reduce the average interview length to fit the original specification for the survey, these cost-related questions were only asked of a random half of the sample. Collectively, these factors reduce the sample size to 533 respondents out of the total 3,477. As such, the margins of error would always be higher than for the full sample.
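As a check on how these bounds relate to the underlying standard error, the short arithmetic sketch below back-calculates the implied SE from the published figures, using the 95% confidence interval relationship described in Section 1.5 (estimate ± 1.96 × SE); the slight asymmetry simply reflects rounding in the published bounds.

```python
# All figures in £ millions, taken from the published estimate and bounds
estimate, lower, upper = 486, 278, 693

se_from_upper = (upper - estimate) / 1.96   # ≈ 105.6
se_from_lower = (estimate - lower) / 1.96   # ≈ 106.1
print(f"implied SE ≈ £{se_from_upper:.0f}m to £{se_from_lower:.0f}m")
```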
The survey sample missing rare but high-impact incidents
It is possible that a very small number of businesses have very costly frauds, which have an inordinate effect on the total economic cost of fraud across all businesses. The sampling strategy, which disproportionately sampled the businesses that were more likely to have high-impact frauds (that is, medium and large businesses and those in specific sectors – see Section 2.2.3), helped to mitigate this risk. However, the extent of oversampling was limited by the available sample frame for larger businesses and specific sectors. Moreover, any survey sample would still have a low probability of capturing these kinds of rare incidents. Additionally, the survey can only measure the cost of incidents that businesses were able to detect and identify or were willing to disclose. As a consequence, the mean (average) result recorded in the survey would be lower than the true mean in the population.
The heavy skew in the distribution of fraud costs in the business population
This is inherently more of a challenge for a business survey than for a general public survey aiming to measure the cost of fraud. For an individual member of the general public, fraud costs are extremely unlikely to be in the range of millions of pounds, leading to a narrower range in the costs recorded in any survey sample. By contrast, for businesses, we would expect a much greater range in the costs across different businesses – a small retail business may have experienced one fraudulent invoice in the last 12 months, whereas a high street bank may have incurred several high value frauds totalling millions of pounds.
There was a requirement for this survey to produce representative and highly accurate estimates of prevalence (for example, the number of businesses experiencing fraud in the last 12 months), overall and by subgroup. This necessitated a representative sampling approach – one in which most of the sample was composed of micro and small businesses, despite the oversampling of medium and large businesses. However, the relatively skewed distribution of fraud costs in the business population means that any representative sample is likely to produce cost estimates with high variances. That is, most recorded responses will be far from the mean. A high variance reduces the statistical reliability of the cost estimates. As such, the need to produce accurate prevalence estimates limited our ability to account for the skewed distribution of fraud costs.
Undervaluing of staff time
The survey attempts to capture the indirect staff time cost for dealing with fraud. Previous UK government research on measuring the staff cost when dealing with cyber security breaches has shown that businesses find it difficult to account for these costs accurately. Not all businesses require staff to log the time spent on tasks. Furthermore, it may be challenging for businesses to apportion part of a wage cost to fraud, if the relevant staff member is working on multiple areas at once. While this survey followed best practice – the question wording explicitly asked how much staff would have got paid for this time, and told respondents to include this cost even if dealing with these sorts of issues was already part of any staff member’s job role – it is feasible that the overall challenge of measuring staff costs led to this aspect being undervalued.
Some of these factors are interlinked. For instance, the skewed distribution of responses is partly what drives up the margin of error for these questions, and reduces the likelihood of the survey capturing rare, high-impact frauds.
Although these extrapolated population totals are considered underestimates of the true figures, there is still substantial analytical value in the cost and spending data. The mean and median estimates still highlight the costs of fraud for the typical business, the subcategories of costs and spending show where businesses are spending the most money, and the subgroup breakdowns for medium and large businesses show that their fraud-related costs tend to be much higher than for smaller businesses.
1.4 Extrapolating results to the wider population
As the samples for each group are statistically representative, it is theoretically possible to extrapolate survey results to the overall population, which amounts to 1,427,165 businesses with one or more employees according to the DBT business population estimates 2024 (the latest ones available at the time of reporting).[footnote 2] This applies to:
- standard percentage results across the survey, to give an overall prevalence estimate (for example, the estimated total number of businesses experiencing fraud)
- mean scores, to estimate total incidences (for example, the estimated total number of frauds experienced across all businesses with employees) and incidence rates (for example, the estimated number of frauds experienced, per every thousand of these businesses)
We recommend restricting extrapolation of results to the overall business population rather than to subgroups within these populations (for example, specific size bands or sectors), given the smaller sample sizes that apply to subgroups, which consequently have much higher margins of error. There are 2 exceptions to this in the findings report. The first is the reporting of subgroups for incidence rates, which are an appropriate measure for comparing how common a particular crime type is across subgroups. The second is the total prevalence and incidence of money laundering among regulated and unregulated businesses, which helps to contextualise experiences of the regulated sector among the wider business population (it is worth noting that the findings were not weighted specifically by regulated status, see Section 2.5.4). Therefore, the findings report does compare extrapolated incidence rates across size bands, sectors and regulated status (for money laundering), and total prevalence by regulated status (also for money laundering).
Any extrapolated results based on the survey sample should be clearly labelled as estimates and, ideally, should be calibrated against other sources of evidence. We also recommend accounting for the margin of error (see Section 1.5) in any extrapolated results, for example by publishing a range.
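As an illustration of the recommended approach, the sketch below extrapolates a prevalence result and a mean score to the population and publishes each as a range. The 26% prevalence figure, 2.4 mean and the margins of error are invented for the example; they are not survey results.

```python
POPULATION = 1_427_165  # UK businesses with 1+ employees (DBT 2024 estimates)

# Hypothetical weighted survey results, with 95% margins of error
prevalence, prev_moe = 0.26, 0.017   # share of businesses experiencing fraud
mean_frauds, mean_moe = 2.4, 0.9     # mean frauds per business (all businesses)

# Total prevalence (number of businesses affected), reported as a range
print(f"{(prevalence - prev_moe) * POPULATION:,.0f} to "
      f"{(prevalence + prev_moe) * POPULATION:,.0f} businesses")

# Total incidence as a range, and the incidence rate per 1,000 businesses
print(f"{(mean_frauds - mean_moe) * POPULATION:,.0f} to "
      f"{(mean_frauds + mean_moe) * POPULATION:,.0f} incidents")
print(f"{mean_frauds * 1000:,.0f} incidents per 1,000 businesses")
```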
If any data users intend to make further extrapolations from the survey data, beyond the extrapolated estimates already included in the findings report, we recommend that they contact the Home Office team responsible for the Economic Crime Survey (EconomicCrimeSurvey@homeoffice.gov.uk), in order to discuss the appropriate way to calculate and use any estimates.
1.5 Margins of error
Broadly, the findings report includes 3 types of estimates:
- percentage results from the survey (for example, the percentage of businesses that experienced fraud in the last 12 months). The relevant percentage results have also been extrapolated to estimate the total prevalence of specific crimes (for example, the total number of businesses experiencing fraud)
- averages (for example, the average – mean or median – number of frauds experienced by the businesses that have experienced any, or the average cost of all the frauds experienced)
- extrapolated incidence estimates, covering total incidence (for example, the total number of frauds across all businesses), and the incidence rate (for example, the estimated number of frauds experienced, per every thousand businesses)
The final data from the survey were based on weighted samples, rather than the entire population of UK businesses or charities. Any survey results are therefore subject to margins of error, which vary with the size of the sample and the result in question. These parameters are factored into the Standard Error (SE) for each result. The SE was calculated using the SPSS Complex Samples statistical software.
To produce the margins of error, we calculate Confidence Intervals at the 95% level (1.96 × SE).
1.5.1 A guide to approximate margins of error for different percentage results
Table 1.1 shows the approximate margins of error that apply to this survey, overall and for important subgroups (which are reported across the findings report), for different percentage results. These margins of error will be broadly similar for any other subgroups of a comparable size in this survey.
For reference, we have also included margin-of-error calculations for the split-sampled questions (for example, CORRUPTION_RISK), where the sample sizes are roughly two-thirds of the total. A list of all the split sampled questions is available in the appendix (Section 5.2).
As a worked example, the overall sample size (3,477) has a margin of error range of ±1.1 to ±1.9 percentage points, based on a 95% confidence interval calculation. That is to say, if we were to conduct this survey 100 times (each time with a different sample of the business population), we would expect the results to be within 1.1 to 1.9 percentage points of the results we achieved here in 95 out of those 100 surveys. The range illustrates that survey results closer to 50% tend to have higher margins of error. For instance, if 90% of surveyed businesses experienced any kind of fraud in the last 12 months, this result would have a margin of error of ±1.1 percentage points, whereas if only 50% had experienced attempted fraud, the margin of error would be ±1.9 percentage points. The margins of error are calculated using the SE (which takes into account survey weighting, covered in Section 2.5.5).
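For readers who want to reproduce the rough scale of these figures, the sketch below applies the standard simple-random-sampling formula. It is an approximation only: the published table values were produced in SPSS Complex Samples, and the design effect from weighting (set to 1 here as a placeholder) is what widens them slightly beyond this naive calculation.

```python
from math import sqrt

def moe_pp(p: float, n: int, deff: float = 1.0) -> float:
    """95% margin of error in percentage points for a proportion p,
    inflated by an assumed design effect (deff)."""
    return 1.96 * sqrt(deff * p * (1 - p) / n) * 100

# n = 3,477 gives roughly ±1.0pp at 10%/90% and ±1.7pp at 50% under
# simple random sampling, versus the published ±1.1 to ±1.9.
for p in (0.10, 0.30, 0.50):
    print(f"{p:.0%}: ±{moe_pp(p, 3477):.1f}pp")
```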
Table 1.1: Margins of error (in percentage points) applicable to percentage results at or near these levels
| | 10% or 90% | 30% or 70% | 50% |
|---|---|---|---|
| 3,477 businesses | ±1.1 | ±1.7 | ±1.9 |
| 2,090 micro businesses | ±1.3 | ±2.0 | ±2.2 |
| 776 small businesses | ±2.2 | ±3.3 | ±3.7 |
| 450 medium businesses | ±2.9 | ±4.4 | ±4.8 |
| 161 large businesses | ±4.8 | ±7.4 | ±8.0 |
| 513 regulated businesses | ±3.0 | ±4.6 | ±5.0 |
| 2,271 split-sampled businesses (two-thirds of overall sample) | ±1.4 | ±2.2 | ±2.3 |
| 1,051 businesses experiencing fraud | ±2.1 | ±3.3 | ±3.6 |
| 102 businesses experiencing bribery (in all contexts covered in the survey) | ±6.7 | ±10.2 | ±11.1 |
| 100 businesses experiencing money laundering | ±7.1 | ±10.9 | ±11.9 |
There are also margins of error when looking at subgroup differences. A difference from the average must be of at least a certain size to be statistically significant. Table 1.2 is a guide to these margins of error for certain size and sector subgroups that we have referred to several times across the findings report. Again, these will be broadly similar for any other subgroups of a comparable size in this survey. The margins of error below only apply to the questions that were asked of the full sample.
Table 1.2: Differences required (in percentage points) from overall result for statistical significance at or near these percentage levels
| | 10% or 90% | 30% or 70% | 50% |
|---|---|---|---|
| 2,090 micro businesses | ±0.7 | ±1.1 | ±1.2 |
| 776 small businesses | ±1.9 | ±2.9 | ±3.2 |
| 450 medium businesses | ±2.7 | ±4.1 | ±4.5 |
| 161 large businesses | ±4.7 | ±7.2 | ±7.9 |
| 140 finance and insurance businesses | ±5.5 | ±8.4 | ±9.1 |
| 171 real estate businesses | ±4.8 | ±7.3 | ±7.9 |
| 513 regulated businesses | ±2.8 | ±4.3 | ±4.7 |
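The sketch below shows a simplified way to apply this logic: a two-proportion z-test between a subgroup and the overall sample. It is indicative only, since it ignores the overlap between the subgroup and the overall sample, and the design effects that the published thresholds account for; the example figures are invented.

```python
from math import sqrt

def sig_diff(p1: float, n1: int, p2: float, n2: int, z: float = 1.96) -> bool:
    """Simplified two-proportion z-test for whether p1 and p2 differ."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return abs(p1 - p2) > z * se

# Hypothetical: 35% of 450 medium businesses vs 30% overall (n = 3,477);
# a 5-point gap clears the roughly ±4.1pp threshold shown in Table 1.2.
print(sig_diff(0.35, 450, 0.30, 3477))  # True
```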
The overall sample size for this survey is high relative to other enterprise-level business surveys. For example, the Cyber Security Breaches Survey sample size for businesses was 2,180 in the 2025 wave. In addition, the survey sampling approach was designed to produce robust sample sizes for key subgroups by size and sector (see Section 2.2.3), and for the subgroups of businesses experiencing fraud, bribery and money laundering. As such, the percentage results in the findings report should be considered as robust, from a statistical perspective.
1.5.2 Margins of error for non-percentage results (means and medians)
Tables 1.1 and 1.2 only apply to the percentage results from the survey. For mean scores the margins of error are different. They are once again based on the SE of the mean, but this calculation typically leads to much wider margins of error than those associated with the percentage results in the above tables.
The wider margins of error for mean scores reflect the nature of the numeric data collected in this survey as well as the nature of the business population. The distribution of the numeric data collected (for example, the number of frauds experienced) tends to include a large number of low-level responses and a long tail of far fewer, very high responses. The nature of the business population is also such that larger businesses tend to experience more of these incidents, but they make up a relatively small proportion of the overall business population. As such, the variance in responses for numeric questions in this survey – the distance of most responses from the mean response – tends to be relatively high, meaning the SE is also relatively high. The sampling approach for this survey followed best practice to minimise the SE, by oversampling medium and large businesses, as well as the sectors considered to be more likely to experience economic crime (see Section 2.2.3). Therefore, while these margins of error are relatively wide, they should still be considered robust, given the inherent challenges of accurately measuring this kind of numeric data.
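The effect of this skew on the SE can be illustrated with a small hedged simulation: 2 invented samples of 100 responses with the same mean number of incidents, one clustered and one skewed in the way described above, produce very different margins of error.

```python
import statistics as st

clustered = [1, 2, 2, 2, 3] * 20                     # tightly grouped responses
skewed = [0] * 90 + [1] * 5 + [10, 15, 25, 45, 100]  # a few very high responses

for name, sample in [("clustered", clustered), ("skewed", skewed)]:
    se = st.stdev(sample) / len(sample) ** 0.5       # SE of the mean = s / sqrt(n)
    print(f"{name}: mean={st.fmean(sample):.1f}, 95% MoE=±{1.96 * se:.2f}")
```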
The findings report also includes medians alongside means. It is not common practice to report the margins of error (that is, the confidence intervals) for medians from weighted survey samples, and we have followed this convention.
1.5.3 Reporting of extrapolated population estimates
In the main chapters of the findings report, wherever we provide an extrapolated estimate for the total population of businesses with employees, or total prevalence and total incidence at the subgroup level, we quote the upper and lower bounds within the margin of error, in order to aid interpretation. For the reasons noted in the previous section, the margin of error tends to be wider for the incidence estimates, which are derived from mean scores, than for prevalence estimates, which are derived from percentage results.
As a specific example, taken from the findings report, the estimated total incidence for money laundering was approximately 225,000 incidents in the last 12 months, across all businesses with employees. The lower and upper bounds for this result were approximately 120,000 and 329,000 incidents respectively. That is, if we were to conduct this survey 100 times (each time with a different sample of the business population), we would expect this estimate to be between approximately 120,000 and 329,000 in 95 out of those 100 surveys.
To avoid overcomplicating the findings report, we have not reported the margins of error for extrapolated incidence rates for subgroups in the population (for example, the incidence rates for particular sectors).
2. Quantitative strand
This chapter provides technical details on the quantitative survey questionnaire development, sampling, piloting, main fieldwork and data processing. It also covers technical details and user information for the SPSS data file.
2.1 Survey and questionnaire development
The development of the survey involved multiple stages, including:
- a rapid evidence review
- workshops with government, industry and civil society representatives, and academic experts
- 20 cognitive testing interviews with businesses
- a pilot survey, consisting of 200 telephone interviews
Professor Michael Levi from Cardiff University was involved as an academic consultant throughout the survey development. This included selecting literature for review, participating in workshops, and helping to define and categorise the economic crimes covered in the study.
The development phase was relatively long, lasting from May 2023 to the completion of the pilot survey in March 2024. This reflected the fact that this was a new survey: while it drew on elements of the 2020 Home Office study of the same name, it was developed from scratch.
2.1.1 Rapid evidence review
In collaboration with the Home Office and Professor Levi, Ipsos developed a list of 48 key evidence sources on fraud, corruption and money laundering to review. This included the previous Home Office questionnaire on fraud and corruption (conducted in 2020), academic articles, research frameworks and manuals from third-party sources (notably the UNODC Manual on Corruption Surveys). A full list of published sources is included in the appendices (in Section 5.1).
Each source was reviewed using a standard proforma to pull out, for each crime type:
- definitions
- existing categorisations or frameworks
- insights into the kinds of impacts that businesses might face or the actions they might take in response to these crimes
- existing statistics on prevalence
- sector-specific insights or key subgroups
This stage highlighted key lessons for the questionnaire design and topics worthy of further discussion at the stakeholder workshops.
2.1.2 Stakeholder workshops
Ipsos facilitated 3 workshops in June and July 2023 to discuss findings from the rapid evidence review and collect further insights from relevant stakeholders. The Home Office identified stakeholders they wanted to include, with the Ipsos team sending email invites and managing the recruitment. Given that several stakeholders had interests in and knowledge of more than one type of economic crime, the workshops were not completely split by crime type, but by audience. The first workshop hosted academic experts and representatives from relevant civil society organisations, while the next 2 workshops focused on industry representatives, policymakers and other government staff.
2.1.3 Questionnaire drafting
The findings from the rapid evidence review and workshops served as the basis for a 90-minute questionnaire brainstorming session between the Ipsos and Home Office teams, and Professor Levi. The questionnaire content was then developed by the Ipsos and Home Office teams across multiple drafts. Ipsos undertook all drafting and produced all other survey instruments (such as the survey script, interviewer briefing materials, privacy policy and any invite, reminder or reassurance emails sent to sampled businesses). The Home Office had final approval of the questionnaire and all email scripts, ahead of cognitive testing, piloting and the main fieldwork.
2.1.4 Cognitive testing
Ipsos carried out 20 cognitive testing interviews with businesses over Microsoft Teams, lasting approximately 45 minutes each. The purpose of these interviews was to test businesses’ comprehension of the questions, their ability to recall relevant information, and their ability to answer with the required specificity (for example, for the cost questions). All interviews were carried out by Ipsos researchers, using a cognitive testing topic guide with question-specific prompts and probes.
Businesses were recruited by Criteria Fieldwork, one of Ipsos’ specialist business recruitment partners, using a spec laid out by Ipsos. A £70 incentive (in the form of an Amazon voucher or charity donation) was offered to encourage participation. The spec stipulated minimum quotas by business size, sector and region, to ensure the questionnaire was tested with a wide range of businesses.
There were no minimum quotas for businesses that had faced specific incidents to do with fraud, bribery (the type of corruption measured in the questionnaire) or money laundering. This was because of the anticipated low prevalence of these incidents. However, the recruiters attempted through their networks to find businesses that had experienced these types of economic crimes. This information was collected during recruitment, and out of the 20 businesses interviewed, 19 had experienced fraud, 7 bribery and 2 money laundering, with one interviewee having experienced none of these crime types. Therefore, all sections of the questionnaire could be tested.
In micro and small businesses, recruiters targeted the business owner or a director. To ensure the appropriate individual was recruited within medium and large businesses, recruiters collected the job titles and key responsibilities of the interviewee and matched these against criteria agreed between Ipsos and the Home Office. The criteria specified a senior individual with an overview of major finance, legal or compliance matters affecting the business. The Ipsos team also checked with participants at the end of the interview whether they were the most appropriate individual to take part, and whether others in their business would also be capable of answering the questions – so that this information could be used to support interviewers in the pilot and main fieldwork. These conversations overwhelmingly confirmed that we had targeted the appropriate individuals.
The cognitive testing highlighted various improvements to make to question wording. The major changes made to the questionnaire are in Table 2.1:
Table 2.1: Changes made to the questionnaire after cognitive testing
| Type of change | Description |
|---|---|
| Additional prompts on answer formats for respondents | For questions that required numeric responses (for example, FRAUD_TYPES), respondents were initially inclined to give a yes/no answer, requiring several reminders to answer with a number. We added prompts for respondents ahead of these questions to reinforce the answer format. |
| Additional confidentiality reassurances | We added more reassurances around confidentiality and anonymity ahead of the corruption and money laundering questions, to reduce respondent hesitation when answering these sections. |
| Clarifications to language and terminology | In various places, we changed language and terminology to improve respondent comprehension. Examples included: referring to the “official fee” instead of the “normal fee” in the corruption questions, to avoid false negatives if a business thought a bribe was part of the “normal fee” of doing business; clarifying at ML_MANAGE_e that software to monitor unusual payments or patterns of activity needed to be “deployed directly within your business”, to prevent businesses saying “yes” when they thought their business bank might deploy such software on their behalf; amending the wording at ML_TYPES to make clearer to respondents that the money involved in the transaction should be from “a potentially criminal origin”. |
| Adding unprompted answer codes | We added answer options to ensure that unprompted lists of answers were as exhaustive as possible, to make it easier for interviewers to code responses. For example, at FRAUD_DETECTION and ML_DETECTION, we added an option to cover cases where incidents were discovered directly by the respondent. |
2.1.5 Pilot survey
Ipsos conducted a pilot survey, interviewing 200 businesses by telephone between 22 February and 1 March 2024. The pilot survey was used to gather further feedback (from respondents and Ipsos telephone interviewers) on the questionnaire, test the survey script, time the interview, test the usefulness of the interviewer briefing materials, and test the quality and eligibility of the sample.
The pilot sample came from the same sample frame used for the main stage survey (see Section 2.2). The 200 pilot interviews were achieved from the first batch of sample, consisting of 11,415 records, which continued to be used in the main fieldwork.
The high target of 200 interviews aimed to ensure that we would naturally encounter businesses that had experienced each of the 3 crime types (fraud, corruption and money laundering). This would mean that every kind of survey question was asked, as several questions were filtered only to those that had experienced these crimes. Out of these 200 interviews, 51 businesses identified fraud, 5 had experienced bribery (the type of corruption measured in the questionnaire) and 2 identified money laundering.
Ipsos reviewed the pilot data in several ways:
- a rapid review of the initial raw survey data (within 3 days of the start), to check script routing and any excessive “don’t know” responses
- further monitoring of the raw survey data through an automated Excel report, which flagged when a respondent experienced fraud, bribery or money laundering (to be able to check “other – specify” responses)
- daily automated updates on average interview length and sample eligibility
- listening to recordings for 6 interviews where businesses had experienced each type of economic crime, to check the question delivery and respondents’ ability to answer
- asking interviewers to fill out individual feedback reports at the end of the pilot
After the pilot, Ipsos set out a list of proposed changes to the questionnaire to the Home Office. A final version of the questionnaire was agreed for the main fieldwork, which included the changes presented in Table 2.2. These changes were generally not substantial enough to change the meaning of the questions that remained. As such, Ipsos and the Home Office agreed the 200 pilot interviews could form part of the final data.
Table 2.2: Changes made to the questionnaire after the pilot
| Type of change | Description |
|---|---|
| Simplifying the introduction | The introduction was specifically simplified for micro and small businesses, to avoid putting off those that did not have a specific individual in a finance, legal or compliance role, as well as those that had not experienced economic crime. |
| Simplifying answer options | We merged the “no” and “not applicable” answer options at CORRUPTION_MANAGE, as respondents were taking time to distinguish between these 2 responses, when there was no meaningful difference between the 2 answers when it came to reporting. |
| Updates to split-sampling | During the pilot, certain questions were split-sampled to be asked only to a random half of respondents to reduce the overall questionnaire length. After the pilot, and on reflection with the Home Office, this was updated to a random two-thirds, to ensure there would be enough sample to analyse these questions in more detail. A list of the final split-sampled questions is available in the appendix (Section 5.2). To keep the interview length to an acceptable level, additional cuts were made elsewhere (covered later in this table). |
| Additional interviewer briefing notes | Interviewers were briefed again after the pilot with various time-saving measures. For example, if respondents said “no” rather than zero at a question where a numeric response was requested, interviewers did not need to clarify that “no” meant “zero”. |
| Additional interviewer probing instructions | Where respondents were providing vague answers at unprompted questions, we added interviewer instructions to prompt to code to the existing codeframe (for example, at OFFERED_PURPOSE and OFFERED_IMPACT), or to probe for further reasons (for example, at FRAUD_NOREPORT if a respondent initially said they “dealt with the incident internally”). |
| Adding further pre-codes at unprompted questions | Several answer codes were added to unprompted questions for comprehensiveness. For example, an “other – specify” response was added to FRAUD_PERPETRATOR, a code for checking bank statements or invoices was added to FRAUD_DETECTION, and new answer options were added to ML_LOW (including that the business does not deal in cash, transactions are low-value, and having identity verification in place). |
| Adding new questions | At the Home Office’s request, we added a new question to the corruption preparedness and risks section asking whether businesses have had concerns about staff being involved in corruption in the last 12 months (CORRUPTION_STAFF). |
| Wording changes and clarifications | Several wording changes and additions were made throughout the questionnaire to increase comprehension. These included: clarifying that a direct approach by a fraudster included email contact at FRAUD_DETECTION; providing clearer wording so that the MANAGE questions captured the presence of a staff member whose “job description” specifically included monitoring and investigating these incidents (to avoid false positives if the respondent felt they would probably investigate incidents themselves anyway); adding clarification for corruption questions (for example, OFFERED_HAD) that gifts, in the context of these questions, did not include free samples or trials given for marketing purposes. |
| Removing “other – specify” answer options | The pilot aimed to test whether the lists at FRAUD_TYPES and ML_TYPES provided an exhaustive set of categories of fraud and money laundering respectively. For the pilot, in order to capture other categories that these lists might have missed, an “other – specify” option was included. After the pilot, the Home Office agreed that the “other” option for fraud would be removed, whereas it would be kept in for money laundering. For fraud, this removal led to a small number of post-fieldwork data edits to pilot respondents’ answers (see Section 2.4.3). |
| Removing questions | Ipsos agreed with the Home Office to delete several questions measuring business characteristics, to reduce the average interview length. These included questions to identify goods vs. service businesses, business-to-business vs. business-to-consumer businesses, what proportion of turnover came from their biggest customer, the age of the business, and whether it was a family business. |
2.1.6 Changes made to the questionnaire during fieldwork
Following the pilot, the questionnaire remained largely intact for the remainder of fieldwork. However, 2 further sets of changes were made during the main fieldwork, on agreement with the Home Office, as laid out in Table 2.3.
Table 2.3: Changes made to the questionnaire during main fieldwork
| Type of change | Description |
|---|---|
| More granular business size bands | On 8 March 2024, the government suggested it may redefine large businesses (currently those with 250 or more employees) as being those with 500 or more employees. For most businesses in the survey, exact information is captured on the number of employees. However, if respondents say they do not know an exact answer, they are presented with size bands (at SIZEB). This banded question was amended to split out the 250 to 499 and 500+ size bands. This change was made very early in fieldwork, meaning there was no missing data from earlier interviews. |
| Greater validation of responses at the money laundering questions | There was evidence from the qualitative recruitment that suggested some respondents were giving false positive responses in the survey, with some examples of businesses misunderstanding ML_TYPES_05 to be about invoice fraud and ML_TYPES_10 to be about investment fraud. Therefore, on 17 July 2024, the Home Office and Ipsos agreed to make the following changes to the money laundering questions to better validate that respondents had genuinely faced a money laundering incident: - adding ML_CHECK to double-check with any respondents identifying a money laundering incident, that this incident was definitely one where they knew or suspected that the money being moved or used for payment was derived from criminal activity; changing ML_OTHER routing so that this question (which asked respondents to describe the money laundering incident they had experienced) was asked of all respondents that said only one type of money laundering at ML_TYPES (as well as those who said they had experienced an “other” type of money laundering incident at ML_TYPES); systematically sharing the responses from ML_OTHER with the Home Office (each week) to revalidate whether these descriptions were of genuine money laundering incidents or not – including all responses received before 17 July. It is important to note that not all the verbatim responses at ML_OTHER had enough information to definitively show that the incident was money laundering. Therefore, the approach adopted was to remove cases where the information provided definitively showed that incidents were not money laundering, and to leave all other cases unedited. |
A copy of the final questionnaire used in the main survey, incorporating the changes in Table 2.3, is available in the accompanying research materials.
2.2 Sampling
2.2.1 Target businesses and exclusion of zero-employee businesses
The target population was UK private sector businesses (including sole traders with employees, companies and partnerships) with one or more employees. This included all economic sectors except the public sector, households as employers and extraterritorial organisations, meaning that Standard Industry Classification (SIC) 2007 sector groupings O, T and U were excluded. It also included businesses across the UK (that is, England, Northern Ireland, Scotland and Wales). Charities may have been included in this sample, but only if they were also registered as businesses.
During the development phase, Ipsos and the Home Office discussed the possibility of including zero-employee businesses in the sample, either as part of the overall business sample or, alternatively, as an independent sample (reported separately from employers). The Home Office ultimately decided to exclude these businesses. This was because, as a proportion of business units, zero-employee businesses accounted for 74% of the total UK business population[footnote 3], even though their combined contribution to economic output is relatively low. Their inclusion as part of a representative sampling approach would have crowded out the sample allocated to businesses with one or more employees.
This would have made it unfeasible to report any findings specifically for small, medium and large businesses. It would also have potentially decreased the statistical reliability of any numeric data collected in the survey (for example, the mean number of frauds experienced in the last 12 months), since the distribution of economic crime, as shown in the 2020 Home Office study and reaffirmed in this study, tends to be more concentrated among medium and large businesses.
2.2.2 Sample frame
Ipsos purchased a sample of UK businesses from the Market Location business database for use in this study. During the development phase, various potential sample sources were reviewed, with Market Location being jointly chosen by Ipsos and the Home Office as the preferred source. Other options considered were the Office for National Statistics’ (ONS) Inter-Departmental Business Register (IDBR), the Dun & Bradstreet (D&B) business database, and the Experian business database. Market Location was found to be superior to these alternatives in balancing the joint requirements of good population coverage, usability and quality.
2.2.3 Sample selection and disproportionate sampling approach
In total, Ipsos selected 67,824 businesses from the Market Location database. Records were selected based on disproportionate targets by sector and size. The intention of a disproportionate sampling approach was to:
-
ensure that there would be a sufficient number of achieved interviews for important size subgroups (namely small, medium and large businesses) and sector subgroups (discussed below) to be able to report differences by size and sector
-
boost the sections of the sample that were anticipated to be most likely to have experienced fraud, corruption and money laundering, in order to increase variance in the survey responses, and reduce standard error (that is, increase statistical reliability)
The following sectors were identified in the rapid evidence review and stakeholder workshops as particularly important for analytical purposes:
-
agriculture, forestry and fishing (SIC A)
-
utilities and production (SIC BDE)
-
construction (SIC F)
-
retail and wholesale (SIC G)
-
finance and insurance (SIC K)
-
information and communication (SIC J)
-
real estate (SIC L)
-
professional, scientific and technical (SIC M)
-
health, social care and social work (SIC Q)
However, not all sectors needed to be boosted. For example, the construction, information and communication, professional services, and retail and wholesale sectors already made up a large part of the economy (in terms of total number of business units), according to DBT business population estimates 2023 (the latest ones available at the time of sampling), so did not need further boosting. Therefore, we focused on ordering a disproportionately large amount of sample for the following sectors:
-
agriculture, forestry and fishing (SIC A)
-
utilities and production (SIC BDE)
-
finance and insurance (SIC K)
-
real estate (SIC L)
-
health, social care and social work (SIC Q)
The Home Office also expressed an interest in boosting businesses that were subject to Money Laundering Regulations. These businesses would come from small subsectors within SIC sector groupings K, L and M (for example, accountants, financial service businesses, estate agents and solicitors), which were either already being boosted or sampled in sufficient numbers. As it was not feasible for a survey like this to target very specific subsectors of the economy, the Home Office considered the broader SIC sector boosts to be sufficient. The regulated sector was well-represented in the final survey sample, with 15% of respondents (12% of the weighted sample profile) self-identifying as regulated businesses. This is greater than the estimated 2% to 7% of businesses that are regulated within the full population of businesses with employees (derived from the HMT supervision report 2023/24 and DBT business population estimates 2024).[footnote 4]
Table 2.4 breaks down the received Market Location sample (67,824 records) by size and sector, before it underwent any cleaning. Not all this sample was used in fieldwork. As the survey fieldwork outcomes later in this chapter (see Section 2.3.8) show, only 45,945 Market Location records were included in the final survey. This is because 11,522 records were unusable (see Section 2.2.4) and the remaining 10,357 were not required to meet the targets, that is, they were held in reserve.
Table 2.4: Pre-cleaning Market Location sample received by size (number of employees) and sector (SIC letter and description)
| Sector | 1 to 9 | 10 to 49 | 50 to 249 | 250+ | Total |
|---|---|---|---|---|---|
| A: Agriculture | 2,071 | 225 | 180 | 40 | 2,516 |
| BDE: Utilities and production | 1,160 | 2,004 | 548 | 130 | 3,842 |
| C: Manufacturing | 1,654 | 546 | 1,725 | 788 | 4,713 |
| F: Construction | 7,949 | 470 | 312 | 107 | 8,838 |
| G: Retail and wholesale | 5,938 | 1,175 | 1,543 | 846 | 9,502 |
| H: Transport and storage | 1,759 | 186 | 563 | 209 | 2,717 |
| I: Food and hospitality | 4,931 | 1,037 | 793 | 243 | 7,004 |
| J: Information and communication | 1,554 | 307 | 1,060 | 325 | 3,246 |
| K: Finance and insurance | 1,328 | 218 | 1,003 | 478 | 3,027 |
| L: Real estate | 2,113 | 266 | 254 | 99 | 2,732 |
| M: Professional, scientific and technical | 3,784 | 545 | 1,378 | 574 | 6,281 |
| N: Administration | 3,710 | 518 | 1,006 | 463 | 5,697 |
| P: Education | 305 | 61 | 35 | 37 | 438 |
| Q: Health, social care and social work | 1,135 | 448 | 1,298 | 498 | 3,379 |
| R: Arts and recreation | 573 | 146 | 354 | 176 | 1,249 |
| S: Service and membership organisations | 2,307 | 134 | 70 | 132 | 2,643 |
| Total | 42,271 | 8,286 | 12,122 | 5,145 | 67,824 |
2.2.4 Sample cleaning and improvement
Ahead of the pilot fieldwork, the sample was cleaned. In this process, 11,522 records were found to be unusable for the reasons noted in Table 2.5. We systematically excluded businesses listed as having only one or 2 workers, as these were found in large numbers during the pilot to have zero employees (that is, a very high ineligibility rate). A sketch illustrating these exclusions as record-level filters follows Table 2.5.
Table 2.5: Number of sampled Market Location records excluded for various reasons
| Reason for exclusion | Number of records |
|---|---|
| Businesses listed as having only one or 2 workers | 10,193 |
| Duplicate phone number or email | 620 |
| Isle of Man or Channel Islands business | 365 |
| Phone number or email on Ipsos’ list of businesses that have refused all research contact | 185 |
| No valid telephone number (for example, a premium 09 number) | 159 |
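These exclusions translate naturally into record-level filters. The sketch below is illustrative only – the column names (`employees_listed`, `phone`, `email`, `region`, `on_refusal_list`) are hypothetical, and this is not the production process Ipsos used:

```python
import pandas as pd

def clean_sample(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the Table 2.5 exclusions to a raw sample file.

    All column names are hypothetical; the filters mirror Table 2.5.
    Assumes phone and email are populated for every record.
    """
    # Businesses listed as having only one or 2 workers (the pilot found
    # a very high zero-employee ineligibility rate among these)
    df = df[df["employees_listed"] > 2]

    # Duplicate phone numbers or emails
    df = df.drop_duplicates(subset=["phone"]).drop_duplicates(subset=["email"])

    # Isle of Man or Channel Islands businesses (out of scope)
    df = df[~df["region"].isin(["Isle of Man", "Channel Islands"])]

    # Businesses that have refused all research contact
    df = df[~df["on_refusal_list"]]

    # No valid telephone number (for example, a premium 09 number)
    df = df[~df["phone"].str.startswith("09")]

    return df
```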
Table 2.6 shows a breakdown of the 56,302 records available to be used post-cleaning.
Table 2.6: Post-cleaning available Market Location sample by size (number of employees) and sector (SIC letter and description)
| 1 to 9 | 10 to 49 | 50 to 249 | 250+ | Total | |
| A: Agriculture | 1,162 | 224 | 170 | 38 | 1,594 |
| BDE: Utilities and production | 927 | 1,896 | 511 | 119 | 3,453 |
| C: Manufacturing | 1,301 | 541 | 1,690 | 751 | 4,283 |
| F: Construction | 5,594 | 462 | 306 | 107 | 6,469 |
| G: Retail and wholesale | 4,386 | 1,157 | 1,512 | 805 | 7,860 |
| H: Transport and storage | 1,367 | 184 | 543 | 198 | 2,292 |
| I: Food and hospitality | 3,865 | 1,024 | 776 | 235 | 5,900 |
| J: Information and communication | 1,179 | 301 | 1,000 | 288 | 2,768 |
| K: Finance and insurance | 1,068 | 212 | 932 | 423 | 2,635 |
| L: Real estate | 1,777 | 261 | 246 | 94 | 2,378 |
| M: Professional, scientific and technical | 3,066 | 536 | 1,345 | 543 | 5,490 |
| N: Administration | 2,702 | 505 | 976 | 440 | 4,623 |
| P: Education | 262 | 61 | 34 | 37 | 394 |
| Q: Health, social care and social work | 945 | 442 | 1,276 | 475 | 3,138 |
| R: Arts and recreation | 415 | 141 | 344 | 163 | 1,063 |
| S: Service and membership organisations | 1,633 | 132 | 70 | 127 | 1,962 |
| Total | 31,649 | 8,079 | 11,731 | 4,843 | 56,302 |
Ipsos took steps to improve the quality of the sample, to help raise the survey response rate. All Market Location sample was supplied with telephone numbers, so further telephone tracing was not required. We worked with an external sampling partner to add further key decision maker contact names and emails where possible. These details came from web scraping of company websites, online news sites, press releases and publicly available LinkedIn profiles, with Ipsos providing a guide of relevant job titles. As with the Market Location contact names and emails, these details were not necessarily assumed to be for the appropriate individual in the business. Instead, they were used in this study to broker contact with the appropriate individual, and to invite them to take part online, alongside the telephone survey approach.
Before this sample improvement work, 29,239 out of the 56,302 usable records had emails (from the original Market Location sample). These were a mix of generic company emails and specific key decision maker contact emails. All Market Location records had a key decision maker contact name attached, although these did not necessarily match the ideal list of job titles.
Following the sample improvement work, 33,801 records out of the usable sample had emails (both the original ones from Market Location and the additional sourced emails). The sample improvement work added an extra 18,288 contact names to the usable sample that were different from the original Market Location sample.
2.2.5 Sample batches
The 45,945 records used for the pilot and main fieldwork were randomly allocated into batches, so that each batch could be fully exhausted before further batches were released. Table 2.7 shows the sample volumes for each batch and when they were released. Batches 4 and 5 were released close together, as batch 4 focused on micro businesses, while batch 5 focused on medium and large businesses.
Table 2.7: Sample batches
| Batch number | Number of records | Date of release |
|---|---|---|
| 1 | 12,544 | 22 February 2024 |
| 2 | 19,831 | 21 March 2024 |
| 3 | 8,118 | 17 May 2024 |
| 4 | 3,360 | 2 August 2024 |
| 5 | 2,092 | 12 August 2024 |
The random selection counts for each batch were modelled according to the following criteria (a sketch of the underlying calculation follows this list):
-
if a particular size band or sector had a higher interview target based on the disproportionate stratification, we selected more records to reflect that higher target
-
equally, if a particular size band or sector had historically achieved lower response rates, we selected more records to reflect these lower response rate expectations; the response rate expectations were modelled on how other recent Ipsos surveys had performed when using Market Location sample
-
subsequent batches were also reshaped to take into account the remaining interview targets and response rates achieved up to that point
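A minimal sketch of the batch-sizing logic these criteria imply, under the simplifying assumption that each size and sector cell is sized independently; the function and figures are illustrative, not the actual Ipsos model:

```python
import math

def records_to_release(remaining_target: int, expected_response_rate: float) -> int:
    """Records needed in the next batch for one size/sector cell.

    remaining_target: interviews still required in this cell
    expected_response_rate: modelled from recent surveys using the same
        sample source, then updated with the rate achieved so far
    """
    return math.ceil(remaining_target / expected_response_rate)

# A cell needing 50 more interviews at an expected 10% response rate
# would have roughly 500 records released.
print(records_to_release(50, 0.10))  # 500
```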
2.3 Fieldwork
In total and including the pilot interviews, Ipsos carried out 3,477 interviews between 22 February and 30 August 2024. This section covers the data collection modes, random probability approach, fieldwork preparation and timeline, respondent screening and fieldwork outcomes (including the response rate).
2.3.1 Multimode data collection
As part of a range of measures to improve the survey sample coverage and response rate, we undertook multimode (online and telephone) data collection following the telephone-only pilot. In practical terms, the multimode methodology worked as follows:
-
businesses with email addresses received up to 4 invite emails containing a unique survey link, spread across several weeks of the fieldwork period, provided they had not already completed (or refused) an online or telephone interview by that point; the sample left over from the pilot, where it had email addresses, was also included in these emails
-
beyond this, initial contact with all sampled businesses took place by phone, with Ipsos telephone interviewers calling the sample throughout the fieldwork period; the online survey remained open during this period, with respondents that completed online removed from the telephone queue in real time
-
where a business requested more information over the phone before deciding to take part, interviewers could send out an information and reassurance email to them, generated from the survey script; this email also contained the unique online survey link
Ultimately, 5% of the total 3,477 businesses that completed an interview did so online (167 interviews) and 95% completed it via telephone (3,310 interviews).
Ipsos and the Home Office did not expect there to be substantial mode effects in this survey, given that much of the information collected was factual, rather than attitudinal. Nevertheless, various measures were in place to minimise the chances of mode effects and to monitor the data for any that did arise:
-
we used unimode questionnaire design wherever feasible, whereby the questionnaire administration was as similar as possible for respondents across modes; for example, sequential statements on the telephone survey (for example, at FRAUD_TYPES) appeared as a carousel of statements in the online survey; we minimised the number of questions with long, unprompted answer lists in the telephone survey, which would need to be prompted answer lists in the online survey (for example, at FRAUD_DETECTION and ML_DETECTION) – see Section 2.3.2.
-
we added a screener question to the online survey (VERIFYSENIOR) for respondents to self-validate that they were the right person within their organisation to complete the survey – something the telephone interviewer would have established verbally; this was an extra quality assurance to prevent the survey being completed by someone who would be unable to answer many questions
-
as part of the final data checks, we reviewed the answers of online respondents to see if they had been satisficing during the interview (for example, only answering “don’t know”), or if they had sped through the interview in an unusually short amount of time (under 5 minutes) – a sketch of these checks follows this list; following these broad checks, we did not need to remove any online respondents from the final data
-
finally, the intention was that only a small proportion of the sample would complete the survey online, so that any potential mode effects would be contained, while yielding the benefits mentioned earlier (improving coverage and response by offering an additional option for completion); in this case, we did not have to cap the number of online interviews, given that it was only 5% of all completed interviews
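The online quality checks described above (satisficing and speeding) can be expressed as simple review flags. A minimal sketch, assuming hypothetical columns – a completion time in minutes and one column per question, with “don’t know” stored as the string "DK":

```python
import pandas as pd

def flag_suspect_online_completes(df: pd.DataFrame) -> pd.DataFrame:
    """Flag online completes for manual review, mirroring the checks
    described above: answering only "don't know", or finishing the
    interview in under 5 minutes. Column names are hypothetical.
    """
    question_cols = [c for c in df.columns if c.startswith("Q")]
    dk_share = (df[question_cols] == "DK").mean(axis=1)

    out = df.copy()
    out["flag_satisficing"] = dk_share == 1.0           # only "don't know" answers
    out["flag_speeding"] = out["duration_minutes"] < 5  # unusually short interview
    return out
```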
2.3.2 Minor differences between the telephone and online survey administration
A small number of survey questions were administered differently in the telephone interviews (the primary data collection approach) and the online interviews. These questions were unprompted when asked by phone, that is, the business provided an answer in their own words, which the interviewer then coded against the response list in the questionnaire. This was not feasible online, so the response list was simply shown to online respondents, who selected the appropriate answers. While businesses responding online typically provided a wider range of answers to these questions when shown a list of possible responses, neither the telephone nor the online method is likely to have biased respondents towards a particular set of answers. Therefore, we have merged both sets of answers in this report, and indicated the different approaches used underneath the respective charts for these questions.
2.3.3 Random probability approach
Random-probability sampling and interviewing was used to minimise selection bias. The overall aim with this approach is to have a known outcome for every piece of sample released. Our approach was comparable to other robust business surveys, involving the following:
-
we called each piece of sample a minimum of 5 times, even where there was no response, in addition to email contact (see Section 2.3.1), until we achieved an interview, received a refusal, or gathered enough information to make a judgement on the eligibility of that contact – whichever came first; typically, we called leads 10 or more times (for example, when respondents had requested to be called back at an early stage in fieldwork but were subsequently not reached); sample records called 5 times consecutively without any response were not immediately discarded, but were deprioritised in later stages of fieldwork (for example, by moving them to the back of a call queue)
-
each piece of sample was called at different times of the day, throughout the working week, to make every possible attempt to achieve an interview; we also offered evening and weekend interviews on request to respondents
-
we offered a £15 incentive (in the form of a charity donation) to the businesses identified as large (with 250 or more employees) in our sample, if they completed an interview; respondents had the option to choose between 3 charities: Turn2Us, the National Society for the Prevention of Cruelty to Children (NSPCC) or Samaritans; this reflected the expected lower response rate among large businesses
2.3.4 Fieldwork timeline
There were 2 major pauses in fieldwork outside of the control of Ipsos or the Home Office. These were due to the local government elections in England and Wales, and the later UK general election. During these periods, neither telephone nor online interviewing was permitted. No calls were made and the online survey links were temporarily disabled. This, coupled with the very high target number of interviews relative to other business surveys, led to a protracted fieldwork period for this survey.
Table 2.8 outlines the full fieldwork timeline, across the pilot and main stages.
Table 2.8: Timeline of key events and pauses in fieldwork
| Date | Event |
|---|---|
| 22 February to 1 March 2024 | Pilot, with survey paused from 1 to 14 March to allow post-pilot script changes to be checked and approved |
| 14 March 2024 | Survey relaunched for main fieldwork (incorporating post-pilot changes) |
| 11 April to 2 May 2024 | Survey paused for local pre-election period |
| 3 May 2024 | Survey relaunched |
| 25 May to 4 July 2024 | Survey paused for general pre-election period |
| 5 July 2024 | Survey relaunched |
| 30 August 2024 | Fieldwork closed |
2.3.5 Fieldwork preparation
The Ipsos telephone interviewers received 3 recorded briefings from the research team, first ahead of pilot fieldwork, again ahead of the main fieldwork, and finally ahead of recommencing fieldwork after the general election period (see Table 2.8). The Home Office participated in the first of these briefings. The initial briefing covered the purpose of the research (explained by the Home Office team), the questionnaire content, and any specific instructions and questions they should be mindful of. The post-pilot briefing focused on the changes made as a result of the pilot and the time-saving measures that interviewers could adopt to reduce the average interview length.
The following materials were also provided to interviewers:
-
written briefing notes about all aspects of the survey
-
a copy of the questionnaire, privacy policy and the invite and reassurance email scripts
Screening of respondents
The survey script was set up to screen all sampled businesses at the beginning of the interview, to ensure the business was eligible to take part, and to identify the appropriate individual within the business to take part. At this point, the following businesses were removed as ineligible:
-
businesses with zero employees
-
public sector organisations
In addition, telephone interviewers were instructed to reach someone senior in the business, based in the UK, with an overview of major finance, legal, or compliance matters affecting the business. The interviewer briefing materials clarified that:
-
in micro, small and medium businesses, the appropriate person was most likely to be a business owner, director, or another board member
-
in large businesses, relevant roles could include a list of job titles such as Chief Finance Officer (CFO), Finance Manager, General Counsel (GC), Head of Compliance, other senior finance roles, a director, or a board member
The survey introductory script also summarised who we intended to speak with inside larger businesses, requesting someone senior with an overview of major finance, legal or compliance matters affecting the business, such as a director, legal counsel or compliance officer.
For UK businesses that were part of a multinational group, interviewers requested to speak to the relevant person in the UK who dealt with finance, legal or compliance issues specifically for the UK business. In any instances where a multinational group had different registered businesses in Great Britain and in Northern Ireland, both companies were considered eligible. Franchisees with the same business name but different trading addresses were also all considered eligible as separate, independent respondents.
For the online survey, there was a check question at the start which asked respondents to confirm that they were the most appropriate person from their business to take part, reiterating the same points given to telephone interviewers.
2.3.6 GOV.UK page
A GOV.UK page was created to reassure respondents that the survey was legitimate and provide more information before respondents agreed to take part. Ipsos drafted the text of this page while the Home Office managed its creation.
Telephone interviewers could refer to the page at the start of the telephone call, while the invite and reassurance emails sent to sampled businesses also included a link to it.
2.3.7 Fieldwork monitoring and quality control
Ipsos implemented several quality control and monitoring measures throughout fieldwork:
Ipsos is a member of the Interviewer Quality Control Scheme recognised by the Market Research Society. In accordance with this scheme, the telephone supervisor on this project listened in on at least 10% of the interviews and checked the data entry on screen for these interviews. The Ipsos core research team also listened to recordings for 6 interviews during the pilot phase to help understand if there was anything that could be improved in the questionnaire.
The Ipsos team also monitored fieldwork progress using automated monitoring tools. These provided daily statistics on the number of completed interviews, average interview length and sample eligibility, as well as weekly statistics on the number of completed interviews by data collection mode, business size, sector and region, and the number of cases of fraud, bribery and money laundering being identified each week. In addition, later in fieldwork, Ipsos regularly provided excerpts of the anonymised raw survey data to the Home Office for checking, as well as the descriptions provided by respondents experiencing money laundering (at ML_OTHER). This enabled the Home Office to revalidate cases of money laundering picked up in the survey (see Section 1.3.2, regarding the limitations of the survey statistics on money laundering).
Beyond the thorough checks at ML_OTHER, Ipsos also monitored the responses to the various “other – specify” questions throughout fieldwork, to check whether there were large proportions of answers in “other” categories instead of the pre-determined codes. Beyond the codes added following the pilot stage, no further codes were added to the pre-determined lists.
The telephone project manager had weekly meetings with the Ipsos team to check on fieldwork progress, monitor sample usage, and identify strategies for the coming week. This was also an opportunity for the telephone team to raise anything that interviewers found difficult or confusing. In addition, there was a scheduled meeting between the Ipsos research and telephone teams, and the Home Office, in order to discuss survey progress roughly halfway through fieldwork.
2.3.8 Fieldwork outcomes and response rate
In total, Ipsos completed interviews with 3,477 businesses. The average interview length was 24 minutes. Table 2.9 shows the final outcomes, the unadjusted response rate,[footnote 5] and the adjusted response rate[footnote 6] from the survey. It is good practice when reporting response rates for random probability surveys to report both adjusted and unadjusted rates. The adjusted response rate is considered the more accurate measure of the survey’s performance, as it accounts for the fact that a proportion of the sample was not eligible to take part. A worked sketch of both calculations follows Table 2.9.
Table 2.9: Fieldwork outcomes and response rate calculations
| Outcome | Total |
|---|---|
| Total sample released | 45,945 |
| Completed interviews | 3,477 |
| Incomplete interviews | 304 |
| Ineligible records[footnote 7] | 558 |
| Refusals | 5,156 |
| Unusable numbers[footnote 8] | 5,895 |
| Active numbers[footnote 9] | 30,555 |
| Unadjusted response rate | 8% |
| Expected eligibility[footnote 10] | 87% |
| Adjusted response rate | 10% |
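The footnoted definitions of the 2 response rates are not reproduced here, but one formulation consistent with the figures in Table 2.9 is sketched below; the exact composition of the adjusted denominator is an assumption.

```python
# Figures from Table 2.9
released = 45_945
completes = 3_477
ineligible = 558
unusable = 5_895
expected_eligibility = 0.87

# Unadjusted rate: completes over all released sample
unadjusted = completes / released  # ~0.076, reported as 8%

# Adjusted rate (assumed formulation): completes over the usable sample
# of unknown or confirmed eligibility, discounted by expected eligibility
adjusted = completes / ((released - ineligible - unusable) * expected_eligibility)

print(f"{unadjusted:.1%}, {adjusted:.1%}")  # 7.6%, 10.1%
```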
2.4 Data processing and editing
2.4.1 Data outputs
An SPSS dataset was created to include all the survey variables and derived variables needed for analysis and reporting. This process is discussed further in Section 2.5. In addition, a series of data tables with pre-defined statistical significance testing (based on t-tests, and undertaken for all variables with effective sample sizes of 30 or higher) and crosstabs were created using Quantum software, to speed up analysis and reporting – such tables are very common in market and social research studies. A simplified sketch of this significance testing follows the list below.
Significance testing for subgroup differences focused on the following groups (looking at subgroup versus total differences, as well as subgroup versus subgroup):
-
size of business (number of employees and turnover, from the questionnaire)
-
sector based on SIC (from the sample)
-
region (from the sample)
-
whether the business was in a regulated sector (from the questionnaire)
-
international trading status (from the questionnaire)
-
international trading areas (from the questionnaire)
-
experience of fraud, bribery or money laundering (from the questionnaire)
-
awareness of financial sanctions (from the questionnaire)
-
perceived risk of fraud, corruption, money laundering, or of breaching financial sanctions (from the questionnaire)
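As a simplified illustration of the significance testing applied in the data tables, the sketch below tests a difference in weighted proportions between 2 subgroups using their effective sample sizes. A normal approximation stands in for the exact t-test formulation used in Quantum, which is not reproduced here:

```python
from math import sqrt, erfc

def subgroup_difference_test(p1: float, neff1: float,
                             p2: float, neff2: float) -> float:
    """Two-sided p-value for a difference in weighted proportions,
    using effective sample sizes to account for the weighting."""
    se = sqrt(p1 * (1 - p1) / neff1 + p2 * (1 - p2) / neff2)
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))  # normal approximation to the t-test

# For example: 20% of subgroup A (effective n = 150) versus
# 12% of subgroup B (effective n = 300)
print(subgroup_difference_test(0.20, 150, 0.12, 300))  # ~0.03
```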
2.4.2 Coding
The verbatim responses to unprompted questions could be coded as “other” by interviewers when they did not appear to fit into the predefined code frame. Ipsos’ coding team coded these “other” responses manually, and where possible, assigned them to codes in the existing code frame. It was also possible for new codes to be added where enough respondents – 10% or more of all the respondents at the question – had given a similar answer outside of the existing code frame. The accuracy of the coding was verified by the Ipsos research team, who checked and approved each new code proposed. The Home Office also reviewed codes and suggested merging certain codes at the end of fieldwork.
Coding was not undertaken for OVERSEAS_COUNTRY. This was an open-ended question, asking those who had experienced a bribery incident overseas which country (beyond the UK) their most recent incident was in. Ultimately, only 6 respondents answered this question, providing too little data to report on and potentially making the answers disclosive of the respondent’s identity. Therefore, OVERSEAS_COUNTRY has not been included within the SPSS dataset. Instead, the Home Office were given the 6 verbatim responses (that is, the country names) separately.
2.4.3 Data edits and legitimate base discrepancies
In the pilot survey, within the fraud section of the questionnaire, there was an option for respondents to say they had experienced another type of fraud, other than the ones listed in FRAUD_TYPES. The verbatim descriptions of these incidents were then recorded in a variable called FRAUD_OTHER. Following the pilot, Ipsos and the Home Office concluded that the verbatim responses had not raised any new types of fraud that the questionnaire precodes had not already anticipated. Moreover, the 10 pilot respondents who mentioned “other” types of fraud had each provided descriptions that strongly suggested they were mistaken about these being fraud incidents – for example, one respondent mentioned issues of shoplifting. Therefore, Ipsos edited the responses for these 10 respondents to remove any illegitimate answers. In one case, this led to a base discrepancy of one for FRAUD_RECENT_DUM through to FRAUD_WHEREREPORT_NUM (that is, these questions have a base size of one less than expected), since this respondent had mentioned more than one type of fraud, and had chosen the “other” fraud as their most recent incident, but this incident was subsequently edited out of the survey data.
A similar set of edits was required in the money laundering section of the survey. Responses to ML_OTHER (where respondents who identified money laundering incidents were asked to briefly describe the incidents) were also reviewed by the Ipsos and Home Office teams. In some cases, this review highlighted that the incidents described were not legitimate money laundering incidents. It also became apparent during qualitative recruitment that some further incidents identified by respondents as money laundering in the survey were in fact not money laundering. This review led to:
-
3 incidents being recoded from one type of money laundering to another at ML_TYPES
-
18 instances where a money laundering incident recorded in the survey was removed entirely from the final data
Of the above 18 instances where money laundering was edited out of the survey data, there was one where a respondent had recorded 2 types of money laundering incident at ML_TYPES, but one of these was considered illegitimate and edited out of the data. This meant we had no data on their most recent money laundering incident. This led to a base discrepancy of one for ML_DUM_2 through to ML_WHEREREPORT_NUM, as well as ML_REGULATOR (that is, these questions have a base size of one less than expected).
No edits of this nature were made to the data on bribery and corruption.
2.4.4 Quality checks of the SPSS dataset and data tables
The following standard checks were carried out to ensure the integrity of the data outputs:
-
checking all respondents who had completed the survey had been included
-
checking that all derived variables (listed in Section 2.6.1) had been calculated correctly
-
checking all variables (in SPSS), as well as tables and crosstabs (in the data tables) had been included as specified
-
checking that the SPSS and data tables had matching data, both unweighted and weighted
-
checking the wording and spelling of all variable labels, value labels and table titles
-
checking the base sizes for all variables (in SPSS), as well as individual tables and crosstabs (in the data tables)
-
checking that any net codes added to the data tables had been calculated correctly
2.4.5 Outlier checks
Outliers are unusually low or high numeric values recorded in the survey that strongly influence any mean results. In the context of this survey, outliers might have arisen at the questions measuring the number of incidents of a particular crime type, the questions measuring the costs of fraud or spending to manage the risks of fraud, or the question on the value of bribes.
Outliers are not necessarily inaccurate or unreliable. They could simply reflect the true distribution of values within the population. Therefore, a best practice approach is to minimise the chances of inaccurate or unreliable answers in the survey design, and to only remove outliers from the reported data if there are strong contextual concerns and statistical evidence to suggest that the recorded response was incorrect.
On this study, there were various measures in place before and during fieldwork to minimise the chances of recording inaccurate or unreliable survey responses:
-
the survey script contained several check questions that asked respondents to validate numeric responses below or above a certain threshold; the respondent was told the answer they had given and asked to check if this was correct, or whether they wanted to change their answer; this was both for telephone and online respondents
-
the online survey included an additional validation question, to check that the survey was completed by a senior individual from the respondent business, with an overview of major finance, legal or compliance matters affecting the business
In addition, there was a stage of checking during and after fieldwork to identify potentially inaccurate or unreliable responses, and more broadly to identify statistical outliers in the data:
-
during and after fieldwork, as noted earlier, the response quality of the online data was checked; there was no need to remove any online respondents as a result of these checks
-
after fieldwork was completed, the Ipsos team reviewed the distributions of responses at all questions allowing numeric answers, and identified outliers from a statistical perspective; in order to understand the impact that statistical outliers were having on the results, we adjusted the results using 2 techniques – trimming and Winsorization (a minimal sketch of both follows this list); more detail on the application of these techniques is available in an appendix (Section 5.3)
-
the Ipsos team also more broadly checked the credibility of unusually high responses; this involved cross-referencing the responses against the business size (SIZE), sector (SECTOR) and annual business turnover (TURNOVER) to identify potentially illegitimate responses in a purposive way; the Home Office also reviewed unusually high responses
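A minimal sketch of the 2 adjustment techniques follows. The 99th-percentile threshold here is illustrative only; Section 5.3 documents the thresholds actually applied.

```python
import numpy as np

def trim(values: np.ndarray, upper_pct: float = 99) -> np.ndarray:
    """Trimming: drop observations above a percentile threshold."""
    cutoff = np.percentile(values, upper_pct)
    return values[values <= cutoff]

def winsorize(values: np.ndarray, upper_pct: float = 99) -> np.ndarray:
    """Winsorization: cap observations at a percentile threshold,
    keeping every case in the data."""
    cutoff = np.percentile(values, upper_pct)
    return np.minimum(values, cutoff)

# A skewed set of incident counts: trimming removes the extreme case,
# while Winsorization caps it at the 99th-percentile value.
counts = np.array([1, 1, 2, 2, 3, 5, 400])
print(trim(counts).mean(), winsorize(counts).mean())
```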
All outliers were ultimately kept in the final data used in the findings report, as there was no clear evidence to suggest beyond doubt that any responses were incorrect. We opted to report the unadjusted and adjusted estimates (that is, trimmed or Winsorized, depending on the more appropriate technique in each instance) as an appendix to this technical report (Section 5.3). This was in order to show the extent to which a small number of cases in the data were driving certain estimates (for example, for incidence rates), overall and within subgroups. The unadjusted values are still considered to be the most representative estimates of the whole business population.
2.5 Weighting
2.5.1 Weighting approach
We applied Random Iterative Method (RIM) weighting to account where possible for non-response bias, and for the disproportionate sampling by size and SIC 2007 sector (see Section 2.2.3). The intention was to make the final reported data representative of the actual UK business population.
RIM weighting is a standard weighting approach undertaken in business surveys of this nature. In cases where the weighting variables are strongly correlated with each other, it is potentially less effective than other methods, such as cell weighting. However, this is not the case for this survey, as business size and sector are not correlated.
We used non-interlocking weights by size (number of employees) and sector, based on the population profile in the DBT business population estimates 2023 (the latest ones published at the time of data processing). A 2024 set of business population estimates has more recently been published, but the changes across years are minor, and we have opted on this basis not to reweight the data with the newest statistics.[footnote 11]
Non-interlocking weighting means that we did not weight by size within each sector but weighted the whole sample separately by size and then by sector. Interlocking weighting (that is, weighting by size band within each sector) was also possible but would have potentially resulted in very large weights. This would have reduced the statistical power of the survey results without making any considerable difference to the weighted percentage scores for each question, so was not applied.
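A minimal sketch of non-interlocking RIM weighting (raking): weights are repeatedly scaled to match the size margin, then the sector margin, until both settle. The margins below are illustrative, loosely based on the weighted profile in Table 2.10; this is not the production weighting code.

```python
import pandas as pd

def rim_weight(df: pd.DataFrame, margins: dict, n_iter: int = 50) -> pd.Series:
    """Rake weights to match each weighting variable's population margin.

    margins maps a column name (for example 'size_band') to a Series of
    target population proportions for that variable's categories.
    """
    w = pd.Series(1.0, index=df.index)
    for _ in range(n_iter):
        for col, target in margins.items():
            current = w.groupby(df[col]).sum() / w.sum()
            w = w * df[col].map(target / current)
    return w * len(df) / w.sum()  # rescale so weights average 1

# Illustrative margins (each set of proportions sums to roughly 1)
margins = {
    "size_band": pd.Series({"micro": 0.815, "small": 0.154,
                            "medium": 0.025, "large": 0.005}),
    "sector": pd.Series({"F": 0.136, "G": 0.172, "other": 0.692}),
}

df = pd.DataFrame({
    "size_band": ["micro", "small", "micro", "medium", "large", "micro"],
    "sector":    ["F", "G", "other", "other", "G", "F"],
})
print(rim_weight(df, margins).round(3))
```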
2.5.2 Rationale for not weighting by annual business turnover
In the 2020 Home Office survey, the data was weighted by size in terms of annual business turnover, as well as by sector. These statistics were taken from the ONS IDBR (which is also the main data source for the DBT business population estimates). This is different from the approach in this latest survey, where size weights were based on the number of employees rather than turnover. There were various reasons behind this change in approach:
Reflecting the sampling approach
The sampling approach involved oversampling small (10 to 49 employees), medium (50 to 249) and large businesses (250 or more) relative to micro businesses (1 to 9). The definitions of these size categories are based on the number of employees, rather than turnover. Therefore, it was important to correct accurately for this oversampling, by applying weighting based on the number of employees.
Avoiding unnecessary weighting
The number of employees in a business has a strong linear correlation with annual business turnover, as evidenced in ONS data published in 2022. This is to be expected – businesses with more employees have higher payroll costs, and typically require a higher turnover to fund these costs. Typically in survey weighting, it is not necessary to weight by more than one of a pair of strongly correlated factors; in other words, weighting by number of employees would largely perform the same function as weighting by annual business turnover. Moreover, weighting by 2 correlated weight factors can cause problems with the data and lead to unusual weights for individual respondents. Weighting by unnecessary weight factors also reduces the statistical reliability of the data, so is not desirable.
Available population profile data
The DBT business population estimates do contain some data on annual business turnover, but the most comprehensive counts available in this annual publication are on number of employees within each SIC sector. Therefore, the most readily available population profile data on UK businesses, for the purpose of weighting, was on business size by number of employees, rather than turnover. It would have been possible to submit a data request to the ONS, to request IDBR counts by turnover and sector – this was the approach taken in the 2020 survey – but this would have slowed down the survey data processing. This should be considered as a more minor point, rather than a main reason for not weighting by turnover.
Potential inclusion of zero-employee businesses
As discussed in Section 2.2.1, in the development stages of the project, the Home Office considered including zero-employee businesses. The IDBR does not contain turnover data for a large proportion of zero-employee businesses. It is compiled from Value Added Tax (VAT) and Pay As You Earn (PAYE) records from HM Revenue and Customs, so does not cover unregistered businesses without employees. By contrast, the DBT business population estimates do contain an estimation of the count of zero-employee businesses within each SIC sector.
Ultimately, the Home Office decided to exclude zero-employee businesses from this survey. However, had they been included, it would have been necessary to sample and weight the data based on the available population statistics, which cover the number of employees rather than turnover. In addition, if they are included in future iterations, the approach of weighting by number of employees and sector can be maintained.
Following the approach of other established business surveys
Weighting by number of employees and SIC sector is a common approach already used in various government surveys of businesses. This includes the Home Office and Department for Science, Innovation and Technology Cyber Security Breaches Survey series, which is an Official Statistic. There was, therefore, a strong precedent for using the same weighting approach with this survey.
There are examples of UK business surveys that weight by turnover, such as the ONS Annual Business Survey (ABS). However, the ABS has 2 sets of weights, weighting either by number of employees or by annual business turnover, not both at once. This indicates that both weighting approaches are valid, but that using just one of these potential weight factors should be sufficient.
Ipsos reviewed the annual business turnover profile in the weighted data, and compared it to the weighted profile of the 2020 survey. This acted as a sense check to ensure that the survey did not suffer from any unusual skews when it came to turnover. This broad comparison confirmed that this latest survey had not undercounted high-turnover businesses (£1 million or more a year) in the weighted sample. Therefore, we do not have major concerns that the results are biased because of not weighting by turnover.
2.5.3 Rationale for not weighting by region
Ipsos trialled weighting by region alongside size and sector, but ultimately agreed with the Home Office not to apply these additional region weights. This reflected the fact that there had been no disproportionate sampling by region, meaning corrective region weights were not essential. In addition, there was no strong theoretical reason or wider evidence to suggest that key statistics from this survey would be substantially correlated with region, so non-response bias by region was not a major concern.
Moreover, the final data, weighted by size and sector, already closely aligned with the regional profile of the population, albeit with an underrepresentation of businesses in London. London businesses accounted for 9.6% of the sample weighted by size and sector, compared with 17.6% of the total business population (according to the 2023 business population estimates). However, this underrepresentation of London businesses had a negligible impact on the data. This is evidenced by the survey results with and without region weights ultimately being very similar on several key statistics, including the prevalence (for example, FRAUD_DUM) and perceived risk (for example, FRAUD_RISK) of each crime type.
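As an illustration of this kind of sense check, the sketch below compares a weighted prevalence estimate under 2 alternative weight vectors. The data are synthetic and the comparison is only a stand-in for the actual check performed:

```python
import numpy as np

def weighted_share(flag: np.ndarray, weights: np.ndarray) -> float:
    """Weighted proportion of businesses with flag == 1."""
    return float(np.average(flag, weights=weights))

# Synthetic stand-ins for a prevalence flag (such as FRAUD_DUM) and the
# 2 candidate weight vectors (with and without region weights)
rng = np.random.default_rng(7)
flag = rng.integers(0, 2, size=1_000)
w_size_sector = rng.uniform(0.5, 2.0, size=1_000)
w_with_region = w_size_sector * rng.uniform(0.9, 1.1, size=1_000)

print(weighted_share(flag, w_size_sector),
      weighted_share(flag, w_with_region))
```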
2.5.4 Rationale for not weighting by regulated status
Regulated businesses refer to those with anti-money laundering obligations, under the Money Laundering Regulations 2017. The regulated sector was well-represented in the final survey sample, with 12% of weighted respondents self-identifying as regulated businesses. This is greater than the estimated 2% to 7% of businesses that are regulated within the full population of businesses with employees (derived from the HMT supervision report 2023/24 and DBT business population estimates 2024).[footnote 12] While this is an overrepresentation relative to the population profile, we did not weight by regulated status for 3 main reasons:
Weighting complexity
Anti-money laundering regulation is applicable to certain business activities and is therefore associated with some SIC 2007 sectors more than others. Weighting by regulated status as well as SIC 2007 sector would have increased the complexity of the weighting and could have led to unusual weights for individual respondents, since both variables are linked. This would have potentially reduced the statistical reliability of the weighted results.
Unintended impact on fraud and corruption data
Regulated status was only relevant to the money laundering section of the survey. However, weighting by regulated status would have affected the data for the fraud and corruption questions as well.
Availability of matching and specific population data
The available statistics on the supervised population from HM Treasury includes zero-employee businesses whereas this survey excluded such businesses. In addition, the data sources available to derive estimates on the proportion of regulated businesses consider 2 slightly different time periods, with the HM Treasury figures covering the end of the 2023/2024 financial year, and DBT figures covering the start of 2024. Therefore, there was no single, robust estimate that matched our survey population, to be used for weighting purposes.
2.5.5 Impact of the weighting
The weighting had the greatest impact on the medium and large business samples. While we interviewed 450 medium and 161 large businesses, these only account for 92 and 20 responses respectively in the weighted sample (out of a total 3,477 responses). This was expected, because the sampling approach (see Section 2.2.3) intentionally and considerably oversampled medium and large businesses (and, to a lesser extent, small businesses) relative to the overall business population profile. Therefore, the fact that medium and large businesses make up a very small percentage of the weighted sample is an inherent part of the representative sampling and weighting approach. It does not indicate that the survey estimates underrepresent the more high-profile or costly incidents that affect larger businesses.
Weighting is necessary to produce representative survey estimates, but strong weights reduce statistical reliability. The weight efficiency of a survey measures the impact of the weighting on the survey’s statistical reliability. In this survey, the weight efficiency was 77%. This is calculated by dividing the effective sample size (2,680) by the total unweighted sample size (3,477). The effective sample size calculation is a standard calculation in weighted surveys (using Kish’s formula). It is:
(Σ wᵢ)² / Σ wᵢ² (where wᵢ is the weight assigned to respondent i)
This should be considered a good weight efficiency for a UK business survey, balancing representativeness against statistical reliability. There is no standard benchmark for an acceptable weight efficiency – it will always depend on the extent to which the achieved unweighted survey sample deviates from a proportionate sampling approach. This can be an intentional deviation, as with this survey, where we intentionally and heavily oversampled medium and large businesses. Nevertheless, we can compare the weight efficiency to other major UK business surveys taking a similar sampling and weighting approach, such as the Cyber Security Breaches Survey 2024 (another Official Statistic business survey co-funded by the Home Office), which had a weight efficiency of 70%.
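A direct translation of this calculation, using the survey's published figures as a check (the weight vector below is illustrative, not the real one):

```python
import numpy as np

def weight_efficiency(weights: np.ndarray) -> tuple:
    """Kish's effective sample size, and the resulting weight efficiency."""
    n_eff = weights.sum() ** 2 / (weights ** 2).sum()
    return n_eff, n_eff / len(weights)

# With this survey's published figures, the efficiency is
# 2,680 / 3,477 = 77%.
weights = np.random.default_rng(1).uniform(0.2, 3.0, size=3_477)
print(weight_efficiency(weights))
```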
Table 2.10 shows the sample proportions before and after the weighting was applied. These are rounded to one decimal place.
Table 2.10: Unweighted and weighted sample profile
| | Unweighted percentage | Weighted percentage |
|---|---|---|
| Size | ||
| Micro (1 to 9 employees) | 60.1% | 81.5% |
| Small (10 to 49 employees) | 22.3% | 15.4% |
| Medium (50 to 249 employees) | 12.9% | 2.5% |
| Large (250 or more employees) | 4.6% | 0.5% |
| Sector | ||
| A: Agriculture | 3.2% | 3.6% |
| BDE: Utilities and production | 3.7% | 0.6% |
| C: Manufacturing | 8.0% | 6.2% |
| F: Construction | 11.4% | 13.6% |
| G: Retail and wholesale | 16.0% | 17.2% |
| H: Transport and storage | 3.5% | 3.6% |
| I: Food and hospitality | 8.3% | 10.1% |
| J: Information and communication | 4.5% | 5.3% |
| K: Finance and insurance | 4.0% | 1.7% |
| L: Real estate | 4.9% | 3.5% |
| M: Professional, scientific and technical | 13.4% | 13.1% |
| N: Administration | 8.5% | 9.0% |
| P: Education | 0.9% | 1.5% |
| Q: Health, social care and social work | 4.2% | 4.2% |
| R: Arts and recreation | 1.8% | 1.9% |
| S: Service and membership organisations | 3.7% | 4.8% |
| Region | ||
| East Midlands | 7.6% | 7.2% |
| East of England | 12.0% | 12.2% |
| London | 9.8% | 9.6% |
| North East | 3.1% | 3.2% |
| North West | 8.4% | 8.7% |
| Northern Ireland | 2.3% | 2.1% |
| Scotland | 8.3% | 8.1% |
| South East | 15.8% | 15.7% |
| South West | 11.8% | 11.8% |
| Wales | 4.6% | 4.6% |
| West Midlands | 8.2% | 8.2% |
| Yorkshire and the Humber | 8.1% | 8.7% |
2.6 SPSS data and additional derived variables
2.6.1 List of derived variables
Table 2.11 shows the complete list of derived variables included in the SPSS file, the variable labels and their descriptions (including an explanation of how they were derived). The variable labels for all derived variables include the term “(DERIVED)” to easily identify them.
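Several of the cost variables in Table 2.11 merge exact numeric answers with banded fallback answers by imputing the band midpoint. A minimal sketch of that derivation follows; only the 2 conversions quoted in the table (“£100 to less than £500” becoming 250, and “£10 million or more” becoming 10,000,000) are taken from the source, and any other bands would follow the same midpoint rule.

```python
# Band-to-value conversions; only the 2 values below are quoted in
# Table 2.11 – the remaining bands would follow the same midpoint rule.
BAND_VALUES = {
    "£100 to less than £500": 250,
    "£10 million or more": 10_000_000,  # top band: lower bound assumed
}

def derive_cost(exact_answer, banded_answer):
    """Merge an exact numeric response with a banded fallback, as done
    for FRAUD_DIRECT_DUM and the other cost variables in Table 2.11."""
    if exact_answer is not None:
        return exact_answer
    return BAND_VALUES.get(banded_answer)  # None if no usable answer

print(derive_cost(None, "£100 to less than £500"))  # 250
```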
Table 2.11: Derived variables in the SPSS file
| Variable | Source questions or variables | Description |
|---|---|---|
| FRAUD_TYPES_YES_01 to FRAUD_TYPES_YES_12 | FRAUD_TYPES_01 to FRAUD_TYPES_12 | FRAUD_TYPES requests numeric responses as to how many instances, if any, businesses have had of specific types of fraud. This set of derived variables converts those numeric responses into selected/not selected responses (that is, “selected” for answers of one or more, and “not selected” for answers of zero, “don’t know” or “prefer not to say”). |
| FRAUD_CYBERENABLED_DUM | FRAUD_CYBERENABLED | A yes/no variable. This records whether the business experienced cyber-facilitated fraud or not. A “don’t know” at FRAUD_CYBERENABLED counts as a “no” here. |
| FRAUD_INVOICE_DUM | FRAUD_INVOICENAME, FRAUD_INVOICEREPLY | A yes/no variable. If the business either received an invoice that explicitly mentioned their business or their staff by name, any of their specific business activities, or that the business responded to (that is, they were a specific, intended victim), this counts as invoice fraud and is recorded here. A “don’t know” at either question counts as a “no” here. |
| FRAUD_INVOICENUM_DUM | FRAUD_INVOICENAME, FRAUD_INVOICEREPLY | A numeric variable. If the business had invoice fraud (recorded as “yes” at FRAUD_INVOICE_DUM), this sums the number of (mutually exclusive) instances across both questions to produce the total number of invoice frauds for each respondent. A “don’t know” at either question counts as a “0” in this calculation. If the business did not have any invoice fraud, this variable is set as missing (a value of -1). |
| FRAUD_INVEST_DUM | FRAUD_INVESTNAME, FRAUD_INVESTREPLY | A yes/no variable. If the business either encountered a fraudulent investment opportunity that explicitly mentioned their business or their staff by name, any of their specific business activities, or that their business engaged with (that is, they were a specific, intended victim), this counts as investment fraud and is recorded here. A “don’t know” at either question counts as a “no” here. |
| FRAUD_INVESTNUM_DUM | FRAUD_INVESTNAME, FRAUD_INVESTREPLY | A numeric variable. If the business had investment fraud (recorded as “yes” at FRAUD_INVEST_DUM), this sums the number of (mutually exclusive) instances across both questions to produce the total number of investment frauds for each respondent. A “don’t know” at either question counts as a “0” in this calculation. If the business did not have any investment fraud, this variable is set as missing (a value of -1). |
| FRAUD_DUM | FRAUD_TYPES, FRAUD_INVOICE_DUM, FRAUD_INVEST_DUM | A categorical variable. This records whether the business experienced no fraud, one type of fraud, or multiple types, rather than a simple yes/no – this was done to aid routing in the scripted questionnaire. Fake invoices and fake investment opportunities are only counted as fraud in line with the rules for FRAUD_INVOICE_DUM and FRAUD_INVEST_DUM. A “don’t know” or “prefer not to say” at the source questions counts as “no fraud” of this type here. |
| FRAUD_NUM_DUM | FRAUD_TYPES, FRAUD_INVOICENUM_DUM, FRAUD_INVESTNUM_DUM | A numeric variable. This records how many times the business experienced fraud, among those experiencing any. Fake invoices and fake investment opportunities are only counted as fraud in line with the rules for FRAUD_INVOICE_DUM and FRAUD_INVEST_DUM. A “don’t know” or “prefer not to say” at the source questions counts as a “0” in this calculation. If the business did not have any instances of fraud, this variable is set as missing (a value of -1). |
| FRAUD_NUM_DUM_REBASED | FRAUD_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. |
| FRAUD_DIRECT_DUM | FRAUD_DIRECT, FRAUD_DIRECT_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_RECOVER_VALUE_DUM | FRAUD_DIRECT_DUM, FRAUD_RECOVER, FRAUD_RECOVER_VALUE, FRAUD_RECOVER_VALUE_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. Some businesses are not asked this cost question, as they have already said at FRAUD_RECOVER that they recovered the full value. In this case, that full value amount is taken from FRAUD_DIRECT_DUM. |
| FRAUD_AFTERMATH_DUM | FRAUD_AFTERMATH, FRAUD_AFTERMATH_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_STAFF_DUM | FRAUD_STAFF, FRAUD_STAFF_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_INDIRECT_DUM | FRAUD_INDIRECT, FRAUD_INDIRECT_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_ANYCOST_DUM | FRAUD_DIRECT_DUM, FRAUD_AFTERMATH_DUM, FRAUD_STAFF_DUM, FRAUD_INDIRECT_DUM | This is the combined value of all the cost estimates across the 4 cost questions above. It does not make adjustments for recovered costs. |
| FRAUD_NONRECOVERED_DUM | FRAUD_ANYCOST_DUM, FRAUD_RECOVER_VALUE_DUM | This is the combined value of all the cost estimates across the 4 cost questions above. It makes adjustments for recovered costs. |
| FRAUD_ZERO_DUM | FRAUD_DUM, FRAUD_DIRECT_DUM, FRAUD_AFTERMATH_DUM, FRAUD_STAFF_DUM, FRAUD_INDIRECT_DUM | This records whether the business experienced any costs relating to fraud, among those experiencing any frauds. They are listed either as experiencing “financial loss” or having “no financial loss”. A “don’t know” at any cost question counts as “no financial loss” here. |
| FRAUD_LOSS_DUM | FRAUD_NUM_DUM, FRAUD_ZERO_DUM, FRAUD_LOSS | This records how many frauds (among those experiencing any) led to financial loss. It accounts for the fact that FRAUD_LOSS only picks up this information for businesses that incurred multiple frauds. For businesses that only mention one fraud in this survey (recorded at FRAUD_NUM_DUM), we can already infer if this resulted in a financial loss or not (based on FRAUD_ZERO_DUM). |
| FRAUD_RECENTDUM | FRAUD_TYPES, FRAUD_DUM, FRAUD_INVOICE_DUM, FRAUD_INVEST_DUM, FRAUD_RECENT | This records what type of fraud was most recently experienced (among those experiencing any). It accounts for the fact that FRAUD_RECENT only picks up this information for businesses that incurred multiple types of frauds. For businesses that only mention one type of fraud in this survey (recorded at FRAUD_DUM), we can already infer what type this was (from FRAUD_TYPES, FRAUD_INVOICE_DUM or FRAUD_INVEST_DUM). |
| FRAUD_RECENTLOSS_DUM | FRAUD_NUM_DUM, FRAUD_ZERO_DUM, FRAUD_RECENTLOSS | This records whether the most recent fraud resulted in a financial loss. It accounts for the fact that FRAUD_RECENTLOSS only picks up this information for businesses that incurred multiple frauds. For businesses that only mention one fraud in this survey (recorded at FRAUD_NUM_DUM), we can already infer if this resulted in a financial loss or not (based on FRAUD_ZERO_DUM). |
| FRAUD_RESPONSE_NUM | FRAUD_RESPONSE | A numeric variable. This records how many of the actions listed in FRAUD_RESPONSE that businesses have taken. Those that said “don’t know” at any of the FRAUD_RESPONSE statements are assumed not to have taken this action in response. |
| FRAUD_WHEREREPORT_NUM | FRAUD_WHEREREPORT | A numeric variable. This records how many of the places listed in FRAUD_WHEREREPORT that businesses have reported to. It accounts for any new codes raised at this question during and after fieldwork. |
| FRAUD_MANAGE_NUM | FRAUD_MANAGE | A numeric variable. This records how many of the actions listed in FRAUD_MANAGE that businesses have taken. Those that said “don’t know” at any of the FRAUD_MANAGE statements are assumed not to have taken this action in response. |
| FRAUD_COSTTRAIN_DUM | FRAUD_COSTTRAIN, FRAUD_COSTTRAIN_DK | A numeric variable. This records approximately how much businesses spent on any fraud training or awareness raising activities for staff or customers. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_COSTTRAIN_DUM_REBASED | FRAUD_COSTTRAIN_DUM | This rebases the above variable, to count all businesses (that is, including “£0” values). A “don’t know” at the source question counts as a “0” in this calculation. |
| FRAUD_COSTSOFT_DUM | FRAUD_COSTSOFT, FRAUD_COSTSOFT_DK | A numeric variable. This records approximately how much businesses spent on any digital software to prevent or detect fraud. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_COSTSOFT_DUM_REBASED | FRAUD_COSTSOFT_DUM | This rebases the above variable, to count all businesses (that is, including “£0” values). A “don’t know” at the source question counts as a “0” in this calculation. |
| FRAUD_COSTRISK_DUM | FRAUD_COSTRISK, FRAUD_COSTRISK_DK | A numeric variable. This records approximately how much businesses spent to undertake any fraud risk assessments. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_COSTRISK_DUM_REBASED | FRAUD_COSTRISK_DUM | This rebases the above variable, to count all businesses (that is, including “£0” values). A “don’t know” at the source question counts as a “0” in this calculation. |
| FRAUD_COSTINS_DUM | FRAUD_COSTINS, FRAUD_COSTINS_DK | A numeric variable. This records approximately how much businesses spent on any insurance policies that cover fraud. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_COSTINS_DUM_REBASED | FRAUD_COSTINS_DUM | This rebases the above variable, to count all businesses (that is, including “£0” values). A “don’t know” at the source question counts as a “0” in this calculation. |
| FRAUD_COSTSTAFF_DUM | FRAUD_COSTSTAFF, FRAUD_COSTSTAFF_DK | A numeric variable. This records approximately how much businesses spent on any employees whose job role includes monitoring or investigating fraud risks. In the questionnaire, if someone cannot give exact numeric responses to this cost question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£10 million or more”, the answer is assumed to be “10,000,000”. |
| FRAUD_COSTSTAFF_DUM_REBASED | FRAUD_COSTSTAFF_DUM | This rebases the above variable, to count all businesses (that is, including “£0” values). A “don’t know” at the source question counts as a “0” in this calculation. |
| FRAUD_ALLSPEND_DUM | FRAUD_COSTTRAIN_DUM, FRAUD_COSTSOFT_DUM, FRAUD_COSTRISK_DUM, FRAUD_COSTINS_DUM, FRAUD_COSTSTAFF_DUM | This is the combined value of all the cost estimates across the 5 cost questions above. |
| FRAUD_ALLSPEND_DUM_REBASED | FRAUD_ALLSPEND_DUM | This rebases the above variable, to count all businesses (that is, including “£0” values). A “don’t know” at any of the source questions counts as a “0” in this calculation. |
| OFFERED_DUM | OFFERED_HAD | A yes/no variable. This records whether the business was offered a bribe from another UK business or individual. A “don’t know” or “prefer not to say” at the source question counts as a “no” here. |
| OFFERED_NUM_DUM | OFFERED_HAD | A numeric variable. This records how many times the business was offered a bribe from another UK business or individual, among those experiencing any instances. A “don’t know” or “prefer not to say” at the source question counts as a “0”, so is not included in the base. |
| OFFERED_NUM_DUM_REBASED | OFFERED_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. |
| OFFERED_VALUE_EST | OFFERED_VALUE, OFFERED_VALUE_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this financial value question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£100,000 or more”, the answer is assumed to be “100,000”. |
| OFFERED_RESPONSE_NUM | OFFERED_RESPONSE | A numeric variable. This records how many of the actions listed in OFFERED_RESPONSE that businesses have taken. Those that said “don’t know” at any of the OFFERED_RESPONSE statements are assumed not to have taken this action in response. |
| OFFERED_WHEREREPORT_NUM | OFFERED_WHEREREPORT | A numeric variable. This records how many of the places listed in OFFERED_WHEREREPORT that businesses have reported to. |
| PRIVATE_DUM | PRIVATE_HAD, PRIVATE_ASKED | A yes/no variable. This records whether the business had to give, or was asked to give, a bribe to another UK business. A “don’t know” or “prefer not to say” at the source questions counts as a “no” here. |
| PRIVATE_NUM_DUM | PRIVATE_HAD, PRIVATE_ASKED | A numeric variable. This records how many times the business had to give, or was asked to give, a bribe to another UK business, among those experiencing any instances. A “don’t know” or “prefer not to say” at the source questions counts as a “0” in this calculation. If the business did not have any instances of this kind, this variable is set as missing (a value of -1). |
| PRIVATE_NUM_DUM_REBASED | PRIVATE_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. |
| PRIVATE_VALUE_EST | PRIVATE_VALUE, PRIVATE_VALUE_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this financial value question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£100,000 or more”, the answer is assumed to be “100,000”. |
| PRIVATE_RESPONSE_NUM | PRIVATE_RESPONSE | A numeric variable. This records how many of the actions listed in PRIVATE_RESPONSE that businesses have taken. Those that said “don’t know” at any of the PRIVATE_RESPONSE statements are assumed not to have taken this action in response. |
| PRIVATE_WHEREREPORT_NUM | PRIVATE_WHEREREPORT | A numeric variable. This records how many of the places listed in PRIVATE_WHEREREPORT that businesses have reported to. |
| PUBLIC_INTERACT_DUM | PUBLIC | A categorical variable. This records whether the business had no interactions with a public official, interactions in one context, or interactions in multiple contexts. A “don’t know” at the source question counts as “no interaction” here. |
| PUBLIC_DUM | PUBLIC_HAD, PUBLIC_ASKED | A yes/no variable. This records whether the business had to give, or was asked to give, a bribe to a UK public official. A “don’t know” or “prefer not to say” at the source questions counts as a “no” here. |
| PUBLIC_NUM_DUM | PUBLIC_HAD, PUBLIC_ASKED | A numeric variable. This records how many times the business had to give, or was asked to give, a bribe to a UK public official, among those experiencing any instances. A “don’t know” or “prefer not to say” at the source questions counts as a “0” in this calculation. If the business did not have any instances of this kind, this variable is set as missing (a value of -1). |
| PUBLIC_NUM_DUM_REBASED | PUBLIC_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. |
| PUBLIC_CONTEXT_DUM | PUBLIC_CONTEXT | A yes/no variable. This records whether the business had interactions with a UK public official in one context or multiple contexts. A “don’t know” or “prefer not to say” at the source questions counts as a “no” here. |
| PUBLIC_VALUE_EST | PUBLIC_VALUE, PUBLIC_VALUE_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this financial value question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£100,000 or more”, the answer is assumed to be “100,000”. |
| PUBLIC_RESPONSE_NUM | PUBLIC_RESPONSE | A numeric variable. This records how many of the actions listed in PUBLIC_RESPONSE that businesses have taken. Those that said “don’t know” at any of the PUBLIC_RESPONSE statements are assumed not to have taken this action in response. |
| PUBLIC_WHEREREPORT_NUM | PUBLIC_WHEREREPORT | A numeric variable. This records how many of the places listed in PUBLIC_WHEREREPORT that businesses have reported to. |
| OVERSEAS_DUM | OVERSEAS_PRIVATE, OVERSEAS_PUBLIC | A yes/no variable. This records whether the business had to give, or was asked to give, a bribe to a non-UK business or non-UK public official. A “don’t know” or “prefer not to say” at the source questions counts as a “no” here. |
| OVERSEAS_NUM_DUM | OVERSEAS_PRIVATE, OVERSEAS_PUBLIC | A numeric variable. This records how many times the business had to give, or was asked to give, a bribe to a non-UK business or non-UK public official, among those experiencing any instances. A “don’t know” or “prefer not to say” at the source questions counts as a “0” in this calculation. If the business did not have any instances of this kind, this variable is set as missing (a value of -1). |
| OVERSEAS_NUM_DUM_REBASED | OVERSEAS_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. |
| OVERSEAS_VALUE_EST | OVERSEAS_VALUE, OVERSEAS_VALUE_DK | A numeric variable. In the questionnaire, if someone cannot give exact numeric responses to this financial value question, they are asked for an answer in bands. This variable merges responses for those that give numeric answers as well as banded answers. The midpoint of the banded response is assumed to be the numeric response (for example, “£100 to less than £500” is assumed to be “250”). For those giving the top banded response of “£100,000 or more”, the answer is assumed to be “100,000”. |
| OVERSEAS_RESPONSE_NUM | OVERSEAS_RESPONSE | A numeric variable. This records how many of the actions listed in OVERSEAS_RESPONSE that businesses have taken. Those that said “don’t know” at any of the OVERSEAS_RESPONSE statements are assumed not to have taken this action in response. |
| OVERSEAS_WHEREREPORT_NUM | OVERSEAS_WHEREREPORT | A numeric variable. This records how many of the places listed in OVERSEAS_WHEREREPORT that businesses have reported to. |
| PRIVATE_DOMESTIC_DUM | OFFERED_DUM, PRIVATE_DUM | A yes/no variable. This records whether the business was offered a bribe from another UK business or individual, or had to give, or was asked to give, a bribe to another UK business or individual. A “don’t know” or “prefer not to say” at the source questions counts as a “no” here. |
| PRIVATE_DOMESTIC_TYPE_01 to PRIVATE_DOMESTIC_TYPE_14 | OFFERED_TYPE_01 to OFFERED_TYPE_14, PRIVATE_TYPE_01 to PRIVATE_TYPE_14 | A numeric variable. The type of gift, favour or extra money that was offered, given or asked for, merging together the 2 most recent instances in which the business was offered, had to give, or was asked to give, a bribe to another business or individual. If the same answer was given at OFFERED_TYPE and PRIVATE_TYPE, the response at the equivalent variable here would be “2”. |
| PRIVATE_DOMESTIC_VALUE_EST | OFFERED_VALUE_EST, PRIVATE_VALUE_EST | A numeric variable. This merges together the approximate value of any gifts, favours or extra money in this instance (among those experiencing bribery in these contexts and providing a value). If the business was both offered and gave, or was asked to give, any gifts, favours or extra money, then the values from OFFERED_VALUE_EST and PRIVATE_VALUE_EST are both included. |
| PRIVATE_DOMESTIC_PURPOSE_01 to PRIVATE_DOMESTIC_PURPOSE_10 | OFFERED_PURPOSE_01 to OFFERED_PURPOSE_10, PRIVATE_PURPOSE_01 to PRIVATE_PURPOSE_10 | A numeric variable. The purpose of the gift, favour or extra money that was offered, given or asked for, merging together data from incidents in which bribes were offered to businesses and in which businesses had to give, or were asked to give, a bribe to another business or individual. If the same answer was given at OFFERED_PURPOSE and PRIVATE_PURPOSE, the response at the equivalent variable here would be “2”. |
| PRIVATE_DOMESTIC_APPROACH_01 to PRIVATE_DOMESTIC_APPROACH_REF | OFFERED_APPROACH_01 to OFFERED_APPROACH_REF, PRIVATE_APPROACH_01 to PRIVATE_APPROACH_REF | The way in which the gift, favour or extra money was offered, given or asked for, merging together data from incidents in which bribes were offered to businesses and in which businesses had to give, or were asked to give a bribe to another business or individual. If the same answer was given at OFFERED_APPROACH and PRIVATE_APPROACH, the response at the equivalent variable here would be “2”. |
| MIXED_BRIBERY_DUM | OFFERED_DUM, PUBLIC_DUM, OVERSEAS_DUM | A yes/no variable. This records whether the business was offered a bribe from a UK business or individual, or had to give, or was asked to give, a bribe to a UK public official or an overseas business or public official. This variable was not reported in the findings report, to avoid confusion with the GIVING_BRIBERY_DUM measure below. |
| MIXED_BRIBERY_NUM_DUM | OFFERED_NUM_DUM, PUBLIC_NUM_DUM, OVERSEAS_NUM_DUM | A numeric variable. This records how many times the business was offered a bribe from a UK business or individual, or had to give, or was asked to give, a bribe to a UK public official or an overseas business or public official. If the business did not have any instances of this, this variable is set as missing (a value of -1). As per the reason above, this variable was not reported in the findings report. |
| MIXED_BRIBERY_NUM_DUM_REBASED | MIXED_BRIBERY_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. |
| GIVING_BRIBERY_DUM | PRIVATE_DUM, PUBLIC_DUM, OVERSEAS_DUM | A yes/no variable. This records whether the business had to give, or was asked to give, a bribe to a UK business, individual or public official, or an overseas business or public official. |
| GIVING_BRIBERY_NUM_DUM | PRIVATE_NUM_DUM, PUBLIC_NUM_DUM, OVERSEAS_NUM_DUM | A numeric variable. This records how many times the business had to give, or was asked to give, a bribe to a UK business, individual or public official, or an overseas business or public official. If the business did not have any instances of this, this variable is set as missing (a value of -1). |
| GIVING_BRIBERY_NUM_DUM_REBASED | GIVING_BRIBERY_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. |
| BRIBERY_DUM | OFFERED_DUM, PRIVATE_DUM, PUBLIC_DUM, OVERSEAS_DUM | A yes/no variable. This records whether the business was offered, had to give, or was asked to give, a bribe in any context (within or outside the UK). |
| BRIBERY_NUM_DUM | OFFERED_NUM_DUM, PRIVATE_NUM_DUM, PUBLIC_NUM_DUM, OVERSEAS_NUM_DUM | A numeric variable. This records how many times the business was offered, had to give, or was asked to give, a bribe in any context, among those experiencing any instances. If the business did not have any instances of this, this variable is set as missing (a value of -1). This variable was not reported in the findings report, to avoid double counting instances of bribery (as it included a measure of businesses both being offered a bribe, and having to give or being asked to give a bribe). |
| BRIBERY_NUM_DUM_REBASED | BRIBERY_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. As per the reason above, this variable was not reported in the findings report. |
| CORRUPTION_MANAGE_NUM | CORRUPTION_MANAGE | A numeric variable. This records how many of the actions listed in CORRUPTION_MANAGE that businesses have taken. Those that said “don’t know” at any of the CORRUPTION_MANAGE statements are assumed not to have taken this action in response. |
| ML_TYPES_YES_01 to ML_TYPES_YES_11 | ML_TYPES_01 to ML_TYPES_11 | ML_TYPES requests numeric responses as to how many instances, if any, businesses have had of specific money laundering incidents. This set of derived variables converts those numeric responses into selected/not selected responses (that is, “selected” for answers of one or more, and “not selected” for answers of 0, “don’t know” or “prefer not to say”). |
| ML_DUM | ML_TYPES | A categorical variable. This records whether the business experienced no money laundering incidents, one type of incident, or multiple types, rather than a simple yes/no – this was done to aid routing in the scripted questionnaire. A “don’t know” or “prefer not to say” at the source questions counts as “no incident” of this type here. |
| ML_NUM_DUM | ML_TYPES | A numeric variable. This records how many times the business experienced money laundering incidents, among those experiencing any. If the business did not have any instances of money laundering, or said “don’t know” or “prefer not to say”, this variable is set as missing (a value of -1). |
| ML_NUM_DUM_REBASED | ML_NUM_DUM | This rebases the above variable, to count all businesses (that is, including “0” values). A “don’t know” or “prefer not to say” at any of the source questions counts as a “0” in this calculation. |
| ML_DUM_2 | ML_TYPES, ML_DUM, ML_RECENT | This records what type of money laundering incident was most recently experienced (among those experiencing any). It accounts for the fact that ML_RECENT only picks up this information for businesses that incurred multiple types of incidents. For businesses that only mention one type of money laundering incident in this survey (recorded at ML_DUM), we can already infer what type this was (from ML_TYPES). |
| ML_DETECTION_NUM | ML_DETECTION | A numeric variable. This records the number of routes listed in ML_DETECTION through which businesses were made aware of the money laundering incident. It accounts for any new codes raised at this question during and after fieldwork. |
| ML_RESPONSE_NUM | ML_RESPONSE | A numeric variable. This records how many of the actions listed in ML_RESPONSE that businesses have taken. Those that said “don’t know” at any of the ML_RESPONSE statements are assumed not to have taken this action in response. |
| ML_WHEREREPORT_NUM | ML_WHEREREPORT | A numeric variable. This records how many of the places listed in ML_WHEREREPORT that businesses have reported to. It accounts for any new codes raised at this question during and after fieldwork. |
| ML_MANAGE_NUM | ML_MANAGE | A numeric variable. This records how many of the actions listed in ML_MANAGE that businesses have taken. Those that said “don’t know” at any of the ML_MANAGE statements are assumed not to have taken this action in response. |
| SANCTIONS_MANAGE_NUM | SANCTIONS_MANAGE | A numeric variable. This records how many of the actions listed in SANCTIONS_MANAGE that businesses have taken. Those that said “don’t know” at any of the SANCTIONS_MANAGE statements are assumed not to have taken this action in response. |
| SMETURNOVER | SIZE, TURNOVER | This creates net codes merging the questionnaire information on number of employees (SIZE) and annual turnover (TURNOVER). The aim was to create bespoke subgroups of low-turnover (under £1 million) and high-turnover (£1 million or more) businesses within the small and medium enterprise (SME) population. This, in turn, enabled reporting of differences in terms of turnover that were not simply correlated with size (number of employees). |
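The banded-to-numeric merging and rebasing rules that recur throughout the table above follow one mechanical pattern. The sketch below illustrates that pattern in Python; this is not the production SPSS syntax, and the band codes shown are hypothetical examples rather than the actual code frame.

```python
import pandas as pd

# Hypothetical band codes mapped to assumed values, following the rule
# described above: each closed band takes its midpoint, and the open-ended
# top band ("£10 million or more") takes its lower bound.
BAND_VALUES = {
    1: 250,          # "£100 to less than £500" -> midpoint of 250
    2: 2_750,        # "£500 to less than £5,000" -> midpoint of 2,750
    3: 10_000_000,   # "£10 million or more" -> lower bound
}

def derive_cost_dum(numeric: pd.Series, banded: pd.Series) -> pd.Series:
    """Build a *_DUM cost variable: keep exact numeric answers where given,
    otherwise substitute the assumed value of the banded answer."""
    merged = numeric.copy()
    use_band = merged.isna()  # respondents routed to the banded follow-up
    merged[use_band] = banded[use_band].map(BAND_VALUES)
    return merged

def rebase(dum: pd.Series) -> pd.Series:
    """Build a *_DUM_REBASED variable: count all businesses, treating
    "don't know" or not-asked responses (held as missing) as £0."""
    return dum.fillna(0)

# For example:
# df["FRAUD_COSTTRAIN_DUM"] = derive_cost_dum(df["FRAUD_COSTTRAIN"],
#                                             df["FRAUD_COSTTRAIN_DK"])
# df["FRAUD_COSTTRAIN_DUM_REBASED"] = rebase(df["FRAUD_COSTTRAIN_DUM"])
```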
2.6.2 Data values classed as missing
The SPSS file includes 2 types of missing values, applied systematically throughout the file (a short code sketch of these rules follows the list below):
-
any value of -1 (“Not asked”) means either that this question was not asked to the respondent (as a result of the questionnaire routing), or that a derived variable has excluded a respondent on purpose; an example is the FRAUD_NUM_DUM derived variable, which is only based on those that had experienced any fraud; this means that those not experiencing any fraud are listed as -1 at this variable
-
for any questions that appear as scale variables (that is, where respondents gave their answer as a number), rather than a nominal or ordinal variable, the file also classes any -96 (“Not applicable to my business”), -97 (“Don’t know”) and -98 (“Prefer not to say”) values as missing, meaning they are excluded from the base; this is so that, when calculating mean and median values, only the respondents actually giving a number are counted.
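For anyone replicating these rules outside SPSS, the following is a minimal sketch, assuming the dataset has been read into a pandas DataFrame and that the list of scale variables is supplied by the user:

```python
import numpy as np
import pandas as pd

NOT_ASKED = -1                   # missing on every variable
SCALE_MISSING = [-96, -97, -98]  # not applicable / don't know / prefer not to say

def apply_missing_rules(df: pd.DataFrame, scale_vars: list[str]) -> pd.DataFrame:
    """Recode the reserved values described above to NaN so that they are
    excluded from any base. The -96/-97/-98 codes are treated as missing on
    scale (numeric-answer) variables only, so that means and medians are
    computed over respondents who actually gave a number."""
    out = df.replace(NOT_ASKED, np.nan)
    out[scale_vars] = out[scale_vars].replace(SCALE_MISSING, np.nan)
    return out
```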
2.6.3 Treatment of “don’t know” and “prefer not to say” responses when creating derived variables
When deriving the variables listed in Table 2.11, “don’t know” and “prefer not to say” responses were treated differently depending on the nature of the derived result. This has implications for whether the derived results may be slight underestimates of the true picture in the population.
When calculating, for each crime type, the mean or median number of incidents, prevalence estimates, or incidence estimates, our derived variables inherently include “don’t know” and “prefer not to say” responses in the base. That is, these estimates are based on the entire sample of 3,477. For example, ML_NUM_DUM (the number of money laundering incidents of any type) is derived from the responses at ML_TYPES_01 through to ML_TYPES_11. It may be that a respondent said “prefer not to say” at ML_TYPES_01, “don’t know” at ML_TYPES_02, and “0” to all other instances at the ML_TYPES question. In this case, we could either assume that the derived value at ML_NUM_DUM is “0”, or that the result for this respondent is missing (since we do not have complete information on all the types of money laundering).
Classing this respondent as missing by default (that is, assuming that “prefer not to say” or “don’t know” overrides the other “0” responses at ML_TYPES) would greatly reduce the number of cases included in the base, and would very likely lead to an overestimation of incidents. Moreover, the most likely interpretation of “don’t know” is that the business has not had this kind of incident. This is because we have surveyed a senior person in the business, and if they do not know about these incidents, the business most likely has not experienced them. It is possible that taking this approach slightly underestimates the number of incidents. However, in the context of this survey, this would not be as severe as the substantial overestimation that would result from taking the alternative approach.
For the total, combined economic cost and spending estimates for fraud (FRAUD_ANYCOST_DUM, FRAUD_NONRECOVERED_DUM and FRAUD_ALLSPEND_DUM_REBASED), we also take the same approach. For example, FRAUD_ANYCOST_DUM (the estimated total costs from fraud or attempted fraud in the last 12 months) is derived by summing the itemised economic cost results at FRAUD_DIRECT_DUM, FRAUD_AFTERMATH_DUM, FRAUD_STAFF_DUM and FRAUD_INDIRECT_DUM. It may be that a respondent said “£1,000” at FRAUD_DIRECT_DUM, “£500” at FRAUD_AFTERMATH_DUM, “prefer not to say” at FRAUD_STAFF_DUM and “don’t know” at FRAUD_INDIRECT_DUM.
Again, we could either assume that the derived value at FRAUD_ANYCOST_DUM is “£1,500”, or that the result for this respondent is missing (since we do not have complete information on all the itemised costs). As above, classing the respondent as missing by default would greatly reduce the number of cases included in the base, and would ignore important cost information provided by many respondents. Therefore, our default approach is to include these respondents, and assume “don’t know” and “prefer not to say” responses are the same as someone answering “£0”.[footnote 13] Once more, it is possible that taking this approach slightly underestimates the total, combined economic cost and spending estimates for fraud.
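A minimal sketch of this default, using the example respondent above (Python for illustration; the production derivation was done in SPSS):

```python
import pandas as pd

ITEMISED = ["FRAUD_DIRECT_DUM", "FRAUD_AFTERMATH_DUM",
            "FRAUD_STAFF_DUM", "FRAUD_INDIRECT_DUM"]

def derive_anycost(df: pd.DataFrame) -> pd.Series:
    """Derive FRAUD_ANYCOST_DUM: sum the itemised fraud costs, treating
    "don't know" and "prefer not to say" (held as missing after the
    recoding in section 2.6.2) as £0 rather than dropping the case."""
    return df[ITEMISED].fillna(0).sum(axis=1)

# The example respondent: £1,000 + £500 + "prefer not to say" + "don't know"
example = pd.DataFrame(
    [[1000.0, 500.0, float("nan"), float("nan")]], columns=ITEMISED)
print(derive_anycost(example))  # -> 1500.0
```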
By contrast, for the individual, itemised economic cost estimates for fraud (FRAUD_DIRECT_DUM, FRAUD_AFTERMATH_DUM, FRAUD_STAFF_DUM and FRAUD_INDIRECT_DUM), we exclude those that do not provide a numeric £ value (for example, at FRAUD_DIRECT) or banded £ value (for example, at FRAUD_DIRECT_DK) from the base. This is because, in these cases, there is no legitimate reason to assume that “don’t know” and “prefer not to say” responses mean “£0”.
2.6.4 Rounding differences between the SPSS dataset and published data
The default setting of the SPSS crosstabs command does not handle non-integer weighting in the same way as typical survey data tables run using the Quantum software. This means that, in a small number of cases, weighted survey percentages run from the SPSS dataset and the Quantum data tables may differ. These differences are expected to be small – no more than one percentage point – and to occur only rarely. However, to avoid any inconsistency, and for practical reasons, the descriptive percentage statistics in the findings report are taken almost entirely from the Quantum data tables. The only exceptions, in which figures were reported based on SPSS outputs, were for complex derived variables that could not be included in the data tables (for example, the mean cost and spending estimates for fraud, or the incidence rates for the different crime types).
3. Qualitative strand
A total of 38 qualitative interviews were undertaken alongside and shortly after the quantitative survey, covering corruption, money laundering and financial sanctions. The qualitative interviews allowed us to explore these areas in greater depth than was possible in the quantitative survey, providing more details about the crimes experienced, the impacts on businesses, and businesses’ responses to incidents. They also more broadly covered businesses’ awareness and understanding of these rarer and less well documented areas of economic crime.
3.1 Sampling
The sample for the qualitative interviews was taken from the quantitative survey, and was drawn intermittently throughout the survey fieldwork period. Respondents in the survey were asked whether they would be willing to take part in a follow-up interview based on a similar topic. A total of 1,993 (57% of those asked) agreed to be recontacted.
However, not all these records were used for the qualitative research. Ipsos undertook a highly purposive approach to qualitative sample selection, based on a range of answers provided in the quantitative survey, and subsequently during the qualitative recruitment. This was to mitigate the risk of recruiting businesses that had encountered very simplistic or low-impact incidents, or would not be able to discuss relevant processes to manage the risks of corruption and money laundering.
Initially, the recruitment focused on businesses that had experienced bribery or money laundering in the last 12 months (as recorded in the survey). Within this overarching requirement, we assigned a priority score of one (highest priority), 2 or 3 (lowest priority) to each recontact lead, based on the substantiveness of the incident (as per Table 3.1).
Table 3.1: Initial prioritisation criteria for qualitative interviews
| Priority group | Fulfilled at least one of the following criteria | Available records* | Achieved interviews* |
|---|---|---|---|
| Corruption priority one | reported a bribery incident to a third party in the last 12 months; felt their business was “very at risk” of corruption; the value of the bribe was in the top 10% of all recorded values. | 19 | 4 |
| Corruption priority 2 | the value of the bribe was in the top 25% of all recorded values; the respondent received the offer directly. | 19 | 2 |
| Corruption priority 3 | all other businesses that experienced a bribery incident. | 34 | 7 |
| Money laundering priority one | reported a money laundering incident to a third party in the last 12 months; felt their business was “very at risk” of money laundering; had either indisputable or strong evidence that the incident involved money derived from criminal activity; detected the incident through software, monitoring or purchases, a third-party audit, or via their bank or insurance company. | 47 | 11 |
| Money laundering priority 2 | felt their business was “fairly at risk” of money laundering; had evidence that the incident involved money derived from criminal activity. | 11 | 3 |
| Money laundering priority 3 | all other businesses that suspected a money laundering incident. | 24 | 3 |
Notes:
- * Some businesses recorded both bribery and money laundering incidents, so there was an overlap between the corruption and money laundering priority samples.
A total of 25 interviews were recruited according to the criteria in Table 3.1. After the survey fieldwork closed on 30 August 2024, and by 10 September 2024, the eligible survey leads of this profile had been exhausted. Therefore, the Home Office agreed to relax the criteria for inclusion, to incorporate either businesses that had experienced bribery or money laundering incidents since 2020 (that is, not just in the last 12 months), or businesses that had a range of processes in place to manage the risks of corruption and money laundering. This led to the recruitment of 4 additional money laundering interviews with businesses that had experienced any incidents since 2020, and 12 general interviews with businesses that had measures in place.
3.2 Recruitment approach
Recruitment for the qualitative element was carried out by email and telephone, using the contact details collected in the survey. This was managed by Ipsos’ specialist business recruitment partner. To encourage participation, we offered a £70 charity donation made on behalf of each participant. No direct cash incentives were offered, to eliminate the risk of paying businesses that might have been complicit in economic crime.
Because of the highly purposive recruitment approach and the limited sample, there were no firmographic quotas (for example, by size, sector and region). Instead, the only targets were those for corruption and money laundering-focused interviews (with an intention to achieve 19 in each group, for 38 interviews in total).
Businesses were asked to reconfirm the details they had given as part of the survey. There were 2 mis-recruits early in the qualitative fieldwork (not included in the final tally of interviews): it transpired that these 2 participants had mistaken a fraud incident for a money laundering incident. Following this, 2 additions were made to the screening process. The first was to ask participants explicitly how much extra detail they felt they could give when discussing the incident, to mitigate the chances of recruiting low-impact cases. The second was to ask participants for a short explanation of the incident, which was passed to the Ipsos research team. The research team then judged whether the incident met the inclusion criteria.
The full breakdown of recruited interviews by characteristic is provided below in Tables 3.2 and 3.3.
Table 3.2: Completed qualitative interviews by crime types experienced
| Experience of crime | Achieved interviews |
|---|---|
| Was offered a gift, favour, or extra money other than the official fee, in order to secure a business transaction, or to get the business to perform a service within the last 12 months | 6 |
| Was asked to give a gift, favour, or extra money other than the official fee, to any UK business, in order to secure a business transaction, or to get others to perform a service within the last 12 months | 4 |
| Was asked to give a UK public official a gift, favour, or extra money other than the official fee within the last 12 months | 0 |
| Had to give, or was asked to give, another business outside the UK a gift, favour, or extra money other than the official fee within the last 12 months | 0 |
| Total interviews covering specific bribery incidents* (encompassing the 4 rows above) | 10 |
| One type of money laundering incident experienced within the last 12 months | 3 |
| Multiple types of money laundering incident experienced within the last 12 months | 9 |
| Experienced money laundering since 2020 | 4 |
| Total interviews covering specific money laundering incidents* (encompassing the 3 rows above, accounting for overlaps) | 16 |
| Total broader interviews (where participants had not experienced an incident within the last 12 months) | 12 |
Notes:
- * Some businesses recorded both bribery and money laundering incidents, or more than one kind of bribery incident, so there was an overlap between categories.
Table 3.3: Completed qualitative interviews by participant characteristics
| Characteristic | Achieved interviews |
|---|---|
| Regulated versus unregulated businesses | |
| Regulated | 8 |
| Unregulated | 30 |
| Size (number of employees) | |
| Micro (1 to 9 employees) | 12 |
| Small (10 to 49 employees) | 12 |
| Medium (50 to 249 employees) | 12 |
| Large (250 or more employees) | 2 |
| Size (annual turnover) | |
| Under £85,000 | 0 |
| £85,000 to under £250,000 | 5 |
| £250,000 to under £500,000 | 5 |
| £500,000 to under £1 million | 3 |
| £1 million to under £5 million | 9 |
| £5 million to under £10 million | 4 |
| £10 million to under £25 million | 7 |
| £25 million or more | 4 |
| Unknown | 1 |
| Head office location | |
| England | 32 |
| Scotland | 5 |
| Northern Ireland | 1 |
| Wales | 0 |
| Sector | |
| A: Agriculture | 0 |
| BDE: Utilities and production | 0 |
| C: Manufacturing | 2 |
| F: Construction | 1 |
| G: Retail and wholesale | 4 |
| H: Transport and storage | 1 |
| I: Food and hospitality | 1 |
| J: Information and communication | 1 |
| K: Finance and insurance | 7 |
| L: Real estate | 8 |
| M: Professional, scientific and technical | 9 |
| N: Administration | 2 |
| P: Education | 0 |
| Q: Health, social care and social work | 1 |
| R: Arts and recreation | 0 |
| S: Service and membership organisations | 1 |
3.3 Fieldwork and topic guides
The Ipsos research team carried out all fieldwork between 12 July 2024 and 27 September 2024. Interviews were carried out both over the phone and via Microsoft Teams, as per the participant’s preference. They lasted around 60 minutes on average.
There were separate qualitative topic guides for corruption and money laundering. These covered the topics of interest provided by the Home Office, after they reviewed the interim survey results. The broad themes covered in these guides were:
-
the business’s experience of their bribery or money laundering incident(s)
-
the costs associated with the incident(s)
-
reporting the incident(s) to a third party
-
the business’s understanding of corruption or money laundering
-
the business’s interpretation of the risk of corruption or money laundering
-
awareness and information needs around financial sanctions (only for the interviews that also covered money laundering)
Some of these topics were treated as lower priority within the hour of interview time, with the focus being on details of the incident(s) – parts 1 to 3 above. Where interviews covered both crime types, both topic guides were used. In addition, when the recruitment was later expanded to include businesses that had not necessarily experienced any corruption or money laundering incidents, the focus shifted to the latter sections of the topic guides (parts 4 to 6 above). Across the course of fieldwork, the core research team reviewed the notes from each interview and provided ongoing guidance to all interviewers as to the topics that needed further coverage in the remaining interviews.
As part of the qualitative interviews, participants were asked about how they would define corruption and money laundering. It is important to note that, in the quantitative survey, most participants were presented with a definition of corruption or money laundering in order to inform responses on risk perceptions and preparedness. For corruption, this was as follows:
‘Corruption is an abuse of entrusted power for private benefit that usually breaches laws, regulations, or standards of integrity or professional behaviour. Gifts, favours, or extra money other than an official fee, exchanged in order to secure a business transaction, or acquire a service, would be examples of corruption.’
For money laundering, it was as follows:
‘Money laundering is where individuals or organisations attempt to hide money derived from crime by converting it into a legitimate source.’
While it is possible that the responses of some participants were framed by the prior provision of these definitions in the survey, we do not expect this to have had a substantial effect in the qualitative interviews, given the time lag between the quantitative survey and qualitative interviews.
The topic guides are reproduced in full here.
3.4 Quality assurance and analysis
The Ipsos research team took an iterative approach to analysis. This approach had several quality checks built in. It involved:
-
recording interviews and summarising them in an Excel notes template (approved by the qualitative director on the project, and reviewed after each set of notes was added), which categorised findings by topic area and the research questions within that topic area
-
sending a selection of the more detailed interviews for transcription, so that they could be reviewed and quality-assured more thoroughly
-
analysis sessions with the entire fieldwork team and the Home Office (whose team attended the final session), held first halfway through fieldwork and again towards the end of fieldwork – in these sessions, researchers discussed the findings from individual interviews and drew out emerging key themes, recurring findings and other patterns across the interviews
-
developing an analysis framework, based on these analysis sessions, against which the qualitative data could be assessed
-
reviewing transcripts to identify verbatim quotes to include in the main report
4. Glossary
This glossary defines the specific use of words or phrases in this study. Some of these may have a broader use in everyday language, but were defined more precisely for this research.
| Bribery | Bribery is a type of corruption. In the survey, several questions do not explicitly mention bribes, but refer to “a gift, favour or extra money, other than the official fee, in order to secure a business transaction, or to get the respondent business to perform a service”. Within this definition, survey respondents were explicitly told to exclude free samples or trials given for marketing purposes. |
| Businesses | This study covered businesses with employees only (that is, it excluded businesses with zero employees). Where the report refers to ‘businesses’, this is implicitly referring to businesses with employees. |
| Corruption | Corruption is the abuse of entrusted power for private benefit that usually breaches laws, regulations, standards of integrity and/or standards of professional behaviour. Gifts, favours, or extra money other than an official fee, exchanged in order to secure a business transaction, or acquire a service, would be examples of corruption. Survey respondents were provided with this definition of corruption at the relevant questions. |
| Cyber-facilitated fraud | Cyber-facilitated fraud is a subset of all fraud, which uses data or access obtained through electronic means such as malware, hacking (of files, user accounts or bank accounts) or phishing attacks. |
| Economic crime | Economic crime refers to a broad category of activity involving money, finance or assets, the purpose of which is to unlawfully obtain a profit or advantage for the perpetrator or cause loss to others. In the context of this report, the 3 economic crimes covered were fraud, corruption and money laundering. |
| Financial sanctions | Financial sanctions commonly include measures that restrict access to financial markets, funds and economic resources, and limit the provision of financial services, to a sanctioned person or business (known as a designated person). Financial sanctions restrictions can also be applied more broadly against specific groups or entire sectors. They can be applied on a geographic basis or in relation to a particular issue, such as terrorism, human rights or anti-corruption. |
| Fraud | In order to avoid any potential confusion or distraction caused by legal definitions (such as the definition set out in the Fraud Act 2006), the term fraud is not specifically defined in the survey. However, the survey questions give examples of what constitutes fraud in a business context. These include staff or fraudsters attempting to use business payment information without permission, trying to access a business’s online bank account to move money without permission, or tricking a business into changing bank transfers to divert funds to fraudsters. Also included are fake invoices or investment opportunities where there is a specific intended victim, or that elicit a response from the targeted business. Finally, there are frauds emanating from legitimate suppliers or customers, including suppliers claiming for undelivered goods or services, or falsifying expenses, as well as customers dishonestly claiming refunds. The frauds measured in the survey include incidents where fraudsters successfully defrauded the business, as well as attempted, but unsuccessful, frauds. |
| High-turnover and low-turnover small and medium enterprises (SMEs) | In the context of this study, high-turnover SMEs are defined as enterprises with 1 to 249 employees, with £1 million or more in annual turnover. Low-turnover SMEs are enterprises with 1 to 249 employees, with under £1 million in annual turnover. |
| Incidence rate | The incidence rate, in the context of this study, is the total number of cases of economic crime among every thousand businesses (with employees). For example, the incidence rate of fraud among UK businesses represents the number of frauds that are estimated to have been perpetrated in the last 12 months, for every thousand UK businesses. The value of this statistic is in allowing comparisons across subgroups such as the size of business, as it accounts for the fact that there are far more micro and small businesses than medium and large ones. |
| Margin of error | The margin of error is a statistical measure of the uncertainty associated with the survey results. It indicates the range – a lower bound and upper bound – within which the true result for the business population is likely to fall, given the responses collected from our survey sample. There is a further guide to margins of error in an annex at the end of this report. |
| Money laundering | Money laundering occurs when money derived from crime is spent, hidden from view, transferred between individuals or business entities, or moved outside of the country. In order to reduce confusion about what it covered, the survey did not explicitly refer to money laundering, but instead asked about specific instances where respondents knew or suspected that money was derived from criminal activity. |
| Prevalence rate | The prevalence rate is the proportion of UK businesses (with employees) in the population that have experienced economic crime (for example, the proportion, or percentage, of businesses that have experienced fraud in the last 12 months). |
| Regulated business | Within this survey, “regulated businesses” refers to those with anti-money laundering obligations. Under the Money Laundering Regulations 2017, businesses in different sectors, including accountants, gambling firms, financial service businesses, estate agents and solicitors, must register to be monitored by a relevant supervisory authority. This is sometimes called Anti-Money Laundering (AML) supervision. Supervisory authorities could include public bodies such as the Financial Conduct Authority or Gambling Commission, government departments such as HM Revenue & Customs, and professional bodies such as the Law Society. |
| Suspicious Activity Report (SAR) | Suspicious Activity Reports (SARs) alert law enforcement to potential instances of money laundering or terrorist financing. Persons working in the regulated sector are legally required to submit a SAR in respect of information that comes to them in the course of their business if they know, suspect, or have reasonable grounds for knowing or suspecting that a person is engaged in, or attempting, money laundering or terrorist financing. SARs may also be submitted by unregulated businesses and private individuals where they have suspicion or knowledge of money laundering or terrorist financing, though they are not legally required to do this. SARs are submitted to the UK’s Financial Intelligence Unit (UKFIU) which is housed within the National Crime Agency (NCA). |
| Small and medium enterprise (SME) | SMEs are a grouping of the business population that includes businesses with 1 to 249 employees (that is, micro businesses, small businesses and medium businesses). |
| Subgroup difference | Subgroup differences are differences between particular groups within the overall business population (such as specific size or turnover bands, or industry sectors), or between a particular group and the overall result for all businesses. |
| Total incidence | The total incidence shows the total number of cases of economic crime across UK businesses (with employees). For example, the total incidence of fraud among UK businesses represents the number of frauds that are estimated to have been perpetrated against all UK businesses in the last 12 months. |
| Total prevalence | The total prevalence accounts for the total number of UK businesses (with employees) in the population that have experienced economic crime (for example, the total number of businesses that have experienced fraud in the last 12 months). |
5. Appendices
5.1 List of published sources for rapid evidence review
Agca, S., Slutzky, P., & Zeume, S. (2021). Anti-Money Laundering Enforcement, Banks, and the Real Economy. The George Washington University, Institute for International Economic Policy.
Bekkers, L., van ‘t Hoff-de Goede, S., Misana-ter Huurne, E., van Houten, Y., Spithoven, R., & Leukfeldt, E. R. (2023). Protecting your business against ransomware attacks? Explaining the motivations of entrepreneurs to take future protective measures against cybercrimes using an extended protection motivation theory model. Computers & Security, 127, 103099. https://doi.org/10.1016/j.cose.2023.103099
Borwell, J., Jansen, J., & Stol, W. (2022). The psychological and financial impact of cybercrime victimization: A novel application of the shattered assumptions theory. Social Science Computer Review, 40(4), 933-954. https://doi.org/10.1177/0894439320983828
Buil-Gil, D., Trajtenberg, N., & Aebi, M. F. (in press). Measuring cybercrime and cyberdeviance in surveys. In Routledge International Handbook of Online Deviance. Routledge.
Croall, H. (2016). What is known and what should be known about white-collar crime victimization? In S. R. Van Slyke, M. L. Benson, & F. T. Cullen (Eds.), The Oxford Handbook of White-Collar Crime. Oxford University Press.
Ferwerda, J., Deleanu, I. S., & Unger, B. (2019). Strategies to avoid blacklisting: The case of statistics on money laundering. PLoS ONE, 14(6), e0218532. https://doi.org/10.1371/journal.pone.0218532
Holtfreter, K., & Alvesalo-Kuusi, A. (2022). Editorial: The importance of context in studies of victimization and offending. Journal of White Collar and Corporate Crime, 4(1), 3-4. https://doi.org/10.1177/2631309X221138373
Imanpour, M., Rosenkranz, S., Westbrock, B., Unger, B., & Ferwerda, J. (2019). A microeconomic foundation for optimal money laundering policies. International Review of Law and Economics, 60(C).
Jian, Y. S. (Ethan), Lin, K., Sun, I. Y., & Chen, S. (2025). Cyberbullying victim-offender overlap among Chinese college students: Comparing the predictive effects across criminological factors. Victims & Offenders, 1-22. https://doi.org/10.1080/15564886.2025.2471497
Kemp, S., Buil-Gil, D., Miró-Llinares, F., & Lord, N. (2023). When do businesses report cybercrime? Findings from a UK study. Criminology & Criminal Justice, 23(3), 468-489. https://doi.org/10.1177/17488958211062359
Levi, M. (2025). Human, institutional, political and technological factors involved in a public health approach to frauds against individuals. European Journal of Criminology, 0(0). https://doi.org/10.1177/14773708251341076
Lewis, J. B. (2023). Money finds a way: Increasing AML regulation garners diminishing returns and increases demand for dark financing. Vanderbilt Law Review, 55, 529.
Moneva, A., & Leukfeldt, R. (2023). Insider threats among Dutch SMEs: Nature and extent of incidents, and cyber security measures. Journal of Criminology, 56(4), 416-440. https://doi.org/10.1177/26338076231161842
Paoli, L., Visschers, J., & Verstraete, C. (2018). The impact of cybercrime on businesses: a novel conceptual framework and its application to Belgium. Crime, Law and Social Change, 70(4), 397-420. https://doi.org/10.1007/s10611-018-9774-y
PwC. (2022). Global Economic Crime and Fraud Survey 2022. https://www.pwc.com/gx/en/forensics/gecsm-2022/pdf/PwC%E2%80%99s-Global-Economic-Crime-and-Fraud-Survey-2022.pdf
Roškot, M., Wanasika, I., & Kreckova Kroupova, Z. (2021). Cybercrime in Europe: Surprising results of an expensive lapse. Journal of Business Strategy, 42(2), 91-98. https://doi.org/10.1108/JBS-12-2019-0235
Simpson, G., & Moore, T. (2020). Empirical analysis of losses from business-email compromise. 2020 APWG Symposium on Electronic Crime Research (eCrime), Boston, MA, USA. https://doi.org/10.1109/eCrime51433.2020.9493250
Smith, R. (2018). Estimating the cost to Australian businesses of identity crime and misuse (Research Report No. 15). Australian Institute of Criminology. https://doi.org/10.52922/rr196685
Snaphaan, T., Hardyns, W., & Pauwels, L. J. R. (2022). Expanding the methodological toolkit of criminology and criminal justice with the Total Error Framework. Journal of Crime and Justice, 1-22.
Transparency International. (2021). Global Corruption Barometer – Pacific 2021: Citizens’ Views and Experiences of Corruption.
Van de Weijer, S. G. A., Leukfeldt, R., & van der Zee, S. (2021). Cybercrime reporting behaviors among small- and medium-sized enterprises in the Netherlands. In Cybercrime in Context. Crime and Justice in Digital Society (Vol. I, pp. 303-325). Springer-Verlag. https://doi.org/10.1007/978-3-030-60527-8_17
Van der Zee, S. (2021). Shifting the blame? Investigation of user compliance with digital payment regulations. In M. Weulen Kranenbarg & R. Leukfeldt (Eds.), Cybercrime in Context. Crime and Justice in Digital Society (Vol. I). Springer, Cham.
Wanamaker, K. A. (2017). Profile of Canadian Businesses who Report Cybercrime to Police, The 2017 Canadian Survey of Cyber Security and Cybercrime. Statistics Canada.
World Bank. (n.d.). World Bank’s enterprise survey: Understanding the sampling methodology. World Bank. http://documents.worldbank.org/curated/en/484931468156894681
5.2 Full list of split-sampled questions
Questions asked to a random half of the sample:
-
FRAUD_STOPPED
-
FRAUD_IMPACT
-
FRAUD_DIRECT / FRAUD_DIRECT_DK
-
FRAUD_RECOVER
-
FRAUD_RECOVER_VALUE / FRAUD_RECOVER_VALUE_DK
-
FRAUD_AFTERMATH / FRAUD_AFTERMATH_DK
-
FRAUD_STAFF / FRAUD_STAFF_DK
-
FRAUD_INDIRECT / FRAUD_INDIRECT_DK
-
FRAUD_ZERO_DUM
-
FRAUD_LOSS
-
FRAUD_LOSS_DUM
-
FRAUD_RECENT
-
FRAUD_RECENTDUM
-
FRAUD_RECENTLOSS
-
FRAUD_RECENTLOSS_DUM
-
FRAUD_DETECTION
-
FRAUD_PERPETRATOR
-
FRAUD_COMM
-
FRAUD_NATURE
-
FRAUD_CYBER
-
FRAUD_RESPONSE
-
FRAUD_NOREPORT
-
FRAUD_WHEREREPORT
-
FRAUD_RISK
-
FRAUD_MANAGE
-
FRAUD_COSTTRAIN / FRAUD_COSTTRAIN_DK
-
FRAUD_COSTSOFT / FRAUD_COSTSOFT_DK
-
FRAUD_COSTRISK / FRAUD_COSTRISK_DK
-
FRAUD_COSTINS / FRAUD_COSTINS_DK
-
FRAUD_COSTSTAFF / FRAUD_COSTSTAFF_DK
Questions asked to a random two-thirds of the sample:
-
CORRUPTION_RISK
-
CORRUPTION_SECTOR
-
CORRUPTION_UKLOSS
-
CORRUPTION_CANCELLED
-
CORRUPTION_STAFF
-
CORRUPTION_ADEQUATE
-
CORRUPTION_MANAGE
-
CORRUPTION_BARRIER
-
ML_RISK
-
ML_LOW
-
ML_MANAGE
5.3 Unadjusted and adjusted survey estimates, accounting for outliers
In the quantitative survey, several questions collected numeric data from businesses, which fed into the estimates of the mean number of incidents for each crime type, the incidence rates, and any cost or spending estimates (which included the costs of fraud, spending to manage the risks of fraud, and the value of bribes that businesses were offered, had to give or were asked to give). As outlined in Section 2.4.5, several approaches were implemented before, during and after fieldwork to minimise the risk of businesses providing inaccurate or unreliable responses. One approach involved using statistical techniques to identify and adjust for outliers.
Outliers are unusually low or high numeric values recorded in the survey that strongly influence any mean results. As explained in Section 2.4.5, outliers are not necessarily inaccurate or unreliable. They could simply reflect the true distribution of values within the population. All outliers were ultimately kept in the final data used in the findings report, as there was no clear evidence to suggest beyond doubt that any responses were incorrect.
In order to show the extent to which a small number of cases in the data were driving certain estimates, this appendix shows both the unadjusted estimates included in the findings report and the adjusted estimates with statistical outliers removed. The unadjusted values in the findings report are still considered to be the most representative estimates for the whole business population.
Two statistical techniques were used – trimming and Winsorization. Trimming involves removing values above a certain percentile, while Winsorization involves capping values above a certain percentile (to match the next highest value recorded at lower percentiles). Winsorization was the preferred approach, as it allowed more cases to be retained when producing the estimates. However, it was not always feasible to run this technique: the Winsorization process did not work when the distribution of responses included a very large proportion of zero values, or when there were too few responses at a particular question. Trimming therefore provided a suitable secondary approach where Winsorization was not possible. Before settling on these techniques, Ipsos also considered potential alternatives such as the Cook’s Distance method, but ultimately judged trimming and Winsorization to be the best-practice approaches for this survey.
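To make the distinction concrete, the sketch below implements one-sided (upper-tail) trimming and Winsorization in Python. This is a minimal illustration only: the function names and the use of numpy are assumptions for this example, not the survey’s actual processing code, and the invented data and 90th-percentile threshold are chosen purely to show the effect on a mean.

```python
import numpy as np

def winsorize_upper(values: np.ndarray, pct: float = 99.0) -> np.ndarray:
    """Cap values above the pct-th percentile at the highest value
    recorded at or below that percentile (one-sided Winsorization)."""
    threshold = np.percentile(values, pct)
    cap = values[values <= threshold].max()
    return np.minimum(values, cap)

def trim_upper(values: np.ndarray, pct: float = 99.0) -> np.ndarray:
    """Drop values above the pct-th percentile entirely (one-sided trimming)."""
    threshold = np.percentile(values, pct)
    return values[values <= threshold]

# Invented cost data: one extreme value drags the mean upwards.
losses = np.array([0, 50, 100, 200, 250, 300, 400, 500, 750, 90_000])
print(losses.mean())                       # 9255.0 (unadjusted)
print(winsorize_upper(losses, 90).mean())  # 330.0 (90,000 capped at 750)
print(trim_upper(losses, 90).mean())       # ~283.3 (90,000 removed)
```

Either adjustment sharply reduces the mean, which is why both the unadjusted and adjusted estimates are reported in the tables below.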
The threshold percentiles were set at either 99%, 99.5% or 99.9%, depending on the question and the distribution of responses for that question. For example, where the threshold was set at the 99.9th percentile, only the top 0.1% of responses were either trimmed or Winsorized. Each threshold decision was applied by an independent Ipsos statistician.
Tables 5.1 to 5.9 show the unadjusted and adjusted values for all the means and incidence rates (i.e., the number of incidents per 1,000 businesses) included in the findings report, at the overall level. The findings report also covered subgroup differences in incidence rates for each crime type, so the subgroup data (focusing on industry sector) have also been reproduced here, unadjusted and adjusted. As per the main report, the unadjusted and adjusted estimates have been presented to 3 significant figures if a thousand or more, and otherwise to the nearest whole number.
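As a minimal illustration of this rounding convention (the helper name below is invented for this sketch, not taken from the report’s production code):

```python
def present_estimate(value: float) -> int:
    """Round to 3 significant figures if 1,000 or more,
    otherwise to the nearest whole number."""
    if value >= 1000:
        digits = len(str(int(value)))           # e.g. 4,234 has 4 digits
        factor = 10 ** (digits - 3)
        return round(value / factor) * factor   # 4,234 -> 4,230
    return round(value)

assert present_estimate(4234) == 4230
assert present_estimate(948.4) == 948
```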
Table 5.1: Average (mean) number of incidents for each crime type, among the businesses with employees experiencing these crime types in the last 12 months, unadjusted and adjusted
| Variables used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| FRAUD_NUM_DUM | Average (mean) number of attempted frauds | 16 | 15 |
| OFFERED_NUM_DUM | Average (mean) number of bribes offered to businesses by another UK business or individual | 6 | 4 |
| PRIVATE_NUM_DUM | Average (mean) number of bribes businesses had to, or were asked to give, to another UK business | 4 | 3 |
| ML_NUM_DUM | Average (mean) number of money laundering incidents | 7 | 6 |
Table 5.2: Incidence rates for fraud overall, and for the specific types of frauds experienced in the last 12 months, unadjusted and adjusted
| Variables used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| FRAUD_NUM_DUM | All attempted frauds | 4,230 | 3,960 |
| FRAUD_TYPES_01 | Attempting to use respondent business’s debit or credit card information without permission | 201 | 119 |
| FRAUD_TYPES_02 | Attempting to access respondent business’s online bank account, to move money without permission | 125 | 99 |
| FRAUD_TYPES_03 | Attempts to trick respondent business to change direct debits, standing orders or bank transfers | 817 | 708 |
| FRAUD_INVOICENUM_DUM | Attempted invoice fraud with a specific intended victim, or that the business responded to | 1,200 | 1,040 |
| FRAUD_INVESTNUM_DUM | Attempted investment fraud with a specific intended victim, or that the business responded to | 948 | 888 |
| FRAUD_TYPES_06 | Suppliers knowingly claiming for goods or services that were not delivered | 208 | 197 |
| FRAUD_TYPES_07 | Suppliers knowingly charging an advance fee for goods or services not delivered | 62 | 44 |
| FRAUD_TYPES_08 | Current or prospective suppliers knowingly misleading respondent business | 143 | 100 |
| FRAUD_TYPES_09 | People making false insurance claims against respondent business | 42 | 39 |
| FRAUD_TYPES_10 | Falsifying personal expenses | 13 | 12 |
| FRAUD_TYPES_11 | Accounts to procure services being opened in respondent business’s name, without respondent business’s permission | 22 | 22 |
| FRAUD_TYPES_12 | Customers dishonestly claiming refunds | 451 | 282 |
Table 5.3: Incidence rates for fraud experienced in the last 12 months, overall and by sector, unadjusted and adjusted
| Variables used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| FRAUD_NUM_DUM | All businesses | 4,230 | 3,960 |
| FRAUD_NUM_DUM, SECTOR | SIC A: Agriculture | 7,270 | 7,270 |
| FRAUD_NUM_DUM, SECTOR | SIC BDE: Utilities and production | 3,320 | 3,320 |
| FRAUD_NUM_DUM, SECTOR | SIC C: Manufacturing | 3,130 | 3,130 |
| FRAUD_NUM_DUM, SECTOR | SIC F: Construction | 4,430 | 4,430 |
| FRAUD_NUM_DUM, SECTOR | SIC G: Retail and wholesale | 3,680 | 3,660 |
| FRAUD_NUM_DUM, SECTOR | SIC H: Transport and storage | 2,550 | 2,550 |
| FRAUD_NUM_DUM, SECTOR | SIC I: Food and hospitality | 2,510 | 2,460 |
| FRAUD_NUM_DUM, SECTOR | SIC J: Information and communication | 7,700 | 7,700 |
| FRAUD_NUM_DUM, SECTOR | SIC K: Finance and insurance | 1,800 | 1,800 |
| FRAUD_NUM_DUM, SECTOR | SIC L: Real estate | 3,930 | 3,930 |
| FRAUD_NUM_DUM, SECTOR | SIC M: Professional, scientific and technical | 4,360 | 4,360 |
| FRAUD_NUM_DUM, SECTOR | SIC N: Administration | 7,030 | 7,030 |
| FRAUD_NUM_DUM, SECTOR | SIC P: Education | 287 | 287 |
| FRAUD_NUM_DUM, SECTOR | SIC Q: Health, social care and social work | 6,190 | 6,190 |
| FRAUD_NUM_DUM, SECTOR | SIC R: Arts and recreation | 2,350 | 2,350 |
| FRAUD_NUM_DUM, SECTOR | SIC S: Service and membership organisations | 1,790 | 1,790 |
Table 5.4: Average (mean) cost of fraud incidents, among the businesses experiencing any frauds in the last 12 months, unadjusted and adjusted
| Variable used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| FRAUD_DIRECT_DUM | Immediate direct costs | £1,250 | £959 |
| FRAUD_AFTERMATH_DUM | Wider direct costs | £291 | £193 |
| FRAUD_STAFF_DUM | Indirect staff time costs | £432 | £347 |
| FRAUD_INDIRECT_DUM | Wider indirect costs | £144 | £129 |
| FRAUD_ANYCOST_DUM | Combined total costs | £2,090 | £1,900 |
Table 5.5: Average (mean) spending on fraud prevention and risk management, among the businesses investing in each of the following areas, unadjusted and adjusted
| Variable used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| FRAUD_COSTINS_DUM | Insurance policies | £71,600 | £63,800 |
| FRAUD_COSTTRAIN_DUM | Training or awareness raising activities | £2,550 | £1,710 |
| FRAUD_COSTSOFT_DUM | Digital software | £2,590 | £2,140 |
| FRAUD_COSTSTAFF_DUM | Staff members | £10,300 | £4,190 |
| FRAUD_COSTRISK_DUM | Fraud risk assessments | £1,360 | £1,240 |
| FRAUD_ALLSPEND_DUM | Total combined spending | £13,500 | £3,270 |
| FRAUD_COSTINS_DUM, FRAUD_ALLSPEND_DUM* | Total combined spending excluding insurance | £5,150 | £3,440 |
Notes:
- * This was run as a bespoke variable by subtracting the FRAUD_COSTINS_DUM value from the FRAUD_ALLSPEND_DUM value.
Table 5.6: Incidence rates for bribes offered to businesses with employees by other UK businesses or individuals in the last 12 months, overall and by sector, unadjusted and adjusted
| Variables used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| OFFERED_NUM_DUM | All businesses | 82 | 64 |
| OFFERED_NUM_DUM, SECTOR | SIC A: Agriculture | No cases | No cases |
| OFFERED_NUM_DUM, SECTOR | SIC BDE: Utilities and production | 11 | 11 |
| OFFERED_NUM_DUM, SECTOR | SIC C: Manufacturing | 2 | 2 |
| OFFERED_NUM_DUM, SECTOR | SIC F: Construction | 50 | 50 |
| OFFERED_NUM_DUM, SECTOR | SIC G: Retail and wholesale | 128 | 128 |
| OFFERED_NUM_DUM, SECTOR | SIC H: Transport and storage | No cases | No cases |
| OFFERED_NUM_DUM, SECTOR | SIC I: Food and hospitality | 61 | 61 |
| OFFERED_NUM_DUM, SECTOR | SIC J: Information and communication | 66 | 66 |
| OFFERED_NUM_DUM, SECTOR | SIC K: Finance and insurance | 114 | 114 |
| OFFERED_NUM_DUM, SECTOR | SIC L: Real estate | 846 | 106 |
| OFFERED_NUM_DUM, SECTOR | SIC M: Professional, scientific and technical | 52 | 52 |
| OFFERED_NUM_DUM, SECTOR | SIC N: Administration | 28 | 28 |
| OFFERED_NUM_DUM, SECTOR | SIC P: Education | No cases | No cases |
| OFFERED_NUM_DUM, SECTOR | SIC Q: Health, social care and social work | 16 | 16 |
| OFFERED_NUM_DUM, SECTOR | SIC R: Arts and recreation | No cases | No cases |
| OFFERED_NUM_DUM, SECTOR | SIC S: Service and membership organisations | 32 | 32 |
Table 5.7: Incidence rates for bribes businesses with employees had to, or were asked to, give to other UK businesses in the last 12 months, overall and by sector, unadjusted and adjusted
| Variables used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| PRIVATE_NUM_DUM | All businesses | 45 | 43 |
| PRIVATE_NUM_DUM, SECTOR | SIC A: Agriculture | 11 | 11 |
| PRIVATE_NUM_DUM, SECTOR | SIC BDE: Utilities and production | 106 | 106 |
| PRIVATE_NUM_DUM, SECTOR | SIC C: Manufacturing | 53 | 53 |
| PRIVATE_NUM_DUM, SECTOR | SIC F: Construction | 42 | 42 |
| PRIVATE_NUM_DUM, SECTOR | SIC G: Retail and wholesale | 25 | 5 |
| PRIVATE_NUM_DUM, SECTOR | SIC H: Transport and storage | 90 | 90 |
| PRIVATE_NUM_DUM, SECTOR | SIC I: Food and hospitality | 3 | 3 |
| PRIVATE_NUM_DUM, SECTOR | SIC J: Information and communication | 38 | 38 |
| PRIVATE_NUM_DUM, SECTOR | SIC K: Finance and insurance | 12 | 12 |
| PRIVATE_NUM_DUM, SECTOR | SIC L: Real estate | 23 | 23 |
| PRIVATE_NUM_DUM, SECTOR | SIC M: Professional, scientific and technical | 26 | 26 |
| PRIVATE_NUM_DUM, SECTOR | SIC N: Administration | 130 | 130 |
| PRIVATE_NUM_DUM, SECTOR | SIC P: Education | No cases | No cases |
| PRIVATE_NUM_DUM, SECTOR | SIC Q: Health, social care and social work | 7 | 7 |
| PRIVATE_NUM_DUM, SECTOR | SIC R: Arts and recreation | No cases | No cases |
| PRIVATE_NUM_DUM, SECTOR | SIC S: Service and membership organisations | 192 | 192 |
Table 5.8: Average (mean) value of bribes businesses with employees were offered by other UK businesses or individuals, or had to, or were asked to, give to other UK businesses in the last 12 months, unadjusted and adjusted
| Variables used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| PRIVATE_DOMESTIC_VALUE_EST | Approximate value of any bribes businesses were offered, had to give, or were asked to give, in the most recent instances of bribery involving another UK business | £2,780* | £2,650 |
Notes:
- * In the findings report, the mean value of bribes reported was £2,640, rather than £2,780. This is because the figure for the findings report was rebased to reflect the number of bribery incidents recorded, rather than the number of businesses. Specifically, 2 respondents reported that they had both been offered a bribe by a UK business or individual and had given, or been asked to give, a bribe to a UK business – each of these was rebased to count as 2 cases of bribery.
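A minimal sketch of that rebasing logic, using invented values rather than the survey’s actual data:

```python
# Invented bribe values from 4 responding businesses.
values = [4_000, 3_000, 1_000, 2_000]
# The last 2 businesses reported both receiving a bribe and giving (or being
# asked to give) one, so each counts as 2 cases of bribery when rebasing.
weights = [1, 1, 2, 2]

per_business_mean = sum(values) / len(values)   # 2500.0
per_incident_mean = (
    sum(v * w for v, w in zip(values, weights)) / sum(weights)
)                                               # ~2166.7
```

Here the rebased mean is lower because the double-counted businesses reported smaller bribes, mirroring the direction of the £2,780 to £2,640 change described above.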
Table 5.9: Incidence rates for money laundering experienced in the last 12 months, overall and by sector, unadjusted and adjusted
| Variables used | Description | Unadjusted | Adjusted |
|---|---|---|---|
| ML_NUM_DUM | All businesses | 157 | 139 |
| ML_NUM_DUM, SECTOR | SIC A: Agriculture | 11 | 11 |
| ML_NUM_DUM, SECTOR | SIC BDE: Utilities and production | 527 | 527 |
| ML_NUM_DUM, SECTOR | SIC C: Manufacturing | 35 | 35 |
| ML_NUM_DUM, SECTOR | SIC F: Construction | 78 | 78 |
| ML_NUM_DUM, SECTOR | SIC G: Retail and wholesale | 64 | 64 |
| ML_NUM_DUM, SECTOR | SIC H: Transport and storage | 211 | 211 |
| ML_NUM_DUM, SECTOR | SIC I: Food and hospitality | 227 | 227 |
| ML_NUM_DUM, SECTOR | SIC J: Information and communication | No cases | No cases |
| ML_NUM_DUM, SECTOR | SIC K: Finance and insurance | 344 | 205 |
| ML_NUM_DUM, SECTOR | SIC L: Real estate | 199 | 199 |
| ML_NUM_DUM, SECTOR | SIC M: Professional, scientific and technical | 97 | 97 |
| ML_NUM_DUM, SECTOR | SIC N: Administration | 119 | 119 |
| ML_NUM_DUM, SECTOR | SIC P: Education | 4 | 4 |
| ML_NUM_DUM, SECTOR | SIC Q: Health, social care and social work | 944 | 438 |
| ML_NUM_DUM, SECTOR | SIC R: Arts and recreation | No cases | No cases |
| ML_NUM_DUM, SECTOR | SIC S: Service and membership organisations | 478 | 478 |
- This is based on the Department for Business and Trade (DBT) business population estimates 2024. ↩
- The 2024 estimates were the latest ones available at the time of reporting, and it was therefore considered appropriate to use these for all population extrapolations included in the findings report. The 2023 estimates were used in the sampling and weighting of the survey, because these were the latest ones available at those points in the study. The changes across years are minor, and we opted on this basis not to reweight the data with the newest statistics (as explained in Section 2.5.1). ↩
- This is based on the DBT business population estimates 2023, which were the latest ones available at the point of sampling. A 2024 set was subsequently published before the reporting stage, and this set shows the same result. ↩
- These figures have been calculated using HMT data on the total supervised population size at the end of the 2023/24 financial year (94,937), and dividing this by the DBT business population estimates at the start of 2024 for all businesses (5,498,990; 2%) and employers only (1,427,165; 7%). As breakdowns of the supervised population by business size are not published, it is not possible to provide a more precise estimate of the proportion of businesses with employees that are regulated. However, we can be confident it would fall within this range. ↩
- The unadjusted response rate is: completed interviews / total sample released. ↩
- The adjusted response rate with expected eligibility has been calculated as: completed interviews / (completed interviews + incomplete interviews + refusals + (active numbers × expected eligibility)). It adjusts to exclude the unusable and likely ineligible proportion of the total sample used. A worked sketch of these formulas appears after these notes. ↩
- Ineligible records from the Market Location sample were those found to be zero-employee businesses or public sector organisations, which were not part of the intended population for this survey. ↩
- This includes wrong numbers, fax numbers, household numbers (rather than businesses) and disconnected numbers. ↩
- This includes sample that had a working telephone number but where the respondent was unreachable or unavailable for an interview during the fieldwork period, so eligibility could not be assessed. ↩
- Expected eligibility of screened respondents has been calculated as: (completed interviews + incomplete interviews) / (completed interviews + incomplete interviews + leads established as ineligible during the screener). This is the proportion of refusals and working numbers expected to have been eligible for the survey. ↩
- As an example of the differences between the 2023 and 2024 estimates, the overall business population (with employees) declined from 1,444,985 to 1,427,165. The variation in the profile by size and sector across years is negligible, with the proportion of businesses in each size and sector band changing by 0.1 percentage point or less. ↩
- These figures have been calculated using HMT data on the total supervised population size at the end of the 2023/24 financial year (94,937), and dividing this by the DBT business population estimates at the start of 2024 for all businesses (5,498,990; 2%) and employers only (1,427,165; 7%). As breakdowns of the supervised population by business size are not published, it is not possible to provide a more precise estimate of the proportion of businesses with employees that are regulated. However, we can be confident it would fall within this range. ↩
- If a respondent said “don't know” or “prefer not to say” at all 4 itemised cost areas (FRAUD_DIRECT / FRAUD_DIRECT_DK, FRAUD_AFTERMATH / FRAUD_AFTERMATH_DK, FRAUD_STAFF / FRAUD_STAFF_DK and FRAUD_INDIRECT / FRAUD_INDIRECT_DK), they would, by contrast, be treated as a missing respondent at the combined FRAUD_ANYCOST_DUM derived variable. ↩
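As a worked illustration of the response rate formulas described in the notes above, the sketch below applies them to invented fieldwork counts (none of these figures are the survey’s actual numbers):

```python
# Invented fieldwork counts, for illustration only.
completed = 2_000                 # completed interviews
incomplete = 150                  # incomplete interviews
refusals = 3_000
active_numbers = 8_000            # working numbers where eligibility was never assessed
ineligible_at_screener = 700      # leads established as ineligible during the screener
total_sample_released = 40_000

# Unadjusted response rate: completed interviews / total sample released.
unadjusted_rr = completed / total_sample_released  # 0.05, i.e. 5%

# Expected eligibility of screened respondents:
# (completed + incomplete) / (completed + incomplete + ineligible at screener).
expected_eligibility = (completed + incomplete) / (
    completed + incomplete + ineligible_at_screener
)

# Adjusted response rate with expected eligibility: the active (unscreened)
# numbers are discounted by the share expected to have been eligible.
adjusted_rr = completed / (
    completed + incomplete + refusals + active_numbers * expected_eligibility
)
```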