Automated vehicles: statement of safety principles
Published 10 June 2025
The passage of the Automated Vehicles Act 2024 has paved the way for the deployment of automated vehicles. These vehicles must be able to drive themselves safely and legally, without a human needing to monitor or control them, for at least part of the journey.
Section 2 of the Automated Vehicles Act 2024 requires the Secretary of State for Transport to prepare a Statement of Safety Principles.
We want to understand how:
- the safety principles may be used
- the safety standard may be described
- safety performance could be measured
This call for evidence will be one of the first in a suite of public consultations to support the development of the regulatory framework.
Ministerial foreword
Establishing our new regulatory framework for automated vehicles provides a once-in-a-generation opportunity to harness the transformative impact artificial intelligence (AI) can have for our citizens. Not only will this maintain the UK’s position as a global AI superpower, but automated vehicles can also be a key enabler of our Plan for Change.
Automated vehicles can make transport safer, more convenient and more accessible. They will increase choice for non-drivers, including disabled people and older people. By better enabling freight to be transported outside peak hours, they may also reduce congestion, making journeys to work easier and quicker. In doing so, automated vehicles can improve the lives of millions of people.
We want to harness the sector’s huge potential to kickstart economic growth, creating the right conditions for a UK market potentially worth up to £42 billion by 2035, and that supports 38,000 new jobs. Further strengthening the UK’s position as a leader in the future of vehicle technology will also better enable us to support jobs in our vehicle manufacturing industry – an industry that is critical to the UK economy.
While by global standards, roads in the UK are very safe, every road death and injury is a tragedy for the families involved[footnote 1][footnote 2]. In 2023, collisions cost medical and ambulance services an estimated £2.2 billion. Every collision prevented will improve the safety of our communities and support our NHS to get on a more sustainable footing. With 88% of collisions involving human driver error as a contributing factor, automated vehicles can play a clear role in meeting this challenge.
While vehicle technologies have already provided significant advances in road safety and will continue to do so, technology is not foolproof. The UK has a heritage of world-leading, smart regulation. Our new regime must uphold this standard and capture the opportunities, while safeguarding against the new risks that may arise.
The UK has already made big achievements in this space, with the Automated Vehicles Act 2024 establishing one of the most comprehensive legal frameworks of its kind in the world. We also play a leading role in harmonising international rules on safety assurance at the UN, ensuring consistent approaches are adopted globally. This has involved close working throughout, and I am very grateful for the expertise shared by industry, road safety groups and academia to develop our thinking.
This call for evidence and our consultation on misleading marketing are the first of a suite of public consultations on our new regulatory framework. They will be open to all, and I want to encourage anyone who feels they will be affected to respond.
Lilian Greenwood
Minister for Future of Roads
Legal framework for automated vehicles
The Automated Vehicles Act 2024 provides for a new legal framework to enable the use of automated vehicles on Great Britain’s roads.
This adopts many of the recommendations made by the Law Commissions of England and Wales and the Scottish Law Commission on automated vehicles.
The act puts in place measures to:
- allow government to establish core safety principles that all automated vehicles must adhere to so they can be used on Great Britain’s roads
- establish new legal entities to ensure appropriate liabilities for the operation and use of automated vehicles
- create a whole life ‘in-use’ assurance system for automated vehicles, supported by obligations for operators to provide information and prevent misrepresentation
Overview of the new regulated bodies created by the Automated Vehicles Act 2024
Authorised self-driving entity
Every authorised automated vehicle will have a corresponding authorised self-driving entity (ASDE) that will be responsible for the behaviour of the vehicle when it is self-driving.
These companies will have ongoing obligations to keep their vehicles safe and ensure that they continue to drive in accordance with the law. Companies will only be authorised as an ASDE if the Secretary of State for Transport considers them to have:
- a good reputation
- a good financial position
- capability of discharging their duties well
No user in charge operator
Vehicles that can self-drive for the whole of the journey are considered no user in charge (NUIC) vehicles.
As well as an ASDE, these vehicles will also have an NUIC operator that will be responsible for non-driving related tasks. These include:
- maintaining the roadworthiness of the vehicle
- responding to incidents, such as breakdowns
Government is developing secondary legislation that will enable the safe deployment of automated vehicles on our roads.
We will consult further on these measures to enable secondary legislation by the second half of 2027.
Statement of safety principles
Section 2 of the Automated Vehicles Act 2024 requires the Secretary of State for Transport to prepare a statement of safety principles (SoSP).
This statement will be used in different ways across the safety framework, including:
- when authorisation authorities carry out authorisation checks (pre-deployment)
- when regulators carry out in-use monitoring and regulatory compliance checks (post-deployment)
- when undertaking annual assessments on the overall performance of self-driving vehicles
This call for evidence seeks information to support our understanding of how:
- the safety principles might be used
- the safety standard might be described
- safety performance can be measured
Some questions will be about the development of the safety principles themselves while others are about how the safety principles could be used in practice.
We will consult further on the SoSP before it is laid before Parliament, where the House of Commons and House of Lords will have the opportunity to:
- consider the SoSP
- positively vote for the SoSP to take effect
Question 1: in your view, are there any other uses for the safety principles we have not identified?
Question 2: in your view, what other uses might there be for the safety principles and why? Provide evidence if possible.
Pre-deployment
Two processes at pre-deployment consider whether the vehicle can safely drive itself on our roads.
Vehicle type approval
This is a well-established process already used for conventional vehicles. Under type approval, new vehicles, their systems and their components are checked to ensure they comply with requirements relating to:
- safety
- security
- environmental performance
This compliance must be confirmed before a vehicle can be placed on the market.
Requirements for type approval are typically set through internationally agreed regulations, ensuring consistent requirements are adopted across the world.
For self-driving technology, a new approval regulation is being developed through the United Nations Economic Commission for Europe (UNECE).
Automated lane keeping systems
There is already one established UNECE type approval regulation for automated vehicles – UN Regulation No. 157 – which sets requirements for automated lane keeping systems (ALKS).
ALKS technology enables a vehicle to self-drive on motorways only, up to 70 miles per hour (mph) and with the possibility of lane changing.
Currently, no ALKS vehicles are approved for use on UK roads.
Automated driving systems
The UK is currently helping to lead the development of the new regulation for automated driving systems (ADS), which will enable use cases beyond ALKS.
This ADS regulation will set internationally harmonised requirements for the safety performance of the hardware and software that provide the self-driving capability of the vehicles. Development of the regulation is building upon guidelines previously created by UNECE (PDF) and is anticipated to include a requirement that the ADS is ‘free from unreasonable risk’.
Authorisation
This process will review the automated vehicle, ensuring that it can drive safely on our roads. This will require an assessment of the evidence of the safety of an ASDE’s automated vehicles and plans for establishing, maintaining and retaining the necessary operational processes to ensure continued safety.
The authorisation process will also include checks on the ASDE. These checks are expected to ensure these entities have a good reputation and financial position.
For example, this will check that the ASDE has the safety processes in place to ensure the safe operation of the vehicle during its lifetime.
Through the granting of authorisation, requirements can be imposed for data and information sharing to support performance monitoring and incident investigations. We intend to launch a call for evidence in autumn 2025, which will cover authorisation.
Using the statement of safety principles during pre-deployment
To support the safety assessments at approval and authorisation, the manufacturer will outline their safety case.
For automated vehicles that are type-approved, the safety case is expected to set out the evidence which demonstrates that the ADS is free from unreasonable risk.
The SoSP may provide some guidance when interpreting the meaning of absence of unreasonable risk.
Definition of a safety case
A safety case is the documentation that sets out the reasoning for how the ADS meets the regulatory requirements and is free from unreasonable risk. It will set out the manufacturer’s evidence base for why the ADS is considered safe.
This could include:
- information on the intended use
- detail on tests undertaken to validate the safety of the system
- an explanation of what processes will be in place to ensure the system’s safety over its lifetime
The safety case is expected to include an assessment of the risks associated with the driving environment and what design and operational measures will be put in place to prevent or mitigate them. This will include road characteristics, weather conditions and interactions with other road users.
To minimise regulatory burden, we wish to mirror requirements set at type-approval and authorisation as far as possible.
At pre-deployment, the SoSP can inform the requirements set at authorisation for the behaviours and performance expected of automated vehicles.
Evidence supporting compliance with the requirements is likely to include a mix of virtual, track and real-world testing.
Another consideration proposed by stakeholders is that the assessment of how a vehicle performs against the safety standard should include a comparison to the performance of human drivers.
For example, the manufacturer could seek to model the performance of careful and competent human drivers and assess automated vehicles against the model on a scenario-by-scenario basis.
Another approach could be to use reference data sets to identify metrics on the typical performance of human drivers (for example, an aggregate analysis). These metrics could be used as a benchmark for supporting claims that the automated vehicle meets the safety standard set by the SoSP.
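As a purely illustrative sketch of this kind of aggregate benchmarking (all figures and names below are assumptions for illustration, not data or proposals from this call for evidence), a comparison against a human-driver reference data set might take the following form:

```python
# Illustrative sketch only: all figures below are hypothetical, not data
# from this call for evidence. It compares an automated vehicle fleet's
# collision rate against a human-driver benchmark taken from a reference
# data set, normalised per million miles travelled.

def rate_per_million_miles(events: int, miles: float) -> float:
    """Return an event rate normalised per million miles travelled."""
    return events / (miles / 1_000_000)

# Hypothetical human-driver reference data (aggregate analysis).
human_collisions = 4_500
human_miles = 3_200_000_000

# Hypothetical automated vehicle fleet data.
av_collisions = 12
av_miles = 9_000_000

human_benchmark = rate_per_million_miles(human_collisions, human_miles)
av_rate = rate_per_million_miles(av_collisions, av_miles)

print(f"Human benchmark: {human_benchmark:.2f} collisions per million miles")
print(f"AV fleet rate:   {av_rate:.2f} collisions per million miles")
print("Meets benchmark" if av_rate <= human_benchmark else "Above benchmark")
```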
Question 3: do you agree or disagree with our characterisation of how the SoSP might be used at pre-deployment?
Question 4: why do you think this? Provide evidence if possible.
Question 5: do you agree or disagree with our characterisation of how the SoSP might be used to inform pre-deployment safety requirements?
Question 6: why do you think this? Provide evidence if possible.
Question 7: what information do you think would need to be provided pre-deployment to demonstrate consistency with the SoSP?
Question 8: in your view, what considerations should be taken into account when assessing at pre-deployment whether automated vehicles meet the expectations set by the SoSP?
Post-deployment
The in-use regulatory scheme will consider the ongoing safety performance of self-driving vehicles.
Specifically, the in-use regulator will:
- consider whether vehicles continue to comply with the conditions and requirements placed on them through the authorisation process
- consider whether NUIC operator licensing requirements are being complied with
- identify and investigate incidents that indicate a vehicle may not be driving safely and legally
This consideration may relate to the safety performance of a single vehicle (for example, specific collisions) or to a fleet of vehicles (for example, repeat traffic infractions).
It could also relate to those operated by the same operator, or by multiple operators using the same ADS feature.
If the ASDE or the NUIC operator is found to be at fault, the in-use regulator can sanction them to encourage better compliance. A range of civil and regulatory sanctions will ensure a fair and proportionate approach is taken.
In addition to the in-use regulatory scheme, independent statutory inspectors will also be appointed. Similarly to the existing UK Accident Investigation Branches (UKAIB), these inspectors will undertake no-blame safety investigations.
A statutory inspector must report and publish the findings of their investigations. These reports will include non-binding recommendations to those best placed to implement positive change and ultimately improve the safety of automated vehicles.
While there is no statutory requirement for these inspectors to consider the SoSP, they may take into account the safety standard when making recommendations to support this continuous improvement.
This post-deployment analysis will, therefore, allow for ongoing learning about the appropriate behaviours and requirements for this new technology. One of the greatest opportunities offered by automated vehicles is the potential for continuous learning and improvement – if one vehicle is involved in an incident, learnings can be taken from it to improve safety across the fleet, for example through developing new scenarios to support the development of self-driving technologies.
Further, the learning gained from the use of automated vehicles could help update the SoSP, its application and the requirements and conditions that apply to automated vehicles, ASDEs and NUIC operators – forming a feedback loop.
Using the statement of safety principles during post-deployment
As part of the in-use regulatory scheme, consideration should be given to whether automated vehicles continue to meet the safety standard set out in the SoSP.
This is expected to include ensuring ongoing compliance with the conditions and requirements set at authorisation. For example, checking that the ASDE’s safety management system continues to be appropriately applied.
Definition of a safety management system
A safety management system is a formal set of processes put in place for the systematic and proactive approach to managing safety risks.
The implementation of an effective safety management system should be supported by a strong assurance function that monitors compliance, training and a safety culture embedded within the organisation.
There is also a clear duty to investigate relevant incidents that may demonstrate that a self-driving vehicle is no longer operating legally or safely, and to act where necessary.
Investigations may focus on single events, for example, collisions involving an automated vehicle. This could be an opportunity to consider whether the vehicle demonstrated behaviours that fall below the expectations set by the safety standard.
This will need to be considered on a case-by-case basis. For example, some incidents may be found to be the fault of other road users.
Other incidents may also be found to have been unavoidable, exceeding what could reasonably be expected of a human driver meeting the safety standard set by the SoSP.
In addition, investigations may consider aggregated data across a series of events – for example, where a fleet of vehicles using the same ADS repeatedly commits a traffic infraction such as speeding, running a red light or stopping in a yellow box at a junction.
Such incidents are unsafe, illegal and fall below the behaviour of a careful and competent driver, indicating that the safety standard is not being met.
Table 1: examples of event-level and aggregated information post-deployment
| Stage | Event level | Aggregate level |
| --- | --- | --- |
| Post-deployment | Results of incident investigation using on-vehicle data recording | Number of collisions, injuries, traffic incidents and near misses after deployment |
A range of data might be used to support these investigations and we are interested in exploring how both leading and lagging metrics should be used.
Definition of leading metrics
Leading metrics are used to monitor safety risks caused by automated vehicles. Some experts consider these safety risks could predict harmful safety outcomes. By monitoring leading metrics, it may be possible to identify safety issues before harm has occurred.
Definition of lagging metrics
Lagging metrics typically measure outcomes after an event. For example, the number of collisions and the health impacts on victims involved in collisions. They are useful for monitoring the harm caused, which will be critical for government to understand.
These metrics are currently measured – for example through STATS19 – and so could also be important for supporting any comparisons between the performance of automated vehicles and human drivers.
Table 2: examples of leading and lagging metrics possible in post-deployment
| Stage | Leading | Lagging |
| --- | --- | --- |
| Post-deployment | Near miss data, minor traffic infraction data | Number of collisions, injuries, traffic incidents, hospital outcomes in deployment |
Lagging metrics, based on data collected after certain outcomes – for example, the number of collisions resulting in injuries – will be used to assess performance against the safety standard.
However, leading metrics – those that indicate potentially unsafe automated vehicles before undesired outcomes occur – may also play a role in judging whether automated vehicles are performing in line with the SoSP.
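As a minimal illustrative sketch of how leading and lagging indicators might be separated when aggregating post-deployment records (the record structure and category labels are assumptions for illustration, not a proposed taxonomy), consider:

```python
from collections import Counter

# Illustrative sketch only: hypothetical post-deployment event records.
# The category labels are assumed for illustration, not a proposed taxonomy.
events = [
    {"category": "near_miss"},           # leading: no harm has occurred
    {"category": "traffic_infraction"},  # leading: for example, minor speeding
    {"category": "collision_injury"},    # lagging: harm has occurred
    {"category": "collision_damage"},    # lagging: harm has occurred
    {"category": "near_miss"},
]

LEADING = {"near_miss", "traffic_infraction", "erratic_behaviour"}
LAGGING = {"collision_injury", "collision_damage", "fatality"}

counts = Counter(e["category"] for e in events)
leading_total = sum(n for category, n in counts.items() if category in LEADING)
lagging_total = sum(n for category, n in counts.items() if category in LAGGING)

print(f"Leading indicators (before harm occurs): {leading_total}")
print(f"Lagging indicators (after harm occurs):  {lagging_total}")
```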
General monitoring duty for post-deployment
Section 38 of the Automated Vehicles Act 2024 requires the Secretary of State for Transport to arrange for the effective and proportionate monitoring and assessment of the general performance of authorised automated vehicles on roads and other public places in Great Britain.
These arrangements must include monitoring and assessing the extent to which that performance is consistent with the SoSP. Reports setting out the Secretary of State for Transport’s conclusions about the monitoring and assessment of automated vehicles must be published annually.
Monitoring safety performance will focus on aggregate analysis of performance across the whole automated vehicle fleet. See the Measuring performance under the general monitoring duty section for details on how this should be achieved.
Question 9: do you agree or disagree with our characterisation of how the SoSP might be used at post-deployment?
Question 10: why do you think this? Provide evidence if possible.
Question 11: do you agree or disagree with our characterisation of how the SoSP might be used to inform post-deployment safety requirements?
Question 12: why do you think this? Provide evidence if possible.
Question 13: what information do you think would need to be provided to the authorities post-deployment to demonstrate ongoing consistency with the SoSP?
Question 14: in your view, what considerations should be taken into account when assessing at post-deployment whether automated vehicles meet the expectations set by the SoSP?
Setting the safety standard
The Automated Vehicles Act 2024 provides some guidance on this, requiring that:
The principles must be framed with a view to securing that –
(a) authorised automated vehicles will achieve a level of safety equivalent to, or higher than, that of careful and competent human drivers; and (b) road safety in Great Britain will be better as a result of the use of authorised automated vehicles on roads than it would otherwise be.
This sets the statutory minimum level of safety that the SoSP should aim to achieve as being ‘equivalent to careful and competent human drivers’. This holds self-driving vehicles to the same high standard as drivers on our roads.
It is also a higher standard than that of the average driver, who may have the necessary skills to be competent but may not always be careful. Therefore, we would expect the deployment of automated vehicles designed to this standard to improve road safety and reduce harm.
Careful and competent human driving
To set out this safety standard, we have considered a range of relevant sources that set the expectations for human drivers.
Historically, whether driver behaviour was justified in a particular circumstance has been determined by the courts (that is, an ex-post assessment).
Like all road users, drivers are required to comply with road traffic law in the interests of their own safety and that of other road users, and this is reflected in the Highway Code.
For those who do not adopt a responsible attitude, or if their use of the highway creates an unsafe environment or causes nuisance, there are laws in place that can make them liable for prosecution.
To decide whether an offence of dangerous or careless driving has been committed, the courts will consider the extent to which driving falls below the standard expected of a competent and careful driver for that vehicle type. See further information on these offences set out by the Crown Prosecution Service.
There are also well-established expectations for the skills, knowledge and behaviours human drivers must demonstrate. These expectations are set out across a range of sources, including:
- in legislation, such as the Road Traffic Act 1988
- in statutory guidance, such as the Highway Code
- in driver education documents, such as DVSA’s essential skills for driving and national driving standards for cars, LGVs and driver trainers
When we assess these sources, we can understand the core behaviours we expect drivers to continue to demonstrate.
For example, a careful driver will:
- be unimpaired and pay attention to the driving task (for example, they are not under the influence of alcohol/drugs and are not distracted by screens or passengers)
- adhere to traffic rules
- use a cautious driving style, which they adapt to the prevailing conditions
The competence of a driver will vary depending on experience and level of training. Some differences in the competence of human drivers are in their:
- ability to anticipate actions by other road users, identify potential hazards, assess the associated risks and plan accordingly
- ability to safely handle a wider range of scenarios and apply expertise to new situations not previously encountered
- familiarity with their vehicle, its limitations and legal restrictions
- level of vehicle control under adverse driving conditions and during vehicle systems and component failures
We also expect careful and competent drivers to demonstrate good judgement. While many of the rules in the Highway Code are legal requirements identified using the words ‘must/must not’, there are also advisory rules identified by ‘should/should not’ or ‘do/do not’.
Driving is a complex task and there must be space for scenarios where it is right not to comply with all the rules, for example allowing an ambulance to pass in an emergency.
Question 15: provide any evidence you are aware of on the current performance of human drivers.
Question 16: in your view, does human driving performance improve with competence?
Question 17: why do you think this? Provide evidence if possible.
Question 18: in your view, what characterises careful and competent human driving and why? Your answer may consider capabilities, behaviours and outcomes.
Careful and competent automated driving
As we develop the SoSP, we believe consideration must be given to the expectations for human drivers and road users, and how the behaviours of automated vehicles may differ.
Machines are different from humans. We should expect some instances where the capabilities exhibited by an automated vehicle when executing elements of the driving task may look and feel different to those of a human driver. For example, automated vehicles may react to objects beyond the line of sight of a human driver.
There may also be unusual circumstances where they need to come to a stop and await assistance from remote operators, whereas human drivers would be capable of progressing. In some instances, this will be for the better – machines do not exhibit lapses in attention.
However, while actions may differ, it would be unacceptable if this resulted in worse road safety outcomes overall.
Question 19: do you agree or disagree with the considerations we have outlined in thinking about careful and competent automated driving?
Question 20: which consideration do you disagree with and why? Provide evidence if possible.
Our stakeholder engagement to date has indicated that many different approaches can be taken to set out:
- what behaviours and skills must be demonstrated pre-deployment to ensure that this safety expectation can be met
- what safety outcomes should be monitored post-deployment to assess whether this safety expectation is retained
In part, this can be explained by the fact that experts in different fields bring different perspectives to this issue.
However, it is also clear that opinions vary on the level of performance, consistency and subsequent safety levels achieved by careful and competent human drivers.
Question 21: in your view, how might the assessment of careful and competent driving differ between human drivers and automated vehicles?
Careful and competent concept in international regulations
Within international regulations, the concept of careful and competent human drivers is already well established. For example, in UN Regulation No. 157, which sets requirements for the approval of ALKS.
The UNECE’s draft ADS regulation is also anticipated to include a general reference to a competent and careful human driver. While this overarching concept is unlikely to be defined – allowing flexibility in interpretation – it is expected to be accompanied by more prescriptive provisions on the expected behaviours and outcomes.
Achieving a safety level better than careful and competent human drivers
The AV Act 2024 sets the statutory minimum level of safety that the SoSP should aim to achieve as being ‘equivalent to careful and competent human drivers’.
In setting this minimum, we sought to ensure safety and support future innovation, in line with public perspectives on what is acceptably safe.
This safety minimum was also informed by the Law Commission’s work and a consultation on the UK’s safety ambition for self-driving vehicles.
During these consultations, some respondents said this standard is too low, which risks undermining public acceptance. However, others noted that setting the standard too high could suppress the introduction of the technology, delaying the potential road safety benefits.
For example, we are clear that ‘perfect’ performance would not be a viable expectation, as it would prevent any safety benefits being realised in the short to medium term.
Some stakeholders proposed that an additional safety factor should be applied to a baseline human driver level of performance to account for any safety assurance uncertainties for this innovative technology. For example, setting expectations that automated vehicles achieve the same safety outcomes as the top per cent of the baseline.
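Purely to illustrate the arithmetic involved (the baseline rate and safety factor below are assumed values, not proposals from this call for evidence), applying such a safety factor to a baseline might look like this:

```python
# Illustrative arithmetic only: the baseline rate and safety factor are
# assumed values, not proposals from this call for evidence.
baseline_rate = 1.4   # hypothetical collisions per million miles for the baseline
safety_factor = 2.0   # assumed additional margin applied to the baseline

av_target_rate = baseline_rate / safety_factor
print(f"AV target: at most {av_target_rate:.2f} collisions per million miles")
```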
In establishing a baseline, comparison may be needed against the performance of human drivers. Further consideration of these identified challenges is given in the Making comparisons to the performance of human drivers section.
We want to explore these viewpoints to understand the impacts of framing the SoSP to achieve a safety level ‘higher than that of careful and competent human drivers’.
We welcome evidence on:
- road safety
- the deployment of technology and future innovation
- public acceptance of technology
Question 22: in your view, what are the implications of setting a safety standard equivalent to careful and competent human drivers?
Question 23: in your view, what characterises a standard higher than careful and competent human driving and why? Your answer may consider capabilities, behaviours or outcomes.
Question 24: in your view, what are the implications of setting a higher safety standard than careful and competent human drivers?
Securing an improvement in road safety
We expect automated vehicles will achieve an overall improvement in road safety if a level of safety equivalent to careful and competent human drivers is developed. For example, reducing the number of traffic collisions should reduce the overall number of casualties and make transport safer.
In setting our safety standard, we must consider the overall population’s safety while travelling in or in the vicinity of automated vehicles. The introduction of automated vehicles to GB roads has the potential to make travel more inclusive and accessible in the future. It could improve transportation options for older and disabled passengers.
We believe the overall improvement in road safety should not come at the expense of the safety of any group of road users and should comply with the rules in the Highway Code, including the hierarchy of road users.
Road users most likely to be injured in the event of a collision are:
- pedestrians
- cyclists
- horse riders
- motorcyclists
- children
- older adults
- disabled people
For example, an automated vehicle should be capable of perceiving and reacting to all kinds of other road users, including the most vulnerable.
The ways automated vehicles may influence equality and fairness outcomes include:
- through the vehicle’s self-driving feature(s) and where reasonable, ensuring that vehicles are safe for all users and people near the vehicles
- through use of the vehicle, with general consideration to not placing anyone at greater risk than the risk level of the overall population, with specific consideration given to the communities and environments that the vehicles will be operating within
These concerns were raised during the consultations on the Law Commission’s work into automated vehicles, but also more recently during the passage of the Automated Vehicles Act 2024.
The previous 2023 to 2024 government committed during consideration of the Bill that the SoSP would be designed to include a safety principle relating to equality and fairness.
During authorisations, the Secretary of State for Transport must consider the Public Sector Equality Duty (PSED).
This means due regard must be given to the aims of:
- removing or minimising disadvantages for people in line with the Equality Act 2010 protected characteristics
- taking account of and meeting the needs of disabled people through reasonable adjustments
- advancing opportunities through supporting greater confidence in travelling and participating in public life
- fostering good relations and understanding
Under Section 2 of the AV Act 2024, the Secretary of State for Transport must have regard to the SoSP. This duty is, therefore, applied separately from the Public Sector Equality Duty.
We intend to apply equality and fairness safety principles to the outcomes between different groups of road users, for example, cyclists and pedestrians. We are also interested in exploring what safety impact automated vehicles have on those with protected characteristics.
Question 25: in your view, what evidence should be used to assess the safety impact that automated vehicles have on other road users through the hierarchy of road users? Provide specific evidence to support your response.
Examples include cyclists, pedestrians and horse riders.
Question 26: what evidence are you aware of about the safety impact that automated vehicles will have on groups with protected characteristics?
Do not provide any personal information relating to yourself or another identifiable person.
The protected characteristics are: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation.
Question 27: do you agree or disagree that an equality and fairness safety principle should be included within the SoSP?
Question 28: why do you think this? Provide evidence if possible.
Question 29: do you agree or disagree that an equality and fairness safety principle should focus on all road users?
Question 30: why do you think this? Provide evidence if possible.
Question 31: in your view, what metrics, if any, should be considered to support monitoring and evaluation of performance against an equality and fairness safety principle?
Measuring performance under the general monitoring duty
The AV Act 2024 sets out the requirement for the Secretary of State for Transport to put in place arrangements that are effective and proportionate to monitor and assess the general safety performance of authorised automated vehicles, including:
Those arrangements must, in particular, include monitoring and assessing the extent to which that performance is consistent with the statement of safety principles.
A range of approaches could be taken to monitor and assess whether the safety standard set by the SoSP is being met, including:
- identifying and monitoring substantial safety outcomes
- monitoring broader measures of safety risk
The ability to monitor road safety outcomes of automated vehicles is limited by the accessibility and quality of data and information available to the Secretary of State for Transport. Some information can be accessed by the Secretary of State for Transport – for example, collisions and casualties recorded by police in the STATS19 dataset.
Data sharing requirements may also be set as part of the authorisation process or using investigatory powers within the Automated Vehicles Act 2024. However, other relevant information is held by private organisations and individuals – such as vehicle insurance data, some on-board data records and people’s experiences.
Safety outcomes that could be considered include:
- the number of collisions involving automated vehicles
- health impacts on victims, for example, the number of people killed and seriously injured
Measures based on the outcome of harm are defined as ‘lagging’ because they can only be measured after the event has happened.
Broader measures of safety risk might include the frequency of traffic infractions (violations of traffic laws), near misses and evidence of erratic vehicle behaviour. These could be used to indicate higher risk of negative safety outcomes.
They could be considered as ‘leading’ metrics because they can be measured before an event resulting in harm happens.
As with our questions on how the SoSP should be used by the in-use regulatory scheme, we want to explore how both types of metric could be used.
Question 32: in your view, what outcomes should be considered for the monitoring and evaluation of performance against the SoSP?
Question 33: in your view, what sources of information could be used to monitor and evaluate performance of these outcomes?
Making comparisons to the performance of human drivers
Some stakeholders said an assessment of how a vehicle performs against the safety standard should include a comparison to the performance of human drivers.
To make fair comparison between human and automated driving, consideration will need to be given to how safety risks/outcomes vary between driving conditions and environments.
When comparing the environment an automated vehicle operates within to a human driver, various factors might be considered, including the:
- road type (urban, rural, motorway)
- time of day
- weather conditions
- specific local authority or roads used
The number of incidents in an environment will decrease as the area is more narrowly defined.
It is also difficult to decide which human drivers can fairly be compared to an automated vehicle. For instance, if only careful and competent human drivers are included in the comparison, incidents where human drivers were impaired, distracted or not complying with traffic rules would be discounted. Discounting these incidents would make the judgement challenging, as it could involve subjectivity and be constrained by data limitations.
Finally, the severity of incidents may also vary between automated vehicles and human drivers. Therefore, the outcome of an incident should always be considered. If the number of collisions remains the same but the number of fatalities falls, this should be considered in making an assessment against the safety standard.
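As an illustrative sketch of why such comparisons may need to be stratified by driving environment (all figures and categories below are hypothetical assumptions), an aggregate rate can mask very different environment-specific rates:

```python
# Illustrative sketch only: all figures are hypothetical. Collision rates
# are compared per road type rather than in aggregate, because automated
# vehicles and human drivers may travel very different mixes of roads.

human = {  # road type: (collisions, million miles travelled)
    "urban":    (3_000, 1_200),
    "rural":    (1_200, 900),
    "motorway": (300, 1_100),
}
av = {
    "urban":    (10, 6.0),
    "rural":    (1, 0.5),
    "motorway": (1, 2.5),
}

for road_type in human:
    h_events, h_miles = human[road_type]
    a_events, a_miles = av[road_type]
    print(f"{road_type:>8}: human {h_events / h_miles:.2f} vs AV "
          f"{a_events / a_miles:.2f} collisions per million miles")
```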
We want to explore what evidence could exist on human drivers’ performance and how this could be used to make comparisons with automated vehicles’ performance.
Question 34: in your view, what evidence sources could be used to compare the safety performance of human drivers and automated vehicles?
Question 35: in your view, what metrics comparing the safety performance of human drivers and automated vehicles should be annually reported on by the Secretary of State for Transport?
Other principles for consideration
The previous 2023 to 2024 administration identified several principles that might be included within the SoSP – see the policy scoping notes for the introduction of the Automated Vehicles Bill to the House of Lords for more information.
We believe some of these themes previously identified should be considered as enablers of the safety expectations that the SoSP will establish.
We consider 3 potential principles originally proposed for inclusion within the SoSP are, therefore, better placed within other areas of the regulatory regime.
These are:
1. The ability to drive without human monitoring of the vehicle or road environment, or without human control. This ability is fundamental to how automated vehicles will achieve our minimum safety standard, set in Section 1(5) of the Automated Vehicles Act 2024. More detailed consideration of how this requirement should be met will be given within type approval and authorisation.
2. Cyber resilience. We believe safety expectations for a regulated body to maintain the cyber resilience of an automated vehicle should be captured through the technical requirements set within UN Regulation No. 155, which government plans to mandate in the GB type approval framework.
3. Explainability. This is considered a core enabler of regulating AI safety. While this is a developing area of research and an area of active policy consideration, we anticipate this will help in-use regulation and our no-blame safety investigations. Expectations for how regulated bodies support explainability may be set within the wider regulatory framework.
Question 36: do you agree or disagree with our proposed approach to these potential principles?
Question 37: why do you think this? Provide evidence if possible.
Question 38: in your view, are there any other principles you consider should be included within the SoSP?
Question 39: what other principles do you think should be included and why? Provide evidence if possible.
Question 40: provide any further evidence you wish to submit for consideration on what safety expectations should be set for the deployment of automated vehicles.
Question 41: any other comments?
How to respond
This call for evidence began on 10 June 2025 and will run until 23:59 on 1 September 2025.
The easiest way to respond is via the online questionnaire. You can find a link to the questionnaire, and details of other ways to respond, in the Ways to respond section of the GOV.UK home page for this consultation.
Next steps
We will publish a summary of responses and the government response on the homepage for this call for evidence. Paper copies will be available on request.
If you have questions about this consultation, please contact:
Safety principles consultation
Great Minster House
33 Horseferry Road
London, SW1P 4DR
Alternatively, you can email: Consultation@ccav.gov.uk.
Full list of questions
These questions are included here so you can read them in the context of this document. The call for evidence response form may include more questions, for example, questions about who you are.
See the Ways to respond section of the GOV.UK home page for this call for evidence to read a full list of questions and find out how you can respond to them.
Question 1: in your view, are there any other uses for the safety principles we have not identified?
Question 2: in your view, what other uses might there be for the safety principles and why? Provide evidence if possible.
Question 3: do you agree or disagree with our characterisation of how the SoSP might be used at pre-deployment?
Question 4: why do you think this? Provide evidence if possible.
Question 5: do you agree or disagree with our characterisation of how the SoSP might be used to inform pre-deployment safety requirements?
Question 6: why do you think this? Provide evidence if possible.
Question 7: what information do you think would need to be provided pre-deployment to demonstrate consistency with the SoSP?
Question 8: in your view, what considerations should be taken into account when assessing at pre-deployment whether automated vehicles meet the expectations set by the SoSP?
Question 9: do you agree or disagree with our characterisation of how the SoSP might be used at post-deployment?
Question 10: why do you think this? Provide evidence if possible.
Question 11: do you agree or disagree with our characterisation of how the SoSP might be used to inform post-deployment safety requirements?
Question 12: why do you think this? Provide evidence if possible.
Question 13: what information do you think would need to be provided to the authorities post-deployment to demonstrate ongoing consistency with the SoSP?
Question 14: in your view, what considerations should be taken into account when assessing at post-deployment whether automated vehicles meet the expectations set by the SoSP?
Question 15: provide any evidence you are aware of on the current performance of human drivers.
Question 16: in your view, does human driving performance improve with competence?
Question 17: why do you think this? Provide evidence if possible.
Question 18: in your view, what characterises careful and competent human driving, and why? Your answer may consider capabilities, behaviours and outcomes.
Question 19: do you agree or disagree with the considerations we have outlined in thinking about careful and competent automated driving?
Question 20: which consideration do you disagree with and why? Provide evidence if possible.
Question 21: in your view, how might the assessment of careful and competent driving differ between human drivers and automated vehicles?
Question 22: in your view, what are the implications of setting a safety standard equivalent to careful and competent human drivers?
Question 23: in your view, what characterises a standard higher than careful and competent human driving and why? Your answer may consider capabilities, behaviours or outcomes.
Question 24: in your view, what are the implications of setting a higher safety standard than careful and competent human drivers?
Question 25: in your view, what evidence should be used to assess the safety impact that automated vehicles have on other road users through the hierarchy of road users? Provide specific evidence to support your response.
Question 26: what evidence are you aware of about the safety impact that automated vehicles will have on groups with protected characteristics?
Question 27: do you agree or disagree that an equality and fairness safety principle should be included within the SoSP?
Question 28: why do you think this? Provide evidence if possible.
Question 29: do you agree or disagree that an equality and fairness safety principle should focus on all road users?
Question 30: why do you think this? Provide evidence if possible.
Question 31: in your view, what metrics, if any, should be considered to support monitoring and evaluation of performance against an equality and fairness safety principle?
Question 32: in your view, what outcomes should be considered for the monitoring and evaluation of performance against the SoSP?
Question 33: in your view, what sources of information could be used to monitor and evaluate performance of these outcomes?
Question 34: in your view, what evidence sources could be used to compare the safety performance of human drivers and automated vehicles?
Question 35: in your view, what metrics comparing the safety performance of human drivers and automated vehicles should be annually reported on by the Secretary of State for Transport?
Question 36: do you agree or disagree with our proposed approach to these potential principles?
Question 37: why do you think this? Provide evidence if possible.
Question 38: in your view, are there any other principles you consider should be included within the SoSP?
Question 39: what other principles do you think should be included and why? Provide evidence if possible.
Question 40: provide any further evidence you wish to submit for consideration on what safety expectations should be set for the deployment of automated vehicles.
Question 41: any other comments?
Background to the AV Act 2024
In 2018, the Centre for Connected and Autonomous Vehicles (CCAV) asked the Law Commissions of England and Wales and the Scottish Law Commission to conduct a review of legislation to prepare for the safe introduction of automated vehicles on GB roads.
The project involved 3 rounds of public consultation between November 2018 and December 2020. The law commissions published 3 consultation papers and received a total of 404 written responses. They held more than 350 meetings with interested parties, with each consultation open for 3 months.
This review concluded in 2022 with the publication of a report, including 75 recommendations to government. These recommendations set out a new regulatory framework for automated vehicles to address responsibilities and ensure safety in use.
The Automated Vehicles Act 2024 implements the recommendations made by the law commissions.
Through the Automated Vehicles Act 2024, a new safety framework has been established, designed so that its individual elements – approval, authorisation, operator licensing, in-use regulation and incident investigation – provide a safety feedback mechanism, enabling continuous learning and improvement.
A comprehensive programme of secondary legislation will establish the detailed mechanics of the new regulatory framework. This will incorporate several statutory instruments and statutory guidance.
These will be developed by working together with stakeholders across industry, the public sector and civil society. Formal consultation will be supported by extensive engagement throughout the process.
Freedom of Information
Information provided in response to this consultation, including personal information, may be subject to publication or disclosure in accordance with the Freedom of Information Act 2000 (FOIA) or the Environmental Information Regulations 2004.
If you want information that you provide to be treated as confidential, please be aware that, under the FOIA, there is a statutory code of practice with which public authorities must comply and which deals, amongst other things, with obligations of confidence.
In view of this it would be helpful if you could explain to us why you regard the information you have provided as confidential. If we receive a request for disclosure of the information, we will take full account of your explanation, but we cannot give an assurance that confidentiality can be maintained in all circumstances. An automatic confidentiality disclaimer generated by your IT system will not, of itself, be regarded as binding on the department.
Data protection
Your personal data collected through this call for evidence is processed in line with our online forms, surveys and consultations privacy notice.
Although we are not asking for sensitive personal data, any that is provided in response to this consultation will be processed under article 9.2.g, substantial public interest, with reference to the Data Protection Act schedule 1 part 2 section 8 for the purpose of equality of opportunity or treatment.
For advice, email: data.protection@dft.gov.uk.
1. Overall, the UK ranked fourth in the world (after Norway, Iceland and Sweden) in terms of safety in 2023, as ranked by number of road deaths per million inhabitants. International transport forum road safety annual report, 2024 (PDF). ↩
2. The UK has consistently ranked within the top 10 countries over the past decade. DfT reported road casualties Great Britain annual report: 2023. ↩