AI Safety Summit: summary of pre-Summit engagement

Updated 31 October 2023

The Secretary of State for Science, Innovation and Technology would like to convey her sincere thanks to the business leaders, civil society voices, academics, and partners from right across the private, public and third sectors who have all played their part in the run-up to this historic summit. The views summarised below have helped shape the conversation on AI safety and the crucial discussions being held at Bletchley Park. Our global AI Safety Summit is just the beginning of this exciting journey, and we look forward to engaging further with all of these groups over the coming weeks and months.

AI brings great opportunities to transform our economies and societies, but also significant risks if it is not developed safely. The UK is hosting the world’s first AI Safety Summit on 1-2 November at Bletchley Park. This landmark event is a generational moment and will bring together a globally representative group of world leaders, businesses, academia, and civil society to set a new path for collective international action to navigate the opportunities and risks of frontier AI. We will unite for crucial talks to explore and build consensus on targeted international action which promotes safety at the frontier of AI.

The Summit follows extensive public consultation on proposals in the AI Regulation White Paper. The White Paper set out our first steps towards establishing a regulatory framework for AI that maximises the opportunities brought by AI while keeping people safe and secure. We will provide an update on our regulatory approach through the response to the AI Regulation White Paper consultation later this year.

In advance of the Summit, the UK set out 5 objectives, in order to encourage and facilitate a broad and inclusive dialogue. The opportunities are so substantial, and the potential risks so significant, that voices from across government, business, civil society, academia, and beyond must be heard. Following the publication of these objectives, and noting that attendance at the Summit itself is limited, we have welcomed insight from the many individuals and organisations with the expertise and desire to contribute to this critical topic. Whilst the Summit is predominantly focused on the risks posed by frontier AI, we have welcomed discussion on the broadest possible impacts of AI. The government has published the Summit programme, which will build on the discussions held so far to deliver on our objectives.

The UK takes great pride in being globally recognised for its inclusive and comprehensive approach to stakeholder engagement behind events and decisions of this scale and importance. In the lead-up to the Summit, we have engaged extensively with frontier AI companies, countries, and multilateral organisations such as the OECD, the Council of Europe, the G7 Hiroshima Process, and many others to gather the most complete insight into AI safety and associated concerns. Yet we also understand that some of the most important insights can be drawn from the UK’s rich tapestry of academia, civil society, SMEs and voluntary, community and social enterprises. The UK’s voice is a particularly valuable one: we are an AI superpower already leading in research, employment, and adoption of the technology; we have a regulatory agenda hailed as world-leading in its proportionate and risk-based approach; and we play a leading role on the international stage.

To ensure the widest range of domestic voices were heard, we partnered with the Royal Society, the British Academy, techUK, The Alan Turing Institute, the Founders Forum, and the British Standards Institution on an ambitious and truly inclusive public dialogue to inform global action. A wide range of further events were also held, including a business roundtable with CEOs and senior leaders from firms in a variety of sectors that develop and/or use AI, and roundtables with venture capital firms and think tanks, recognising that the impact of AI is already felt across our economy and society. Ministers from DSIT and other government departments have been present at a majority of the 24 events held, engaging with representatives from the media, business, charity, and research sectors, in addition to ministerial meetings with businesses and community groups. This has given stakeholders an opportunity to feed in their views on key themes to enrich the Summit conversation, and has underlined our commitment to a continued conversation before and after the Summit.

Over the course of the Road to the Summit, we engaged with hundreds of individuals, businesses and civil society organisations that attended these events, as well as many more who took part through online Q&A. Throughout, we have endeavoured to develop our understanding of the opportunities and risks of AI and have built on these events to shape our ambitions for the AI Safety Summit, on which we look forward to sharing more detail in due course. A summary of the areas discussed is set out below.

These conversations build on the recently published discussion paper on the capabilities and risks from frontier AI. The information and insight we have been able to gather from this broad engagement will inform our future domestic and international work on AI opportunities and risks, and we look forward to continuing the conversation.

Statements do not reflect UK government policy or a consensus stakeholder view. The summary below represents significant views from a plurality of participants, or valuable insight from at least one participant, but should not be interpreted as a majority or consensus view. Participation was wide, a broad range of views were collected, and the events did not seek to validate whether there was a single shared position. Not all views can be represented in this summary. Participants were not required to provide evidence for claims and views, and the inclusion of a claim or view below does not represent an endorsement by the UK government.

Unlocking the opportunities of AI

Many participants recognised the huge potential of AI to accelerate our economies, contribute to the public good and ultimately improve citizens’ lives. Within the public sector, contributors suggested that AI could be used to reduce administrative burdens, freeing more time for human intelligence and intuition and allowing citizens to engage with government more effectively. Some participants noted an opportunity to train large language models on the vast amounts of government data, but this would need to be weighed against the risks associated with such access to data.

Some event attendees suggested successful development of AI for good would require collaboration with the broadest possible set of stakeholders. This would ensure that any uses of AI for social good would reflect public opinion and have the greatest possible impact. Event attendees explained that there is a need to build public awareness and understanding of where AI is already being used to create benefits for citizens. There were also discussions about a need for continuous evaluation and monitoring of AI use cases in the public sector.

Understanding potential societal risks

Session participants broadly agreed with the focus on the risks of frontier AI being addressed at the Summit, and with the need to understand them better, manage them effectively and promote public trust in the technology. They also highlighted additional potential risks to society posed by AI that are not associated exclusively with the frontier of the technology and could arise from the application of AI to existing harmful practices.

Deepfakes and disinformation were cited as examples of present risks that could be exacerbated by advances in AI, and whose proliferation could threaten democracy. When misused, AI technologies may also enable online harms and financial crime. There is also concern among the public and in certain sectors, such as the creative industries, that AI technologies, even when safely deployed, could affect jobs and employment prospects.

Some attendees raised the possibility of heightened risks of violence as a result of AI misuse in certain situations. This could include direct threats to life, such as the deployment of autonomous weapons systems, but participants noted that the use of disinformation during conflict can also incite further violence.

Societal risks raised in discussion included perpetuating and creating bias, leading to discrimination against individuals and groups. Participants noted the importance of recognising and tackling such bias, with a potential role for social scientists in developing a better understanding of how AI tools, and the governance around them, can address it.

Participants stressed the urgency of understanding and responding to risks arising from AI deployment. While previous technologies were developed over decades, the pace of change in AI means that any response to challenges will need to be rapid. The use of generative AI for phishing scams, for example, has shown how readily AI can be adopted for harmful purposes.

Building better AI systems and collecting higher-quality data could help address these risks. However, while there may be technical solutions to some of them, some participants argued that a successful approach would need to reflect the ways in which AI interacts with society (a ‘socio-technical’ approach). Better information is needed, it was suggested, on how this interaction takes place, and citizen engagement was seen as central to gathering representative information. Ensuring a shared understanding of risks would develop a society-wide ability to meet challenges effectively.

Overall, a key theme arising across contributions was the need for a clearer, evidence-based picture of the risks associated with AI. Not only did some participants mention this directly, but the sheer range of views and admissions of ambiguity suggested a lack of clarity and consensus on both the risks and the capabilities behind them. There would be significant benefit in a synthesised assessment of the globally available evidence base, to create a shared understanding for future discussions and ensure policy responses are genuinely science-based. The UK would be well placed to lead on this.

International leadership and collaboration

Event attendees highlighted that the UK has an opportunity to take the lead in managing the risks from the most recent and emerging advances in AI. The Summit will allow governments, academics, companies, and civil society groups to work together to understand these risks and possible remedies. There is value in developing a shared understanding based on pooling global expertise, and in exploring common ground on measures that might be applied.

Businesses suggested that the UK is already a global leader in developing safe deployment models and there is an opportunity to use the Summit activities to consolidate this position. It was noted that the UK can differentiate itself by developing regulation that incentivises demonstrable safety in AI design, taking a leadership role in the responsible development of AI. Discussions noted that many countries have either not yet devoted significant effort to safe deployment or have adopted a perceived hardline approach that curtails the benefits of AI. With the right expert input, industry backing and clarity from government on ‘guardrails’, the UK could develop leading capabilities in AI safety. Expertise in the safety field could also benefit societies worldwide where the UK shares knowledge and establishes new partnerships to boost investment and collaboration on responsible AI deployment.

Some participants suggested that developing open-source AI could be a competitive advantage for the UK, if it were to democratise access to technology and enable more innovation and faster progress. These participants suggested that most real-world use of AI will rely on open-source models rather than proprietary APIs.

Discussions noted the important role technical standards can play in promoting safe and secure development and adoption of AI, as well as promoting interoperability. While barriers to engagement in standards development can be high and the landscape difficult to navigate, ensuring a wide range of organisations across sectors can contribute to standards development is crucial. Support was given to the government’s work championing the importance of standards and the work of the UK’s AI Standards Hub in supporting stakeholders to navigate the AI standards landscape.

Some event attendees also noted a growing need to consider international action, including real-time evaluation frameworks that can effectively monitor AI models, including large language models (LLMs), once released. This was seen as a key area for collaborative funding and research, particularly as there is a growing appetite in many countries to use AI to improve the effectiveness and efficiency of critical public services.

A role for regulation on AI

The central role of government in regulating AI to prevent harms was widely recognised and discussed. Support was given to the government’s approach of regulating outcomes rather than technologies. While rigid technique- and technology-based rules can quickly become outdated, outcome-focused guidelines allow room for experimentation.

One area highlighted as important for government is refuting disinformation, which can be generated more easily through misuse of AI technologies. Government should look to manage disinformation risks before they gain traction and unjustifiably harm trust in the technology. Sharing examples of where harms have been identified and mitigated would help to ensure a more nuanced public dialogue on AI.

It was noted that complex AI regulations and compliance costs could create an uneven playing field that advantages tech giants over startups and SMEs. A suggested risk was that the UK could become subject to regulatory capture favouring larger organisations, often based outside the UK. Additionally, participants acknowledged the need for clearer regulations and guidelines around data access and usage for training AI models, while ensuring that privacy and IP rights are protected.

In forming a government response to recent advances in AI, the importance of hearing public views was raised. Input from social scientists and engagement with civil society were highlighted as ways to help address the risk of bias and other social impacts of generative AI.

International collaboration in establishing regulation was also regarded as critical. While a flexible approach to regulation was encouraged, regulatory certainty was also seen as essential. It was also emphasised that there should be clarity on government red lines for deployment, and that the risks associated with monopoly power should be addressed where they arise. The participation of technology firms in operationalising AI was seen as a key ingredient of success.

The importance of industry-led, multi-stakeholder, open, transparent, and consensus-based standards development was highlighted, as was the depth of UK expertise across industry, academia, civil society, and government. There is an important role for government to play in supporting the standards development ecosystem, building a talent pipeline of standards experts, driving good governance in standards development organisations, and strengthening collaboration.

Beyond regulation, it was suggested that government also has a role in supporting a balanced narrative around AI that addresses the risks while recognising its benefits. Some contributors felt that demystifying and humanising AI would be key to unlocking its full potential, while others highlighted the role of education. Further communication from government around the benefits and positive use cases of AI would help address public scepticism and fears, which are hindering its adoption. Furthermore, some attendees said that greater public sector adoption of AI would help to build public trust and familiarity, demonstrate its value, and drive more widespread adoption.

Next steps

Government has already sought to address some of the key issues raised through these engagements. The AI Safety Summit primarily intends to focus on the risks of frontier AI and discuss how they can be mitigated through internationally coordinated action. As well as focusing on risks created or significantly exacerbated by the most powerful AI systems, the Summit will also consider how safe AI can be used for public good and to improve people’s lives. The Frontier AI Taskforce has published its second progress report, setting out the AI research team it is building to evaluate risks at the frontier of the technology. The AI Safety Institute will carefully examine, evaluate, and test new types of AI to understand what each new model is capable of. It will explore a wide range of risks, including those raised across our stakeholder discussions: from social harms like bias and disinformation through to more extreme safety risks such as cyber and biosecurity. Through these functions we will continue to build domestic capability whilst also driving the international conversation.