Last week, on 1 and 2 November 2023, the UK hosted the world's inaugural large-scale AI Safety Summit (the "Summit") at the historic Bletchley Park – once the epicenter of WWII code-breaking efforts. A milestone event, the Summit was attended by 100 global representatives from politics and business1 – spanning from leaders of 28 countries, the European Union ("EU") and United Nations,2 to executives of leading AI companies. The Summit focused on the key themes of (i) understanding and mitigating risks posed by "Frontier AI";3 and (ii) exploring ways to ensure that AI develops as a "force for good"4 and is appropriately regulated.
Bletchley Declaration: International Consensus on AI Safety?
A watershed outcome of the Summit, the Bletchley Declaration5 – unanimously signed by all governments in attendance6 – recognizes the substantial risks that may arise from misuse of AI and affirms the importance of global cooperation to mitigate such risks. Crucially, the Bletchley Declaration signals the growing international consensus that AI brings, on the one hand, "transformative positive potential" for humanity, and on the other hand, significant threats – necessitating an ongoing and "inclusive global dialogue" to ensure that the benefits of AI are "harnessed responsibly for good and for all".7
The Bletchley Declaration represents a pivotal moment in the global race for AI innovation, as it marks the first instance in which the world's foremost AI powers and institutions have unequivocally recognized the importance of "nations … companies, civil society and academia"8 all working together to promote the responsible development of AI. It comes as no surprise that the Bletchley Declaration has since been widely heralded as a positive stride towards the (eventual) goal of coherent global alignment on AI safety standards.
Establishment of UK and US AI Safety Institutes
Just before the Summit, on 26 October 2023, the UK announced the establishment of a UK Artificial Intelligence Safety Institute9 ("UK AISI"). Framed as the world's first state-backed organization focused on advanced AI safety for the public interest, the UK AISI will build on the work of the UK's Frontier AI Taskforce and conduct "fundamental research on how to keep people safe in the face of fast and unpredictable progress in AI".10 Notably, the UK AISI will not in itself be a regulator, but will instead collaborate with organizations across the governmental and private sectors to ensure that the UK's regulatory approach to AI is cohesive, "evidence-based", and "proportionate",11 in line with the UK's AI Regulation White Paper published in March 2023.12
Only days after the UK's announcement, US President Joe Biden signed Executive Order 14110 on the 'Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence' on 30 October 2023 (the "AI Executive Order"). The AI Executive Order sets out the Biden Administration's vision for how the US intends to pursue AI development and regulation and requires, inter alia, developers of powerful AI systems to share safety test results with the US Government.13 On 1 November 2023, the US also announced that it will launch its own national AI Safety Institute ("US AISI"), which will be steered by the National Institute of Standards and Technology ("NIST"), to facilitate the development of standards for safety, security, and testing of AI models.14 At the Summit, US Secretary of Commerce Gina Raimondo further committed to a formal partnership between the US AISI and UK AISI – in line with her call to action for governments to unite behind the shared goal of "policy alignment across the globe on advanced AI".15
Leading AI companies publish AI Safety Policies & commit to working with UK AISI
In advance of the Summit, the UK Government asked leading AI companies to outline their AI Safety Policies across nine areas of AI safety, including monitoring structures and policies for preventing misuse. Encouragingly, seven leading AI companies16 voluntarily responded by providing their respective AI Safety Policies, which are linked and openly available to view on the Summit's website.17 The UK Government sees this as a welcome step to "drive transparency" and foster "the sharing of safety good practices within the AI ecosystem".18
Over the course of the Summit, those same AI companies also voluntarily committed to working closely with the UK AISI to test their latest, cutting-edge AI products before releasing them to the public19 – another positive signal of progress towards greater public and private sector collaboration on AI safety in the UK.
What's next? Looking to the future
Regulatory action and cooperation, at both the national and international levels, look set to continue – for example:
- The Republic of Korea is scheduled to host a virtual mini-summit on AI within the next six months, and France will host the next in-person Summit in 2024.20
- Yoshua Bengio, a member of the UN's Scientific Advisory Board and a Turing Award winner, will chair a comprehensive 'State of the Science' Report summarizing the latest research on frontier AI and identifying priority areas of research for understanding frontier AI risks. The Report will gather input from a panel of leading global AI experts, and is timetabled for publication ahead of next year's Summit.21
- The EU is also in talks to set up a European AI Office,22 which is intended to complement the EU's wider AI strategy and landmark AI Act.23 The EU's AI Act is in the final stages of the legislative process and is expected to be adopted in early 2024.
These developments, coupled with continued steps around the globe to regulate AI, signal that governments are increasingly looking to put guardrails in place to ensure sufficient oversight of the development of this powerful technology.
1 UK Department for Science, Innovation and Technology ("UK DSIT"), "AI Safety Summit: confirmed attendees", 16 October 2023.
2 Attendees included UK Prime Minister Rishi Sunak and Secretary of State for Science, Innovation and Technology Michelle Donelan; US Vice President Kamala Harris and Secretary of Commerce Gina Raimondo; China's Vice Minister of Science and Technology Wu Zhaohui; and EU Commission President Ursula von der Leyen.
3 Defined, for the purposes of the Summit, as "highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today's most advanced models". See UK DSIT, "Frontier AI: capabilities and risks – discussion paper", 25 October 2023.
4 UK DSIT, "AI Safety Summit: day 1 and 2 programme", 16 October 2023.
5 UK DSIT, "The Bletchley Declaration by Countries Attending the AI Safety Summit", 1 November 2023.
6 Including the UK, US, China, Japan, Korea, India, Singapore, and the EU.
7 Bletchley Declaration, supra note 5.
8 ibid.
9 UK Government, "Prime Minister's speech on AI", 26 October 2023.
10 UK DSIT, "Introducing the AI Safety Institute", 2 November 2023.
11 ibid.
12 UK DSIT, "A pro-innovation approach to AI regulation", 29 March 2023.
13 As covered in our recent publication: White & Case, "Biden Executive Order seeks to govern the 'promise and peril' of AI", 3 November 2023.
14 The White House, "FACT SHEET: Vice President Harris Announces New U.S. Initiatives to Advance the Safe and Responsible Use of Artificial Intelligence", 1 November 2023.
15 US Department of Commerce, "Remarks by Commerce Secretary Gina Raimondo at the AI Safety Summit 2023 in Bletchley, England", 2 November 2023.
16 Namely, OpenAI, Meta, Google DeepMind, Inflection AI, Amazon, Anthropic, and Microsoft.
17 UK's AI Safety Summit website, "Company Policies", accessed November 2023.
18 ibid.
19 UK Government, "World leaders, top AI companies set out plan for safety testing of frontier AI as first global AI Safety Summit concludes", 2 November 2023.
20 ibid.
21 UK DSIT, "'State of the Science' Report to Understand Capabilities and Risks of Frontier AI: Statement by the Chair", 2 November 2023.
22 European Commission, "Remarks of President von der Leyen at the Bletchley Park AI Safety Summit", 2 November 2023.
23 European Parliament, "EU AI Act: first regulation on artificial intelligence", updated 14 June 2023.
White & Case means the international legal practice comprising White & Case LLP, a New York State registered limited liability partnership, White & Case LLP, a limited liability partnership incorporated under English law and all other affiliated partnerships, companies and entities. This article is prepared for the general information of interested persons. It is not, and does not attempt to be, comprehensive in nature. Due to the general nature of its content, it should not be regarded as legal advice. © 2023 White & Case LLP