Opinion: AI Safety Summit – what next for this commitment to collaboration?

Callum Sinclair, Ishbel MacPherson and Michael Horowitz discuss the latest in AI regulation following a major summit.

The UK claimed a diplomatic success on 1 November 2023 when the Bletchley Declaration was signed at Bletchley Park, the birthplace of modern computing, during the first international summit on AI safety. The Declaration seeks to address AI on two fronts: identifying and understanding the dangers AI may pose to society, and building cross-country strategies to mitigate them. Signed by delegates from 28 countries across five continents, plus the EU, the Declaration aims to offer a blueprint for a global safety strategy.

Gina Raimondo, the US Commerce Secretary, and Wu Zhaohui, China’s Vice-Minister of Science and Technology, took the stage with the UK’s Technology Secretary, Michelle Donelan, to sign the Declaration in an increasingly rare display of unity between the two powers. US Vice-President Kamala Harris attended the first day of the Summit and spoke of America’s determination to reach a worldwide consensus on how to deal with the challenges of AI.

Wu stated China’s commitment to mutual benefit, openness and equal rights for all countries to develop AI, while Raimondo announced her country’s formation of the AI Safety Institute, which will act as a neutral third party in developing the nation’s rules for AI security and safety. Raimondo’s statement comes off the back of President Biden’s recent order requiring US companies to share the results of their safety tests with the US government before releasing any AI models.

That order, the US Executive Order on Safe, Secure and Trustworthy Artificial Intelligence, marks yet another country’s move toward national AI regulation, something the UK admits it is still far from doing, acknowledging a hesitancy to regulate a technology as rapidly developing as AI through statute, which can slow innovation and ultimately prove inadequate as the technology evolves. Despite the UK’s choice not to include an AI bill in the King’s Speech next week, representatives were keen to show that the country remains at the forefront of technology, pointing to the Summit itself, which the UK government has called a ‘landmark achievement’.

Russia was notably absent from the Summit, as were representatives of the UK’s devolved administrations, despite Scotland’s request to attend. Richard Lochhead, the Scottish government’s Innovation Minister, did, however, meet separately with Donelan on 1 November, and noted that the UK government has assured Scotland that it is ‘committed to closely engage with the Scottish government going forward’. Lochhead has been a vocal critic of what he sees as the current UK government’s ‘hands off, non-statutory regulation of AI’ and has publicly worried that the approach ‘…will not meet Scotland’s needs’, noting that Edinburgh has recently been ranked the UK’s most ‘AI ready’ city and that ‘Scotland gets AI’.

The AI Safety Summit was attended by academics from think tanks around the world and by corporate executives, including Elon Musk and Nick Clegg, the former Liberal Democrat Deputy Prime Minister and current president of global affairs at Meta. Representatives from DeepMind, Google and OpenAI were also in attendance.

Somewhat worrying, though perhaps not surprising, was the lack of consensus on just which dangers of AI must be confronted. Musk met with Prime Minister Rishi Sunak in a streamed fireside chat to discuss the existential threats AI may bring, rehashing his already famous warnings about AI super-intelligence. Perhaps fortunately, government representatives appeared more concerned with the immediate and tangible risks, with deepfakes, disinformation and leveraged bias the common threads in many of the speeches. Clegg told media representatives that the Summit hoped to keep a unified strategy focused on the present rather than on ‘speculative, sometimes somewhat futuristic predictions’. The upcoming elections in the UK, US and India were also marked as major areas of concern.

In addition, ahead of the Summit the UK government asked leading AI companies to outline their AI safety policies across nine areas - from capability scaling and risk assessments to model reporting, information sharing, and vulnerability and security measures. These policies were unveiled during the course of the Summit, with companies such as Amazon, Meta and Microsoft releasing new documents addressing each of the nine topics.

While the Summit may indeed be called a success, and its importance as a benchmark in attempting to bridge the gap to a unified, worldwide strategy on AI cannot be overlooked, one must wonder how much cooperation and understanding it will actually bring. Raimondo’s announcement of a US AI Safety Institute was a notable disappointment to the UK, which had hoped to garner more international funding and cooperation for AI research within its own borders, and Lochhead’s statements and absence show how much disagreement can arise nationally, let alone internationally. The more cynical among us may also cite the record of cooperation under the Paris Agreement or the Kyoto Protocol as further reason for apprehension. However, the apparent commitment to addressing the tangible, as well as the willingness of so many world leaders, in both the public and private spheres, to start what is a very necessary conversation, may be enough.

AI regulation is an amorphous creature at the moment. The Bletchley Declaration seeks at least to begin defining it in a form that can be agreed internationally, while setting common goals to work towards. Underscoring the urgency, and perhaps hinting at how large a task is at hand, are the agreements for South Korea to hold a follow-up mini-summit in six months and for France to host another full summit in a year’s time. Furthermore, the support from the private sphere, and its willingness to adopt and work with governments on AI safety initiatives, may be a guiding light in a world where private enterprise and development often outpace the public sphere. We are only beginning to understand the technology that has been developed, and international cooperation will be the only way for as many people as possible to reap the rewards, and avoid the dangers, it may bring.

To take our short AI Survey, which aggregates data on the development and use of AI within organisations on an anonymised basis, please click here; we’ll share feedback in the coming months.

Callum Sinclair, Ishbel MacPherson and Michael Horowitz are lawyers at Burness Paull LLP
