US and UK ministers meet to establish a bilateral agreement on AI safety

The UK and the US established a Memorandum of Understanding (MOU) on AI safety.

US Commerce Secretary Gina Raimondo and UK Technology Secretary Michelle Donelan signed the bilateral agreement, with Donelan describing AI as “the defining technology challenge of our generation.” 

This partnership builds on the commitments made during the AI Safety Summit at Bletchley Park in November 2023. The summit convened leading figures from the AI industry and political representatives from multiple nations, including a rare collaboration between the US and China. 

The summit led to the creation of AI Safety Institutes in the UK and the US dedicated to evaluating open- and closed-source AI systems.

Secretary Raimondo said of the agreement, “It will accelerate both of our Institutes’ work across the full spectrum of risks, whether to our national security or to our broader society.” 

Raimondo continued, “Our partnership makes clear that we aren’t running away from these concerns – we’re running at them.”

With this new agreement, the AI Safety Institutes of the UK and US are set to share research, run joint safety evaluations, and provide guidance on AI safety. This includes conducting joint testing exercises, “red teaming,” and exploring ways to share expertise. 

Donelan was hopeful about creating a safe path forward for AI, stating, “Only by working together can we address the technology’s risks head on and harness its enormous potential to help us all live easier and healthier lives.”

For the UK, which has almost no AI-specific regulation of its own and is not covered by the European Union’s AI Act, this bilateral agreement could be critical.

The AI Act, which begins its phased rollout this year, mandates transparency and risk assessments for AI systems. Post-Brexit, the UK no longer automatically adopts EU regulations and has been sluggish in establishing its own rules.

Prime Minister Rishi Sunak has said he wants to promote a “pro-innovation” framework in the UK, hinting at a deregulated environment. 

The AI industry now operates under a patchwork of voluntary agreements and frameworks.

Last year, the Biden administration expanded its voluntary safety framework to include tech companies such as Adobe, IBM, Nvidia, and Salesforce, which joined existing participants like Google, Microsoft, and OpenAI. 

Voluntary commitments like these are stacking up, but many observers are skeptical of their efficacy, especially when businesses like Palantir engage in military AI applications and surveillance. 

This US-UK agreement perhaps hints that the UK is attempting to distance itself from the EU and offer a more US-inspired tech environment.