The UK government has ditched former PM Rishi Sunak’s ambition to position the country as a leader in responsible artificial intelligence, relabelling the AI Safety Institute as the UK AI Security Institute.
The move pits the UK against the European Union’s drive to rein in rogue AI and misinformation, and aligns it with the US’s far more laissez-faire approach to AI development. The UK joined the US in declining to sign an international agreement at the AI Action Summit in Paris earlier this week.
It also aligns the UK with AI provider Anthropic, with an agreement to explore “AI opportunities to help grow the economy as part of our Plan for Change.”
Former PM Rishi Sunak choreographed the first major AI Safety Summit in 2023, which resulted in the Bletchley Declaration, and highlighted risks including deceptive content.
Today, at the Munich Security Conference, science secretary Peter Kyle said the renaming reflected a focus on “serious AI risks with security implications, such as how the technology can be used to develop chemical and biological weapons, how it can be used to carry out cyber-attacks, and enable crimes such as fraud and child sexual abuse.”
The organization will partner with the Home Office to “conduct research on a range of crime and security issues which threaten to harm British citizens.”
What it will not do is focus on issues such as bias or freedom of speech, according to a government statement, “but on advancing our understanding of the most serious risks posed by the technology to build up a scientific basis of evidence which will help policymakers to keep the country safe as AI develops.”
Meanwhile, according to a memorandum of understanding revealed today, the UK government and Anthropic will “work together on the most consequential opportunities presented by advanced AI: to foster the continued responsible development and deployment of AI that aligns with our societal values and principles of shared economic prosperity, improved public services and increased personal opportunity.”
UK.gov should be fairly familiar with Anthropic – it’s only a few months since the AI Safety Institute, sorry, Security Institute, was conducting pre-deployment tests of Claude.
Now, the UK government looks forward to “building on Anthropic’s capabilities and existing UK strengths in R&D and data assets.” It seems Anthropic’s “state-of-the-art systems and tools” will be the default for supporting the UK’s world-leading startup community, universities, and other organisations.
At the same time, according to the statement, the MOU is not legally binding and “does not prejudice against future procurement decisions.”
To be fair, the UK is not alone in changing course on AI. The EU quietly altered its own approach to AI liability earlier this week.