The Stack

MI5 revealed as silent partner to Alan Turing Institute since 2017

The Alan Turing Institute–MI5 partnership has been in place since 2017

Thames House, headquarters of MI5. Image courtesy Wikimedia.

The Alan Turing Institute has been working with MI5 since 2017, the organisations have revealed, as the UK’s defence and intelligence establishment continues to ramp up its focus on artificial intelligence.

In a joint announcement, the Alan Turing Institute and MI5 said the two had been working together as part of the institute’s defence and security research programme – presumably since its establishment in 2017. The Ministry of Defence and GCHQ have been publicly involved in the programme since its foundation.

Since a major speech in May 2021 by Commander of Strategic Command, General Sir Patrick Sanders, which described AI as “the one ring to rule them all” of threats, artificial intelligence has quickly moved up the UK’s defence agenda. Earlier this month the MoD announced the opening of a dedicated Defence AI Centre.

See: MoD now has a dedicated AI centre operational

Given the tendency of bleeding-edge defence and intelligence innovations to remain under the radar for many years before public acknowledgement, it should be no surprise that the Security Service has been working with the UK’s flagship AI institute all this time. According to this week’s announcement, the Alan Turing Institute and MI5 have made their collaboration public in order to enable “closer working” with the institute and its academics.

“The UK faces a broader and more complex range of threats, with the clues hidden in ever-more fragmented data. MI5 has a long and proud history of innovation and use of cutting-edge technology in an ethical way; artificial intelligence is another example of that and a vital capability in MI5’s toolkit,” said MI5 Director General Ken McCallum.

See also: UK military to go all-in on AI

The Turing has been hard at work in this area of defence-related AI research, particularly on the inter-state threats side (any work on weaponised AI is presumably far less public). Earlier this year it launched the Centre for Emerging Technology and Security, in collaboration with the Oxford AI Network.

One Turing fellow, Professor Mariarosaria Taddeo, recently published her thoughts on the “philosophy and ethics of cyber warfare”, alongside the MoD’s paper on the ethical use of AI in defence (Taddeo is also a member of the MoD’s AI Ethics Advisory Panel): “Cyber-attacks are neither victimless nor harmless and can lead to unwanted, disproportionate damage which can have serious negative consequences for individuals and for our societies at large. For this reason, we need adequate regulations to inform state use of these attacks,” wrote Taddeo, noting that a “myopic” approach had failed to regulate inter-state cyber-attacks.

“This is the failure of what I dubbed the ‘analogy-approach’ to the regulation of cyber warfare, which aims to regulate such warfare only to the extent it resembles kinetic warfare, i.e. if it leads to destruction, bloodshed, and casualties. In effect, it fails to capture the novelty of cyber-attack, which is disruptive more than destructive, and the severity of the threats that they pose to a digital society. Underpinning this approach is the failure to recognise the ethical, cultural, economic, infrastructural value digital assets have for our – digital – societies,” she added.

