Story updated 2 April 2024 at 11:25am to add NCA response
UK law enforcement is ill-equipped to deal with rising AI-enabled criminal threats, according to the Turing Institute, which says an AI Crime Taskforce and improved training on the technology are urgently needed to address the issue over the next five years.
Law enforcement should also “rapidly adopt AI tools” to tackle the “acceleration in AI-enabled crime”, said the four authors of the AI and Serious Online Crime report as they recommended global cooperation on the issue.
They said: “UK law enforcement is not adequately equipped to prevent, disrupt or investigate AI-enabled crime.
“While legislation may help to deter criminal behaviour in the long term, a more robust and direct approach is needed, centred around proactive deployment of AI systems in law enforcement.”
That approach should include an AI taskforce under the National Crime Agency (NCA) to coordinate a national response to threats by collating data to identify criminal adoption of AI tools and maintaining a database of their known uses.
The institute said action is desperately needed if the country is to avoid the AI crime threat accelerating even further over the next five years, citing its growing use in phishing, DDoS, ransomware and online fraud cases.
In response, the NCA told The Stack it welcomed the attention brought to the issue by the institute and would closely examine its recommendations. It also highlighted its existing work on tackling AI-enabled crime with organisations in the UK and overseas.
A spokesperson said: "NCA Director Alex Murray, in his role as the first national policing lead for AI, is overseeing a strategy focussed on tackling the criminal use of AI and using AI to make law enforcement more productive and more effective in cutting crime.
"This includes exploring how AI such as large language models and machine learning algorithms can be used in our mission to protect the public from serious and organised crime."
See also: AI cyberattacks will wreak "existential" damage, Ministry of Defence warns
The call comes as a report published by software company Cognyte claimed 55% of global law enforcement agencies it surveyed had cited GenAI chatbots as the primary technology used by criminals to boost crime, with another 38% pointing to deepfake videos and audio.
While some UK police forces have adopted AI tools, including analytical software Soze, a report by The Police Federation in February found a lack of public information on what these tools are and how they are being used.
Those we do know about cover a wide range, from chatbots and translation tools up to facial recognition software and “predictive policing” tools, which the force covering Avon and Somerset is reportedly using to see “the places or individuals where they may need to focus their resources”.
While the Turing Institute stopped short of calling for the UK to go full Minority Report, it said AI-powered authenticity verification, data analysis and malicious actor identification could prove vital in the fight against sophisticated criminal activity.
That’s also in line with what authorities told Cognyte, which reported 48% of those it surveyed predicted AI-powered predictive analytics tools would be the most impactful in supporting investigations over the coming year.
Security was also a major point for the institute as it called for the “mainstreaming of AI security into the UK’s approach to international cooperation on cybercrime” and the launch of an AI working group as part of Europol’s European Cybercrime Taskforce.