The UK’s premier AI research institute has called for a “moonshot” effort by the US, UK and allies to develop coordinated data collection and AI models to predict crises and stabilise conflict.
Without tighter cooperation, the Alan Turing Institute warns, “the US, UK and their allies could fall behind adversaries in our ability to predict and mitigate crises unless we maximise the ability of AI to inform intelligence assessments that inform strategic decision making.”
But the Institute’s call for transatlantic cooperation may be a forlorn hope, as the US takes an increasingly isolationist view of its role in the world.
The report by the Institute’s Centre for Emerging Technology and Security (CETaS) lays out a three-phase process for developing an AI-powered strategic early warning system. The first would see “substantial efforts” to overhaul the collection of geopolitical event data, including “non-traditional data sources to make sense of complex human behaviour”.
Phase two would mean the development of advanced models to help the intelligence community analyse events, with these “cross checked” by humans and refined over time.
The third phase would be the development of an “AI simulation platform” to predict risks and conflicts.
While individual countries are battling to extend their strategic advantage in AI, the researchers said this was too much for any one nation, and should be taken up by the US and UK together, or by the whole Five Eyes alliance.
The report declares that “Strategic surprise – an unforeseen event or development often driven by an adversary – can jeopardise human lives and impose substantial security and financial costs.” But it notes that rapid improvements in AI, along with “reports of commercial AI systems successfully predicting events leading up to the invasion of Ukraine”, have increased interest in using AI to improve early warning and assessments of geopolitical events.
The two most promising strands right now are tracking conflict indicators, and identifying possible outcomes after a “shock or trigger”. The biggest inhibitors are data scarcity and inconsistency, and the difficulty of modelling the decisions of individuals.
Current human-led intelligence analysis, the report argues, is slow, based on limited data, and open to misinterpretation.
But AI has its limits too. Last year the Institute warned that national security policymakers and political leaders needed training on both the potential and the limits of AI. It flagged AI’s ability to “exacerbate dimensions of uncertainty” already inherent in intelligence work. Leaders needed to be aware that AI presents “probabilistic calculations”, not certainties, and that many systems are opaque.
The report came just days after the US’s Office of the Director of National Intelligence issued its Annual Threat Assessment.
This flagged China, Russia, Iran and North Korea as the “major state actors” causing it most concern. It laid out potential flashpoints around Taiwan. It identified China as “the most active and persistent cyber threat to U.S. government, private-sector, and critical infrastructure networks”. And it said “China almost certainly has a multifaceted, national-level strategy designed to displace the United States as the world’s most influential AI power by 2030.”
And it expects China’s military to use LLMs “to generate information deception attacks, create fake news, imitate personas, and enable attack networks. China has also announced initiatives to bolster international support for its vision of AI governance.”
As for Russia, when it comes to cyber, its “unique strength is the practical experience it has gained integrating cyber attacks and operations with wartime military action, almost certainly amplifying its potential to focus combined impact on U.S. targets in time of conflict.”