The US government should tighten privacy and data-sharing rules and actively monitor AI-powered threats to encourage safe AI adoption, a group of scientists has warned, as an industry emboldened by Trump’s laissez-faire attitude pushes back on regulations.
The Federation of American Scientists (FAS) called for stricter standards on processes such as the acquisition of large datasets from third parties by federal agencies in its response to a request for information on a federal AI Action Plan.
The think tank said: “AI holds immense promise for job growth, national security, and innovation, but accidents or misuse risk undermining public trust and slowing adoption—threatening the U.S.’s leadership in this critical field.”
Additionally, the government should install an early warning system for AI threats, including cyber operations and development of chemical weapons, FAS said in its list of 22 recommendations for safe adoption of the technology by government and private companies alike.
The US should also establish frameworks for reporting and monitoring potentially dangerous AI incidents in the "real world", and provide safe harbour protections to support "good faith" independent AI security research and those conducting it, according to the think tank.
Data-sharing standards are particularly needed for the country’s healthcare system, it said, recommending the US follow the UK and Australia in implementing “centralised data-sharing frameworks” or “risk falling behind” on innovation and privacy protections.
See also: US.gov turns to AI to comb social media and identify students who might support terror groups
The advice largely goes against the new Trump administration’s more relaxed approach to AI regulations, with VP JD Vance saying “excessive regulation of the AI sector could kill a transformative industry just as it’s taking off” during a controversial speech at the Paris AI Action Summit.
Though the administration has yet to publish much in the way of concrete policy, an early executive action began development of the AI Action Plan, to be submitted in July, and rolled back safety testing rules to limit obstacles to the tech’s growth.
Businesses have also followed suit, with many big tech firms U-turning on their pleas for more regulation under the previous administration, instead pushing against tighter state-level AI copyright laws and calling for tax breaks to aid their AI development.
Some of the FAS’ points are more favourable to the industry though, particularly a call to improve public-private AI partnerships by increasing funding for the National Institute of Standards and Technology (NIST) and creating a nonprofit NIST foundation.
Modelled on a foundation for energy investment, it would “expand capacity, streamline operations and spur innovation” by combining federal and private resources, FAS said, citing bipartisan legislation on the idea introduced to Congress last year.