European policymakers have buried a controversial “AI liability directive.”
The AI Liability Directive had sought to protect European citizens against damage caused by, for example, a breach of safety rules or unlawful discrimination based on algorithms embedded in an AI system.
The proposed legislation also sought to reduce the burden of proof for those claiming compensation for AI-related damages, legal experts say.
But late on Tuesday the European Commission said (in an annex buried in its “work programme” for 2025) that it will “assess whether another proposal should be tabled or another type of approach should be chosen.”
That effectively withdraws the AI Liability Directive, but at least one observer suggested the move is simply an effort to clean up a patchwork of European AI regulations, including the bloc’s sweeping “AI Act.”
End of the AI Liability Directive, not of AI regulation
Peter van der Putten, director of the AI Lab at Pegasystems, which builds AI-powered systems for banks, insurers, telcos and others, said: “Entities will still be protected against harm through the EU AI Act, and mechanisms such as the right to an explanation can be exercised to get specific background on automated decisions such as ‘why was I declined this loan’ or ‘why am I being reviewed for my government benefit’.
“Whilst it is tempting to frame this [the decision to ditch the AI Liability Directive] as a move towards less consumer protection… it can also just be seen as a sensible move to clean up the patchwork of AI regulation, and not incite all kinds of litigation that in the end could either be resolved by existing regulation, or would not have been successful for claimants anyway.”
Commenting more broadly on the themes emerging at this week’s AI Action Summit in Paris, Chris Williams, partner at global law firm Clyde & Co, noted: “The ‘safety first’ narrative around AI, which was once prevalent amongst those now in government, has clearly given way to a focus on doing what is necessary to foster innovation, and a good example of this is the UK, which aims to become an ‘AI superpower’.
“Whether it be the UK or US, the need to create legislative safeguards is being viewed as a ‘nice to have’ rather than an essential cornerstone of developing AI in a way that is safe, responsible and ethical,” he added.
“There remains enormous hype around what AI can actually achieve, but the focus has clearly shifted away from the balancing act of AI safety and innovation. At this stage, the regulatory response might need to be more fluid and less prescriptive to avoid stifling innovation, but it would likely need to include a long-term view of gradually stepping up checks and balances as AI becomes more advanced,” Williams added.
Europe's AI Act, meanwhile, entered into force on 1 August 2024, with a staggered implementation allowing different provisions to come in over a three-year period. Among other provisions, it prohibits eight practices:
- harmful AI-based manipulation and deception
- harmful AI-based exploitation of vulnerabilities
- social scoring
- individual criminal offence risk assessment or prediction
- untargeted scraping of the internet or CCTV material to create or expand facial recognition databases
- emotion recognition in workplaces and education institutions
- biometric categorisation to deduce certain protected characteristics
- real-time remote biometric identification for law enforcement purposes in publicly accessible spaces
Law firm Lewis Silkin has good analysis of that here.