The healthcare AI landscape in the UK is “very confusing” for developers and guidance needs to be streamlined, according to the National Data Guardian for health and social care, Dr Nicola Byrne.
Byrne made her remarks yesterday at a Science and Technology Select Committee session, when asked by Chris Clarkson MP whether additional legislation or regulation was needed around data sharing for healthcare AI. While Byrne said current legislation was sufficient, she argued more work was needed.
“The landscape for developers is indeed very confusing… and is too complex,” said Byrne.
“I think there is a need to streamline guidance and advice in one place, and also to ensure that there’s harmonised and agile regulation, because obviously this is quite a fast-moving landscape, and we need regulation that can flex without breaking and evolve as the technology evolves.”
Byrne is the second NDG, appointed last year following the death of the widely respected Dame Fiona Caldicott. Last month Byrne told a conference that healthcare data initiatives must prove they are “trustworthy”.
Matt Westmore, chief executive of the Health Research Authority (HRA), agreed with Byrne in his answers to the committee, saying the HRA would be “slightly cautious about new legislation… because this is one of those areas which is moving so fast”.
“Part of the challenge is artificial intelligence isn’t a discrete type of technology… And so the kind of technology-neutral approach of the legislation, we think, is the right way to think about it,” said Westmore.
He noted technology such as healthcare AI can introduce or amplify issues, such as bias. But he said this was also true of other advanced analytical techniques, and AI should not be seen as “a new class” of technology.
“But it does need thinking about differently, which is why we’re doing the work, as Dr Byrne says, around work with the other agencies to try to create a clear regulatory pathway,” Westmore added.
Byrne said she was looking forward to seeing what emerges from the NHS AI Lab, the £250 million collaborative project involving the HRA along with the National Institute for Health and Care Excellence (Nice), the Medicines and Healthcare products Regulatory Agency (MHRA), and the Care Quality Commission (CQC).
Westmore emphasised he was concerned about bias and “explainability” – being able to explain how a healthcare AI model got to its answer – and outlined how the HRA and AI Lab would tackle the issues.
“When we see applications for either use of data where patient consent is not practicable – through our confidentiality advisory group – or for research into AI, we will ask questions to ensure that things like the training data sets are representative of society, and we will ask the researchers to describe how they are going to ensure explainability with their technologies.”
On explainability in healthcare AI, Westmore said: “It’s not that it is impossible to understand how one of these technologies comes up with a recommendation or an answer.
“There’s an emerging field that’s looking into how do you explain the results of those kinds of technologies. Again, it’s a relatively new field in a relatively new field, which is why the real focus for us is to try to keep up with that pace.”
Byrne also discussed the need to balance the role of healthcare AI – for example in detecting cancerous tumours – with the role of humans practising the “art” of healthcare – for example in deciding on how to treat a tumour. She said “doubt” was a critical factor which needed to be built into AI healthcare models.
“Doubt is the human attribute that keeps us safe; it protects us from hubris. But it also drives continued innovation, that continued questioning in science – have we got the inputs right here, are the outputs what we expect?” said Byrne.