Are we ready for Diagnostic AI?


Image recognition artificial intelligence (AI) is here to stay and has the potential to transform medical diagnostics by prioritising urgent cases and identifying disease at an early stage. As the technology has developed, it has become clear that the ability to identify a broad range of pathologies is important, which means the algorithms need training on ever-increasing numbers of patients.

Detecting multiple pathologies requires detailed reporting on thousands, even millions, of images; a huge amount of data needs to be collected for programmes to ‘learn’ and improve their performance.

At the same time, there are questions around how diagnostic AI and clinicians should work together – who is liable for a wrong diagnosis? How do medical indemnity providers assess liability risk? How will the technology’s use be regulated?

What about patient data? Data protection laws insist that patient data is anonymised, but the risk of a breach releasing identifiable data is always there, along with the civil litigation that inevitably accompanies such events.

The UK/EU data protection regime treats health-related data as a “special category” – in other words, a more rigorous set of rules applies to the way that data is collected, managed and stored.

Patients must be informed what their data is being used for and how long it will be kept. Patients may also withdraw consent and even insist that their data is destroyed. In reality, the risks remain the same; it is just the volume of data that is changing.

The reputational damage that could be caused by a data leak is very real, however, and the fallout from a material breach could cripple a tech company at a critical stage of its research.

To mitigate the risk, AI algorithms are given access only to the medical imaging data; patient details, including condition and outcome, remain unknown during the analytical process, which further limits the application.
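For illustration, here is a minimal sketch of the kind of de-identification step this implies, assuming DICOM images and the Python pydicom library. The tag list and file names are hypothetical examples, not a complete anonymisation standard.

```python
# Minimal de-identification sketch using pydicom (assumed dependency).
# Strips a few obvious patient-identifying tags before an image is
# passed to an AI pipeline; illustrative only, not a full DICOM
# confidentiality profile.
import pydicom

# Hypothetical example paths.
SOURCE = "scan_raw.dcm"
DEIDENTIFIED = "scan_deidentified.dcm"

# Directly identifying tags (illustrative subset only).
IDENTIFYING_TAGS = [
    "PatientName",
    "PatientID",
    "PatientBirthDate",
    "PatientAddress",
    "ReferringPhysicianName",
]

ds = pydicom.dcmread(SOURCE)

for tag in IDENTIFYING_TAGS:
    if tag in ds:
        # Blank the value rather than delete it, so the file stays
        # valid for viewers that expect these elements to exist.
        setattr(ds, tag, "")

# Private tags often carry scanner- or site-specific identifiers.
ds.remove_private_tags()

ds.save_as(DEIDENTIFIED)
```

A real deployment would follow the full DICOM de-identification profile, including checks for identifying text burned into the pixel data itself.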

Ultimately, AI should be helping doctors decide on treatment strategies, but to do that, the algorithms will need much more access to patient health records, clinical trial results and drug data – which means a regulatory framework and a clearer pathway through patient consent, privacy laws and data security.

If developers go down this route (and it is almost inevitable that they will), more thought needs to be given to the nature of consent for patients. How will they respond to being part of a study, or to receiving an AI-generated diagnosis rather than a human one? They are entitled to know whether the AI is safe and effective or still at an early stage in its development. In the absence of that, the risk of legal recourse, if the outcome is not as they would wish, is pretty much guaranteed.

We don’t need AI to tell us that!