Leading research organisations have published a pioneering White Paper warning the Government, AI developers and regulators that the potential benefits of AI to patients may be overlooked if urgent steps are not taken to ensure the technologies work for the clinicians using them.
The healthcare sector is one of the biggest areas of AI investment globally and is at the heart of many nations’ public policies for more efficient and responsive healthcare systems. Earlier this year the UK Government set out strategies to ‘turbocharge AI’ in healthcare.
The White Paper – a collaboration between the Centre for Assuring Autonomy at the University of York, the MPS Foundation and the Improvement Academy hosted at the Bradford Institute for Health Research – says the greatest threat to AI uptake in healthcare is the “off switch”: frontline clinicians will simply stop using the technology if they see it as burdensome or unfit for purpose, or are wary about how it will affect their decision-making, their patients and their licences.
Among the key concerns in the paper is that clinicians risk becoming “liability sinks” – absorbing all legal responsibility for AI-influenced decisions, even when the AI system itself may be flawed.
The White Paper builds on results from the Shared CAIRE (Shared Care AI Role Evaluation) research project, which ran in partnership with the Centre for Assuring Autonomy. The research examined the impact of AI decision-support tools on clinicians, bringing together researchers with expertise in safety, medicine, AI, human-computer interaction, ethics and law.
The team evaluated different ways in which AI tools could be used by clinicians – ranging from tools which simply provide information, through to those which make direct recommendations to clinicians, and those which liaise directly with patients.
The evaluation produced seven clear recommendations. These include calls for:
- Reform of product liability for AI tools, given the significant difficulties in applying the current product liability regime to them
- AI tools to provide clinicians with information only, not recommendations, reducing the potential risk to both clinicians and patients until product liability is reformed
- Clinicians to be fully involved in the design and development of the AI tools they will be using
The White Paper authors say the Government, AI developers and regulators should consider all the recommendations with urgency.

Vishal Sharma, Improvement Academy Associate Director and the project’s Analysis Lead, said:
“Clinicians need to understand the intended purpose of an AI tool, the contexts it was designed and validated to perform in, and AI limitations, including potential bias, to deliver the best possible care to patients.”
“The Shared CAIRE project revealed significant consensus between clinicians and patient representatives, particularly regarding AI-clinician liability concerns. Both agreed on preserving clinician autonomy. AI tools can provide salient information that saves the clinician valuable time, enabling them to engage more closely with the patient.
“Among the six AI models evaluated, clinicians preferred the one which highlighted relevant clinical data, such as risk scores, without providing explicit recommendations for treatment decisions – demonstrating a preference for informative tools that support rather than direct clinical judgment.”

Professor Tom Lawton, a consultant in Critical Care and Anaesthetics at Bradford Teaching Hospitals NHS Trust and Clinical and AI Lead on Shared CAIRE, said:
“AI in healthcare is rapidly moving from aspiration to reality, and the sheer pace means we risk ending up with technologies that work more for the developers than clinicians and patients. This kind of failure risks clinician burnout, inefficiencies, and the loss of the patient voice – and may lead to the loss of AI as a force for good when clinicians simply reach for the off-switch. We believe that this White Paper will help to address this urgent problem.”