Is Artificial Intelligence here to stay?

We are no strangers to the digitisation of healthcare in the National Health Service (NHS). There has been a significant increase in the adoption of digital technologies in recent years, particularly since the Covid-19 pandemic. One subject attracting much debate, and deemed revolutionary by some, is medical artificial intelligence (AI). Global expenditure on healthcare AI technologies is estimated to reach US$45 billion by 2026 [1]. Internationally, healthcare systems face challenges in improving population health, patients' experience of care and caregiver experience, and in curbing ever-rising healthcare costs [2]. This calls for transformation and innovative adaptations to models of healthcare delivery. Equally pressing is the issue of healthcare workforce shortages, as highlighted by The King's Fund and the World Health Organization [2]. The utilisation of technology, including AI, may be able to address some of these shortcomings [2].


AI refers to a collection of computational technologies [3] that mimic human cognitive behaviours such as learning and problem-solving through algorithms, or sets of rules [2]. Specific AI technologies relevant to healthcare include machine learning (neural networks and deep learning), natural language processing, rule-based expert systems, physical robots and robotic process automation [3].
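
To make "rule-based expert systems" concrete, here is a deliberately toy sketch in Python of the idea: explicit, hand-written rules (rather than learned weights) mapping findings to advice. The rules, thresholds and advice strings are invented purely for illustration and are not clinical guidance.

```python
# Toy rule-based "expert system": hand-written rules applied in priority
# order. All rules and thresholds below are invented for illustration.

def triage_advice(temperature_c: float, heart_rate_bpm: int, chest_pain: bool) -> str:
    """Return advice from the first matching hand-written rule."""
    if chest_pain and heart_rate_bpm > 120:
        return "urgent: seek emergency care"
    if temperature_c >= 38.0:
        return "fever: consider same-day clinical review"
    return "no rule matched: routine self-care advice"

print(triage_advice(temperature_c=38.5, heart_rate_bpm=90, chest_pain=False))
```

The appeal of such systems is their transparency: every recommendation traces back to a legible rule. The machine learning approaches listed above instead learn their behaviour from data, which is more powerful but harder to inspect.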


There are many examples of medical AI technologies. AI chatbots used in Babylon and Ada help with symptom identification and management steps in community and primary healthcare settings [4]. AI can be integrated with wearable technologies such as smartwatches to give carers and patients insights into their behaviour and wellbeing [4]. Emerald, founded by Massachusetts Institute of Technology faculty, is a machine learning platform that uses wireless technology to monitor sleep, breathing and behaviour [4]. The Nuance Dragon Ambient eXperience, for instance, leverages natural language processing to automate administrative tasks such as documenting patient interactions, enabling care providers to dedicate more time to actual patient care [4]. AI/machine learning based medical devices are becoming common in diagnostic imaging, matching, and sometimes outperforming, human experts in detecting pneumonia and certain cancers on radiological images, classifying skin lesions in dermatology, detecting lymph node metastases in pathology, and diagnosing heart attacks using deep learning algorithms [4]. The AI-based InnerEye open-source technology can shorten preparation time for radiotherapy for prostate, head and neck cancers by 90% [4]. AI-driven drug discovery is already setting new frontiers, with examples such as DeepMind's AlphaFold helping to identify targeted therapeutics for both rare and common diseases [4].
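
For readers curious what sits behind "AI/machine learning based medical devices" in imaging, the sketch below shows the common pattern of adapting a pretrained convolutional network to a new two-class task such as pneumonia detection. It assumes PyTorch and torchvision, uses dummy tensors in place of real X-rays, and is not the method of any product named above: real medical imaging AI requires curated, labelled data and regulatory-grade validation.

```python
# Minimal sketch: fine-tuning a pretrained CNN for a binary imaging task.
# Dummy data stands in for real, labelled chest X-rays.
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone; swap the classifier head for two classes
# (e.g. pneumonia vs. no pneumonia).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB images.
images = torch.randn(8, 3, 224, 224)  # stand-in for preprocessed X-rays
labels = torch.randint(0, 2, (8,))    # stand-in ground-truth labels
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"illustrative training loss: {loss.item():.3f}")
```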


Still within the research realm, data mining and machine learning techniques are deployed to study public health problems [5]. Though less revolutionary, AI use in administrative processes can generate substantial efficiencies [3], potentially reducing staff burnout and improving wellbeing. At the recent Digital Healthcare Show (April 2024), a presentation from Trendlytics on AI/machine learning-powered forecasting to prioritise human resources within the emergency department of a Hertfordshire NHS Trust, and an introduction to robotic process automation (RPA) by a digital innovation manager at Buckinghamshire, Oxfordshire and Berkshire West (BOB) Integrated Care System (ICS), showed promise in addressing workforce demand and capacity issues in healthcare.
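
As a flavour of what ML-powered demand forecasting involves, the sketch below (assuming scikit-learn, and using entirely synthetic data) predicts the next day's emergency department attendances from the previous week's counts. It is illustrative only, not the approach of any vendor or trust mentioned above.

```python
# Minimal demand-forecasting sketch: predict tomorrow's ED attendances
# from the previous seven days, using synthetic data.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
days = np.arange(730)
# Synthetic daily attendances: baseline + weekly cycle + noise.
attendances = 300 + 40 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 15, days.size)

# Row j has the 7 preceding days as features; the target is day j + 7.
lags = 7
X = np.column_stack([attendances[i : i + days.size - lags] for i in range(lags)])
y = attendances[lags:]

# Hold out the final 30 days to check the forecast.
model = GradientBoostingRegressor().fit(X[:-30], y[:-30])
predictions = model.predict(X[-30:])
mae = np.mean(np.abs(predictions - y[-30:]))
print(f"mean absolute error over 30 held-out days: {mae:.1f} attendances")
```

A forecast like this only helps if it feeds a decision, such as rostering additional staff when predicted attendances exceed a planned threshold.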


Before we get carried away by the hype around healthcare AI, we need to acknowledge and address the various challenges related to the safe implementation of digital healthcare. We need to be aware of issues with data literacy, data maturity and data competence in the workforce, trust, digital poverty, and digital exclusion. Whilst 10 million more people in the UK used NHS websites or digital applications in 2021 compared with 2020, the benefits are not yet accessible to everyone [6]. Sadly, 7% of households do not have home internet access, 1 million people cancelled their broadband package in the last 12 months owing to the rising cost of living, and 10 million adults are estimated to lack foundation-level digital skills [6]. In fact, an estimated 30% of people who are offline feel that the NHS is one of the most challenging organisations to interact with. Certain groups face a higher risk of digital exclusion, thereby compounding health inequalities [6]. Given that digital inclusion is a whole-of-society issue, there is an ongoing initiative to ensure the digital transformation of the NHS is inclusive, effective and has the potential to reduce health inequalities [6].


At the Digital Healthcare Show 2024, the types of leadership essential to digital transformation were explored. The consensus favoured a bottom-up approach: end-user focused, employee-led and patient co-designed, with a collaborative style of leadership spanning disciplines and industries. The framework for NHS action on digital inclusion focuses on five domains: access to devices and data; accessibility and ease of using technology; skills and capability; beliefs and trust; and leadership and partnerships [6]. The development and distribution of health technology can be perceived as driven by suppliers and procurement processes rather than by the actual needs of the NHS workforce [7]. Technology adoption in the NHS will likely be more successful if staff are more involved in demand signalling and in the deployment of those technologies [7]. Both the 2024 Spring Budget and the NHS Long Term Workforce Plan recognise that such technologies have significant potential to support workforce capacity in the NHS [7]. Research by the Health Foundation concluded that many immediate gains would come from optimising existing technologies, such as electronic health records and tools for interprofessional communication [7].


Alongside improving digital adoption and capabilities, we should also consider challenges around trust, liability, and the safety of adopting AI technology. The House of Lords Select Committee on AI recognised that public trust and confidence in AI generally need to be fostered [8]. The Department for Digital, Culture, Media and Sport (DCMS) Centre for Data Ethics and Innovation (CDEI) concluded that 'low levels of public trust', and especially prominent distrust amongst clinicians, remain the main barriers to the opportunities AI offers [8]. The European Union (EU) stated its aim as 'the development of an ecosystem of trust by proposing a legal framework for trustworthy AI', culminating in the Commission's 2021 Proposal for a Regulation on AI [8]. An assortment of governmental entities (the White House, 2022), private enterprises (Google AI) and academic communities (MITRE, 2023) have issued frameworks to guide responsible use and mitigate the risks associated with AI [9]. However, clinicians and other users of AI clinical decision support systems worry about being held liable when such systems make mistakes. The Academy of Medical Royal Colleges has questioned the medico-legal position of a clinician who disagrees with an AI output or recommendation [8]. The Academy has also suggested that the nature of negligence claims may change as patients adapt to the availability of AI-generated decisions and recommendations [8], which will in turn affect the stance of medical defence organisations [8]. The Steering Committee of the National Academy of Medicine (NAM)'s project on Artificial Intelligence in Health, Health Care, and Biomedical Science offers a discussion of a draft framework for an AI Code of Conduct, highlighting ten Code Principles [9]. These include Safe, Effective, Equitable, Efficient, Accountable, Transparent and Secure [9].


To this end, we at the Improvement Academy embarked on the Shared CAIRE (Shared Care AI Role Evaluation) project. It focuses on human-machine interaction models around shared decision-making within a clinic or healthcare setting. Working alongside an expert team of computer science researchers, a Reader in Law and an AI ethics specialist at the University of York, under the Assuring Autonomy International Programme (AAIP), we aim to study ways to optimise human-machine interaction and to explore users' views on liability and trustworthiness, particularly 'vicarious liability' [10]. This extends to the ethical and legal implications of AI use within the NHS. The project is funded by the Medical Protection Society (MPS). We presented the design of our AI consultation model at the recent Designing Interactive Systems (DIS) 2024 conference in Copenhagen, Denmark, hosted by the Association for Computing Machinery (ACM), and won an honourable mention award.

References:

1) Wu et al. Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis. BMJ Open.
2) Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare Journal.
3) Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthcare Journal.
4) Bajwa J, Munir U, Nori A, Williams B. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare Journal.
5) Secinaro S, Calandra D, et al. The role of artificial intelligence in healthcare: a structured literature review. BMC Medical Informatics and Decision Making.
6) NHS England. Inclusive digital healthcare: a framework for NHS action on digital inclusion.
7) The Health Foundation. Which technologies offer the biggest opportunities to save time in the NHS?
8) Jones, Thornton, Wyatt. Artificial intelligence and clinical decision support: clinicians' perspectives on trust, trustworthiness, and liability. Medical Law Review, 2023.
9) Adams et al. Artificial Intelligence in Health, Health Care, and Biomedical Science: An AI Code of Conduct Principles and Commitments Discussion Draft. 2024.
10) Lawton et al. Clinicians risk becoming 'liability sinks' for artificial intelligence.