Confidence that artificial intelligence (AI) technologies are being integrated in clinical workflows and pathways in a safe, efficient, and ethical manner.

Interviewees for this research noted that confidence in AI technologies will depend on perceptions of their safe and efficient integration into clinical workflows and pathways.

Other research has found that AI technologies need to work quickly, reliably and effectively to instil confidence in the healthcare workforce,9 and this was re-emphasised by clinician interviewees for this research. They noted that front-line healthcare workers are often frustrated by unreliable hardware and software that impair their ability to deliver good quality care. They perceived NHS healthcare software such as electronic health records (EHRs) to be difficult to use, requiring extensive training and support, and offering limited capacity for customisation to meet user needs.

Interviewees expected new technologies to improve on these legacy systems by being user-friendly, intuitive, and where possible, customisable. For example, the use of AI technologies should preferably not require logging into separate systems, and where appropriate, AI-derived information should be stored as part of the patient record rather than separately. Ideally, AI technologies will streamline existing workflows and reduce complexity by processing data automatically. Seamless integration will also enable robust working practices for the ongoing monitoring, evaluation and audit of AI technologies, making good practice easier to achieve and building system-level confidence.
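
To illustrate what storing AI-derived information as part of the patient record could look like in practice, the sketch below writes an AI-generated finding to an EHR as an HL7 FHIR Observation rather than holding it in a separate system. It is a minimal illustration only: the FHIR endpoint URL, the coding shown, and the function and parameter names are hypothetical, and a real deployment would follow the EHR supplier's integration and information governance requirements.

```python
import requests

# Hypothetical FHIR endpoint exposed by the EHR; a real integration would use the
# supplier's documented base URL and an appropriate authentication flow.
FHIR_BASE_URL = "https://ehr.example.nhs.uk/fhir"


def record_ai_finding(patient_id: str, risk_score: float, model_version: str) -> str:
    """Store an AI-derived risk score in the patient record as a FHIR Observation."""
    observation = {
        "resourceType": "Observation",
        "status": "final",
        # Illustrative coding only; a real service would use an agreed SNOMED CT
        # or LOINC code for the specific AI-derived measurement.
        "code": {"text": f"AI-derived risk score (model {model_version})"},
        "subject": {"reference": f"Patient/{patient_id}"},
        "valueQuantity": {"value": risk_score, "unit": "score"},
    }

    response = requests.post(
        f"{FHIR_BASE_URL}/Observation",
        json=observation,
        headers={"Content-Type": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["id"]  # identifier of the Observation now held in the record
```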

Interviewees noted that health and care settings may need to review and revise their existing systems, adopting new ways of thinking about clinical practices, patients, and their care. Expanding their capacity to change and adapt, including in informatics and data collection, will also be important. Broader efforts in digital transformation and change management can support such transitions, as described in sections 4.1 and 4.2.

Interviewees cautioned that AI technologies with higher-stakes clinical consequences, such as those relating to patient triage, diagnosis or care, will require appropriate measures to secure patient safety and to clarify the steps for reporting adverse effects in case of system failure. These could include error-reporting pathways, effective use of national reporting (for example, through regulatory requirements), and fallback workflows for system failures or unsuitable use cases. Developing protocols to clarify the related actions and ensure human oversight will be essential.
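
As one way of picturing the fallback workflows and error-reporting pathways described above, the sketch below wraps a call to an AI triage service so that any failure or out-of-scope case is logged as a reportable incident and the patient is routed to the standard manual pathway, keeping a human in the loop. The service interface, incident log, confidence threshold, and function names are all hypothetical; in practice they would be defined by local protocols and regulatory reporting requirements.

```python
import logging
from datetime import datetime, timezone

# Hypothetical incident log; in practice this would feed local audit systems and,
# where required, national reporting routes.
incident_log = logging.getLogger("ai_incidents")


def triage_with_fallback(patient_case: dict, ai_service) -> dict:
    """Run AI-assisted triage, falling back to the manual pathway on failure
    or when the case falls outside the tool's intended use."""
    try:
        result = ai_service.triage(patient_case)  # hypothetical AI service call

        # Route out-of-scope or low-confidence outputs to the manual pathway.
        if result.get("out_of_scope") or result.get("confidence", 0.0) < 0.7:
            raise ValueError("Case outside intended use or below confidence threshold")

        # The AI suggestion is advisory: a clinician reviews it before any action.
        return {"pathway": "ai_assisted", "suggestion": result, "requires_clinician_review": True}

    except Exception as exc:
        # Record a reportable incident with enough detail for later audit.
        incident_log.error(
            "AI triage unavailable or unsuitable at %s: %s",
            datetime.now(timezone.utc).isoformat(),
            exc,
        )
        # Fallback: the standard manual triage workflow, unchanged by the AI tool.
        return {"pathway": "manual", "suggestion": None, "requires_clinician_review": True}
```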

Post-market surveillance of medical device technologies is currently managed through the Medicines and Healthcare products Regulatory Agency's (MHRA) Yellow Card system. The NHS AI Lab is supporting the redesign of this system using data-driven technologies to better identify and track trends in incidents of adverse performance.

In addition to efficiency and safety, an ethical approach to AI is essential to achieving confidence in these technologies. Interviewees for this research highlighted that an ethical approach to implementing AI should include, at a minimum, the principles of fairness, transparency, and accountability, and aim to ensure equitable benefits across patient groups. 

Finally, the way in which AI is integrated into clinical workflows and pathways may itself affect clinical decisions. As explored in greater detail in Chapter 5, research suggests that clinicians who perceive themselves to be domain experts typically view and use AI-derived information differently from non-specialists, especially in time-pressured decision making.68 This suggests that testing the real-world impact of implementing AI, including the timing and manner in which data are presented, may be necessary to allow clinicians to correctly assess and use AI-derived information.

Information:

Systems impact - Key confidence insights

  • Healthcare workers will be more confident in AI technologies that are safely, efficiently, and ethically integrated in clinical workflows and pathways.
  • Ideally, AI technologies should streamline existing workflows and be seamlessly integrated to improve their adoption.
  • Clear pathways should be established for reporting safety events with AI technologies.
  • An ethical approach will be essential to achieving confidence in AI technologies. At a minimum, this should include the principles of fairness, transparency, and accountability, and aim to ensure equitable benefits across patient groups.
  • The way in which AI is integrated into clinical workflows and pathways may affect clinical decisions and should be considered during its design process.

References

9 Sinha S, Al Huraimel K. Transforming Healthcare with AI. In: Reimagining Businesses with AI; 2020:33-54. doi:10.1002/9781119709183.ch3

68 Gaube S, Suresh H, Raue M, et al. Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digit Med. 2021;4(1):1-8. doi:10.1038/s41746-021-00385-9
