The research shows that it is important for the healthcare workforce to feel confident in their use of artificial intelligence (AI) technologies.

Interviewees for this research stressed the importance of the healthcare workforce being confident in their own ability to adopt and use AI technologies.  

Low confidence may limit the use of AI technologies, resulting in wasted resources, workflow inefficiencies, substandard patient care and potentially unethical disparities in who benefits from AI technologies.

During clinical decision making, inappropriate levels of confidence in AI-derived information could lead to clinical errors or harm where an AI technology underperforms and its outputs are not properly assessed or checked. This includes a phenomenon known as automation bias, in which the user uncritically favours suggestions made by automated decision making systems.

The main recommendation of this report is therefore to develop and deploy educational pathways and materials for healthcare professionals at all career points and in all roles, to equip the workforce to confidently evaluate, adopt and use AI. During clinical decision making, this would enable clinicians to determine appropriate confidence in AI-derived information and balance this with other sources of clinical information.

The factors influencing confidence in AI, as detailed in this report, can help to determine the educational requirements to develop such confidence across the NHS workforce. The second report from this research will outline suggested pathways for related education and training.

Interviewees for this research identified broader efforts that primarily aim to improve patient safety and service delivery, but could also contribute to developing confidence in AI within the healthcare workforce.

Figure C shows these efforts mapped across this report’s conceptual framework.

Much of this work is already underway, being led by Health Education England, the NHS Transformation Directorate, Integrated Care Systems and trusts, regulators and moderators, legal professionals, academics, and industry innovators.

A forthcoming project will engage with these organisations and relevant groups, and share updates on the progress of these efforts.

Figure C: Efforts that can contribute towards confidence in AI

Governance
  • Development of professional guidelines on creating, implementing and using AI for all clinical staff groups.
  • Further development of regulatory frameworks for AI performance, quality and risk management.
  • Finalisation of formal requirements for evidence and validation of AI technologies.
  • Development of AI-specific pathways for prospective clinical studies of new technologies.
  • Further development of guidance on liability for AI (including autonomous AI).
  • Establishment of flexible and dynamic processes for developing clinical guidelines on AI-assisted clinical tasks and technologies.
  • Development of clear oversight and governance pathways for AI, including AI not classified as a medical device.
  • Development of standards for developing AI for health and care settings (including co-creation with users, model transparency and mitigation of model bias).
Implementation
  • Further development of advice, guidelines and prototypes for information technology (IT) and information governance (IG) supporting adoption of AI technologies.
  • Development of strategies and assignment of resources to encourage organisational cultures that support innovation, co-creation, and robust appraisal of AI technologies.
  • Encouragement of collaboration and sharing of knowledge across NHS sites that are adopting AI technologies.
  • Development and resourcing of multi-disciplinary teams across clinical, technical and administrative roles to enable implementation, local validation, audit and maintenance of AI technologies.
  • Establishment of pathways for ongoing monitoring, performance feedback and safety event reporting involving AI technologies.
Clinical use
  • Development of internal systems to record AI-assisted clinical reasoning and decision making (CRDM), including how AI has influenced or changed the decision.
  • Further research on explainable AI and its safe use in CRDM.
  • Further research to understand and optimise the presentation of AI-derived information for CRDM.
  • Further research to understand how certain AI model features influence confidence.
  • Development of confidence in AI technologies among patients and communities via engagement and education activities.
  • Support for clinicians to determine appropriate confidence in AI-derived information and balance it with conventional clinical information for CRDM.
