Recommendations from this report.

The main recommendation of this report is to develop and deploy educational pathways and materials for healthcare professionals at all career points and in all roles, to equip the workforce to confidently evaluate, adopt and use artificial intelligence (AI). During clinical decision making, this would enable clinicians to determine appropriate confidence in AI predictions and balance them with other sources of clinical information.

The factors influencing confidence in AI, as detailed in this report, can help to determine the educational requirements to develop such confidence among healthcare workers. The second report from this research will outline suggested pathways for related education and training.

Interviewees for this research identified broader efforts that primarily aim to improve patient safety and service delivery, but could also contribute to developing confidence in AI within the healthcare workforce.

Figure C shows these efforts mapped across this report’s conceptual framework:

  • confidence influenced by the governance of AI technologies, with factors relating to regulatory oversight, validation and liability
  • confidence influenced by the implementation of AI technologies at local settings, with factors relating to strategy, culture, information technology (IT) and information governance (IG), local validation, and workflow integration
  • assessment of appropriate levels of confidence in AI-derived information during clinical decision making, with factors relating to clinician attitudes and cognitive biases, the clinical context, and the AI’s features (including explainability)

Many of the identified efforts are already underway, being led by Health Education England, the NHS Transformation Directorate, Integrated Care Systems and trusts, regulators and moderators, legal professionals, academics, and industry innovators.

A forthcoming project will engage with these organisations and relevant groups, and share updates on the progress of these efforts.

Governance
  • Development of professional guidelines on creating, implementing and using AI for all clinical staff groups.
  • Further development of regulatory frameworks for AI performance, quality and risk management.
  • Finalisation of formal requirements for evidence and validation of AI technologies.
  • Development of AI specific pathways for prospective clinical studies of new technologies.
  • Further development of guidance on liability for AI (including autonomous AI).
  • Establishment of flexible and dynamic processes for developing clinical guidelines on AI-assisted clinical tasks and technologies.
  • Development of clear oversight and governance pathways for AI, including AI not classified as a medical device.
  • Development of standards for developing AI for health and care settings (including co-creation with users, model transparency and mitigation of model bias).
Implementation
  • Further development of advice, guidelines, and prototypes for information technology (IT) and information governance (IG) supporting adoption of AI technologies.
  • Development of strategies and assignment of resources to encourage organisational cultures that support innovation, co-creation, and robust appraisal of AI technologies.
  • Encouragement of collaboration and sharing of knowledge across NHS sites that are adopting AI technologies.
  • Development and resourcing of multi-disciplinary teams across clinical, technical, and administrative roles to enable implementation, local validation, audit and maintenance of AI technologies.
  • Establishment of pathways for ongoing monitoring, performance feedback and safety event reporting involving AI technologies.
Clinical use
  • Development of internal systems to record AI-assisted clinical reasoning and decision making (CRDM), including how AI has influenced or changed the decision.
  • Further research on explainable AI and its safe use in CRDM.
  • Further research to understand and optimise the presentation of AI-derived information for CRDM.
  • Further research to understand how certain AI model features influence confidence.
  • Development of confidence in AI technologies across patients and communities via engagement and education activities.
  • Support for clinicians to determine appropriate confidence in AI-derived information and balance it with conventional clinical information for CRDM.

Page last reviewed: 14 April 2023
Next review due: 14 April 2024