Healthcare workers will require product-specific user training.

The diversity of applications for artificial intelligence (AI) technologies in healthcare, and the variation in how these technologies are designed and deployed across health settings, suggest that healthcare workers will require training specific to each technology introduced and used in their setting.

Factors such as the technology’s intended use, technical basis, user interface and workflow integration will shape the specific educational requirements and training.

Interviewees for this research noted that most users of AI technologies currently rely on product providers (industry innovators) for training on those technologies. At present, there are no standards or regulations governing the requirements for such training. Further, the burden of delivering such training may fall on small and medium-sized enterprises that may lack the resources to educate large cohorts of NHS staff across multiple settings.

Given these limitations, future training for specific AI technologies is likely to require a collaborative effort between industry innovators and internal teams in health settings. This would allow product-specific training to reflect local workflows and the clinical setting, providing a more bespoke approach that better equips users of that technology.

The HEE report ‘Data Driven Healthcare in 2030’ recommended developing a ‘programme to develop professionals and managers in the field of IT education and training’.7 This is supported by the Goldacre Review, which recommends creating ‘a technical team to house and develop continuing professional development resources’. The review states that ‘providing a team of technical specialists with adequate funding to develop, deliver, share, and curate training ... will be essential if training is to be high-quality and up to date’.4

As noted in Box 3, creating, delivering and continually updating product-specific user training as products evolve can be a key responsibility of AI-specific multi-disciplinary teams (MDTs). Specialist technical educator roles within these teams would likely be required to support such training.

Table 2 lists the areas of knowledge and skill required in relation to a specific AI technology deployed in a healthcare setting. These are intended to guide the information made available to users during AI product-specific training.

Table 2: Requirements for product-specific training

Governance - knowledge taxonomy
  • Familiarity with the clinical guidelines that apply to the use of the AI technology.
  • Familiarity with the implications of using the technology outside of the guidelines.
  • Understanding of clinician liability for the product, including in scenarios where the output is incorrect and leads to patient harm.
  • Understanding of the legal implications of either using or ignoring AI-derived information in clinical reasoning and decision making (CRDM), including current uncertainty relating to liability for AI in CRDM.
Implementation - knowledge taxonomy
  • Familiarity with the technology’s intended use, and inclusion and exclusion criteria for that use.
  • Familiarity with reporting requirements for potential errors or safety concerns with the technology.
  • Familiarity with product-specific factors that may affect fairness, transparency and equitable outcomes in the use of the technology.
Clinical use - knowledge taxonomy
  • Familiarity with who to contact with questions about the use, performance or monitoring of the AI technology.
  • Familiarity with the potential clinical consequences of false-positive and false-negative errors, and how these should be managed.
  • Familiarity with model limitations and situations where the AI technology is more likely to make an error or be unreliable, including the identification of potential ‘outlier’ cases.
  • Familiarity with how assisted decision making should be recorded in this scenario.
  • Familiarity with the pathway for patients to query decisions made with AI.
  • Understanding of factors that may influence how clinicians weigh AI-derived information in this specific scenario, including their level of clinical experience at the task.
  • Understanding of how any explanation or probability provided with a prediction from the AI should be interpreted in assisted clinical decision making.
  • Understanding of how workflow integration might affect the interpretation of AI-derived information, including the timing of AI information relative to clinician assessment.
  • Understanding of how to respond to situations when the AI contradicts clinical intuition, and any processes in place in terms of referral, arbitration and documentation.
Clinical use - skill taxonomy
  • Capable of weighing a prediction from the AI against other forms of clinical and demographic information during clinical decision making.
  • Capable of counselling patients about the use of their data by the AI technology, and how this data is accessed, processed and stored.
  • Capable of explaining this tool and its impact on CRDM to the patient, including access to patient communication materials.
  • Capable of communicating the risks and limitations of the AI technology to guide the patient through decisions about their health.

References

7 Health Education England. Data Driven Healthcare in 2030: Transformation Requirements of the NHS Digital Technology and Health Informatics Workforce. 2021. https://www.hee.nhs.uk/our-work/building-our-future-digital-workforce/data-driven-healthcare-2030 Accessed May 24, 2022.

4 Goldacre B, Morley J. Better, Broader, Safer: Using health data for research and analysis. A review commissioned by the Secretary of State for Health and Social Care. Department of Health and Social Care. 2022. https://www.goldacrereview.org/ Accessed May 24, 2022.
