Appendix A: Advanced AI education and training requirements

This Appendix outlines the key educational and training requirements to develop advanced artificial intelligence (AI) related knowledge, skills and capabilities, as outlined in section 3.3.

These requirements will be additional to the foundational requirements outlined in section 3.2 and the requirements for product-specific education in section 3.4.

The broad aim of the advanced requirements is to develop in-depth understanding of, and skills related to, the subject area following the taxonomies illustrated in Figure 4. These will enable healthcare workers to lead different aspects of the deployment of AI technologies in health settings and advise others.

Interviewees for this research proposed that the need for these advanced requirements should be based on the roles and responsibilities of the healthcare workers (essentially, the workforce archetypes outlined in Chapter 2).

Figure A1 presents an overview of how the advanced requirements relate to the workforce archetypes and the factors that influence confidence in AI (see section 1.2 and the first report1). The following subsections provide details of these requirements for each archetype.

Figure A1: Workforce archetypes and needs for advanced AI education

| Requirement area | Shapers | Drivers | Creators | Embedders | Users* |
| --- | --- | --- | --- | --- | --- |
| AI literacy | Not applicable | Not applicable | Advanced | Advanced | Not applicable |
| Governance - Regulation and standards | Advanced | Advanced | Advanced | Advanced | Not applicable |
| Governance - Evaluation and validation | Advanced | Advanced | Advanced | Advanced | Advanced |
| Governance - Guidelines | Advanced | Advanced | Advanced | Advanced | Advanced |
| Governance - Liability | Advanced | Advanced | Advanced | Advanced | Advanced |
| Implementation - Strategy and culture | Advanced | Advanced | Not applicable | Not applicable | Not applicable |
| Implementation - Technical implementation | Not applicable | Not applicable | Advanced | Advanced | Not applicable |
| Implementation - Local validation | Not applicable | Advanced | Advanced | Advanced | Not applicable |
| Implementation - Systems impact | Advanced | Advanced | Advanced | Advanced | Not applicable |
| Clinical use - AI model and product design | Not applicable | Not applicable | Advanced | Advanced | Advanced |
| Clinical use - Cognitive biases | Not applicable | Not applicable | Advanced | Advanced | Advanced |
| Clinical use - Interface with patients | Not applicable | Not applicable | Not applicable | Not applicable | Advanced |

*Not all Users will require education in these areas; this is discussed in more detail in section 5.

Clinical use - this layer is only relevant to AI used in clinical decision making.

The requirements are collective rather than individual. An individual will not always meet all the requirements themselves, but may instead work within a team where others bring the advanced knowledge and skills. Section 4.1 provides further discussion of the development of AI multi-disciplinary teams (MDTs).

A.1 Advanced AI education for Shapers

Shapers

Example responsibilities:
  • Decide on AI policies within healthcare at a national level.
  • Author and enforce regulation for AI technologies, for professionals creating and using AI, and for healthcare settings implementing AI.
  • Create guidelines for the creation, procurement, deployment and use of AI.
  • Produce national procurement frameworks for AI technologies.
  • Guide training of healthcare professionals.

Examples of individuals who may take on this archetype role:
  • NHS leadership and policymaking teams.
  • Executives at arm’s length bodies (ALBs).
  • Product regulators.
  • Regulators of healthcare workers.
  • Regulators of healthcare settings.
  • Developers of healthcare technology standards.
  • Developers of procurement guidelines.
  • Developers of product development and implementation guidelines.
  • Developers of clinical guidelines.
  • Professional educators.

Shapers are responsible for making key decisions that contribute to establishing the trustworthiness of AI technologies, through formal means of governance and oversight. For example, they may develop standards for the development and deployment of AI technologies, and set the agenda for the regulation, validation and procurement of AI technologies within health settings.

These responsibilities suggest that Shapers will require advanced leadership skills and knowledge in how AI is governed, as detailed in Table A1.

Shapers will also need advanced understanding of some factors related to the implementation of AI technologies. This is to ensure their decisions support and empower other archetypes to facilitate the safe, effective and efficient deployment of AI technologies.

The specific level of expertise required in each of the factors listed in Table A1 will vary according to Shapers’ responsibilities and their organisations. For example, individuals working for regulatory organisations will be expected to have more expert knowledge relating to regulation and standards compared to Shapers working for other organisations.

As noted in the Conclusion, the development of education and training offerings for Shapers can be prioritised to support the development of robust foundations for confidence in AI across the workforce.

Table A1: Shapers: Requirements for advanced AI education

In addition to the knowledge requirements outlined in foundational AI education, Shapers will need the following more advanced knowledge and skills.

Governance

Regulation and standards - Knowledge Taxonomy
  • Familiarity with the clinical governance process for AI software as a medical device (SaMD) including clinical audit, clinical risk management, quality assurance and clinical effectiveness.
  • Familiarity with GDPR applied to healthcare AI products.
  • Familiarity with NHS digital information standards relating to AI (for example, DCB 0129, DCB 0160).
  • Understanding of CE/UKCA marking and methods for obtaining certification.
  • Understanding of different classes of medical device for AI software under UKCA and the related regulatory requirements.
  • Understanding of the limitations of CE/UKCA marking, particularly relating to performance and evaluation.
Regulation and standards - Skill Taxonomy
  • Capable of engaging with international colleagues to improve alignment of regulation and standards across markets and healthcare systems.
Evaluation and validation - Knowledge Taxonomy
  • Awareness of testing and validation techniques for different types of AI algorithms and products (for example, hold-out validation or K-fold cross-validation; see the sketch at the end of this Governance section).
  • Awareness of appropriate methodology for prospective clinical studies of AI technologies.
  • Awareness of guidelines for AI clinical trials.
  • Awareness of approaches to model bias measurement and mitigation.
  • Familiarity with evidence standards for AI products (for example, NICE evidence standards framework).
  • Understanding of the role of local model validation and circumstances in which this may be required.
  • Understanding the need for ongoing monitoring and evaluation requirements for AI technologies after clinical deployment.
Guidelines - Knowledge Taxonomy 
  • Familiarity with AI medical device development guidelines.
  • Familiarity with AI procurement guidelines.
Liability - Knowledge Taxonomy
  • Familiarity with legal frameworks applying to the use of AI in clinical decision making (for example, negligence and product liability).
  • Understanding the relevance of liability to implementation and adoption of AI in healthcare.
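
To make the validation techniques named above concrete, the following minimal Python sketch contrasts hold-out validation with K-fold cross-validation. The scikit-learn library and the synthetic dataset are illustrative choices for this sketch, not tools mandated by this framework.

```python
# Minimal sketch: hold-out validation versus K-fold cross-validation.
# scikit-learn and the synthetic dataset are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000)

# Hold-out validation: performance is estimated on a single unseen split.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
holdout_accuracy = model.fit(X_train, y_train).score(X_test, y_test)

# 5-fold cross-validation: each case is used for testing exactly once,
# giving a mean performance estimate and its variability across folds.
fold_accuracies = cross_val_score(model, X, y, cv=5)

print(f"Hold-out accuracy: {holdout_accuracy:.3f}")
print(f"5-fold accuracy:   {fold_accuracies.mean():.3f} ± {fold_accuracies.std():.3f}")
```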

Implementation

Strategy and culture - Knowledge Taxonomy
  • Familiarity with how AI technologies can be critically appraised, including assessment of their impact on patients, finances and systems.
  • Understanding the challenges of, and opportunities for, collaboration and product co-design with industry innovators.
  • Understanding the principles of digitising, connecting and transforming healthcare services safely and securely.
  • Understanding of ways that AI technologies may lead to inequitable distributions of patient outcomes or disadvantage certain patients.
Strategy and culture - Skill Taxonomy
  • Capable of developing organisational cultures and leaders who support innovation and collaboration.
  • Capable of leading and supporting the development of multi-disciplinary teams for development and deployment of AI technologies.
  • Capable of leading the development of data science and information governance skills in the NHS workforce.
  • Capable of addressing the learning and development needs for NHS staff in relation to AI technologies.
  • Proficient in establishing senior leadership buy-in and supporting internal champions for change when introducing AI technologies.
Systems impact - Knowledge Taxonomy
  • Familiarity with the impact of workflow design on the usability and efficiency of the AI technology.
  • Understanding the importance of robust systems for detecting, reporting, and managing adverse effects or serious incidents related to AI.
  • Understanding the difference between algorithm performance and clinical or healthcare economic benefits.
Systems impact - Skill Taxonomy
  • Capable of supporting interoperability and integration in the context of healthcare technologies.
  • Capable of evaluating and addressing the potential impact of AI on workforce and job roles.
  • Proficient in supporting systems of effective change management.

A.2 Advanced AI education for Drivers

Drivers

Example responsibilities:
  • Set the vision for digital and AI transformation at a regional/local level.
  • Champion AI technologies, by communicating the value and benefits, as a recognised and trusted leader.
  • Lead the systems change required to deploy AI technologies effectively.
  • Make strategic decisions related to AI procurement and deployment at a regional/local level.
  • Implement local AI governance infrastructure to ensure that AI is being deployed safely.
  • Promote funding and resource allocation for AI at a regional/local level.
  • Recruit and lead NHS AI multi-disciplinary teams (MDTs).

Examples of individuals who may take on this archetype role:
  • NHS regional leaders.
  • ICS boards.
  • Chief Information Officers (CIOs).
  • Chief Clinical Information Officers (CCIOs).
  • Project Management Office (PMO) leads.
  • Digital transformation leads.
  • Service leads.
  • Clinical commissioners.

As detailed in the first report1, the adoption of AI technologies within health settings will require strong leadership that promotes a culture of innovation. The ideal leader would co-ordinate funding for AI projects, manage change, and build multi-disciplinary teams to lead deployment of AI technologies.

Drivers will play an essential role in leading digital transformation within the NHS and making related strategic decisions, including in relation to AI.

Interviewees for this research noted that Drivers will need to be supported through education and training to understand the urgency and value of digital transformation with AI in their settings, and to have the skills and knowledge to drive it forward. This includes being intelligent customers for AI technologies, as supported by the recommendation from the Goldacre Review to ‘train senior non-analysts and leaders in how to be good customers of data teams’. Leaders making procurement decisions need the knowledge and understanding to critically appraise AI technologies and ask the right questions to ensure AI is safe, effective and financially viable for their settings (see Table A3). A detailed understanding of the governance of AI technologies will assist these decisions.

Drivers will also need to manage systems change when deploying AI technologies. They will need to critically appraise technologies prior to their deployment, and then oversee how they are integrated into clinical workflows, monitored and updated to maintain their performance. Knowledge of the impact of AI systems will therefore be an important aspect of the advanced education of Drivers.

Drivers will be responsible for building AI multi-disciplinary teams (MDTs) to manage the deployment of AI technologies within their settings. They will need to be particularly aware of the value of Embedders with specialist skills in healthcare informatics and data science. The 2021 HEE report ‘The future of clinical bioinformaticians in the NHS’ highlighted the current lack of understanding of the value of clinical bioinformaticians at the managerial level. Individuals in senior positions are not sufficiently aware of the importance, needs, and capabilities of clinical bioinformaticians.8

Drivers will need to understand the roles and responsibilities of Embedders and how to recruit, train, support and retain these individuals. This can be achieved through advanced education relating to strategy and culture.

Table A2 lists the suggested advanced educational requirements for Drivers.

Table A2: Drivers: Requirements for advanced AI education

In addition to the knowledge requirements outlined in foundational AI education, Drivers will need the following more advanced knowledge and skills.

Governance

Regulation and standards - Knowledge Taxonomy
  • Familiarity with CE/UK Conformity Assessed (UKCA) marking and methods for obtaining certification.
  • Familiarity with different classes of medical device for AI software under UKCA and the related regulatory requirements.
  • Familiarity with the limitations of CE/UKCA marking, particularly relating to performance and evaluation.
  • Understanding of the clinical governance process for AI software as a medical device (SaMD) including clinical audit, clinical risk management, quality assurance and clinical effectiveness.
  • Understanding of GDPR applied to healthcare AI technologies.
  • Understanding of NHS digital information standards relating to AI (for example, DCB 0129, DCB 0160).
  • Understanding of HRA definitions of clinical research and service evaluation as they relate to AI evaluation and implementation, and of the appropriate governance for each.
Evaluation and validation - Knowledge Taxonomy
  • Awareness of testing and validation methodologies for different types of AI technologies.
  • Awareness of methodology for prospective clinical studies of AI technologies.
  • Awareness of guidelines and governance of AI research projects and clinical trials.
  • Awareness of approaches to model bias measurement and mitigation.
  • Awareness of potential sources of error and bias resulting from AI validation design.
  • Familiarity with evidence standards for AI products (for example, NICE evidence standards framework).
  • Understanding of the need for ongoing monitoring and evaluation requirements for AI technologies following their deployment.
Guidelines - Knowledge Taxonomy
  • Understanding of AI procurement guidelines.
Liability - Knowledge Taxonomy
  • Understanding of legal frameworks applying to the use of AI in healthcare (for example, negligence and product liability).

Implementation

Strategy and culture - Knowledge Taxonomy
  • Understanding of the learning and development needs for NHS staff in relation to AI technologies.
Strategy and culture - Skill Taxonomy
  • Capable of evaluating the strategic impact of AI technologies - including on patient care, financial and healthcare systems.
  • Capable of approaching industry for collaborative problem solving and product co-design.
  • Capable of leading digitisation projects to connect and transform healthcare services safely and securely.
  • Capable of developing and supporting AI MDTs for AI development and deployment.
  • Capable of developing data science and information governance skills in the NHS workforce.
  • Proficient in developing an organisational culture that supports innovation and collaboration.
Local validation - Knowledge Taxonomy
  • Familiarity with ongoing monitoring requirements for AI algorithms.
  • Understanding of the role of local validation and the circumstances in which this may be required.
  • Understanding of the skills and expertise required to undertake local validation and ongoing monitoring and how to train and recruit individuals with these skills.
  • Understanding of the need for algorithmic audit36 when implementing a new data-driven technology.
Systems impact - Knowledge Taxonomy
  • Familiarity with the impact of workflow design on AI usability and efficiency.
  • Understanding the importance of robust systems for detecting, reporting, and managing adverse effects or serious incidents related to AI.
  • Understanding the difference between algorithm performance and clinical or healthcare economic benefit.
  • Understanding the importance of interoperability and seamless integration in healthcare technology.
  • Understanding the risk of AI technologies deskilling the clinical workforce, and how product design and workflow integration should be optimised to mitigate these risks.
Systems impact - Skill Taxonomy
  • Capable of evaluating and addressing the impact of AI technologies in terms of service efficiency, patient outcomes, workforce and job roles.
  • Proficient in the application of the principles of effective systems change.

When an AI technology is being considered for procurement, certain questions should be asked to ascertain its suitability and performance, the requirements for deployment and the impact of integration into the clinical workflow.

Table A3 provides a comprehensive list of the questions that Drivers will need to have answered to be confident when procuring AI technologies. It is not necessary for Drivers to understand all the technical details of implementation and AI design, but they should be aware of the need for this information within the AI MDT, which will include Embedders as technical experts.

The questions can be used in addition to the questions outlined in the NHS AI Lab’s ‘A buyer’s guide to AI in health and care’,37 and include further areas of inquiry aligned to the factors that influence confidence in AI identified in our first report1.

Table A3: Drivers: Questions to establish confidence in specific AI technologies

Governance

Regulation and standards
  • Is the AI technology a medical device according to UK regulation? If so, does it have appropriate UKCA/CE marking? What class of medical device has it been designated and is this appropriate?
  • What is the manufacturer’s intended use of the product? (including any exclusions or limitations of scope)
  • Is it compliant with NHS digital standards (for example, DCB 0129, DCB 0160)?
  • Is it compliant with appropriate ISO standards (for example ISO 82304, ISO 13485)?
  • Does this technology abide by regulation relating to access and procurement of data (for example, GDPR)?
  • Does the technology meet GDPR/ICO/MHRA transparency standards?
Evaluation and validation
  • Has this AI technology met appropriate evidence standards in accordance with the NICE evidence standards framework for digital health technologies?
  • How has the AI been clinically validated? Has it undergone internal validation, external validation and prospective clinical studies?
  • Is there appropriate transparency about the training data set? Is the training data set appropriate for the local context (likely to generalise well)?
  • What AI model performance metrics have been used, are they appropriate and are the results acceptable?
  • Does the technology require additional local validation due to generalisability concerns or specific local factors?
  • What are the ongoing validation and monitoring requirements for this technology once deployed?
  • What are the specific risks of bias in model design and training for this technology?
  • What steps have been taken to mitigate against these biases? How can they be further mitigated during implementation and local validation?
Guidelines
  • What clinical guidelines apply to using this AI technology?
  • What are the implications of using this technology outside of the guidelines?
Liability
  • Where does liability lie if the AI output is incorrect and leads to patient harm? Is it clear what is considered product failure versus human error in using the product?
  • What are the potential legal implications of either using or ignoring an AI output in clinical decision making?

Implementation

Strategy and culture
  • What are the strategic advantages of introducing the AI technology? How will it impact clinical outcomes, workflow efficiency, releasing time to care and financial costs?
  • Is the AI technology the most effective and efficient solution to address patient and organisational needs, and how does it compare to alternative solutions?
  • What are the possible risks associated with the AI technology and how can we account for these?
  • Will the AI technology lead to inequitable distribution of patient outcomes or disadvantage any patients? Will it exacerbate existing health inequalities or develop new inequalities?
Technical implementation
  • What are the technical requirements for the AI technology? Will the technology integrate with existing IT systems and use open standards or does it need development of new systems?
  • What are the requirements for data generation, recording, curation, processing, dissemination, sharing, and use? What are the developer's expectations regarding ongoing data sharing post-deployment to facilitate model iteration and development?
  • What IP arrangements are needed?
  • What are the risks for data security and privacy? What arrangements are needed for data protection and privacy?
  • What levels of staff support and infrastructure are needed for the ongoing technical maintenance and updates of the AI technology?
  • What is the developer’s approach to ongoing updates and how will this be financed?
  • How will product decommissioning be managed with the developer including access and storage of patient data?
Systems impact
  • How will the AI technology affect the current workflows and pathways (clinical and administrative)?
  • Are there opportunities for more efficient workflows and effective pathways with introducing the AI technology? Is broader redesign needed to take advantage of the technology?
  • How can we ensure fairness, transparency and equitable outcomes in the use of this technology?
  • What will be required to facilitate the reporting and actioning of safety concerns and adverse event reporting?

Clinical use

AI model and product design
  • In what clinical settings and scenarios is the product appropriate for use? Are there any exclusion criteria or limitations of approval scope?
  • How is this technology used in a clinical workflow? Is it autonomous AI or a human-in-the-loop system?
  • Does the technology show how it has come to a decision (explainable AI)? Has this been validated clinically?
  • Does the technology display certainty estimates? Is it clear how these should be interpreted?
  • What input data is the model using to make a decision?
  • Can the AI system detect and/or reject outlier cases?
  • What are the risks of model bias for this technology? Have appropriate steps been taken by the developer to mitigate this?

A.3 Advanced AI education for Creators

Creators

Example responsibilities:
  • Create AI algorithms independently or through collaboration with industry innovators and/or academia.
  • Align AI algorithm development with appropriate regulation, evidence standards and technical guidelines.
  • Conduct user research with healthcare professionals.
  • Test and validate AI algorithms during product development and subsequent releases.
  • Iterate and improve AI algorithms.
  • Evaluate AI in terms of performance and clinical impact.
  • Set up systems for the ongoing monitoring of AI algorithms to assess for any model drift.
  • Conduct clinical trials of AI algorithms.

Examples of individuals who may take on this archetype role:
  • Specialist digital clinicians.
  • DDaT data professionals (data analysts, data engineers, data scientists).
  • Clinical informatics professionals, including clinical scientists (such as clinical bioinformaticians).
  • Software engineers.
  • NHS AI researchers.

Interviewees for this research noted that the AI technologies currently deployed in health and care settings are purchased ‘off the shelf’, developed by internal teams, or developed through collaboration with industry innovators.

The Creator archetype includes healthcare workers who are involved in any aspect of the development of AI technologies, including scoping, design or testing. These individuals can represent a variety of clinical and non-clinical professional groups. Medical, scientific and informatics professions are likely to be strongly represented in this archetype and can bring a unique blend of clinical and technical expertise to product design and development.

Due to the demands of their role, Creators will require advanced knowledge of most factors associated with confidence in AI technologies, although, as discussed earlier, it is not necessary for every individual to possess the full range of knowledge and skills (provided it is represented within the AI MDT; see section 4.1). It is likely that clinical and industry team members will require different skills that depend on the nature of each project.

Creators will need to be supported through education and training to understand:

  • the impact of AI technologies on clinical workflows and clinical decision making
  • user-driven design and workflow integration
  • cognitive biases, their impact on assisted clinical decision making and ways to mitigate their impact
  • clinical risks and legal responsibilities associated with the implementation of AI technologies in healthcare

Creators will also need to have a detailed understanding of machine learning and AI as well as the clinical questions to be addressed. They will require an appreciation of the regulatory healthcare landscape around software and medical devices, and the broader healthcare technology ecosystem.

Health settings may need to recruit or train staff with relevant Creator skill sets, such as software engineers and data scientists. This is discussed in detail in section 4.2.

Table A4 lists the suggested advanced educational requirements for Creators.

Table A4: Creators: Requirements for advanced AI education

In addition to the knowledge requirements outlined in foundational AI education, Creators will need the following more advanced knowledge and skills.

Creators

AI literacy - Knowledge Taxonomy
  • Familiarity with the data provenance, quality and structure requirements for developing AI models.
  • Familiarity with different project methodologies used for software development (for example, agile, waterfall) and their relative merits.
  • Understanding of data-driven problems (for example, classification, regression, generative).
  • Understanding of different types of algorithms, their benefits and limitations (for example, logistic regression, decision trees, support vector machines, random forest, K-means clustering, neural networks, Bayesian approaches).
  • Understanding of neural networks and their common variants (for example, fully connected network, convolutional neural network, recurrent neural network, generative adversarial network).
  • Understanding of learning methodologies (for example, supervised, unsupervised, reinforcement learning, ensemble learning, distributed learning).
  • Understanding of methods for validating AI models (for example, hold out method, cross validation).
  • Understanding of the limitations of trained AI models for prediction and generative tasks, deriving from limitations of the available data sets and the statistical nature of the algorithm.
AI literacy - Skill Taxonomy
  • Capable of coding in languages and tools used for the creation and analysis of AI algorithms (for example, R, Python, Jupyter notebooks).
  • Capable of engaging with the appropriate software development methodologies (for example, agile) to meet the needs of a given project.
  • Capable of identifying the most appropriate type of algorithm/methodology to solve a given problem.
  • Capable of data extraction and wrangling (for example, feature labelling/extraction, dimensionality reduction, normalisation).
  • Capable of model training and optimisation (for example, tuning hyperparameters, internal validation, optimal stopping).
  • Proficient in evaluating AI models using common metrics (for example, precision, recall, F1 score, Receiver Operating Characteristic analysis); see the sketch below.
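
As a concrete illustration of the training and evaluation skills above, the following minimal sketch splits data, tunes hyperparameters with internal cross-validation, and reports precision, recall, F1 and ROC AUC. scikit-learn, the synthetic data and the parameter grid are illustrative assumptions, not a prescribed pipeline.

```python
# Minimal sketch of a training-and-evaluation workflow; all choices here
# (library, model, data, parameter grid) are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

# Tune hyperparameters with internal cross-validation on the training set only.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [5, None]},
    cv=5,
    scoring="f1",
)
search.fit(X_train, y_train)

# Evaluate the tuned model once on the held-out test set.
y_pred = search.predict(X_test)
y_prob = search.predict_proba(X_test)[:, 1]
print(f"precision: {precision_score(y_test, y_pred):.3f}")
print(f"recall:    {recall_score(y_test, y_pred):.3f}")
print(f"F1:        {f1_score(y_test, y_pred):.3f}")
print(f"ROC AUC:   {roc_auc_score(y_test, y_prob):.3f}")
```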

Governance

Regulation and standards - Knowledge Taxonomy
  • Understanding of the different classes of medical device for AI software under UKCA and the related regulatory requirements.
  • Understanding of the limitations of CE/UKCA marking, particularly relating to performance and evaluation.
Regulation and standards - Skill Taxonomy
  • Capable of contributing to the appropriate processes for obtaining UKCA certification.
  • Capable of managing clinical governance processes required for AI software as a medical device (SaMD) including clinical audit, clinical risk management, quality assurance and clinical effectiveness.
  • Proficient in applying GDPR (General Data Protection Regulation) to the creation and deployment of AI solutions.
  • Proficient in applying NHS Digital Standards for clinical risk management (for example, DCB 0129 and DCB 0160).
Evaluation and validation - Knowledge Taxonomy 
  • Familiarity with evidence standards for AI products (for example, NICE evidence standards framework).
  • Understanding of guidelines and governance for AI research projects and clinical trials.
  • Understanding of performance metrics for different AI algorithms and their interpretation, including limitations of common metrics.
  • Understanding of sources of possible error and bias resulting from AI model and validation design.
Evaluation and validation - Skill Taxonomy
  • Proficient in evaluating different types of AI algorithms and models.
  • Proficient in designing and undertaking prospective clinical evaluation of AI products.
  • Proficient in model bias assessment (see the sketch at the end of this Governance section).
Guidelines - Skill Taxonomy
  • Capable of following medical device development guidelines.
Liability - Knowledge Taxonomy
  • Familiarity with legal frameworks applying to the use of AI in clinical decision making (for example, negligence and product liability).
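
As one way of approaching model bias assessment, the minimal sketch below compares sensitivity and ROC AUC across patient subgroups. The subgroup attribute, the simulated predictions and the choice of metrics are illustrative assumptions, not a validated audit protocol; in practice, material differences between subgroups would prompt further investigation and mitigation before deployment.

```python
# Minimal sketch of a subgroup bias assessment: compare performance metrics
# across patient subgroups. Data and subgroup labels are simulated.
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

def subgroup_report(y_true, y_prob, groups, threshold=0.5):
    """Print sensitivity (recall) and ROC AUC for each subgroup."""
    y_pred = (y_prob >= threshold).astype(int)
    for group in np.unique(groups):
        mask = groups == group
        print(
            f"group={group}: n={mask.sum()}, "
            f"sensitivity={recall_score(y_true[mask], y_pred[mask]):.3f}, "
            f"AUC={roc_auc_score(y_true[mask], y_prob[mask]):.3f}"
        )

# Simulated outcomes, model probabilities and a hypothetical subgroup attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_prob = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, size=500), 0.0, 1.0)
groups = rng.choice(["group A", "group B"], size=500)
subgroup_report(y_true, y_prob, groups)
```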

Implementation

Technical implementation - Knowledge Taxonomy
  • Familiarity with the specific challenges associated with integrating AI into existing healthcare IT systems.
  • Familiarity with challenges in software integration and interoperability in a healthcare environment.
Technical implementation - Skill Taxonomy
  • Capable of applying principles of data governance in relation to AI deployment.
Local validation - Knowledge Taxonomy
  • Understanding of local validation methodology and interpretation.
  • Understanding of the concept of ‘outlier cases’ and how they may be identified (a minimal sketch follows the Local validation lists below).
Local validation - Skill Taxonomy
  • Capable of ascertaining circumstances in which additional local validation of AI technologies may be required.
  • Capable of designing ongoing monitoring and evaluation of AI tools after their deployment.
  • Capable of managing continued iteration and release of AI software.
  • Proficient in defining scope of use and exclusion criteria for AI technologies.
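
As one illustration of how outlier cases might be identified, the sketch below flags inputs that sit far from the training distribution, where model outputs may be unreliable. IsolationForest is one illustrative detector among many, and the data are synthetic.

```python
# Minimal sketch: flag 'outlier cases' whose inputs sit far from the training
# distribution. IsolationForest and the synthetic data are illustrative choices.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(1000, 10))  # stands in for training inputs

detector = IsolationForest(random_state=0).fit(X_train)

x_typical = rng.normal(0.0, 1.0, size=(1, 10))
x_outlier = rng.normal(6.0, 1.0, size=(1, 10))   # far from the training data

# predict() returns +1 for inliers and -1 for outliers.
for name, x in [("typical case", x_typical), ("outlier case", x_outlier)]:
    if detector.predict(x)[0] == -1:
        print(f"{name}: flag for human review; model output may be unreliable")
    else:
        print(f"{name}: within the expected input distribution")
```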
Systems impact - Knowledge Taxonomy
  • Understanding of how to develop AI technologies to streamline existing workflows with seamless integration.
  • Understanding of how AI clinical workflow integration can impact clinical decision making and human cognitive biases.
Systems impact - Skill Taxonomy
  • Capable of testing and evaluating the impact of an AI product workflow integration during product development.
  • Proficient in establishing safety event reporting mechanisms.

Clinical use

AI model and product design - Knowledge Taxonomy
  • Familiarity with outlier detection and its impact on clinician confidence.
  • Understanding of the impact of using autonomous AI on adoption in terms of risk and confidence.
  • Understanding of the methods, benefits and limitations for model explainability in AI.
  • Understanding of the use of probability and certainty estimates in the presentation of AI predictions, and how these can impact clinician confidence.
AI model and product design - Skill Taxonomy
  • Capable of applying user-centred design and co-design principles.
  • Capable of applying AI model transparency guidelines and standards for healthcare (for example, the Central Digital and Data Office’s algorithmic transparency template38 and model facts labels39 or cards40); see the sketch below.
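
As one possible way of applying such transparency standards, the sketch below records model information as a machine-readable summary, loosely in the spirit of the model facts labels and model cards cited above. The field names and example values are hypothetical, not taken from the published templates.

```python
# Minimal sketch of a machine-readable 'model facts' style summary.
# Field names and values are hypothetical illustrations only.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelFacts:
    name: str
    intended_use: str
    training_population: str
    exclusions: List[str] = field(default_factory=list)
    external_validation_auc: Optional[float] = None

facts = ModelFacts(
    name="ExampleRisk v1.2",  # hypothetical product
    intended_use="Decision support for adult inpatients; not for autonomous use",
    training_population="Admissions at two hypothetical NHS acute trusts, 2015 to 2019",
    exclusions=["paediatrics", "maternity"],
    external_validation_auc=0.87,  # hypothetical published figure
)
print(facts)
```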
Cognitive biases - Knowledge Taxonomy
  • Familiarity with different AI methods (for example, decision trees versus deep learning) and their impact on clinician confidence.
  • Understanding of AI failure modes and how these differ from human errors in clinical reasoning and decision making (CRDM).
  • Understanding the risks of cognitive biases in CRDM and how to mitigate these.
  • Understanding how CRDM context affects the tendency towards automation bias, aversion, confirmation or rejection bias, and alert fatigue, in relation to AI predictions.

A.4 Advanced AI education for Embedders

Embedders

Example responsibilities:
  • Implement and integrate AI systems in healthcare settings.
  • Conduct technical implementation and systems integration.
  • Ensure that healthcare data used by AI technologies is managed safely and securely.
  • Establish and manage safety processes for reporting AI technology issues and back-up pathways for when products fail.
  • Conduct local validation of AI technologies if required.
  • Evaluate AI in terms of performance and clinical impact.
  • Participate in ongoing monitoring of AI technologies, assessing for any model drift, including designing and performing algorithmic audits.
  • Design, deliver and continuously update product-specific user education, guiding users about how to use AI technologies safely and effectively.

Examples of individuals who may take on this archetype role:
  • Specialist digital clinicians.
  • DDaT data professionals (data analysts, data engineers, data scientists).
  • Clinical informatics professionals, including clinical scientists (such as clinical bioinformaticians).
  • Statisticians.
  • Information technology (IT) teams.
  • Information governance (IG) teams.
  • Clinical Safety Officers (CSOs) and clinical safety teams.
  • Knowledge managers.

Embedders are crucial in understanding and evaluating AI technologies in local settings, conducting local validation, monitoring and maintaining AI technologies and overseeing the systems change, workflow integration and staff training that will be necessary to deploy AI technologies.

Embedders will likely make up the majority of the AI multi-disciplinary teams (MDTs) discussed in detail in section 4.1. They may include Digital, Data and Technology (DDaT) data family and clinical informatics professionals (data analysts, data engineers, data scientists and clinical scientists, including clinical bioinformaticians), specialist digital/AI clinicians, change management specialists and commercial strategy leads, amongst others.

Interviewees for this research noted that Embedders represent one of the least common archetypes in the current healthcare workforce. Strategies for training and resourcing growth for Embedder skills at a system-wide level are discussed in section 4.2.

Similar to Creators, it is not necessary for each individual Embedder to satisfy all advanced educational requirements. It is likely that AI teams will require different skills that depend on the nature of each project.

Table A5 lists the suggested advanced educational requirements for Embedders.

Table A5: Embedders: Requirements for advanced AI education

In addition to the knowledge requirements outlined in foundational AI education, Embedders will need the following more advanced knowledge and skills.

Embedders

AI literacy - Knowledge Taxonomy
  • Familiarity with the data provenance, quality and structure requirements for developing AI models.
  • Familiarity with types of data-driven problems (for example, classification, regression, generative).
  • Familiarity with types of algorithms, their benefits and limitations (for example, logistic regression, decision trees, support vector machines, random forest, K-means clustering, neural networks, Bayesian approaches).
  • Familiarity with types of neural networks and their common variants (for example, fully connected network, convolutional neural network, recurrent neural network, generative adversarial network).
  • Familiarity with types of learning methodology (for example, supervised, unsupervised, reinforcement learning, ensemble learning, distributed learning).
  • Understanding of limitations of trained models, deriving from limitations of the available data sets.
AI literacy - Skill Taxonomy
  • Capable of identifying appropriate types of algorithm to solve a given problem.
  • Proficient in coding languages and tools used for the analysis of AI algorithms (for example, R, Python, Jupyter notebooks).
  • Proficient in validation methods (for example, hold out method, cross validation).
  • Proficient in interpreting metrics used for AI model evaluation (for example, precision, recall, F1 score, Receiver Operating Characteristic curve, Area Under Curve measures).

Governance

Regulation and standards - Knowledge Taxonomy
  • Understanding of different classes of medical device for AI software under UKCA and the related regulatory requirements.
  • Understanding of the limitations of CE/UKCA marking, particularly relating to performance and evaluation.
  • Understanding of GDPR applied to healthcare AI products.
Regulation and standards - Skill Taxonomy
  • Proficient in managing clinical governance processes required for AI software as a medical device (SaMD) including clinical audit, clinical risk management, quality assurance and clinical effectiveness.
  • Proficient in applying NHS Digital Standards for clinical risk management (for example, DCB 0129 and DCB 0160).
Evaluation and validation - Knowledge Taxonomy
  • Familiarity with evidence standards for AI products (for example, NICE evidence standards framework).
  • Familiarity with guidelines and governance of AI research projects and clinical trials.
  • Understanding of performance metrics for different AI algorithms and their interpretation, including limitations of common metrics.
  • Understanding of sources of possible error and bias resulting from AI validation design.
Evaluation and validation - Skill Taxonomy
  • Capable of undertaking prospective clinical evaluation of AI products.
  • Proficient in evaluating different types of AI algorithms.
  • Proficient in identifying potential model bias and designing appropriate bias testing and mitigation.
  • Proficient in designing and performing algorithmic audits and ongoing monitoring.
Guidelines - Knowledge Taxonomy
  • Awareness of AI medical device development guidelines.
Liability - Knowledge Taxonomy
  • Awareness of legal frameworks applying to the use of AI in clinical decision making (for example, negligence, product liability).

Implementation

Technical implementation - Knowledge Taxonomy
  • Understanding of the challenges associated with integrating AI into existing healthcare IT systems.
Technical implementation - Skill Taxonomy
  • Proficient in software integration and interoperability in a healthcare setting.
  • Proficient in applying principles of data governance in relation to AI product implementation.
Local validation - Skill Taxonomy
  • Proficient in ascertaining circumstances in which additional local validation of AI technologies may be required.
  • Proficient in conducting and interpreting local validation of AI technologies.
  • Proficient in conducting ongoing monitoring and evaluation of AI technologies after their deployment (see the sketch at the end of this Implementation section).
  • Proficient in defining the scope of use and exclusion criteria for AI technologies.
Systems impact - Skill Taxonomy
  • Proficient in testing and evaluating the impact of AI technologies on clinical decisions and workflow integration during deployment.
  • Proficient in establishing and managing internal and external processes for detecting, reporting, and managing adverse effects or serious incidents related to AI.
  • Proficient in evaluating the impact of AI technology in terms of service efficiency, patient outcomes and workforce.
  • Proficient in deploying AI technologies in a manner that streamlines existing workflows with effective systems integration.
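
As an illustration of ongoing monitoring after deployment, the minimal sketch below compares a rolling window of live results against a baseline agreed at local validation and flags possible drift for review. The baseline, tolerance and window size are illustrative assumptions; in practice these would be set as part of a clinically agreed monitoring plan.

```python
# Minimal sketch of post-deployment performance monitoring: compare a rolling
# window of live outcomes against a validation baseline and flag possible drift.
# Baseline, tolerance and window size are illustrative assumptions.
from collections import deque
from sklearn.metrics import roc_auc_score

class PerformanceMonitor:
    def __init__(self, baseline_auc=0.90, tolerance=0.05, window=200):
        self.baseline_auc = baseline_auc
        self.tolerance = tolerance
        self.cases = deque(maxlen=window)  # (observed outcome, model probability)

    def record(self, outcome, probability):
        self.cases.append((outcome, probability))

    def drift_detected(self):
        """Return True when the rolling AUC falls below the agreed tolerance band."""
        if len(self.cases) < self.cases.maxlen:
            return False  # too few cases yet for a stable estimate
        outcomes, probabilities = zip(*self.cases)
        rolling_auc = roc_auc_score(outcomes, probabilities)
        return rolling_auc < self.baseline_auc - self.tolerance

# Usage: record each case as ground truth becomes available, then escalate
# to the clinical safety team if drift_detected() returns True.
```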

Clinical use

AI model and product design - Knowledge Taxonomy
  • Awareness that key factors in product design (for example, transparency, level of detail, ease of user experience) can influence user confidence for AI in healthcare.
  • Familiarity with outlier detection methods and the impact of outliers on clinical confidence.
  • Familiarity with the methods, benefits and limitations of explainability in AI.
  • Understanding of the use of probability and certainty estimates in the presentation of AI predictions, and how these can impact clinician confidence.
AI model and product design - Skill Taxonomy
  • Capable of applying AI model transparency guidelines and standards for healthcare.
Cognitive biases - Knowledge Taxonomy
  • Awareness of how a clinician’s level of expertise in a given clinical area can influence their interpretation of AI information.
  • Awareness of how AI failure modes can differ from human error, and the significance for CRDM.
  • Familiarity with various types of algorithms, and the impact of algorithm type on trustworthiness and clinician confidence.
  • Understanding cognitive biases, their risks, and approaches to mitigate these.
Cognitive biases - Skill Taxonomy
  • Proficient at evaluating published AI performance metrics and evaluation data and communicating their real-world relevance to clinicians.

As part of their responsibilities, Embedders will need to appraise AI technologies to determine whether they have met appropriate governance standards and display acceptable performance for the clinical scenario, to assess the requirements for their implementation, and to ascertain the potential impact on patient outcomes and clinical workflows.

Table A6 lists the questions that Embedders will need answered to develop confidence in the suitability of a specific AI technology.

Table A6: Embedders: Questions to establish confidence in specific AI technologies

Governance

Regulation and standards
  • What are the clinical safety processes for detecting, reporting, and managing serious incidents due to software errors?
  • Is the AI technology a medical device according to UK regulation? If so, does it have appropriate UKCA/CE marking? What class of medical device has it been designated and is this appropriate?
  • What is the manufacturer’s intended use of the product? (including any exclusions or limitations of scope).
  • Is it compliant with NHS digital standards (for example, DCB 0129, DCB 0160)?
  • Is it compliant with appropriate ISO standards (for example, ISO 82304, ISO 13485)?
  • What are the limits of regulation for this product and is local evaluation required to fill the gaps?
Evaluation and validation
  • What are the specific risks of bias in AI model design and training for this technology?
  • What steps have been taken to mitigate against these biases? How can they be further mitigated during implementation and local validation?
  • What are the limitations in how the AI technology was designed, trained and validated?

Implementation

Technical implementation
  • What are the technical requirements for the AI technology? Will the technology integrate with existing IT systems or does it need development of new systems?
  • What are the requirements for data generation, recording, curation, processing, dissemination, sharing, and use?
  • What IP arrangements are needed?
  • What are the risks for data security and privacy? What arrangements are needed for data protection and privacy?
  • What arrangements are needed for the ongoing technical maintenance and updates of the AI technology?
Local validation
  • Does the published validation cohort for this AI technology represent the local clinical population sufficiently well to be transferable?
  • Is local validation and/or other actions (for example, change in scope of use) needed to address the AI technology’s limitations?
  • If required, how could local validation be conducted for this AI technology?
  • What level of ongoing model surveillance is appropriate for safety and maintenance of the AI technology’s rigour?
  • How can we develop a robust performance monitoring plan?
  • What levels of staff support and infrastructure are needed for the ongoing technical maintenance and updates of the AI technology?
Systems impact
  • How will the AI technology affect the current workflows and pathways (clinical and administrative)? Is broader redesign needed to take advantage of the technology?
  • How should we set up a backup workflow for when the AI technology fails?
  • How can we best ensure clinical safety in the local deployment of AI technologies?
  • What processes do we need to review and respond to relevant safety recommendations and alerts?
  • How can we ensure fairness, transparency and equitable outcomes in the use of AI technologies?

A.5 Advanced AI education for Users

Users

Example responsibilities:
  • Use AI within healthcare settings in accordance with guidelines.
  • Employ appropriate safety measures related to the use of AI.
  • Communicate with patients and the public about AI.

Examples of individuals who may take on this archetype role:
  • Clinicians using AI.
  • Non-clinical staff using AI.
  • Clinical researchers using AI.

Interviewees for this research noted that the need for advanced education for Users will depend on their specific responsibilities and the risks associated with using the AI technology. For example, administrative staff using an AI technology to automate appointment bookings will not require the same detailed knowledge of AI as a clinician using an AI technology to assist in high-stakes clinical decision making.

All Users will require product-specific training to educate them on how to use AI technologies safely and effectively, as detailed in section 3.4.

Using AI for clinical reasoning and decision making (CRDM)

Interviewees for this research highlighted the need to provide clinicians with the knowledge and skills they require to use AI as these technologies become increasingly rolled out in healthcare settings.

As discussed in the first report1, clinicians vary in their attitudes towards technology and change, which can dictate their willingness to use AI in their work. However, it is important that AI and CRDM-related education reaches all clinicians to optimise patient care throughout all settings and minimise any disparities in how AI technologies are adopted and used.

Ideally, CRDM-related education for clinician Users will be tailored to specific clinical roles, levels of seniority and speciality. This education should reach clinicians in training as well as those who are fully qualified. Education should be directed to all professionally trained clinical staff including doctors, nurses, pharmacists and allied healthcare professionals (AHPs).

Clinician Users will require advanced knowledge of the impact of AI in CRDM to ensure that AI technologies are being used in a fair, robust and safe manner. This should include an understanding of the limitations and weaknesses of machine learning algorithms, and the situations in which they are most likely to underperform or produce erroneous results. It should also include an awareness of the potential impact of using AI technologies on decision making, including the influence of human cognitive biases and the risks of over- and under-confidence in AI.

Communication skills to enable discussion with patients and caregivers about AI technologies should form an important part of clinician User education. Broad adoption of AI technologies has the potential to reshape the relationships between clinicians and patients, requiring clear communication and soft skills to help counsel patients about the use of AI technologies and guide shared clinician-patient care decisions. Clinician Users will need to know how to advise patients about the ownership and sharing of their personal data, and be able to discuss the implications of using AI technologies on their personal data and care.

Some Users may undertake additional education and training to become specialist digital/AI clinicians with a mixture of clinical and technical skills and may end up taking on additional Embedder and/or Creator roles. This is discussed further in section 4.2.

Table A7 lists the suggested advanced educational requirements for Users.

Table A7: Users: Requirements for advanced AI education

In addition to the knowledge requirements outlined in foundational AI education, Users will need the following more advanced knowledge and skills.

Governance

Evaluation and validation - Knowledge Taxonomy
  • Familiarity with the limitations of CE/UKCA marking relating to performance and evaluation.
  • Familiarity with evidence standards for AI products (for example, NICE evidence standards framework).
  • Familiarity with guidelines and governance for AI research projects and clinical trials.
  • Understanding of performance metrics for different AI algorithms and their interpretation, including limitations of common metrics.
  • Understanding of sources of possible error and bias resulting from AI model and validation design.
Evaluation and validation - Skill Taxonomy
  • Capable of critically appraising published evidence for the performance of an AI algorithm in their area of practice.
Guidelines - Knowledge Taxonomy
  • Familiarity with clinical guidelines relating to use of AI in their area of practice.
Liability - Knowledge Taxonomy
  • Familiarity with legal frameworks applying to the use of AI in clinical decision making (for example, negligence and product liability).

Clinical use

AI model and product design - Knowledge Taxonomy
  • Familiarity with different ways in which bias in AI algorithms may occur and the potential impact on clinical decision making.
  • Understanding of the methods, benefits and limitations for model explainability in AI.
  • Understanding of the role of AI model explainability in clinical confidence, including limitations of some explainability approaches for confidence in predictions for specific patients.
Cognitive biases - Knowledge Taxonomy
  • Awareness of where to seek guidance on the use of AI technologies in complex clinical scenarios.
  • Familiarity with potential AI failure modes, how these may differ from human error and how to report AI safety incidents.
  • Understanding of how and why some clinicians may be under- or over-confident in information derived from AI technology.
  • Understanding the effect of clinical domain expertise on being under- or over-confident in information derived from AI technology.
  • Understanding AI performance metrics and how to interpret published research relating to AI technologies.
  • Understanding of ‘appropriate confidence’ in relation to CRDM, including the fact that the optimal confidence level may vary from case to case.
  • Understanding how information derived from AI may differ from other information used in CRDM and what this means for making clinical decisions.
  • Understanding the risks of cognitive biases in CRDM and approaches to mitigate against these.
  • Understanding how CRDM context (time criticality, patient involvement, clinical risk) and workflow integration affect the tendency to accept or reject AI-derived information.
Cognitive biases - Skill Taxonomy
  • Capable of balancing the risks and benefits of AI-assistance in CRDM for a given clinical task.
  • Capable of identifying instances in which it is, and is not, appropriate to use a specific AI technology for decision support.
  • Proficient in addressing disagreement between clinical intuition and information derived from AI technologies.
Interface with patients - Skill Taxonomy
  • Capable of discussing issues relating to data privacy and data use with patients in relation to AI technologies.
  • Proficient in communicating how AI-derived information has been incorporated in clinical decision making alongside other clinical information.
  • Proficient in counselling patients about the benefits and risks of AI technologies and their impact on shared clinical decision making.


References

1 Nix M, Onisiforou G, Painter A. Understanding healthcare workers’ confidence in AI. Health Education England and NHS AI Lab. 2022. https://digital-transformation.hee.nhs.uk/binaries/content/assets/digital-transformation/dart-ed/understandingconfidenceinai-may22.pdf Accessed June 29, 2022.

8 Health Education England. The Future of Clinical Bioinformaticians in the NHS. 2021. https://www.hee.nhs.uk/our-work/building-our-future-digital-workforce/future-clinical-bioinformaticians-nhs Accessed May 24, 2022.

36 Liu X, Glocker B, McCradden M, Ghassemi M, Denniston A, Oakden-Rayner L. The medical algorithmic audit. 2022. https://pubmed.ncbi.nlm.nih.gov/35396183/ Accessed May 24, 2022.

37 NHS. A buyer's guide to AI in health and care. 2020. https://www.nhsx.nhs.uk/ai-lab/explore-all-resources/adopt-ai/a-buyers-guide-to-ai-in-health-and-care/ Accessed May 24, 2022.

38 Central Digital and Data Office. Algorithmic transparency template. 2021. https://www.gov.uk/government/publications/algorithmic-transparency-template/algorithmic-transparency-template Accessed May 24, 2022.

39 Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. npj Digit Med. 2020;3. https://www.nature.com/articles/s41746-020-0253-3?proof=t2019-5-29 Accessed May 24, 2022.

40 Mitchell M, Wu S, Zaldivar A, Barnes P, Vasserman L, Hutchinson B, Spitzer E, Raji ID, Gebru T. Model Cards for Model Reporting. 2019. https://arxiv.org/abs/1810.03993 Accessed May 24, 2022.
