Confidence that the right artificial intelligence (AI) technologies are being procured and deployed.

Interviewees for this research noted that procurement, ethical and clinical use guidelines can steer how AI is adopted and used within healthcare, and drive confidence in these technologies. They cautioned that effective guidelines would require a dynamic creation process to keep pace with AI development and adoption.

3.3.1 Procurement guidelines

Interviewees suggested that confidence in procuring suitable AI solutions within health and care settings is an important initial step to the safe and effective adoption of these technologies.

The NHS has developed initial guidance in its ‘Buyer’s Guide to AI in Health and Care’.45 The guidance sets out important questions for public sector entities to consider when purchasing ‘off-the-shelf’ AI products (developed by industry innovators and packaged as ready for deployment). These include clarifications on the suitability of the solution, and regulatory, performance and ethical considerations.

The Digital Technology Assessment Criteria (DTAC) for health and social care can support further confidence in meeting clinical safety, data protection, technical security, interoperability, and usability and accessibility standards.17

Disclosure

Industry innovators can be reluctant to share details of their AI products (including information on computational methods, and the robustness and completeness of their training data) due to commercial considerations and intellectual property rights. This can impact the confidence of those procuring, implementing and using AI technologies who may find it challenging to compare products, assess potential risks, determine the need for additional local validation, and communicate with patients about how the technology works.

Several significant developments in regulation and law relating to disclosure and transparency in AI models can guide future approaches to these challenges. Box 4 summarises some of these initiatives.

Box 4: AI disclosure and transparency initiatives

GDPR transparency standards

The General Data Protection Regulation (GDPR), which came into force in 2018, states that when automated decision making is used, the person to whom it relates should be able to access ‘meaningful information about the logic involved’. The guidance that accompanies the GDPR text gives further insight into the nature of this information, describing it as ‘not necessarily a complex explanation of the algorithms used or disclosure of the full algorithm. The information provided should, however, be sufficiently comprehensive for the data subject to understand the reasons for the decision’. It will fall to legislators, data protection authorities and courts to interpret when particular information will ultimately be classed as ‘meaningful’ and ‘sufficiently comprehensive’ without infringing on intellectual property rights.

CDDO algorithmic transparency standard

The Central Digital and Data Office (CDDO) recently announced an algorithmic transparency standard to help government departments provide clear information about the algorithmic tools they use, and why they are using them.46

Model cards

Google has launched ‘model cards’ for AI algorithms: a structured way of sharing essential facts about machine learning models, including their limitations.47 ‘Model Facts’ labels specific to healthcare AI technologies have also been created.48

ICO and The Alan Turing Institute: Explaining AI in practice

The ICO and The Alan Turing Institute have released joint guidance on explaining decisions made with AI, giving organisations practical advice on how to explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them.49

MHRA Project Glass Box

Project Glass Box (AI Interpretability) is one of the packages in the MHRA Software and AI as a Medical Device Change Programme.18 It aims to develop interpretability frameworks for AI algorithms to ensure they are sufficiently transparent to be robust and testable.

3.3.2 AI development guidelines

Guidelines for the development of AI can support industry innovators to create technologies in a safe and responsible way. Knowing that AI technologies have complied with these guidelines can also provide reassurance to those procuring and using these technologies.

There are currently several published initiatives to guide industry innovators and healthcare procurement, including:

  • the Department of Health and Social Care’s ‘Guide to good practice for digital and data-driven health technologies’,50 which supports industry innovators in understanding what the NHS is looking for when buying digital and data-driven technology for use in health and care
  • the NHS’s What Good Looks Like framework,51 a guide for NHS leaders to digitise, connect and transform their services safely and securely. The framework sets out 7 success measures, including governance, resources and standards for safe care, that can provide foundations for the development and deployment of AI technologies
  • the Office for Artificial Intelligence’s ‘Guide to using artificial intelligence in the public sector’,52 which outlines how to build and use AI in the public sector
  • ‘Good Machine Learning Practice for Medical Device Development: Guiding Principles’,53 jointly published by the MHRA, the US Food and Drug Administration (FDA) and Health Canada. These 10 guiding principles can inform the development of medical devices that use artificial intelligence and machine learning.

3.3.3 Clinical guidelines

Clinicians often use guidelines to steer their diagnosis and management of patients.

Feedback from the interviews conducted for this research suggests that clinicians expect specific guidelines on the use of AI technologies to be developed and distributed by entities like the Royal Colleges and the National Institute for Health and Care Excellence (NICE). They consider these guidelines a key contributor to establishing their confidence in using AI technologies. The endorsement of an AI technology by a formal body like the Royal Colleges or NICE would be perceived as a key driver for adopting that technology.

However, the appropriate use of AI technologies will depend on specific features, like those outlined in section 5.2, as well as the clinical context. Interviewees cautioned that, while general guidelines may be sufficient for products with low clinical consequences, it is likely that specific guidelines will need to be developed for individual AI technologies that entail higher clinical consequences.

NICE has 2 programmes in which diagnostic technologies may be evaluated: the Medical Technologies Evaluation Programme (MTEP)54 and the Diagnostics Assessment Programme (DAP).55 Guidance has been published for several AI products.56,57

Submission to MTEP or DAP is not a mandatory requirement for AI technologies, and some industry innovators are uncertain about the suitability of these programmes for digital products. As documented in related research, innovators are deterred by the long timescales required to gather the necessary clinical trial evidence: the MTEP and DAP processes are at odds with the rapid iteration of digital technologies, and innovators are concerned that their products may become outdated by the time they gain approval.58

Further, interviewees for this research noted that smaller industry developers may not have the resources to produce the level of clinical evidence needed for their product’s assessment.

NICE Medtech Innovation Briefings (MIBs) offer a faster way of obtaining NICE advice, taking around 4 months to produce. They do not provide the full guidance offered by MTEP and DAP, but include a summary of the product, the existing evidence, its place in healthcare, and expert opinion.59,60

Interviewees for this research suggested that NICE guideline processes may be limited in scalability. The sheer volume of AI technologies entering the market, and their rapid development lifecycles, will potentially make it challenging for NICE to meet the demand for product-specific guidance. As multiple AI technologies for a given clinical task become available, it may be appropriate to move towards task-level guidance.

3.3.4 Ethical guidelines

The ethical dimensions of AI are currently being debated and defined, with some commonalities across the plethora of available frameworks. A worldwide study of related publications found universal inclusion of the principles of fairness and non-discrimination. Other prominent principles included privacy, accountability and transparency.61

Although there is no universally adopted ethical guidance on AI, health and care settings and industry innovators can draw from available frameworks to inform their practices. These include:

  • the Central Digital and Data Office’s ‘Data Ethics Framework’,62 which guides appropriate and responsible data use in government and the wider public sector
  • the World Health Organization’s ‘Ethics and governance of AI for health’,63 which sets out 6 key ethical principles: protecting human autonomy; promoting human well-being, safety and the public interest; ensuring transparency, explainability and intelligibility; fostering responsibility and accountability; ensuring inclusiveness and equity; and promoting AI that is responsive and sustainable.

Interviewees for this research highlighted the importance of developing awareness and recognition of ethical considerations like fairness, transparency, and accountability in health and care settings to complement regulatory and governance oversight.

Guidelines - Key confidence insights

  • Available guidelines can contribute to confidence in procuring, developing and using AI technologies. Knowing that AI technologies follow accepted guidelines can also contribute to confidence.
  • Clinicians may not feel confident using AI products in clinical decision making until they are included in established clinical guidelines.
  • Although general guidance may be helpful for AI products with low clinical risk, higher risk technologies are likely to require individual guidance. As more AI technologies enter the market, task-level guidance for AI technologies may be appropriate.
  • To support their confidence in AI, healthcare workers will need to develop awareness and recognition of ethical considerations like fairness, transparency, and accountability.

References

45 A buyer’s guide to AI in health care - NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/explore-all-resources/adopt-ai/a-buyers-guide-to-ai-in-health-and-care/. Accessed March 8, 2022.

17 Digital Technology Assessment Criteria (DTAC) - Key tools and information - NHS Transformation Directorate. https://www.nhsx.nhs.uk/key-tools-and-info/digital-technology-assessment-criteria-dtac/. Accessed March 7, 2022.

46 CDDO. Algorithmic Transparency Standard. GOV.UK. https://www.gov.uk/government/publications/algorithmic-transparency-data-standard. Published 2021. Accessed March 7, 2022.

47 Google Cloud Model Cards. https://modelcards.withgoogle.com/about. Accessed March 7, 2022.

48 Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. npj Digit Med. 2020;3(1):1-4. doi:10.1038/s41746-020-0253-3

49 Leslie D. Explaining Decisions Made with AI. SSRN Electron J. 2022. doi:10.2139/ssrn.4033308

18 MHRA. Software and AI as a Medical Device Change Programme. https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme. Published 2021. Accessed March 7, 2022.

50 A guide to good practice for digital and data-driven health technologies - GOV.UK. https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology. Accessed March 7, 2022.

51 What Good Looks Like framework - What Good Looks Like - NHS Transformation Directorate. https://www.nhsx.nhs.uk/digitise-connect-transform/what-good-looks-like/what-good-looks-like-publication/. Accessed March 7, 2022.

52 A guide to using artificial intelligence in the public sector - GOV.UK. https://www.gov.uk/government/publications/a-guide-to-using-artificial-intelligence-in-the-public-sector. Accessed March 7, 2022.

53 Good Machine Learning Practice for Medical Device Development: Guiding Principles - GOV.UK. https://www.gov.uk/government/publications/good-machine-learning-practice-for-medical-device-development-guiding-principles. Accessed March 7, 2022.

54 Medical Technologies Evaluation Programme - NICE guidance. https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance/nice-medical-technologies-evaluation-programme. Accessed March 7, 2022.

55 Diagnostics Assessment Programme - NICE guidance - Our programmes. https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance/nice-diagnostics-guidance. Accessed March 7, 2022.

56 HeartFlow FFRCT for estimating fractional flow reserve from coronary CT angiography - Guidance - NICE. https://www.nice.org.uk/guidance/mtg32. Accessed March 7, 2022.

57 Zio XT for detecting cardiac arrhythmias - Guidance - NICE. https://www.nice.org.uk/guidance/mtg52. Accessed March 7, 2022.

58 An Innovator’s Guide to the NHS; 2020. https://www.boehringer-ingelheim.co.uk/sites/gb/files/documents/innovators_guide.pdf. Accessed March 7, 2022.

59 NICE. Medtech innovation briefings. https://www.nice.org.uk/about/what-we-do/our-programmes/nice-advice/medtech-innovation-briefings. Accessed March 7, 2022.

60 NICE. The technologies - Artificial intelligence in mammography. https://www.nice.org.uk/advice/mib242/chapter/The-technologies. Accessed March 7, 2022.

61 Principled Artificial Intelligence - Berkman Klein Center. https://cyber.harvard.edu/publication/2020/principled-ai. Published 2020. Accessed March 7, 2022.

62 Government Digital Service. Data Ethics Framework - GOV.UK. Government Digital Service. https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020. Published 2020. Accessed March 7, 2022.

63 WHO. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance; 2021. http://apps.who.int/bookorders. Accessed March 7, 2022.

Page last reviewed: 12 April 2023
Next review due: 12 April 2024