Confidence that artificial intelligence (AI) technologies are included in formal governance and oversight.

Interviewees for this research highlighted that a robust, efficient and transparent regulatory system can support confidence in the safe and effective adoption of AI technologies in health and care settings. This includes the regulation of AI products, the regulation of healthcare settings and the regulation of healthcare professionals.

Navigating the regulatory landscape for AI technology can be complex and confusing for both industry innovators and adopters of AI. To simplify this, key UK regulatory and arm’s length bodies (the National Institute for Health and Care Excellence (NICE), the Medicines and Healthcare products Regulatory Agency (MHRA), the Health Research Authority, and the Care Quality Commission (CQC)) have developed the AI and Digital Regulations Service, a cross-regulatory advisory service for developers and adopters of AI.16 The service will create educational material about the regulation of AI technology and provide access to the information developers and procurers of AI need to ensure products meet regulatory requirements.

3.1.1 Regulation of AI products

Regulation can support confidence in AI products used in health and care, giving assurance that a product has been developed responsibly, works as advertised, and uses patient data in a safe, secure and responsible way.

Regulatory requirements for AI products vary depending on whether an AI product is classed by the MHRA as a medical device.

Medical devices must be registered with the MHRA and are subject to the UK Medical Devices Regulations 2002 (UK MDR 2002). This regulation is supported by standards (for example, from the International Organisation for Standardisation) that can be used to demonstrate conformity with medical device regulation.

AI products used in health and care settings that are not classed as medical devices, such as products used to automate administrative processes, are not regulated by the MHRA. These products must, however, conform to other requirements such as the General Data Protection Regulation (GDPR) and the NHS Digital Technology Assessment Criteria (DTAC).17

Regulation of medical devices

Interviewees for this research suggested that there are gaps in the existing regulatory landscape for medical devices. They suggested that regulatory approval does not meet the expectations of AI users and that additional, AI-specific regulation may be required.

All medical devices, including software as a medical device (SaMD), marketed in the UK must be registered with the MHRA and comply with the UK MDR 2002. UK Conformity Assessed (UKCA) certification is required for a device to be placed on the UK market (CE marking, the EU equivalent, will no longer be valid after 30 June 2023).

Devices are classified in accordance with UK MDR 2002 based on their clinical risk (Class I, IIa, IIb and III), with higher classifications indicating higher clinical risk and attracting more stringent regulatory requirements.

Interviewees for this research suggested that there is very limited understanding of medical device and software regulation amongst the healthcare workforce. For example, interviewees perceived that most NHS professionals are unaware of what classifies a product as a medical device or the distinctions between classes of medical device, and what this means for product assessment and deployment.

In addition, interviewees perceived that healthcare workers often equate regulatory approval with proof that a product has met certain standards and can be trusted, for example, that it works in real-world clinical settings. However, current regulatory standards may not provide the assurances that healthcare workers assume they do. In particular, MDR compliance is focused on quality systems for recording design decisions and testing processes, rather than on clinical or technical evidence of performance. AI technologies also require a different performance evaluation from other medical devices. This underlines the importance of the workforce understanding the remit of regulatory approval and being clear about what it does and does not guarantee for a given AI technology.
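
Because regulatory approval may not evidence real-world clinical performance, adopting organisations may wish to assess performance locally before and after deployment. The following is a minimal sketch, not drawn from the report, of the kind of local validation check this implies: computing sensitivity and specificity for a hypothetical binary AI triage output against clinician-confirmed outcomes. All data and names are illustrative placeholders.

```python
# Minimal sketch of a local clinical performance check: the kind of evidence
# that regulatory approval alone does not guarantee. Data are placeholders.

def sensitivity_specificity(y_true, y_pred):
    """Compute sensitivity and specificity from binary labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Hypothetical results from running a vendor's AI triage tool on a locally
# curated validation set with clinician-confirmed outcomes.
y_true = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1, 0, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"Sensitivity: {sens:.2f}, Specificity: {spec:.2f}")
```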

The regulation that currently applies to AI medical devices is the same regulatory framework that is used for any SaMD. Many of our interviewees felt that tailored AI regulation may be necessary to address AI-specific risks.

These developments are already in progress. In September 2021, the MHRA announced the Software and AI as a Medical Device Change Programme,18 which includes three work packages specific to AI as a medical device:

  • Project AI RIG (AI Rigour) - to ensure AI is safe, effective and fit for purpose for all populations it is intended to be used for.
  • Project Glass Box (AI Interpretability) - to outline the impact of interpretability on the safe and effective development and use of AI medical devices.
  • Project Ship of Theseus (AI Adaptivity) - to create guidance that allows for adaptive AI that does not fit within existing change management processes (a brief sketch of the underlying adaptivity problem follows this list).
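
To illustrate the adaptivity problem that Project Ship of Theseus addresses: an adaptive model's behaviour can change without any new software release, so deployments typically need monitoring that makes such change visible and routes it into formal change control. The sketch below is an illustrative assumption, not MHRA guidance; the threshold and data are placeholders.

```python
# Minimal sketch (an assumption, not MHRA guidance): a simple drift monitor
# that makes behavioural change in an adaptive model visible, so it can be
# escalated into a formal change-control review.

from statistics import mean, stdev

def drift_score(reference, live):
    """Standardised difference in means between reference and live outputs."""
    ref_sd = stdev(reference)
    return abs(mean(live) - mean(reference)) / ref_sd if ref_sd else 0.0

reference_window = [0.42, 0.45, 0.40, 0.44, 0.43, 0.41, 0.46, 0.44]  # scores at deployment
live_window = [0.55, 0.58, 0.53, 0.57, 0.56, 0.54, 0.59, 0.56]       # scores this month

score = drift_score(reference_window, live_window)
THRESHOLD = 2.0  # illustrative trigger for a change-control review
if score > THRESHOLD:
    print(f"Drift score {score:.1f}: escalate to change-control review")
```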

Standards

The UKCA marking and approval of SaMD is dependent on conformance with the UK MDR 2002, which requires manufacturers to maintain quality management systems. One way of demonstrating conformance is to meet a ‘designated standard’.19

As detailed in Box 2, International Organisation for Standardisation (ISO) standards can provide a framework for manufacturers to demonstrate the suitability of their product design and quality management systems. 

To implement software within the NHS, all digital health products must also comply with NHS digital, data and technology standards.20 These include DCB 0129 and DCB 0160, which set out clinical risk management requirements for suppliers of digital technology used in health and care (DCB 0129) and for the organisations that deploy it (DCB 0160).
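
As an illustration of what DCB 0129/0160-style clinical risk management involves in practice, the sketch below models a single hazard log entry with a simple severity-times-likelihood risk rating. The field names and risk matrix are illustrative assumptions, not the schema prescribed by the standards.

```python
# Minimal sketch of a hazard log entry of the kind implied by DCB 0129/0160
# clinical risk management. Field names and the risk matrix are illustrative
# assumptions, not the standards' prescribed schema.

from dataclasses import dataclass

SEVERITY = {"minor": 1, "significant": 2, "considerable": 3, "major": 4, "catastrophic": 5}
LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

@dataclass
class HazardLogEntry:
    hazard_id: str
    description: str
    severity: str    # key into SEVERITY
    likelihood: str  # key into LIKELIHOOD
    mitigation: str

    def risk_score(self) -> int:
        """Simple severity-times-likelihood risk rating."""
        return SEVERITY[self.severity] * LIKELIHOOD[self.likelihood]

entry = HazardLogEntry(
    hazard_id="HAZ-001",
    description="AI triage tool under-prioritises an urgent referral",
    severity="major",
    likelihood="low",
    mitigation="Clinician review of all low-priority AI outputs",
)
print(entry.hazard_id, "risk score:", entry.risk_score())
```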

Box 2: ISO standards

The standard ‘ISO 13485:2016 Medical devices - Quality management systems - Requirements for regulatory purposes’ specifies requirements for a quality management system to demonstrate a manufacturer’s ability to provide medical devices and related services that consistently meet customer and applicable regulatory requirements. The ‘ISO 14971 Medical devices - Application of risk management to medical devices’ standard provides further details on risk assessment, control, review and monitoring. 

While it is not mandatory to follow the ISO 13485 standard, it is an effective way to demonstrate compliance with UK MDR 2002 in quality management and to follow internationally recognised best practice. For example, the UKCA marking is based on the requirements of the EU Medical Device Directive (93/42/EEC), which states in clause 12.1a: ‘For devices which incorporate software or which are medical software in themselves, the software must be validated according to the state of the art taking into account the principles of the development lifecycle, risk management, validation and verification.’ The ISO 13485 standard can guide these aspects of medical device development, and details the requirements for validation.

ISO 13485 mentions software explicitly, following the categorisation of SaMD in guidance published by the International Medical Device Regulators Forum (IMDRF).21 It does not differentiate SaMD from conventional hardware medical devices in terms of quality management requirements, but recognises that there may be differences in how those requirements are met.

Although SaMD is recognised in ISO 13485, AI as a medical device (AIaMD) is not currently mentioned in standards surrounding medical devices, so the requirements must be interpreted for this context. Standards and guidance accompanying the MHRA Software and AI as a Medical Device Change Programme will aim to clarify the specific requirements for AIaMD.18

The usability elements of the ISO 9241 standard are also applicable to AI in healthcare, describing user interface and experience principles for human-system interaction. Part 810 of the standard discusses the usability of ‘Robotic, intelligent and autonomous systems’, highlighting some of the system complexity and human-system interaction challenges relevant to AIaMD. Compliance with this standard is not currently a requirement of UKCA or CE marking but could be considered best practice towards building confidence in AI technologies.

3.1.2 Regulation of healthcare settings

While medical devices are regulated by the MHRA, healthcare settings are regulated by the Care Quality Commission (CQC).

The CQC monitors and inspects services, assessing whether they are safe, effective, caring, responsive and well led.22 It publishes standards of care, setting out what good and outstanding care look like, and makes sure services meet fundamental standards below which care must never fall.23

In the context of AI-enabled services, the CQC’s role includes ensuring healthcare settings meet fundamental standards of quality and safety during an inspection, regardless of the medical device status of the technology used.

The CQC has published principles for the inspection process for particular types of technology such as surveillance closed-circuit television (CCTV).24 These principles assess whether surveillance technology is safeguarded, secured, lawful, transparent, operated by trained staff and used in a manner that maintains patient involvement, privacy and dignity.25 Principles for the safe and effective use of AI technologies may also be appropriate.

3.1.3 Regulation of healthcare professionals

Regulators of healthcare workers, like the General Medical Council (GMC) and the Nursing and Midwifery Council (NMC), are responsible for setting standards of competence and conduct, and assessing the quality of education and training courses to ensure healthcare workers have the skills and knowledge to practise safely and competently. These standards may need to be revisited in the context of AI technologies.

Interviewees for this research noted that clinicians look to regulators for guidance on how they should use AI technologies and for reassurance that using AI in clinical practice will not threaten their professional registration. Therefore, the position regulators take about AI technologies will significantly influence clinician confidence in these technologies.

The GMC and Health and Care Professions Council (HCPC) codes of conduct require that clinicians must be prepared to explain and justify their decisions.26,27 This may be challenging where ‘black box’ AI is used in clinical decision making and the clinician cannot explain how an algorithm has reached a given conclusion.
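
One family of techniques that can help clinicians meet this expectation is post-hoc explanation. The sketch below uses permutation importance, which measures how much a model's output shifts when each input is shuffled, as a hedged illustration; the model, feature names and values are hypothetical placeholders, not a regulator-endorsed method.

```python
# Minimal sketch (an assumption, not GMC/HCPC guidance): permutation
# importance as one widely used way to attach an explanation to an otherwise
# opaque model. The model and features are illustrative placeholders.

import random

def opaque_model(features):
    """Stand-in for a black-box risk model (weights hidden from the user)."""
    w = [0.7, 0.1, 0.2]
    return sum(wi * xi for wi, xi in zip(w, features))

def permutation_importance(model, rows, n_repeats=20, seed=0):
    """Mean absolute change in model output when one feature is shuffled."""
    rng = random.Random(seed)
    baseline = [model(r) for r in rows]
    importances = []
    for j in range(len(rows[0])):
        total = 0.0
        for _ in range(n_repeats):
            column = [r[j] for r in rows]
            rng.shuffle(column)
            perturbed = [r[:j] + [v] + r[j + 1:] for r, v in zip(rows, column)]
            total += sum(abs(model(p) - b) for p, b in zip(perturbed, baseline)) / len(rows)
        importances.append(total / n_repeats)
    return importances

# Hypothetical scaled patient features: [blood pressure, age, heart rate]
rows = [[0.9, 0.2, 0.4], [0.1, 0.8, 0.5], [0.6, 0.5, 0.1], [0.3, 0.9, 0.7]]
names = ["blood pressure", "age", "heart rate"]
for name, imp in zip(names, permutation_importance(opaque_model, rows)):
    print(f"{name}: importance {imp:.3f}")
```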

Regulatory standards apply both to clinicians using AI in clinical decision making and those involved in the design, testing and validation of AI products. Interviewees noted that the latter are undertaking roles that were not traditionally within the remit of regulators of healthcare workers, and may require particular consideration and specialised guidance.

Non-clinical developers of AI products used in healthcare are not regulated in the same way as clinical professionals. Feedback from the interviews conducted for this research suggests that formal registration and accreditation of these roles by a regulatory body may be beneficial. This might include a standardised set of training protocols covering technical, ethical and safety standards. Formal accreditation could promote the development of safe and effective AI and improve public trust in these technologies.


Regulation and standards - Key confidence insights

  • A robust regulatory system is key to ensuring that technologies are safe and effective, which contributes to the trustworthiness of AI systems.
  • Healthcare workers may assume regulatory approval proves that an AI product works in a real-world clinical setting. However, current regulatory standards do not provide this level of assurance regarding performance.
  • The healthcare workforce will need to understand the remit of regulatory approval of medical devices.
  • Principles for the safe and effective use of AI technologies from regulators of healthcare settings may be appropriate.
  • Regulators of healthcare workers can consider how to advise clinicians who develop, validate and use AI technologies.
  • Formal registration and accreditation of non-clinical developers of healthcare AI products may be beneficial to promote the development of safe and effective AI.

References

16 The multi-agency advice service (MAAS) (now known as the AI and Digital Regulations Service) - Regulating the AI ecosystem - NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/ai-lab-programmes/regulating-the-ai-ecosystem/the-multi-agency-advice-service-maas/. Accessed March 7, 2022.

17 Digital Technology Assessment Criteria (DTAC) - Key tools and information - NHS Transformation Directorate. https://www.nhsx.nhs.uk/key-tools-and-info/digital-technology-assessment-criteria-dtac/. Accessed March 7, 2022.

18 MHRA. Software and AI as a Medical Device Change Programme. https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme. Published 2021. Accessed March 7, 2022.

19 HM Government. Designated standards: medical devices - GOV.UK. https://www.gov.uk/government/publications/designated-standards-medical-devices. Accessed March 7, 2022.

20 NHS digital, data and technology standards - NHS Digital. https://digital.nhs.uk/about-nhs-digital/our-work/nhs-digital-data-and-technology-standards. Accessed March 7, 2022.

21 IMDRF. International Medical Device Regulators Forum - Software as a Medical Device (SaMD): Key Definitions. 2013. https://www.imdrf.org/documents/software-medical-device-samd-key-definitions. Accessed March 7, 2022.

22 CQC. The five key questions we ask - Care Quality Commission. Care Quality Commission. https://www.cqc.org.uk/what-we-do/how-we-do-our-job/five-key-questions-we-ask. Published 2016. Accessed March 7, 2022.

23 CQC. What we do - Care Quality Commission. https://www.cqc.org.uk/what-we-do. Published 2019. Accessed March 7, 2022.

24 Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. npj Digit Med. 2021;4(1). doi:10.1038/s41746-021-00509-1

25 Wall E, Stasko J, Endert A. Toward a Design Space for Mitigating Cognitive Bias in Vis. 2019 IEEE Vis Conf VIS 2019. 2019:111-115. doi:10.1109/VISUAL.2019.8933611

26 Anwar R. Good medical practice. BMJ. 2003;327(7425):1213. doi:10.1136/bmj.327.7425.1213

27 Smith H. Clinical AI: opacity, accountability, responsibility and liability. AI Soc. 2021;36(2):535-545. doi:10.1007/S00146-020-01019-6/FIGURES/1

Page last reviewed: 12 April 2023
Next review due: 12 April 2024