Confidence that attribution of liability is clear in relation to artificial intelligence (AI) technologies.

Interviewees for this research noted that clarity in the attribution of liability is crucial to increase confidence in artificial intelligence (AI) technologies and to enable their safe and ethical deployment.

They expressed concern about the current uncertainty as to who will be legally accountable for AI technologies used in the clinical decision making process. They highlighted that establishing the liability of the various parties involved in designing, deploying and using AI will be important to promote confidence in these technologies.

Liability is a legal duty or obligation to take responsibility for one's acts or omissions. It is a longstanding legal principle that applies across sectors. However, it is unclear how liability will be applied to AI used in clinical decision making, as there is a lack of established case law in this area.27 The challenge of attributing liability for AI is not unique to healthcare; it arises in many other sectors.

Responsibility for AI used in clinical decision making could fall to the clinician who uses the technology, the deploying organisation, the industry innovator that developed the technology, or those who validated and approved the technology for clinical use. Various legal frameworks may apply, including negligence, product liability and vicarious liability.

AI technologies used as decision making tools could feasibly be treated like other clinical decision making tools, with a potential focus on clinician accountability under medical negligence law. However, this may depend on how an algorithm is used in the decision making process.

‘Black-box’ algorithms pose a particular challenge. If a clinician cannot fully understand and explain how a ‘black-box’ AI algorithm reaches its prediction, they cannot reasonably be considered accountable or responsible for the AI prediction itself.64 However, they may still be held accountable for a decision made using the AI prediction.

In the case of autonomous AI, clinicians may be removed from the decision making process altogether (for example, if AI were used to triage referrals or patient electronic consultations). If an AI algorithm were fully responsible for a clinical decision in this way, it is unclear how existing legal frameworks would apply.

The NHS AI Lab Futures Portfolio is examining these issues in more detail, including through its ‘Liability and Accountability programme’, conducted with NHS Resolution and its expert legal panel, and through a collaborative programme to assess the impact of meaningful human control in AI.

Clarity in liability will also influence developments in establishing related regulation and guidelines, as discussed in sections 3.1 and 3.3.

Liability - Key confidence insights

  • Establishing the liability of the various parties involved in designing, deploying and using AI will be crucial to shaping confidence in AI.
  • Currently, there is uncertainty as to who will be held to account if AI products are used to make clinical decisions that lead to patient harm.
  • Clarity in liability will influence developments in establishing related guidelines and regulation.

References

27 Smith H. Clinical AI: opacity, accountability, responsibility and liability. AI Soc. 2021;36(2):535-545. doi:10.1007/s00146-020-01019-6

64 Hesketh R. Trusted autonomous systems in healthcare: a policy landscape review. 2021. doi:10.18742/pub01-062

Page last reviewed: 12 April 2023
Next review due: 12 April 2024