An overview of the key concept of confidence in artificial intelligence (AI).

Interviewees for this research noted that many of the UK healthcare settings that are currently adopting artificial intelligence (AI) technologies are at a critical juncture. They are moving through the early stages of an AI technology’s development (from the proof-of-principle to proof-of-efficacy stages) and introducing AI technologies in clinical trials.

Currently, the main challenges of these rollouts involve considerations around information technology (IT) systems, interoperability, and information governance. Interviewees noted that, as these initial challenges are resolved, issues relating to workflow integration, performance monitoring, demonstrating evidence of safety and effectiveness, and securing the trust and confidence of the workforce in AI technologies will become prominent.

This last challenge, securing the trust and confidence of the workforce, is the focus of this research, which begins by clarifying what trust, trustworthiness and confidence mean at the intersection of AI and healthcare.

Moving from trust to appropriate confidence

The literature review and analysis of the interviews conducted for this research suggest that trust and confidence are often used interchangeably, with increased trust in AI often stated as a desirable objective in health and care settings. 

Therefore, when considering how AI is used in health and care settings, it is important to differentiate between trust (which is placed in a product or system), trustworthiness (which is earned), and confidence (which is held individually or collectively). These distinctions have informed this report’s conceptual framework, which uses the term confidence rather than trust.

The term 'trust' denotes a belief in a product or system. It is typically a binary concept: a product or system is either trusted or it is not.

'Trustworthiness' encompasses the quality of a product or system being deserving of trust or confidence.

'Confidence', like trust, conveys a belief in a product or system. However, unlike trust, it is not generally considered binary. Instead, confidence can be understood as continuously variable, depending on a range of factors. Confidence can account for the nuances of using AI clinically, where higher confidence in AI-derived information is not always a desirable objective. It allows for a more dynamic exploration of related influences and behaviours, including situations where lower confidence may be justified.

Interviews for this research suggest that confidence in any AI technology or system used in health and care can be increased by establishing its trustworthiness. Increasing confidence for this purpose is desirable and can be accomplished through a multifaceted approach including regulatory oversight, real-world evidence generation and robust implementation.4

In the context of clinical decision making, once the trustworthiness of an AI technology has been established, high confidence in AI-derived information (the output provided by an AI system to a clinician) may not always be desirable. Instead, different levels of confidence may be held in individual outputs from a given AI technology, depending on the context and circumstances. During clinical decision making, confidence in AI-derived information will depend on numerous factors, including the clinical scenario and other available sources of information. The challenge, therefore, is to enable users to make context-dependent value judgements and to continuously ascertain the appropriate level of confidence in AI-derived information, weighing it against conventional clinical information.

References

4 Spiegelhalter D. Should We Trust Algorithms? Harvard Data Sci Rev. January 2020:1-12. doi:10.1162/99608f92.cb91a35a

Page last reviewed: 12 April 2023
Next review due: 12 April 2024