The literature review and analysis of the interviews conducted for this research suggest that trust and confidence are often used interchangeably, with increased trust in artificial intelligence (AI) often stated as a desirable objective in health and care settings. 

Therefore, when considering how AI technology is used in healthcare, it is important to differentiate between the terms trust (which is placed in a product or system), trustworthiness (which is earned), and confidence (which is held individually or collectively). These distinctions have informed this report’s conceptual framework, which uses the term confidence rather than trust.

The term 'trust' refers to a belief in the reliability of a product or system and is typically a binary concept: something is either trusted or it is not.

In this context, 'trustworthiness' refers to the quality of a product or system being deserving of trust or confidence.

'Confidence', like trust, conveys a belief in a product or system. However, unlike trust, it is not generally considered a binary concept. Instead, confidence can be understood as continuously variable and dependent on various factors. Confidence can account for the nuances of using AI in clinical decision making, where high confidence in AI-derived information is not always a desirable objective. It allows for a more dynamic exploration of related influences and behaviours, including situations where lower confidence may be justified.

Interviews for this research suggest that confidence in any AI technology or system used in health and care can be increased by establishing its trustworthiness. Increasing confidence in this way is desirable and requires a multifaceted approach including regulatory oversight, real-world evidence generation and robust implementation.4

In the context of clinical decision making, once trustworthiness in AI technologies has been established, high confidence in AI-derived information (the output provided by an AI system to a clinician) may not always be desirable. Instead, different levels of confidence may be held in individual outputs from a given AI technology, depending on the context and circumstances. During clinical decision making, confidence in AI-derived information will depend on numerous factors including the clinical scenario and other available sources of information. The challenge, therefore, is to enable users to make context-dependent value judgements and continuously ascertain the appropriate level of confidence in AI-derived information, balancing AI-derived information against conventional clinical information.

References

4 Spiegelhalter D. Should We Trust Algorithms? Harvard Data Sci Rev. January 2020:1-12. doi:10.1162/99608f92.cb91a35a

Page last reviewed: 11 April 2023
Next review due: 11 April 2024