4.3 Local validation
Confidence that artificial intelligence (AI) technologies are having the right impact locally.
Section 3.2 provides a detailed discussion of the various evaluation and validation approaches for AI technologies, including local validation.
Local validation of AI technologies may be needed to ensure that published data on the performance of those technologies are reproducible in the local context. Such validations will vary depending on the technology and how it will be implemented, and may also involve distinct local settings or clusters: from individual practices to Integrated Care Systems.
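As a purely illustrative sketch (not drawn from this report), local validation of a predictive technology could involve recomputing a published performance metric on locally collected, labelled data and comparing it with the published figure before deployment. The metric, threshold, figures and data below are all hypothetical placeholders.

```python
# Illustrative sketch: comparing a model's performance on local data
# against its published benchmark. All names and figures are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score

PUBLISHED_AUROC = 0.91  # hypothetical figure from the developer's validation study
TOLERANCE = 0.05        # hypothetical locally agreed acceptable shortfall

def local_validation(y_true: np.ndarray, y_score: np.ndarray) -> bool:
    """Return True if local AUROC falls within tolerance of the published value."""
    local_auroc = roc_auc_score(y_true, y_score)
    shortfall = PUBLISHED_AUROC - local_auroc
    print(f"Local AUROC: {local_auroc:.3f} (published: {PUBLISHED_AUROC:.3f})")
    return shortfall <= TOLERANCE

# Synthetic stand-in for locally labelled outcomes and model outputs.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=500), 0, 1)
if not local_validation(y_true, y_score):
    print("Performance not reproduced locally; escalate before deployment.")
```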
There are many unknowns and potential risks involved in ‘translating’ AI technologies from controlled development and validation settings to complex and highly individual real-world settings. These risks can relate to the ability of settings to understand the suitability and performance of the AI technologies locally (including in relation to local populations, practices, hardware, and data pipelines), to maintain the ongoing rigour of that performance, and to minimise any unfair impact on or harm to their patients.
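Maintaining the ongoing rigour of that performance could, for example, take the form of simple post-market monitoring. The sketch below is an assumption-laden illustration, not a method described in this report: it recomputes a rolling metric over a window of recent local cases and flags sustained degradation, with the window size and alert threshold chosen purely for demonstration.

```python
# Illustrative sketch of ongoing (post-market) performance monitoring:
# recompute a metric over a rolling window of recent local cases and flag
# degradation. The window size and alert threshold are hypothetical.
from collections import deque
from sklearn.metrics import roc_auc_score

class PerformanceMonitor:
    def __init__(self, window: int = 200, alert_threshold: float = 0.85):
        self.labels = deque(maxlen=window)   # confirmed outcomes for recent cases
        self.scores = deque(maxlen=window)   # model outputs for the same cases
        self.alert_threshold = alert_threshold

    def record(self, label: int, score: float) -> None:
        """Add one locally confirmed outcome and the model's score for it."""
        self.labels.append(label)
        self.scores.append(score)

    def degraded(self) -> bool:
        """Return True if the rolling AUROC has fallen below the alert threshold."""
        if len(set(self.labels)) < 2:  # AUROC is undefined for a single class
            return False
        return roc_auc_score(list(self.labels), list(self.scores)) < self.alert_threshold
```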
Interviewees for this research noted that being unable to assess whether an AI technology is suitable for their local settings and populations may contribute to unease among healthcare workers and hesitancy to adopt AI technologies. The ‘blind’ acceptance of validations conducted in different settings and populations is an additional risk.
However, interviewees were cautious as to whether each health setting can sustain the resources required for the local validation and ongoing maintenance of AI technologies, including the capacity of clinicians to participate in these processes. They noted that clinicians may hesitate to adopt any AI technologies that expand their workload unless mandated to do so. They may not welcome additional responsibilities for acting as ‘gatekeepers’ of AI, especially without sufficient specialist knowledge and training. Interviewees suggested that this responsibility could be undertaken by local specialists (for example clinical scientists) or centralised entities (as discussed in the second report).
Interviewees suggested that conducting research related to AI (including in relation to evaluation and validation) is an excellent training opportunity for internal teams to identify risks and understand the benefits and value of AI technologies.
However, they noted that these opportunities are typically only available to staff at larger healthcare centres. To support this model of training, staff from smaller organisations could be seconded onto exemplar projects.
Furthermore, hands-on experience under the guidance of expert peers could be encouraged alongside educational programmes and materials. Centres with expertise in these areas could disseminate their knowledge and support colleagues in organisations with less experience (as also discussed in sections 4.1 and 4.2).
Local validation - Key confidence insights
- Health and care settings will need to understand the suitability and performance of AI technologies locally (including in relation to local populations, practices, hardware and data pipelines), maintain the ongoing rigour of that performance (post-market surveillance), and minimise any unfair impact on or harm to their patients.
- Providing universal opportunities for staff to engage with experts and AI-related research and validation projects could enhance their confidence in AI.