Horizon scanning

The AI Roadmap

Developed with Unity Insights, the AI Roadmap (and an interactive dashboard) was published in January 2021. It refines a database of 240 AI technologies that are ready, or almost ready, for deployment. The roadmap helps to provide evidence for prioritisation through an assessment of: 

  • The clinical areas and workforce groups that will be affected by the technology 

  • Likely time to deployment 

  • Impact on the primary workforce group, pathway and system. 

Healthcare workers’ confidence in AI

Read report one - Understanding healthcare workers’ confidence in AI

We have partnered with the NHS AI Lab to research the factors influencing healthcare workers’ confidence in AI-driven technologies, and how that confidence can be developed through education and training. We will publish two reports from this research. 

The Topol Review (2019) recommended that the NHS develop a workforce able and willing to make it a world leader in the effective use of AI and robotics in healthcare.

The first report argues that confidence in AI used in healthcare can be increased by establishing its trustworthiness through robust governance and implementation of these technologies.

In the context of clinical decision making, however, even once the trustworthiness of AI technologies has been established, high confidence in AI-derived information is not always desirable. For example, a clinician may accept an AI recommendation uncritically, perhaps because of time pressure or limited experience in the clinical task - a tendency referred to as automation bias.

The report concludes that clinicians must be supported through training and education to manage potential conflicts between their own intuition or views about a patient’s condition and the information or recommendations provided by an AI system. 

The report identifies broader efforts that primarily aim to improve patient safety and service delivery, but could also contribute to developing confidence in AI within the healthcare workforce. These include further development of regulatory frameworks for AI performance, quality, and risk management, and finalisation of formal requirements for evidence and validation of AI technologies. 

Much of this work is already underway, being led by Health Education England, the NHS Transformation Directorate, Integrated Care Systems and trusts, regulators, legal professionals, academics, and industry innovators. The AI Ethics Initiative is working with these organisations to ensure our findings on AI confidence are considered as part of this broader work. 

The second report, which will be published later this year, will determine educational and training needs, and present pathways to develop related education and training offerings.