Digital, Artificial Intelligence and Robotics (DART-Ed) webinar series
We are running a series of webinars looking at recent exciting developments in artificial intelligence (AI), robotics and digital technologies.
Webinar 1 - Digital, Artificial Intelligence and Robotics across the system - an introduction
The first webinar took place on 18 November 2021, providing an introduction to the work of Health Education England's DART-Ed programme. We looked at the wider landscape of activity across the system, particularly the work of our partners, including NHSX (now the NHS Transformation Directorate), and examined the main issues, barriers and benefits in implementing these technologies, with an opportunity for participants to ask our expert panel questions and find out more.
On the panel were:
- Patrick Mitchell (Chair) - Director of Innovation, Digital and Transformation, Health Education England
- Dr Hatim Abdulhussein - Clinical Lead of DART-Ed Programme, Health Education England
- Richard Price - Project Manager, Technology Enhanced Learning, Health Education England
- Brhmie Balaram - Head of AI Research & Ethics, NHS AI Lab, NHSX
Watch a recording of this session below.
Watch webinar 1
Patrick Mitchell: Good to see so many people joining us this afternoon. My name is Patrick Mitchell. I'm the Director of Innovation, Digital and Transformation at Health Education England and the senior responsible officer for the DART-Ed programme, as well as the digital readiness programme in HEE. Delighted we've got so many people joining us. Can I ask people to turn their cameras off and to put themselves on mute during the session? As we come to the Q&A, we can bring people back in. I'm absolutely delighted that HEE is able to work across the digital skills agenda, particularly looking at the implications of AI, digital healthcare and robotics on the NHS workforce, following the Topol review, which was published in February 2019. This is one of a number of webinars we're going to be running, and I'm now going to ask the team to introduce themselves before we get into the presentation. So we'll kick off with Hatim.
Hatim Abdulhussein: Thanks, Patrick. My name is Hatim Abdulhussein and I'm a GP registrar in northwest London, and I've been at Health Education England working as part of our new digital, AI, robotics and education and training programme.
Patrick Mitchell: Thanks, Hatim. Brhmie?
Brhmie Balaram: Hi everyone. It's good to be here. I'm Brhmie Balaram. I'm the head of AI research and ethics at the NHS AI Lab, and I lead our AI ethics initiative.
Patrick Mitchell: Fantastic. And Richard?
Richard Price: Good afternoon, everyone. Richard Price; I'm the learning technologies adviser here at Health Education England, part of the Technology Enhanced Learning team. We look at all of the different touch points where technology might potentially be able to support learners with their education and training.
Patrick Mitchell: Great. Thank you, Richard. So we'll kick straight off. Hatim's going to give an introduction, Brhmie is then going to introduce the AI Lab and the collaboration with our programme, and Richard's going to be talking about our technology enhanced learning programme and how it links to this piece of work. So over to you, Hatim. Thank you.
Hatim Abdulhussein: Magic. I'm just going to bring my screen up; I've got some slides to share with you today that give you an overview. This all really started with the Topol review, published in 2019, which made quite clear recommendations around education and training of our current and future workforce: increasing the number of clinicians with knowledge and skills around AI, robotic technologies and other digital healthcare technologies, as well as some specialist skills for our healthcare scientists, our technologists and our knowledge specialists. And it's quite clear from our point of view at Health Education England that we need to keep looking at our current workforce and how we can support it to work in a digitally enabled system, and at our future workforce, to ensure they'll be able to work in a system that's going to continuously adapt and transform digitally. Some of the definitions are really important. The Topol review defined AI as a broad field of science, encompassing not just computer science but psychology, philosophy, linguistics and other areas, which is essentially concerned with getting computers to do tasks that would normally require human intelligence. The Topol review also defined digital medicine as digital products and services that are intended for use in the diagnosis, prevention, monitoring and treatment of a disease, condition or syndrome, and this can encompass a whole range of technologies. Many of these technologies are now used daily in practice, like telemedicine, and there are many patients and consumers using some of these technologies themselves, in terms of smartphone applications and wearable devices, for example, and coming to have conversations with us about them.
And there's emerging software that we use, such as e-prescribing, electronic healthcare records and point-of-care testing, and emerging technologies like extended reality (virtual reality and augmented reality) could have quite interesting healthcare applications. So our programme has three key projects. The first is looking at the impact of these AI and digital healthcare technologies on education and training, and we've been working with a wide range of partners. The second is a project with the Royal College looking at robotics literacy as part of their RADAR initiative. We've also been working quite closely with the British Association of Dermatologists, looking at developing digital leaders in dermatology, and we'd love to continue that sort of relationship with the British Association of Dermatologists and with other faculties and colleges as we think about that journey. All of this work is overseen by our digital and education group. We're looking at this through three lenses. One is horizon scanning: knowing what's coming. We've been looking at AI technology in particular and creating an AI roadmap to see how many technologies are out there currently in the system, how far they are from scalability or deployment, what workforce groups they affect and what workforce impact they then create, and also taking some deeper dives into a couple of these technologies to understand them a little further. We also need to be really proactive and assess what the learning needs are, so we're continuing some of the work we've done previously at HEE to develop a digital literacy framework, to understand what the learning needs are going to be and the capabilities our workforce is going to need for AI and digital healthcare technologies, with a view that this will provide a basis for a curriculum.
You can look at this, pick up the capabilities, map them to your professional workforce group and build that into an undergraduate or postgraduate curriculum, and it can also help us to direct where we're going to deliver some of our education and training opportunities in this space. If you go to the Learning Hub now, you'll be able to see one of our catalogues dedicated to AI and robotics technologies in education, and there will be some learning materials already signposted within it. What we hope to do is continue to build that catalogue so it becomes a go-to space when you want to look for extra learning in this area. And the reason you're all here today is because you are interested, so I want to quickly run through the plan for the future. This is the start of the DART-Ed webinar series. All webinars will be free to attend; all webinars will be recorded and added to the DART-Ed catalogue on the NHS Learning Hub, and if you keep an eye on the DART-Ed website and the HEE digital and social media platforms, they will give you an idea of when the webinars will be coming, the exact dates, and what they'll be covering. This first one is talking about digital, AI and robotics across the system. In future events we may explore themes such as ethics and confidence in AI early next year; in the spring we're going to be looking at nursing and midwifery, and digital and AI in nursing; we're going to have a spotlight on AI in healthcare in late spring of next year, followed by a webinar over the summer around dentistry, culminating in a webinar around robotics-assisted surgery in the autumn of next year. So that's the current plan going forward, but we hope to continue the webinar series, and we're really pleased with the engagement we've had today. If anyone wants to get in touch,
feel free to reach out. I'm now going to pass over to my colleague Brhmie.
Brhmie Balaram: Thanks, Hatim. OK, so the NHS AI Lab was established in 2019 with the mission to enable the development and adoption of safe, ethical and effective AI-driven technologies in UK health and care systems. One of the main ways we're doing that is by supporting innovators to carefully scale their technologies for use across the NHS, and we primarily do this through the AI in Health and Care Awards. We have a number of technology companies that are CE marked, meaning they have regulatory approval, that we are now supporting with evaluation of their products. Alongside the AI Award programme, we also have Skunkworks, which develops proofs of concept that are often proposed by healthcare practitioners who have identified a particular need. We have the AI imaging programme, which is currently standing up a national medical imaging platform to support training and testing of imaging-based AI products, and we also have the regulatory programme, which is working with organisations like the MHRA and NICE to streamline the regulatory pathway. So, for example, they're collaborating on the creation of a multi-agency advice service, which would bring together regulatory information and make it accessible through a single portal. I lead our AI ethics programme, which was launched earlier this year in February. We support research and practical interventions that can complement and strengthen existing efforts to validate, evaluate and regulate AI technologies in health and care, and we have a particular focus on countering the inequalities that could emerge from the way AI is developed or deployed.
One of our main objectives is to support a key goal of the lab, which is to build trust and confidence in AI systems, so we're investing in a number of practical interventions, such as working with the Ada Lovelace Institute to develop a model for algorithmic impact assessments that can be used as a condition of accessing data from the national medical imaging platform. The project I wanted to highlight, however, is a partnership with Health Education England. It's being led by Dr Michael Nix and Dr Annabelle Painter, with support from one of our colleagues in the lab, George, and the project is specifically exploring how to build appropriate confidence in AI among the healthcare workforce. We want staff to feel they have enough confidence to use these technologies, but we don't want them to be overconfident in these technologies to the point where they are overriding their own judgement or placing more trust than is warranted in a certain product. This is obviously a very difficult balance to strike, but that's what we're aiming for. I think part of how we will get there is by being clearer about the expectations we have of healthcare staff when using these technologies. For example, in relation to their role in post-market surveillance, there's a certain level of knowledge and understanding that healthcare staff need, and that can be gained through a training course, but they will also need to be supported at national and local levels, so this project also looks at the roles of senior leadership in driving and shaping the use of these technologies in the workforce.
Patrick Mitchell: Thank you. Now handing over to Richard Price. Richard?
Richard Price: Thank you, Patrick. Very good to be here. My name is Richard Price, as I said, and I work in a team called Technology Enhanced Learning. It's a specialist team within Health Education England responsible for looking at all of the different points in education and training where technology might be able to support that education and training, all the way through: pre-16, when someone first starts to think about a career in healthcare, through undergraduate and postgraduate education, all the way into the workforce and potentially beyond as well. So, like I say, a huge area to cover, looking at 1.2 to 1.3 million people and how technology can potentially support all of those things, and we do that in a number of ways. The main delivery mechanism everyone's familiar with is our delivery platforms. The Learning Hub has been mentioned briefly; it's our new platform, and it provides free access to a wide range of informal resources shared and contributed by the community, so potentially things like e-learning courses but also video, audio, images, podcasts and web links, all the different things you might come across in education and training content. We then have a second platform called e-Learning for Healthcare, which is very much our formal offer. There are 450 or so programmes on that, including our flagship coronavirus and vaccination programmes; we've had millions of users access and go through that training, so we're really proud of that. It was produced in response to the pandemic and, again, is freely available, and we're starting to see some content on AI make its way into all of these platforms. The third platform we offer is called Digital Learning Solutions, and that is very much about the IT side of training.
It hosts that sort of national digital capability and digital literacy content, as well as details of clinical systems and the more familiar office tools, so quite a broad range of different systems that we offer as part of that. Alongside that, we have programmes that support simulation and immersive technologies, so you might have seen some of our work around HoloLens, for example, which is a head-worn display that provides augmented reality; we also have virtual reality offers and things like that, again all available via those three platforms. The third area I just wanted to focus on was our regional networks. These are a new set of partnership managers, essentially, that we've created at regional level, who give us access to individuals working in hospitals and local care organisations. That is a way of supporting those individuals and organisations with access to IT, for example, making sure they've got sufficient Wi-Fi and the like to be able to deliver some of the AI-enabled features we're talking about today. So it's really exciting, there's lots happening, and that's a very, very brief overview.
Patrick Mitchell: Brilliant. Thank you very much to all three of our speakers. I hope that's given people a taste of what we're up to. I've got three questions that came in from people earlier, which I'm going to pose to you all, and then we'll have time for Q&A from the audience. The first question: what new skills do healthcare professionals need to develop, and how do you plan to help patients and staff adapt to technological changes? Hatim, do you want to take that first?
Hatim Abdulhussein: Thanks, Patrick. This is where I believe our capability framework will come in, and where the learning needs analysis I mentioned beforehand fits: we need to spend some time understanding what those skills are going to be. How we've done this is that we worked with the University of Manchester; we did a systematic review to look at the existing literature, and we held a series of focus groups and workshops with key subject matter experts, those working directly with technologies on the front line, some leaders in education, as well as industry experts who are developing and innovating these technologies, to identify what, based on those emerging technologies, our healthcare professionals are going to need to understand or know. This framework will provide that guidance across various factors. Some of the skills it covers echo the Topol review, around data creation, integration, provenance, security and safety of that data. It also goes into some basic skills around digital transformation. There are skills around some of the human factors: when interacting with these technologies, what kind of impact does it have on the relationship with your patient, and what kind of impact does it have on the relationship with the team you work in? There are also factors around ethics, and these core themes will essentially form a curriculum in AI and digital healthcare that can be lifted and used by educators to be put into curricula and training, but can also be used by an individual learner who wants to learn in this space.
Patrick Mitchell: You're on mute, Hatim.
Hatim Abdulhussein: Oh, sorry, my mic cut out. So it can be used by educators to lift and put into education and training, but it can also be used by the individual learner to say: this is what I now need to know; how can I use this to go out and find the resources and knowledge I need based on these factors? So I think it empowers the individual learner, because we know from the evidence that we are all seeking knowledge in our own time and space around these things.
Patrick Mitchell: Thank you, Brhmie.
Brhmie Balaram: I think, on this question, healthcare practitioners will need a foundation in digital literacy to build from, but they'll also need to acquire AI-specific skills and knowledge. So, for example, they will need to learn to understand AI outputs and their limitations, and they will need to be reasonably critical of the information presented by an AI to avoid automation bias. I also think we should emphasise the need for soft skills to support changes in the relationship between patients and clinicians. As AI is increasingly used for diagnosis, for example, healthcare practitioners will need to take on more of a health counselling role, as opposed to just relaying information, as AI takes on more physical, repetitive and basic cognitive tasks. It'll be more important than ever that clinicians are able to demonstrate empathy and emotional intelligence in their communication with patients. And in preparing the public, I think there will need to be greater engagement and education in order to ensure a holistic approach to deploying AI technology safely and effectively. There are some instances of healthcare practitioners and industry exploring how to engage patients on issues of consent, and I think it'd be useful if there could also be more engagement to demystify AI, to address fears, and to clarify the limitations of these technologies.
Patrick Mitchell:
Brilliant. Thank you. And Richard.
Richard Price:
Thanks, Patrick. I think what we're seeing in the education space is that people are starting to move into this hybrid working model: we're seeing a lot of people with portfolio careers, perhaps sharing their skills and competencies across different organisational boundaries and things like that. So what we're seeing is learners starting to manage their own learning, their own portfolios and their own competence as a result of that. I know both Brhmie and Hatim have talked about this, but for me the digital capabilities are going to be key; to introduce another framework, our own digital literacy framework of digital capabilities is part of that. We're going to need people that are flexible and adaptable, given the different culture that's going to emerge as a result of these different ways of working and the introduction of these AI and robotic surgery technologies. And I think, to some extent, we're preparing a workforce for a future that we don't even know what it looks like yet. The Topol review talked about some of those different changes and challenges that are going to be presented, but actually we're not entirely clear on what that's going to look like ourselves yet. So a lot of this is pre-emptive work and, like I say, getting people ready for those changes, and I think digital literacy really underpins that and is key to getting people in the right space, the right frame of mind, for that.
Patrick Mitchell:
Brilliant. Thanks, Richard. Second question: how are, or should, HEIs equip pre-registration students in the use of AI and robotics in healthcare? Brhmie, do you want to kick that one off?
Brhmie Balaram:
Yeah. I think that higher education institutions could start to embed a basic level of AI-specific knowledge within clinical education. For example, they could teach students to appraise evidence from AI-derived information, which would be similar to the usual criteria for assessing academic papers. They could teach students how AI technologies are developed and trained for use in health and care; they could relay what's required to develop and maintain optimal AI solutions, introducing concepts of data quality. I think we could also teach students about how AI is validated, and I think it's really important that they reflect on critical points of integration with the health system, and how poor implementation could actually weaken the performance of AI. So I think it would be really important for students to understand that algorithms can perform differently in different settings because of the way they're introduced or operationalised in a clinical pathway. I think there probably isn't a need to teach code or advanced statistics; basic statistical concepts are already being taught in medical school, like diagnostic accuracy, sensitivity and specificity, so I think that should be sufficient. But again, I think there needs to be an emphasis on soft skills, kind of what I mentioned earlier, in terms of thinking about how the relationships between patients and clinicians are changing as a result of the introduction of some of these technologies.
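The diagnostic-accuracy concepts mentioned above (sensitivity, specificity and overall accuracy) can be sketched in a few lines of Python. The confusion-matrix counts below are invented purely for illustration, not drawn from any real product evaluation:

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute basic diagnostic-accuracy metrics from confusion-matrix counts.

    tp/fp/fn/tn = true positives, false positives, false negatives, true negatives.
    """
    sensitivity = tp / (tp + fn)            # true positive rate: disease cases caught
    specificity = tn / (tn + fp)            # true negative rate: healthy cases cleared
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sensitivity, "specificity": specificity, "accuracy": accuracy}

# Hypothetical screening tool evaluated on 1,000 cases
print(diagnostic_metrics(tp=90, fp=50, fn=10, tn=850))
```

Note how a tool can look "94% accurate" overall while still producing 50 false positives; this is why clinicians are taught to look at sensitivity and specificity separately rather than relying on a single accuracy figure.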
Patrick Mitchell:
Brilliant. Thank you. I'm glad you said we don't need to teach code, not for this group of professions anyway. We certainly need people who can code, but that's a different group of the digital workforce. Richard?
Richard Price:
Thanks, Patrick. As I said, I think we're preparing a workforce for a future that we don't even know what it looks like yet, but what we are seeing is a trend, as we touched on, towards patients, for want of a better word, starting to manage their own care at home using wearable devices and things like that, and AI is really key to being able to analyse that and notify a clinician when they need to make an intervention. So I think we're going to need a workforce that recognises and relies much more on that technology, potentially using much more telehealth and telemedicine. Those are going to be skills we need to teach our workforce, to be able to deal with that different way of dealing with patients, but with care still at the heart of it, that bedside manner, as we used to call it. From an education point of view, I think where we're going to see AI coming in is much more personalised, adaptive learning. If you think about somebody about to do a ward round, for example, they might have a tablet in front of them with a list of all the patients they're about to see. If we can map the information on the screen to their own training record, we can say: OK, this clinician is about to see a patient with sepsis, for example; we know from their training record that they've not seen a patient with sepsis, or they've not done any training on sepsis for a number of years. So actually we can intervene at that point and give them some personalised, just-in-time learning, which might be a video or something, to give them the confidence to treat that patient and the information they need. So that's that kind of predictive learning of what they need. We can also start to see AI being used to
adapt the way we deliver training. If somebody's struggling with a particular subject, we can perhaps focus a little bit more on that, and we can start to look at outliers and support people before they start to struggle: just-in-time interventions that support learners before they even get to a point where they're potentially going to fail or struggle with their course. So there are some real opportunities, I think, in how we can use AI in the education space.
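The just-in-time learning idea Richard describes could, at its simplest, amount to comparing an upcoming patient list against a clinician's training record. Everything in this sketch (the data shapes, the condition names, the three-year freshness threshold) is a hypothetical assumption for illustration, not a description of any real HEE system:

```python
from datetime import date

def suggest_learning(patient_conditions, training_record, today, max_age_years=3):
    """Return conditions the clinician has no recent (or any) training for.

    patient_conditions: conditions on the upcoming patient list.
    training_record: maps condition -> date of last training, hypothetical schema.
    """
    suggestions = []
    for condition in patient_conditions:
        last_trained = training_record.get(condition)
        stale = last_trained is None or (today - last_trained).days > max_age_years * 365
        if stale:
            suggestions.append(condition)
    return suggestions

# Hypothetical training record: sepsis training is years old, stroke never covered
record = {"sepsis": date(2017, 5, 1), "asthma": date(2023, 2, 10)}
print(suggest_learning(["sepsis", "asthma", "stroke"], record, today=date(2024, 1, 1)))
# -> ['sepsis', 'stroke']
```

A real system would of course draw on validated training records and clinical coding rather than a dictionary, but the core matching logic is this simple.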
Patrick Mitchell:
Thank you. I think that one in particular, around being able to track people and help them where they're struggling with particular subject areas, is going to be hugely helpful when you think of the cost of attrition from healthcare programmes; there could be a real opportunity there to make sure we don't lose those people from the training pipeline. Final question, and this is one I think for Hatim: what medical specialties and roles can we anticipate being heavily impacted by automation, resulting in greater or fewer posts and vacancies? Hatim, do you want to take that one?
Hatim Abdulhussein:
Thanks, Patrick. The first thing to say is that this is a really interesting topic, and one on which we can share a little insight from the work we're doing around the AI roadmap and the dashboard we're developing and are soon to publish, so do keep your eyes out for that. It's really important because it gives us an idea of what kind of AI and data-driven technologies are currently in the system and how many of them there are, and, as I mentioned before, it tells us which clinical workforce groups are going to be affected by these technologies. The first thing I want to caveat is that, regardless of the technology or the workforce groups we know are going to be affected by AI, we have to recognise that to work effectively with AI you still need a human in the loop; you need someone in that process. So it's not about changing the number of people we necessarily need in a lot of these areas and specialties; it's more about how we support those working in those areas to work effectively with the technology they're going to be using, to enhance patient care and optimise the patient pathways we have in our current system. That's the most important thing to recognise. We know at the moment there's a clear shortage in certain workforce groups, so there's a big role here that digital transformation can play to enhance and support us in delivering care, and to make us more efficient in the way we deliver care as well. The AI roadmap tells us that there are quite obvious clinical workforce groups that are most likely to be affected by AI and data-driven technologies in the short term, and these are groups such as clinical radiologists and radiographers; it's also workforce groups such as general practice, cardiology and adult nursing.
These are some of the workforce groups that are quite clearly coming up in the workforce insights that we have. Now, an important thing to recognise is that when I say a workforce group, there's a massive team that works around that workforce group. If we talk about general practice: on a day-to-day basis I work in a surgery with a practice nurse, an advanced nurse practitioner, sometimes paramedics and physician associates, and significant administrative and reception support. So if I'm going to be using an AI data-driven technology in my practice,
Hatim Abdulhussein: what impact does that have on my colleagues? If we say it affects the general practitioners, it's also going to affect that wider team. What does that mean in terms of changing workloads? Say I'm using something to triage patients through automation, the type of work that eConsult are doing, and they've been awarded an AI Award for it: when that patient is triaged, they might go to a doctor, they might go to a pharmacist; and who's the point of contact when that patient is triaged, is it the receptionist? This is what we need to delve deeper into and understand. So the dashboard is really interesting, it gives us that initial insight, but now we need to look at specific case studies; we need to look at specific technologies and what impact they're having at patient level. In the report that you'll hopefully soon have eyes on, there will be two case studies. One of those is Oxehealth, a technology used in mental health that essentially allows patients on the ward to be monitored more effectively by the nursing and clinical staff. In that case study you're really able to dive deep and say: OK, what does that actually mean for the way we deliver care on that ward and the way we monitor those patients, and what impact does it have on the patients themselves? And you'll find that the patients found it really helpful, because they didn't need to be woken up every hour or so, or every two or three hours, to have their observations taken; this was done autonomously and sent back to the nursing staff, and it created a better relationship between the patients and the staff delivering care. Another technology that we're going to take a deeper dive into, something called Atellan Health, looks at diagnostic radiography.
And again, if we're improving diagnostic radiography, what impact does that have on the radiographers and radiologists within that workflow, and also on the medics who are requesting the scan in the first place? And ultimately, what is the patient impact? As we dive deeper into understanding these, we can use these technologies effectively, but also understand what skills we need to support, in terms of our training, for the workforce that are going to be using them. And I think that's the most important thing to recognise, rather than worrying about vacancies and jobs: how can we make sure that we have the right skills to use technology?
Patrick Mitchell: Thanks very much, Hatim. Really helpful. I want to open up to questions, and Chris Munch has asked the first question. Chris, do you want to put it openly to the audience?
Chris Munch: Yeah, just a word of caution really. We know that about 80% of healthcare data is unstructured; it's a mess, basically, and it's not easily available to AI to create the algorithms. Should we not be getting our data in order and into shape before we get too carried away with the potential of AI?
Patrick Mitchell: Chris, good challenge. Who wants to take that one first?
Hatim Abdulhussein: I'm happy to take it. Chris, it's a really relevant point, and we have to establish that from an education and training point of view it applies irrespective of how good the technology is yet, because we still need to understand it. And actually, if we're going to get to where we want to get, if we want that data to be more structured and used in practise, you need both data scientists and understanding amongst healthcare professionals in terms of putting that data into the system and coding it appropriately. I'll give you a very real-life example of something that happened to me last week in practise. I was going through my results, going through my labs, looking at the COVID results that came back, and I was just filing and coding the ones that were positive and negative. My trainer came into the room, had a conversation with me and said: actually, the ones that come back COVID positive, we need to make sure we code as COVID positive. That was something I hadn't been aware of, but it actually has a significant impact on the data that NHS Digital, for example, collects, and on any potential technologies that are going to use this data. If we don't code things properly, and we don't understand why that's important, then we're not going to get to where we want to get to. So that's really important to understand: our role, from HEE's point of view, is to ensure that we give people the right skills to get to where we want to get to, and we need to be doing that right now. So we're not necessarily running before we walk; we're putting everything in place to help us walk in the first place.
Brhmie Balaram: Can I also comment on this question?
Patrick Mitchell: Please.
Brhmie Balaram: Yeah. So I agree with what Hatim is saying in terms of being able to do a lot of these things simultaneously, in terms of making sure that the adoption of these technologies is effective, safe and ethical. But just to highlight the importance of the question that was asked: there's a really interesting analysis carried out by the University of Cambridge relating to algorithms that were developed during the pandemic. They reviewed a number of academic publications related to these algorithms, and they found that out of thousands of papers there were only 29 algorithms with sufficient documentation to be reproducible, and a number of those still had severely biased datasets, weren't externally validated, and didn't have proper sensitivity or robustness analysis. What the authors of this paper talk about is the idea of 'Frankenstein' public datasets: public datasets that are combined and are used by researchers and developers who are unknowingly training and testing their models on datasets that overlap. So I think we've recognised in the AI Lab that there's a lot of demand for something like the National Medical Imaging Platform, which is essentially curating a publicly available dataset that will hopefully be interoperable but will also enable proper validation of these types of algorithms, so that they can be used on their intended populations in the UK. This is something that we are currently working on actively.
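The overlap problem Brhmie describes can be made concrete with a small, purely illustrative sketch (not from the webinar or the Cambridge study): a model that effectively memorises its training data looks perfect when the test set overlaps with the training set, and only a properly held-out test set gives an honest estimate. All names and numbers below are made up for illustration.

```python
# Illustrative sketch: why overlapping train/test sets ("Frankenstein"
# datasets) inflate performance estimates. Toy data, not clinical data.
import random

random.seed(0)

def make_example():
    # A hypothetical 1-D "scan feature" whose label is flipped 20% of
    # the time, simulating label noise in real clinical data.
    x = random.random()
    noisy = random.random() < 0.2
    y = 1 if (x > 0.5) != noisy else 0
    return (x, y)

train = [make_example() for _ in range(200)]

def memoriser(x, data):
    # A 1-nearest-neighbour "model": it effectively memorises the
    # training set, returning the label of the closest training point.
    return min(data, key=lambda ex: abs(ex[0] - x))[1]

def accuracy(test, data):
    return sum(memoriser(x, data) == y for x, y in test) / len(test)

overlap_test = random.sample(train, 50)           # overlaps with training data
fresh_test = [make_example() for _ in range(50)]  # properly held out

print(accuracy(overlap_test, train))  # leakage makes the model look perfect
print(accuracy(fresh_test, train))    # the honest, lower estimate
```

The overlapping test set scores 100% because every test point is its own nearest neighbour, which is exactly the failure mode of evaluating on a dataset that shares records with the training data.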
Richard Price: Absolutely. And if I could just add to that, thinking about the education side of things: I would go a step further than what Chris is asking in his question. In some cases the data doesn't exist, so we need to start capturing it. If you think about simulation, for example, simulation manikins are really sophisticated now; they can produce huge amounts of data, and that tends not to be stored or captured anywhere. So a lot of us are working behind the scenes now to look at AI-ready data standards, the Experience API being an example (xAPI, or 'Project Tin Can' as you might have heard it called in the past). It's all about capturing all of that really rich data in a format that's interoperable and allows us to do an awful lot with it. The other thing I just wanted to briefly touch upon is the power of taxonomies in all of this. Those of you who know me know that I bore everybody to death with my love of taxonomies, but do you know what, they're really important; they're our superpower. If you don't get your clinical coding right, if you don't get the richness of your data, it's impossible to match everything up. If you're thinking about trying to map what I was talking about earlier, clinical datasets with education datasets, the only way you're going to be able to do that is to have taxonomies behind the scenes that link the different codes together. So there's some huge fundamental work to do there with taxonomies. I'll stop myself now. Thank you.
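The Experience API that Richard mentions records learning events as "actor, verb, object" statements. As a hedged illustration of the shape of such a statement (the actor, verb and object structure follows the xAPI specification; the learner, activity id, scenario name and score below are entirely hypothetical):

```python
import json

# A minimal xAPI-style statement describing a hypothetical simulation
# session. The "completed" verb id is one of the commonly used ADL verbs;
# everything else here is a made-up example value.
statement = {
    "actor": {
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "https://example.org/activities/sim-session-42",
        "definition": {
            "name": {"en-GB": "Simulated cardiac arrest scenario"},
            "type": "http://adlnet.gov/expapi/activities/simulation",
        },
    },
    "result": {"success": True, "score": {"scaled": 0.85}},
}

# Statements are exchanged as JSON, which is what makes the data
# interoperable across learning record stores.
print(json.dumps(statement, indent=2))
```

Because every system emits the same actor/verb/object structure, data from manikins, e-learning and assessments can be pooled in one learning record store rather than lost inside each device.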
Patrick Mitchell: Thank you, Richard. For anybody who knows Richard and goes to meetings with him, you'll know that taxonomy comes up at least once in the meeting, which is great, because it really stresses and reminds us of the importance of...
Richard Price: Getting it right.
Patrick Mitchell: We've got a couple of questions coming in. So Dermot first, and then Rosie. Dermot?
Dermot: Well, thank you very much. I kind of regret asking the question now, but it's actually on taxonomy, because we have an AI which manages the mappings between clinical codes. It's just come to market this year, and we're seeking the right area of the NHS to bring it to. We are trialling it with one of the large companies in the UK at the moment, but we really want to get it into a real clinical setting to find out how to make it ergonomic. It's actually quite simple to operate, but we want to make it even simpler. So any advice on angles there would be really appreciated.
Patrick Mitchell: Who wants to take that one?
Richard Price: I think that one's best for me, if it's taxonomies. From our point of view, we've learned a lot of lessons along the way when we've been trying to implement taxonomies. We started with the idea of reinventing the wheel, if I'm honest; when we first started we said we'd build our own taxonomy, and that was a really stupid idea. There are some fantastic open-source taxonomies out there that we can use, and you end up just doing the bit you specialise in. So use SNOMED, use MeSH, use the World Health Organization's ICD taxonomy; it's all there. The thing I would say is that those are really specialist taxonomies, and they tend not to cover some of the nuances of the UK landscape, so that's where we've had to build our own taxonomies to fill those gaps. And again, I would focus on the area you specialise in. There's no point in my building a taxonomy for an area I know nothing about; I don't know anything about radiology, for example, so why would I build a radiology taxonomy? I would build one about an area I do know about, which is education and training, and then outsource the rest to an area of the NHS that knows that area really well. You then end up with specialist taxonomies being developed and maintained by specialists across the system, and shared as one big happy NHS family, as it were.
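Richard's point about specialist taxonomies linking up behind the scenes can be sketched as a tiny mapping table. This is a toy illustration only: the topic identifiers are invented, and the SNOMED CT-style codes below are placeholders, not verified codes.

```python
# Toy illustration: a local education taxonomy mapped onto an external
# clinical taxonomy, so education datasets can be joined to clinical
# datasets. The "snomed:" codes are placeholders for illustration.
local_to_clinical = {
    "edu:topic/chest-xray-interpretation": ["snomed:0000001"],
    "edu:topic/covid-19-management": ["snomed:0000002"],
}

def clinical_codes_for(topic: str) -> list[str]:
    """Resolve a local education topic to its mapped clinical codes."""
    return local_to_clinical.get(topic, [])

print(clinical_codes_for("edu:topic/covid-19-management"))
```

Each domain team maintains only its own side of the mapping, which is the "specialist taxonomies maintained by specialists, shared across the system" model Richard describes.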
Patrick Mitchell: Thank you, Richard. Rosie, your question.
Rosie: Yeah, hi everybody. To introduce myself, I'm Rosie, and I work for HEE in the South East, where I'm programme lead for simulation, TEL and patient safety. I've now got a programme board with three associate deans, so we've got the governance structure and we're in a really good setup, and I've just had funding for simulation fellows in each of our counties. I've got eight hopefully starting in January and then another five next August, and they'll be doing a PGCert in simulation. It's not a question; I just wanted to say that this is what I've got, and I think it'd be great if we could link up. When I get these fellows, how could I steer them into their projects to bring a really coordinated kind of output at the end of it? So just to say hello really, and hopefully to invite you at some point to come and talk to my simulation fellow network, which would be brilliant.
Patrick Mitchell: Thanks Rosie.
Hatim Abdulhussein: Can I just say a few words about that? I think there are two aspects to this. One is using digital in education, so technology enhanced learning, and simulation is obviously at the core of that. And actually, if you build on simulation, use immersive technologies on top of that, and maybe in the future start to use AI and data principles on top of that as well, you can create a really interesting...
Patrick Mitchell: Have another question?
Hatim Abdulhussein: ...educational offer, and that can be fascinating. And then of course there's the other side of things, around the literacy and capabilities that you want to get into training. Some of that is getting hands-on experience of these technologies in practise, and some of it is just understanding what you need to know at a base level. So from my point of view, this is exactly what we're trying to do. We're trying to build that network: to understand what's happening regionally, to really understand where good evidence and good practise is happening, to be able to scale it up, and then to share it. So I would love to connect, Rosie.
Patrick Mitchell: Thank you. Can I put another question that's come in separately from the audience? How do you plan to help patients and staff adapt to technological change? Who wants to take that in the first instance?
Hatim Abdulhussein: I can have a go.
Patrick Mitchell: Have a go.
Hatim Abdulhussein: So I think the first thing is that we have to recognise that everyone has different levels of digital literacy and digital capability. We have to recognise that because it gives us a base to work from, and another reason it's really important is that what we want to avoid here is digital exclusion. We don't want patients missing out on good healthcare because they're unable to access care due to technology; what we want is to use technology to enhance patient care. So with that context, I think it's really important that we're able to educate, support and increase skills amongst patients, and that's something we really need to start thinking about. It's important because, as healthcare professionals, we can do that to a certain extent, since we always have patient contact; but at the same time we have lots of other responsibilities, and the core part of that patient contact is providing care. So it shouldn't necessarily be built into our role as a must, as something we have to do. We should be able to help a patient through a process, but we still need some sort of digital champions who are going to help patients become more digitally literate and use some of the tools they might be using to access care. I think that's really important. In terms of staff, again, I think organisations need to take responsibility for this to some extent, and recognise that this is something we need to work on. We need to make sure that our organisations' members are digitally literate, to a certain extent, in the tools they need to use day in and day out. I've had many an occasion, and I'm sure many on the call will have had the same, where you've gone into a department...
...and a technology has been brought in and you've had no education or training around it. That is a concern for me, because you're then expected to use a new system without the support to use it. And we're always going to have new systems, because this transformation is happening rapidly. So we need to ensure that anything that is brought in, anything that is procured and implemented, has a clear education and training strategy around it to support the staff who will use it.
Patrick Mitchell: Thank you, Hatim.
Brhmie Balaram: Can I come in on this, just thinking about patients specifically? I think really early patient engagement is very important here, and we can see that in relation to the rollout of GPDPR, for example: a million people opted out, which pushed back the rollout, and we're now reconsidering how we actually engage patients on this matter. But I think the lessons that can be learned from something like that can also be applied in the context of rolling out AI technologies. So for example, in our work with the Ada Lovelace Institute that I mentioned, we're trialling these impact assessments, and there is going to be a public and patient engagement aspect to that. When technology companies or research institutions apply for access to data through the National Medical Imaging Platform, we will ask them to fill out an assessment that prompts them to consider the legal, social and ethical implications of their technologies. But we'll also bring in patients to help them consider some of the potential impacts of these technologies on different patients and communities, and to help them think through potential unintended consequences, and even the positive benefits, of these technologies. We think that by involving patients at that really early stage we will hopefully positively influence how these technologies are developed, and also help patients better understand how these technologies can be applied for their benefit.
Patrick Mitchell: Right, thank you. I've got a final question coming in. Do panellists feel that we are placing enough focus on digital in the curricula, and on the composition of teams required to push forward developments?
Hatim Abdulhussein: I'm not going to say no; I'm going to say we are, but the system perhaps is not quite where we would like it to be, and that's fine, because there are always lots of other challenges we're dealing with, and we're also still in the middle of the pandemic, so we have to recognise that. But I think there is some meaningful change happening. The Royal College of Radiologists, for example, has adapted its curriculum to ensure that there are aspects of AI within it, and the same goes for the radiographers, and for pharmacists in their postgraduate curriculum. These early examples are really useful, because you need early implementers to create further ripples of change elsewhere. And I understand it's happening at undergraduate level too: many universities are starting to look at this in their curricula, and many people are interested in this. The challenge we do have is educators who understand this space. That was highlighted in the Topol review: we need a cadre of educators who understand digital, technology and AI to really get the changes that we want and need into practise, and I think that's starting to happen. What's really important is that where we can recognise talented individuals who are keen to progress this agenda and keen to implement things locally, in curricula, in teaching, in education, we enable them and give them the space to do that. Now, the challenge we always have is: is there actually space in the curriculum for this? Well, I think there is, because digital is integrated in everything we do. It's not something you will only do in a certain placement, for example, or a certain piece of work, or a certain professional group. All of us will be interacting with technology.
All of us will be affected by emerging technologies like AI and data-driven technologies, so it's something we need to recognise. It needs to be integrated into curricula; it doesn't need to be something that we add on, it needs to be integrated into what we already do.
Patrick Mitchell: I'd add to that: if you look at the work that HEE is doing in its digital readiness programme, we're now broadening the offer out to a whole range of learning materials, particularly ones that people can do in bite-size chunks, so that they can slowly develop the skills and information they require across the whole digital spectrum, and now particularly focused on individual professions, not just the mass. So although we're doing a self-assessment tool for digital skills for everybody in the NHS, which will be available by Christmas, we're now looking to address the really niche digital skills needs of the individual professions, working with the chief professional officers and their teams to work out what is needed. We're still very much in the foothills, but look out for more; this will all be available through the newly formed Digital Academy as we move that forward next year. I'm conscious of time; we had 45 minutes and we're due to finish at 1:15, so can I thank our panel members for their time today and for sharing their thoughts, and can I thank the audience for joining us for what I hope will be the first of a number of webinars on this very topic. Please do give us feedback: did this tick the box as a starter for you? Please let us know other topics you'd like us to cover, and we'd be very happy to do so as we roll our webinars out in the new year. So thank you very much, and have a great afternoon.
Media last reviewed: 3 May 2023
Next review due: 3 May 2024
Webinar 2 - Building appropriate confidence in AI to enable safe and ethical adoption
The second instalment of the DART-Ed webinar series took place on 1 February 2022, looking at how the safe, effective and ethical adoption of artificial intelligence (AI) technology in healthcare relies on confidence in using AI products. Participants were introduced to the findings of a report, produced by HEE and NHSX and due to be published in spring 2022, outlining the educational needs of NHS healthcare professionals through a detailed understanding of how we build confidence in the use of AI algorithms.
On the panel were:
- Dr Maxine Mackintosh (Chair) - Programme Lead - Diversity, Genomics England
- Dr Hatim Abdulhussein - Clinical Lead of DART-Ed Programme, Health Education England
- Dr Annabelle Painter - Clinical fellow - AI and Workforce, Health Education England and NHSX
- Dr Michael Nix - Clinical Fellow - AI and workforce, Health Education England and NHSX
- George Onisiforou - Research Manager, NHS AI Lab, NHSX
Watch a recording of this session below.
Watch webinar 2
Maxine Mackintosh: Cool, so it feels like not many people are coming in anymore, so we'll get going. Panellists, put your cameras on, and welcome everyone. My name is Maxine Mackintosh. I'm the programme lead for an initiative at Genomics England called Diverse Data, which is all about, unsurprisingly, making the genetic data we have more diverse. This session is all about building appropriate confidence in AI to enable safe and ethical adoption. It's a collaboration between the NHS AI Lab, the NHS Transformation Directorate and Health Education England. You'll hear a lot more background on DART-Ed, this programme and this research very shortly, but the primary aim is to identify the requirements for knowledge, skills and capabilities to develop appropriate confidence in healthcare professionals to implement and use AI safely and effectively. This is one of three reports: the first is a more conceptual piece, the second is about educational pathways, and then there's going to be a third output which is really about the broader skills and capabilities required for adopting AI in healthcare settings. We have a truly amazing panel. At the top of the list is me; I'm not on the panel, I'm just here to be the social lubricant, and I'm not amazing, so I've already introduced myself. Next up we've got Hatim Abdulhussein, who is the clinical lead of the DART-Ed programme at Health Education England. We also have Annabelle Painter, who is clinical fellow for AI and workforce at Health Education England and NHSX; Mike Nix, who is clinical fellow for AI and workforce at Health Education England and NHSX; and George Onisiforou, who is a research manager at the NHS AI Lab at NHSX. So a good mix of Health Education England, the AI Lab and NHSX, and sufficiently complex names that I've probably mispronounced some of them. This is a 45-minute session, as I said at the beginning, but a few people have joined since then. This will be recorded, so anything you put in the chat or say out loud will be there in perpetuity, much to your dismay, maybe. As a reminder, cameras off please, just because of the scrambling that happens on the screen when we record. And if you want to get involved on social media, the handle is @HEE_DigiReady. So you've heard enough from me; I'll hand over to the panel, and first off I'm going to hand over to Hatim.
Hatim Abdulhussein: Thanks, Maxine. I'm just going to give a brief introduction to where we are with some of our DART-Ed work. In the previous webinar I gave an introduction to what the DART-Ed programme involves and how it was developed, from its conception, around delivering on some of the recommendations made in the Topol review. Today I'm just going to give you a brief update on where we are before I pass on to my colleagues, who are going to do a lot of interesting talking around the core topic of today's webinar. I want to tell you that we published our AI Roadmap last week. It gives us a clear oversight of the taxonomy of AI technologies in our current healthcare system, and you can see from this diagram that we found 34% of current AI technologies are around diagnostics, 29% around automation and service efficiency, and there is a group of emerging technologies around P4 medicine, remote monitoring and therapeutics. The report also gives an indication of how these technologies will impact our workforce and our different workforce groups. Interestingly, you won't be surprised to some extent that radiologists and radiographers are the workforce groups most likely to be affected by these technologies; but on top of that, general practice, cardiologists and nurses are other workforce groups that we feel will be impacted by AI. It also has a couple of case studies which dig deeper, to help understand how a couple of technologies implemented in practice have affected the teams involved in using them. It's a nice way to bring some of this data to life, and it helps us think about how we're going to tackle the challenge of developing the skills and capabilities in our workforce to work effectively with AI; in effect, that's what the discussion today is going to be about. Just a reminder that all the webinars are free to attend; they're all recorded and available on the HEE YouTube channel as well as on the DART-Ed website, and we have a whole host of future topics that we'll be covering through the webinar series. We're looking to have a webinar specifically around nursing and midwifery; we're going to have a spotlight on AI healthcare applications, where hopefully a few of our Topol fellows will come and share some of the projects they've been working on; we're going to have a spotlight on dentistry and how dentistry is getting digitally and AI ready; and we want to talk about the work we're doing around robotics-assisted surgery and robotics literacy with the Royal College of Surgeons. So keep a lookout on our social media channels and on the DART-Ed website for further information about these future webinars, and if there's anything you want to feed back to us, feel free to get in touch with me directly. Thanks.

Maxine Mackintosh: Amazing, thanks.
So now we're going to have about a 15-minute presentation from Mike and Annabelle. Whilst they're presenting, do think about any questions you have. I know that a number of you pre-submitted them; the thing about pre-submission is that it's very easy for me as the chair to take them as my own, so by all means repost or re-ask them in the chat, and then, depending on how the conversation evolves, we'll ask you to unmute, reveal yourself and ask your question in person. So whilst Mike and Annabelle are presenting, please do think about questions that you have for the speakers. I'm noticing some technical problems. Mike?

Mike Nix: Yeah, we seem to have a permissions problem on Teams. I've just sent my slides to Hatim by email, so hopefully Hatim will be able to share them, because I can't. So give us a few seconds, and hopefully we'll be up to speed with that. I'm really looking forward to saying 'next slide, please'; I'll give you the full Chris Whitty, Hatim!

Hatim Abdulhussein: I'm bringing them up now.

Mike Nix: Thank you very much.
George Onisiforou: I can probably just start off and give a little bit of background until we see the slides, if that's OK. Hi everyone, my name is George Onisiforou and I'm a research manager at the NHS AI Lab, and we conducted this research with Health Education England; I worked with Annabelle and Mike on the forthcoming reports. As a little bit of background, this research involved a literature review, and we also interviewed over 50 individuals: people in healthcare settings and regulatory bodies, people in the private sector, developers of AI, and academics in this field. We also tried to speak with professionals with different levels of experience with AI technologies. What we aimed to do with this research was to get an understanding of the requirements for knowledge, skills and capabilities that will be needed to develop confidence in healthcare workers when using AI technologies. The word 'confidence' was important from early on in our discussions. In the literature the terms 'trust' and 'confidence' can be used interchangeably, but we felt that they needed to be distinguished, and that confidence is the more appropriate term for the context of using AI technologies in healthcare settings, as it allows for a somewhat more dynamic exploration of what is involved, particularly in the different circumstances during clinical decision-making that Mike will explain later. So with this in mind, we went about trying to understand what influences levels of confidence, and developed this conceptual framework. Next slide, please. I should say that we'll be focusing on this framework in our initial report, which is a little more conceptual in nature, and it will set the scene for a second report that will outline suggested educational pathways to develop this confidence. What we're presenting today is just a sneak preview, so please wait for the final information and visuals in the report when it comes out. What we're saying with this framework is that professionals can develop confidence in a technology through two initial layers: the baseline layer and the implementation layer. There are several factors under these that we'll go through in more detail. These two layers of confidence in an AI technology can then enable clinicians to assess the appropriate level of confidence they should have in AI-derived information during their clinical decision-making, and that's the third layer. We'll now talk in a little more detail about each of these layers. First, Annabelle will take us through the baseline layer.

Annabelle Painter: Thanks, George.
9:33
So yes. So starting with this baseline layer so if you could go on to the next slide that would be great.
9:38
Thank you. So the baseline layer is really about the foundations of AI confidence.
9:44
So it's the key components that underpin confidence in the implementation
9:49
and clinical use layer. So what we're really saying here is that each of these components needs to have a really strong and robust
9:55
foundation so that we can build from that with any kind of AI implementation or use in a clinical setting.
10:01
So there are five components within the baseline layer which our product design, regulation and standards, evidence and
10:09
validation, guidance and liability. So I'm just going to go through each of those components in a bit more detail.
10:15
So starting off with product design. With this component, what we're really talking about is: how do we create AI products in a way that inherently and fundamentally improves end user confidence? And there are several facets to this. Some of them are fundamental things, for example, what is the task the AI is doing, and what's the level of clinical risk associated with that task? And also, is the AI making a decision autonomously, or is it part of a joint decision-making process with a human in the loop? There are also factors here about usability of the AI products, so how intuitive is that AI product to use, and how seamlessly does it integrate with existing healthcare systems? And then there are also some technical considerations. So for example, the type of algorithm that's used can influence confidence, and also how much we can tell about how the AI has made a decision. This moves into the territory of things like explainability, and Mike is going to talk more about that a little bit later. But another important thing to think about is transparency. This is more about getting information from those who develop the AI about the type of algorithm that's being used, how it's been trained, the kind of data sets that have been used, and any potential limitations or weaknesses in the model. And there have been several transparency standards released that could be helpful with this.
So moving now onto regulation. Having strong, robust regulation is key to building AI confidence, and what we've learned from our research is that healthcare professionals generally equate achieving regulatory approval for medical devices with proof that an AI product is safe to use. But in reality, the current regulatory standards often don't actually meet that bar, and in addition there isn't any AI-specific regulation at the moment. We feel these are the two things that need to be addressed during regulatory reform, and that's exactly what the MHRA are currently looking at. They have a Software and AI as a Medical Device Change Programme that's recently been announced, and they're looking at several ways of addressing these things.
And within regulation there's also professional regulation, which is an important thing to think about. As healthcare professionals, we generally look to our regulators for advice on how we should behave, and that extends to how we should interact with things like artificial intelligence. And that not only applies to the clinicians who are using AI to make clinical decisions, but also to those who are actually creating these AI products and involved in their validation and testing. There may also be an argument for some kind of professional regulation of the non-clinical healthcare professionals who are involved in making these products; by that I mean software engineers or data scientists who are working on healthcare AI products.
So next, moving on to evidence and validation. It's essential that we know that AI products being released into the healthcare system work, and that they do what they say they do. For that, it's important that we have good guidance on what kind of evidence we should expect. At the moment, in terms of the regulatory requirements, there's no explicit requirement for any external validation of AI products by a third party, or for any prospective clinical trials of AI products, and our research suggests that there's definitely an argument for having that as a requirement for any AI product that carries significant clinical risk. This is something that's currently being looked at by NICE as part of their digital health evidence frameworks, which are being reviewed at the moment.
And then moving on to guidance. Guidance is important for steering how AI is procured and how it's used, and there are several different types of guidance, one of which might be clinical guidelines. Sorry, if anyone has their mic on, do you mind muting? OK, so just honing in on clinical guidelines. What we've heard from our research is that clinicians expect to be given clinical guidelines about how to use AI technology in the same way they currently are for, say, medication. But the slight issue at the moment is that the processes involved in getting specific product-level guidance for AI technology are not really scalable, and not able to meet the volume of products that are coming onto the market. So again, this is something that's being looked at by NICE at the moment, and ultimately it may be that a more agile guideline process is required, and potentially a move away from product-specific guidance towards something like class-based guidance, in the same way we sometimes see for medication as well.
And finally, moving on to liability. At the moment it's unclear, from a liability point of view, who would be held to account in a situation in which an AI was to make a clinical decision that led to patient harm. So for example, it could be the clinician who's using that AI product to make a decision. It could be the person or the company that made the product. It could be those involved in commissioning it, and it could be those involved in, say, testing, regulating or validating it. This becomes even more complex when we think about autonomous AI, where a human is actually removed from the clinical decision-making process entirely. So some kind of guidance and steering on this will be important for building confidence in AI. So that concludes our baseline layer, and now I'll hand back over to George to talk about implementation. Thanks, Annabelle. Can we please go to the next slide?
Thank you. So the implementation layer basically reflects one of the most consistent pieces of feedback that we heard during the interviews for this research: that the safe, effective and ethical implementation of AI in local settings contributes significantly to building confidence in these technologies within the workforce. The comments focused on four main areas, as you can see here. The first one is around strategy and culture, and what we have heard is that establishing AI as a strategic and organizational asset can enhance confidence, including through developing relevant business cases and maintaining a culture that nurtures innovation, collaboration, and engagement with the public. These conditions allow for confidence that the right decisions are being made, and also that each setting can sustain these types of innovations.
The second factor is technical implementation, and that refers to arrangements around information technology, data governance and issues of interoperability. We heard that a lot of the current challenges in deploying AI relate to these arrangements, and particularly that agreement on information governance settings and data management strategies to handle the data associated with AI technologies is highly important at this stage. We got the impression that unless these are clarified, many clinicians would hesitate to use AI. Annabelle talked about evidence and validation, and local validation is an extension of that, which is the third factor here. Local validation may be needed to ensure that the performance of an AI technology can be reproduced in the local context, so that we don't assume that an AI system that may have good published performance data will generalize well to local situations.
And the last factor in this layer is system impact: essentially, being confident that AI is properly integrated in clinical workflows and pathways. What we heard here is about the importance of seamless integration with existing systems, of clear safety reporting pathways, and of ethical practices, and that all of these build confidence and address inhibitions to adopting AI. Now, the way that AI is integrated into clinical decision making is particularly important, as it may impact those decisions, and this is something that we explore further in the third layer that Mike will explain. Great, thanks very much, George. Next slide please.
Hatim, thank you. The third layer is the point at which the clinical decision-making process and the AI technology interact, and this is the first point in our pyramid where we are really moving away from trying to increase confidence, towards assessing confidence. The idea here is that an individual AI prediction, which is used for an individual clinical decision for an individual patient, may or may not be trustworthy, so it's not necessarily always appropriate to increase our confidence in AI predictions at the level of the individual clinical decision. Really, what we're trying to do here is to retain a degree of critical appraisal, which, as clinicians, we would apply to any information involved in a clinical decision-making process, and to avoid either under-confidence, leading to inconsistency and a lack of benefit being realized through rejecting AI information, or overconfidence, with the obvious risks of clinical error and potentially patient harm. So this is a nuanced problem, and if we go to the next slide, please.
Thank you. There are five factors in this clinical use layer which really drive this interaction between the AI and the human decision making. The first of these is underpinned by clinicians' attitudes to AI, and what we heard from our research is that this varies a lot. There are some clinicians who are digital leaders, who are very excited about AI, very knowledgeable, very confident in it, and have a preference to drive it forward and include these types of technologies in as many clinical contexts as possible. There are also people who are more sceptical, either through their own experience or, potentially, through their lack of experience. So there's a great variation there which we need to be aware of and take account of as underpinning this confidence assessment. The other thing that really underpins this is the clinical context. Obviously there's a huge variety of situations in healthcare, from primary care services and general practice all the way through to emergency medicine in tertiary referral centers. And that has a great impact, not only because of the potential risks and benefits associated with employing AI in those different contexts, but also because of the timescales for decision making. Some decisions are made over many weeks, with the involvement of patients and families, and are very discursive, and other decisions are made in an instant in emergency situations. And obviously that will impact the way that we assess our confidence in AI and what we do with that confidence assessment for clinical decision making.
As Annabelle pointed out earlier, there are technical features of the AI system itself which will impact on our confidence and our confidence assessment. AI can make various types of predictions: it can be diagnostic or prognostic; it can be used to recommend a treatment strategy or some sort of stratification; or it can in fact be, if you like, a preprocessing of some images or some other clinical data, which adds a layer of information into a clinical decision-making pathway that already exists. So the type of information, the type of prediction which the AI makes, the way in which it makes it, and the way in which that information is presented, whether it's presented as categorical or probabilistic, whether uncertainty is included: all of these things are features which will affect the way that we value that information as clinicians, and the way that we assess our confidence in that information when making clinical decisions. There's another factor here,
which is separated out although it is a technical feature, and that is explainability. Explainability, I think, is an area that's worthy of some examination at the moment, because it promises a lot, and there's been quite a lot of interest in the potential for using explainable AI to get decision reasoning out of neural networks particularly, and to be able to see the reasons for individual clinical decisions. What we found through the literature survey, and also through talking to experts, was that it's not yet ready for real-time use. We believe that AI explainability has potential, and we believe that at a model validation level it has value, but at this stage it does not appear to have value for individual clinical decisions. So I think we need to be quite cautious about using that as a way of assessing confidence in AI for clinical reasoning and decision making. And really, what underpins this confidence assessment is the fifth factor: cognitive biases. All of us as humans are subject to cognitive biases, and we may or may not be familiar with what those are, but it's important to acknowledge them and to understand the way in which AI-presented information may change those cognitive biases which, whether or not we are consciously aware of them, impact on our clinical decision making. Just to give you some examples, there are things like confirmation bias and automation bias; all these kinds of things are unavoidable. I think we have a tendency to assume that we are less susceptible than the general population, but the research would suggest that that's not true, and therefore it's important to understand how that factors into the AI-assisted clinical decision-making process.
OK, so next slide please, Hatim, and then over to Maxine for a couple of questions from the chair. Amazing, thank you so much Mike, George and Annabelle for a whistle-stop tour. So I'm going to fill the gap with a couple of questions, so please, while the time passes, do come up with your own and post them in the chat, and depending on how the conversation pans out I also plan to invite you to ask a question yourself. So please do ask any questions; there's no question too stupid or too intelligent! Probably there is one too intelligent, but probably not, so please ask the full range of questions. So this one is definitely planned, but: what is your top priority for improving clinical confidence in AI over the next one to two years, that drives towards the centre of that double-ended arrow you presented, Mike? What are you working on for the next
12 to 24 months? Thanks, Maxine. I'm not sure it's what we're working on so much as what the whole community needs to be working on, and I think really the challenges over the next period are at the baseline and implementation layers, as we presented in the pyramid. There's some work going on nationally at the moment around regulatory clarity, and that will definitely help with the baseline confidence and evidence standards. As Annabelle pointed out, they are currently being developed and are changing all the time in this space. So we will see increased guidance, we will see increased definition of what levels of evidence are appropriate for AI in healthcare, and that's definitely going to be a positive thing. I think the other challenge is moving to a place where, again as Annabelle pointed out, we have some class guidance, because a lot of the products that are currently being produced or becoming available are in relatively small niches, and I think expecting product-specific guidance from bodies like NICE for every individual AI product that is going to enter the healthcare arena is not a sustainable way to work in the longer term. So I think having some more general guidance and standards around how to evaluate and implement AI, once it's achieved regulatory approval, will be very helpful. And then the second part of this answer really is at the local level. So considering a healthcare setting, whatever that might be, I think really the challenges are around people. We need to have the right people to drive adoption of these technologies forward, and to do it with appropriate levels of knowledge and critique, but also with sufficient motivation and positivity that we actually do get these things translated into clinical practice. And I think around that there's a need to define some roles. There may be roles which aren't common currently in healthcare organizations, particularly around the task of implementing AI, and I think we need to take a multidisciplinary approach. So I think we need clinicians, I think we need users, I think we need drivers of policy and people who are holding the purse strings, and we also need technical people who understand the evidence and the challenges associated with doing robust and ethical implementation. So hopefully that ties in as an answer to what we were discussing in the framework.
I will hand back over to Maxine. A busy two years? Yes, a busy two years. Also, the work is never done, you're waiting on all these dependencies, and everyone is sort of working them out as we go, so it is a bit of a juggling act, I think, for everyone in the community. So, two things. One is that Amanda asked a question, so be more Amanda, keep them coming. The second is that you talked about the need to define roles and support with multidisciplinary teams and diverse groups in decision making. And I know this is a bit of a sneak preview, because future reports are going to be on education, but you definitely need a pipeline to start filling those roles, and that's going to take quite a period of time. So, whilst not creeping into future webinar topics, can you give a little bit of a hint about how you're thinking about the education piece on this one?
I think maybe for Annabelle? Yeah, sure. So as George mentioned, we are releasing two reports. The first one is coming out in a couple of weeks, which will be focusing on what we just talked about, and the second one will be following in the next few months, and that's really focusing down on what this means for educating the NHS workforce. So we have had a little think about this and we can give a bit of information now. So Hatim, would you mind just going on to the next slide? Yeah, great. So we are thinking about this across the whole NHS workforce, so this is not just about clinician end users, it's about everyone from the most senior NHS management through to the people who are commissioning products and the people who are embedding them within the NHS. The way we think about this is by splitting that workforce into five archetypes, and these archetypes are based on the role that individuals will have in relation to AI technology. These archetypes are not exclusive, so as an individual you can sit in multiple buckets at the same time, but the reason we feel these archetypes are helpful is that these different individuals have different educational and training needs, and so it can help us focus on what we're going to need to do to prepare these different parts of the workforce.
So just to explain them in a bit more detail: the shapers here are really the people who are setting the agenda for AI, so these are the people who are coming up with regulation, policy and guidelines, all of those things to do with AI, and examples of them might be NHS leaders, the regulators of AI, and other people who work within arm's length bodies. The drivers really are the people who are leading digital transformation, so they are involved in commissioning AI, and they're also involved in building up the teams and infrastructure within NHS organizations that are going to be needed to implement AI; so for example, they might be an ICS leadership board, or CIOs within ICSs. The next bucket are the creators, so these are the people who are actually making AI. When it comes to the NHS workforce, it may be that these individuals are co-creating this AI with, for example, a commercial partner, and the kind of people who would be doing this would be data scientists, software engineers, or specialist clinicians, researchers and academics who are working on AI. Then we have our embedders, and the embedders' role is essentially to integrate and implement AI technologies within the NHS. These individuals might overlap with the creators in being data scientists, and they might also be clinical scientists, specialist clinicians, or IT and IG teams.
Yeah, so that's who we really mean by the embedders. And then finally, the users. The users are anyone within healthcare who are using an AI product, so this might be clinicians, it might be allied health professionals, and it might also be non-clinical staff. What's really important with all of these archetypes is that we need to make sure we capture everyone, so we don't just want to capture the workforce in training; we also need to make sure that we're targeting those who are fully trained and already working. And we're going to need to give slightly different expert advice to each of these different archetypes. Now, the reason there's a box around creators and embedders is that we feel this is probably the area where we currently have the least skill within the NHS, so it's one of the areas we're going to have to think carefully about: how do we bring people with these kinds of data science and clinical informatics skills into the NHS, and how do we train up existing people within the NHS to become specialists in that area?
So moving on a little bit from this kind of baseline education, we are also going to have to think about product-specific training. This is about giving people knowledge and information about the specific product they're using, and it really affects three main archetypes: the drivers, the embedders, and the users. The drivers need to know specific information about products so that they can make the right commissioning decisions about those products. The embedders need to know the technical information about products so they can make sure that they're integrated in a way that's both safe and effective. And finally, the users: users are going to need training on the specific products they're using, to make sure they understand clearly what the indications are for that product, what the limitations are of that product, and, really importantly, how to communicate about that product with patients and how to facilitate joint decision making amongst clinicians and patients with AI in the mix. So that's everything from me. Maxine, back to you.

Amazing, thanks Annabelle. So I am conscious of time, and there are a couple of great questions, especially for this kind of conceptual piece about, you know, what is confidence? What is appropriate? So I think I might look to smush some of Amanda's question together with some of James's. So, as you're thinking about what it means to be confident and what it means to be appropriate, how does ethics cut across as well? Ethics has its own, you know, 'let's redefine the question' type of discussion that happens, and so I'd love to pick up Amanda's question about, you know, where does ethics intersect with appropriate confidence? And then I'm going to bundle that with the bottom of James's question, which is: as this conceptual work, which I think is incredibly important, underpins that harder base layer, how does it intersect with already existing standards? I know that was touched on a little bit, but linking the conceptual with the hard would be a good thing to touch on for the last couple of minutes.

Perfect, shall I take that one?
Or those ones, which is probably a better description. So yes, let's start by thinking about ethics. I think our starting point for this work was the idea that if we want to do AI ethically, then we have to do it robustly, we have to do it safely, and we have to do it with appropriate confidence. Anything else is not ethical, essentially, because what we're doing is ensuring that we achieve patient benefit, we're minimizing risk, and we're maximizing impact. And that includes ethical considerations like maximizing impact for different demographic groups, and ensuring that we know what the performance of our AI is for different demographic groups. So it cuts through evidence and standards and regulation, and I'll come back to that in a second in response to James's question. It also cuts through what George was talking about in terms of local validation. Local validation is absolutely key to ensuring that we have generalizability, that we understand the limitations of the algorithms that we use, and that allows us to use them ethically. And then when you get to the clinical decision-making layer, that really is where individual critical thinking comes into understanding what the ethical implications might be and when it might be appropriate to disregard, potentially, an AI prediction. Whether that disadvantages people is something that needs to be considered at the workflow integration and implementation stage, so that we can try to be as even-handed as possible with technologies that are not necessarily inherently even-handed in their performance; that really is a big ethical challenge with AI. I think it's very important to always have alternative pathways that can be used in cases where we don't have confidence in the AI, and we need to make sure that those don't result in detriment to certain patient groups. So I think that would be my response to the ethical question.
I think in terms of regulation and the MDR: hi James, I'm a clinical scientist, which is probably why clinical scientists have got some representation here to some extent. So I'm familiar with the MDR and the ISO standards. There are going to be some new standards, we expect, in the relatively near future, looking specifically at AI as a medical device. I'm sure you're aware of some of the discussion around that, and I think our hope is that that will not replace what's in the MDR and ISO 13485, but rather clarify and extend it. I think the other thing that's really important to think about, in terms of CE marking as it used to be called, and now UKCA marking, and the MDR, is that medical device approval does not tell you anything about performance. So it's not necessarily sufficient; it's necessary, but it's not necessarily sufficient, and I think that's where NICE and bodies like that come in, in terms of providing these evidence standards: what should our expectations be of an AI product so that we can have clinical confidence? Yes, of course it has to be regulatory approved, but that to me is a first step,
40:17
not a final step. Yeah, I think that's a great answer, and for me (not that my opinion matters in this) appropriate confidence is a really nice way of knitting together things like ethics, which can sometimes feel a bit intangible and impractical, with, you know, the MDR, which has some shortfalls. So for me, this felt like a really nice way to turn some kind of floating themes into something a bit more holistic and practical. So I know we've run out of time, and it's not a competition of who's the most popular, but if it was, Tracy would be winning! A number of your questions in advance of this came about shared decision making. So in 45 seconds, can one of you take the question around, a, shared decision making, or b, how do we make sure that patients truly sit alongside the creators, who had a little bit of a dotted line around them? So who's going to take that swiftly?
I can try and do that very quickly. So, just to say, in terms of the archetypes, patients are not in there; they're not an archetype. That is actually completely intentional, because this report is about the workforce and how we prepare the NHS workforce now. It's also intentional because we think the conversation about patient involvement is really, really important and deserves its own attention, so it is intentionally excluded from here. However, what is really important about how we include patients is, first of all, in that bit about how we design products in a way that enhances confidence: we need to make sure we have users and patients involved at the design stage from very early on, because they're the ones ultimately who these products are going to be used on, so it's really important that we get their input all the way through. The second thing to say is that when we're talking about preparing the users, so the clinician users, a huge part of their preparation is about how they can communicate about these products with patients: making sure they have conversations that bring patients in early, that make clear the limitations, the risks and the benefits of using AI, what it means for their data and their patient information, and how they can make decisions together as a group. So, you know, clinician and patient, but also potentially AI moving in there as, like, a third agent in that mix. Amazing, thanks Annabelle, and thanks for
doing this so succinctly. I'm sorry we haven't had time to hit some of the other questions, but thank you so much for posing them; there are some good ones that I'm sure the individuals on the panel will be happy to follow up and answer, and obviously keep the conversation going. But here endeth the first webinar of the series. The next one is happening at the end of March; as Hatim says, it's on nursing. The recording of this will also be made available, in case your child came in halfway through demanding lunch or something catastrophic like that. Otherwise, thank you so much for tuning in, and do follow Health Education England on Twitter @HEE_digiready. I'm sure everyone would love to keep the conversation going, but thank you very much for your attention and your questions, and for coming and hanging out this lunchtime with us. Thanks, bye bye, thank you.
Media last reviewed: 3 May 2023
Next review due: 3 May 2024
Webinar 3 - Learning from the application of AI in Health
The third instalment of the DART-Ed webinar series took place on 11 August 2022 and looked at how artificial intelligence has the potential to transform healthcare, and in many cases is starting to do so. However, implementation of AI requires specialist skills from the clinicians working on these projects. The webinar focused on experiences from clinicians who are actively working on AI-related projects in the NHS. It was guest chaired by Dr. Haris Shuaib, AI Transformation Lead at the London AI Centre for Value Based Healthcare and Fellowship Director for the recently launched London AI Fellowship Programme.
On the panel were:
- Haris Shuaib (Chair) - Consultant Physicist, Head of Clinical Scientific Computing at Guy's and St Thomas' NHS Foundation Trust
- Dr. Hatim Abdulhussein - National Clinical Lead - AI and Digital Medical Workforce at Health Education England
- Dr. Amar Patel - GP, Digital Lead and IIF Clinical Lead at Southport and Formby PCN and Topol Fellow
- Dr. Kavitha Vimalesvaran - Cardiology Registrar and Clinical AI Fellow
- Christina Sothinathan - Digital Health NHS Navigator, DigitalHealth.London and Advanced Practice Physiotherapist, St George’s Hospital
Watch a recording of this session below.
Watch webinar 3
Haris Shuaib: Fantastic. OK. There you go. That's the last face I was waiting for. OK. Fantastic. Let's get started. I've given people two minutes' grace, that's good enough. So welcome to this webinar in the series hosted by Health Education England. This is Learning from the Application of AI in Health, so I hope you find yourself in the right place. As has just been put into the chat box, we are going to be recording this webinar and it's going to be published on the website in the next few weeks, so you can let people who couldn't make it know that they can still catch up later. As you'll find, you've been disabled in terms of your microphones and cameras, but hopefully once we get to the Q&A session we'll get people to put their hands up, and then they'll be able to come off mute and ask the panellists or myself the questions that they want to ask, and we can take the session from there. In terms of the format of the session today, we'll have an overview from Hatim of the DART-Ed programme, which hopefully some of you at least will be aware of, and then each of the other three panellists, Amar, Christina and Kavitha, will introduce themselves and spend a couple of minutes talking about their work, and then the remaining half of the session hopefully will be a sort of panel discussion and Q&A on the things that we've raised. Fantastic. Hopefully that's OK with everyone. I'll start by introducing myself. So I'm Haris Shuaib. I'm a consultant clinical scientist at Guy's and St Thomas' and I'm head of Clinical Scientific Computing there, which is a team charged with developing people, platforms and policy for digital health. So we essentially help translate health technologies into routine care, which is quite an exciting field to be in, and as you can imagine, a lot of our work involves deploying artificial intelligence algorithms into the front line.
With my other hat on, I'm also the AI transformation lead for the London AI Centre, which is a large Innovate UK-funded collaboration. It was initially across London and has now expanded across the south east, bringing together NHS trusts, academic institutions and industry partners, where we're collaborating on developing AI prototypes along a whole range of patient pathways, primarily in secondary care, to improve patient outcomes and operational efficiency. But today we won't be talking that much about my work, and more about the work of my fellow panellists. Having said that, I think we'll get started and I'll hand over to Hatim in the first instance to give us an intro to the DART-Ed programme. Over to you, Hatim.
Hatim Abdulhussein: Thanks, Haris. So what I'm going to try and do is just give a brief overview of what we've been doing so far, but not spend too much time, because who you really want to hear from is Amar, Kavitha and Christina. So from my point of view, I'm a GP in north west London and I am the national lead for Health Education England for the AI and digital medical workforce. That involves looking at two areas: one is the emerging technologies around digital health, AI and robotics, and what they mean for education in the future; the other is being the overall lead for digital readiness for the medical profession. What I'm going to do today is just very briefly go over some of the key outputs that we've worked on so far. You can go back to the previous webinars to see our approach to this problem and some of the background in terms of the policy that has driven what we're doing, but what I'm going to do is give a brief overview of what our outputs have been so far, and highlight some educational opportunities for people that want to develop some learning in this space. So in January we published the AI Roadmap, and that was an opportunity to start to understand how many technologies currently exist in the NHS and what kind of taxonomies they sit within. The majority of technologies, you won't be surprised to hear, are diagnostics, around 34%, but there is a whole host of technologies around automation and service efficiency as well, and a growing number of technologies around remote monitoring and P4 medicine. P4 medicine is around participatory, preventative, personalised healthcare: technologies that work at population health level.
And within that, we were then able to look at what kind of workforce groups they're going to affect, how they might affect those workforce groups, and also delve deeper into a couple of those technologies to really think about what the impact might be on the way that we will work in the future. So have a read of that report, because I think it's really interesting to think about how things might change in the future. Then in May we published a report which helped us to start to create a framework for understanding how we're going to build confidence in AI amongst the healthcare workforce and what key factors are going to drive that confidence. At national level, that's really about governance: making sure that the regulation and standards are appropriate, that there is adequate evaluation and validation of these technologies, a national approach from organisations like NICE and the MHRA, and clear guidelines. Then at local level, in terms of an organisation, it is about the strategy and the culture of that organisation: do you actually have the technical skills to be able to implement this technology, and has validation occurred at that local level, because we recognise that every population is different. And then finally, what does it mean for me as a clinician? So me as a frontline GP: how will it affect the way I work? What kind of cognitive biases do I need to be aware of when I'm working with AI? What do I do when it's perhaps not working as effectively as it should, and how do I then raise that? We go into this in a little more detail in webinar two, so if you want to learn about the approach and some of the findings of this piece of research in more detail, rather than reading the report, then have a look at webinar two.
So to highlight some educational opportunities: we've partnered with the University of Manchester, who are developing a clinical data engineering CPD programme for healthcare professionals, allowing them to pick and learn specific modules that will contribute to becoming a clinician who also has some knowledge and skills around data engineering and data science. At the moment we're piloting the first module of this, and we're offering 15 funded places on it; I will post the link in the chat on how to apply. Another opportunity to outline, and this is not something we're directly involved with, but we are aware of it and support it, is a newly funded programme on explainable artificial intelligence in healthcare management. This is going to be a Masters programme delivered by the University of Pavia in Italy, and one of the partners in the UK is Keele University, so whilst it is delivered in Europe, it is open to applications from the UK. The programme costs €2,500, but because of the funding attributed to the programme, the top 80% of students that graduate will have their fees paid back to them. I think this course is really interesting as it covers two areas. There are the core skills around AI, so there are modules on introduction to data science and on ethical AI, but it also covers areas around healthcare management: there's a module on transforming healthcare, and a module looking at AI and the workforce and the impact it might have on the workforce. So it should be a really fascinating Masters programme for anyone that wants to apply, and I'll post details in the chat for where you can express your interest; someone from Keele or one of the other university partners will then reach out to see if you want more information. Thank you for listening, and I will pass over to Amar.
Amar Patel: Thanks, Hatim. So my name is Amar. I'm a GP, digital lead and Investment and Impact Fund clinical lead at Southport and Formby Primary Care Network, and a Topol fellow. Just to give people a bit of context: in the last couple of years practices in primary care have been organising into primary care networks, and essentially these are groups of practices given some additional funding to provide additional services. In our area we were already organised, we had quite a large federation, but the legal basis of the primary care network allowed us to organise a bit more and develop the right kind of infrastructure that we needed. So a large part of my work is at a strategic and organisational level, looking at quite open questions really, and one in particular is simply: what is going on here? What's happening in our area? To give us an idea of how we can develop. Essentially that question is a data question, so a lot of the work is developing the requisite infrastructure that allows us to utilise all the information that we have, but which is largely locked up in various systems. This is an AI and machine learning kind of webinar, and my work does involve machine learning, but I would say the unsung hero is fairly simple data visualisation that can be utilised with more advanced tools and infrastructure. The work I'm doing is largely exploratory data analysis, looking at general clinical activity, so consultations that we're seeing, prescriptions, lab reports that we're having to review, documents that we're reviewing, and gaining an idea about how we can apply machine learning principles like clustering and forecasting to get an idea of what is going on now and what could be going on in five to ten years' time. When you combine this with your other streams, like financial data, staff data and patient survey preference data, then you get a really rich picture of what we should be building. The work at the moment is about giving us extra functionality, where we move past using archaic search-and-report systems to querying databases at source using structured query language (SQL), and then providing that further, richer analysis using Python or R. Now, that's the exciting stuff, the machine learning and the modelling, but as I said, it's a small part of the larger goal, which is to turn us into a data-driven, product-creating organisation, and the bigger, more time-consuming parts of that are actually the foundational infrastructure: your information governance policies, your software, your hardware, and getting that political buy-in from practices so that they really understand the value of data and can trust that group-work kind of activity. So that's one element. Probably the larger element is the team, because fundamentally this is about people: there is no point having a siloed machine learning and AI group that does some work that's not really relevant to the rest of the organisation. So we're building a digital hub, and the role of that is to get a diverse set of people, administrators, pharmacists and clinicians, and slowly allow them to transition from using Excel and simple systems to querying databases at source and then using advanced analytics languages, so that over a number of years we can develop a system with grassroots buy-in from people who are seeing the job and doing the job every day. As for the advanced patient-facing stuff, we're not ready yet, and we don't need to be ready yet. I think this is part of a staged, multi-year plan, but the opportunities are there, and it's just making sure that we're doing this at the right time, at the right stage, and creating the right culture.
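The workflow Amar describes, querying practice data at source with SQL and then layering simple forecasting on top, can be sketched roughly as follows. This is a toy illustration only: the in-memory database, table name and linear-trend model are invented for the example and are not the PCN's actual stack.

```python
import sqlite3

# Toy stand-in for a practice clinical system: weekly consultation counts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE consultations (week INTEGER, count INTEGER)")
conn.executemany(
    "INSERT INTO consultations VALUES (?, ?)",
    [(w, 100 + 5 * w) for w in range(1, 9)],  # steadily rising demand
)

# Query the data at source with SQL rather than a canned search/report tool.
rows = conn.execute(
    "SELECT week, count FROM consultations ORDER BY week"
).fetchall()

# Naive linear-trend forecast: ordinary least squares on the week number.
n = len(rows)
xs = [w for w, _ in rows]
ys = [c for _, c in rows]
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / sum(
    (x - mean_x) ** 2 for x in xs
)
intercept = mean_y - slope * mean_x

def forecast(week):
    """Projected consultations for a future week under the linear trend."""
    return intercept + slope * week

print(round(forecast(12)))  # prints 160: demand projected beyond the data
```

In practice the same SELECT would run against the live clinical database, and the trend model would be replaced with something that handles seasonality, but the shape of the exercise, SQL at source followed by analysis in Python or R, is the same.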
Haris Shuaib: Fantastic. Thanks, Amar. Over to you, Christina.
Christina Sothinathan: Thank you. My name is Christina. My background is as a clinical lead physiotherapist: I worked for many years in MSK at King's, then I did my Digital Pioneer fellowship through DigitalHealth.London, then I moved across to the NHS England and Improvement London region to be a clinical transformation fellow, and now I've moved over to DigitalHealth.London to be an NHS digital health navigator. One of the innovators I'm working with is called Limbic. Limbic's AI solution addresses end-to-end mental health pathways: it's implemented in 20 IAPT services, with an overall rating of 92%, and across those end-to-end pathways you have Limbic Access and then Limbic Care. I'm going to talk about Limbic Access first and then move on to Limbic Care. Limbic Access allows patients to self-refer to IAPT services, and it uses a type of machine learning called natural language processing, via a chatbot which analyses text input. It picks out keywords and uses conversational AI, and the benefit of that is that it engages patients in a more humanistic and caring way. This improves usability and makes it easier for patients to self-refer, which is reflected by their 91% completion rate. It also collects really important screening that would normally be done within an appointment, or pre-appointment via paper, and that's the PHQ-9, which, for those of you who don't know, is a depression score. That allows the chatbot to stratify and flag risk so that care can be adapted and tailored based on individual needs. In terms of suicidal risk, the pathway can be tailored to local pathways: asking, for example, if the patient can keep themselves safe, following up with questions around suicidal preparation or suicidal intent or likelihood, and checking on protective factors, ways that they can keep themselves safe.
So it's an AI solution that provides key information pre-appointment in order to support the clinicians in their clinical decision-making, so the clinicians have the most relevant information and screening pre-appointment. This frees up clinical time to allow the clinicians to deliver more value-based care, and therefore reduces the burden. Then in terms of Limbic Care, this uses the same machine learning engine, but for treatment support tools: whilst patients are on the waiting list, but also when they have started treatment, and post-discharge. So it helps with that elective backlog that we're hearing so much about, it helps patients to wait well while they're waiting, and it helps with patient safety as well. During therapy it can be accessed 24/7, so, for example, the patient can log in and say "I'm feeling anxious" and it will reply saying "your therapist has recommended some breathing exercises, shall we do this together? OK, let's go." Now, this sounds very simple, but we know that anxiety and depression can inhibit problem-solving abilities, so it can be really beneficial from that point of view. It can be accessed at the weekends, 24/7, and then post-discharge it allows access for patients to help prevent relapse. So Limbic is never going to replace clinicians; at the moment it supports clinicians, and we know that AI is not ready to replace humans, as it lacks human values that are really important to patients and individuals, such as empathy and that gut instinct. However, in terms of user benefits, it can be accessed 24 hours a day, seven days a week: we all need to sleep, but the chatbot doesn't need to sleep. And the feedback from patients is that they feel less judged speaking with a chatbot, and it's less burdensome than speaking to humans, which can be really beneficial and makes it much easier for those with social anxiety.
In terms of the benefits for clinicians: by supporting the clinicians, it reduces that workforce burden, and we know that workforce burden is a huge challenge in the NHS generally. Clinicians like having information prior to their appointments; it allows preparation, including seeking support for those more complex patients. Just reflecting on my time at King's, we used to do pre-appointment screening, including the PHQ-9 and STarT Back, which is an MSK stratification tool, and it really does free up time during the appointment, so that you can spend more time with the patient actually adding more value. So we're very aware of the pros and cons, the benefits and the limitations of AI, but overall this isn't trying to replace the clinician, it's more about supporting the clinician. Happy to take any questions after, so I'll pass on to Kavitha.
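The pre-appointment screening Christina describes rests on standard PHQ-9 scoring: nine items, each answered 0 to 3, giving a total of 0 to 27, with item 9 covering thoughts of self-harm. A toy sketch of that scoring and risk-flagging step follows; it is an illustration of the published PHQ-9 scoring rules, not Limbic's actual implementation.

```python
# Illustrative PHQ-9 scoring and risk-flagging of the kind a triage
# chatbot could run pre-appointment. Toy sketch only.

SEVERITY_BANDS = [
    (4, "minimal"),
    (9, "mild"),
    (14, "moderate"),
    (19, "moderately severe"),
    (27, "severe"),
]

def score_phq9(answers):
    """answers: nine item scores, each 0-3 (standard PHQ-9 scoring).

    Returns (total, severity_band, risk_flag). Item 9 asks about
    thoughts of self-harm, so any non-zero answer raises a flag for
    the local suicide-risk pathway.
    """
    if len(answers) != 9 or any(a not in (0, 1, 2, 3) for a in answers):
        raise ValueError("PHQ-9 needs nine item scores in the range 0-3")
    total = sum(answers)
    band = next(label for limit, label in SEVERITY_BANDS if total <= limit)
    risk_flag = answers[8] > 0  # item 9: self-harm ideation
    return total, band, risk_flag

# A patient scoring mostly 2s with no self-harm ideation:
print(score_phq9([2, 2, 2, 2, 2, 2, 2, 2, 0]))  # (16, 'moderately severe', False)
```

The point of collecting this before the appointment, as Christina notes, is that the clinician sees the total, the severity band and any risk flag in advance, and the risk flag can route the patient into the locally agreed safety pathway.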
Kavitha Vimalesvaran: Thanks, Christina. Hi everyone. My name is Kavitha, and a little bit about my background: I'm a cardiology registrar in north west London with a subspecialty interest in cardiac imaging. I'm currently in the second year of my PhD at Imperial College London, where I'm developing a novel AI-based clinical decision support system which enhances the acquisition of cardiac MRI. I have a particular interest in valvular heart disease, and therefore I'm focusing a lot of my research on training neural networks to accurately identify valvular abnormalities in different cardiac views. I've also got a bit of experience in using semi-supervised natural language processing, or, as some of you might recognise it, NLP, algorithms to accurately categorise diagnoses, specifically in cardiac MRI, from radiology reports, so we could utilise this to very quickly screen through lots of radiology reports and pull out the scans that we need for a specific experiment. As part of my PhD I work very closely with a very diverse group of people, including engineers, medical physicists and other clinicians, to create these algorithms. I also lead the AI lab at Imperial, where we conduct fortnightly forums to promote this sort of collaborative working, where fellows like myself are able to share our current findings and challenges in the development of these algorithms, or in the implementation work that we're carrying out, and we try to work through some of these challenges together. I guess what I've learned most through my PhD is the experience of data procurement, both locally and nationally. Whether you're trying to get data locally from your NHS trust or nationally, say from the UK Biobank, they both have very different challenges to navigate through:
applying for ethical approval, then utilising medical images for AI, and specifically that data curation part, which is sometimes the most important and most difficult part to get right before you start training these AI models. So that's my week: half of it is spent on my PhD, and the other half is spent as a clinical AI fellow at GSTT. I'm one of 11 clinical AI fellows on this first cohort of its kind, which Haris obviously initiated. One of my main roles in the clinical AI fellowship is currently supporting the development and evaluation of a CE-marked AI tool; CE-marked, for those of you who don't know, just means that it has regulatory approval. The company that we're going to be working with is Qure.ai, and the specific product is qER. qER is an algorithm that can detect up to 11 critical abnormalities on non-contrast head CTs, and what we hope is that through this product we'll be able to prioritise the most critical scans for reporting by radiologists, so that patients coming in with a stroke or major trauma with brain injuries will quickly get the treatment that they need through that prioritisation. We hope the software will be deployed within the emergency department and radiology workflow. As with any other study, AI or non-AI, there's always the process of developing the actual study protocol, and we're at that phase at the moment, designing the prospective study, which will eventually be conducted across five different NHS sites: major trauma centres, stroke centres and a DGH. I'm also supporting the ethics application for this, and collaborating with our industry stakeholders, particularly in terms of managing their expectations, and eventually I'll be leading on the academic outputs from this project. I think this is often the rate-limiting step:
this initial phase where you have to get going, design the study, get all the sites together, get all the PIs together and see what everyone's expectations are, before the study can actually get going. One of my main objectives by the end of this is to create a body of evidence to build the trust and confidence of the healthcare workforce in this specific AI system, which can then allow it to be carefully scaled up for use across NHS sites. What I'd like to take away from the fellowship is a better understanding of all of these challenges and the various phases of the life cycle of implementing AI tools, which will give me a transferable skill set for deploying similar projects into hospitals in the NHS. Thank you.
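The triage idea behind a tool like the one Kavitha describes, flagged-critical scans jumping ahead of routine scans in the radiologist's worklist, can be sketched with a simple priority queue. The scan IDs and the two-level priority scheme here are invented for illustration; a real deployment would use the algorithm's actual outputs and local clinical governance rules.

```python
import heapq
import itertools

# Sketch of worklist prioritisation: scans flagged as containing a
# critical abnormality are reported before routine scans, while
# arrival order is preserved as a tie-breaker within each priority.

_counter = itertools.count()  # tie-breaker: preserves arrival order

def add_scan(worklist, scan_id, critical_finding):
    # Priority 0 for flagged-critical scans, 1 for routine ones.
    priority = 0 if critical_finding else 1
    heapq.heappush(worklist, (priority, next(_counter), scan_id))

def next_scan(worklist):
    """Pop the scan the radiologist should report next."""
    return heapq.heappop(worklist)[2]

worklist = []
add_scan(worklist, "CT-001", critical_finding=False)
add_scan(worklist, "CT-002", critical_finding=True)   # e.g. suspected bleed
add_scan(worklist, "CT-003", critical_finding=False)

print(next_scan(worklist))  # CT-002 is reported first despite arriving second
```

The value of the prospective study Kavitha outlines is in testing whether this re-ordering actually shortens time-to-treatment for stroke and major trauma patients across the five sites.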
Haris Shuaib: Fantastic. Thank you, Kavitha, and thank you, Amar and Christina as well, for a fascinating introduction to your work. I have to admit, I didn't have any input into the construction of this panel, but I'm so glad we have such a diverse one: in just three people we basically cover everything, which should hopefully make the Q&A quite interesting. We've got secondary care, primary care, mental health, and different clinical backgrounds as well. We had people send in questions prior to the event, so I'm going to use some of them as a launching pad into what I want to ask, but I'd also be keen for people to put their hands up if they've got questions based on what they've heard so far. One of the things I want to start with, and I'll address this to everyone, is that a few of you mentioned the importance of, and the challenges around, data. There was a question sent in by Lee Whittaker about what, if any, other major risks or drawbacks there are to AI in health, and one of the major challenges is the quality of our data infrastructure. Particularly for the development of AI, you often require large data sets across many different geographies, and that means crossing institutional boundaries. I'd be very interested to hear about the challenges you've faced and the ways you've overcome them. Amar, you talked about how your new legal basis as a PCN enabled some activities. Did that make it easier to get access to run SQL queries on larger cohorts of patients, or was there still, you know, 80% of the work outstanding to do? So over to you, Amar, first.
Amar Patel: Yeah, I mean, everyone's terrified about data, really, aren't they, and data sharing. That's by far been one of the biggest issues: I'd love to be focusing all my time on building models and doing that kind of analysis, but actually one of the areas which has caused the most delays is information governance policy, and ensuring that everyone is aware of what's going on and that you have the right stakeholder buy-in. What's complicated it more is the roll-out of the national data opt-out, which made it even more difficult, where you had people opting out not really knowing why they were opting out, perhaps not knowing what was involved in that and what it actually meant. Our way of dealing with that is to be very basic. You have to remember that in general practice the baseline for digital maturity and data infrastructure maturity is very low, so we don't even really need to be thinking too much about AI right now; we just need to put in the foundational blocks that we can then build on. So one of the things in terms of moving forward was just using existing IT providers that have the ability to provide new ways of searching. Great: straight away that's a tick box on the pathway towards building more complex kinds of analysis. Then there is using what we can that involves implied consent anyway: clinical audit is classified under direct patient care and is one of the things that we can do in practice anyway. So we make our use of these technologies relate to things that are already part of our day-to-day work before going on to those higher-level areas. I think what's helped in our organisation is the sharing that was already going on: as I mentioned, we'd already been organised into a federation for the last several years, and that's turned into more of a PCN as well.
We already had data sharing arrangements in place where we were providing direct care, so it made it a lot easier for us to do that. But I'd say the biggest challenge is making sure that practices are organising and have a plan for direct care of patients, and then allowing that kind of direct clinical audit pathway, which makes it much easier.
Haris Shuaib: Fantastic, really interesting. Turning to you, Christina, particularly with the Limbic app, where you have end-to-end pathway management and you're potentially jumping between organisations or between different services. You mentioned that you're able to refer to a number of IAPT services. Does that present challenges? How has that been in terms of implementing Limbic, or clinicians being able to use Limbic to its fullest extent? How has data flowed across organisations, or does it? Is it easier because a lot of the data is held or controlled by Limbic themselves?
Christina Sothinathan: I believe that the data is held by Limbic itself, so that does make it easier, and the clinician can input into the app, so that it can help with Limbic Care when a patient is, sort of, having a bad day, more anxious or depressed; it can then pull that information from the clinical side to help support the patient. One of the challenges really is with the ICB and knowing where the funding flows are; I'd say that's probably one of the main challenges for innovators at the moment. So rather than it being organisational, it's really about funding flows at an ICB level. I don't know how everyone else feels about that, but that's the main challenge for the innovators that I've been working with.
Haris Shuaib: OK, interesting. So is that funding for the innovation itself or for the underlying infrastructure to enable it?
Christina Sothinathan: The underlying infrastructure. ICBs are relatively newly set up, and in terms of having those structures and a framework for funding flows in place, it's not that obvious, and there's quite a bit of uncertainty around that. That's quite a big barrier for innovation at the moment.
Haris Shuaib: Yeah. Kavitha, just to bring you into this conversation: I think you have an interesting perspective, in that your week is split between being a developer and being a deployer and evaluator, and those come with different challenges, right? I'm involved in very similar activity, and I know that it's relatively easier to curate static data sets. Obviously there's a lot of blood, sweat and tears that goes into it, but it's static, and you have a big cohort of data that you throw into your machine learning algorithm. At deploy time, when you want to evaluate it, say in a prospective trial, you need the data to come in the same structure and format, but you need it to come live. So in your experience, what have the challenges been? Has that gap been too big to overcome, or has it worked well in some pathways?
Kavitha Vimalesvaran: Completely, as you say, Haris: it's then getting the outputs to the patient, which is ultimately why we're all doing what we do. It's a little bit different on my end, because I work a lot with industry, in the sense that the algorithms I'm developing are for MRI scanners, and who owns the MRI scanners? Siemens or GE, for example. So to get these algorithms onto the magnets, you essentially have to be able to converse with those teams and convince them; they're eventually your stakeholders, and you need their buy-in so they're happy to have your algorithm on the magnet, and it has to be robustly tested before they would allow you to integrate it into such a big, already very diverse piece of software. So it's not easy. We have at the moment incorporated some of our early work, where we're able to get on-the-fly outputs for, say, cardiac volumes, mass calculations, etcetera, but that hasn't come easily, and you have to label everything as being for research purposes only. The other thing, I think, is being able to provide evidence that you have tested and validated your algorithms: training and testing on just one data set from one hospital on one scanner type may not be applicable to all other scanner types or different software. You really have to get data where you can show that what you've developed is actually transferable to other clinical data sets, and that's one way in which we're trying to get around some of this. Some of my early work was developed on noisy clinical data from Imperial, but we're now trying to validate some of those algorithms on the UK Biobank data set, essentially to get a bit more leverage and validation.
Haris Shuaib: Fantastic. No, really interesting. I think what's clear across all of those domains is I think it's we've come a long way in the past. So say five years in the amount of attention that frontline clinicians like yourselves as well as corporate and support services like it have put into data infrastructure. I think everybody knows that without solving that, nothing else gets solved. One of the things that the London AI Centre has been working on is exactly those kinds of problems. I won't talk about it too much right now, but we have two sort of flagship platforms. One is the Federated Learning platform, which essentially allows us to develop algorithms without moving the data at all, which solves some of the the privacy and IG risks around extracting data from data controllers. You essentially move the algorithm. Through the data and then aggregate the learning and then the other side we have the AI deployment engine which solve some of the interoperability challenges computer that you talked about where essentially you have a single into which you can deploy our algorithms and which can communicate with the rest of the hospital in an intro. Or wait. I'll put a link in the chat for those I wanted to read up more about it. There aren't any more questions that come in, so I'm just going to carry on talking about stuff that I like to talk about you guys. In. All of your work have, I mean the industry collaboration is pretty crucial to this, all right? And also Kavita, there's a pretty strong academic component, particularly when it comes to evidence. Generation right and figuring out what should we pay for and what works and what doesn't. Work. And I it reminds me a bit less of the academic component, but when we were sort of digitalizing healthcare sort of 20 years ago when we had sort of the first flat tax contract. 
coming out, and we had the National Programme for IT. Then, ten years later, when those contracts were coming to an end, we learned a lot of painful lessons about how we didn't quite set up those relationships for success; it was very difficult for a lot of us to move to new solutions or new providers. In your recent experience working with industry and with academia, what are some of the key takeaways you've come away with that would ensure the NHS partner gets the most out of that relationship? Or does the relationship need to fundamentally change, now that we're a bit wiser? Kavitha, do you want to kick off with that?
Kavitha Vimalesvaran: Yeah. I think you've got to manage expectations in terms of what your NHS partner wants to gain out of it and what industry wants to get out of it. For the NHS partner, it's essentially this: we've got a limited amount of resources, our resources aren't growing exponentially, but demand is, so how can these AI solutions be used to mitigate some of those issues? And it has to be a cost-benefit relationship. I don't know if it has been proven yet that all of these AI solutions are actually going to be beneficial from a cost point of view; some people might argue that we should just hire more healthcare staff for the price that you're paying for these AI solutions. So part of what we're doing at the moment, in terms of collecting this body of evidence from my current project, is conducting a health economics evaluation to see whether solutions like these do actually benefit NHS partners, make a difference to patients, and are suitable for mitigating the issues that we foresee and that are happening right now, especially in the post-COVID pandemic era, where there's a huge backlog of cases and demand for healthcare is higher than ever. Those are the things I think we need to work on between healthcare partners and industry: finding the common ground where we're working towards a goal together, basically.
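The federated learning platform Haris mentioned a moment ago rests on one core idea: the algorithm travels to each site's data, and only the learned parameters come back to be aggregated. A minimal sketch of that loop, purely illustrative — the one-parameter least-squares model and the site datasets are invented, and this is not the London AI Centre's actual platform:

```python
# Minimal federated averaging sketch: raw data never leaves its site;
# only updated model weights are returned and averaged.

def local_update(weights, site_data, lr=0.1):
    """One gradient step on a site's private (x, y) pairs for y ~ w*x."""
    grad = sum(2 * (weights * x - y) * x for x, y in site_data) / len(site_data)
    return weights - lr * grad

def federated_round(weights, sites):
    """Broadcast weights to every site, then average the local updates."""
    updates = [local_update(weights, data) for data in sites]
    return sum(updates) / len(updates)

# Three hospitals' private datasets, all consistent with y = 2x.
sites = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (4.0, 8.0)],
]
w = 0.0
for _ in range(200):
    w = federated_round(w, sites)
print(w)  # converges to 2.0 without pooling any site's data centrally
```

Real platforms add secure transport, scheduling and governance on top of this loop, but the privacy property is visible even here: the aggregator only ever sees weights, never patient records.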
Haris Shuaib: Fantastic. Christina, I think you have a really interesting perspective, and you don't need to focus on Limbic, but with your role as an NHS navigator with DigitalHealth.London. From your perspective, the AHSNs' role is to increase adoption of innovation, and you're usually very good at connecting innovators to trusts and the like. What could we do better from the NHS perspective to help innovators and to get the most out of it?
Christina Sothinathan: I think the biggest barrier is around implementation. We know that the current workforce are on their knees at the moment; there's a huge workforce crisis, and taking on a new innovation is just an extra thing for our clinicians, our workforce and our service managers to do when they're already inundated with an elective backlog. So getting that initial implementation going is a huge barrier for the workforce generally. And the issue is not just that the workforce is burnt out; it's that positions are not being filled. Even if there were money to recruit more clinicians, we're going to really struggle to recruit them, so AI can support the workforce when they're already depleted. I know of jobs that have been re-advertised because they're just not being filled, whereas maybe five or ten years ago they were filled very quickly and you'd maybe have 50 people applying for the same job. That's not really the case anymore. So the first thing is around implementation, with the workforce on their knees. What I'd encourage innovators to do is really help with that implementation process: almost provide project management support to the NHS trust or NHS organisation, to offload the burden on clinicians just for that initial phase until it becomes almost business as usual, and then there can be a transition period across. So those would be my thoughts on it.
Haris Shuaib: Fantastic, and that leads nicely to what I wanted to ask Amar on this topic. It also relates to Kavitha's AI lab and the forum that she has. Amar, you talked about a sort of hub that you're trying to build and a sort of in-house team. How far can we take that in your locality, as a PCN or indeed as an NHS organisation, in terms of in-housing some of the deeper technical expertise when it comes to AI? I'm conscious of time, but briefly, your thoughts on how far we can go with that would be interesting.
Amar Patel: Yeah. There's an interview with Satya Nadella, Microsoft's CEO, where they asked him what is the real innovation they see with AI, and his response was that he's most excited about the grassroots uptake: the idea that someone who's doing a job every day can have a packaged bit of software or code that is already built for them, which they can then apply to a day-to-day task. That's essentially where the real innovation comes from: the ground up. So it's about enabling people to do that. Now, realistically, we can't expect everyone in an organisation to become a specialist in data science; that's just not going to happen. But you can certainly target people that are interested in it and have some sort of local structure where they can then engage with the wider group. So I think it's about providing opportunities locally to do that, and then providing support, such as local innovation agencies or innovation funding, where they can take it to the next stage. The other element, and I suppose this is David's point about expectations, is that an AI product is essentially a start-up, really, or is best thought of as one. How many start-up companies are still alive in five years' time?
Amar Patel: Or ten years' time? I think that's the reality of it. So be aware, when you're investing in this area, to be realistic about what you're trying to achieve, and invest in infrastructure, governance, ethics and all of that kind of thing, which can then grow other elements, rather than gambling on a few products where it's questionable whether they'd be around in five years.
Haris Shuaib: Fantastic, Amar, a very pertinent point at the end. That brings us to one minute to go, so I'll take the opportunity to wrap up. Thank you so much to the panellists for sharing their backgrounds and the work that they're doing, all very much at the coalface and really grounded in the realities of delivering healthcare; hopefully the audience appreciated that. And of course, thank you to the audience for listening in and for participating with the questions beforehand. There's been a bit of chatter in the chat, mostly about connecting with some of our panellists; hopefully we can follow through with that at the end. From my perspective, I just want to say thank you to the organisers as well. There will be a recording, like I said at the top, which is going to be made available on the DART-Ed web pages, and finally you can follow DART-Ed on Twitter; the handle will be in the chat for you to follow, so you can stay up to date on any future webinars. That's everything from me, and I hope everybody enjoys the rest of the day. Take care. Bye.
Christina Sothinathan: Thanks. Alright, bye.
Media last reviewed: 3 May 2023
Next review due: 3 May 2024
Webinar 4 - Digital and AI transformation in Dentistry - where are we?
Digital technology is transforming how dentistry will be delivered in the future. Adopting digital opportunities will enable staff and patients to confidently navigate this new digital environment.
This webinar provided a scene-setting introduction to digital readiness in dentistry, as well as demonstrating the potential role of AI in dentistry and the interoperability challenge in the context of the profession.
On the panel were:
- Sam Shah (chair) - Chief Medical Strategy Officer, Numan and Digital Health Research Lead, UCL Global Business School for Health
- Andrew Dickenson - Chief Dental Officer for Wales
- Dr. Hatim Abdulhussein - National Clinical Lead - AI and Digital Medical Workforce, Health Education England
- Dr. Vinay Chavda - Academic Clinical Fellow in Dental and Maxillofacial Radiology, Birmingham Community Healthcare NHS Foundation Trust
- Dr. Tashfeen Kholasi - General Dental Practitioner and Vice President, College of General Dentistry
Watch a recording of the session below.
Watch webinar 4
Sam Shah: Presentations and discussions from our fantastic and esteemed panel. I'm Sam Shah, chief medical strategy officer at Numan and digital health research lead at the UCL Global Business School for Health, and I'm joined today by an amazing panel of people that many of you might know across the sector. We have Dr Tashfeen Kholasi, who is a general dental practitioner, CIO and vice president of the College of General Dentistry. We've got Dr Vinay Chavda, who is an academic clinical fellow in dental and maxillofacial radiology at Birmingham Community Healthcare NHS Foundation Trust. We've got Dr Hatim Abdulhussein, who is the national clinical lead for AI and digital medical workforce at HEE, and also the medical director of one of the AHSNs. And of course Andrew Dickenson, who is our Chief Dental Officer for Wales and has a big interest in digital and technology, and I'm sure will be sharing his insights from a policy perspective. So welcome, everyone. With that, I'm going to ask each of you to briefly introduce yourselves in terms of what you do, but also tell us about your views and thoughts on digital dentistry. I'm going to start with Tashfeen. So Tashfeen, over to you first of all.
Tashfeen Kholasi: Hello everyone. Thank you for inviting me; I'm really honoured to be here with such amazing talent. My name is Dr Tashfeen Kholasi. I'm a general dentist, but I have a very strong interest in digital dentistry. My previous path to dentistry was computer science and maths at university, and I worked in local authority on interoperability projects, so I have a strong interest in how we connect systems together. I then decided to go down the dentistry pathway, but realised how disconnected we actually are as a profession, both internally and from the rest of the health service, which led me to work at Health Education England as a clinical fellow in leadership back in 2017. That led me onto the path of working with NHS England and NHSX as the clinical lead for digital dentistry, where we delivered the first proof of concept that dentistry can be connected to the rest of healthcare, and we did that with urgent dental care and NHS 111. So we know that our suppliers in dentistry can connect to the wider system, and we can do that data-sharing piece where we're passing data from the health service into our dental services: patient name, patient details, what their primary concerns are. My other interest in the digital realm is around clinical safety and health IT. I'm also a clinical safety officer for a couple of different organisations, and it's about how we clinically manage the data that we're passing and the systems that we're using, to make sure that we're not introducing any harm or any new hazards to how we manage our patients. It's all about safety in health IT. I think I'm going to stop there for a moment and pass on to our next esteemed guest.
Sam Shah: Thank you very much, Tashfeen, and it's great to hear about your path from technology through to dentistry, and now bringing the two together. Next I'm going to move over to Vinay. Vinay, please can you tell us a bit about yourself, but also your thoughts and views on the sector?
Vinay Chavda: Hi, my name is Vinay Chavda. I am currently an academic clinical fellow in dental and maxillofacial radiology at ST2 level. I actually got into dental radiology through my work as a Topol fellow, which was an HEE-funded programme on the back of the Topol Review, looking at the future of the healthcare workforce across all healthcare professions. I was focusing particularly on dentistry, recognising that in dentistry we use a lot of dental radiography to treat our patients, almost every day. From that came a project looking at AI in dental radiography and dental radiology, and that's kind of how I ended up doing my job today and training to be a consultant in dental and maxillofacial radiology. I've got a few more slides, but I'll show you them later on once we've done our introductions.
Sam Shah: Feel free to share your slides at this stage, yes, please do.
Vinay Chavda: Sure. OK, great. It was easy for me to jot down my points on the slides so that I don't forget anything. So yeah, this was the Topol fellowship, and this is how I ended up getting my interest in digital technology and artificial intelligence, in imaging in particular. This table is slightly out of date, from 2019, but it shows that the ratio of acceptances to applications for artificial intelligence courses at university is just behind dentistry; these were UCAS figures that we used here, and it is actually above medicine as well. The other thing I want to bring up is that with all of these AI technologies there is a hype cycle, first described by Gartner, the management consultancy. Where we are within dentistry, I think there is still quite an inflated expectation of what AI can do, and everyone on this talk will be aware of some of the limitations of some of the digital technology; I guess that's what we're going to talk about today. The point I would like to reinforce is what we see when we look at the number of published papers on PubMed; these are two graphs I pulled from the PubMed database yesterday. Publications on dentistry and implants actually peaked in 2016, while AI in dentistry has not yet peaked, and we're nowhere near the level of published research that dental implants, for instance, have reached. When I had a chat with the editor of the Journal of Dental and Maxillofacial Radiology, almost half the journal is now looking at AI and dental imaging, and they were looking for a bioinformatician to join the editorial team to ensure that the research now being published is good quality.
And that's, I think, something that we need to be aware of as dentists: what is coming out there in future. So that's where I'll stop. Thank you.
Sam Shah: Really insightful, Vinay, and thanks for sharing those thoughts, especially having been one of the first Topol fellows as a dental surgeon, but also now embarking on a pathway that very much touches on all parts of AI. It's super helpful to hear those thoughts, and we'll come back to some of that in our discussion, especially around the Gartner hype cycle; it goes back to a point made earlier about the system as we see it. Next I'll come to Andrew Dickenson, Chief Dental Officer for Wales. Over to you, Andrew.
Andrew Dickenson: Thanks very much, Sam, and it's a real pleasure to be sharing this webinar with everybody today. As you introduced me, I'm currently Chief Dental Officer for Wales, but before that I worked for Health Education England as a postgraduate dental dean, and one of the real pleasures in my portfolio was to support the TEL team as it was developing and starting to move into the AI, robotics, digital and simulation field. As you alluded to at the beginning, I was not an expert in this, but it's amazing how quickly you upskill, and that led me to consider the whole concept of digital readiness. From there, that led to a paper which Hatim and I had published as a commentary in the British Dental Journal earlier this year, just to start that conversation; I think some of us on this call have that role, in terms of asking the questions, not necessarily being able to give any answers, but just to get the conversation flowing. For me, digital readiness always sounds a very simple concept, and in dentistry we have been early adopters of technology. But while technology is advancing at a really quick pace, we have to keep asking: is everybody coming on that journey at the same time? It's probably an overused term, but we have a widening digital divide, and as we learned through the events of the last couple of years, we've had to adopt technology a lot faster than we ever imagined. But does that really mean that we are ready? And when I say we, that's not just the profession, but also the patients that we're working with. My worry is that the digital divide tends to focus on access to digital technologies, whereas for me readiness actually refers to preparedness: do we have the skills, and do we have the trust in that technology?
As this influences the adoption of digital tools, which for me, and it would be nice to talk about this, is a very separate issue to access. I was taken recently by something that dropped through my front door from Lloyds Bank, who have been doing a lot of work in the digital sphere. They were saying that in the UK we have 99% internet coverage, but over a quarter of the population fall into the lowest level of digital capability, and 4% of our population have no basic digital skills at all. They also showed, and this is why I read their document, that 34% of the population actually struggle to interact with healthcare services, and this is where digital readiness is going to be really prominent over the coming years. The other conflict I have is that people talk about digital literacy and healthcare literacy, and for me that's almost a little bit of victim-blaming, because the system hasn't actually adapted to support people who are struggling with digital readiness; there's an assumption that it's up to them to go and learn this. So there are just three messages, Sam, that I wanted to give about digital readiness, because I think there are three interconnected elements that might be helpful for our discussions. One is digital skills: are people actually able to initiate an online session, surf the internet, and search for content in a meaningful way? Secondly, trust: do they actually trust the information that they read online, and do they trust the system? And the third is usage: what do they actually use digital for? Interestingly, all three elements can actually be measured, ranging from asking very simple questions, so in terms of digital skills: how confident are you getting on the internet?

If you've got a new piece of electronic equipment, can you set it up, or do you need people to help you? And certainly from a dentistry point of view, how confident are you with your learning? I think these are very important questions with which we can start to measure people's preparedness. In terms of trust, that's really complicated, and as you alluded, there is a policy element in that: is the infrastructure right, and are there systemic elements involved in all of this? It's all very well saying we need to go online, but are we actually providing people with the software, the hardware, the data security, and the financial element behind all of that? And of course we've got to overcome the usage element: there are some early adopters, there are some hesitant users, and there are some who are non-adopters yet, and we've got to think about how we bring everybody in. So if I just conclude, Sam, my suggestion is that digital readiness requires us to think a little bit differently and move away from the technology adoption element; let's look at the culture and the behavioural changes that are required, especially in healthcare as we're moving patients into that sphere. There's the adoption element, and that means we've got to become more proficient, and that proficiency starts today; it starts with education, and I'd signpost everybody to the HEE resource on the digital readiness education programme. In conclusion, we can actually measure people's digital readiness using a digital readiness score that takes into account both our digital capability and our proficiency, and I think if we can think like that, then we've made a big step forward. So thanks very much.
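Andrew's point that the three elements can be measured with simple questions and combined into a score can be made concrete with a toy example. The question wording, the 0-4 scale and the equal weighting below are all invented for illustration; this is not the actual instrument he refers to.

```python
# Toy digital readiness score: average three self-reported 0-4 answers
# (skills, trust, usage) and rescale to 0-100.

QUESTIONS = {
    "skills": "How confident are you getting on the internet? (0-4)",
    "trust":  "Do you trust the health information you read online? (0-4)",
    "usage":  "How often do you use digital services? (0-4)",
}

def readiness_score(responses):
    """Average the 0-4 responses across the three elements, rescaled to 0-100."""
    for key in QUESTIONS:
        if not 0 <= responses[key] <= 4:
            raise ValueError(f"{key} response out of range")
    return round(sum(responses.values()) / len(responses) / 4 * 100, 1)

print(readiness_score({"skills": 4, "trust": 3, "usage": 2}))  # 75.0
```

A real instrument would weight and validate the items properly; the value of even a crude score is that, as Andrew says, it makes readiness something you can track rather than assume.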
Sam Shah: Thank you, Andrew. I very much like your take on digital literacy, and you're absolutely right that we need to change the way we frame it; yourself and other very senior leaders are there to ask the right questions and get people to think about what we might mean and what we can do to change things for patients. Thank you also for mentioning the Lloyds Consumer Digital Index; for anyone who wants to read it, I've put the link in the chat so you can follow up. In a moment we'll come over to Hatim. We've heard from Tashfeen about the disjointedness in the system, her journey through working in national organisations trying to make a change in digital dentistry, and how she came with a set of skills. We've heard from Vinay about the Topol fellowship and the opportunities it brings, both in terms of his own work and what it means for the profession as a whole. And then we've heard from Andrew about the work around digital readiness, what that might mean, how we score people and the difference it makes, and we didn't even mention the digital poverty in parts of Wales, where telehealth is almost non-existent. Hatim, you've been leading on this programme, you've been running large parts of the system and you have a role in innovation. It would be great to hear from you about what people can access, but also your thoughts around innovation in healthcare.
Hatim Abdulhussein: Thanks, Sam. I'm going to bring up a couple of slides, but whilst I do that, it's really important to say, as a practising GP, that oral health is essential, and it's a real pleasure to be around the table with oral health experts. Ultimately, as a generalist, this is important for the wider workforce as well; outside of dentistry, we should all be thinking about how we promote oral health. It's linked to cardiovascular disease and other general health outcomes, and we know it's actually a really important factor in understanding how well someone is coping in society and where they need any further support. I've had the pleasure of working with our local primary care network, and we need to do more within primary care networks to bring general practice, dentistry, pharmacists and other healthcare professionals together at that neighbourhood, place-based level of care, to deliver outcomes that make a difference. Technology is one way in which we can do that, and one enabler to allow us to do that. I'm going to provide a brief update on our digital, AI and robotics education programme and share, as Sam kindly mentioned, a couple of opportunities that are on the horizon. I know some of the questions that came in prior to today's conversation talked about some of the factors that we know are going to impact on confidence in AI: things around regulation and standards. We've got a previous webinar that goes into more detail around what drives healthcare worker confidence in AI, so please do check that out.
That was webinar 3, and all of these are recorded on the Digital Academy website, which has been shared in the chat. Just to highlight, we recently published a report on developing healthcare workers' confidence in AI, which talks about all of those levels: from the national level, in terms of regulation and standards, to the local level, in terms of the local validation that's needed when you're delivering an AI or digital solution, and ultimately to the coalface of the relationship with the patient: how does a technology like AI impact on the way we make decisions and the way we interact with the people that we serve? So if you want a little bit more reading on that, please do check out those pieces of work. I'm really pleased to announce, and I think this is fantastic, that we are opening up the fellowships in clinical artificial intelligence to both medical and dental postgraduate learners. This is the second cohort of the fellowship programme; the first cohort was London-based, but the second cohort is wider than London, with posts now in most regions of England. So I'd encourage you to have a look at the website and find out more. If you are a dental learner currently in the ST3 year of specialty training, or in a programme which ends at ST3 and in your ST2 year, then you will be eligible to apply for these fellowship programmes. What's really unique about these clinical artificial intelligence fellowships is that they're an opportunity to do work that is AI-deployment focused: moving away from some of the research and development towards actually thinking about and embedding some of these technologies, and improving the care of the people that we serve through that.
So it's a really interesting time and opportunity to join that programme. The Topol fellowship programme, sadly, is about to close for applications today; if someone out there is really keen to get an application in today, there is still time. If not, I am pleased to say that in all cohorts we've always had representation from dentistry and oral health, and I hope that continues in the next cohorts. Next year we will be relaunching the flagship NHS Digital Academy digital leadership programme, which Sam was on, in cohort one, as a real trailblazer in this space. So keep an eye out for that if you are keen, like Sam and like Tashfeen, to move into really senior dental leadership roles around digital; that would be great to see. Just to highlight, these opportunities exist and they're open to our dental workforce out there. I hope you've found hearing from the esteemed panel today really useful, and we look forward to answering some questions.
Tashfeen Kholasi: Thank you, Hatim, that was brilliant. I was just going to say that I actually did the Digital Academy as well; I was part of cohort two. So if anybody is considering applying for either Topol or one of these clinical fellowships in AI, I highly recommend it. The year that you spend, thinking to yourself "am I going to step out of clinical dentistry for a year?", will change the way you think about healthcare. So I highly recommend it; take the opportunity. If it doesn't work for you, that's OK, you can always go back into clinical work, that's bread and butter, but try something different. There's no harm.
Sam Shah: That's a great plug, Tashfeen, because I think both yourself and Vinay have taken the courageous step of doing something slightly different, on a different pathway, which has resulted in both of you being able to do some excellent things and have influence at a system level. So a fantastic plug, thank you. Now we move to questions, and we've got lots of questions already lined up, but I'd also like people who have joined to ask questions. So if you'd like to ask a question, put yourself on camera if you can, but either way, click on the reaction button so I can see your name pop up in the participants list and know that you want to ask a question. I'm going to take the first question, and it goes first to Andrew and then to Tashfeen. We've got a question from Sammy Stagner, and that question is: is the fragmentation of dental services an issue with collecting and implementing large data set solutions in primary care? Andrew, I'll come to you first and then to Tashfeen.
Andrew Dickenson: Yeah, a brilliant starter question. Data is key to everything that we do: it's key to research, key to development, key to policy. And I think we are moving into a new era where we need to understand how the care that we're delivering is having an effect at a population level. At the current time, a lot of the data that we collect in primary care tends to be more transactional: it's around billing, appointment scheduling, document storage. It could do a lot more than that. The fragmentation of dental services is an interesting element, because we are still siloed to a certain degree between primary, community and secondary care; we have an insight into what each of those areas is doing, but it isn't a fluid flow of information between the three sectors. The other element around data collection is the large number of pieces of software that practices use as part of their practice management systems: in Wales we've got 14, and in England I think it's 25 or 26, so straight away some of those are inaccessible. Very few are cloud-based; presently most require a local server, so it's very difficult to extract data from these systems. I think that is a real challenge for us. And of course the question then is: do we start thinking about dentistry similarly to general medical practice? Do we go for one or two pieces of software, all cloud-based, so you can start extracting that information? And maybe this is where, when I was talking about digital readiness, we've got to be more trustworthy about the data. I think people are sometimes very suspicious when we're asking for data, and we've got to be mindful that when we ask for data, it can be very time-consuming for people to go and download it or put it into a format that can be used.

So maybe, as we go through the upskilling of practitioners in how to use data, we should also be asking why we are using the data and what we want it to do for us. But I think it's going to be difficult until we are probably more cloud-based.
Sam Shah: Wow, there's lots.
Vinay Chavda: What do you think? Sorry.
Sam Shah: Lots in there. There's a piece in there about interoperability, integration, data privacy and trust, and of course the perennial issue we face about those systems and where the data is held. Tashfeen, you've been a CIO, you're a GDP, and you've been at the centre trying to piece this stuff together. What are your thoughts?
Tashfeen Kholasi: So I think the first question in my mind is: what's the problem we're trying to solve? Everyone is talking about AI and all the amazing solutions to how we're going to provide care, but actually, what are we trying to solve? We have lots of data points in dentistry, I completely agree, and we have lots of different systems in different parts of the dental health sector. How do we connect that up? That's the interoperability issue. And we don't really have any standards with regards to interoperability for dentistry per se. There's been an attempt to use SNOMED, but actually, is that the way forward? Should we be using other forms of interoperability standards like HL7 and FHIR? One of the things we were trying to establish a few years ago was to get some common standards going so that we could have a digitally connected dentistry system across the network, from hospital services into primary care. We have a whole host of information just sitting in our systems in practice, and that covers anything from pathology to caries management, to X-rays and imaging, to planning things like orthodontic treatment, and you can use AI tools to do that. But ultimately, and this is going back to what you were saying about digital readiness and how we're preparing our profession to go forward using technology, the other issue with interoperability is that we're often talking about how servers are connected and how we can process large chunks of data. That's only half of the solution, because from a user's perspective, we've got user interfaces and users who will need to be using several different systems. One of the hardest and most frustrating things as a clinician, working in hospital and in practice, is when I have to log into multiple different systems to be able to access that data. So if I've got some AI running in the background that's going to help me, as an adjunct to my care, I want it to be integrated into whatever system I'm using to make a seamless workflow. That's all part and parcel of what we're trying to achieve. And then my other issue is around clinical safety. I love clinical safety, I think it's really important, and we need to...
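The interoperability standards mentioned in this exchange (HL7 FHIR and SNOMED CT) can be illustrated with a minimal sketch. This is a hypothetical example, not taken from any system discussed in the webinar: it builds a FHIR-style Condition resource as plain JSON, with a SNOMED CT coding (the code shown is the published SNOMED CT concept for dental caries, but should be checked against a terminology browser before real use). The point is that when every system emits the same agreed shape, any other system can parse it.

```python
# Illustrative sketch only: a FHIR-style Condition resource carrying a
# SNOMED CT coding. Patient reference and resource contents are hypothetical.
import json

condition = {
    "resourceType": "Condition",
    "subject": {"reference": "Patient/example"},  # hypothetical patient
    "code": {
        "coding": [{
            "system": "http://snomed.info/sct",
            "code": "80967001",        # SNOMED CT concept: dental caries
            "display": "Dental caries",
        }]
    },
}

# Serialise as the sending system, parse as the receiving system: because
# both sides agree on the FHIR JSON shape, no bespoke mapping is needed.
payload = json.dumps(condition)
received = json.loads(payload)
print(received["code"]["coding"][0]["display"])  # -> Dental caries
```

The design point is that the standard fixes the structure (`resourceType`, `code.coding`) and the terminology fixes the meaning (the SNOMED code), so a hospital system and a practice management system never need to know each other's internals.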
Sam Shah: What is clinical safety? There are a lot of people here who will think of it in terms of what we do in practice, but what do you think of clinical safety as?
Tashfeen Kholasi: Clinical safety in health technology, so health IT. It's about ensuring that the data and the technology we are using are safe to be deployed into the services we use for safe patient care. For example, a patient comes in expecting to have some investigative tests, and they're expecting that data to be presented to them in a way that's usable and has clinical significance. But has something gone wrong somewhere in the pipeline that investigation and its outcome have gone down? Think of your Swiss cheese model: an error has penetrated through the technology to the degree that you've now got a patient who's been told, for example, that they've got an oral cancer when actually they haven't; something's just gone wrong in that pathway. So it's about ensuring that the technology we're using isn't introducing any new hazards, and that when it falls down, it stops at that point. It's making sure that the tools we're using provide the diagnosis we're expecting and don't introduce new harms.
Sam Shah: I think that's really helpful to highlight those particular things. I was trying to get to the chat to put in the link to DCB0129 and DCB0160, but I can't get into the chat; if you can, do share the link so people can look those things up and understand clinical safety in health tech.
Tashfeen Kholasi: Go on, I was going to say that that's a digital standard which is only mandated in England, so in Wales, Scotland and Ireland it's not required. However, the theory behind it stands, so even though we don't necessarily have to use it in our allied countries, it's definitely worth following. There are clinical safety officers: a clinical safety officer has to be a clinician, has to understand how health IT works, has to understand the standards, and they can then become somebody who promotes safe health IT in healthcare. That's open to dentists as well. As far as I know, I'm probably one of the few dentists who is a clinical safety officer in the UK, but as we grow, we're going to need more.
Sam Shah: Absolutely, and I think that's a good call for people to join that section of the profession. As we move into a world of technology that will be AI driven as well, this will become increasingly important, in the same way that it would be for a medical device. Which takes us on to our next question, which is for Vinay. Vinay, you know Peter Bailey, who many of you might know, has been working and leading on tech in a big provider organisation. What's your view of the X-ray-based AI technology in the marketplace at the moment? Something I'm sure you're very familiar with.
Vinay Chavda: Yeah, absolutely. The last time I looked, there were fourteen companies worldwide with some sort of dental AI imaging algorithm open to market. My thing with this, going back to the safety element, is that we are all GDC registered and we've got to put our patients' best interests first. The word 'tool' was just brought up in the conversation, and essentially AI imaging interpretation software is just another tool we can use in our day-to-day practice to make our jobs easier, make them safer, and make sure we don't miss anything for our patients. Now, with all tools, you can use them the right way or the wrong way. You can put a rotary endo file down a canal and fracture the file if you're not careful enough, and it's similar with AI imaging algorithms: if you're not sure what you're looking at, or you're not sure how the data is being presented to you, you will make errors, and then you won't be putting your patients' best interests forward. So there is lots of availability out there. There are lots of companies providing promotional information and advertising for themselves, because it is a free market, which is healthy and will build the AI software sector. But as clinicians, if we are going to procure this software, we need to make sure we do it correctly and through the right standards set out in the UK, and we've got many regulators of digital technology: the MHRA, the CQC and others. Essentially, primary care practices are all separate small individual companies, and they will procure different pieces of software, whether that's AI imaging or dental software such as electronic patient records. We want to make sure that we're doing the right thing for our patients. So how do we assess what's best? I guess the answer is up to the regulators within this country. But there are lots out there. They are getting better slowly; there are still some errors. And again, that's when you need to assess the literature, assess what is being published, and be able to determine what is good and what is not good.
Sam Shah: That's really interesting, particularly the issue around evaluation and assessment, and Vinay, I'm going to touch back in on this. A number of people on this call today, myself included, were in a discussion with Eric Topol earlier this week, and one of the things he referred to was the bias in datasets, especially when you're building AI. Are you worried about the risk that when we're building technology using AI in dentistry, we're inherently using a very biased dataset? If we were to take an NHS dataset, say in the UK, the population it comes from could itself be biased. Is that a worry for you?
Vinay Chavda: Well, that's it. I mean, the old 'garbage in, garbage out' thing. We all know that term, but what does it actually mean? One of the most cited papers in the dental and maxillofacial radiology journals showed an AI tooth-numbering system working on an OPG, but it got thrown off very quickly when an implant or a root-treated tooth was thrown in. It's very easy to throw off these algorithms with something that has not been seen before. If the algorithm was trained on a patient population where a particular type of implant has never been used, or isn't regulated for use in this country, that will throw off the algorithm. So yes, absolutely, you've got varied patient demographics and ethnic backgrounds in this country, which could absolutely throw off detection of different pathology.
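The failure mode Vinay describes (a model trained on one population degrading when the population shifts) can be sketched in a few lines of Python. This is a toy illustration with synthetic one-dimensional "features", not any of the commercial systems mentioned: a nearest-centroid classifier fitted to a source population keeps its accuracy on data from the same population but collapses on a shifted one, just as a tooth-numbering algorithm is thrown off by an implant type absent from its training data.

```python
# Toy sketch of distribution shift: all data and thresholds are synthetic.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Two classes of 1-D 'image features'; `shift` models a population change
    that moves the two classes by different amounts (class 1 drifts harder)."""
    x0 = rng.normal(0.0 + shift, 1.0, n)
    x1 = rng.normal(3.0 + shift * 3, 1.0, n)
    X = np.concatenate([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

# "Train": learn the class centroids on the source population only.
X_tr, y_tr = make_data(500)
c0, c1 = X_tr[y_tr == 0].mean(), X_tr[y_tr == 1].mean()

def accuracy(X, y):
    # Predict whichever centroid is nearer, then score against the labels.
    pred = (np.abs(X - c1) < np.abs(X - c0)).astype(float)
    return (pred == y).mean()

X_in, y_in = make_data(500)               # same population as training
X_out, y_out = make_data(500, shift=-2.0)  # shifted population, unseen in training
print(f"in-distribution accuracy:    {accuracy(X_in, y_in):.2f}")
print(f"shifted-population accuracy: {accuracy(X_out, y_out):.2f}")
```

On the unshifted population the classifier scores around 90%; on the shifted one it falls to roughly chance, which is the "garbage in, garbage out" point: the model is only as good as the population its training data represents.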
Sam Shah: You're going to come in there?
Tashfeen Kholasi: I'm just going to mention the regulatory aspect. You're looking at software as a medical device, so there are ISO standards. For any technology that you're planning on procuring, you need to make sure it's got the right regulatory approvals and is meeting the right standards, because if it's not, that's when you're going to be bringing in inherent risk.
Sam Shah: And is there anything in particular that you're finding evolving at the moment for regulation in the UK, especially now with the change from Europe? Is that something that innovators should be aware of?
Tashfeen Kholasi: Again, software as a medical device: I think that's something that wasn't massively heard of in the past, but it's definitely becoming more familiar. As we're getting more and more entrants into the market, you need to be aware of that regulatory standard. You've also got things like the software development lifecycle that you should be adhering to for best practice, because at the end of the day, when something goes wrong and you are audited by the regulators, they are going to be looking to see what your processes were, and you need to be able to justify why you've done what you've done to make sure that you've made things clinically safe.
Sam Shah: Hatim, you probably have lots of innovators coming to you in the AHSN and, of course, in HEE. What's your general advice to them when they're starting out and thinking about the standards they need to meet and the type of validation they need for a product, especially one that could be in dentistry?
Hatim Abdulhussein: Yeah, look, it's a very difficult landscape to navigate, and it takes some time to truly understand it. That's where the role of organisations like Academic Health Science Networks is really important: they have experience in running innovation exchange programmes and in working with small and medium enterprises to overcome some of these challenges, and getting set up on some of those accelerators really matters. Our colleagues at NICE and the MHRA have been doing work in the background to make this space as clear as possible. They've got a platform that they're currently piloting in beta phase, and that will be a really useful platform to guide people working in this space to understand what kind of regulatory boundaries they need to overcome. There is an evidence standards framework for AI from NICE as well, which is really important because it starts to set out the type and level of evidence you need to be able to have a solution enter the market and the NHS. So that's really important, and we are seeing transformation in this area. I think regulators are taking notice, but as with anything, these things take time, and it's important to say that time is needed because these are very sensitive matters. These are people's lives that we're dealing with. We know how important oral health is, we know how important avoiding bias in the way we make decisions is, and therefore we need to do these things safely, we need to do these things ethically, and we need to make sure that we have the right infrastructure.
Sam Shah: Thinking about infrastructure, only the other day I was listening to a discussion on the whole genome sequencing that's taking place, and things like the use of CRISPR, but I won't bore you with that now. I'm going to change tack and think about that framework. We know that advanced technology is already being used across the NHS, for example for genome sequencing, yet in dentistry we often don't see even what now might seem basic or routine technology emerge. Starting with Andrew: we've seen extended reality being used in maxillofacial surgery, where colleagues at King's and elsewhere have used the technology to plan surgery without necessarily having to start operating, or to have more precision around the type of device or implant they may place. What are your thoughts on the NHS framework? Can we really implement and utilise technology advances in the same way as other sectors, and are we really able to keep abreast of AI or other things in the NHS environment?
Andrew Dickinson: The simple answer is we can and we should, but the more complex answer is that it's really difficult. It's very difficult because as a professional group we're actually quite small compared to medicine. We tend to be almost primary care orientated, which is a really strong place for us to be if we were going to make a whole-system change, but at the same time the majority of us are frontline clinicians, and therefore the opportunity to start looking at technology falls into our spare time. So you're absolutely right: the framework is guidance, a yardstick for us all to aspire towards, but adoption is going to be very different, and I think there are various policy areas around this. Number one is whether it's something that we really want, and I think we've all said this in our introductions: if it's something that we really want, then we can answer that question honestly: what is it going to do for us?
Andrew Dickinson: Then that's where we have to drive it through policy; we need to be funding this. Personalisation in medicine is going to come into dentistry, and we should start to think like that. As our oral health starts to improve, we're looking more at what we can do with our patients rather than what we can do for our patients, and technology is going to be perfect for that in terms of monitoring and supporting patients; more of that mHealth will come into the primary care situation. But I guess that's one of my other concerns, which I said at the beginning: when new technology comes along, we see it at conferences, we hear conference presentations, we get very excited, and we go and spend money. Being cynical now, we buy something nice and shiny, and then we don't utilise it to its full capability, because either we're not using it for the right reason or we don't have the level of practice where it's going to be adopted on a more regular basis. So you're quite right that in secondary care advances have probably been driven a lot faster than in primary care. But I strongly believe that if we can get what we're asking for correct, and then negotiate that appropriately through policy, we've got a real opportunity to make substantial changes in the digital world within dentistry. The caveat on that is we've got to have those open conversations about what it is that we want to do.
Sam Shah: That really nicely sums up our session today, which comes to a close in a few minutes. To summarise: we need to avoid searching for problems to fit solutions that might exist, and instead be problem focused. What do we want to solve, what's the business benefit we want for the system, and what's the outcome we want for the patient? As many people know, I'm a cynic about lots of things, and one of them is how we sometimes end up adopting technology for the sake of adopting technology; I've certainly seen many examples of that over the years in the NHS and elsewhere. But that takes us to the end of our session today. The questions and discussion probably could have carried on for a few more hours, but I'd like to take a moment to first of all thank Hatim and the team at HEE for putting this session together, and for the work they're doing around digitisation of the profession and professionalisation, which is absolutely amazing in itself; I urge you all to look at the website and apply where you can. Thank you to our fantastic panel, to Tashfeen, to Vinay and to Andrew, for championing this particular area of digital transformation, AI and data in dentistry. Please do keep it going; let the conversation continue. Thank you very much everyone for joining us today and for your questions, and see you next time.
Hatim Abdulhussein: Thank you, and thanks Sam, for excellent chairing.
Media last reviewed: 3 May 2023
Next review due: 3 May 2024
These webinars also include the work the DART-Ed programme is undertaking around education, and preparing the workforce with the knowledge and skills they will need, now and for the future.
Webinar 5 - Artificial Intelligence (AI) and Digital Healthcare Technologies Capability framework
This webinar looked at how the AI and Digital Healthcare Technologies Capability Framework, published on 21 February 2023, addresses the need for our healthcare workforce to continually adapt to meet the needs of the society it serves. Health Education England (HEE) commissioned the University of Manchester to perform a learning needs analysis and develop a framework outlining the skills and capabilities to ensure our health and care professionals can work in a digitally and data enhanced environment.
On the panel were:
- Professor Adrian M Brooke (Chair) - Medical Director (Workforce Alignment) Health Education England, and Honorary Professor, University of Leicester
- Dr Hatim Abdulhussein - National Clinical Lead for AI and Digital Medical Workforce, Health Education England
- Dr Alan Davies - Senior Lecturer for Health Data Science, University of Manchester
- Professor Sonia Kumar - Professor of Medical Education and Associate Dean, University of Leeds
A recording of the webinar is available below.
Video: Watch webinar 5
Adrian Brooke: It's 12:45, and thank you all who have logged on for coming; we hope you will stay with us for the whole time. Can we ask that all people joining the webinar have their cameras turned off and their microphones on mute? The session will be recorded. This is supposed to be a kind of a chat, so if you'd like to ask questions, it's really helpful if you put them in the chat window, which you can see if you click on the series of icons at the top middle of the Teams screen. There's 'Chat'; if you click that, a window will open (I'll even do it now) and a little column appears on the right-hand side of the screen. You can write your question there and then post it using the little paper aeroplane icon in the bottom right-hand corner of the screen, and then we'll know what question you've asked and we can pose it to the panel. So thank you all for coming. Can I start by asking all our panel members to briefly introduce themselves, where they work and what their job title is? I will demonstrate: my name is Adrian Brooke, I'm medical director at Health Education England, my background clinically was in paediatrics, and I'm an interested dinosaur from the pre-digital age in this area. So can I go over to you, Sonia, next please.
Sonia Kumar: Hello everyone lovely to be here. My name is Sonia Kumar. I'm a GP by background professor of medical education and I'm an associate Dean at the University of Leeds.
Adrian Brooke: Thanks, Sonia. And can we then move on to Alan next please?
Alan Davies: Hi everyone. I'm Doctor Alan Davies. I'm a senior lecturer at the University of Manchester, and my background is in nursing and computer science.
Adrian Brooke: Excellent and last but by no means least, Hatim.
Hatim Abdulhussein: Thanks, Adrian. My name is Hatim Abdulhussein. I'm a GP in northwest London and Health Education England's national clinical lead for AI and digital medical workforce.
Adrian Brooke: Thank you. What we're going to do is just very briefly introduce this session and set the scene. What I really want to do here is remind everyone that the AI and Digital Healthcare Technologies Capability Framework was published this morning at 10:00am and is available on the NHS Digital Academy website. The report and the framework build on the findings and recommendations made in the Topol review, which came out in 2019 and was entitled 'Preparing the healthcare workforce to deliver the digital future'. That outlined a set of recommendations preparing the NHS workforce to become world leaders in utilising digital technologies to the benefit of our patients. Now, we know clinical teams in the near future will be required to use AI (artificial intelligence) and other digital health technologies effectively and equitably, for the benefit of all. And that's starting now: this is not something for the distant future, it is occurring as we speak. In response to this need, Health Education England (HEE, as it's foreshortened), an arm's length body, commissioned the University of Manchester to undertake a learning needs assessment and create a capability framework, to aid the learning and development of our healthcare workforce. The framework aims to help healthcare workers identify gaps in their current knowledge and areas for preparatory activities, to support digital transformation of the workforce as well as their own individual learning. The capabilities we've just published build on the foundational digital literacy capabilities first introduced in the Health and Care Digital Capabilities Framework.
Adrian Brooke: So the AI and digital healthcare framework extends this with capabilities around the use of health data and the technologies that make use of this data: for example, applications on your mobile phone or computer, wearable technologies, software and programmes, etcetera. This is further extended with more advanced capabilities like artificial intelligence and, of course, the advent of robotics. The capabilities range across the whole spectrum, from initial awareness through to implementing these technologies in a healthcare environment and supporting digital transformation projects. So I'm going to shut up now, which is probably welcome for everyone, and turn to Hatim and Alan to present the framework. Thank you.
Hatim Abdulhussein: Thanks, Adrian. I'm just bringing up the slides; hopefully we can all see them. Give me a second. OK, everyone can see the slides? Perfect. So Alan and myself are just going to go through the methodology behind the framework and a brief overview of what it includes. When I was reflecting on what I was going to say today, I looked back to when I started my GP training as a registrar, with my first placement as a GP in a practice in Hayes. I was doing majority face-to-face consultations and practising in a way that seemed very familiar to me; going back, only about two or three percent of my consultations were even telephone based, so the majority were with the patient in front of me. I then went to work in accident and emergency as a trainee, and we hit the first wave of the pandemic, and being in accident and emergency I noticed certain things. I noticed how suddenly our nursing staff were collecting observations on a device and inputting them into a system.
Adrian Brooke: I’m hearing people are struggling to see the slides, so we've got messages in the chat that says they can't see the slides.
Hatim Abdulhussein: Let me see if I can share it again.
Adrian Brooke: That's a few.
Hatim Abdulhussein: Give me a second.
Adrian Brooke: If you go into presenter mode and we all disappear, that might be easier. Oh, can someone message to say whether the slides are showing now? Yeah, we've got some yeses. OK, thank you, that's really helpful.
Hatim Abdulhussein: Wonderful. So, being in A&E, I noticed that things were changing. Nursing staff were recording observations on a system, and we were having to access that system to look at the observations, and we had new healthcare records that we were using in that emergency department. I remember going in one day in paediatric A&E and being told we've got this new system in place, without really being shown how to use it or having had the time to familiarise myself with it. I then went into general practice and noticed the whole world had changed. When I logged into my system, I suddenly had widgets on the screen that allowed me to send and receive text messages from patients. Suddenly, when I was looking at appointments, I had something called an e-consultation, where people were giving me information beforehand that I then had to act upon, thinking about what I was going to do going forward. All of a sudden I was doing about 50 to 60% of my consultations either via the telephone or, in some cases, even by video consultation. And I reflected on, one, how we got to this stage, but two, whether I felt that I had the best skills in place to be able to work in this new way. I was also at a point where I was preparing for my general practice exams, and a key part of those exams was to record myself consulting with patients, and a lot of these consultations were over telephone and video. That became an opportunity for me: it allowed me to really analyse the way I consult with patients and to reflect with my educational supervisor on how best to do that, and on what kind of mitigation I needed to take when consulting with a patient over a video call rather than over the phone, or versus a face-to-face consultation.
And so when I came into my role at Health Education England, it was very important for me to think about how we do our best to help people understand what they need to know to be able to work with the types of technologies through which we interact with patients. That's really the key context behind why this is important: it will enable people working in health and care to understand the types of skills they need when interacting with patients and using technology. Passing over to Alan, and I'll move the slides along.
Alan Davies: Thanks. I'm just going to talk very briefly about the methods we used to create the framework. We used an iterative, mixed-methods approach that involved co-design as well. This involved carrying out a systematic literature review to look at the academic side of things and where the different gaps were, a series of workshops which we ran online, and then a digital survey as well. Next slide, please. The systematic literature review was used to generate some initial concepts, and this was carried out by Health Education England's knowledge and management team. As well as the academic literature, we looked at the grey literature: existing frameworks, international frameworks and other relevant policies and documents. We used this to generate a set of groupings of topics, themes and concepts, looking at the things that were coming up constantly in the literature and seemed to be important, and we grouped these together roughly into what we call a concept map. That acted as the basis for the workshops, giving people a starting point so they could look at the kinds of technologies we were talking about under those main areas and spark the debate. Next slide. We carried out the workshops online (it was during the pandemic), and we used a tool called Miro, which is an interactive board that allows multiple users to work at the same time on the same page. We also put people into breakout rooms, and the series of workshops targeted different stakeholder groups. In the first one we had people like the Topol fellows and NHS clinical entrepreneurs; in the second group, we had industry representatives.
This was Babylon Health, Google Health, Barclays and Bupa, and the final workshop was focused around subject-matter experts. We used each of these three workshops to take the topics, spark discussion, and consider what the different capabilities might involve, and then we were able to rank these in order of importance and complexity. Next slide. We used something called the nominal group technique for the workshops. This is quite a useful technique when you've got people who aren't familiar with each other, or where you might have power-dynamic imbalances. Essentially, there's a nominal phase where you privately consider the information (we did this offline prior to the workshops), and then an item generation phase, which is all around ideation: people come up with ideas without being interrupted by others, and we captured these on Post-it notes on the Miro board. Then you go back around to clarification and discussion, where you can probe into the different ideas and ask people to explain them. Finally, there's a voting stage where you order the priority of the different items. We used this to generate a draft version of the framework. Next slide, please. We then sent that draft framework out via a survey for wider participation, so we could get more people to give us feedback. We took that feedback on board and constructed the final version of the framework that you can see in the report. Next slide, please. As I mentioned before, the framework is built on top of the original digital literacy framework, which forms the foundation. On top of that, we've got a lot of skills around data, because obviously a lot of these advanced technologies (wearables, AI, machine learning) are all built on an understanding and use of data. And then on top of that, we've got those technologies themselves.
And then at the higher end, we've got things like artificial intelligence. So it's built up in that way, basically, and it straddles the space between the original digital literacy framework, which is very much around basic digital competencies (can you switch your machine on, send emails, do all these fundamental digital things?) and, at the other end, the specialist frameworks for specialist groups like informaticians. This framework very much straddles that space in between the expert frameworks and the very fundamental digital literacies. Next slide, please. The other problem we had here is how you make a framework when you've got so many different types of roles in the NHS, so many different types of workers in the NHS workforce. It would be quite a challenge to map these capabilities onto all those different working groups. And the other problem is that some of these working groups will have different roles: you might be a clinical nurse, but you might also be involved in informatics projects, for example, so you might wear multiple hats. To get around this, we used archetypes instead. Essentially, we map the capabilities onto archetypes, and then people can self-identify which archetype or archetypes they belong to, or their managers can do this as well. The archetypes include things like shapers, so this can be people in leadership positions or in arm's-length bodies. We've got drivers, so this can be your CIOs and CCIOs. Creators, so these are people that are actually creating some of this stuff: engineers, data scientists. Then we've got embedders, so these are people actually embedding some of these things into the various systems, so IT teams and so forth. And then we've got the users as well.
So people actually using the technologies. And it is possible that you can come under one or more of these different archetypes at different points. Next slide please. We also used something called Bloom's digital taxonomy. For any educators out there, you're probably quite familiar with Bloom's: it's quite a popular framework that's often used in education, and this is a digital version of that framework. We mapped all of the different capability statements onto Bloom's taxonomy as well, and it really involves moving from lower-order thinking skills through to higher-order thinking skills. At the lower end you've got things like remembering and basic understanding, moving through to application, analysing, evaluating and then creating. So we use Bloom's taxonomy across the framework and through the different sections as well. Next slide please. The framework itself is split into a number of key domains, and these include things like digital implementation; digital health for patients and the public; ethical, legal and regulatory considerations; human factors; health data management; and artificial intelligence. Next slide please. A number of these domains also have sub-domains, so you can see there, for example, that they break down further. AI includes things like robotics; we've got things like management and leadership under human factors, and ethics and regulation under the legal issues, and so forth. And inside each of these, we've got a number of individual capability statements. Next slide please. So under each of these domains and sub-domains, we've got a number of statements split into four levels. They're split into four levels to make this compatible with the original digital literacy framework, so it's a familiar structure, and the levels really just infer increasing complexity or difficulty: level one is going to be easier than level four.
And then within each of these levels, you've got the actual capability statements themselves, and these are mapped onto those different archetypes that you can see at the bottom there. So that's a quick whistle-stop tour through how we designed the framework and what the framework consists of, and I'll pass back to Hatim.
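[Editor's note: the structure Alan describes, domains containing sub-domains, each holding capability statements at four levels mapped onto archetypes, can be sketched as a simple data model. This is purely an illustration of the shape of the framework; the example statements and archetype mappings below are invented placeholders, not the framework's actual content.]

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    statement: str
    level: int                                     # 1 (easiest) to 4 (hardest)
    archetypes: set = field(default_factory=set)   # shapers, drivers, creators, embedders, users

@dataclass
class Domain:
    name: str
    capabilities: list = field(default_factory=list)

def capabilities_for(domains, archetype, max_level=4):
    """Return the statements relevant to one self-identified archetype."""
    return [
        cap.statement
        for dom in domains
        for cap in dom.capabilities
        if archetype in cap.archetypes and cap.level <= max_level
    ]

# Hypothetical example entries (not the framework's real statements)
ai = Domain("Artificial intelligence", [
    Capability("Describe what machine learning is", 1, {"users", "embedders"}),
    Capability("Evaluate an AI tool before deployment", 3, {"embedders", "drivers"}),
])
print(capabilities_for([ai], "embedders", max_level=2))
# → ['Describe what machine learning is']
```

Because statements carry both a level and a set of archetypes, the same capability list can be filtered per role without maintaining a separate framework for every NHS job title, which is the point of the archetype approach.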
Hatim Abdulhussein: Thanks, Alan. Our key message here is: great, we've got a framework, we've got an idea at a very early stage of what these capabilities might be. But how do we make sure that, one, it's sustainable, and two, we get the impact we need, in terms of me being in my clinic room as a GP having the skills I would need to be able to work with the technologies that I'm interacting with? So the first thing to say is that technology is fast adapting, and in our framework we've done our best to make sure that we're technology agnostic, but we need to make sure that we continue to keep live to advancements and developments in this area. So we're going to be doing some work to make sure we have a mechanism in place to continue to review and refresh the capabilities within the framework, as well as building new areas as things emerge in policy and in healthcare. The second part is that we want to empower individual learners to be able to use this framework, so it's about embedding it into existing Health Education England learning platforms or tools, such as the Learning Hub, so that individuals can really measure their own learning and their own aspirations for where they want to get to, and that will then drive them forward in terms of what kind of skills they develop based on the material out there. And the final part is to make sure we're working with the educational bodies, people like Sonia, who's working within a higher education institution, or our royal colleges or our professional regulators, to be able to support the educational reform that we need as a result of the learning we have developed over the past year and a half in developing this framework.
And so that we know that when I am entering my GP training, I have it quite clearly within my remit to develop these skills naturally within the capabilities that I need to build as part of becoming a GP. I hope that's been a helpful overview of the framework, and I'll pass it back now over to Adrian for the discussion.
Adrian Brooke: Thanks Hatim, and thanks Alan, for a kind of lightning tour through the rationale, background, development and deployment of the framework. So thank you very much. What we'd like to move on to now is a discussion on how we can implement that framework in undergraduate and postgraduate training. So I'm going to turn to Hatim and Alan and Sonia. We have got this funny triad between, if you like, the individual; the framework; the places individuals are in, for example postgraduate or undergraduate courses; and the changing landscape as well. So we've got lots of moving targets. And of course, we've got a regulatory framework to navigate as well, because healthcare is a highly regulated field for obvious and very good reasons, which may not always be quite as adaptive, I would imagine. I don't know if anyone would like to comment on some of the difficulties that that throws up, maybe around assessments and so on.
Hatim Abdulhussein: I can go first. When we kicked off this piece of work, I think we made it very clear at the start that we needed to be engaging with educational bodies right from the start, to help them understand, one, why we're doing this, and two, how they might use the product at the end. One early example of where that's been in effect is the British Institute of Radiology. We did a piece of work in January of last year that looked at AI and data-driven technologies in the NHS and what workforce groups they affect; at the top of that tree we saw radiologists, and near the top healthcare scientists as well. And so, in further conversations off the back of that with the British Institute of Radiology, we were able to say: look, this is going to be really important for your membership. It's going to be really important for those working in the professional groups that you're responsible for. What can we do to enable the learning that these groups need to have to be able to work with these technologies? And so we've got a webinar series and learning materials that are being developed by the British Institute of Radiology, launching at their AI Annual Congress in a few months' time. The key is to find the bodies that really value the importance of this and are looking to work with us to build some of that proof of value for the learning in this space.
Adrian Brooke: Thank you. So it sounds like some colleges are acknowledging this, and we sometimes say in education that assessment drives learning; therefore, if you're going to be asked about it in the exam, that's quite a powerful driver. Clearly, a lot of the workforce is not in training or education but is post-training, as it were, in service roles, but still needs to know, so the, if you like, examination pressure to make you learn is slightly less urgent. But I'm just wondering, for example, about things like finals for undergraduates. What's the inclusion of the digital agenda in that, and how might this framework relate to it? Sonia, can you comment? I know you're a GP, but I'm thinking of things like licensing exams and such.
Sonia Kumar: Yeah.
Adrian Brooke: Like that, so.
Sonia Kumar: Yeah, I've been involved in medical education for quite a number of years. I have to say, first off, I'm really excited by this, because it's a very clear outline of the domains that you need to consider with digital health technologies. But equally, I'm also quite worried about how the health service is moving at breakneck speed in how we're adopting digital technologies, and indeed how society is as well. We all know that there's Google, there are wearables, there are apps; digital health is part of our everyday lives. Yet when you look at the training needs and how it's being integrated into undergraduate curricula, and that's across the health professions, and postgraduate curricula, you do start to think that actually digital health, at best, is sometimes mentioned; it isn't a strong theme. I think one of the really beautiful ways of highlighting this is the medical licensing exam, which comes in for medical students in 2024, and doesn't really mention digital health, even though it does have an area around capabilities. I did a bit of a look yesterday, and I was putting in words like technology, digital, remote consulting, anything that could encapsulate what we're talking about today, and it just isn't reflected. And that's new; that hasn't even been launched yet. That's coming out in 2024. So that disconnect, between what society is moving ahead with, what the NHS and HEE are moving ahead with, and yet how educational bodies, undergraduate and postgraduate, are somehow lagging behind, I think will be a problem not only for dissemination of this framework; the bigger thing is actually how we are supporting our patients. Rather like you, Hatim, I remember a patient coming in with their genome profile, and I had a student in with me.
You know, I was totally out of my depth and had to counsel the patient about their risk for various conditions. So not only is there a training need for our pipeline, our students; there's a huge training need for our trainers. Who is going to be teaching our students all of these six domains around digital health? I don't want to use the word emergency, but I do think there is a digital health emergency that we need to address.
Adrian Brooke: Thank you so much. That's a really powerful call to action, isn't it, that we need to catch up across the system. Maybe it reflects a wider societal issue, where we've got the kind of inexorable and ever-quickening march of technology, and across society we struggle to keep up and are playing catch-up with it. And, if you like, this is one aspect of that within medical education, or healthcare education and practice. So I think that's a really powerful observation. And we have got this strange situation, have we not, and I'd be interested to hear people's comments on this, that everything is moving really quite rapidly. Normally a lot of healthcare knowledge and understanding is held behind a bit of a mystic shroud of learning, isn't it? There's this aspect of the doubling time of medical knowledge: it used to be 50 years, and then 25 years, and then ten years, and I think it's currently at about 70 days and shrinking. But technology is often released in a commercial setting first and then adapted for healthcare, rather than the other way around. Actually, if you like, our public are way ahead of us in terms of their use, and often their sophistication, certainly for some parts of the population. So I think that's another challenge. How do you think the framework will help our healthcare workforce, if you like, map their progress and their learning journey in a way that equips them to meet that challenge? Perhaps I'll ask Alan, because you described the construction of the framework.
Alan Davies: Thanks. Actually, I think it has the potential to do that, definitely. We've taken parts of the framework and mapped them against our new clinical data science programme, for example, so we're trying to embed these things in some of the postgraduate work that we're doing. We've also got a lot going on in nursing at the University of Manchester, in particular modules and courses. A lot of them tend to be postgraduate focused, though, because there's a very crowded curriculum in a lot of these medical professions; in medicine and nursing they're always putting more and more things in, and obviously digital is very important, but often we're seeing that it's maybe not getting the attention it deserves. Some people are also trying to embed it into normal practice and put it into other units and other things, which I think is a good idea; it's another way to embed some of the digital stuff in there as well. So we're seeing more and more adoption of these things, and it can be incorporated into other modules and into interdisciplinary learning as well, when we're working with other professional groups, because that's what happens in reality: you're going to use this technology a lot to communicate with other groups and other departments. And really we need to start embedding this early on in the undergraduate and postgraduate curriculum. So I think definitely having that framework, and the ability for people to look at it and see what those requirements might be, certainly gives educators something they can start to work with, and start to make sure they're including some of those main elements.
Adrian Brooke: So early adoption, dissemination and uptake are key themes, I think, coming out of your answer. And how might you see that, for example, Hatim or Sonia, in GP training, given that that's both your clinical backgrounds? Is that something you've seen or heard? I don't want to make presumptions, but I suspect Hatim is slightly closer to training than Sonia, though maybe not by very much. But from your point of view, Hatim, have you seen that, or Sonia, have you seen that in practice?
Hatim Abdulhussein: My reflection from my training is that I think we're on a journey here, similar to the one we were on with leadership and with quality improvement, which are areas that have fallen quite naturally into the GP portfolio and the GP workplace-based assessments. When I was training, whilst there wasn't a specific section around digital, I made sure that I did a lot of reflection on the way that I used digital tools in the way I interacted with my patients. I spent a lot of time thinking about: did that actually make things better? Did that help the case, or did it actually make things worse? Was that the right modality to choose to communicate with that patient, and was I excluding them from care by using that modality? I reflected on these things quite naturally, purely because it was important to me. But I think we do need to create the conditions within the portfolio to support people to do that reflection and to understand this better, because ultimately this is all about safe, ethical patient care, and to be able to deliver safe, ethical patient care you need to be competent in working with the tools that you're working with and understand their strengths and their limitations.
Sonia Kumar: Just adding from an undergraduate perspective: the evidence around how you teach digital health, how you actually embed this in curricula, is quite sparse; I wrote a paper around this. So I think there's a real gap there: how do we actually get this information, and the skills and values around digital inclusion, out to our students? Clearly, PowerPoint is not going to do it. Teaching our students all of this in lectures isn't going to do it. One thought that I have, and this is bringing in my experience around community-based learning and being a GP, and also building on what you said, Hatim, around quality improvement, comes from something we did previous to my role at the University of Leeds. I was at Imperial College London for 10 years, and one thing we did there is that year three medical students did what we call Community Action Projects, and we focused these around hot topics such as COVID vaccine hesitancy. So one thought is that students could do quality improvement projects with communities, in which they learn the knowledge base and the skills base around digital health, but they do that through working with communities, upskilling communities in digital literacy. So you have a double win there: not only are the students learning, but they're learning through service. Because I do think we need to think about the training needs not just of our healthcare professionals, but also the gaps for our patients. We need to empower them, so that when they are coming in with information they've been able to do a little appraisal of that information themselves, and so that they're not spending huge amounts of money and time on digital health technologies that may not be best for their health.
Adrian Brooke: Thank you, Sonia. That's a really helpful insight, actually. We do have a question, and thank you for the questions and comments coming through on the chat. We have a question from Jane Daily, posted at 12:29 for members of the panel who want to look at it before I fire it at you. It says: digital first will only be integrated or embedded if workforce contracts and the rewards and recognition system are revolutionised. How does this align with critical and strategic workforce planning? I've got a horrible feeling, as I read that, that it might come back my way, but does anyone want to start off with a response?
Hatim Abdulhussein: Yeah, I can kick us off, and actually it would be great to have your views on that as well. I guess the way I see this is that there's a particular group we haven't necessarily focused on just now. We've been talking about undergraduate and postgraduate training, but actually there's a whole group around continuing professional development, people out there working in the NHS, who will equally need to have these skills looked at and be supported to keep their skills up to date, or to develop their skills where gaps lie. And I think the key here is the culture of an organisation: top-down leadership saying that it is important to develop the skills in this area, and making sure some of these things are built into annual appraisals. So that at some point you can look at your digital literacy, use something like the digital self-assessment tool that we're developing at Health Education England and piloting in the north of the country, to be able to say: where am I right now and where do I need to go? And then have a really open and frank conversation with your line manager about how you develop those skills and why it's important that you develop them. If you have all of that happening naturally within an organisation, you're going to be more digitally mature as an organisation, and so it's important that we work with providers to enable that.
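[Editor's note: the "where am I right now and where do I need to go" conversation Hatim describes boils down, in principle, to comparing a self-assessed level against a target level per framework domain. The sketch below is a minimal illustration of that idea only; the domain names and levels are hypothetical examples, and the actual HEE self-assessment tool's scoring is not described in this webinar.]

```python
def learning_gaps(current, target):
    """Report domains where the self-assessed level falls short of the target.

    current and target map domain name -> level (1-4, as in the framework).
    """
    return {
        domain: target[domain] - level
        for domain, level in current.items()
        if target.get(domain, 0) > level
    }

# Hypothetical self-assessment for one learner
current = {"Health data management": 2, "Artificial intelligence": 1, "Human factors": 3}
target  = {"Health data management": 3, "Artificial intelligence": 3, "Human factors": 3}
print(learning_gaps(current, target))
# → {'Health data management': 1, 'Artificial intelligence': 2}
```

A report like this is the kind of concrete artefact that could anchor the appraisal conversation with a line manager that Hatim mentions.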
Adrian Brooke: Thanks, Hatim. I think workforce planning is really quite a complex thing, isn't it? There's short, medium and long term planning, and some of that long term planning assumes, or if you like has foresight, that there will be a great deal of digital technological change. And yet it can't be exact in articulating what that will look like, or how it affects the things workforce planning often cycles around: productivity, your workforce requirements, your learning requirements even. So it becomes really quite an inexact science at that stage, and as we know, current progress is not always a good predictor of future growth in an area. So while it sounds like it should be quite easy, it's actually very, very difficult to do accurately beyond very broad assumptions. I think that's one of the issues, so it's a really good question, highlighting some of the difficulties in trying to do that. I think reward is a really useful local example of how you can reward your workforce for training and pursuing that knowledge, competency and capability journey for digital. And we know, for example, there are areas of practice, take the example of diagnosis of stroke, where AI technologies for imaging are used to diagnose strokes which are amenable to intervention, that's my understanding of the technology being used. That's grown from about 10% uptake a couple of years ago to about 70% of units now using it. So there's rapid growth there that would have been quite hard to predict, and it's incredibly welcome. And some of the other technological and learning advancements, which require greater interplay with
individual skill, if you like, might take a bit longer, and of course need guarantees and regulation, because you don't want to be doing robotic surgery on people if you're not properly qualified to undertake that procedure, for example; a very simplistic view. So I think, Alan, you have your hand up, so please do come in.
Alan Davies: I think Hatim was first.
Hatim Abdulhussein: No, that's fine, I took mine down. Go for it.
Alan Davies: I was just going to say that another thing I think is quite important with this is that we talk a lot about digital literacy, but as the technologies get more advanced, they're often closely associated with data. So there's this concept of data literacy as well: if you're not putting the right data in, or not doing that in the right way, obviously what you get out can be affected. So I think another key thing is having access to data for people to learn from, and to learn how to use data. It's not just the tools that we need to teach people; it's the data that goes into the tools, and how that's collected and maintained, as it were. And we often have trouble in academia getting access to real data sets to train people on, so we're looking at things like synthetic data, and using things like electronic health record systems with fake data. But again, the sooner we can get people using some of these tools, and the data that's associated with them, and getting them comfortable with using data, that's going to help as well in this area, I think.
Adrian Brooke: Right, so there's a really good question from Catherine Worley, at 12:34 for the timestamp-aware amongst the panel, which says: do we need to upskill the trainers first? You can't teach effectively something you don't understand yourself. Which is a fantastic question, Catherine, and incredibly true. So who would like to have a go at answering that? Hatim, I saw the ghost of a nod there, so that means it's you.
Hatim Abdulhussein: I'd love to hear Sonia's opinion on this, as a senior educator, in terms of, one, I think, and I'm hoping, we're all going to say yes, and two, how do we then do it?
Sonia Kumar: Well, I suppose just turning this on its head. This was pre-COVID, I think 2017 or 2018, or maybe around 2019; it was around the time of the Topol review. I set up a module for medical students called Digital Health Futures, and looking back, it wasn't particularly forward thinking, but it was on the basis of the Topol review, which was where we really started to embed some of this learning for medical students. And what became apparent, exactly as you say, Catherine, is that none of us really knew as much as the students. So that really is where the light bulb hit: I do wonder whether it is the new generation, our students, that will be upskilling us. Obviously we don't want to do complete reverse teaching, but I do think there is something about co-creating any curricular changes with our students. They are so completely savvy, not only with digital tech on their smartphones, but also in that there are a lot of students who are really, really excited about digital health and know a lot about it. So when we ran this module, the students were absolutely teaching us, and when we developed the module and presented some of our work at conferences, we were very much working alongside students. So I think the "how" has to be with the new generation, who have been brought up with digital education and digital health.
Adrian Brooke: Great. And of course, there's nothing to stop our educators from using the same capability and competency framework themselves, to plot their own journey and make sure they're teaching to the right level. As you say, you've got to understand a subject properly to be able to teach it well; I think that's very well observed. So we're just coming to the end, we've got the last couple of minutes, and there's one very quick question, which I'm hoping there will be a really quick answer to. It's about the digital literacy assessment, from Carrie O'Reilly: is this available for wider use?
Hatim Abdulhussein: Yeah, to my understanding, the digital self-assessment tool is currently being piloted and it's not open for wider use as of yet, but hopefully it will be soon, and I'll share a link in the chat to the website so that people can stay updated on its progress.
Adrian Brooke: That's brilliant, fantastic. There are lots of really interesting and insightful comments in the chat, and I can reassure you, as we approach the last minute of the webinar (I think we log off at 12:45), that a recording of the webinar will be made available on the DART-Ed web pages. We'll add a link in the chat, which I hope will come soon, and there's a Twitter channel for the NHS Digital Academy, so I think this conversation and the developments can be followed there on Twitter, and I hope the link can get posted into that as well. So I would really, really like to thank our panel today. I'd like to thank Sonia for her input and development of this; I'd like to thank Alan similarly, thank you so much; and I'd like to thank Hatim for really helping coordinate and drive a lot of this in HEE. Thank you so much. I'd also like to thank Beth and Emily, who you won't see on this webinar, but basically without their abilities to organise and corral the four of us into a room, albeit virtually, at this time, none of this would happen. Thank you so much for listening and tuning in. I hope we'll have further conversations and look forward to you all joining us in the future. Thank you, and good afternoon.
Media last reviewed: 3 May 2023
Next review due: 3 May 2024
Page last reviewed: 12 September 2022