References
This page lists the references cited in this report.
- Joshi I, Morley J. Artificial Intelligence: How to get it right. Putting policy into practice for safe data-driven innovation in health and care. 2019:1-55. https://www.nhsx.nhs.uk/ai-lab/explore-all-resources/understand-ai/artificial-intelligence-how-get-it-right. Accessed February 28, 2022.
- Hardie T, Horton T, Willis M, Warburton W. Switched on. How Do We Get the Best out of Automation and AI in Health Care? 2021. doi:10.37829/HF-2021-I03
- AI Roadmap report and interactive dashboard - Health Education England. https://www.hee.nhs.uk/our-work/dart-ed/ai-roadmap. Accessed February 28, 2022.
- Spiegelhalter D. Should We Trust Algorithms? Harvard Data Sci Rev. January 2020:1-12. doi:10.1162/99608f92.cb91a35a
- Topol E. The Topol Review: Preparing the Healthcare Workforce to Deliver the Digital Future. 2019. https://topol.hee.nhs.uk/the-topol-review/. Accessed February 28, 2022.
- NHS. NHS Long Term Plan: Digital transformation. NHS England. https://www.longtermplan.nhs.uk/areas-of-work/digital-transformation/. Published 2019. Accessed February 28, 2022.
- National AI Strategy - GOV.UK. https://www.gov.uk/government/publications/national-ai-strategy. Accessed February 28, 2022.
- The National Strategy for AI in Health and Social Care - NHS AI Lab programmes - NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/ai-lab-programmes/the-national-strategy-for-ai-in-health-and-social-care/. Accessed February 28, 2022.
- Sinha S, Al Huraimel K. Transforming Healthcare with AI. In: Reimagining Businesses with AI; 2020:33-54. doi:10.1002/9781119709183.ch3
- Liu X, Keane PA, Denniston AK. Time to regenerate: the doctor in the age of artificial intelligence. J R Soc Med. 2018;111(4):113-116. doi:10.1177/0141076818762648
- How to build trust with Trusts on artificial intelligence - Med-Tech Innovation. https://www.med-technews.com/medtech-insights/ai-in-healthcare-insights/how-to-build-trust-with-trusts-on-artificial-intelligence_1/. Accessed February 28, 2022.
- Leslie D. Understanding artificial intelligence ethics and safety. 2019. doi:10.5281/zenodo.3240529
- Parikh RB, Teeple S, Navathe AS. Addressing Bias in Artificial Intelligence in Health Care. JAMA. 2019;322(24):2377-2378. doi:10.1001/jama.2019.18058
- Leslie D, Mazumder A, Peppin A, Wolters MK, Hagerty A. Does “AI” stand for augmenting inequality in the era of covid-19 healthcare? BMJ. 2021;372. doi:10.1136/bmj.n304
- UK to pilot world-leading approach to improve ethical adoption of AI in healthcare. GOV.UK. https://www.gov.uk/government/news/uk-to-pilot-world-leading-approach-to-improve-ethical-adoption-of-ai-in-healthcare. Accessed March 8, 2022.
- The multi-agency advice service (MAAS) (now known as the AI and Digital Regulations Service) - Regulating the AI ecosystem - NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/ai-lab-programmes/regulating-the-ai-ecosystem/the-multi-agency-advice-service-maas/. Accessed March 7, 2022.
- Digital Technology Assessment Criteria (DTAC) - Key tools and information - NHS Transformation Directorate. https://www.nhsx.nhs.uk/key-tools-and-info/digital-technology-assessment-criteria-dtac/. Accessed March 7, 2022.
- MHRA. Software and AI as a Medical Device Change Programme. https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/software-and-ai-as-a-medical-device-change-programme. Published 2021. Accessed March 7, 2022.
- HM Government. Designated standards: medical devices - GOV.UK. https://www.gov.uk/government/publications/designated-standards-medical-devices. Accessed March 7, 2022.
- NHS digital, data and technology standards - NHS Digital. https://digital.nhs.uk/about-nhs-digital/our-work/nhs-digital-data-and-technology-standards. Accessed March 7, 2022.
- IMDRF. International Medical Device Regulators Forum - Software as a Medical Device (SaMD): Key Definitions. 2013. https://www.imdrf.org/documents/software-medical-device-samd-key-definitions. Accessed March 7, 2022.
- CQC. The five key questions we ask. Care Quality Commission. https://www.cqc.org.uk/what-we-do/how-we-do-our-job/five-key-questions-we-ask. Published 2016. Accessed March 7, 2022.
- CQC. What we do - Care Quality Commission. https://www.cqc.org.uk/what-we-do. Published 2019. Accessed March 7, 2022.
- Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. npj Digit Med. 2021;4(1). doi:10.1038/s41746-021-00509-1
- Wall E, Stasko J, Endert A. Toward a Design Space for Mitigating Cognitive Bias in Vis. 2019 IEEE Vis Conf VIS 2019. 2019:111-115. doi:10.1109/VISUAL.2019.8933611
- Anwar R. Good medical practice. BMJ. 2003;327(7425):1213. doi:10.1136/bmj.327.7425.1213
- Smith H. Clinical AI: opacity, accountability, responsibility and liability. AI Soc. 2021;36(2):535-545. doi:10.1007/s00146-020-01019-6
- Hwang EJ, Park S, Jin KN, et al. Development and Validation of a Deep Learning-Based Automated Detection Algorithm for Major Thoracic Diseases on Chest Radiographs. JAMA Netw Open. 2019;2(3):e191095. doi:10.1001/jamanetworkopen.2019.1095
- Beede E, Baylor E, Hersch F, et al. A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy. In: Conference on Human Factors in Computing Systems - Proceedings. 2020. doi:10.1145/3313831.3376718
- HM Government. The medical devices regulations 2002. 2002;(618):1-40. https://www.legislation.gov.uk/uksi/2002/618/contents/made. Accessed March 7, 2022.
- Nagendran M, Chen Y, Lovejoy CA, et al. Artificial intelligence versus clinicians: Systematic review of design, reporting standards, and claims of deep learning studies in medical imaging. BMJ. 2020;368. doi:10.1136/bmj.m689
- Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digit Heal. 2019;1(6):e271-e297. doi:10.1016/S2589-7500(19)30123-2
- Interim guidance on incorporating artificial intelligence into the NHS Breast Screening Programme. Gov.uk. https://www.gov.uk/government/publications/artificial-intelligence-in-the-nhs-breast-screening-programme/interim-guidance-on-incorporating-artificial-intelligence-into-the-nhs-breast-screening-programme. Published 2021. Accessed March 7, 2022.
- Gille F, Jobin A, Ienca M. What we talk about when we talk about trust: Theory of trust for AI in healthcare. Intell Med. 2020;1-2:100001. doi:10.1016/j.ibmed.2020.100001
- NICE. Evidence standards framework for digital health technologies. 2019. https://www.nice.org.uk/about/what-we-do/our-programmes/evidence-standards-framework-for-digital-health-technologies. Accessed March 7, 2022.
- NICE. NICE META Tool. https://meta.nice.org.uk/. Published 2021. Accessed March 7, 2022.
- Collins GS, Dhiman P, Andaur Navarro CL, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. 2021;11(7):e048008. doi:10.1136/bmjopen-2020-048008
- Sounderajah V, Ashrafian H, Golub RM, et al. Developing a reporting guideline for artificial intelligence-centred diagnostic test accuracy studies: The STARD-AI protocol. BMJ Open. 2021;11(6):e047709. doi:10.1136/bmjopen-2020-047709
- Rivera SC, Liu X, Chan AW, Denniston AK, Calvert MJ. Guidelines for clinical trial protocols for interventions involving artificial intelligence: The SPIRIT-AI Extension. BMJ. 2020;370. doi:10.1136/bmj.m3210
- Liu X, Cruz Rivera S, Moher D, et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26(9):1364-1374. doi:10.1038/s41591-020-1034-x
- Vasey B, Clifton DA, Collins GS, et al. DECIDE-AI: new reporting guidelines to bridge the development-to-implementation gap in clinical artificial intelligence. Nat Med. 2021;27(2):186-187. doi:10.1038/s41591-021-01229-5
- Sounderajah V, Ashrafian H, Rose S, et al. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat Med. 2021;27(10):1663-1665. doi:10.1038/s41591-021-01517-0
- STANDING Together Working Group. STANDING together. 2021. https://www.datadiversity.org/. Accessed March 8, 2022.
- González-Gonzalo C, Thee EF, Klaver CCW, et al. Trustworthy AI: Closing the gap between development and integration of AI systems in ophthalmic practice. Prog Retin Eye Res. December 2021:101034. doi:10.1016/j.preteyeres.2021.101034
- A buyer’s guide to AI in health care - NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/explore-all-resources/adopt-ai/a-buyers-guide-to-ai-in-health-and-care/. Accessed March 8, 2022.
- CDDO. Algorithmic Transparency Standard. GOV.UK. https://www.gov.uk/government/publications/algorithmic-transparency-data-standard. Published 2021. Accessed March 7, 2022.
- Google Cloud Model Cards. https://modelcards.withgoogle.com/about. Accessed March 7, 2022.
- Sendak MP, Gao M, Brajer N, Balu S. Presenting machine learning model information to clinical end users with model facts labels. npj Digit Med. 2020;3(1):1-4. doi:10.1038/s41746-020-0253-3
- Leslie D. Explaining Decisions Made with AI. SSRN Electron J. 2022. doi:10.2139/ssrn.4033308
- A guide to good practice for digital and data-driven health technologies - GOV.UK. https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology. Accessed March 7, 2022.
- What Good Looks Like framework - What Good Looks Like - NHS Transformation Directorate. https://www.nhsx.nhs.uk/digitise-connect-transform/what-good-looks-like/what-good-looks-like-publication/. Accessed March 7, 2022.
- A guide to using artificial intelligence in the public sector - GOV.UK. https://www.gov.uk/government/publications/a-guide-to-using-artificial-intelligence-in-the-public-sector. Accessed March 7, 2022.
- Good Machine Learning Practice for Medical Device Development: Guiding Principles - GOV.UK. https://www.gov.uk/government/publications/good-machine-learning-practice-for-medical-device-development-guiding-principles. Accessed March 7, 2022.
- Medical Technologies Evaluation Programme - NICE guidance. https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance/nice-medical-technologies-evaluation-programme. Accessed March 7, 2022.
- Diagnostics Assessment Programme - NICE guidance - Our programmes. https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance/nice-diagnostics-guidance. Accessed March 7, 2022.
- HeartFlow FFRCT for estimating fractional flow reserve from coronary CT angiography - Guidance - NICE. https://www.nice.org.uk/guidance/mtg32. Accessed March 7, 2022.
- Zio XT for detecting cardiac arrhythmias - Guidance - NICE. https://www.nice.org.uk/guidance/mtg52. Accessed March 7, 2022.
- An Innovator’s Guide to the NHS. 2020. https://www.boehringer-ingelheim.co.uk/sites/gb/files/documents/innovators_guide.pdf. Accessed March 7, 2022.
- NICE. Medtech innovation briefings. https://www.nice.org.uk/about/what-we-do/our-programmes/nice-advice/medtech-innovation-briefings. Accessed March 7, 2022.
- NICE. The technologies - Artificial intelligence in mammography. https://www.nice.org.uk/advice/mib242/chapter/The-technologies. Accessed March 7, 2022.
- Principled Artificial Intelligence - Berkman Klein Center. https://cyber.harvard.edu/publication/2020/principled-ai. Published 2020. Accessed March 7, 2022.
- Government Digital Service. Data Ethics Framework. GOV.UK. https://www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-2020. Published 2020. Accessed March 7, 2022.
- WHO. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. 2021. http://apps.who.int/bookorders. Accessed March 7, 2022.
- Hesketh R. Trusted autonomous systems in healthcare A policy landscape review. 2021. doi:10.18742/pub01-062
- NHS AI Virtual Hub - NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/ai-lab-virtual-hub/. Accessed March 8, 2022.
- Dermatology digital playbook - Digital playbooks - NHS Transformation Directorate. https://www.nhsx.nhs.uk/key-tools-and-info/digital-playbooks/dermatology-digital-playbook/. Accessed March 7, 2022.
- NHS. Interoperability Toolkit - NHS Digital. https://digital.nhs.uk/services/interoperability-toolkit. Published 2021. Accessed March 7, 2022.
- Gaube S, Suresh H, Raue M, et al. Do as AI say: susceptibility in deployment of clinical decision-aids. npj Digit Med. 2021;4(1):1-8. doi:10.1038/s41746-021-00385-9
- Garcia-Vidal C, Sanjuan G, Puerta-Alcalde P, Moreno-García E, Soriano A. Artificial intelligence to support clinical decision-making processes. EBioMedicine. 2019;46:27-29. doi:10.1016/j.ebiom.2019.07.019
- van Baalen S, Boon M, Verhoef P. From clinical decision support to clinical reasoning support systems. J Eval Clin Pract. 2021;27(3):520-528. doi:10.1111/jep.13541
- NICE. Prostate cancer: diagnosis - PSA testing. NICE Clinical Knowledge Summaries (CKS). https://cks.nice.org.uk/topics/prostate-cancer/diagnosis/psa-testing/. Published 2017. Accessed February 28, 2022.
- Saraiya M, Kottiri BJ, Leadbetter S, et al. Total and percent free prostate-specific antigen levels among U.S. men, 2001-2002. Cancer Epidemiol Biomarkers Prev. 2005;14(9):2178-2182. doi:10.1158/1055-9965.EPI-05-0206
- Kelly CJ, Karthikesalingam A, Suleyman M, Corrado G, King D. Key challenges for delivering clinical impact with artificial intelligence. BMC Med. 2019;17(1):1-9. doi:10.1186/s12916-019-1426-2
- Magrabi F, Ammenwerth E, McNair JB, et al. Artificial Intelligence in Clinical Decision Support: Challenges for Evaluating AI and Practical Implications. Yearb Med Inform. 2019;28(1):128-134. doi:10.1055/s-0039-1677903
- Myers PD, Ng K, Severson K, et al. Identifying unreliable predictions in clinical risk models. npj Digit Med. 2020;3(1):1-8. doi:10.1038/s41746-019-0209-7
- Benda NC, Novak LL, Reale C, Ancker JS. Trust in AI: why we should be designing for APPROPRIATE reliance. J Am Med Inform Assoc. 2021;29(1):207-212. doi:10.1093/jamia/ocab238
- Shen J, Zhang CJP, Jiang B, et al. Artificial intelligence versus clinicians in disease diagnosis: Systematic review. JMIR Med Informatics. 2019;7(3):e10010. doi:10.2196/10010
- Lee MH, Siewiorek DP, Smailagic A. A human-AI collaborative approach for clinical decision making on rehabilitation assessment. Conf Hum Factors Comput Syst - Proc. 2021. doi:10.1145/3411764.3445472
- Asan O, Bayrak AE, Choudhury A. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res. 2020;22(6):e15154. doi:10.2196/15154
- Petkus H, Hoogewerf J, Wyatt JC. What do senior physicians think about AI and clinical decision support systems: Quantitative and qualitative analysis of data from specialty societies. Clin Med J R Coll Physicians London. 2020;20(3):324-328. doi:10.7861/clinmed.2019-0317
- Westbrook JI, Raban MZ, Walter SR, Douglas H. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: A prospective, direct observation study. BMJ Qual Saf. 2018;27(8):655-663. doi:10.1136/bmjqs-2017-007333
- Larasati R, De Liddo A, Motta E. AI Healthcare System Interface: Explanation Design for Non-Expert User Trust. CEUR Workshop Proc. 2021;2903.
- Macrae C. Governing the safety of artificial intelligence in healthcare. BMJ Qual Saf. 2019;28(6):495-498. doi:10.1136/bmjqs-2019-009484
- Blease C, Bernstein MH, Gaab J, et al. Computerization and the future of primary care: A survey of general practitioners in the UK. PLoS One. 2018;13(12):e0207418. doi:10.1371/journal.pone.0207418
- PwC. What doctor? 2017;(June):1-50. http://medicalfuturist.com/. Accessed February 28, 2022.
- Ipsos MORI. Public views of Machine Learning: Findings from public research and engagement. 2017. http://www.ipsos-mori.com/terms. Accessed February 28, 2022.
- Holm S. Handle with care: Assessing performance measures of medical AI for shared clinical decision-making. Bioethics. 2022;36(2):178-186. doi:10.1111/bioe.12930
- Bond RR, Mulvenna M, Wang H. Human centered artificial intelligence: Weaving UX into algorithmic decision making. RoCHI 2019 Int Conf Human-Computer Interact. 2019:2-9. https://hai.stanford.edu. Accessed March 8, 2022.
- Buçinca Z, Malaya MB, Gajos KZ. To Trust or to Think: Cognitive Forcing Functions Can Reduce Overreliance on AI in AI-assisted Decision-making. Proc ACM Human-Computer Interact. 2021;5(CSCW1). doi:10.1145/3449287
- Cai CJ, Winter S, Steiner D, Wilcox L, Terry M. “Hello AI”: Uncovering the onboarding needs of medical practitioners for human–AI collaborative decision-making. Proc ACM Human-Computer Interact. 2019;3(CSCW). doi:10.1145/3359206
- Chari S, Seneviratne O, Gruen DM, Foreman MA, Das AK, McGuinness DL. Explanation Ontology: A Model of Explanations for User-Centered AI. In: Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). Vol 12507 LNCS. Springer Science and Business Media Deutschland GmbH; 2020:228-243. doi:10.1007/978-3-030-62466-8_15
- Guo C, Pleiss G, Sun Y, Weinberger KQ. On calibration of modern neural networks. In: 34th International Conference on Machine Learning, ICML 2017. Vol 3. 2017:2130-2143.
- Zhang Y, Liao QV, Bellamy RKE. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. In: FAT* 2020 - Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency. 2020:295-305. doi:10.1145/3351095.3372852
- Cutillo CM, Sharma KR, Foschini L, et al. Machine intelligence in healthcare—perspectives on trustworthiness, explainability, usability, and transparency. npj Digit Med. 2020;3(1):1-5. doi:10.1038/s41746-020-0254-2
- Watson D. The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. Minds Mach. 2019;29(3):417-440. doi:10.1007/s11023-019-09506-6
- Winkler JK, Fink C, Toberer F, et al. Association between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition. JAMA Dermatology. 2019;155(10):1135-1141. doi:10.1001/jamadermatol.2019.1735
- Tjoa E, Guan C. A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI. IEEE Trans Neural Networks Learn Syst. 2021;32(11):4793-4813. doi:10.1109/TNNLS.2020.3027314
- Jin W, Li X, Hamarneh G. One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images. July 2021. https://arxiv.org/abs/2107.05047v1. Accessed February 28, 2022.
- Adebayo J, Gilmer J, Muelly M, Goodfellow I, Hardt M, Kim B. Sanity checks for saliency maps. In: Advances in Neural Information Processing Systems. Vol 2018-December. Neural information processing systems foundation; 2018:9505-9515. https://arxiv.org/abs/1810.03292v3. Accessed February 28, 2022.
- Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Heal. 2021;3(11):e745-e750. doi:10.1016/s2589-7500(21)00208-9
- Babic BB, Gerke S, Evgeniou T, Glenn Cohen I. Beware explanations from AI in health care the benefits of explainable artificial intelligence are not what they appear. Science. 2021;373(6552):284-286. doi:10.1126/science.abg1834
- Chen C, Li O, Tao C, Barnett AJ, Su J, Rudin C. This looks like that: Deep learning for interpretable image recognition. In: Advances in Neural Information Processing Systems. Vol 32. Neural information processing systems foundation; 2019. https://arxiv.org/abs/1806.10574v5. Accessed February 28, 2022.
- Yu KH, Kohane IS. Framing the challenges of artificial intelligence in medicine. BMJ Qual Saf. 2019;28(3):238-241. doi:10.1136/bmjqs-2018-008551
- Cho MK. Rising to the challenge of bias in health care AI. Nat Med. 2021;27(12):2079-2081. doi:10.1038/s41591-021-01577-2
- Zou J, Schiebinger L. Ensuring that biomedical AI benefits diverse populations. EBioMedicine. 2021;67. doi:10.1016/j.ebiom.2021.103358
- Centre for Data Ethics and Innovation. Review into bias in algorithmic decision-making. 2020.
- Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342
- Kamulegeya LH, Okello M, Bwanika JM, et al. Using artificial intelligence on dermatology conditions in Uganda: A case for diversity in training data sets for machine learning. bioRxiv. October 2019:826057. doi:10.1101/826057
- Wynants L, Van Calster B, Collins GS, et al. Prediction models for diagnosis and prognosis of covid-19: Systematic review and critical appraisal. BMJ. 2020;369:26. doi:10.1136/bmj.m1328
- Subbaswamy A, Adams R, Saria S. Evaluating Model Robustness and Stability to Dataset Shift. 2020;130. http://arxiv.org/abs/2010.15100. Accessed February 28, 2022.
- McLennan S, Fiske A, Celi LA, et al. An embedded ethics approach for AI development. Nat Mach Intell. 2020;2(9):488-490. doi:10.1038/s42256-020-0214-1
- Tatman R. Gender and Dialect Bias in YouTube’s Automatic Captions. In: EACL 2017 - Ethics in Natural Language Processing, Proceedings of the 1st ACL Workshop. 2017:53-59. doi:10.18653/v1/w17-1606
- Koenecke A, Nam A, Lake E, et al. Racial disparities in automated speech recognition. Proc Natl Acad Sci U S A. 2020;117(14):7684-7689. doi:10.1073/pnas.1915768117
- Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28(3):231-237. doi:10.1136/bmjqs-2018-008370
- The AI Ethics Initiative - NHS AI Lab programmes - NHS Transformation Directorate. https://www.nhsx.nhs.uk/ai-lab/ai-lab-programmes/ethics/. Accessed March 8, 2022.
- Ibrahim H, Liu X, Denniston AK. Reporting guidelines for artificial intelligence in healthcare research. Clin Exp Ophthalmol. 2021;49(5):470-476. doi:10.1111/ceo.13943
- McCradden MD, Joshi S, Anderson JA, Mazwi M, Goldenberg A, Shaul RZ. Patient safety and quality improvement: Ethical principles for a regulatory approach to bias in healthcare machine learning. J Am Med Inform Assoc. 2020;27(12):2024-2027. doi:10.1093/jamia/ocaa085
- Kliegr T, Bahník Š, Fürnkranz J. A review of possible effects of cognitive biases on interpretation of rule-based machine learning models. Artif Intell. 2021;295:103458. doi:10.1016/j.artint.2021.103458
- Hickman SE, Baxter GC, Gilbert FJ. Adoption of artificial intelligence in breast imaging: evaluation, ethical constraints and limitations. Br J Cancer. 2021;125(1):15-22. doi:10.1038/s41416-021-01333-w
- Stewart J, Sprivulis P, Dwivedi G. Artificial intelligence and machine learning in emergency medicine. EMA - Emerg Med Australas. 2018;30(6):870-874. doi:10.1111/1742-6723.13145
- Goddard K, Roudsari A, Wyatt JC. Automation bias: A systematic review of frequency, effect mediators, and mitigators. J Am Med Inform Assoc. 2012;19(1):121-127. doi:10.1136/amiajnl-2011-000089
- Braun M, Hummel P, Beck S, Dabrock P. Primer on an ethics of AI-based decision support systems in the clinic. J Med Ethics. 2021;47(12):E3. doi:10.1136/medethics-2019-105860
- Dymek C, Kim B, Melton GB, Payne TH, Singh H, Hsiao CJ. Building the evidence-base to reduce electronic health record-related clinician burden. J Am Med Inform Assoc. 2021;28(5):1057-1061. doi:10.1093/jamia/ocaa238
- Co Z, Holmgren AJ, Classen DC, et al. The tradeoffs between safety and alert fatigue: Data from a national evaluation of hospital medication-related clinical decision support. J Am Med Inform Assoc. 2020;27(8):1252-1258. doi:10.1093/jamia/ocaa098
- Medlock S, Wyatt JC, Patel VL, Shortliffe EH, Abu-Hanna A. Modeling information flows in clinical decision support: Key insights for enhancing system effectiveness. J Am Med Inform Assoc. 2016;23(5):1001-1006. doi:10.1093/jamia/ocv177
- Burton JW, Stein MK, Jensen TB. A systematic review of algorithm aversion in augmented decision making. J Behav Decis Mak. 2020;33(2):220-239. doi:10.1002/bdm.2155
- Young AT, Amara D, Bhattacharya A, Wei ML. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. Lancet Digit Heal. 2021;3(9):e599-e611. doi:10.1016/S2589-7500(21)00132-1
- De Silva D. Helping people share decision making. The Health Foundation. https://www.health.org.uk/publications/helping-people-share-decision-making. Published 2012. Accessed March 7, 2022.
- Triberti S, Durosini I, Pravettoni G. A “Third Wheel” Effect in Health Decision Making Involving Artificial Entities: A Psychological Perspective. Front Public Heal. 2020;8(April):1-9. doi:10.3389/fpubh.2020.00117
- Building a Smarter Health Care Workforce Using AI. AHA Cent Heal Innov. 2019. https://www.aha.org/system/files/media/file/2019/09/Market_Insights_AI_Workforce_2.pdf. Accessed March 7, 2022.