Authors: Andy Spezzatti, Dennis Vetter, Inga Strümke, Julia Amann, Michelle Livne
DOI:
Keywords:
Abstract: The role of explainability in clinical decision support systems (CDSS) based on artificial intelligence (AI) raises important medical and ethical questions. Some see explainability as a necessity for CDSS, while others caution against it or against certain implementations of it. This leads to considerable uncertainty and leaves the usability and utility of explainability in AI-based CDSS contested. This paper reviews the key arguments for and against explainability in CDSS by applying them to a practical use case. We adopted a qualitative case study approach combined with a normative analysis using socio-technical scenarios. Our paper builds on the interdisciplinary assessment (via the Z-Inspection® process) of a black-box AI CDSS used in the emergency call setting to identify patients with life-threatening cardiac arrest. Specifically, the assessment informed the development of the two …