Authors: Douglas Cirqueira, Dietmar Nedbal, Markus Helfert, Marija Bezbradica
DOI: 10.1007/978-3-030-57321-8_18
Keywords:
Abstract: Explainable Artificial Intelligence (XAI) develops technical explanation methods that enable interpretability for human stakeholders regarding why Artificial Intelligence (AI) and machine learning (ML) models provide certain predictions. However, the trust of those stakeholders in AI explanations is still an issue, especially for domain experts, who are knowledgeable about their domain but not about the inner workings of AI. Social and user-centric XAI research holds that it is essential to understand a stakeholder's requirements and provide explanations tailored to their needs, in order to enhance their trust in working with AI models. Scenario-based design and requirements elicitation can help bridge the gap between the social and operational aspects of a stakeholder early, before the adoption of an information system, by identifying real problems and practices and generating user requirements. Nevertheless, scenarios are rarely explored in XAI, particularly in fraud detection for supporting experts who work with AI models. We demonstrate the usage of scenario-based requirements elicitation for XAI in the fraud detection context and develop scenarios derived with experts in banking fraud. We discuss how the scenarios can be adopted to identify the explanations a user or expert deems appropriate for his daily operations when making decisions on reviewing fraudulent cases in banking. The generalizability of the scenarios is further validated through a systematic literature review in the domains of XAI and visual analytics for fraud detection.
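The abstract refers to explanation methods that tell a domain expert why a model flagged a prediction. As an illustration only, and not the paper's method, the sketch below shows the simplest form such a local explanation can take: per-feature contributions (coefficient times feature value) of a linear fraud classifier, similar in spirit to the output that XAI tools like LIME or SHAP present to fraud analysts. The feature names and synthetic data are assumptions made for the example.

```python
# Minimal sketch of a local explanation for a fraud prediction.
# Assumptions: illustrative feature names and synthetic transaction data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["amount_zscore", "foreign_country", "night_time", "new_device"]

# Synthetic transactions: the fraud label skews toward higher feature values.
X = rng.normal(size=(1000, 4))
y = (X @ np.array([1.5, 1.0, 0.5, 0.8]) + rng.normal(size=1000) > 1.0).astype(int)

model = LogisticRegression().fit(X, y)

# Explain one flagged transaction: each feature's contribution to the
# log-odds of the "fraud" class, sorted by magnitude for review.
x = X[0]
contributions = model.coef_[0] * x
proba = model.predict_proba([x])[0, 1]

print(f"P(fraud) = {proba:.2f}")
for name, c in sorted(zip(features, contributions), key=lambda t: -abs(t[1])):
    print(f"  {name:16s} {c:+.2f}")
```

An expert reviewing a fraud alert would read such output as "the transaction amount and the foreign-country flag drove this alert," which is exactly the kind of explanation whose required form and content the paper's scenario-based elicitation aims to uncover.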