Abstract
Explainable AI (XAI) is gaining popularity as a solution to a pressing issue in medicine: "black-box" artificial intelligence (AI) models that are difficult to understand. XAI aims to make AI more understandable by explaining how it works, e.g., through human-understandable explanations. However, while prior research has found that such explanations must be adapted to the expert group being addressed, there is limited work on explanations and their effect on medical experts. To address this gap, we conducted an online experiment with medical experts (e.g., doctors, nurses; n=204) to investigate how explanations can be used to achieve a causal understanding and corresponding usage of AI. Our results contribute to the literature by identifying transparency and usefulness as powerful, previously unrecognized mediators. Additionally, we contribute to practice by showing how managers can use these mediators to improve the adoption of AI systems in medicine.
Original language | English |
---|---|
Title of host publication | International Conference on Information Systems, ICIS 2022: Digitization for the Next Generation |
Number of pages | 17 |
Publisher | Association for Information Systems |
Publication date | 2022 |
ISBN (Electronic) | 9781713893615 |
Publication status | Published - 2022 |
Externally published | Yes |
Event | The 43rd International Conference on Information Systems: ICIS 2022: Digitization for the Next Generation - Copenhagen, Denmark. Duration: 9 Dec 2022 → 14 Dec 2022. Conference number: 43. https://icis2022.aisconferences.org/ |
Conference
Conference | The 43rd International Conference on Information Systems: ICIS 2022 |
---|---|
Number | 43 |
Country/Territory | Denmark |
City | Copenhagen |
Period | 09/12/2022 → 14/12/2022 |
Internet address | https://icis2022.aisconferences.org/ |
Series | Proceedings of the International Conference on Information Systems |
---|---|
ISSN | 0000-0033 |
Keywords
- Causability
- Explainable AI
- Local explanations
- Medical explainable AI