Cognitive Challenges in Human-AI Collaboration: A Study on Trust, Errors, and Heuristics in Clinical Decision-Making

Research output: Book/Report › PhD thesis


Abstract

Artificial Intelligence (AI) has the potential to transform healthcare. Applications are far-reaching, from diagnosing and detecting disease through to planning treatments and surgeries. Yet curiously, while AI is now being integrated into diverse economic sectors such as finance, retail, and automotive, healthcare institutions have been slow to adopt it. The reasons for this slow uptake are multiple, but they relate to low trust in AI among clinicians and to a conflict with the prevailing culture of evidence-based medicine, whereby physicians critically engage in diagnostic discourse to reach clinical decisions. There is a growing shift toward explainable AI (XAI) systems that promise to transform the opaque "black-box" into a more interpretable "glass-box."
This thesis aims to develop and test a framework for understanding how clinicians collaborate with AI and XAI. In so doing, I aim to move beyond common characterizations of "AI aversion" or "AI appreciation," which have been used to describe whether clinicians engage with AI, and instead to understand the cognitive underpinnings of clinicians' engagement with AI. I further seek to understand when AI collaboration is effective, leading to more accurate medical decisions, and when it worsens performance, leading to more or new errors. To do so, I perform a mixed-methods study of clinician-AI collaboration dynamics, with a focus on trust, errors, and heuristics.
Original language: English
Place of Publication: Frederiksberg
Publisher: Copenhagen Business School [PhD]
Number of pages: 203
ISBN (Print): 9788775683277
ISBN (Electronic): 9788775683284
DOIs
Publication status: Published - 2025
Series: PhD Series
Number: 04.2025
ISSN: 0906-6934
