Abstract
The increasing adoption of machine learning (ML) and artificial intelligence (AI) in critical decision-making has intensified demands for explainability and transparency, particularly in high-risk domains such as finance. This dissertation investigates how explainable AI (xAI) can be designed and adapted to meet the diverse explainability needs of stakeholders in high-risk organizations through a longitudinal Action Design Research (ADR) study conducted in collaboration with a European financial institution implementing ML in transaction monitoring (TM) for anti-money laundering (AML).
The ADR study addresses key gaps in the explainable AI literature by providing empirical evidence of how stakeholders interact with xAI in organizational contexts. The systematic literature review reveals four central debates in xAI research and identifies a critical need for empirical studies examining stakeholder needs. The empirical investigation demonstrates how implementation modes (automation versus augmentation) fundamentally alter stakeholder information requirements and liability structures, challenging assumptions about universal explainability needs. The research introduces a novel proximity-responsibility framework that systematically explains how stakeholders' proximity to AI systems influences both their trust in these systems and their utilization of xAI. High-proximity stakeholders (data scientists, investigators) develop trust through operational familiarity and use xAI for optimization and decision support, while low-proximity stakeholders (compliance managers, auditors) rely on xAI primarily for oversight, documentation, and regulatory compliance.
Significantly, this dissertation challenges the prevailing explainability-trust hypothesis that dominates xAI literature by providing empirical evidence that trust influences the use of explainable AI rather than being produced by it. The research reconceptualizes trust and control as complementary coordination mechanisms and proposes a hierarchical conceptual framework that positions interpretability as a methodology for developing xAI that serves specific purposes, rather than as a means of generic trust generation.
The findings contribute theoretical frameworks for understanding human-AI collaboration in high-stakes environments, practical design principles for stakeholder-specific xAI implementation, and methodological guidance for conducting rigorous xAI research in organizational contexts. The study demonstrates that effective xAI design requires moving beyond one-size-fits-all approaches toward nuanced, stakeholder-specific solutions that recognize the complex interplay between organizational context, implementation choices, and human factors in AI adoption.
Keywords: artificial intelligence (AI), explainable artificial intelligence (xAI), machine learning (ML), stakeholder theory, human-AI collaboration, high-risk decision making, trust, organizational context, transaction monitoring (TM), anti-money laundering (AML)
| Original language | English |
|---|---|
| Place of publication | Frederiksberg |
| Publisher | Copenhagen Business School [Phd] |
| Number of pages | 276 |
| ISBN (Print) | 9788775683895 |
| ISBN (Electronic) | 9788775683901 |
| DOI | |
| Status | Published - 2025 |

| Series | PhD Series |
|---|---|
| Number | 35.2025 |
| ISSN | 0906-6934 |