Monetization Could Corrupt Algorithmic Explanations

Travis Greene*, Sofie Goethals, David Martens, Galit Shmueli

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

Explainable artificial intelligence (XAI) aims to provide insights into the logic of automated decisions with the goal of promoting fairer, more transparent, and more trustworthy automated decision-making. Despite mounting regulatory pressure, changing consumer expectations, and a growing stream of XAI-related research, few consumer-facing applications of XAI exist. In anticipation of future XAI-enabled products and services, we use ethical foresight analysis to investigate the possible consequences of monetizing explanations. By developing a conceptual artifact we call an explanation platform, we analyze what could happen when digital advertising is fused with XAI. We explore the platform’s business and design logic, examine its potential social and ethical impact, and describe several plausible explanation manipulation scenarios and strategies. We find that while XAI monetization could incentivize industry adoption of XAI technology and expand algorithmic recourse across society, it could also lead to corrupted forms of explanations optimized for profit-driven objectives. Overall, our foresight analysis makes the case for the economic and technological feasibility of monetized XAI, but raises concerns about its desirability in liberal democratic societies.
Original language: English
Journal: AI & Society
Number of pages: 18
ISSN: 0951-5666
DOIs
Publication status: Published - 9 May 2025

Bibliographical note

Epub ahead of print. Published online: 09 May 2025.

Keywords

  • Digital platforms
  • Explainable AI (XAI)
  • AI ethics
  • Personalization
  • Data monetization
  • Advertising
