It Takes a Village: The Ecology of Explaining AI

Lauren Waardenburg, Attila Marton

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

Abstract

AI systems are commonly believed to aid in more objective decision-making and, eventually, to make objective decisions of their own. This belief, however, rests on fallacies that stem from an overly simplistic view of organizational decision-making. Drawing on an ethnography of the Dutch police, we demonstrate that making decisions with AI requires practical explanations that go beyond an analysis of the computational methods used to generate predictions to include an entire ecology of unbounded, open-ended interactions and interdependencies. In other words, explaining AI is ecological. Yet this typically goes unnoticed. We argue that this is highly problematic: only by acknowledging this ecology can we recognize that we are not, and never will be, making objective decisions with AI. If we continue to ignore the ecology of explaining AI, we end up reinforcing, and potentially further stigmatizing, existing societal categories.
Original language: English
Title of host publication: Research Handbook on Artificial Intelligence and Decision Making in Organizations
Editors: Ioanna Constantiou, Mayur P. Joshi, Marta Stelmaszak
Number of pages: 12
Place of publication: Cheltenham
Publisher: Edward Elgar Publishing
Publication date: 2024
Pages: 214–225
Chapter: 12
ISBN (Print): 9781803926209
ISBN (Electronic): 9781803926216
DOIs:
Publication status: Published - 2024
Series: Research Handbooks in Business and Management

Keywords

  • Artificial intelligence
  • Decision making
  • Explainable AI
  • Predictive policing
  • Ecology
  • Decision-paradox
