A Systematic Review of Algorithm Aversion in Augmented Decision Making

Jason W. Burton*, Mari Klara Stein, Tina Blegind Jensen

*Corresponding author for this work

Research output: Contribution to journal › Review › Research › peer-review


Despite abundant literature theorizing societal implications of algorithmic decision making, relatively little is known about the conditions that lead to the acceptance or rejection of algorithmically generated insights by individual users of decision aids. More specifically, recent findings of algorithm aversion—the reluctance of human forecasters to use superior but imperfect algorithms—raise questions about whether joint human-algorithm decision making is feasible in practice. In this paper, we systematically review the topic of algorithm aversion as it appears in 61 peer-reviewed articles between 1950 and 2018 and follow its conceptual trail across disciplines. We categorize and report on the proposed causes and solutions of algorithm aversion in five themes: expectations and expertise, decision autonomy, incentivization, cognitive compatibility, and divergent rationalities. Although each of the presented themes addresses distinct features of an algorithmic decision aid, human users of the decision aid, and/or the decision making environment, apparent interdependencies are highlighted. We conclude that resolving algorithm aversion requires an updated research program with an emphasis on theory integration. We provide a number of empirical questions that can be immediately carried forth by the behavioral decision making community.
Original language: English
Journal: Journal of Behavioral Decision Making
Issue number: 2
Pages (from-to): 220-239
Number of pages: 20
Publication status: Published - Apr 2020

Bibliographical note

Published online 23 October 2019


Keywords

  • Algorithm aversion
  • Augmented decision making
  • Human-algorithm interaction
  • Systematic review
