Simple Changes to Content Curation Algorithms Affect the Beliefs People Form in a Collaborative Filtering Experiment

Jason W. Burton, Stefan M. Herzog, Philipp Lorenz-Spreen

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Content-curating algorithms provide a crucial service for social media users by surfacing relevant content, but they can also bring about harms when their objectives are misaligned with user values and welfare. Yet, potential behavioral consequences of this alignment problem remain understudied in controlled experiments. In a preregistered, two-wave, collaborative filtering experiment, we demonstrate that small changes to the metrics used for sampling and ranking posts affect the beliefs people form. Our results show observable differences in two types of outcomes within statisticized groups: belief accuracy and consensus. We find partial support for hypotheses that the recently proposed approaches of "bridging-based ranking" and "intelligence-based ranking" promote consensus and belief accuracy, respectively. We also find that while personalized, engagement-based ranking promotes posts that participants perceive favorably, it simultaneously leads those participants to form more polarized and less accurate beliefs than any of the other algorithms considered.
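To make the comparison concrete, here is a minimal sketch (not the authors' implementation) of how the three curation approaches named in the abstract differ only in the metric used to rank the same pool of posts; the field names and scores are hypothetical.

```python
# Hypothetical posts with per-metric scores: engagement ("likes"),
# approval across opposing groups ("cross_camp_approval"), and
# contributor accuracy ("accuracy").
posts = [
    {"id": 1, "likes": 90, "cross_camp_approval": 0.2, "accuracy": 0.4},
    {"id": 2, "likes": 30, "cross_camp_approval": 0.9, "accuracy": 0.6},
    {"id": 3, "likes": 50, "cross_camp_approval": 0.5, "accuracy": 0.9},
]

def rank_posts(posts, key):
    """Return posts sorted in descending order of the chosen metric."""
    return sorted(posts, key=key, reverse=True)

# Engagement-based ranking: surface whatever attracts the most engagement.
engagement = rank_posts(posts, key=lambda p: p["likes"])
# Bridging-based ranking: surface content approved across opposing camps.
bridging = rank_posts(posts, key=lambda p: p["cross_camp_approval"])
# Intelligence-based ranking: surface content from accurate contributors.
intelligence = rank_posts(posts, key=lambda p: p["accuracy"])

print([p["id"] for p in engagement])    # → [1, 3, 2]
print([p["id"] for p in bridging])      # → [2, 3, 1]
print([p["id"] for p in intelligence])  # → [3, 2, 1]
```

The point of the sketch is that a single-line change to the ranking key yields three different feeds from identical content, which is the kind of "small change" whose downstream effects on beliefs the experiment measures.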
Original language: English
Title of host publication: 46th Annual Meeting of the Cognitive Science Society (CogSci 2024)
Number of pages: 8
Place of publication: Seattle, WA
Publisher: Cognitive Science Society
Publication date: 2024
Pages: 811-818
Publication status: Published - 2024
Series: Proceedings of the Annual Meeting of the Cognitive Science Society
Volume: 46
ISSN: 1069-7977

Keywords

  • Algorithmic curation
  • Collaborative filtering
  • Belief updating
  • Engagement-based ranking
  • Bridging-based ranking
  • Intelligence-based ranking
