The Governance & Behavioral Challenges of Generative Artificial Intelligence’s Hypercustomization Capabilities

Christoph M. Abels*, Ezequiel Lopez-Lopez, Jason W. Burton, Dawn L. Holford, Levin Brinkmann, Stefan M. Herzog, Stephan Lewandowsky

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Generative artificial intelligence (GenAI) is changing human–machine interactions and the broader information ecosystem. Much as social media algorithms personalize online experiences, GenAI applications can align with user preferences to customize the way individuals interact with information. However, through training, fine-tuning, and prompting, GenAI applications can introduce a new level of customization: hypercustomization. By dynamically tailoring responses to an individual’s explicit and implicit preferences, hypercustomization can reinforce biases, false beliefs, or misconceptions. As a result, it can heighten significant societal challenges, such as the spread of misinformation and political and social polarization. In this article, we explore the risks associated with hypercustomization and the governance and behavioral challenges that might impede effective risk mitigation. These challenges include a lack of transparency in GenAI applications, opacity of the nature of their interactions with users, users’ overreliance on these systems, and the inefficacy of warning messages. We also provide recommendations for overcoming these challenges.
Original language: English
Volume: 11
Issue number: 1
Pages (from-to): 22-32
ISSN: 2379-4607
DOIs
Publication status: Published - Apr 2025

Keywords

  • Generative artificial intelligence (GenAI)
  • Governance
  • Hypercustomization
  • Behavioral interventions
