Responsible AI: Concepts, Critical Perspectives and an Information Systems Research Agenda

Polyxeni Vassilakopoulou, Elena Parmiggiani, Arisa Shollo, Miria Grisot

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Being responsible for Artificial Intelligence (AI), harnessing its power while minimising risks for individuals and society, is one of the greatest challenges of our time. A vibrant discourse on Responsible AI is developing across academia, policy making and corporate communications. In this editorial, we demonstrate how the different literature strands intertwine but also diverge, and we propose a comprehensive definition of Responsible AI as the practice of developing, using and governing AI in a human-centred way to ensure that AI is worthy of being trusted and adheres to fundamental human values. This definition clarifies that Responsible AI is not a specific category of AI artifacts with special properties or the capacity to undertake responsibilities; humans are ultimately responsible for AI, for its consequences and for controlling AI development and use. We explain how the four papers included in this special issue manifest different Responsible AI practices and synthesise their findings into an integrative framework that spans business models, services/products, design processes and data. We suggest that IS research can contribute socially relevant knowledge about Responsible AI by providing insights on how to balance instrumental and humanistic AI outcomes, and we propose themes for future IS research on Responsible AI.
Original language: English
Article number: 3
Journal: Scandinavian Journal of Information Systems
Volume: 34
Issue number: 2
Pages (from-to): 89-112
Number of pages: 24
ISSN: 0905-0167
Publication status: Published - Dec 2022