Augmenting Research Methods With Foundation Models and Generative AI

Sippo Rossi*, Matti Rossi, Raghava Rao Mukkamala, Jason Bennett Thatcher, Yogesh K. Dwivedi

*Corresponding author for this work

Research output: Contribution to journal › Editorial › peer-review

Abstract

Deep learning (DL) research has made remarkable progress in recent years. Natural language processing and image generation have made the leap from computer science journals to open-source communities and commercial services. Pre-trained DL models built on massive datasets, also known as foundation models, such as GPT-3 and BERT, have led the way in democratizing artificial intelligence (AI). However, their potential use as research tools has been overshadowed by fears of how this technology can be misused. Some have argued that AI threatens scholarship, suggesting that such tools should not replace human collaborators. Others have argued that AI creates opportunities, suggesting that AI-human collaboration could speed up research. Taking a constructive stance, this editorial outlines ways to use foundation models to advance science. We argue that DL tools can be used to create realistic experiments and to make specific types of quantitative studies feasible or safer by using synthetic rather than real data. All in all, we posit that the use of generative AI and foundation models as tools in information systems research is in its very early stages. Still, if we proceed cautiously and develop clear guidelines for using foundation models and generative AI, their benefits for science and scholarship will far outweigh their risks.
Original language: English
Article number: 102749
Journal: International Journal of Information Management
Volume: 77
Number of pages: 8
ISSN: 0268-4012
DOIs
Publication status: Published - Aug 2024

Bibliographical note

Published online: 12 January 2024.

Keywords

  • Foundation model
  • Generative AI
  • Experiments
  • Synthetic data
