What’s the Problem? How Crowdsourcing Contributes to Identifying Scientific Research Questions

Susanne Beck, Tiare-Maria Brasseur, Marion Poetz, Henry Sauermann

Research output: Contribution to conference › Paper › Research › peer-review


An increasing number of research projects successfully involve the general public (the crowd) in tasks such as collecting observational data or classifying images to answer scientists’ research questions. Although such crowd science projects have generated great hopes among scientists and policy makers, it is not clear whether the crowd can also meaningfully contribute to other stages of the research process, in particular to identifying the research questions that should be studied. We first develop a conceptual framework that ties different aspects of “good” research questions to different types of knowledge. We then discuss potential strengths and weaknesses of the crowd relative to professional scientists in developing research questions, while also considering important heterogeneity among crowd members. Data from a series of online and field experiments have been gathered and are currently being analyzed to test individual- and crowd-level hypotheses about the mechanisms that influence a crowd’s performance in generating research questions. Our results aim to advance the literatures on crowd and citizen science as well as the broader literature on crowdsourcing and the organization of open and distributed knowledge production. Our findings have important implications for scientists and policy makers.
Original language: English
Publication date: 2019
Number of pages: 36
Publication status: Published - 2019
Event: DRUID19 Conference - Copenhagen Business School, Frederiksberg, Denmark
Duration: 19 Jun 2019 – 21 Jun 2019
Conference number: 41


Conference: DRUID19 Conference
Location: Copenhagen Business School


  • Crowd science
  • Open science
  • Scientific knowledge production
  • Experimental design
  • Problem finding
  • Crowdsourcing
