What's the Problem? How Crowdsourcing Contributes to Identifying Scientific Research Questions

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

An increasing number of research projects successfully involve the general public (the crowd) in tasks such as collecting observational data or classifying images to answer scientists’ research questions. Although such crowd science projects have generated great hopes among scientists and policy makers, it is not clear whether the crowd can also meaningfully contribute to other stages of the research process, in particular the identification of research questions that should be studied. We first develop a conceptual framework that ties different aspects of “good” research questions to different types of knowledge. We then discuss potential strengths and weaknesses of the crowd compared to professional scientists in developing research questions, while also considering important heterogeneity among crowd members. Data from a series of online and field experiments have been gathered and are currently being analyzed to test individual- and crowd-level hypotheses about the underlying mechanisms that influence a crowd’s performance in generating research questions. Our results aim to advance the literature on crowd and citizen science as well as the broader literature on crowdsourcing and the organization of open and distributed knowledge production. Our findings have important implications for scientists and policy makers.
Original language: English
Title of host publication: Proceedings of the Seventy-ninth Annual Meeting of the Academy of Management
Editors: Guclu Atinc
Number of pages: 6
Place of publication: Briar Cliff Manor, NY
Publisher: Academy of Management
Publication date: 2019
Pages: 644-649
Article number: 115
DOIs: 10.5465/AMBPP.2019.115
Publication status: Published - 2019
Event: The Academy of Management Annual Meeting 2019: Understanding the Inclusive Organization - Boston, United States
Duration: 9 Aug 2019 – 13 Aug 2019
Conference number: 79
Internet address: http://aom.org/annualmeeting/

Conference

Conference: The Academy of Management Annual Meeting 2019
Number: 79
Country: United States
City: Boston
Period: 09/08/2019 – 13/08/2019
Internet address: http://aom.org/annualmeeting/
Series: Academy of Management Proceedings
ISSN: 2151-6561

Cite this

Beck, S., Brasseur, T., Poetz, M. K., & Sauermann, H. (2019). What's the Problem? How Crowdsourcing Contributes to Identifying Scientific Research Questions. In G. Atinc (Ed.), Proceedings of the Seventy-ninth Annual Meeting of the Academy of Management (pp. 644-649). [115] Briar Cliff Manor, NY: Academy of Management. (Academy of Management Proceedings). https://doi.org/10.5465/AMBPP.2019.115
Beck, Susanne; Brasseur, Tiare; Poetz, Marion Kristin; Sauermann, Henry. / What's the Problem? How Crowdsourcing Contributes to Identifying Scientific Research Questions. Proceedings of the Seventy-ninth Annual Meeting of the Academy of Management. editor / Guclu Atinc. Briar Cliff Manor, NY: Academy of Management, 2019. pp. 644-649 (Academy of Management Proceedings).
@inproceedings{885b34d6c0114df5afbe2d520292173b,
  title = "What's the Problem? How Crowdsourcing Contributes to Identifying Scientific Research Questions",
  abstract = "An increasing number of research projects successfully involve the general public (the crowd) in tasks such as collecting observational data or classifying images to answer scientists’ research questions. Although such crowd science projects have generated great hopes among scientists and policy makers, it is not clear whether the crowd can also meaningfully contribute to other stages of the research process, in particular the identification of research questions that should be studied. We first develop a conceptual framework that ties different aspects of “good” research questions to different types of knowledge. We then discuss potential strengths and weaknesses of the crowd compared to professional scientists in developing research questions, while also considering important heterogeneity among crowd members. Data from a series of online and field experiments have been gathered and are currently being analyzed to test individual- and crowd-level hypotheses about the underlying mechanisms that influence a crowd’s performance in generating research questions. Our results aim to advance the literature on crowd and citizen science as well as the broader literature on crowdsourcing and the organization of open and distributed knowledge production. Our findings have important implications for scientists and policy makers.",
  author = "Beck, Susanne and Brasseur, Tiare and Poetz, {Marion Kristin} and Sauermann, Henry",
  editor = "Guclu Atinc",
  booktitle = "Proceedings of the Seventy-ninth Annual Meeting of the Academy of Management",
  publisher = "Academy of Management",
  address = "Briar Cliff Manor, NY",
  series = "Academy of Management Proceedings",
  pages = "644--649",
  year = "2019",
  doi = "10.5465/AMBPP.2019.115",
  language = "English",
}

Beck, S, Brasseur, T, Poetz, MK & Sauermann, H 2019, What's the Problem? How Crowdsourcing Contributes to Identifying Scientific Research Questions. in G Atinc (ed.), Proceedings of the Seventy-ninth Annual Meeting of the Academy of Management., 115, Academy of Management, Briar Cliff Manor, NY, Academy of Management Proceedings, pp. 644-649, Boston, United States, 09/08/2019. https://doi.org/10.5465/AMBPP.2019.115

What's the Problem? How Crowdsourcing Contributes to Identifying Scientific Research Questions. / Beck, Susanne; Brasseur, Tiare; Poetz, Marion Kristin; Sauermann, Henry.

Proceedings of the Seventy-ninth Annual Meeting of the Academy of Management. ed. / Guclu Atinc. Briar Cliff Manor, NY: Academy of Management, 2019. pp. 644-649, 115 (Academy of Management Proceedings).


TY  - GEN
T1  - What's the Problem?
T2  - How Crowdsourcing Contributes to Identifying Scientific Research Questions
AU  - Beck, Susanne
AU  - Brasseur, Tiare
AU  - Poetz, Marion Kristin
AU  - Sauermann, Henry
PY  - 2019
Y1  - 2019
AB  - An increasing number of research projects successfully involve the general public (the crowd) in tasks such as collecting observational data or classifying images to answer scientists’ research questions. Although such crowd science projects have generated great hopes among scientists and policy makers, it is not clear whether the crowd can also meaningfully contribute to other stages of the research process, in particular the identification of research questions that should be studied. We first develop a conceptual framework that ties different aspects of “good” research questions to different types of knowledge. We then discuss potential strengths and weaknesses of the crowd compared to professional scientists in developing research questions, while also considering important heterogeneity among crowd members. Data from a series of online and field experiments have been gathered and are currently being analyzed to test individual- and crowd-level hypotheses about the underlying mechanisms that influence a crowd’s performance in generating research questions. Our results aim to advance the literature on crowd and citizen science as well as the broader literature on crowdsourcing and the organization of open and distributed knowledge production. Our findings have important implications for scientists and policy makers.
UR  - https://sfx-45cbs.hosted.exlibrisgroup.com/45cbs?url_ver=Z39.88-2004&url_ctx_fmt=info:ofi/fmt:kev:mtx:ctx&ctx_enc=info:ofi/enc:UTF-8&ctx_ver=Z39.88-2004&rfr_id=info:sid/sfxit.com:azlist&sfx.ignore_date_threshold=1&rft.object_id=110978978188586&rft.object_portfolio_id=&svc.holdings=yes&svc.fulltext=yes
U2  - 10.5465/AMBPP.2019.115
DO  - 10.5465/AMBPP.2019.115
M3  - Article in proceedings
T3  - Academy of Management Proceedings
SP  - 644
EP  - 649
BT  - Proceedings of the Seventy-ninth Annual Meeting of the Academy of Management
A2  - Atinc, Guclu
PB  - Academy of Management
CY  - Briar Cliff Manor, NY
ER  -

Beck S, Brasseur T, Poetz MK, Sauermann H. What's the Problem? How Crowdsourcing Contributes to Identifying Scientific Research Questions. In Atinc G, editor, Proceedings of the Seventy-ninth Annual Meeting of the Academy of Management. Briar Cliff Manor, NY: Academy of Management. 2019. p. 644-649. 115. (Academy of Management Proceedings). https://doi.org/10.5465/AMBPP.2019.115