Crowdsourcing Research Questions?

Leveraging the Crowd’s Experiential Knowledge for Problem Finding

Tiare-Maria Brasseur, Susanne Beck, Henry Sauermann, Marion Poetz

Research output: Contribution to conference › Conference abstract for conference › Research › peer-review

Abstract

Recently, both researchers and policy makers have become increasingly interested in involving the general public (i.e., the crowd) in the discovery of new science-based knowledge. Citizen science and crowd science projects (e.g., Foldit or Galaxy Zoo) have boomed, and global policy agendas (e.g., Horizon Europe) call for greater public engagement in science. At the same time, however, this approach has drawn criticism and doubt: science is complex, and laypeople often lack the knowledge base required for scientific judgments, relying instead on specialized experts, i.e., scientists (Scharrer, Rupieper, Stadtler & Bromme, 2017). Given these two perspectives, there is not yet consensus on what the crowd can do and what only researchers should do in scientific processes (Franzoni & Sauermann, 2014). Previous research demonstrates that crowds can be used efficiently and effectively in late stages of the scientific research process (i.e., data collection and analysis). We are interested in what crowds can actually contribute to research processes beyond data collection and analysis. Specifically, this paper aims to provide first empirical insights into how to leverage not only the sheer number of crowd contributors, but also their diversity in experience, for early phases of the research process (i.e., problem finding). In an online and a field experiment, we develop and test suitable mechanisms for facilitating the transfer of the crowd's experience into scientific research questions. In doing so, we address the following two research questions: 1. What factors influence crowd contributors' ability to generate research questions? 2. How do research questions generated by crowd members differ in quality from research questions generated by scientists?
There are strong claims about the significant potential of people with experiential knowledge, i.e., sticky problem knowledge derived from one's own practical experience and practices (Collins & Evans, 2002), to enhance the novelty and relevance of scientific research (e.g., Pols, 2014). Previous evidence that crowds with experiential knowledge (e.g., users in Poetz & Schreier, 2012) or "outsiders"/non-obvious individuals (Jeppesen & Lakhani, 2010) can outperform experts under certain conditions by bringing novel perspectives supports the assumption that the participation of non-scientists (i.e., crowd members) in scientific problem finding might complement scientists' lack of experiential knowledge. Furthermore, by bringing in exactly these new perspectives, they might help overcome problems of fixation and inflexibility in scientists' cognitive-search processes (Acar & van den Ende, 2016). Thus, crowd members with (higher levels of) experiential knowledge are expected to be superior to scientists in identifying highly novel, out-of-the-box research problems with high practical relevance. However, there are clear reasons to be skeptical: despite possessing valuable experiential knowledge, the crowd lacks the scientific knowledge we assume is required to formulate meaningful research questions. To study exactly how the transfer of crowd members' experiential knowledge into science can be facilitated, we conducted two experimental studies in the context of traumatology (i.e., research on accidental injuries). First, we conducted a large-scale online experiment (N=704) in collaboration with an international crowdsourcing platform to test the effect of two facilitating treatments on crowd members' ability to formulate real research questions (study 1). We used a 2 (structuring knowledge/no structuring knowledge) × 2 (science knowledge/no science knowledge) between-subjects experimental design.
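The 2 × 2 between-subjects design above can be sketched as a simple balanced assignment procedure. This is an illustrative sketch only, not the authors' implementation: the condition labels come from the abstract, while the participant IDs, seed, and round-robin balancing are assumptions.

```python
import random

# The four cells of the 2x2 between-subjects design.
# Condition names follow the abstract; everything else is illustrative.
CONDITIONS = [
    (structuring, science)
    for structuring in ("structuring knowledge", "no structuring knowledge")
    for science in ("science knowledge", "no science knowledge")
]

def assign_balanced(participant_ids, seed=0):
    """Shuffle participants, then deal them round-robin into the four
    cells so that cell sizes differ by at most one."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

assignments = assign_balanced(range(8))
```

Round-robin dealing (rather than independent random draws per participant) keeps the four cells equally sized, which simplifies between-cell comparisons.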
Second, we tested the same treatments in the field (study 2), i.e., in a crowdsourcing project in collaboration with the LBG Open Innovation in Science Center. We invited patients, caretakers, and medical professionals (e.g., surgeons, physical therapists, or nurses) concerned with accidental injuries to submit research questions via a customized online platform (https://tell-us.online/) in order to investigate the causal relationship between our treatments and different types and levels of experiential knowledge (N=118). An international jury of experts (i.e., journal editors in the field of traumatology) then assesses the quality of the submitted questions (from both the online and the field experiment) along several quality dimensions (i.e., clarity, novelty, scientific impact, practical impact, feasibility) in an online evaluation process. To assess the net effect of our treatments, we further include a random sample of research questions obtained from early-stage research papers (i.e., conference papers) in the expert evaluation (with evaluators blind to the source) and compare them with the baseline groups of our experiments. We are currently finalizing the data collection.
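The expert evaluation described above scores each question on several dimensions across multiple jurors. A minimal sketch of one plausible aggregation, averaging within each dimension across jurors and then across dimensions, is shown below; the dimension names come from the abstract, but the rating scale, example scores, and equal-weight aggregation are assumptions.

```python
from statistics import mean

# Quality dimensions named in the abstract; the 1-7 scale is assumed.
DIMENSIONS = ["clarity", "novelty", "scientific impact",
              "practical impact", "feasibility"]

def question_score(juror_ratings):
    """Average each dimension across jurors, then average the
    dimension means into one overall score (equal weights assumed)."""
    dim_means = {d: mean(r[d] for r in juror_ratings) for d in DIMENSIONS}
    return dim_means, mean(dim_means.values())

# Hypothetical ratings of one submitted question by two jurors.
ratings = [
    {"clarity": 6, "novelty": 4, "scientific impact": 5,
     "practical impact": 6, "feasibility": 5},
    {"clarity": 5, "novelty": 5, "scientific impact": 4,
     "practical impact": 6, "feasibility": 4},
]
dim_means, overall = question_score(ratings)  # overall -> 5.0
```

Keeping the per-dimension means alongside the overall score lets crowd- and scientist-generated questions be compared on, say, novelty and feasibility separately rather than only in aggregate.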
Original language: English
Publication date: 2019
Number of pages: 3
Publication status: Published - 2019
Event: DRUID Academy Conference 2019, Aalborg University, Aalborg, Denmark
Duration: 16 Jan 2019 - 18 Jan 2019
https://conference.druid.dk/Druid/?confId=58

Conference

Conference: DRUID Academy Conference 2019
Location: Aalborg University
Country: Denmark
City: Aalborg
Period: 16/01/2019 - 18/01/2019
Internet address: https://conference.druid.dk/Druid/?confId=58

Cite this

Brasseur, T-M., Beck, S., Sauermann, H., & Poetz, M. (2019). Crowdsourcing Research Questions? Leveraging the Crowd’s Experiential Knowledge for Problem Finding. Abstract from DRUID Academy Conference 2019, Aalborg, Denmark.
