Participation and contribution in crowdsourced surveys.

Swain R, Berger A, Bongard J, Hines P - PLoS ONE (2015)

Bottom Line: In particular, we found that: the rate at which participants submitted new survey questions followed a heavy-tailed distribution; the distribution in the types of questions posed was similar; and many users posed non-obvious yet predictive questions. While we did not find a significant relationship between the quantity of participation and the quality of contribution for either response submissions or question submissions, we did find several other, more nuanced participant behavior patterns that correlated with contribution in one of the three surveys. We conclude that there exists an optimal time for users to pose questions: early on in their participation, but only after they have submitted a few responses to other questions.


Affiliation: Computer Science Department (graduated), University of Vermont, Burlington, United States of America.

ABSTRACT
This paper identifies trends within, and relationships between, the amount of participation and the quality of contributions in three crowdsourced surveys. Participants were asked to perform a collective problem-solving task that lacked any explicit incentive: they were instructed not only to respond to survey questions but also to pose new questions that they thought might, if responded to by others, predict an outcome variable of interest to them. While the three surveys had very different outcome variables, target audiences, methods of advertisement, and lengths of deployment, we found very similar patterns of collective behavior. In particular, we found that: the rate at which participants submitted new survey questions followed a heavy-tailed distribution; the distribution in the types of questions posed was similar; and many users posed non-obvious yet predictive questions. By analyzing responses to questions that contained a built-in range of valid responses, we found that less than 0.2% of responses lay outside those ranges, indicating that most participants tend to respond honestly to surveys of this form, even without explicit incentives for honesty. While we did not find a significant relationship between the quantity of participation and the quality of contribution for either response submissions or question submissions, we did find several other, more nuanced participant behavior patterns that correlated with contribution in one of the three surveys. We conclude that there exists an optimal time for users to pose questions: early on in their participation, but only after they have submitted a few responses to other questions. This suggests that future crowdsourced surveys may attract more predictive questions by prompting users to pose new questions at specific times during their participation and limiting question submission at non-optimal times.
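As a rough illustration of the range-validity check described in the abstract (a sketch only, not the authors' code; the column names "response", "min_valid", and "max_valid" are hypothetical placeholders), the computation amounts to flagging responses outside each question's built-in bounds:

    # Sketch of the range-validity check from the abstract.
    # Not the authors' code; column names are hypothetical placeholders.
    import pandas as pd

    def out_of_range_fraction(df: pd.DataFrame) -> float:
        """Fraction of responses outside each question's built-in valid range."""
        invalid = (df["response"] < df["min_valid"]) | (df["response"] > df["max_valid"])
        return float(invalid.mean())

    # Toy data: one of five responses (20%) falls outside its range.
    toy = pd.DataFrame({
        "response":  [3, 7, 150, 2, 5],
        "min_valid": [0, 0, 0, 0, 0],
        "max_valid": [10, 10, 10, 10, 10],
    })
    print(f"{out_of_range_fraction(toy):.1%} of responses out of range")

In the paper's data the analogous fraction was below 0.2%, which is the basis for the honesty claim.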


Fig 2 (pone.0120521.g002): Number of Questions Submitted. Plots (a) and (b) use the Childhood BMI data, plots (c) and (d) use the EnergyMinder data, and plots (e) and (f) use the Personal Savings data. Plots (a), (c), and (e) display probability distributions for the number of questions submitted per user on log10 axes, along with power-law fit lines. Plots (b), (d), and (f) display the times at which users submitted questions. User IDs were assigned based on time of first arrival to the surveys.

Mentions: Fig. 2 reports the distribution of questions submitted per user in each of the three studies, as well as when those questions were submitted over the course of each study. Fig. 2a, c, and e indicate that the number of questions submitted follows a heavy-tailed distribution. The data were insufficient to statistically confirm a power-law fit, but the distributions closely resembled one. In other words, in each of the three studies a small percentage of participants submitted the majority of the questions, while most participants submitted no questions at all.
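A minimal sketch of this kind of heavy-tail analysis, using the third-party powerlaw package by Alstott et al. (an assumption; the paper does not name its fitting tools) with synthetic per-user counts standing in for the survey data:

    # Heavy-tail / power-law check on per-user question counts.
    # Requires the third-party `powerlaw` package (pip install powerlaw);
    # the data below are synthetic stand-ins, not the survey data.
    import numpy as np
    import powerlaw

    rng = np.random.default_rng(0)
    # Zipf draws mimic the observed pattern: most users submit few
    # questions, a handful submit many.
    counts = rng.zipf(a=2.0, size=500)

    fit = powerlaw.Fit(counts, discrete=True)
    print(f"estimated exponent alpha = {fit.power_law.alpha:.2f}")
    print(f"fit region begins at xmin = {fit.power_law.xmin}")

    # Compare the power law against a lognormal alternative:
    # R > 0 favors the power law; p gauges whether the comparison is decisive.
    R, p = fit.distribution_compare("power_law", "lognormal")
    print(f"loglikelihood ratio R = {R:.2f}, p = {p:.3f}")

With small samples the likelihood-ratio test is often inconclusive (large p), which mirrors the caution above: the distributions look heavy-tailed, but a power law cannot be statistically confirmed.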

