Evidence of Experimental Bias in the Life Sciences: Why We Need Blind Data Recording.

Holman L, Head ML, Lanfear R, Jennions MD - PLoS Biol. (2015)

Bottom Line: Observer bias and other "experimenter effects" occur when researchers' expectations influence study outcome. Here, using text mining and a literature review, we find evidence that blind protocols are uncommon in the life sciences and that nonblind studies tend to report higher effect sizes and more significant p-values. We discuss methods to minimize bias and urge researchers, editors, and peer reviewers to keep blind protocols in mind.


Affiliation: Division of Evolution, Ecology and Genetics, Research School of Biology, Australian National University, Canberra, Australian Capital Territory, Australia.

ABSTRACT
Observer bias and other "experimenter effects" occur when researchers' expectations influence study outcome. These biases are strongest when researchers expect a particular result, are measuring subjective variables, and have an incentive to produce data that confirm predictions. To minimize bias, it is good practice to work "blind," meaning that experimenters are unaware of the identity or treatment group of their subjects while conducting research. Here, using text mining and a literature review, we find evidence that blind protocols are uncommon in the life sciences and that nonblind studies tend to report higher effect sizes and more significant p-values. We discuss methods to minimize bias and urge researchers, editors, and peer reviewers to keep blind protocols in mind.
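The abstract only names the text-mining step; as a toy illustration of the general approach, keyword matching over paper full texts might look like the Python sketch below. The regular expression, function name, and example texts are all invented here and are not the authors' pipeline.

```python
# A toy sketch (not the authors' pipeline) of keyword-based text mining
# for blind protocols: flag papers whose text mentions blinding terms.
import re

BLIND_TERMS = re.compile(r"\bblind(?:ed|ly|ing)?\b", re.IGNORECASE)

def mentions_blinding(text: str) -> bool:
    """Return True if the text contains a blinding-related keyword."""
    return bool(BLIND_TERMS.search(text))

# Hypothetical example texts, invented for illustration.
papers = {
    "paper_a": "Observers were blinded to treatment group during scoring.",
    "paper_b": "Mice were weighed weekly and scored for lesions.",
}
print({pid: mentions_blinding(txt) for pid, txt in papers.items()})
# {'paper_a': True, 'paper_b': False}
```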



pbio.1002190.g002 (Fig 2): Density plots showing the distribution of z scores taken from putatively experimental blind and nonblind papers. The dotted line shows z = 1.96 (z scores above this line are “significant” at α = 0.05), and the numbers give the sample size (number of papers) and the percentage of papers that were blind for this dataset. The bottom-right panel shows the median z score (and the interquartile range) in each FoR category for blind and nonblind papers.
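For readers unfamiliar with the transformation behind the dotted line, the short sketch below shows the standard two-tailed conversion between p-values and unsigned z scores, under which p = 0.05 maps to z ≈ 1.96. This is a generic illustration in Python with SciPy, not the paper's code.

```python
# Standard two-tailed p-to-z conversion: a p-value p corresponds to the
# z score with upper-tail probability p/2 under the standard normal.
from scipy.stats import norm

def p_to_z(p: float) -> float:
    """Convert a two-tailed p-value to an unsigned z score."""
    return norm.isf(p / 2)  # inverse survival function of N(0, 1)

print(p_to_z(0.05))  # ~1.96, the dotted significance line in Fig 2
print(p_to_z(0.01))  # ~2.58
```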

Mentions: The analysis of the "p =" dataset is shown in Table 1. The number of authors (both the linear and quadratic terms), year of publication, and scientific discipline (scored objectively using the Field of Research [FoR] journal categorizations of the Excellence in Research for Australia initiative) all had a significant effect on z-transformed p-values. The estimated effect of blind data recording was small and negative, though its 95% confidence intervals overlapped zero (Table 1; Fig 2). The analysis of the "p =" dataset therefore failed to reject the hypothesis that the mean p-value is the same in our blind and nonblind paper categories. This analysis also showed that z scores increase (i.e., p-values decrease) with the number of authors, though the rate of increase slows as author number grows (S2 Fig). Additionally, z scores have declined (i.e., p-values have increased) over time (S3 Fig).
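As a rough illustration of the kind of model this paragraph describes, the sketch below regresses z scores on linear and quadratic author-number terms, publication year, FoR category, and blind status. It is written in Python with statsmodels on synthetic data; the variable names and data are invented here, and the authors' actual model in Table 1 may differ in structure and estimation details.

```python
# A rough sketch of the regression described above, on synthetic data;
# not the authors' analysis code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
papers = pd.DataFrame({
    "n_authors": rng.integers(1, 15, n),
    "year": rng.integers(1995, 2014, n),
    "for_category": rng.choice(["biology", "medicine", "psychology"], n),
    "blind": rng.integers(0, 2, n),  # 1 = blind data recording
})
# Synthetic z scores, loosely echoing the reported patterns (z rises with
# author number at a decelerating rate, declines over time, and is
# slightly lower for blind papers).
papers["z"] = (
    0.3 * papers["n_authors"]
    - 0.01 * papers["n_authors"] ** 2
    - 0.02 * (papers["year"] - 1995)
    - 0.1 * papers["blind"]
    + rng.normal(1.5, 0.8, n)
)

# Linear + quadratic author terms, year, discipline, and blind status.
model = smf.ols(
    "z ~ n_authors + I(n_authors ** 2) + year + C(for_category) + blind",
    data=papers,
).fit()
print(model.summary())
```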

