Randomization Does Not Help Much, Comparability Does.

Saint-Mont U - PLoS ONE (2015)

Bottom Line: According to R.A. Fisher, randomization "relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed." Since, in particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result, it has become very popular. However, the result of this analysis turns out to be quite similar: While the contribution of randomization remains doubtful, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable.


Affiliation: Nordhausen University of Applied Sciences, Nordhausen, Germany.

ABSTRACT
According to R.A. Fisher, randomization "relieves the experimenter from the anxiety of considering innumerable causes by which the data may be disturbed." Since, in particular, it is said to control for known and unknown nuisance factors that may considerably challenge the validity of a result, it has become very popular. This contribution challenges the received view. First, looking for quantitative support, we study a number of straightforward, mathematically simple models. They all demonstrate that the optimism surrounding randomization is questionable: In small to medium-sized samples, random allocation of units to treatments typically yields a considerable imbalance between the groups, i.e., confounding due to randomization is the rule rather than the exception. In the second part of this contribution, the reasoning is extended to a number of traditional arguments in favour of randomization. This discussion is rather non-technical, and sometimes touches on the rather fundamental Frequentist/Bayesian debate. However, the result of this analysis turns out to be quite similar: While the contribution of randomization remains doubtful, comparability contributes much to a compelling conclusion. Summing up, classical experimentation based on sound background theory and the systematic construction of exchangeable groups seems to be advisable.
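The abstract's central quantitative claim — that random allocation in small to medium-sized samples typically leaves the two groups noticeably imbalanced — is easy to probe with a quick simulation. The sketch below is not taken from the paper; the standard-normal covariate, the sample sizes, and the choice of mean absolute difference as the imbalance measure are all illustrative assumptions.

```python
import random
import statistics

def mean_imbalance(n, trials=10_000, seed=0):
    """Randomly allocate 2n standard-normal units into two groups of
    size n and return the average absolute difference in group means
    over many trials.  Since the draws are i.i.d., taking the first n
    versus the last n is already a random allocation."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        units = [rng.gauss(0, 1) for _ in range(2 * n)]
        treat, control = units[:n], units[n:]
        diffs.append(abs(statistics.mean(treat) - statistics.mean(control)))
    return statistics.mean(diffs)

if __name__ == "__main__":
    for n in (10, 50, 200):
        print(f"n = {n:3d}: mean |imbalance| ≈ {mean_imbalance(n):.3f}")
```

For n = 10 the mean absolute difference between group means is roughly a third of a standard deviation, and it shrinks only at rate 1/√n — consistent with the paper's point that in small samples confounding due to randomization is the rule rather than the exception.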


pone.0132102.g001: The linear function n/10, and from above to below fp(n) for and .

Mentions: A typical choice could be i = 10 and k = 3, which specifies the requirement that most samples be located within a rather tight acceptable range. In this case, one has to consider the functions n/10 and . These functions of n are shown in Fig 1.

