Common Statistical Pitfalls in Basic Science Research

The analysis of clinical samples, population samples, and controlled trials is typically subjected to rigorous statistical review. Although determining an appropriate sample size for basic science research can be more challenging than for clinical research, it remains important for planning, analysis, and ethical reasons. In basic science research there is often no prior study, or there is great uncertainty about the expected variability of the outcome measure, which makes sample size calculations difficult.

A statistically significant finding (eg, P<0.05 when the significance criterion is set at 5%) reflects either a true effect (or difference) or a type I error. A type I error, also known as a false-positive result, occurs when the null hypothesis is rejected and the investigator concludes that there is an effect when in fact there is none. Conversely, a comparison that fails to reach statistical significance reflects either the absence of a true effect or a type II error. A type II error, or false-negative result, occurs when the test fails to detect an effect that actually exists. Minimizing type II error and increasing statistical power are generally achieved with appropriately large sample sizes, calculated from the expected variability. A common pitfall in basic science studies is a sample size that is too small to robustly detect or exclude meaningful effects, thereby compromising the study's conclusions.

Even in basic science experiments, investigators must pay careful attention to control groups (conditions), randomization, blinding, and replication. The goal is to minimize bias (systematic errors introduced in the conduct, analysis, or interpretation of study results) and confounding (distortions of an effect caused by other factors) so that the resulting estimates of effect are valid.

It is also common to find basic science studies that neglect the distinction between independent and repeated (correlated) measurements, often to the detriment of the investigation: a repeated-measures design is a very good way to account for innate biological variability between experimental units and is often more likely to detect treatment differences than an analysis that treats the observations as independent. Investigators often design careful studies with repeated measurements over time, only to ignore the repeated nature of the data by performing separate analyses at each time point. Such an approach not only fails to examine the longitudinal effects contained in the data but also has lower statistical power than a repeated-measures analysis.

Survival analyses can be particularly challenging in basic science research because small samples may not yield enough events (eg, deaths) to perform a meaningful analysis.
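To make the role of power and sample size concrete, the sketch below shows what a two-sample size calculation might look like in Python with statsmodels. The expected difference, the standard deviation, and the 80% power target are illustrative assumptions, not values taken from the article.

```python
# Minimal sketch of a two-sample size/power calculation (statsmodels).
# The effect size is hypothetical: an assumed mean difference of 10 units
# and an assumed standard deviation of 12 units (Cohen's d ~ 0.83).
from statsmodels.stats.power import TTestIndPower

power_analysis = TTestIndPower()
effect_size = 10 / 12  # expected difference / expected SD (assumed values)

n_per_group = power_analysis.solve_power(
    effect_size=effect_size,
    alpha=0.05,           # type I error rate (significance criterion)
    power=0.80,           # 1 - type II error rate
    alternative="two-sided",
)
print(f"Samples required per group: {n_per_group:.1f}")
```

Overstating the expected effect or understating the variability drives the required sample size down, which is one route to the underpowered studies described above.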

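The repeated-measures point can likewise be illustrated with a short sketch: rather than running a separate test at each time point, a linear mixed model uses all time points at once and accounts for the correlation among measurements taken on the same experimental unit. The file and column names below are hypothetical placeholders, not data from the article.

```python
# Minimal sketch of a repeated-measures (longitudinal) analysis with a
# linear mixed model, assuming a hypothetical long-format table with one
# row per animal per time point.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("longitudinal_measurements.csv")  # columns: animal_id, group, time, outcome

# A random intercept for each animal models the within-animal correlation,
# in contrast to analyzing each time point as if it were an independent sample.
model = smf.mixedlm("outcome ~ group * time", data=df, groups=df["animal_id"]).fit()
print(model.summary())
```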

Figure 6. Percentage of apoptosis by strain. *P<0.05 against wild type treated with Ad‐LacZ. †P<0.05 between treated TG1 mice and TG1 treated with Ad‐LacZ. ‡P<0.05 between treated TG2 mice and TG2 treated with Ad‐LacZ. Cat indicates catalase; SOD, superoxide dismutase; TG, transgenic; WT, wild type.

We wish to compare apoptosis in cell isolates from 3 different strains of mice (wild type and 2 strains of transgenic [TG] mice) treated with control (Ad‐LacZ) versus adenoviruses expressing catalase or superoxide dismutase. The outcome of interest is percentage of apoptosis (a continuous outcome), and the comparison of interest is percentage of apoptosis among strains. Six isolates were taken from each strain of mice, plated into cell culture dishes, grown to confluence, and then treated as indicated on 6 different occasions. The unit of analysis is the isolate, and data are combined from each experiment (different days) and summarized as shown in Figure 6. The data are means and standard errors taken over n=6 isolates for each type of mouse and condition.
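One way to analyze this strain-by-treatment design, sketched below under the assumption that the isolate-level data are available in long format, is a two-way analysis of variance with strain, treatment, and their interaction; the file and column names are hypothetical.

```python
# Minimal sketch of a two-way ANOVA for the apoptosis example: one row per
# isolate, with strain (WT, TG1, TG2), treatment (Ad-LacZ, catalase, SOD),
# and percentage of apoptosis as the outcome. File/column names are assumed.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("apoptosis_isolates.csv")  # columns: strain, treatment, pct_apoptosis

model = smf.ols("pct_apoptosis ~ C(strain) * C(treatment)", data=df).fit()
print(anova_lm(model, typ=2))  # tests for strain, treatment, and interaction effects
```

Pairwise comparisons such as those flagged in the figure caption would typically follow, with an appropriate adjustment for multiple comparisons.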

