Bias modelling in evidence synthesis.

Turner RM, Spiegelhalter DJ, Smith GC, Thompson SG - J R Stat Soc Ser A Stat Soc (2009)


ABSTRACT
Policy decisions often require synthesis of evidence from multiple sources, and the source studies typically vary in rigour and in relevance to the target question. We present simple methods of allowing for differences in rigour (or lack of internal bias) and relevance (or lack of external bias) in evidence synthesis. The methods are developed in the context of reanalysing a UK National Institute for Clinical Excellence technology appraisal in antenatal care, which includes eight comparative studies. Many were historically controlled, only one was a randomized trial, and doses, populations and outcomes varied between studies and differed from the target UK setting. Using elicited opinion, we construct prior distributions to represent the biases in each study and perform a bias-adjusted meta-analysis. Adjustment had the effect of shifting the combined estimate away from the null by approximately 10%, and the variance of the combined estimate was almost tripled. Our generic bias modelling approach allows decisions to be based on all available evidence, with less rigorous or less relevant studies downweighted by using computationally simple methods.
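The bias-adjusted meta-analysis the abstract describes can be sketched as follows: each study's estimate (on the log-odds-ratio scale) is recentred by the elicited bias mean, its variance is inflated by the elicited bias variance, and the adjusted results are then combined by inverse-variance weighting, so that more biased or more uncertain studies are downweighted. This is a minimal illustrative sketch, not the paper's implementation; the function names and the numeric inputs are invented.

```python
def bias_adjust(estimate, variance, bias_mean, bias_variance):
    """Adjust one study's log-odds-ratio estimate for additive bias.

    Subtracting the elicited bias mean recentres the estimate; adding
    the elicited bias variance widens its uncertainty.
    """
    return estimate - bias_mean, variance + bias_variance


def inverse_variance_pool(estimates, variances):
    """Fixed-effect (inverse-variance weighted) combination of studies."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_var = 1.0 / sum(weights)
    return pooled, pooled_var


# Toy data: three studies as (log-odds-ratio estimate, variance).
studies = [(-0.5, 0.04), (-0.3, 0.09), (-0.6, 0.06)]
# Hypothetical elicited bias distributions as (mean, variance).
biases = [(-0.1, 0.05), (0.0, 0.02), (-0.2, 0.10)]

adjusted = [bias_adjust(e, v, bm, bv)
            for (e, v), (bm, bv) in zip(studies, biases)]
est, var = inverse_variance_pool([a[0] for a in adjusted],
                                 [a[1] for a in adjusted])
print(round(est, 3), round(var, 3))
```

Note how adding the bias variances necessarily inflates the pooled variance, which is the mechanism behind the near-tripling of variance reported in the reanalysis.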


fig05: Effect of adjusting for (a) additive bias and (b) all bias on the odds ratio (with 95% CIs) in Hermann et al. (1984)

Mentions: The effect of adjusting the study results for total additive bias is illustrated in Fig. 5(a), for the four assessors separately and pooled. We decided to pool across assessors after summing the biases rather than before, because individuals may differently categorize potential sources of bias. The pooled distribution for the bias-adjusted result is based on medians of the means and standard deviations of the four assessors’ distributions for the bias-adjusted result. In principle, we could choose to give more weight to certain assessors’ opinions on the basis of their greater knowledge in particular areas, but selection of such weights would be difficult in practice.

