Can research assessments themselves cause bias in behaviour change trials? A systematic review of evidence from Solomon 4-group studies.

McCambridge J, Butor-Bhavsar K, Witton J, Elbourne D - PLoS ONE (2011)

Bottom Line: The possible effects of research assessments on participant behaviour have attracted research interest, especially in studies with behavioural interventions and/or outcomes. Ten studies from a range of applied areas were included. There was inconsistent evidence of main effects of assessment, sparse evidence of interactions with behavioural interventions, and a lack of convincing data in relation to the research question for this review.


Affiliation: Centre for Research on Drugs & Health Behaviour, Faculty of Public Health & Policy, London School of Hygiene & Tropical Medicine, London, United Kingdom. Jim.McCambridge@lshtm.ac.uk

ABSTRACT

Background: The possible effects of research assessments on participant behaviour have attracted research interest, especially in studies with behavioural interventions and/or outcomes. Assessments may introduce bias in randomised controlled trials by altering receptivity to intervention in experimental groups and differentially impacting on the behaviour of control groups. In a Solomon 4-group design, participants are randomly allocated to one of four arms: (1) assessed experimental group; (2) unassessed experimental group; (3) assessed control group; or (4) unassessed control group. This design provides a test of the internal validity of effect sizes obtained in conventional two-group trials by controlling for the effects of baseline assessment, and assessing interactions between the intervention and baseline assessment. The aim of this systematic review is to evaluate evidence from Solomon 4-group studies with behavioural outcomes that baseline research assessments themselves can introduce bias into trials.

Methodology/principal findings: Electronic databases were searched, supplemented by citation searching. Studies were eligible if they reported appropriately analysed results in peer-reviewed journals and used Solomon 4-group designs in non-laboratory settings with behavioural outcome measures and sample sizes of 20 per group or greater. Ten studies from a range of applied areas were included. There was inconsistent evidence of main effects of assessment, sparse evidence of interactions with behavioural interventions, and a lack of convincing data in relation to the research question for this review.

Conclusions/significance: There were too few high-quality completed studies to infer conclusively that biases stemming from baseline research assessments do or do not exist. There is, therefore, a need for new, rigorous Solomon 4-group studies that are purposively designed to evaluate the potential for research assessments to cause bias in behaviour change trials.
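The 2 × 2 factorial logic of the Solomon 4-group design described in the Background can be illustrated with a short simulation. The sketch below (Python; the outcome probabilities, arm size, and assumed "assessment reactivity" effect are invented for illustration and are not data from any included study) shows the three contrasts the design supports: the conventional two-group intervention effect, the main effect of baseline assessment, and the assessment × intervention interaction.

```python
# Minimal illustrative sketch of a Solomon 4-group analysis (assumed values, not study data).
import numpy as np

rng = np.random.default_rng(0)
n = 200  # participants per arm (the review required at least 20 per group)

# Hypothetical probabilities of the target behaviour at follow-up in each arm,
# with a small assumed assessment-reactivity effect and an intervention effect.
p = {
    ("assessed", "intervention"):   0.35,
    ("unassessed", "intervention"): 0.40,
    ("assessed", "control"):        0.50,
    ("unassessed", "control"):      0.55,
}

# Simulate binary outcomes and take arm-level means.
means = {arm: rng.binomial(1, prob, n).mean() for arm, prob in p.items()}

# Intervention effect as estimated in a conventional two-group trial (assessed arms only).
conventional_effect = means[("assessed", "intervention")] - means[("assessed", "control")]

# Main effect of baseline assessment, averaged over intervention and control arms.
assessment_effect = (
    (means[("assessed", "intervention")] + means[("assessed", "control")]) / 2
    - (means[("unassessed", "intervention")] + means[("unassessed", "control")]) / 2
)

# Assessment x intervention interaction: does baseline assessment change the intervention effect?
interaction = (
    (means[("assessed", "intervention")] - means[("assessed", "control")])
    - (means[("unassessed", "intervention")] - means[("unassessed", "control")])
)

print(f"conventional (assessed-only) intervention effect: {conventional_effect:+.3f}")
print(f"main effect of baseline assessment:               {assessment_effect:+.3f}")
print(f"assessment x intervention interaction:            {interaction:+.3f}")
```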


Figure 1 (pone-0025223-g001): PRISMA 2009 Flow Diagram.

Mentions: Ten studies were eligible for inclusion in this review [22], [23], [24], [25], [26], [27], [28], [29], [30], [31] – see Figure 1 for a summary of the study selection process and Table 1 for details of included studies. The majority (n = 6) of these studies took place in schools and were concerned with the prevention of health-compromising behaviours among children. The four studies with adults evaluated health promotion interventions. The two smallest studies also had the shortest follow-up periods. The four adult studies comprised similar sample sizes and follow-up intervals (see Table 1). Baseline research assessments were conducted by questionnaire in all but two cases, in which interviews took place [22], [23].

