Development, inter-rater reliability and feasibility of a checklist to assess implementation (Ch-IMP) in systematic reviews: the case of provider-based prevention and treatment programs targeting children and youth.

Cargo M, Stankov I, Thomas J, Saini M, Rogers P, Mayo-Wilson E, Hannes K - BMC Med Res Methodol (2015)

Bottom Line: The checklist was pilot-tested on a cohort of 27 effectiveness reviews targeting children and youth. Use of the tool demands a time investment, and it requires adjustment to improve its feasibility for wider use. The checklist could be used by authors and editors to improve the quality of systematic reviews, and shows promise as a pedagogical tool to facilitate the extraction and reporting of implementation characteristics.


Affiliation: Spatial Epidemiology and Evaluation Research Group, School of Population Health, University of South Australia, Adelaide, Australia. margaret.cargo@unisa.edu.au.

ABSTRACT

Background: Several papers report deficiencies in the reporting of information about the implementation of interventions in clinical trials. Information about implementation is also required in systematic reviews of complex interventions to facilitate the translation and uptake of evidence of provider-based prevention and treatment programs. To capture whether and how implementation is assessed within systematic effectiveness reviews, we developed a checklist for implementation (Ch-IMP) and piloted it in a cohort of reviews on provider-based prevention and treatment interventions for children and young people. This paper reports on the inter-rater reliability, feasibility and reasons for discrepant ratings.

Methods: Checklist domains were informed by a framework for program theory; items within domains were generated from a literature review. The checklist was pilot-tested on a cohort of 27 effectiveness reviews targeting children and youth. Two raters independently extracted information on 47 items. Inter-rater reliability was evaluated using percentage agreement and unweighted kappa coefficients. Reasons for discrepant ratings were content analysed.
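The unweighted kappa used in the Methods is Cohen's kappa, which has a standard closed form: κ = (p_o − p_e)/(1 − p_e), where p_o is the observed agreement between the two raters and p_e is the agreement expected by chance from each rater's marginal distribution. A minimal sketch of that calculation (illustrative only; the function name and toy ratings below are not from the paper):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two raters' categorical ratings."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: proportion of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

Percentage agreement, the other statistic reported, is simply p_o expressed as a percentage.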

Results: Kappa coefficients ranged from 0.37 to 1.00 and were not influenced by one-sided bias. Most kappa values were classified as excellent (n = 20) or good (n = 17), with a few items categorised as fair (n = 7) or poor (n = 1). Prevalence-adjusted kappa coefficients indicated good or excellent agreement for all but one item. Four areas contributed to scoring discrepancies: 1) clarity or sufficiency of information provided in the review; 2) information missed in the review; 3) issues encountered with the tool; and 4) issues encountered at the review level. Use of the tool demands a time investment, and it requires adjustment to improve its feasibility for wider use.
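Prevalence adjustment is commonly done with PABAK (prevalence-adjusted bias-adjusted kappa), which for two raters and binary items reduces to 2·p_o − 1. Assuming that standard formulation (the abstract does not spell out which adjustment was used), the computation is a one-liner:

```python
def pabak(p_o):
    """PABAK for two raters on binary items: rescales observed
    agreement p_o (a proportion in [0, 1]) to the kappa range [-1, 1],
    fixing chance agreement at 0.5 so prevalence cannot depress kappa."""
    return 2 * p_o - 1
```

This explains why prevalence-adjusted values can look better than raw kappa: when nearly all items fall in one category, p_e in Cohen's kappa approaches 1 and deflates κ even at high observed agreement, whereas PABAK depends on p_o alone.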

Conclusions: The case of provider-based prevention and treatment interventions demonstrated the relevance of developing and piloting the Ch-IMP as a useful tool for assessing the extent to which systematic reviews address the quality of implementation. The checklist could be used by authors and editors to improve the quality of systematic reviews, and shows promise as a pedagogical tool to facilitate the extraction and reporting of implementation characteristics.



Fig. 1: Conceptual framework for developing program theory. Source: Chen H-T. Practical Program Evaluation. Thousand Oaks, CA: Sage Publications, 2005. Reprinted with permission from Sage Publications.

Mentions: In Chen’s framework (Fig. 1) the action model supporting the prevention or treatment intervention must be implemented appropriately in order to activate the transformation process in the program’s change model. The action model articulates what the program will do to bring about change in children and youth outcomes. For example, if a change model for a given intervention is designed to increase children’s levels of physical activity by changing perceived social norms for physical activity and opportunities to engage in physical activity, the action model stipulates what the intervention will do to activate the change model. Will the intervention include school-based activities only? Will parents be engaged? Will teachers receive training? Will the school collaborate with external agencies? Which agencies, how and why? Who will the intervention target and why? The action model provides the justification for these choices and clarifies what the program will do (i.e., program operations) to increase behaviour change related to physical activity.

