How Do You Know Which Health Care Effectiveness Research You Can Trust? A Guide to Study Design for the Perplexed.

Soumerai SB, Starr D, Majumdar SR - Prev Chronic Dis (2015)


Affiliation: Harvard Medical School and Harvard Pilgrim Health Care Institute, 133 Brookline Ave, 6th Floor, Boston, MA 02215. Email: ssoumerai@hms.harvard.edu. Dr Soumerai is also co-chair of the Evaluative Sciences and Statistics Concentration of Harvard University's PhD Program in Health Policy.

AUTOMATICALLY GENERATED EXCERPT

Upon completion of this activity, participants will be able to:
- Define healthy user bias in health care research and means to reduce it
- Assess means to reduce selection bias in health care research
- Assess how to overcome confounding factors by indication in health care research
- Evaluate social desirability bias and history bias in health care research

Another pattern in the evolution of science is that early studies of new treatments tend to show the most dramatic, positive health effects, and these effects diminish or disappear as more rigorous and larger studies are conducted... As these positive effects decrease, harmful side effects emerge... Sometimes researchers may publish overly definitive conclusions using unreliable study designs, reasoning that it is better to have unreliable data than no data at all and that the natural progression of science will eventually sort things out... We do not agree...

For example, one of many weak cohort studies purported to show that flu vaccines reduce mortality in the elderly (Figure 2)... This study, which was widely reported in the news media and influenced policy, found significant differences in the rate of flu-related deaths and hospitalizations among the vaccinated elderly compared with that of their unvaccinated peers...

One of the oldest and most accepted “truths” in the history of medication safety research is that benzodiazepines (popular medications such as Valium and Xanax that are prescribed for sleep and anxiety) may cause hip fractures among the elderly... This intervention took place during an explosion of research and news media reporting on treatments for acute myocardial infarction that could have influenced the prescribing behavior of physicians...

These data demonstrate that inpatient mortality in the United States was declining before, during, and after the 100,000 Lives Campaign... The program itself probably had no effect on the trend, yet the widespread policy and media reports led to several European countries adopting this “successful” model of patient safety at considerable costs... Subsequently, several large RCTs demonstrated that many components of the 100,000 Lives Campaign were not particularly effective, especially when compared with the benefits reported in the IHI’s press releases.
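To make the first learning objective above concrete: healthy user bias arises when people who take a preventive treatment are systematically healthier than those who do not, so a naive comparison credits the treatment with benefits it did not cause. The sketch below is a minimal simulation of that mechanism with invented numbers; it is not the data or the analysis from the flu vaccine cohort study mentioned in the excerpt, and the variables (frail, vaccinated) and their probabilities are illustrative assumptions only.

```python
import numpy as np

# Minimal simulation of healthy user bias (invented numbers, not the cohort
# study's data): frailer elderly patients are both less likely to be
# vaccinated and more likely to die, while vaccination itself is given NO
# effect on mortality. A naive comparison still shows an apparent benefit.
rng = np.random.default_rng(0)
n = 200_000

frail = rng.random(n) < 0.30                   # 30% of the cohort is frail (assumption)
p_vaccinated = np.where(frail, 0.40, 0.70)     # healthier people vaccinate more often
vaccinated = rng.random(n) < p_vaccinated
p_death = np.where(frail, 0.15, 0.03)          # frailty, not vaccination, drives mortality
died = rng.random(n) < p_death

crude_rr = died[vaccinated].mean() / died[~vaccinated].mean()
print(f"crude risk ratio, vaccinated vs unvaccinated: {crude_rr:.2f}")  # well below 1.0

# Stratifying on the confounder recovers the true null effect (risk ratio ~ 1.0).
for label, grp in [("frail", frail), ("not frail", ~frail)]:
    rr = died[grp & vaccinated].mean() / died[grp & ~vaccinated].mean()
    print(f"risk ratio among {label} patients: {rr:.2f}")
```

In this toy setup the vaccine has no effect on mortality, yet the crude comparison suggests a substantial apparent benefit; stratifying on the confounder restores the null result, illustrating one way such bias can be reduced when the confounder is actually measured.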


Figure 17: Example of a weak post-only study of a hospital safety program and mortality that did not control for history. Narrow bar shows start of quality of care program. There is no evidence that data are available for the years leading up to the program. The study did not define the intervention period other than to state that planning occurred in 2003. Figure is based on data extracted from Pryor et al (45). Abbreviation: FY, fiscal year.

Data underlying the figure:

Fiscal Year    Deaths per 100 Discharges
1999-2003      Unknown
2004           2.2
2005           2.1
2006           2.0
2007           1.9
2008           1.9
2009           1.9
2010           1.8

Mentions: In 1999, the Institute of Medicine issued a landmark report on how the misuse of technologies and drugs may be causing illnesses and deaths in hospitals throughout the nation (44). Since then, researchers and policy makers have been trying to find ways to improve patient safety. However, the research designed to advance this agenda is often too weak to measure the effects on safety. For example, a recent study was designed to measure the impact of a large patient safety program on death rates in one hospital network (45). The program focused on 6 laudable goals, including reducing the number of adverse drug events, birth traumas, fall injuries, hospital-acquired infections, surgical complications, and pressure ulcers. Unfortunately, the investigators measured mortality rates only after planning and initiating the program (Figure 17), so it is impossible to know whether the reduction in mortality rates resulted from the quality improvement program or from the continuation of pre-existing trends (history bias).
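Because mortality was measured only after the program had been planned and initiated, there is no way to tell whether the decline simply continued a trend that was already under way. A standard way to guard against this history bias is an interrupted time series (segmented regression) analysis that uses several pre-intervention data points to estimate the pre-existing level and slope. The sketch below shows the idea; the 1999-2003 values are invented purely for illustration (the real ones are unknown), the 2004 start year is an assumption, and the model is a generic segmented regression, not an analysis performed by Pryor et al or by the present authors.

```python
import numpy as np

# Hypothetical illustration of an interrupted time series (segmented regression).
# Post-2004 values are the deaths per 100 discharges reported in Figure 17;
# the 1999-2003 values are INVENTED here only to show how pre-intervention
# data would let an analyst separate a program effect from a prior trend.
years  = np.arange(1999, 2011)
deaths = np.array([2.8, 2.7, 2.6, 2.5, 2.4,              # hypothetical pre-program years
                   2.2, 2.1, 2.0, 1.9, 1.9, 1.9, 1.8])   # FY2004-FY2010 from the figure

start_year = 2004                                  # assumed program start (planning was in 2003)
time       = years - years[0]                      # years elapsed since start of series
post       = (years >= start_year).astype(float)   # indicator: program in place
time_after = np.clip(years - start_year, 0, None)  # years since the program started

# Segmented regression:  deaths = b0 + b1*time + b2*post + b3*time_after
# b1 = pre-existing slope, b2 = level change at the intervention, b3 = slope change.
X = np.column_stack([np.ones_like(time), time, post, time_after])
b0, b1, b2, b3 = np.linalg.lstsq(X, deaths, rcond=None)[0]

print(f"pre-existing slope:        {b1:+.3f} deaths/100 discharges per year")
print(f"level change at start:     {b2:+.3f}")
print(f"slope change after start:  {b3:+.3f}")
```

With pre-intervention observations in hand, b2 and b3 estimate how much the level and slope changed when the program began; if both are near zero while b1 is clearly negative, the post-program decline is most plausibly the continuation of a pre-existing trend rather than an effect of the program, which is exactly the possibility a post-only design cannot rule out.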

