How Do You Know Which Health Care Effectiveness Research You Can Trust? A Guide to Study Design for the Perplexed.

Soumerai SB, Starr D, Majumdar SR - Prev Chronic Dis (2015)

View Article: PubMed Central - PubMed

Affiliation: Harvard Medical School and Harvard Pilgrim Health Care Institute, 133 Brookline Ave, 6th Floor, Boston, MA 02215. Email: ssoumerai@hms.harvard.edu. Dr Soumerai is also co-chair of the Evaluative Sciences and Statistics Concentration of Harvard University's PhD Program in Health Policy.

AUTOMATICALLY GENERATED EXCERPT

Medscape, LLC is accredited by the ACCME to provide continuing medical education for physicians... Medscape, LLC designates this Journal-based CME activity for a maximum of 1...

Upon completion of this activity, participants will be able to:
- Define healthy user bias in health care research and means to reduce it
- Assess means to reduce selection bias in health care research
- Assess how to overcome confounding factors by indication in health care research
- Evaluate social desirability bias and history bias in health care research

Another pattern in the evolution of science is that early studies of new treatments tend to show the most dramatic, positive health effects, and these effects diminish or disappear as more rigorous and larger studies are conducted... As these positive effects decrease, harmful side effects emerge... Sometimes researchers may publish overly definitive conclusions using unreliable study designs, reasoning that it is better to have unreliable data than no data at all and that the natural progression of science will eventually sort things out... We do not agree...

For example, one of many weak cohort studies purported to show that flu vaccines reduce mortality in the elderly (Figure 2)... This study, which was widely reported in the news media and influenced policy, found significant differences in the rate of flu-related deaths and hospitalizations among the vaccinated elderly compared with that of their unvaccinated peers...

One of the oldest and most accepted “truths” in the history of medication safety research is that benzodiazepines (popular medications such as Valium and Xanax that are prescribed for sleep and anxiety) may cause hip fractures among the elderly...

This intervention took place during an explosion of research and news media reporting on treatments for acute myocardial infarction that could have influenced the prescribing behavior of physicians...

These data demonstrate that inpatient mortality in the United States was declining before, during, and after the 100,000 Lives Campaign... The program itself probably had no effect on the trend, yet the widespread policy and media reports led to several European countries adopting this “successful” model of patient safety at considerable cost... Subsequently, several large RCTs demonstrated that many components of the 100,000 Lives Campaign were not particularly effective, especially when compared with the benefits reported in the IHI’s press releases.
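The 100,000 Lives Campaign passage above is, at bottom, an argument about pre-existing trends (history bias). The sketch below is not from the article; it uses simulated monthly mortality data and an assumed campaign start month to show how a naive before-and-after comparison can credit a program for a decline that was already under way, while a segmented (interrupted time series) regression that models the baseline trend finds no program effect.

```python
# Minimal sketch, assuming simulated data and a hypothetical campaign start month.
# No real mortality data or campaign dates are used.
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(48)                        # 4 years of simulated monthly data
campaign_start = 24                           # hypothetical campaign begins at month 24
post = (months >= campaign_start).astype(float)
months_after = np.where(post == 1, months - campaign_start, 0.0)

# Secular decline only: no true campaign effect is built into the data.
mortality = 5.0 - 0.03 * months + rng.normal(0, 0.05, size=months.size)

# Naive before/after comparison: the post-period mean is lower, so the
# campaign "looks" effective.
naive_diff = mortality[post == 1].mean() - mortality[post == 0].mean()

# Segmented regression: intercept, baseline trend, level change at the
# campaign start, and change in trend afterward.
X = np.column_stack([np.ones_like(months, dtype=float), months, post, months_after])
coef, *_ = np.linalg.lstsq(X, mortality, rcond=None)

print(f"Naive before/after difference:  {naive_diff:+.3f}")   # clearly negative
print(f"Level change at campaign start: {coef[2]:+.3f}")      # near zero
print(f"Trend change after campaign:    {coef[3]:+.3f}")      # near zero
```

The design choice mirrors the excerpt's point: only the analysis that explicitly models the pre-intervention trend can distinguish a program effect from a decline that would have happened anyway.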

Figure 4: Example of selection bias: underlying differences between groups of medical providers show how they are not comparable in studies designed to compare providers using EHRs with providers not using EHRs. Figure is based on data extracted from Simon et al (23) and Decker et al (24). Abbreviation: EHR, electronic health record.

Characteristic               Percentage Using Electronic Health Records
Size of practice
  Large (≥7 physicians)      52
  Small (1–3 physicians)     29
Type of hospital
  Teaching hospital          40
  Nonteaching hospital       14
Age of physician, y
  ≤45                        46
  46–55                      37
  >55                        26

Mentions: Let’s examine some studies that illustrate how provider selection biases may invalidate studies about the health and cost effects of health IT. Figure 4 illustrates that underlying differences exist between physicians and hospitals that do or do not use EHRs (23,24). Large physician practices and teaching hospitals are much more likely to use EHRs than are small or solo practices or nonteaching hospitals. Because hospital size and teaching status are predictors of quality of care (with larger hospitals and teaching hospitals predicting higher quality), these 2 factors can create powerful biases that lead to untrustworthy conclusions. Thus, although studies may associate health IT with better patient health, what they are really pointing out are the differences between older and younger physicians, or between large and small physician practices. Such large differences between EHR adopters and nonadopters make it almost impossible to determine the effects of EHRs on health in simple comparative studies. Perhaps as more hospitals adopt EHRs or risk penalties, this type of selection bias may decrease, but that is in itself a testable hypothesis.
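To make the mechanism concrete, here is a minimal sketch (not from the article, and not using the actual data behind Figure 4): it simulates providers in which practice size drives both EHR adoption and quality of care, while EHR use itself has no effect on quality. A naive EHR-versus-no-EHR comparison still "finds" a benefit; comparing within practice-size strata does not. All numbers are illustrative assumptions, loosely echoing the adoption rates shown in Figure 4.

```python
# Minimal sketch of the selection bias described above; all values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000                                        # simulated providers

# Large practices adopt EHRs more often and also deliver higher-quality care.
large_practice = rng.random(n) < 0.30
p_adopt = np.where(large_practice, 0.52, 0.29)    # adoption rates echo Figure 4
uses_ehr = rng.random(n) < p_adopt

# Quality depends on practice size only; EHR use has zero true effect here.
quality = 70 + 10 * large_practice + rng.normal(0, 5, size=n)

naive = quality[uses_ehr].mean() - quality[~uses_ehr].mean()
within_large = (quality[uses_ehr & large_practice].mean()
                - quality[~uses_ehr & large_practice].mean())
within_small = (quality[uses_ehr & ~large_practice].mean()
                - quality[~uses_ehr & ~large_practice].mean())

print(f"Naive EHR 'effect':            {naive:+.2f}")          # spuriously positive
print(f"Effect within large practices: {within_large:+.2f}")   # roughly zero
print(f"Effect within small practices: {within_small:+.2f}")   # roughly zero
```

Stratifying (or otherwise adjusting) only removes the bias that is measured; unmeasured differences between adopters and nonadopters can still distort simple comparative studies, which is the paragraph's central caution.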

