Comparing hospital mortality – how to count does matter for patients hospitalized for acute myocardial infarction (AMI), stroke and hip fracture.

Kristoffersen DT, Helgeland J, Clench-Aas J, Laake P, Veierød MB - BMC Health Serv Res (2012)

Bottom Line: Mortality based on in-and-out-of-hospital deaths, weighted according to length of stay at each hospital for transferred patients (W30D), was compared to a) mortality based on in-and-out-of-hospital deaths excluding patients treated at two or more hospitals (S30D), and b) mortality based on in-hospital deaths (IH30D). Mortality measures based on in-hospital deaths alone, or measures excluding admissions for transferred patients, can be misleading as indicators of hospital performance. For patients transferred between hospitals, we propose attributing the outcome to each treating hospital in proportion to the time the patient spent there, to reduce bias due to double counting or exclusion of hospital stays.


Affiliation: Norwegian Knowledge Centre for the Health Services, Quality Measurement Unit, PO Box 7004, St. Olavs plass, N-0130 Oslo, Norway. dok@nokc.no

ABSTRACT

Background: Mortality is a widely used, but often criticised, quality indicator for hospitals. In many countries, mortality is calculated from in-hospital deaths, due to limited access to follow-up data on patients transferred between hospitals and on discharged patients. The objectives were to: i) summarize time, place and cause of death for first-time acute myocardial infarction (AMI), stroke and hip fracture, and ii) compare case-mix adjusted 30-day mortality measures based on in-hospital deaths and in-and-out-of-hospital deaths, with and without patients transferred to other hospitals.

Methods: Norwegian hospital data within a 5-year period were merged with information from official registers. Mortality based on in-and-out-of-hospital deaths, weighted according to length of stay at each hospital for transferred patients (W30D), was compared to a) mortality based on in-and-out-of-hospital deaths excluding patients treated at two or more hospitals (S30D), and b) mortality based on in-hospital deaths (IH30D). Adjusted mortalities were estimated by logistic regression which, in addition to hospital, included age, sex and stage of disease. Hospitals were assigned outlier status according to their Z-values in the models; low mortality: Z-values below the 5th percentile; high mortality: Z-values above the 95th percentile; medium mortality: the remaining hospitals.
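
The paper does not publish code, so the following is a minimal Python sketch of the two mechanics described above: the length-of-stay weighting behind W30D and the percentile-based outlier classification. All names are our own, and the exact model specification (e.g. how the hospital Z-values are derived from the logistic regression) may differ from the authors'.

    import numpy as np

    def stay_weights(los_by_hospital):
        # Fraction of the total length of stay spent at each hospital.
        # los_by_hospital: dict hospital id -> days; weights sum to 1.
        # A non-transferred patient simply gets weight 1 at one hospital.
        total = sum(los_by_hospital.values())
        return {h: days / total for h, days in los_by_hospital.items()}

    def classify_outliers(z_values):
        # Low mortality: Z below the 5th percentile; high: above the 95th;
        # medium: all remaining hospitals (as defined in the Methods).
        z = np.asarray(z_values, dtype=float)
        lo, hi = np.percentile(z, [5, 95])
        return np.where(z < lo, "low", np.where(z > hi, "high", "medium"))

    # Example: a patient spending 3 days at hospital A before transfer and
    # 7 days at hospital B contributes 0.3 of the outcome to A and 0.7 to B.
    print(stay_weights({"A": 3, "B": 7}))  # {'A': 0.3, 'B': 0.7}

Under this weighting, a death is never counted twice and no admission is dropped: each hospital's share of the outcome matches its share of the patient's total stay.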

Results: The data included 48 048 AMI patients, 47 854 stroke patients and 40 142 hip fracture patients from 55, 59 and 58 hospitals, respectively. The overall relative frequencies of death within 30 days were 19.1% (AMI), 17.6% (stroke) and 7.8% (hip fracture). The cause-of-death diagnoses included the referral diagnosis for 73.8–89.6% of the deaths within 30 days. When comparing S30D versus W30D, outlier status changed for 14.6% (AMI), 15.3% (stroke) and 36.2% (hip fracture) of the hospitals. For IH30D compared to W30D, outlier status changed for 18.2% (AMI), 25.4% (stroke) and 27.6% (hip fracture) of the hospitals.

Conclusions: Mortality measures based on in-hospital deaths alone, or measures excluding admissions for transferred patients, can be misleading as indicators of hospital performance. For patients transferred between hospitals, we propose attributing the outcome to each treating hospital in proportion to the time the patient spent there, to reduce bias due to double counting or exclusion of hospital stays.
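
In symbols, the proposed attribution can be written as follows (the notation is ours; the paper states the rule in words). For patient $i$ with length of stay $t_{ij}$ at hospital $j$, the weight is

    $$ w_{ij} = \frac{t_{ij}}{\sum_{k} t_{ik}}, $$

and a crude weighted 30-day mortality for hospital $j$ is

    $$ \mathrm{W30D}_j = \frac{\sum_i w_{ij}\, y_i}{\sum_i w_{ij}}, $$

where $y_i = 1$ if patient $i$ died within 30 days of admission and $0$ otherwise. For a non-transferred patient $w_{ij} = 1$ at the single treating hospital, so the measure reduces to ordinary crude mortality; in the paper, the adjusted version comes from the case-mix logistic regression rather than this crude ratio.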


Figure 1: Number of hospitals shifting rank and direction of shift when comparing the ranks obtained by mortality measures S30D and IH30D versus W30D per medical condition. Shifts are categorized as none, minor (1–5 shifts), moderate (6–10 shifts), and major (>10 shifts). The top bar on every plot shows the number of hospitals with no shift in rank. The empty bars to the right of the vertical axis show the number of hospitals shifting to a better rank (lower mortality) when compared to W30D. The filled bars to the left of the vertical axis show the number of hospitals shifting to a worse rank (higher mortality) when compared to W30D.

Mentions: In Figure 1, back-to-back barplots display the shifts in hospital rank and the direction of shift, per shift category, when comparing S30D and IH30D to W30D, unadjusted (lower two rows) and case-mix adjusted (upper two rows), per medical condition. The ranking was highly influenced by the method of counting deaths. For the comparisons of adjusted mortalities, only 5–9% of the hospitals kept an unchanged rank. Most shifts were minor (77.0–86%) when comparing S30D versus W30D (top row, Figure 1). For IH30D versus W30D, 14% of the AMI, 17% of the stroke and 42% of the hip fracture hospitals had major (>10) shifts in rank (second row from the top, Figure 1). Only minor shifts in rank were seen between adjusted and unadjusted measures.
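
A small Python sketch of the shift categories used in Figure 1 (the thresholds follow the caption; the ranking details are our illustrative assumption, with rank 1 taken as the lowest mortality):

    def shift_category(rank_alt, rank_w30d):
        # How far a hospital's rank under S30D or IH30D moves
        # relative to its rank under W30D.
        shift = abs(rank_alt - rank_w30d)
        if shift == 0:
            return "none"
        if shift <= 5:
            return "minor"
        if shift <= 10:
            return "moderate"
        return "major"  # more than 10 places

    def shift_direction(rank_alt, rank_w30d):
        # Rank 1 = lowest mortality, so a smaller rank is better.
        if rank_alt == rank_w30d:
            return "unchanged"
        return "better" if rank_alt < rank_w30d else "worse"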

