An external validation of models to predict the onset of chronic kidney disease using population-based electronic health records from Salford, UK.

Fraccaro P, van der Veer S, Brown B, Prosperi M, O'Donoghue D, Collins GS, Buchan I, Peek N - BMC Med (2016)

Bottom Line: Five models also had an associated simplified scoring system. The two models that did not require recalibration were also the ones that had the best performance in the decision curve analysis. Clinical prediction models should be (re)calibrated for their intended uses.


Affiliation: NIHR Greater Manchester Primary Care Patient Safety Translational Research Centre, Institute of Population Health, The University of Manchester, Manchester, UK.

ABSTRACT

Background: Chronic kidney disease (CKD) is a major and increasing constituent of disease burdens worldwide. Early identification of patients at increased risk of developing CKD can guide interventions to slow disease progression, initiate timely referral to appropriate kidney care services, and support targeting of care resources. Risk prediction models can extend laboratory-based CKD screening to earlier stages of disease; however, to date, only a few of them have been externally validated or directly compared outside development populations. Our objective was to validate published CKD prediction models applicable in primary care.

Methods: We synthesised two recent systematic reviews of CKD risk prediction models and externally validated selected models for a 5-year horizon of disease onset. We used linked, anonymised, structured (coded) primary and secondary care data from patients resident in Salford (population ~234 k), UK. All adult patients with at least one record in 2009 were followed up until the end of 2014, death, or CKD onset (n = 178,399). CKD onset was defined as repeated impaired eGFR measures over a period of at least 3 months, or physician diagnosis of CKD Stage 3-5. For each model, we assessed discrimination, calibration, and decision curve analysis.
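The onset definition above (repeated impaired eGFR measures at least 3 months apart) can be sketched as a small ascertainment rule. This is an illustrative reading of the abstract's definition, not the authors' code: the threshold of 60 mL/min/1.73 m² corresponds to the Stage 3 boundary, and the patient data are invented.

```python
from datetime import date

# Stage 3 boundary, assumed as the "impaired" threshold for this sketch
IMPAIRED_EGFR = 60.0


def ckd_onset_date(measures):
    """measures: list of (date, eGFR) tuples, assumed sorted by date.

    Returns the date of the first impaired eGFR measure that is
    confirmed by another impaired measure at least 90 days (~3 months)
    later, else None.
    """
    impaired = [(d, v) for d, v in measures if v < IMPAIRED_EGFR]
    for i, (d1, _) in enumerate(impaired):
        for d2, _ in impaired[i + 1:]:
            if (d2 - d1).days >= 90:
                return d1
    return None


# Invented example series for one patient
series = [
    (date(2009, 1, 10), 72.0),
    (date(2009, 6, 1), 55.0),   # first impaired measure
    (date(2009, 7, 1), 58.0),   # < 90 days later: not yet confirmed
    (date(2009, 10, 5), 52.0),  # confirms onset (> 3 months apart)
]
print(ckd_onset_date(series))  # → 2009-06-01
```

In a real pipeline this rule would also have to handle a physician-recorded Stage 3-5 diagnosis code as an alternative onset trigger, as the Methods state.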

Results: Seven relevant CKD risk prediction models were identified. Five models also had an associated simplified scoring system. All models discriminated well between patients developing CKD or not, with c-statistics around 0.90. Most of the models were poorly calibrated to our population, substantially over-predicting risk. The two models that did not require recalibration were also the ones that had the best performance in the decision curve analysis.

Conclusions: Included CKD prediction models showed good discriminative ability but over-predicted the actual 5-year CKD risk in English primary care patients. QKidney, the only UK-developed model, outperformed the others. Clinical prediction models should be (re)calibrated for their intended uses.
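One common form of the recalibration recommended above is "calibration-in-the-large": keep the model's risk ranking (its discrimination) but shift all predictions on the logit scale so the average predicted risk matches the event rate observed in the new population. The sketch below is a generic, hedged illustration of that idea with invented numbers, not the recalibration method used in the paper.

```python
import math


def logit(p):
    return math.log(p / (1 - p))


def expit(x):
    return 1 / (1 + math.exp(-x))


def recalibrate(predicted, observed_rate):
    """Intercept-only recalibration sketch.

    Shifts every prediction on the logit scale by a common offset,
    chosen (crudely, from the mean predicted risk) so that systematic
    over-prediction is corrected while the ranking is preserved.
    """
    mean_pred = sum(predicted) / len(predicted)
    offset = logit(observed_rate) - logit(mean_pred)
    return [expit(logit(p) + offset) for p in predicted]


preds = [0.30, 0.20, 0.10]               # invented over-predicted 5-year risks
recalibrated = recalibrate(preds, 0.05)  # invented observed 5-year CKD rate
print([round(p, 3) for p in recalibrated])  # → [0.083, 0.05, 0.023]
```

Because only the intercept moves, the c-statistic is unchanged; full logistic recalibration would additionally fit a slope on the logit scale.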

No MeSH data available.



Fig. 1: Procedure to identify and select CKD prediction models

Mentions: Figure 1 depicts the model inclusion process. Of the 29 models identified by Collins et al. [14] and Echouffo-Tcheugui and Kengne [15], 18 were developed with the aim of predicting CKD onset. We excluded three models because of incomplete reporting of the regression models (regression coefficients not fully reported) in the original paper [49], and one model because it was developed in a specific sub-population (namely, HIV patients) [20]. We excluded a further seven models for which more than one predictor was missing in our dataset: missing eGFR, urinary excretion, and C-reactive protein [50]; missing post-prandial glucose, proteinuria, and uric acid [51]; missing eGFR and quantitative albuminuria [52]; and, finally, two models with missing eGFR and low levels of high-density lipoprotein cholesterol [52, 53], respectively. The final set consisted of seven models (five logistic regression models and two Cox proportional hazards (CPH) regression models) and five simplified scoring systems [36, 51–56]. Table 1 describes the details of the included models, and Additional file 3: Tables S1, S2 and S3 provide the population characteristics of the development datasets, the regression coefficients, and the simplified scoring systems.

