Joint use of over- and under-sampling techniques and cross-validation for the development and assessment of prediction models.

Blagus R, Lusa L - BMC Bioinformatics (2015)

Bottom Line: We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and on simulation studies are provided. We identify some results from the biomedical literature where cross-validation was performed incorrectly; in these cases we expect that the performance of the oversampling techniques was heavily overestimated.


Affiliation: Institute for Biostatistics and Medical Informatics, University of Ljubljana, Vrazov trg 2, Ljubljana, Slovenia. rok.blagus@mf.uni-lj.si.

ABSTRACT

Background: Prediction models are used in clinical research to develop rules that can accurately predict patients' outcomes based on some of their characteristics. They are a valuable tool in the decision making of clinicians and health policy makers, as they make it possible to estimate the probability that patients have or will develop a disease, will respond to a treatment, or that their disease will recur. Interest in prediction models in the biomedical community has been growing in the last few years. Often the data used to develop prediction models are class-imbalanced, as only a few patients experience the event of interest (and therefore belong to the minority class).

Results: Prediction models developed using class-imbalanced data tend to achieve sub-optimal predictive accuracy in the minority class. This problem can be diminished by using sampling techniques aimed at balancing the class distribution. These techniques include undersampling, where only a fraction of the majority-class samples is retained in the analysis, and oversampling, where new minority-class samples are generated. Correctly assessing how the prediction model is likely to perform on independent data is of crucial importance; in the absence of an independent data set, cross-validation is normally used. While the importance of correct cross-validation is well documented in the biomedical literature, the challenges posed by the joint use of sampling techniques and cross-validation have not been addressed.
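
To make the two sampling schemes concrete, the following is a minimal sketch of naive random under- and oversampling. This is an illustration written for this summary, not the authors' code; the function names, the seed, and the choice to balance the classes exactly are assumptions.

    import numpy as np

    def undersample(X, y, rng):
        """Random undersampling: keep a random subset of each class so
        that every class is reduced to the minority-class size."""
        classes, counts = np.unique(y, return_counts=True)
        n_min = counts.min()
        keep = []
        for c in classes:
            idx = np.flatnonzero(y == c)
            keep.append(rng.choice(idx, size=n_min, replace=False))
        keep = np.concatenate(keep)
        return X[keep], y[keep]

    def oversample(X, y, rng):
        """Naive random oversampling: replicate randomly chosen samples
        until every class reaches the majority-class size."""
        classes, counts = np.unique(y, return_counts=True)
        n_max = counts.max()
        keep = []
        for c in classes:
            idx = np.flatnonzero(y == c)
            extra = rng.choice(idx, size=n_max - idx.size, replace=True)
            keep.append(np.r_[idx, extra])
        keep = np.concatenate(keep)
        return X[keep], y[keep]

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = np.r_[np.zeros(90, dtype=int), np.ones(10, dtype=int)]  # 90/10 imbalance
    print(np.bincount(oversample(X, y, rng)[1]))   # -> [90 90]
    print(np.bincount(undersample(X, y, rng)[1]))  # -> [10 10]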

Conclusions: We show that care must be taken to ensure that cross-validation is performed correctly on sampled data, and that the risk of overestimating predictive accuracy is greater when oversampling techniques are used. Examples based on the re-analysis of real datasets and on simulation studies are provided. We identify some results from the biomedical literature where cross-validation was performed incorrectly; in these cases we expect that the performance of the oversampling techniques was heavily overestimated.
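
The pitfall is easiest to see in code. The sketch below is an illustration, not the authors' implementation; the pure-noise dataset, the classifier, and the fold count are arbitrary assumptions. It contrasts the incorrect approach, which oversamples the whole dataset before splitting it into CV folds, with the correct approach, which oversamples only the training fold inside the CV loop. Because the incorrect approach lets replicas of test samples leak into the training fold, its accuracy estimate tends to be optimistically biased even though no real signal is present.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(1)

    # Pure-noise data with 90/10 class imbalance: no classifier can truly
    # do better than chance on independent data.
    X = rng.normal(size=(100, 20))
    y = np.r_[np.zeros(90, dtype=int), np.ones(10, dtype=int)]

    def oversample(X, y, rng):
        """Naive random oversampling of the minority class (label 1)."""
        idx_maj, idx_min = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
        extra = rng.choice(idx_min, size=idx_maj.size - idx_min.size, replace=True)
        keep = np.r_[idx_maj, idx_min, extra]
        return X[keep], y[keep]

    def cv_accuracy(X, y, oversample_inside_cv):
        accs = []
        cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)
        for tr, te in cv.split(X, y):
            X_tr, y_tr = X[tr], y[tr]
            if oversample_inside_cv:  # correct: sample the training fold only
                X_tr, y_tr = oversample(X_tr, y_tr, rng)
            model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
            accs.append(accuracy_score(y[te], model.predict(X[te])))
        return float(np.mean(accs))

    # Incorrect: oversample first, cross-validate the augmented data afterwards;
    # replicas of each test sample can sit in the training fold.
    X_os, y_os = oversample(X, y, rng)
    print("incorrect CV accuracy:", cv_accuracy(X_os, y_os, oversample_inside_cv=False))

    # Correct: cross-validate the original data, oversampling each training fold.
    print("correct CV accuracy:  ", cv_accuracy(X, y, oversample_inside_cv=True))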


Fig. 2: Probability that at least one of the replicas of a sample included in the test fold is also included in the training fold, as a function of the proportion of minority-class samples (p_min), in a dataset with n = 100 samples when 2-fold CV is used.

Mentions: As an illustration, Fig. 2 shows how the probability that a test (left-out) sample has a replica in the learning fold depends on the level of class imbalance in a dataset with n = 100 samples when a 2-fold split is used (p_test = 0.5). The probability is very large for high levels of class imbalance and approaches zero as the class distribution becomes more balanced.
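
The quantity plotted in Fig. 2 can be approximated by simulation. The sketch below is an illustrative Monte Carlo check, not the authors' computation; the function name, seed, and repetition count are assumptions. It balances the classes by naive random oversampling, randomly splits the augmented data into two folds, and counts how often at least one test-fold sample has a replica in the training fold. With balanced classes (p_min = 0.5) no replicas exist, so the probability is zero, as described above.

    import numpy as np

    def prob_replica_in_training(n=100, p_min=0.1, n_rep=2000, seed=2):
        """Monte Carlo estimate of the probability that, after naive random
        oversampling to balanced classes and a random 2-fold split, at least
        one test-fold sample has a replica in the training fold."""
        rng = np.random.default_rng(seed)
        n_min = max(1, round(n * p_min))
        n_maj = n - n_min
        hits = 0
        for _ in range(n_rep):
            # Tag every row of the augmented dataset with its original sample id:
            # majority ids are unique; minority ids recur once replicas are added.
            minority = np.r_[np.arange(n_min),
                             rng.choice(n_min, size=n_maj - n_min, replace=True)]
            ids = np.r_[np.arange(n_min, n_min + n_maj), minority]
            perm = rng.permutation(ids.size)
            half = ids.size // 2
            test, train = ids[perm[:half]], ids[perm[half:]]
            hits += np.intersect1d(test, train).size > 0
        return hits / n_rep

    for p_min in (0.05, 0.10, 0.25, 0.50):
        print(f"p_min = {p_min:.2f}: P ~ {prob_replica_in_training(p_min=p_min):.3f}")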

