Genomic data sampling and its effect on classification performance assessment.

Azuaje F - BMC Bioinformatics (2003)

Bottom Line: These methods are designed to reduce the bias and variance of small-sample estimations. Conservative and optimistic accuracy estimations can be obtained by applying different methods. Guidelines are suggested to select a sampling technique according to the complexity of the prediction problem under consideration.


Affiliation: School of Computing and Mathematics, University of Ulster, Jordanstown, Northern Ireland, UK. fj.azuaje@ulster.ac.uk

ABSTRACT

Background: Supervised classification is fundamental in bioinformatics. Machine learning models, such as neural networks, have been applied to discover genes and expression patterns. This process is achieved by implementing training and test phases. In the training phase, a set of cases and their respective labels are used to build a classifier. During testing, the classifier is used to predict new cases. One approach to assessing its predictive quality is to estimate its accuracy during the test phase. Key limitations appear when dealing with small data samples. This paper investigates the effect of data sampling techniques on the assessment of neural network classifiers.
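
As a concrete illustration of this train-test protocol, the following minimal sketch fits a small neural network on synthetic data and estimates its accuracy on held-out cases. The scikit-learn library, the synthetic dataset, and all parameters are illustrative assumptions, not the paper's own setup.

```python
# Minimal sketch of the train-test protocol: fit on labelled cases,
# then estimate accuracy on unseen cases. Illustrative only.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a small expression dataset: 72 cases, 60 features.
X, y = make_classification(n_samples=72, n_features=60, n_classes=3,
                           n_informative=10, random_state=0)

# Training phase: build a classifier from cases and their labels.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25,
                                                    random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)

# Test phase: predict new cases and estimate accuracy.
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```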

Results: Three data sampling techniques were studied: cross-validation, leave-one-out, and bootstrap. These methods are designed to reduce the bias and variance of small-sample estimations. Two prediction problems based on small sample sets were considered: classification of microarray data originating from a leukemia study and from small, round blue-cell tumours. A third problem, the prediction of splice junctions, was analysed for comparison. Different accuracy estimations were produced for each problem, and the variations are accentuated in the small data samples. The quality of the estimates depends on the number of train-test experiments and the amount of data used for training the networks.
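
The three sampling schemes can be sketched as follows. This is an illustration of the general techniques, not a reproduction of the paper's exact experimental protocol; in particular, the bootstrap shown is the common out-of-bag variant, which may differ from the paper's.

```python
# Sketch of the three sampling schemes: cross-validation, leave-one-out,
# and bootstrap. Dataset and classifier are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=72, n_features=60, n_classes=3,
                           n_informative=10, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)

# k-fold cross-validation: each case is tested exactly once.
cv_acc = cross_val_score(clf, X, y,
                         cv=KFold(n_splits=5, shuffle=True,
                                  random_state=0)).mean()

# Leave-one-out: n train-test runs, each testing a single case.
loo_acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

# Bootstrap: train on a resample drawn with replacement, test on the
# out-of-bag cases never drawn into the training resample.
rng = np.random.default_rng(0)
boot = rng.integers(0, len(X), len(X))       # indices sampled with replacement
oob = np.setdiff1d(np.arange(len(X)), boot)  # cases left out of the resample
boot_acc = clf.fit(X[boot], y[boot]).score(X[oob], y[oob])

print(cv_acc, loo_acc, boot_acc)
```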

Conclusion: The predictive quality assessment of biomolecular data classifiers depends on the data size, the sampling technique and the number of train-test experiments. Conservative and optimistic accuracy estimations can be obtained by applying different methods. Guidelines are suggested for selecting a sampling technique according to the complexity of the prediction problem under consideration.



Figure 11: Accuracy estimation for the splice-junction sequence classifier (III). Cross-validation method based on a 95%–5% split. Prediction accuracy values and 95% confidence intervals for the means are shown for increasing numbers of train-test runs: A: 10, B: 25, C: 50, D: 100, E: 200, F: 300, G: 400, H: 500, I: 800, J: 1000.

Mentions: Each BP-ANN consisted of 60 input nodes, 10 hidden nodes and 3 output nodes. The sampling techniques generated different accuracy estimates, but unlike with the expression datasets, the differences between methods were less pronounced. There were no statistically significant differences among the estimates produced by the train-test experiments within a given data sampling method. Moreover, fewer train-test runs were required to reduce the variance of the cross-validation and bootstrap estimates. Figures 9, 10 and 11 show the mean accuracy estimates and their 95% confidence intervals for each cross-validation technique, respectively. Figure 9 indicates that more than 300 train-test runs are required to significantly reduce the variance of the 50%–50% cross-validation estimates; however, a confidence interval width of 0.01 had already been achieved with only 50 runs. In general this method produced the most conservative cross-validation accuracy estimates. Figure 10 shows that only 50 train-test runs are required to significantly reduce the variance of the 75%–25% cross-validation estimates, and the 95%–5% cross-validation method (Figure 11) needed only 100 train-test runs to achieve the same. This splitting method generated one of the most optimistic accuracy estimates for this dataset. The leave-one-out method also produced one of the highest accuracy estimates for this problem (0.97); there were no significant differences between the accuracy estimates produced by these two methods. Finally, Figure 12 illustrates the results generated by the bootstrap technique, for which only 100 train-test runs were required to significantly reduce the variance of the estimates.
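
The repeated-split procedure behind Figure 11 can be approximated as below: run many independent 95%–5% splits, record the test accuracy of each run, and report the mean with a 95% confidence interval. The 60-10-3 topology mirrors the BP-ANN described above, but the synthetic dataset, the scikit-learn library, and the normal-approximation interval are assumptions made for illustration.

```python
# Sketch of repeated 95%-5% train-test splits with a 95% confidence
# interval for the mean accuracy (normal approximation). Illustrative:
# the real study used the splice-junction dataset, not synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=3000, n_features=60, n_classes=3,
                           n_informative=10, random_state=0)

def repeated_split_ci(n_runs, test_size=0.05):
    accs = []
    for run in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size,
                                                  random_state=run)
        # 60 inputs and 3 outputs are implied by the data; 10 hidden nodes
        # mirror the 60-10-3 topology described above.
        clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=500,
                            random_state=run)
        accs.append(clf.fit(X_tr, y_tr).score(X_te, y_te))
    accs = np.array(accs)
    half = 1.96 * accs.std(ddof=1) / np.sqrt(n_runs)  # 95% CI half-width
    return accs.mean(), half

for n in (10, 50, 100):  # the interval narrows roughly as 1/sqrt(n_runs)
    m, h = repeated_split_ci(n)
    print(f"{n:4d} runs: {m:.3f} ± {h:.3f}")
```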

