Genomic data sampling and its effect on classification performance assessment.

Azuaje F - BMC Bioinformatics (2003)

Bottom Line: These methods are designed to reduce the bias and variance of small-sample estimations. Conservative and optimistic accuracy estimations can be obtained by applying different methods. Guidelines are suggested to select a sampling technique according to the complexity of the prediction problem under consideration.

View Article: PubMed Central - HTML - PubMed

Affiliation: School of Computing and Mathematics, University of Ulster, Jordanstown, Northern Ireland, UK. fj.azuaje@ulster.ac.uk

ABSTRACT

Background: Supervised classification is fundamental in bioinformatics. Machine learning models, such as neural networks, have been applied to discover genes and expression patterns. This process is achieved by implementing training and test phases. In the training phase, a set of cases and their respective labels are used to build a classifier. During testing, the classifier is used to predict the labels of new cases. One approach to assessing its predictive quality is to estimate its accuracy during the test phase. Key limitations appear when dealing with small data samples. This paper investigates the effect of data sampling techniques on the assessment of neural network classifiers.

Results: Three data sampling techniques were studied: cross-validation, leave-one-out, and bootstrap. These methods are designed to reduce the bias and variance of small-sample estimations. Two prediction problems based on small sample sets were considered: classification of microarray data originating from a leukemia study and from small, round blue-cell tumours. A third problem, the prediction of splice junctions, was analysed for comparison. Different accuracy estimates were produced for each problem. The variations were accentuated in the small data samples. The quality of the estimates depends on the number of train-test experiments and on the amount of data used for training the networks.
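The three sampling schemes compared in the paper differ only in how they partition the sample indices into train and test sets. As a minimal sketch (the function names and the use of plain Python are illustrative assumptions, not the authors' code), the index-generation logic of each scheme can be written as:

```python
import random

def kfold_indices(n, k, seed=0):
    """k-fold cross-validation: shuffle n indices, split into k disjoint test folds."""
    rng = random.Random(seed)
    idx = list(range(n))
    rng.shuffle(idx)
    return [idx[i::k] for i in range(k)]

def leave_one_out_indices(n):
    """Leave-one-out: each sample is the single-item test set exactly once (n runs)."""
    return [[i] for i in range(n)]

def bootstrap_indices(n, n_resamples, seed=0):
    """Bootstrap: draw n training indices with replacement; the samples never
    drawn form the test set for that resample."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_resamples):
        train = [rng.randrange(n) for _ in range(n)]
        held_out = [i for i in range(n) if i not in set(train)]
        splits.append((train, held_out))
    return splits

# Example: 10 samples, 5-fold CV -> 5 disjoint test folds covering every sample once
folds = kfold_indices(10, 5)
assert sorted(i for fold in folds for i in fold) == list(range(10))
```

Leave-one-out is the k = n extreme of cross-validation, which is why it is the most expensive scheme (n train-test experiments) and, as the abstract notes, why the number of experiments and the training-set size both shape the quality of the resulting accuracy estimate.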

Conclusion: The predictive quality assessment of biomolecular data classifiers depends on the data size, sampling techniques and the number of train-test experiments. Conservative and optimistic accuracy estimations can be obtained by applying different methods. Guidelines are suggested to select a sampling technique according to the complexity of the prediction problem under consideration.


Figure 20: Entropy error during training for a SRBCT classifier (IV), leave-one-out data splitting.

