A Bayesian method for comparing and combining binary classifiers in the absence of a gold standard.

Keith JM, Davey CM, Boyd SE - BMC Bioinformatics (2012)

Bottom Line: In all cases, run times were feasible, and results precise. In all three test cases, the globally optimal logical combination of the classifiers was found to be their union, according to three out of four ranking criteria. We propose as a general rule of thumb that the union of classifiers will be close to optimal.


Affiliation: School of Mathematical Sciences, Monash University, Victoria 3800, Australia. jonathan.keith@monash.edu

ABSTRACT

Background: Many problems in bioinformatics involve classification based on features such as sequence, structure or morphology. Given multiple classifiers, two crucial questions arise: how does their performance compare, and how can they best be combined to produce a better classifier? A classifier can be evaluated in terms of sensitivity and specificity using benchmark, or gold standard, data, that is, data for which the true classification is known. However, a gold standard is not always available. Here we demonstrate that a Bayesian model for comparing medical diagnostics without a gold standard can be successfully applied in the bioinformatics domain, to genomic-scale data sets. We present a new implementation which, unlike previous implementations, is applicable to any number of classifiers. We apply this model, for the first time, to the problem of finding the globally optimal logical combination of classifiers.

Results: We compared three classifiers of protein subcellular localisation, and evaluated our estimates of sensitivity and specificity against estimates obtained using a gold standard. The method overestimated sensitivity and specificity with only a small discrepancy, and correctly ranked the classifiers. Diagnostic tests for swine flu were then compared on a small data set. Lastly, classifiers for a genome-wide association study of macular degeneration with 541,094 SNPs were analysed. In all cases, run times were feasible, and results precise. The optimal logical combination of classifiers was also determined for all three data sets. Code and data are available from http://bioinformatics.monash.edu.au/downloads/.

Conclusions: The examples demonstrate that the methods are suitable for both small and large data sets, applicable to a wide range of bioinformatics classification problems, and robust to dependence between classifiers. In all three test cases, the globally optimal logical combination of the classifiers was found to be their union, according to three out of four ranking criteria. We propose as a general rule of thumb that the union of classifiers will be close to optimal.
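The intuition behind the union rule of thumb can be made concrete with a small calculation. The sketch below is illustrative only (the figures are made up, not from the paper) and assumes the two classifiers are conditionally independent given the true class: the OR-combination trades specificity for sensitivity, which is why it tends to do well when individual sensitivities are modest.

```python
def union_rates(sens1, spec1, sens2, spec2):
    """Sensitivity and specificity of the OR-combination (union) of two
    conditionally independent binary classifiers."""
    # The union misses a true positive only if both classifiers miss it.
    sens_union = 1 - (1 - sens1) * (1 - sens2)
    # The union is correct on a true negative only if both classifiers are.
    spec_union = spec1 * spec2
    return sens_union, spec_union
```

For example, classifiers with sensitivities 0.7 and 0.8 and specificities 0.95 and 0.90 combine under the union to sensitivity 0.94 and specificity 0.855.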


Figure 3: Conditional dependencies of the parameters of the model. ϕ is the proportion of the population that is positive for the feature of interest, Tn is the true classification of individual n, αk and βk are the probabilities of a true positive and a false positive (respectively) for classifier k, and Ckn is the classification of individual n according to classifier k.

Mentions: Figure 3 shows the conditional dependencies of the model, for an arbitrary number of individuals N and classifiers K. Let Ckn be the outcome of Classifier k for individual n, with Ckn = 1 indicating a positive result and Ckn = 0 a negative result. These outcomes are modeled as independent Bernoulli trials, conditional on the true classification of each individual (that is, the classifiers are conditionally independent). Let Tn be the true classification for individual n, and let αk and βk denote the true positive and false positive rates of Classifier k, respectively. Hence:

P(Ckn = 1 | Tn) = αk Tn + βk (1 − Tn),

that is, P(Ckn = 1 | Tn = 1) = αk and P(Ckn = 1 | Tn = 0) = βk.
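The generative model just described can be sketched as a short forward simulation, together with the Bayes-rule posterior for a single individual's true class. This is an illustrative sketch under the stated conditional-independence assumption, not the authors' implementation (the paper fits the model by Bayesian sampling); the function names `simulate` and `posterior_T` are my own.

```python
import random

def simulate(phi, alphas, betas, N, seed=0):
    """Forward-simulate the model of Figure 3: T_n ~ Bernoulli(phi), and
    given T_n each classifier call C_kn is an independent Bernoulli with
    success probability alpha_k (if T_n = 1) or beta_k (if T_n = 0)."""
    rng = random.Random(seed)
    K = len(alphas)
    T = [int(rng.random() < phi) for _ in range(N)]
    C = [[int(rng.random() < (alphas[k] if T[n] else betas[k]))
          for n in range(N)] for k in range(K)]
    return T, C

def posterior_T(phi, alphas, betas, outcomes):
    """P(T_n = 1 | C_1n, ..., C_Kn) by Bayes' rule under the same model;
    `outcomes` holds the K binary classifier calls for one individual."""
    like1 = like0 = 1.0
    for a, b, c in zip(alphas, betas, outcomes):
        like1 *= a if c else 1 - a  # contribution given T_n = 1
        like0 *= b if c else 1 - b  # contribution given T_n = 0
    return phi * like1 / (phi * like1 + (1 - phi) * like0)
```

As a sanity check, the marginal positive rate of classifier k in a large simulation should be close to ϕ αk + (1 − ϕ) βk, which follows directly from the law of total probability.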