Statistical resolution of ambiguous HLA typing data.

Listgarten J, Brumme Z, Kadie C, Gao X, Walker B, Carrington M, Goulder P, Heckerman D - PLoS Comput. Biol. (2008)

Bottom Line: Our improvements are achieved by using a parsimonious parameterization for haplotype distributions and by smoothing the maximum likelihood (ML) solution. These improvements make it possible to scale the refinement to a larger number of alleles and loci in a more computationally efficient and stable manner. We also show how to augment our method in order to incorporate ethnicity information (as HLA allele distributions vary widely according to race/ethnicity as well as geographic area), and demonstrate the potential utility of this experimentally.

View Article: PubMed Central - PubMed

Affiliation: Microsoft Research, Redmond, Washington, United States of America. jennl@microsoft.com

ABSTRACT
High-resolution HLA typing plays a central role in many areas of immunology, such as in identifying immunogenetic risk factors for disease, in studying how the genomes of pathogens evolve in response to immune selection pressures, and also in vaccine design, where identification of HLA-restricted epitopes may be used to guide the selection of vaccine immunogens. Perhaps one of the most immediate applications is in direct medical decisions concerning the matching of stem cell transplant donors to unrelated recipients. However, high-resolution HLA typing is frequently unavailable due to its high cost or the inability to re-type historical data. In this paper, we introduce and evaluate a method for statistical, in silico refinement of ambiguous and/or low-resolution HLA data. Our method, which requires an independent, high-resolution training data set drawn from the same population as the data to be refined, uses linkage disequilibrium in HLA haplotypes as well as four-digit allele frequency data to probabilistically refine HLA typings. Central to our approach is the use of haplotype inference. We introduce new methodology to this area, improving upon the Expectation-Maximization (EM)-based approaches currently used within the HLA community. Our improvements are achieved by using a parsimonious parameterization for haplotype distributions and by smoothing the maximum likelihood (ML) solution. These improvements make it possible to scale the refinement to a larger number of alleles and loci in a more computationally efficient and stable manner. We also show how to augment our method in order to incorporate ethnicity information (as HLA allele distributions vary widely according to race/ethnicity as well as geographic area), and demonstrate the potential utility of this experimentally. A tool based on our approach is freely available for research purposes at http://microsoft.com/science.
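The core refinement idea described in the abstract can be sketched in a few lines. The sketch below uses a toy haplotype frequency table standing in for frequencies estimated from a high-resolution training set, enumerates the four-digit genotypes consistent with a two-digit typing, and scores each by its haplotype-frequency product under an assumed Hardy-Weinberg equilibrium. All allele names and frequencies are invented for illustration; the published method's parsimonious parameterization and ML smoothing are not reproduced here.

```python
from itertools import combinations_with_replacement

# Toy haplotype frequencies over two loci (HLA-A, HLA-B), as might be
# estimated from a high-resolution training set. Values are illustrative.
haplotype_freq = {
    ("A*02:01", "B*07:02"): 0.40,
    ("A*02:05", "B*07:02"): 0.10,
    ("A*02:01", "B*08:01"): 0.30,
    ("A*02:05", "B*08:01"): 0.20,
}

def refine(ambiguous):
    """Posterior over four-digit genotypes consistent with an ambiguous
    (two-digit) typing, assuming Hardy-Weinberg equilibrium and, for
    simplicity, that both haplotypes match the given two-digit prefixes."""
    # Candidate haplotypes whose allele at every locus matches the
    # low-resolution typing (two-digit prefix match).
    candidates = [h for h in haplotype_freq
                  if all(a.startswith(p) for a, p in zip(h, ambiguous))]
    posterior = {}
    for h1, h2 in combinations_with_replacement(candidates, 2):
        p = haplotype_freq[h1] * haplotype_freq[h2]
        if h1 != h2:
            p *= 2  # a heterozygous pair can occur in two orders
        posterior[(h1, h2)] = p
    z = sum(posterior.values())
    return {g: p / z for g, p in posterior.items()}

# An individual typed only to two-digit resolution at both loci:
post = refine(("A*02", "B*07"))
best = max(post, key=post.get)  # MAP refinement
```

In this toy table the MAP refinement is the homozygous (A*02:01, B*07:02) genotype; in practice the posterior itself, not just its mode, is the useful output.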




pcbi-1000016-g002: Sensitivity to training data set size for the European and African data sets. The top row shows the geometric mean probabilities; the bottom row shows the percentage of correct MAP predictions.

Mentions: To determine whether more training data might improve the refinements, we examined the sensitivity of performance to the size of the training set. For the European and the African private data sets, we iteratively halved the training sample size, starting from the largest available training sets of 6020 and 2836 individuals, respectively. The results shown in Figure 2 suggest that more training data would improve performance on the African data set and, to a smaller extent, on the European data set. Note that the African data set is both smaller than the European one and known to be more genetically diverse; both factors may explain the observed trends.
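The halving experiment and the two reported metrics can be sketched as follows. Here `evaluate` is a hypothetical stand-in for fitting the refinement model on a given training subset and scoring a fixed held-out set; it is assumed to return the probability assigned to each held-out individual's true typing and a MAP hit/miss flag per individual.

```python
import math
import random

def geometric_mean(ps):
    # Geometric mean of the probabilities assigned to the true typings,
    # computed in log space for numerical stability.
    return math.exp(sum(math.log(p) for p in ps) / len(ps))

def sensitivity_curve(training, evaluate, min_size=100, seed=0):
    """Iteratively halve the training set, recording both metrics at each
    size. `evaluate(train)` must return (true_probs, map_hits) for a fixed
    held-out evaluation set."""
    rng = random.Random(seed)
    data, results = list(training), []
    while len(data) >= min_size:
        true_probs, map_hits = evaluate(data)
        results.append({
            "n": len(data),
            "geo_mean": geometric_mean(true_probs),
            "map_accuracy": sum(map_hits) / len(map_hits),
        })
        data = rng.sample(data, len(data) // 2)  # random half for next round
    return results
```

Plotting `geo_mean` and `map_accuracy` against `n` for each population reproduces the shape of the curves in Figure 2: if the curves are still rising at the largest available `n`, more training data would likely help.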

