The effect of prior assumptions over the weights in BayesPI with application to study protein-DNA interactions from ChIP-based high-throughput data.

Wang J - BMC Bioinformatics (2010)

Bottom Line: To further understand the hyperparameter re-estimation technique in Bayesian hierarchical models, we added two more prior assumptions over the weights in BayesPI, namely a Laplace prior and a Cauchy prior, using the evidence approximation method. The newly implemented BayesPI was tested on both synthetic and real ChIP-based high-throughput datasets to identify the corresponding protein binding energy matrices. In the future, the evidence approximation method may serve as an alternative to Monte Carlo methods in the computational implementation of Bayesian hierarchical models.


Affiliation: Department of Pathology, The Norwegian Radium Hospital, Oslo University Hospital, Montebello 0310 Oslo, Norway. junbai.wang@rr-research.no

ABSTRACT

Background: To further understand the hyperparameter re-estimation technique in Bayesian hierarchical models, we added two more prior assumptions over the weights in BayesPI, namely a Laplace prior and a Cauchy prior, using the evidence approximation method. In addition, we divided the hyperparameters (the regularization constants α of the model) into multiple distinct classes based on either the structure of the neural networks or the properties of the weights.
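To make the three assumptions concrete, the following minimal Python sketch (not the BayesPI source; the function names and per-class grouping are illustrative) shows the penalty term each weight prior contributes to a regularized objective of the form M(w) = βE_D(w) + Σ_c α_c E_W(w_c), where each hyperparameter class c carries its own regularization constant α_c:

```python
import numpy as np

def weight_penalty(w, prior="gaussian"):
    """Negative log-prior E_W (up to additive constants) for one weight group.

    Gaussian: 0.5 * sum(w^2)   Laplace: sum(|w|)   Cauchy: sum(log(1 + w^2))
    """
    if prior == "gaussian":
        return 0.5 * np.sum(w ** 2)
    if prior == "laplace":
        return np.sum(np.abs(w))
    if prior == "cauchy":
        return np.sum(np.log1p(w ** 2))
    raise ValueError(f"unknown prior: {prior}")

def regularizer(weight_classes, alphas, prior="laplace"):
    """Sum of alpha_c * E_W(w_c) over the distinct hyperparameter classes,
    e.g. one class per network layer or per group of weights."""
    return sum(a * weight_penalty(w, prior)
               for w, a in zip(weight_classes, alphas))
```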

Results: The newly implemented BayesPI was tested on both synthetic and real ChIP-based high-throughput datasets to identify the corresponding protein binding energy matrices. The results were encouraging: 1) altering the prior assumptions over the weights (e.g. the prior probability distribution on the weights and the number of classes of hyperparameters) had only a minor effect on the quality of the predictions; 2) however, tuning the weight prior had a significant impact on computational speed: for example, BayesPI with a Laplace weight prior achieved the best performance with regard to both computational speed and prediction accuracy.

Conclusions: From this study, we learned that trying different prior assumptions over the weights in a Bayesian hierarchical model is essential to designing an efficient learning algorithm, even though the quality of the final results may not depend on such changes. In the future, the evidence approximation method may serve as an alternative to Monte Carlo methods in the computational implementation of Bayesian hierarchical models.
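Since the conclusions rest on the evidence approximation, a brief sketch of the hyperparameter re-estimation step may help. For the Gaussian case, MacKay's evidence framework re-estimates each regularization constant from the number of "well-determined" weights γ; the sketch below is an assumption-laden illustration (hypothetical helper names; the Laplace and Cauchy updates would follow the same pattern with their own E_W and curvature terms), not the paper's implementation:

```python
import numpy as np

def reestimate_alpha(w_class, hessian_eigs, alpha_old):
    """One evidence-approximation update for a single hyperparameter class,
    assuming a Gaussian weight prior.

    gamma = sum_i lam_i / (lam_i + alpha) counts the well-determined weights,
    where lam_i are eigenvalues of the data-error Hessian restricted to this
    class; the update is alpha_new = gamma / (2 * E_W), E_W = 0.5 * sum(w^2).
    """
    gamma = np.sum(hessian_eigs / (hessian_eigs + alpha_old))
    e_w = 0.5 * np.sum(w_class ** 2)
    return gamma / (2.0 * e_w)
```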



Figure 1: Performance comparisons on simulated ChIP-chip datasets. The upper panel shows box plots of the distribution of motif similarity scores across the 15 different weight-prior configurations. The lower panel shows box plots of the distribution of CPU hours used by the 15 prior assumptions over the weights. The red line represents the Gaussian prior on the weights (G1, G2, G3, G4, and G5), the blue line the Laplace prior (L1, L2, L3, L4, and L5), and the black line the Cauchy prior (C1, C2, C3, C4, and C5), where the numerals 1, 2, 3, 4, and 5 indicate a regularization constant α with one, two, three, four, and greater than five classes, respectively.

Mentions: To evaluate the performance of BayesPI under the 15 different prior assumptions over the weights, we first tried each of them on the same set of simulated ChIP-chip experiments (16 synthetic ChIP-chip datasets), where the synthetic DNA sequences and the ChIP-chip log ratios were generated using the MATLAB Bayes Net toolbox and MATLAB's built-in random number generator, respectively [1]. The accuracy of the predictions was assessed from motif similarity scores, obtained by comparing the predicted motif energy matrix with the corresponding SGD consensus sequences [6]. Figure 1 illustrates the outcomes of these simulations for the 15 prior assumptions, showing both the CPU hours required for the calculations and the distribution of the motif similarity scores across all tests. Interestingly, no significant change in prediction quality was observed across the tests after altering either the prior probability assumption or the number of subclasses of the α hyperparameters, except for the tests with the Gaussian approximation (comparing the distributions of motif similarity scores with the Wilcoxon rank-sum test: Gaussian vs. Cauchy, p < 0.03; Laplace vs. Cauchy, p < 0.04). The CPU hours used by the various tests, however, differed significantly. In particular, the choice of prior probability distribution over the weights in the Bayesian neural network had a much stronger impact on CPU cost than tuning the number of subclasses of the hyperparameters. For example, with a Laplace prior over the weights in BayesPI, the CPU hours required were reduced by a factor of roughly two to five compared with the other two probability distributions (comparing the distributions of CPU hours with the Wilcoxon rank-sum test: Gaussian vs. Laplace, p < 1.4e-9; Cauchy vs. Laplace, p < 5.8e-8); a sketch of such a comparison is given below. Notably, the Laplace prior on the weights required the fewest CPU hours while providing the best prediction accuracy. We can therefore expect the Laplace approximation over the weights to give the most efficient computation for BayesPI on real ChIP-chip datasets.
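For reference, significance tests of this form can be run with a standard Wilcoxon rank-sum test; the sketch below uses placeholder CPU-hour arrays (randomly generated, not the published measurements) to show the shape of the comparison between the pooled Gaussian (G1-G5) and Laplace (L1-L5) configurations:

```python
import numpy as np
from scipy.stats import ranksums

# Placeholder CPU-hour samples standing in for the pooled G1-G5 and L1-L5
# measurements; the published data are not reproduced here.
rng = np.random.default_rng(0)
cpu_gaussian = rng.gamma(shape=4.0, scale=2.0, size=80)
cpu_laplace = rng.gamma(shape=4.0, scale=0.7, size=80)

# Two-sided Wilcoxon rank-sum test on the two CPU-hour distributions.
stat, p_value = ranksums(cpu_gaussian, cpu_laplace)
print(f"Wilcoxon rank-sum: statistic = {stat:.2f}, p = {p_value:.2e}")
```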


