Efficient implementation of MrBayes on multi-GPU.

Bao J, Xia H, Zhou J, Liu X, Wang G - Mol. Biol. Evol. (2013)

Bottom Line: This article describes an efficient implementation, a(MC)³ (aMCMCMC), of MrBayes (MC)³ on the compute unified device architecture (CUDA). By dynamically adjusting the task granularity to adapt to the input data size and hardware configuration, it makes full use of GPU cores with different data sets. Experimental results show that a(MC)³ achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)³ is dramatically faster than all previous (MC)³ algorithms and scales well to large GPU clusters.

View Article: PubMed Central - PubMed

Affiliation: College of Information Technical Science, Nankai University, Tianjin, China.

ABSTRACT
MrBayes, which uses Metropolis-coupled Markov chain Monte Carlo (MCMCMC or (MC)³), is a popular program for Bayesian phylogenetic inference. As a leading method of using DNA data to infer phylogeny, the (MC)³ Bayesian algorithm and its improved and parallel versions are still not fast enough for biologists to analyze massive real-world DNA data sets. Recently, the graphics processing unit (GPU) has shown its power as a coprocessor (or rather, an accelerator) in many fields. This article describes an efficient implementation, a(MC)³ (aMCMCMC), of MrBayes (MC)³ on the compute unified device architecture (CUDA). By dynamically adjusting the task granularity to adapt to the input data size and hardware configuration, it makes full use of GPU cores with different data sets. An adaptive method is also developed to split and combine DNA sequences to make full use of a large number of GPU cards. Furthermore, a new "node-by-node" task scheduling strategy is developed to improve concurrency, and several optimization methods are used to reduce extra overhead. Experimental results show that a(MC)³ achieves up to 63× speedup over serial MrBayes on a single machine with one GPU card, up to 170× speedup with four GPU cards, and up to 478× speedup with a 32-node GPU cluster. a(MC)³ is dramatically faster than all previous (MC)³ algorithms and scales well to large GPU clusters.
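
The abstract's claim about "dynamically adjusting the task granularity" can be made concrete with a small sketch. The heuristic, the helper name chooseGranularity, and all constants below are illustrative assumptions, not the actual a(MC)³ policy: the idea is simply to query the hardware configuration and pick how many site patterns each CUDA thread handles so that all multiprocessors stay busy for both small and large data sets.

// Illustrative sketch only: adapt task granularity (threads per block and
// site patterns per thread) to the data size and the GPU in use.
// The heuristic and names below are assumptions, not the a(MC)^3 code.
#include <cuda_runtime.h>
#include <cstdio>

struct LaunchConfig { int blocks; int threadsPerBlock; int sitesPerThread; };

LaunchConfig chooseGranularity(int numSitePatterns)   // hypothetical helper
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);                // query hardware configuration

    LaunchConfig cfg;
    cfg.threadsPerBlock = 256;
    // Target enough threads to occupy every multiprocessor; for large data
    // sets, let each thread process several site patterns instead of
    // launching an excessive number of blocks.
    int targetThreads = prop.multiProcessorCount * 8 * cfg.threadsPerBlock;
    cfg.sitesPerThread = (numSitePatterns + targetThreads - 1) / targetThreads;
    if (cfg.sitesPerThread < 1) cfg.sitesPerThread = 1;
    cfg.blocks = (numSitePatterns + cfg.threadsPerBlock * cfg.sitesPerThread - 1)
                 / (cfg.threadsPerBlock * cfg.sitesPerThread);
    return cfg;
}

int main(void)
{
    LaunchConfig cfg = chooseGranularity(100000);     // e.g., 100,000 site patterns
    printf("blocks=%d, threads/block=%d, sites/thread=%d\n",
           cfg.blocks, cfg.threadsPerBlock, cfg.sitesPerThread);
    return 0;
}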


Fig. 7 (mst043-F7): Different pipelining models: without pipelining, “chain-by-chain,” and “node-by-node.” T_s denotes the kernel startup time of the s-th internal node, T_k denotes the kernel execution time of the k-th internal node, and h is the number of nodes in a chain.

Mentions: On the GPU side, the pipelining model views each Markov chain as a CUDA stream composed of sequential subtasks: uploading transition probability matrices onto the GPU, calling kernels to calculate conditional likelihoods and local likelihoods on the GPU, and downloading local likelihoods to the CPU. Without any pipelining model (fig. 7a), all chains are run serially; for instance, chain i + 1 will not start its uploading step until its immediate predecessor, chain i, has transferred all of its result data back to the CPU side.
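
As a minimal sketch of this stream-per-chain pipelining idea (chain-level overlap, not the paper's "node-by-node" scheduling), the CUDA code below issues the upload, kernel, and download subtasks of each chain into its own cudaStream_t, so that chain i + 1's upload can overlap with chain i's kernel or download. The kernel, buffer sizes, and stand-in arithmetic are hypothetical placeholders, not the a(MC)³ implementation.

// Illustrative sketch only: one CUDA stream per Markov chain so the
// upload / kernel / download subtasks of different chains can overlap,
// in contrast with the fully serial model of figure 7a.
#include <cuda_runtime.h>
#include <cstdio>
#include <cstring>

#define NUM_CHAINS   4
#define TIPROB_BYTES (64 * 1024)    // transition probability matrices (placeholder size)
#define LIKE_BYTES   (256 * 1024)   // local likelihoods (placeholder size)

// Hypothetical kernel standing in for the conditional/local likelihood computation.
__global__ void likelihoodKernel(const float *tiProb, float *localLike, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        localLike[i] = tiProb[i % (TIPROB_BYTES / sizeof(float))] * 0.25f; // stand-in math
}

int main(void)
{
    cudaStream_t stream[NUM_CHAINS];
    float *hTiProb[NUM_CHAINS], *hLike[NUM_CHAINS];   // pinned host buffers
    float *dTiProb[NUM_CHAINS], *dLike[NUM_CHAINS];   // device buffers
    int n = LIKE_BYTES / sizeof(float);

    for (int c = 0; c < NUM_CHAINS; ++c) {
        cudaStreamCreate(&stream[c]);
        cudaMallocHost((void **)&hTiProb[c], TIPROB_BYTES);  // pinned memory enables async copies
        cudaMallocHost((void **)&hLike[c],   LIKE_BYTES);
        memset(hTiProb[c], 0, TIPROB_BYTES);                  // dummy input data
        cudaMalloc((void **)&dTiProb[c], TIPROB_BYTES);
        cudaMalloc((void **)&dLike[c],   LIKE_BYTES);
    }

    for (int c = 0; c < NUM_CHAINS; ++c) {
        // 1) upload transition probability matrices for chain c
        cudaMemcpyAsync(dTiProb[c], hTiProb[c], TIPROB_BYTES,
                        cudaMemcpyHostToDevice, stream[c]);
        // 2) compute conditional and local likelihoods for chain c
        likelihoodKernel<<<(n + 255) / 256, 256, 0, stream[c]>>>(dTiProb[c], dLike[c], n);
        // 3) download local likelihoods for chain c
        cudaMemcpyAsync(hLike[c], dLike[c], LIKE_BYTES,
                        cudaMemcpyDeviceToHost, stream[c]);
        // No synchronization here: the next chain is issued immediately,
        // so its upload can overlap with this chain's kernel or download.
    }
    cudaDeviceSynchronize();   // wait for all chains before using results on the CPU

    for (int c = 0; c < NUM_CHAINS; ++c) {
        cudaStreamDestroy(stream[c]);
        cudaFreeHost(hTiProb[c]); cudaFreeHost(hLike[c]);
        cudaFree(dTiProb[c]);     cudaFree(dLike[c]);
    }
    printf("issued %d pipelined chains\n", NUM_CHAINS);
    return 0;
}

As the figure caption suggests, the paper's "node-by-node" strategy refines this further by overlapping work at the granularity of individual internal nodes within a chain rather than whole chains.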

