GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations.

Jung J, Mori T, Kobayashi C, Matsunaga Y, Yoda T, Feig M, Sugita Y - Wiley Interdiscip Rev Comput Mol Sci (2015)

Bottom Line: Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.


Affiliation: Computational Biophysics Research Team, RIKEN Advanced Institute for Computational Science, Kobe, Japan.

ABSTRACT

GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.
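Among the enhanced-sampling methods listed above is temperature replica-exchange (T-REMD). As a point of reference, the sketch below shows the standard Metropolis criterion for swapping two temperature replicas; it is a minimal illustration of the textbook formula under assumed units, not code taken from GENESIS, and the function name and example values are purely illustrative.

    import math
    import random

    # Boltzmann constant in kcal/(mol*K); energies assumed in kcal/mol.
    KB = 0.0019872041

    def t_remd_exchange(energy_i, energy_j, temp_i, temp_j):
        """Metropolis criterion for swapping two temperature replicas.

        Accept with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)]),
        where beta = 1 / (KB * T) and E_i, E_j are the current potential
        energies of the replicas running at temperatures T_i and T_j.
        """
        beta_i = 1.0 / (KB * temp_i)
        beta_j = 1.0 / (KB * temp_j)
        delta = (beta_i - beta_j) * (energy_i - energy_j)
        return delta >= 0.0 or random.random() < math.exp(delta)

    # Example: attempt an exchange between neighbouring replicas at 300 K and
    # 310 K (the energies are arbitrary placeholder values, not GENESIS output).
    accept = t_remd_exchange(energy_i=-1250.0, energy_j=-1238.0,
                             temp_i=300.0, temp_j=310.0)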


fig03: Benchmark performance of MD simulations of (a) DHFR, (b) ApoA1, and (c) STMV on PC clusters, and of (d) STMV and macromolecular crowding systems consisting of (e) 11.7 million atoms and (f) 103.7 million atoms on the K computer.

Mentions: Benchmark performance tests were carried out on our in-house PC cluster, in which 32 nodes are connected with InfiniBand FDR. Each node has two Intel Xeon E5-2690 CPUs, each with eight 2.9 GHz cores; in total, up to 512 cores were used in the benchmark tests. Intel compilers (version 12.1) with OpenMPI (version 1.4.4) were used to compile the MD programs (GENESIS, NAMD version 2.9 [13], and CHARMM c40a2 [86]). The performance was compared for three benchmark systems: DHFR (23,558 atoms in a 62.23 × 62.23 × 62.23 Å³ box), ApoA1 (92,224 atoms in a 108.86 × 108.86 × 77.76 Å³ box), and STMV (1,066,628 atoms in a 216.83 × 216.83 × 216.83 Å³ box). All input files were obtained from the NAMD webpage [87]. For CHARMM, we used domdec, which implements a domain decomposition scheme and allows the processors to be split between the real-space and reciprocal-space calculations [86]. In CHARMM, splitting the processors gives better performance for small systems as the number of processors increases; the processors are split in a ratio of 3:1 (real-space:reciprocal-space). In the case of ATDYN, we use a 1:1 ratio. In all systems, we used the same conditions: a cutoff of 10 Å, a pairlist cutoff of 11.5 Å, pairlist updates every 10 steps, and a 2-fs time step with SHAKE/SETTLE constraints [88,89] in the NVE ensemble. The PME grid sizes for DHFR, ApoA1, and STMV are 64 × 64 × 64, 128 × 128 × 96, and 256 × 256 × 256, respectively. In all cases, double-precision arithmetic was used for real numbers, and multiple time-step integrators such as r-RESPA [90] were not used. The performance shown in Figure 3(a–c) was evaluated from the CPU-time difference between runs of 1000 and 2000 integration steps. The best performance obtained with up to 512 cores is shown in Table 1. On the PC cluster, NAMD shows the best performance for all three systems, and the best performance of SPDYN lies between CHARMM and NAMD. For small numbers of processors, CHARMM performs better than SPDYN, but SPDYN scales better with an increasing number of cores. ATDYN shows the worst performance, as expected, owing to its less efficient parallelization scheme.
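The figure reports performance as simulated time per wall-clock day. Given the protocol described above (a 2-fs time step, with cost measured as the CPU-time difference between the 1000-step and 2000-step runs), the conversion to ns/day is straightforward; the sketch below is an illustrative reconstruction of that bookkeeping, with placeholder timings rather than values from the paper.

    def ns_per_day(cpu_time_1000_steps_s, cpu_time_2000_steps_s, timestep_fs=2.0):
        """Convert the benchmark timing protocol into ns/day.

        The difference between the 2000-step and 1000-step runs is the cost
        of 1000 production steps, which excludes one-time start-up overhead
        such as setup and the initial pair-list construction.
        """
        seconds_per_step = (cpu_time_2000_steps_s - cpu_time_1000_steps_s) / 1000.0
        ns_per_step = timestep_fs * 1.0e-6   # 1 fs = 1e-6 ns
        return (86400.0 / seconds_per_step) * ns_per_step

    # Placeholder timings in seconds (not benchmark results from the paper):
    print(ns_per_day(cpu_time_1000_steps_s=120.0, cpu_time_2000_steps_s=230.0))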

