Tinker-HP is a multi-precision, massively parallel MPI package for CPUs and GPUs dedicated to long polarizable molecular dynamics simulations and to polarizable QM/MM. Tinker-HP is an evolution of the popular Tinker package that preserves its simplicity of use while adding new capabilities, allowing very long molecular dynamics simulations on modern supercomputers that use thousands of cores.
The Tinker-HP approach offers various strategies based on domain decomposition techniques for periodic boundary conditions in the framework of the O(N log N) Smooth Particle Mesh Ewald method. Tinker-HP provides a high-performance, scalable computing environment for polarizable (AMOEBA, Amberpol, ...) and classical (Amber, Charmm, OPLS, ...) force fields, giving access to large systems of up to millions of atoms.
This phase-advance GPU version (1.2++) is not (yet) an official release of Tinker-HP but is made freely available in connection with the COVID-19 HPC community effort. This work will be part of the larger 2021 Tinker-HP 1.3 official release. There is no difference between the use of Tinker-HP and Tinker-HP (GPU version) as long as the feature you are looking for is available in the GPU version. The present version is optimized to accelerate simulations using the AMOEBA polarizable force field. Some minimal non-polarizable capabilities are present (enhanced support will be available in 2021). The code has been extensively tested on NVIDIA 1080, 2080, 3090, P100, V100 and A100 GPU cards and supports multi-GPU computations. This version will continue to evolve until it is folded into the major Tinker-HP 1.3 release in 2021.
Before running the NGC Tinker-HP container, please ensure your system meets the following requirements.
The following examples demonstrate the use of the DYNAMIC executable from the NGC Tinker-HP container. The DYNAMIC program performs a molecular dynamics (MD) or stochastic dynamics (SD) computation. It starts either from a specified input molecular structure (an .xyz file) or from a structure-velocity-acceleration set saved from a previous dynamics trajectory (a restart from a .dyn file).
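The positional arguments passed to DYNAMIC in the examples below follow the interactive prompts of the Tinker dynamic program. The meanings named in this sketch are assumptions drawn from the Tinker user guide (step count, time step in fs, dump interval in ps, ensemble selector, temperature in K); verify them against your Tinker-HP version before changing values:

```shell
# Hedged sketch: give names to each positional argument of DYNAMIC.
# Meanings are assumed from the Tinker user guide prompts; verify for your version.
XYZ=cox.xyz     # input molecular structure (.xyz)
NSTEPS=500      # number of dynamics steps to take
DT_FS=2         # integration time step, in femtoseconds
DUMP_PS=20      # time between trajectory dumps, in picoseconds
ENSEMBLE=2      # statistical ensemble selector (assumed: 2 = canonical, NVT)
TEMP_K=300      # target temperature, in Kelvin

# Print the assembled command line; drop the leading 'echo' to actually run it
# inside the container, as in the Docker and Singularity examples below.
echo /usr/local/tinker-hp/bin/dynamic $XYZ $NSTEPS $DT_FS $DUMP_PS $ENSEMBLE $TEMP_K
```

This assembles the same command line used throughout the examples (`dynamic cox.xyz 500 2 20 2 300`).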
The environment variable BENCHMARK_DIR will be used throughout the examples to refer to the directory containing the downloaded example data. Ensure you're running the correct container version by replacing YYYY.MM with the appropriate container tag.
wget https://raw.githubusercontent.com/TinkerTools/tinker-hp/master/GPU/examples/cox.xyz
wget https://raw.githubusercontent.com/TinkerTools/tinker-hp/master/GPU/examples/cox.key
export BENCHMARK_DIR=$PWD
cd $BENCHMARK_DIR
docker run --rm --gpus all --ipc=host -v $BENCHMARK_DIR:/host_pwd -w /host_pwd nvcr.io/hpc/tinkerhp:YYYY.MM mpirun --oversubscribe -np 1 /usr/local/tinker-hp/bin/dynamic cox.xyz 500 2 20 2 300
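Since this version supports multi-GPU computations, the same Docker example can be scaled to several GPUs. The sketch below assumes one MPI rank per GPU, which is the usual mapping for Tinker-HP GPU runs (confirm for your setup); it assembles the command rather than running it:

```shell
# Hedged sketch: scale the single-GPU Docker example to NGPUS GPUs,
# assuming one MPI rank per GPU (verify this mapping for your system).
: "${BENCHMARK_DIR:=$PWD}"   # fall back to the current directory if unset
NGPUS=2

CMD="docker run --rm --gpus all --ipc=host -v $BENCHMARK_DIR:/host_pwd -w /host_pwd nvcr.io/hpc/tinkerhp:YYYY.MM mpirun --oversubscribe -np $NGPUS /usr/local/tinker-hp/bin/dynamic cox.xyz 500 2 20 2 300"

# Inspect the assembled command; replace the echo with: eval "$CMD" to launch.
echo "$CMD"
```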
Docker versions below 19.03 (API version 1.40) do not support the --gpus flag and must enable GPU support with the NVIDIA container runtime instead:
docker run --rm --runtime nvidia --ipc=host -v $BENCHMARK_DIR:/host_pwd -w /host_pwd nvcr.io/hpc/tinkerhp:YYYY.MM mpirun --oversubscribe -np 1 /usr/local/tinker-hp/bin/dynamic cox.xyz 500 2 20 2 300
cd $BENCHMARK_DIR
singularity run --nv -B $BENCHMARK_DIR:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/tinkerhp:YYYY.MM mpirun --oversubscribe -np 1 /usr/local/tinker-hp/bin/dynamic cox.xyz 500 2 20 2 300
Singularity versions below v3.5 have an issue causing LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:
LD_LIBRARY_PATH="" singularity run --nv -B $BENCHMARK_DIR:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/tinkerhp:YYYY.MM mpirun --oversubscribe -np 1 /usr/local/tinker-hp/bin/dynamic cox.xyz 500 2 20 2 300
Clusters running the Slurm resource manager and the Singularity container runtime may launch parallel Tinker-HP experiments directly through srun. The NGC Tinker-HP container supports pmi2, which is available within most Slurm installations, as well as pmix3. The following example was run on two nodes using Slurm.
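The srun example below references a local image file, tinkerhp.sif, which is not created by the earlier examples. One way to produce it (a sketch; YYYY.MM is the same container-tag placeholder used throughout this page) is to pull the NGC image into a SIF file once, so batch jobs reuse it instead of re-pulling from the registry:

```shell
# Sketch: assemble the one-time pull of the NGC image into a local SIF file.
# YYYY.MM is a placeholder for the actual container tag.
TAG=YYYY.MM
PULL="singularity pull tinkerhp.sif docker://nvcr.io/hpc/tinkerhp:$TAG"

# Inspect the command; run it on a node with Singularity installed via: $PULL
echo "$PULL"
```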
srun --mpi=pmi2 --ntasks-per-node=1 singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd tinkerhp.sif /usr/local/tinker-hp/bin/dynamic cox.xyz 500 2 20 1 300
Tinker-HP is part of the Tinker distribution (Tinkertools) and uses the same tools as Tinker.
TINKER Software Tools for Molecular Design, Version 3.9, June 2001
More information on Tinker-HP capabilities
By pulling and using the container, you accept the terms and conditions of the Tinker License.
This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 810367), project EMC2 (see preprint for full acknowledgments). We would also like to thank GENCI, NVIDIA and HPE as well as the engineering team of the IDRIS Supercomputer center (CNRS/GENCI, France).
Tinker-HP developers also provide support to registered users only (http://tinker-hp.ip2ct.upmc.fr/?Download-instructions).
Users can cite:
Tinker-HP: Accelerating Molecular Dynamics Simulations of Large Complex Systems with Advanced Point Dipole Polarizable Force Fields using GPUs and Multi-GPUs systems. O. Adjoua, L. Lagardère, L.-H. Jolly, A. Durocher, Z. Wang, T. Very, I. Dupays, T. Jaffrelot Inizan, F. Célerse, P. Ren, J. Ponder, J.-P. Piquemal, J. Chem. Theory Comput., 2021, XX, XX, online (Open Access) https://doi.org/10.1021/acs.jctc.0c01164
Tinker-HP: a Massively Parallel Molecular Dynamics Package for Multiscale Simulations of Large Complex Systems with Advanced Polarizable Force Fields. L. Lagardère, L.-H. Jolly, F. Lipparini, F. Aviat, B. Stamm, Z. F. Jing, M. Harger, H. Torabifard, G. A. Cisneros, M. J. Schnieders, N. Gresh, Y. Maday, P. Ren, J. W. Ponder, J.-P. Piquemal, Chem. Sci., 2018, 9, 956-972 (Open Access) https://doi.org/10.1039/C7SC04531J