NGC | Catalog


CP2K





Latest Tag

March 1, 2023

Compressed Size

1.9 GB

Multinode Support

Multi-Arch Support

CP2K is a quantum chemistry and solid state physics software package that can perform atomistic simulations of solid state, liquid, molecular, periodic, material, crystal, and biological systems. CP2K provides a general framework for different modeling methods such as DFT using the mixed Gaussian and plane waves approaches GPW and GAPW. Supported theory levels include DFTB, LDA, GGA, MP2, RPA, semi-empirical methods (AM1, PM3, PM6, RM1, MNDO, ...), and classical force fields (AMBER, CHARMM, ...). CP2K can perform simulations of molecular dynamics, metadynamics, Monte Carlo, Ehrenfest dynamics, vibrational analysis, core level spectroscopy, energy minimization, and transition state optimization using the NEB or dimer method.

System Requirements

The following requirements must be met before running the NGC CP2K container:

Container Runtimes

x86_64:

  • Pascal (sm60), Volta (sm70), or Ampere (sm80/sm86) NVIDIA GPU(s)
  • CPU supporting the avx2_256 instruction set
  • CUDA driver version r510 or later, or one of the older branches: r418 (>= 418.40.04), r440 (>= 440.33.01), or r470

arm64:

  • Pascal (sm60), Volta (sm70), or Ampere (sm80/sm86) NVIDIA GPU(s)
  • ARMv8.2 CPU
  • CUDA driver version r470 or later

System Recommendations

  • CP2K works well with Ampere A100, Volta V100, or Pascal P100 GPUs.
  • Launch multiple MPI ranks per GPU to get better GPU utilization. For example, on a two-socket Broadwell server with 32 total cores and 4 P100 GPUs, set ranks per GPU to 2 and threads per rank to 2. We include a '' script in the container to do this easily.
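The recommendation above can be sketched as a few lines of shell. This is an illustration only: the node topology (4 GPUs, 32 cores) and the input file name are assumptions, and the final launch line is left commented out since it needs a real input file.

```shell
# Sketch: 2 MPI ranks per GPU, 2 OpenMP threads per rank, on an assumed 4-GPU node.
GPUS=4
RANKS_PER_GPU=2
NRANKS=$((GPUS * RANKS_PER_GPU))   # 8 MPI ranks in total
export OMP_NUM_THREADS=2
echo "launching ${NRANKS} ranks with ${OMP_NUM_THREADS} threads each"
# MPI_PER_GPU=${RANKS_PER_GPU} mpirun --bind-to none -n ${NRANKS} cp2k.psmp -i <input>
```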

Running CP2K Examples

Get an interactive session

Without Infiniband

DOCKER="docker run -it --rm --gpus all --shm-size 32Gb -v ${PWD}:/host_pwd --workdir /host_pwd"

With Infiniband

DOCKER="docker run -it --rm -v ${PWD}:/host_pwd --workdir /host_pwd --gpus all --shm-size 32Gb --device=/dev/infiniband --cap-add=IPC_LOCK --net=host"
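Either alias is then used by appending the image reference and a command. As a minimal sketch (the image name below is a placeholder for illustration, not the actual NGC registry path):

```shell
# Open an interactive shell in the container. IMAGE is a placeholder --
# substitute the real NGC CP2K image reference and tag.
DOCKER="docker run -it --rm --gpus all --shm-size 32Gb -v ${PWD}:/host_pwd --workdir /host_pwd"
IMAGE="cp2k-image:tag"   # placeholder
CMD="${DOCKER} ${IMAGE} bash"
echo "${CMD}"
```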


Benchmarks are located in /opt/cp2k/benchmarks/. Run:

MPI_PER_GPU=X mpirun -n <X*Y> cp2k.psmp -i <testcase>

with: X = number of MPI ranks per GPU (2-4 typically performs best)
      Y = number of GPUs on your system
      testcase = an input file chosen from the list below

Test cases:

  1. Linear Scaling SCF, a benchmark that is CPU- and H2D-bound: in ./QS_DM_LS/:

    - H2O-dft-ls.NREP2.inp     (small, 16 GB total, e.g. run on 1x V100 or 1x P100)
    - H2O-dft-ls.NREP4.inp     (medium, 160 GB total, e.g. run on 4x A100)
    - You can also adjust NREP in the test case header to change the size; in general, size = NREP^3 * 2.5 GB.

Example:

MPI_PER_GPU=2 mpirun --bind-to none -n 2 cp2k.psmp -i H2O-dft-ls.NREP2.inp
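The NREP sizing rule above can be sanity-checked with shell arithmetic (integer math only, so 2.5 GB is written as 25/10):

```shell
# Estimate aggregate memory for a given NREP: size ~= NREP^3 * 2.5 GB.
NREP=4
SIZE_GB=$(( NREP * NREP * NREP * 25 / 10 ))
echo "NREP=${NREP} -> about ${SIZE_GB} GB total"
```

For NREP=4 this gives 160 GB, matching the medium test case listed above.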
  2. Random Phase Approximation (RPA), a benchmark that is FLOP-bound and uses the COSMA library: in ./QS_mp2_rpa/:

    - 32-H2O/H2O-32-RI-dRPA-TZ.inp      (1-8 GPUs)
    - 64-H2O/H2O-64-RI-dRPA-TZ.inp      (4-128 GPUs)
    - 128-H2O/H2O-128-RI-dRPA-TZ.inp    (8-1024 GPUs)

Example:

MPI_PER_GPU=8 mpirun --bind-to none -n 8 cp2k.psmp -i H2O-32-RI-dRPA-TZ.inp

Running with Singularity

This example is a starting point and can be modified and adapted to best fit your system architecture.

Pull the Image

Save the NGC CP2K container as a local Singularity image file:

$ singularity build cp2k_v9.1.0.sif docker://

The container is now saved in the current directory as cp2k_v9.1.0.sif

Define the SINGULARITY command alias.

SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd cp2k_v9.1.0.sif"
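The alias is then used like the Docker variant. A minimal sketch, assuming the .sif file from the previous step exists in the current directory and that you have copied a benchmark input there (both assumptions; the run itself is only echoed here):

```shell
# Build the launch command from the SINGULARITY alias (sketch only).
SINGULARITY="singularity run --nv -B ${PWD}:/host_pwd --pwd /host_pwd cp2k_v9.1.0.sif"
CMD="${SINGULARITY} mpirun -n 2 cp2k.psmp -i H2O-dft-ls.NREP2.inp"
echo "${CMD}"
```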

Note: Singularity 3.1.x - 3.2.x

There is currently a bug in Singularity 3.1.x and 3.2.x that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...

Suggested Reading

  • CP2K GitHub
  • CP2K How To