
MILC



Publisher: QCD Community
Latest Tag: quda0.8-patch4Oct2017
Modified: January 7, 2021
Compressed Size: 208.66 MB
Multinode Support: Yes
Multi-Arch Support: No


MILC

MILC represents part of a set of codes written by the MIMD Lattice Computation (MILC) collaboration used to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics. It performs simulations of four-dimensional SU(3) lattice gauge theory on MIMD parallel machines. "Strong interactions" are responsible for binding quarks into protons and neutrons and holding them all together in the atomic nucleus.

The MILC collaboration has produced application codes to study several different QCD research areas, only one of which, ks_dynamical simulations with conventional dynamical Kogut-Susskind quarks, is used here. More information on MILC can be found on the MILC collaboration website.

System requirements

Before running the NGC MILC container, please ensure your system meets the following requirements.

  • Pascal (sm60) or Volta (sm70) NVIDIA GPU(s)
  • CUDA driver version >= 384.84
  • A supported container runtime: nvidia-docker or Singularity (see the examples below)

For early access to ARM64 container content, please see: https://developer.nvidia.com/early-access-arm-containers
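
As a quick sanity check (not part of the official requirements), one way to confirm that the installed driver and GPU meet the requirements above is to query them with nvidia-smi:

nvidia-smi --query-gpu=name,driver_version --format=csv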

Running MILC

Supported Architectures

NGC provides access to MILC containers targeting the following NVIDIA GPU architectures.

  • Pascal (sm60)
  • Volta (sm70)

The target architecture forms the last component of the image tag. For example, if running on Pascal (sm60), the following image would be requested:

milc:cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60
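
For example, assuming a corresponding sm70 tag is published on NGC (the tag name below follows the same pattern and is an assumption here), pulling the Volta build with Docker would look like:

docker pull nvcr.io/hpc/milc:cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm70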
Executables

su3_rhmd_hisq: primary MILC application binary

Command invocation

Example command form:

su3_rhmd_hisq -geom $GEOM $INPUT_FILE $OUTPUT_FILE

Where

  • $GEOM: grid of virtual processors used for GPU optimization
  • $INPUT_FILE: input file to be processed
  • $OUTPUT_FILE: file to which results are written
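
For illustration, a hypothetical single-node, 4-GPU run might look like the following, where the four integers passed to -geom give the processor grid in the X, Y, Z, and T directions (their product should match the number of MPI ranks) and the file names are placeholders:

mpirun -np 4 su3_rhmd_hisq -geom 1 1 2 2 ./my_input.in ./my_output.out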
Examples

The following examples demonstrate how to run the NGC MILC container under supported container runtimes.

Running with nvidia-docker

Command line execution with nvidia-docker

In this example, we are running the SC15 student cluster competition benchmark with the scripts in the /workspace/examples directory inside of the container on 1 GPU.

Note that the SC15 cluster data will be downloaded by the script if it is not available in the directory mounted to /data in the container.

To save the output, we are mapping (with -v) the current working directory to the /sc15_cluster directory inside of the container and saving our log files there so they will be available outside of the container when complete. To run the MILC container from the CLI, issue the following command:

nvidia-docker run -ti --rm -v $(pwd)/data:/data -v $(pwd):/sc15_cluster nvcr.io/hpc/milc:cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60 /workspace/examples/sc15_cluster.sh 1

Note that you could also point the CLI command at your local directory instead and run your own scripts (*.sh, for example). The command below starts the MILC container and runs the *.sh scripts from your results directory.

nvidia-docker run -ti --rm -v $(pwd)/data:/data -v $(pwd):/results nvcr.io/hpc/milc:cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60 /results/*.sh

Interactive shell with nvidia-docker

In this example, we are running the SC15 benchmark again from inside the /workspace directory in the container. Running interactively is useful for making multiple MILC runs within the same container instance.

To run the MILC container interactively, issue the following command, which starts the container and also mounts your current directory to /work so it is available inside the container (see the -v options in the command below, which map your local directories to those inside the container):

nvidia-docker run -ti --rm -v $(pwd)/data:/data -v $(pwd):/work nvcr.io/hpc/milc:cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60 /bin/bash

After the container starts, you can run in two different ways. One way is to work inside the /workspace directory, using the default scripts, modifying them, and running again. Note that any mounted datasets will be in /data if you use the above command.

/workspace/examples/sc15_cluster.sh 1

You can instead bind your own working directory containing your scripts to /work in the container and run them once inside the container, using a -v option of the following form (the local path is a placeholder to substitute):

-v <path_to_your_scripts>:/work
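
For example, assuming your scripts live in a local scripts directory (a hypothetical path), the full interactive command and an in-container invocation might look like:

nvidia-docker run -ti --rm -v $(pwd)/data:/data -v $(pwd)/scripts:/work nvcr.io/hpc/milc:cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60 /bin/bash
# inside the container (my_run.sh is a placeholder for your own script):
/work/my_run.sh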

Running with Singularity

Before running with Singularity, you must set NGC container registry authentication credentials. This is most easily accomplished by setting the following environment variables (substitute your NGC API key for the password value):

$ export SINGULARITY_DOCKER_USERNAME='$oauthtoken'
$ export SINGULARITY_DOCKER_PASSWORD=<NVIDIA NGC API key>

More information describing how to obtain and use your NVIDIA NGC Cloud Services API key can be found in the NGC documentation.

Once credentials are set in the environment, the NGC MILC container can be pulled to a local Singularity image:

singularity build milc_cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60.simg docker://nvcr.io/hpc/milc:cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60

This will save the container to the current working directory as milc_cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60.simg.

Once the local Singularity image has been pulled, the following modes of running are supported.

Note: Singularity/2.x

In order to pull NGC images with Singularity version 2.x and earlier, NGC container registry authentication credentials are required. Set the SINGULARITY_DOCKER_USERNAME and SINGULARITY_DOCKER_PASSWORD environment variables as shown above.

Note: Singularity 3.1.x - 3.2.x

There is currently a bug in Singularity 3.1.x and 3.2.x causing the LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

$ LD_LIBRARY_PATH="" singularity exec ...
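
For example, applying this workaround to the benchmark command shown later in this section:

$ LD_LIBRARY_PATH="" singularity run --nv -B $(pwd)/data:/data -B $(pwd)/run:/sc15_cluster milc_cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60.simg /workspace/examples/sc15_cluster.sh 1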

Command line execution with Singularity

In this example, we are running the SC15 student cluster competition benchmark with the scripts in the /workspace/examples directory inside of the container on 1 GPU.

Note that the SC15 cluster data will be downloaded by the script if it is not available in the directory mounted to /data in the container.

To save the output, we are mapping (with -B) $(pwd)/run to the /sc15_cluster directory inside of the container and saving our log files there so they will be available outside of the container when complete. To run the MILC container from the CLI, issue the following command:

mkdir run data
singularity run --nv -B $(pwd)/data:/data -B $(pwd)/run:/sc15_cluster milc_cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60.simg /workspace/examples/sc15_cluster.sh 1

Note that you could also point the CLI command at your local directory instead and run your own scripts (*.sh, for example). The command below starts the MILC container and runs the *.sh scripts from your results directory.

singularity run --nv -B $(pwd)/results:/results milc_cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60.simg /results/*.sh

Interactive shell with Singularity

In this example, we are running the SC15 benchmark again from inside the /workspace directory in the container. Running interactively is useful for making multiple MILC runs within the same container instance.

To run the MILC container interactively, issue the following command, which starts the container and also mounts your current directory to /work so it is available inside the container (see the -B options in the command below, which map your local directories to those inside the container):

mkdir data
singularity shell --nv -B $(pwd)/data:/data -B $(pwd):/work milc_cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60.simg

After the container starts, you can work inside the /workspace directory using the default scripts, modifying them, and running again. Note that any mounted datasets will be in /data if you use the above command.

/workspace/examples/sc15_cluster.sh 1

You can instead bind your own working directory containing your scripts to /work in the container and run them once inside the container, using a -B option of the following form (the local path is a placeholder to substitute):

-B <path_to_your_scripts>:/work
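
For example, assuming your scripts live in a local scripts directory (a hypothetical path), the interactive invocation and an in-container run might look like:

singularity shell --nv -B $(pwd)/data:/data -B $(pwd)/scripts:/work milc_cuda9-ubuntu1604-quda0.8-mpi3.0.0-patch4Oct2017-sm60.simg
# inside the container (my_run.sh is a placeholder for your own script):
/work/my_run.sh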

Benchmarks

The MILC container contains scripts for running the SC15 student cluster competition and apex benchmarks in /workspace/examples:

  • sc15_cluster.sh will download the SC15 student cluster competition dataset if it is not already available in /data and run the benchmark. This script expects the number of GPUs to use for the benchmark as the first positional argument.
  • apex.sh will download the apex dataset if it is not already available in /data and run the benchmark. This script expects the number of GPUs to use for the benchmark as the first positional argument. Note that this benchmark requires multiple GPUs (see the example after this list).
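
For example, the following invocations, issued inside the container with hypothetical GPU counts, run the two benchmarks; recall that apex.sh needs more than one GPU:

/workspace/examples/sc15_cluster.sh 1
/workspace/examples/apex.sh 2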

Suggested Reading

MILC Manual