Relion


Description

RELION implements an empirical Bayesian approach for analysis of electron cryo-microscopy.

Publisher

Open Source

Latest Tag

3.1.3

Modified

December 1, 2022

Compressed Size

415.07 MB

Multinode Support

Yes

Multi-Arch Support

Yes

RELION

RELION (REgularized LIkelihood OptimizatioN) implements an empirical Bayesian approach for the analysis of electron cryo-microscopy (cryo-EM) data. Specifically, RELION provides refinement methods for single or multiple 3D reconstructions as well as 2D class averages. RELION is an important tool for determining the structures of biological macromolecules.

RELION comprises multiple steps that cover the entire single-particle analysis workflow. These include beam-induced motion correction, CTF estimation, automated particle picking, particle extraction, 2D class averaging, 3D classification, and high-resolution refinement in 3D. RELION can also process movies generated by direct-electron detectors, apply final map sharpening, and perform local-resolution estimation.

System requirements

Before running the NGC RELION container, please ensure your system meets the following requirements.

  • One of the following container runtimes:
      • nvidia-docker
      • Singularity >= 3.1
  • One of the following NVIDIA GPUs:
      • Pascal (sm60)
      • Volta (sm70)
      • Ampere (sm80)

x86_64

  • CPU with AVX2 instruction support
  • One of the following CUDA driver versions:
      • >= r460
      • r450 (>= 450.80.02)
      • r440 (>= 440.33.01)
      • r418 (>= 418.40.04)

arm64

  • ARMv8.1 CPU with NEON support
  • CUDA driver version >= r450

System recommendations

  • Systems with multiple GPUs perform best.
  • Large local scratch disk space, ideally SSD or RamFS. The example presented below needs at least 100 GB of scratch space.
  • A high clock rate is more important than the number of cores, although having more than one thread per rank is beneficial.
  • Launch multiple ranks per GPU for better GPU utilization. The use of NVIDIA MPS is recommended.
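As a sketch of the MPS recommendation above, the control daemon is started before launching RELION and stopped afterwards. The directory paths below are illustrative; any writable location works.

```shell
# Start the NVIDIA MPS control daemon (directories are illustrative)
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-mps-log
nvidia-cuda-mps-control -d

# ... launch RELION with several MPI ranks per GPU ...

# Stop the daemon once the run completes
echo quit | nvidia-cuda-mps-control
```

With MPS active, kernels from multiple ranks sharing a GPU are scheduled concurrently rather than time-sliced, which improves utilization.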

Attention

If you see a "memory allocator issue" error, add the following argument to your RELION run command:

--free_gpu_memory 200 
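For example, the flag can be appended to a GPU-accelerated refinement invocation as follows; the surrounding arguments are placeholders, not part of this fix:

```shell
relion_refine_mpi [relion_flags] --gpu --free_gpu_memory 200
```

The value is the amount of GPU memory, in MB, that RELION leaves unallocated.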

Examples

The following examples demonstrate using the NGC RELION container to run a 3D classification experiment with the Plasmodium ribosome data set presented in Wong et al., eLife 2014. Throughout this example the container version is referenced as x.y.z; replace it with the tag you wish to run.

The dataset, which is ~50 GB in size, must be downloaded and extracted before running the benchmark. The environment variable BENCHMARK_DIR is used throughout the example to refer to the directory containing the extracted data.

wget ftp://ftp.mrc-lmb.cam.ac.uk/pub/scheres/relion_benchmark.tar.gz
tar -xzvf relion_benchmark.tar.gz
export BENCHMARK_DIR=$PWD/relion_benchmark

Although the relion_* command-line utilities may be called directly within the NGC RELION container, this example uses a convenience script, run_relion.sh, which sets the common command-line arguments needed for the example classification experiment. This helper script should be placed within the benchmark data directory.

cd ${BENCHMARK_DIR}
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/relion/single-node/run_relion.sh
chmod +x run_relion.sh

While this script attempts to set reasonable defaults for common HPC configurations, additional tuning is recommended for maximum performance. For detailed tuning and benchmarking guidance, please see the Relion Benchmarks & compute hardware page.

Running with nvidia-docker

cd $BENCHMARK_DIR
docker run --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/relion:x.y.z ./run_relion.sh

Note: Docker < v1.40

Docker versions that do not support the --gpus flag (API version below 1.40) must enable GPU support with --runtime nvidia instead.

docker run --rm --runtime nvidia --ipc=host -v $PWD:/host_pwd -w /host_pwd nvcr.io/hpc/relion:x.y.z ./run_relion.sh

Running with Singularity

cd $BENCHMARK_DIR
singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/relion:x.y.z ./run_relion.sh
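To avoid pulling the image from the registry on every run, the container can first be saved to a local SIF file; the file name below is illustrative:

```shell
singularity pull relion_x.y.z.sif docker://nvcr.io/hpc/relion:x.y.z
singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd relion_x.y.z.sif ./run_relion.sh
```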

Note: Singularity < v3.5

There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be set incorrectly within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

LD_LIBRARY_PATH="" singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker://nvcr.io/hpc/relion:x.y.z ./run_relion.sh

Running multi-node with Slurm and Singularity

Clusters running the Slurm resource manager and Singularity container runtime may launch parallel RELION experiments directly through srun. The NGC RELION container supports pmi2, which is available within most Slurm installations, as well as pmix3. A typical parallel relion_refine_mpi experiment takes the following form:

srun --mpi=pmi2 [srun_flags] singularity run --nv [singularity_flags] relion_refine_mpi [relion_flags]

An example Slurm batch script that may be modified for your specific cluster setup may be viewed here.
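The srun form above can also be embedded in a batch script. The sketch below uses hypothetical resource values that must be adjusted for your cluster and experiment:

```shell
#!/bin/bash
#SBATCH --nodes=2              # illustrative; size to your experiment
#SBATCH --ntasks-per-node=4    # MPI ranks per node
#SBATCH --time=04:00:00

cd $BENCHMARK_DIR
srun --mpi=pmi2 singularity run --nv \
    -B $PWD:/host_pwd --pwd /host_pwd \
    docker://nvcr.io/hpc/relion:x.y.z \
    relion_refine_mpi [relion_flags]
```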

Suggested Reading

  • The RELION website
  • Full RELION tutorial
  • Relion benchmark how-to
  • The Electron Microscopy Public Image Archive, providing many datasets that can be used with RELION.