RELION implements an empirical Bayesian approach for analysis of electron cryo-microscopy.
Open Source
Latest Tag
May 1, 2024
Compressed Size
415.07 MB
Multinode Support
Multi-Arch Support


RELION (REgularized LIkelihood OptimizatioN) implements an empirical Bayesian approach to the analysis of electron cryo-microscopy (cryo-EM) data. Specifically, RELION provides refinement methods for one or multiple 3D reconstructions as well as 2D class averages. RELION is an important tool in the determination of macromolecular structures.

RELION comprises multiple steps that cover the entire single-particle analysis workflow. Steps include beam-induced motion correction, CTF estimation, automated particle picking, particle extraction, 2D class averaging, 3D classification, and high-resolution refinement in 3D. RELION can process movies generated from direct-electron detectors, apply final map sharpening, and perform local-resolution estimation.

System requirements

Before running the NGC RELION container, please ensure your system meets the following requirements.

  • One of the following container runtimes:
    • Docker
    • Singularity
  • One of the following NVIDIA GPU(s):
    • Pascal (sm60)
    • Volta (sm70)
    • Ampere (sm80)

On x86_64 systems:

  • CPU with AVX2 instruction support
  • One of the following CUDA driver versions:
    • >= r460
    • r450 (>= .80.02)
    • r440 (>= .33.01)
    • r418 (>= .40.04)

On arm64 systems:

  • ARMv8.1 CPU with NEON support
  • CUDA driver version >= r450
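On an existing Linux host, the x86_64 CPU and driver requirements can be checked with a short sketch like the following (assumes `nvidia-smi` is on the PATH for the driver query):

```shell
# Check for AVX2 support in the CPU flags (x86_64 requirement).
if grep -q -m1 avx2 /proc/cpuinfo; then
    echo "AVX2: supported"
else
    echo "AVX2: not supported"
fi

# Report the installed CUDA driver version, if the NVIDIA driver is present.
nvidia-smi --query-gpu=driver_version --format=csv,noheader 2>/dev/null \
    || echo "nvidia-smi not found; is the NVIDIA driver installed?"
```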

System recommendations

  • Systems with multiple GPUs are recommended; RELION can scale across all GPUs in a node.
  • Large local scratch disk space, ideally SSD or RamFS. The example presented below needs at least 100 GB of scratch space.
  • A high CPU clock rate is more important than a high core count, although having more than one thread per rank is beneficial.
  • Launch multiple ranks per GPU for better GPU utilization. The use of NVIDIA MPS is recommended.
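To enable MPS, the daemon can be started once per node before launching RELION. A minimal sketch (the /tmp paths are arbitrary example choices and typically require appropriate permissions):

```shell
# Point MPS at writable pipe/log directories (arbitrary example paths).
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-mps-log

# Start the MPS control daemon; CUDA processes launched afterwards will
# share each GPU through MPS.
nvidia-cuda-mps-control -d

# ... run the RELION experiment with multiple ranks per GPU ...

# Shut the daemon down when finished.
echo quit | nvidia-cuda-mps-control
```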


If you see a "memory allocator issue" error, add the following argument to your RELION run command:

--free_gpu_memory 200 


The following examples demonstrate using the NGC RELION container to run a 3D classification experiment with the Plasmodium ribosome data set presented in Wong et al., eLife 2014. Throughout this example the container version is referenced as x.y.z; replace this with the tag you wish to run.

The dataset, which is ~50GB in size, must be downloaded and extracted before running the benchmark. The environment variable BENCHMARK_DIR will be used throughout the example to refer to the directory containing the extracted data.

tar -xzvf relion_benchmark.tar.gz
export BENCHMARK_DIR=$PWD/relion_benchmark
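As a quick sanity check after extraction, you can confirm the data landed where expected and is roughly the expected size:

```shell
# List the extracted benchmark contents and report their size on disk
# (expect roughly 50 GB for the full data set).
ls "$BENCHMARK_DIR"
du -sh "$BENCHMARK_DIR"
```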

Although the relion_* command line utilities may be called directly within the NGC RELION container, this example uses a convenience script that sets the common command line arguments needed for the example classification experiment. This helper script should be placed within the benchmark data directory and made executable:

chmod +x

While this script attempts to set reasonable defaults for common HPC configurations additional tuning is recommended for maximum performance. For detailed tuning and benchmarking guidance please see the Relion Benchmarks & compute hardware page.
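As a starting point for tuning, the rank arithmetic can be sketched as follows. The GPU count and flag values here are hypothetical placeholders to adapt; the sketch follows the pattern of the document's own examples, where relion_refine_mpi uses one leader rank that performs no GPU work:

```shell
# For N GPUs with R worker ranks per GPU, launch N*R + 1 MPI ranks in
# total (the extra rank is the non-GPU leader).
GPUS=4
RANKS_PER_GPU=1
NP=$((GPUS * RANKS_PER_GPU + 1))
echo "Launching ${NP} MPI ranks across ${GPUS} GPUs"

# Example invocation shape (flags are placeholders to tune):
# mpirun -n ${NP} relion_refine_mpi --gpu 0:1:2:3 --j 4 --pool 100 [relion_flags]
```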

Running with nvidia-docker

docker run --rm --gpus all --ipc=host -v $PWD:/host_pwd -w /host_pwd ./

Note: Docker < v1.40

Docker versions below 1.40 must enable GPU support with --runtime nvidia.

docker run --rm --runtime nvidia --ipc=host -v $PWD:/host_pwd -w /host_pwd ./

Running with Singularity

singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker:// ./

Note: Singularity < v3.5

There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, LD_LIBRARY_PATH must be unset before invoking Singularity:

LD_LIBRARY_PATH="" singularity run --nv -B $PWD:/host_pwd --pwd /host_pwd docker:// ./

Running multi-node with Slurm and Singularity

Clusters running the Slurm resource manager and Singularity container runtime may launch parallel RELION experiments directly through srun. The NGC RELION container supports pmi2, which is available within most Slurm installations, as well as pmix3. A typical parallel relion_refine_mpi experiment would take the following form.

srun --mpi=pmi2 [srun_flags] singularity run --nv [singularity_flags] relion_refine_mpi [relion_flags]

An example Slurm batch script that may be modified for your specific cluster setup may be viewed here.
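A minimal batch script sketch following the srun form above; the node counts, the image name (relion_x.y.z.sif), and the RELION flags are placeholders to adapt to your cluster:

```shell
#!/bin/bash
# Hypothetical Slurm batch script sketch for a multi-node RELION run.
#SBATCH --job-name=relion_refine
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=4
#SBATCH --gpus-per-node=2
#SBATCH --time=04:00:00

# Launch one relion_refine_mpi rank per Slurm task inside the container.
srun --mpi=pmi2 singularity run --nv \
    -B "$PWD":/host_pwd --pwd /host_pwd \
    relion_x.y.z.sif \
    relion_refine_mpi [relion_flags]
```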

Running on NVIDIA Base Command Platform

NVIDIA Base Command Platform (BCP) offers a ready-to-use, cloud-hosted solution that manages the end-to-end lifecycle of development, workflows, and resource management. Before running the commands below, install and configure the NGC CLI; more information can be found here.

Uploading the Dataset to BCP

Upload the Plasmodium ribosome dataset using the command below:

ngc dataset upload --source ./relion_benchmark/  --desc  "RELION dataset specified in current public NGC readme" relion_dataset
Running RELION on BCP

Single-node run on two GPUs, running 25 iterations with the shiny2sets dataset and the 3D refinement benchmark type:

ngc batch run --name "relion_singlenode" --priority NORMAL --order 50 --preempt RUNONCE --min-timeslice 0s --total-runtime 0s --ace <your-ace> --instance dgxa100.80g.2.norm --commandline "mpirun --allow-run-as-root -wdir /work/ -n 5 /usr/bin/nventry -build_base_dir=/usr/local/relion -build_default=sm80 /usr/bin/time -f \"%eelapsedFINISH\" relion_refine_mpi --o /results/ --i /work/Particles/ --iter 25 --j 4 --gpu 0:1 --ref /work/ --firstiter_cc --ini_high 60 --ctf --trust_ref_size --tau2_fudge 4 --particle_diameter 360 --K 6 --flatten_solvent --zero_mask --oversampling 1 --healpix_order 2 --offset_range 5 --offset_step 2 --sym C1 --norm --scale --random_seed 0 --dont_combine_weights_via_disc --pool 100" --result /results --image "hpc/relion:3.1.3" --org <your-org> --datasetid <dataset-id>:/work/

Multi-node run on two nodes with one process per node and two GPUs, running 25 iterations with the shiny2sets dataset and the 3D refinement benchmark type:

ngc batch run --name "relion_multinode" --priority NORMAL --order 50 --preempt RUNONCE --min-timeslice 0s --total-runtime 21000s --ace <your-ace> --instance dgxa100.80g.8.norm --commandline "mpirun --allow-run-as-root -wdir /work/ --map-by ppr:1:node -n 2 /usr/bin/nventry -build_base_dir=/usr/local/relion -build_default=sm80 /usr/bin/time -f \"%eelapsedFINISH\" relion_refine_mpi --o /results/ --i /work/Particles/ --iter 25 --j 2 --gpu 0:1 --ref /work/ --firstiter_cc --ini_high 60 --ctf --trust_ref_size --tau2_fudge 4 --particle_diameter 360 --K 6 --flatten_solvent --zero_mask --oversampling 1 --healpix_order 2 --offset_range 5 --offset_step 2 --sym C1 --norm --scale --random_seed 0 --dont_combine_weights_via_disc --pool 100" --result /results/ --array-type "MPI" --replicas "2" --image "hpc/relion:3.1.3" --org <your-org> --datasetid <dataset-id>:/work/

Suggested Reading

The RELION website

Full RELION tutorial

Relion benchmark how-to

The Electron Microscopy Public Image Archive, providing many datasets which can be used with RELION.

BCP User Guide