LBPM (Lattice Boltzmann Methods for Porous Media) is an open source software framework designed to model flow processes based on digital rock physics, and is freely available through the Open Porous Media project. Digital rock physics refers to a growing class of methods that leverage microscopic data sources to obtain insight into the physical behavior of fluids in rock and other porous materials. LBPM simulation protocols are based on two-fluid lattice Boltzmann methods, focusing in particular on wetting phenomena. To learn more about LBPM and the computational methods it uses, check out "An Adaptive Volumetric Flux Boundary Condition for Lattice Boltzmann Methods".
Before running the NGC LBPM container, please ensure your system meets the following requirements.
The following examples simulate water-flooding using the NGC LBPM container. In an experimental setting, water is pumped into a sample at a particular flow rate to displace oil from the pore space. The examples will use a flux boundary condition to mimic this same basic approach. See also: LBPM Tutorial, Step 8, Simulating Water Flooding (Part I).
To run this simulation we must download three files:
input.db: Information regarding the intended domain structure and domain decomposition provided to LBPM
mask_water_flooded_water_and_oil.raw.morphdrain.raw: A digital rock image stored as a raw binary file
run.sh: A helper script that sets common command line arguments and should be placed within the benchmark data directory
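For orientation, input.db is a plain-text database of key/value blocks. The sketch below shows the general shape only; the key names follow LBPM's usual Domain/Color sections, but the actual values are those in the downloaded file, not the illustrative ones here:

```
// Illustrative structure only -- consult the downloaded input.db for real values
Domain {
    Filename = "mask_water_flooded_water_and_oil.raw.morphdrain.raw"
    nproc = 1, 1, 1        // process grid used for domain decomposition
    n = 200, 200, 200      // sub-domain size per MPI rank, in voxels
    // ... additional geometry and boundary-condition keys ...
}
Color {
    timestepMax = 10000    // maximum number of lattice Boltzmann timesteps
    // ... two-fluid model parameters, including the injection flux ...
}
```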
While the lbpm_* command line utilities may be called directly within the NGC LBPM container, this example will use a convenience script, run.sh. The environment variable BENCHMARK_DIR will be used throughout the example to refer to the directory containing the three files listed above.
export BENCHMARK_DIR=$PWD
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/LBPM/single-node/input.db
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/LBPM/single-node/run.sh
wget https://gitlab.com/NVHPC/ngc-examples/-/raw/master/LBPM/single-node/mask_water_flooded_water_and_oil.raw.morphdrain.raw
chmod +x run.sh
cd $BENCHMARK_DIR
docker run --rm --gpus all -v $BENCHMARK_DIR:/benchmark -w /benchmark nvcr.io/hpc/lbpm:2020.10 ./run.sh
Docker versions older than 19.03 (API version below 1.40) do not support the --gpus flag and must enable GPU support with the NVIDIA container runtime instead:
cd $BENCHMARK_DIR
docker run --rm --runtime nvidia -v $BENCHMARK_DIR:/benchmark -w /benchmark nvcr.io/hpc/lbpm:2020.10 ./run.sh
cd $BENCHMARK_DIR
singularity run --nv -B $BENCHMARK_DIR:/benchmark --pwd /benchmark docker://nvcr.io/hpc/lbpm:2020.10 ./run.sh
There is currently an issue in Singularity versions below v3.5 that causes LD_LIBRARY_PATH to be incorrectly set within the container environment. As a workaround, clear LD_LIBRARY_PATH before invoking Singularity:
LD_LIBRARY_PATH="" singularity run --nv -B $PWD:/benchmark --pwd /benchmark docker://nvcr.io/hpc/lbpm:2020.10 ./run.sh
Clusters running the Slurm resource manager and Singularity container runtime may launch parallel LBPM experiments directly through
srun. The NGC LBPM container supports
pmi2, which is available within most Slurm installations, as well as
pmix3. A typical parallel experiment would take the following form.
srun --mpi=pmi2 [srun_flags] singularity run --nv [singularity_flags] [lbpm_executable] [lbpm_input.db]
An example Slurm batch script, which can be adapted to your specific cluster setup, may be viewed here.
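A minimal sketch of such a batch script is shown below. All #SBATCH values are placeholders to adjust for your cluster, and lbpm_color_simulator is assumed here as the two-fluid solver executable; substitute whichever lbpm_* executable and input file your experiment uses:

```
#!/bin/bash
#SBATCH --job-name=lbpm
#SBATCH --nodes=2                # adjust node count for your experiment
#SBATCH --ntasks-per-node=4      # typically one MPI task per GPU
#SBATCH --gpus-per-node=4
#SBATCH --time=01:00:00

# Launch LBPM across all allocated tasks; --mpi=pmi2 wires up the
# container's MPI with Slurm (use --mpi=pmix if your site runs pmix3)
srun --mpi=pmi2 \
    singularity run --nv -B $PWD:/benchmark --pwd /benchmark \
    docker://nvcr.io/hpc/lbpm:2020.10 \
    lbpm_color_simulator input.db
```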
By pulling and using the container, you accept the terms and conditions of the LBPM License.