
DLRM for TensorFlow2


Description: The Deep Learning Recommendation Model (DLRM) is a recommendation model designed to make use of both categorical and numerical inputs.
Publisher: NVIDIA
Use Case: Recommender
Framework: TensorFlow2
Latest Version: 21.06.1
Modified: February 3, 2022
Compressed Size: 1.46 MB

The Deep Learning Recommendation Model (DLRM) is a recommendation model designed to make use of both categorical and numerical inputs. It was first described in Deep Learning Recommendation Model for Personalization and Recommendation Systems. This repository provides a reimplementation of the code-base provided originally here. The scripts enable you to train DLRM on the Criteo Terabyte Dataset.

Using the scripts provided here, you can efficiently train models that are too large to fit into a single GPU's memory. This is possible because we use a hybrid-parallel approach, which combines model parallelism with data parallelism for different parts of the neural network. This is explained in detail in the next section.

This model uses a slightly different preprocessing procedure than the one found in the original implementation. You can find a detailed description of the preprocessing steps in the Dataset guidelines section.

Using DLRM, you can train a high-quality general model for recommendations.

This model is trained with mixed precision using Tensor Cores on Volta, Turing and NVIDIA Ampere GPU architectures. Therefore, researchers can get results 2x faster than training without Tensor Cores while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

Model architecture

DLRM accepts two types of features: categorical and numerical. For each categorical feature, an embedding table is used to provide a dense representation of each unique value. The numerical (dense) features are fed into the model and transformed by a simple neural network referred to as the "bottom MLP".

This part of the network consists of a series of linear layers with ReLU activations. The output of the bottom MLP and the embedding vectors are then fed into the "dot interaction" operation. The output of "dot interaction" is concatenated with the features from the bottom MLP and fed into the "top MLP", which is a series of dense layers with activations. The model outputs a single number that can be interpreted as the likelihood of a given user clicking an ad.


Figure 1. The architecture of DLRM.
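
The dataflow above can be sketched in a few lines of TF2/Keras. The layer sizes, feature counts, and cardinalities below are illustrative placeholders rather than the repository's configuration, and the full pairwise interaction matrix is flattened for brevity (the original model keeps only the unique pairwise terms):

# Schematic DLRM-style forward pass (illustrative sizes, not the repository's code).
import tensorflow as tf

num_numerical = 13
cardinalities = [1000, 500, 2000]        # toy per-feature category counts
embedding_dim = 16

numerical_in = tf.keras.Input(shape=(num_numerical,))
categorical_in = [tf.keras.Input(shape=(), dtype=tf.int32) for _ in cardinalities]

# Bottom MLP: linear layers with ReLU, projecting numerical features to embedding_dim.
bottom = numerical_in
for units in (512, 256, embedding_dim):
    bottom = tf.keras.layers.Dense(units, activation='relu')(bottom)

# One embedding table per categorical feature.
embeddings = [tf.keras.layers.Embedding(card, embedding_dim)(inp)
              for card, inp in zip(cardinalities, categorical_in)]

# Dot interaction: pairwise dot products between all embeddings and the bottom MLP output.
features = tf.stack([bottom] + embeddings, axis=1)                        # [batch, F, D]
interactions = tf.keras.layers.Flatten()(
    tf.linalg.matmul(features, features, transpose_b=True))               # [batch, F * F]

# Concatenate with the bottom MLP output and feed the top MLP.
top = tf.keras.layers.Concatenate()([bottom, interactions])
for units in (1024, 512, 256):
    top = tf.keras.layers.Dense(units, activation='relu')(top)
output = tf.keras.layers.Dense(1, activation='sigmoid')(top)              # click likelihood

model = tf.keras.Model(inputs=[numerical_in] + categorical_in, outputs=output)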

Default configuration

The following features were implemented in this model:

  • general
    • static loss scaling for Tensor Cores (mixed precision) training
    • hybrid-parallel multi-GPU training
  • preprocessing
    • dataset preprocessing using Spark 3 on GPUs

Feature support matrix

The following features are supported by this model:

Feature                                            | DLRM
Automatic mixed precision (AMP)                    | Yes
XLA                                                | Yes
Hybrid-parallel multi-GPU with Horovod all-to-all  | Yes
Preprocessing on GPU with Spark 3                  | Yes
Multi-node training                                | Yes

Features

Automatic Mixed Precision (AMP) enables mixed precision training without any changes to the code base by performing automatic graph rewrites and loss scaling, controlled by an environment variable.

XLA

The training script supports a --xla flag. It can be used to enable XLA JIT compilation. Currently, we use XLA Lite. It delivers a steady 10-30% performance boost depending on your hardware platform, precision, and the number of GPUs. It is turned off by default.
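
As a general TF2 illustration of XLA JIT compilation (not the repository's code, which controls XLA through the --xla flag), a function can be compiled with XLA as follows:

# General TF2 illustration of XLA JIT compilation (not repository-specific).
import tensorflow as tf

@tf.function(jit_compile=True)   # ask XLA to compile this function
def dense_relu(x, w, b):
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.random.normal([32, 128])
w = tf.random.normal([128, 64])
b = tf.zeros([64])
print(dense_relu(x, w, b).shape)   # (32, 64)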

Horovod Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, see the Horovod official repository.

Hybrid-parallel multiGPU with Horovod all-to-all Our model uses Horovod to implement efficient multi-GPU training with NCCL. For details, see example sources in this repository or see the TensorFlow tutorial. For the detailed description of our multi-GPU approach, visit this section.
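
Below is a minimal sketch of the model-parallel-to-data-parallel exchange using Horovod's all-to-all primitive. The tensor shapes and variable names are illustrative assumptions, not the repository's actual code, and the return signature of hvd.alltoall can vary across Horovod versions. Launch it with, for example, horovodrun -np 2 python sketch.py.

# Illustrative all-to-all exchange: from "all samples, local tables" to
# "local samples, all tables" (the "butterfly shuffle"). Not repository code.
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

local_batch, local_tables, embedding_dim = 8, 2, 4
global_batch = local_batch * hvd.size()

# Each worker holds embeddings for the WHOLE global batch, but only for its local tables.
local_embeddings = tf.random.normal([global_batch, local_tables * embedding_dim])

# hvd.alltoall splits the first dimension evenly across workers and exchanges the chunks,
# so each worker ends up with its slice of the batch from every peer.
result = hvd.alltoall(local_embeddings)
# Some Horovod versions return (tensor, received_splits); keep just the tensor.
exchanged = result[0] if isinstance(result, (tuple, list)) else result

# Put each sample's features from all workers side by side:
# [local_batch, hvd.size() * local_tables * embedding_dim].
per_peer = tf.reshape(exchanged, [hvd.size(), local_batch, local_tables * embedding_dim])
data_parallel_embeddings = tf.concat(tf.unstack(per_peer, axis=0), axis=1)
print(hvd.rank(), data_parallel_embeddings.shape)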

Multi-node training This repository supports multi-node training. For more information, refer to the Multi-node training section.

Mixed precision training

Mixed precision is the combined use of different numerical precisions in a computational method. Mixed precision training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3.4x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps:

  1. Porting the model to use the FP16 data type where appropriate.
  2. Adding loss scaling to preserve small gradient values.
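
A minimal TF2/Keras sketch of these two steps is shown below. It is illustrative only; the repository enables mixed precision through its --amp flag rather than through this exact code, and uses static loss scaling as noted in the default configuration.

# Minimal sketch of mixed precision in TF2 Keras (illustrative, not repository code).
import tensorflow as tf

# Step 1: compute in FP16 where safe while keeping variables in FP32.
tf.keras.mixed_precision.set_global_policy('mixed_float16')

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(1),
    tf.keras.layers.Activation('sigmoid', dtype='float32'),  # keep the output in FP32
])

# Step 2: scale the loss to preserve small gradient values; dynamic=False gives
# the static loss scaling mentioned in the default configuration.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(0.01), dynamic=False, initial_scale=1024)
model.compile(optimizer=optimizer, loss='binary_crossentropy')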

The ability to train deep learning networks with lower precision was introduced in the Pascal architecture and first supported in CUDA 8 in the NVIDIA Deep Learning SDK.


Enabling mixed precision

Mixed precision training is turned off by default. To turn it on, pass the --amp flag to the main.py script.
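
For example, assuming the same dataset location as in the commands later in this document and that any additional flags required by your setup are appended:

python -u main.py --dataset_path /data/dlrm/ --amp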

Enabling TF32

TensorFloat-32 (TF32) is the new math mode in NVIDIA A100 GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.

TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations.

For more information, refer to the TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x blog post.

TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
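
If you need to compare against strict FP32 math, TF32 execution can be toggled through TensorFlow's standard API (this is general TF2 behavior, not repository-specific code):

# Toggle TF32 execution in TensorFlow 2 (on by default on Ampere GPUs).
import tensorflow as tf

tf.config.experimental.enable_tensor_float_32_execution(False)  # force full FP32 matrix math
tf.config.experimental.enable_tensor_float_32_execution(True)   # restore the default TF32 mode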

Hybrid-parallel multi-GPU with all-2-all communication

Many recommendation models contain very large embedding tables. As a result, the model is often too large to fit onto a single device. This could easily be solved by training in a model-parallel way, using either the CPU or other GPUs as "memory donors". However, this approach is suboptimal because the compute of the "memory donor" devices is not utilized. In this repository, we use the model-parallel approach for the bottom part of the model (embedding tables + bottom MLP) while using the conventional data-parallel approach for the top part of the model (dot interaction + top MLP). This way, we can train models much larger than what would normally fit into a single GPU while at the same time making training faster by using multiple GPUs. We call this approach hybrid-parallel training.

The transition from model-parallel to data-parallel in the middle of the neural net needs a specific multi-GPU communication pattern called all-2-all which is available in our TensorFlow 2 21.02-py3 NGC Docker container. In the original DLRM whitepaper this has been referred to as "butterfly shuffle".


Figure 2. The default multi-GPU mode.

As shown above, in this repository we train models of two sizes: "small" (~15 GB) and "large" (~82 GB). The "large" model cannot be trained with a single-GPU paradigm because it does not fit into a single GPU's memory.
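
As a rule of thumb, an FP32 embedding table occupies (number of categories) x (embedding dimension) x 4 bytes. The sketch below uses illustrative numbers, not the actual Criteo table sizes:

# Back-of-the-envelope FP32 embedding memory estimate (illustrative numbers only).
def embedding_table_bytes(num_categories, embedding_dim, bytes_per_element=4):
    return num_categories * embedding_dim * bytes_per_element

# For example, 40 million categories with embedding dimension 128 in FP32:
print(embedding_table_bytes(40_000_000, 128) / 2**30, "GiB")   # ~19 GiB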

Embedding table placement and load balancing (default mode)

By default, we use the following heuristic for dividing the work between the GPUs (a Python sketch of this procedure follows the list):

  • The bottom MLP is placed on GPU-0 and no embedding tables are placed on this device.
  • The tables are sorted from the largest to the smallest.
  • Set max_tables_per_gpu := ceil(number_of_embedding_tables / number_of_available_gpus).
  • Repeat until all embedding tables have an assigned device:
    • Out of all the available GPUs, find the one with the largest amount of unallocated memory.
    • Place the largest unassigned embedding table on this GPU. Raise an exception if it does not fit.
    • If the number of embedding tables on this GPU is now equal to max_tables_per_gpu, remove this GPU from the list of available GPUs, so that no more embedding tables will be placed on this GPU. This ensures the all-2-all communication is well-balanced between all devices.
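
The heuristic above can be summarized in a short Python sketch. The function name, memory model, and example numbers are simplified assumptions rather than the repository's actual implementation, and "available GPUs" is taken to mean the GPUs that can host tables, i.e. all GPUs except GPU-0:

# Greedy embedding-table placement sketch (illustrative, not repository code).
import math

def place_embedding_tables(table_sizes_gb, num_gpus, gpu_memory_gb):
    # GPU-0 hosts the bottom MLP and receives no embedding tables.
    free_memory = {gpu: gpu_memory_gb for gpu in range(1, num_gpus)}
    table_counts = {gpu: 0 for gpu in free_memory}
    placement = {}

    max_tables_per_gpu = math.ceil(len(table_sizes_gb) / len(free_memory))

    # Process tables from the largest to the smallest.
    for table, size in sorted(enumerate(table_sizes_gb), key=lambda item: -item[1]):
        # Pick the still-available GPU with the most unallocated memory.
        gpu = max(free_memory, key=free_memory.get)
        if size > free_memory[gpu]:
            raise MemoryError(f"table {table} ({size} GB) does not fit on GPU {gpu}")
        placement[table] = gpu
        free_memory[gpu] -= size
        table_counts[gpu] += 1
        # Cap the number of tables per GPU to keep the all-2-all well balanced.
        if table_counts[gpu] == max_tables_per_gpu:
            del free_memory[gpu]
    return placement

# Example: six tables spread across the three non-zero GPUs of a 4-GPU machine.
print(place_embedding_tables([20, 10, 8, 4, 2, 1], num_gpus=4, gpu_memory_gb=32))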

Training very large embedding tables (experimental mode)

The default multi-GPU paradigm described above has a constraint: each individual table has to fit entirely into a single device's memory. If that constraint is not met, an Out-of-Memory error is raised. To enable experimentation with very large models, we provide a way of circumventing this constraint by passing the --experimental_columnwise_split --data_parallel_bottom_mlp command-line flags. As a result, each table is split across the latent space dimension: some dimensions of the latent space are placed on one GPU and the rest are stored on other GPUs. This means that a table that originally encoded C unique categories into D dense dimensions becomes N separate tables of shape [C, D / N], each stored on a different GPU, where N is the number of GPUs used. Mathematically, the computations are exactly equivalent.
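
The following NumPy sketch (toy shapes, not repository code) illustrates the column-wise split and checks that the sharded lookup is exactly equivalent to the original one:

# Illustrative check that a column-wise split of an embedding table is
# numerically equivalent to the original lookup (toy shapes, NumPy only).
import numpy as np

C, D, N = 1000, 16, 4                     # categories, embedding dim, number of GPUs
table = np.random.rand(C, D).astype(np.float32)

# Split along the latent (column) dimension: N tables of shape [C, D // N],
# each of which would live on a different GPU.
shards = np.split(table, N, axis=1)

indices = np.array([3, 42, 999])
full_lookup = table[indices]                                     # single-table lookup
sharded_lookup = np.concatenate([s[indices] for s in shards], axis=1)

assert np.array_equal(full_lookup, sharded_lookup)               # exactly equivalent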

The figure below illustrates this paradigm for a model with 2 embedding tables distributed across two GPUs. Note that this approach is currently slower than the default mode described above.


Figure 3. The "columnwise split" multi-GPU mode.

We tested this approach by training a DLRM model on the Criteo Terabyte dataset with the frequency limiting option turned off (set to zero). The weights of the resulting model take 421 GB. The largest table weighs 140 GB. Here are the commands you can use to reproduce this:

# build and run the preprocessing container as in the Quick Start Guide
# then when preprocessing set the frequency limit to 0:
./prepare_dataset.sh DGX2 0

# build and run the training container same as in the Quick Start Guide
# then append options necessary for training very large embedding tables:
horovodrun -np 8 -H localhost:8 --mpi-args=--oversubscribe numactl --interleave=all -- python -u main.py --dataset_path /data/dlrm/ --amp --tf_gpu_memory_limit_gb 72 --experimental_columnwise_split --data_parallel_bottom_mlp --xla

When using this method on a DGX A100 with 8 A100-80GB GPUs and a large-enough dataset, it is possible to train a single embedding table of up to 600 GB. You can also use multi-node training (described below) to train even larger recommender systems.

Multi-node training

Multi-node training is supported. Depending on the exact interconnect hardware and model configuration, you might experience only a modest speedup with multi-node. Multi-node training can also be used to train larger models. For example, to train a 1.68 TB variant of DLRM on multi-node, you can run:

cmd='numactl --interleave=all -- python -u main.py --dataset_path /data/dlrm/full_criteo_data --amp \
--tf_gpu_memory_limit_gb 72 --experimental_columnwise_split --data_parallel_bottom_mlp \
--embedding_dim 512 --bottom_mlp_dims 512,256,512' \
srun_flags='--mpi=pmix' \
cont=nvidia_dlrm_tf \
mounts=/data/dlrm:/data/dlrm \
sbatch -n 32 -N 4 -t 00:20:00 slurm_multinode.sh

Preprocessing on GPU with Spark 3

Refer to the "Preprocessing with Spark" section for a detailed description of the Spark 3 GPU functionality.