This resource uses open-source code maintained on GitHub (see the quick-start-guide section) and available for download from NGC.
The Deep Learning Recommendation Model (DLRM) is a recommendation model designed to make use of both categorical and numerical inputs. It was first described in Deep Learning Recommendation Model for Personalization and Recommendation Systems. This repository provides a reimplementation of the code-base provided originally here. The scripts enable you to train DLRM on the Criteo Terabyte Dataset.
Using the scripts provided here, you can efficiently train models that are too large to fit into a single GPU. This is because we use a hybrid-parallel approach, which combines model parallelism with data parallelism for different parts of the neural network. This is explained in detail in the next section.
This model uses a slightly different preprocessing procedure than the one found in the original implementation.
Most importantly, we use a technique called frequency thresholding to demonstrate models of different sizes.
The smallest model can be trained on a single V100-32GB GPU, while the largest one needs 8xA100-80GB GPUs.
The table below summarizes the model sizes and frequency thresholds used in this repository:
Name | Frequency threshold | Number of parameters | Model size |
---|---|---|---|
Small | 15 | 4.2B | 15.6 GiB |
Large | 3 | 22.8B | 84.9 GiB |
Extra large | 0 | 113B | 421 GiB |
You can find a detailed description of the preprocessing steps in the Dataset guidelines section.
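As an illustration of the idea (this is not the repository's preprocessing code, and the helper names are hypothetical), frequency thresholding maps every categorical value that occurs fewer than `threshold` times to a single shared "rare" index, which directly shrinks the corresponding embedding table:

```python
# Hypothetical sketch of frequency thresholding for one categorical column.
from collections import Counter

def build_vocab(values, threshold):
    """Give each sufficiently frequent value its own index; index 0 is shared by rare values."""
    counts = Counter(values)
    vocab = {}
    next_index = 1  # 0 is reserved for rare/unknown values
    for value, count in counts.items():
        if count >= threshold:
            vocab[value] = next_index
            next_index += 1
    return vocab

def encode(values, vocab):
    return [vocab.get(v, 0) for v in values]

raw = ["a", "b", "a", "c", "a", "b", "d"]
vocab = build_vocab(raw, threshold=2)  # "c" and "d" fall below the threshold
print(encode(raw, vocab))              # [1, 2, 1, 0, 1, 2, 0]
```

A lower threshold keeps more unique values, which is why the table above grows from 4.2B to 113B parameters as the threshold drops from 15 to 0.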
Using DLRM, you can train a high-quality general model for recommendations.
This model is trained with mixed precision using Tensor Cores on Volta, Turing and NVIDIA Ampere GPU architectures. Therefore, researchers can get results 2x faster than training without Tensor Cores while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
DLRM accepts two types of features: categorical and numerical. For each categorical feature, an embedding table is used to provide a dense representation of each unique value. The numerical (dense) features enter the model and are transformed by a simple neural network referred to as the "bottom MLP".
This part of the network consists of a series of linear layers with ReLU activations. The output of the bottom MLP and the embedding vectors are then fed into the "dot interaction" operation. The output of "dot interaction" is then concatenated with the features resulting from the bottom MLP and fed into the "top MLP", which is a series of dense layers with activations. The model outputs a single number which can be interpreted as the likelihood of a certain user clicking an ad.
Figure 1. The architecture of DLRM.
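To make the data flow concrete, here is a minimal, illustrative Keras sketch of this architecture. All dimensions, table sizes, and layer widths below are made up for the example; this is not the repository's model code.

```python
import tensorflow as tf

EMBEDDING_DIM = 8
CARDINALITIES = [100, 50, 200]   # made-up sizes, one table per categorical feature
NUM_NUMERICAL = 4

def dot_interaction(tensors):
    """Pairwise dot products between all feature vectors.
    (The real model keeps only the unique pairwise terms; the full FxF matrix is kept here for brevity.)"""
    vectors = tf.stack(tensors, axis=1)                        # [batch, F, dim]
    products = tf.matmul(vectors, vectors, transpose_b=True)   # [batch, F, F]
    num_features = len(tensors)
    return tf.reshape(products, [-1, num_features * num_features])

numerical_in = tf.keras.Input(shape=(NUM_NUMERICAL,), name="numerical")
categorical_in = [tf.keras.Input(shape=(), dtype=tf.int32, name=f"cat_{i}")
                  for i in range(len(CARDINALITIES))]

# Bottom MLP: transforms the numerical features into a single dense vector.
bottom = numerical_in
for units in (64, 32, EMBEDDING_DIM):
    bottom = tf.keras.layers.Dense(units, activation="relu")(bottom)

# One embedding table per categorical feature.
embedded = [tf.keras.layers.Embedding(card, EMBEDDING_DIM)(inp)
            for card, inp in zip(CARDINALITIES, categorical_in)]

# Dot interaction over the embeddings and the bottom-MLP output,
# concatenated with the bottom-MLP output and fed to the top MLP.
interactions = tf.keras.layers.Lambda(dot_interaction)(embedded + [bottom])
top = tf.keras.layers.Concatenate()([interactions, bottom])
for units in (64, 32):
    top = tf.keras.layers.Dense(units, activation="relu")(top)
output = tf.keras.layers.Dense(1, activation="sigmoid")(top)   # click likelihood

model = tf.keras.Model(inputs=[numerical_in] + categorical_in, outputs=output)
```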
The following features are supported by this model:
Feature | DLRM |
---|---|
Automatic mixed precision (AMP) | Yes |
XLA | Yes |
Hybrid-parallel training with Merlin Distributed Embeddings | Yes |
Preprocessing on GPU with Spark 3 | Yes |
Multi-node training | Yes |
Automatic Mixed Precision (AMP) Enables mixed precision training without any changes to the code-base by performing automatic graph rewrites and loss scaling controlled by an environment variable.
XLA The training script supports a `--xla` flag. It can be used to enable XLA JIT compilation. Currently, we use XLA Lite. It delivers a steady 10-30% performance boost depending on your hardware platform, precision, and the number of GPUs. It is turned off by default.
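In this repository, XLA is controlled by the `--xla` flag. For reference, generic ways to request XLA JIT compilation in TensorFlow 2 look roughly like the following (illustrative only; this is not necessarily the exact mechanism used by `main.py`):

```python
import tensorflow as tf

# Option 1: enable XLA auto-clustering globally.
tf.config.optimizer.set_jit(True)

# Option 2: compile a specific function with XLA.
@tf.function(jit_compile=True)
def dense_step(x, w):
    return tf.nn.relu(tf.matmul(x, w))

print(dense_step(tf.ones([2, 4]), tf.ones([4, 3])))
```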
Horovod Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, see the Horovod official repository.
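For orientation, the basic Horovod pattern for TensorFlow 2 looks roughly like the sketch below (simplified; refer to the Horovod documentation and this repository's scripts for the exact setup):

```python
import horovod.tensorflow as hvd
import tensorflow as tf

hvd.init()

# Pin each worker to a single GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

optimizer = tf.keras.optimizers.SGD(0.01 * hvd.size())  # scale the LR with the number of workers

@tf.function
def train_step(model, features, labels):
    with tf.GradientTape() as tape:
        loss = tf.keras.losses.binary_crossentropy(labels, model(features, training=True))
    # Average gradients across all workers before applying them.
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

# In a full training loop you would also broadcast the initial variables from rank 0.
```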
Hybrid-parallel training with Merlin Distributed Embeddings Our model uses Merlin Distributed Embeddings to implement efficient multi-GPU training. For details, see example sources in this repository or see the TensorFlow tutorial. For the detailed description of our multi-GPU approach, visit this section.
Multi-node training This repository supports multi-node training. For more information, refer to the multi-node section.
Mixed precision is the combined use of different numerical precisions in a computational method. Mixed precision training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in the Volta architecture, and continuing with the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3.4x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps:
1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
The ability to train deep learning networks with lower precision was introduced in the Pascal architecture and first supported in CUDA 8 in the NVIDIA Deep Learning SDK.
For information about:
Mixed precision training is turned off by default. To turn it on, issue the `--amp` flag to the `main.py` script.
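For readers who want to see what happens under the hood, a generic TensorFlow 2 mixed-precision setup looks roughly like the snippet below. This is illustrative only; in this repository the `--amp` flag takes care of it.

```python
import tensorflow as tf

# Compute in FP16 while keeping FP32 master weights.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

layer = tf.keras.layers.Dense(16)
print(layer.compute_dtype, layer.dtype)  # float16 float32

# Loss scaling preserves small gradient values in custom training loops.
optimizer = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.Adam())
```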
TensorFloat-32 (TF32) is the new math mode in NVIDIA A100 GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.
TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations.
For more information, refer to the TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
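If you need bit-exact FP32 results, TF32 can also be toggled explicitly in TensorFlow (illustrative only; by default no action is needed):

```python
import tensorflow as tf

# Disable TF32 to force full-precision FP32 matrix math on Ampere GPUs.
tf.config.experimental.enable_tensor_float_32_execution(False)
print(tf.config.experimental.tensor_float_32_execution_enabled())  # False
```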
Many recommendation models contain very large embedding tables. As a result, the model is often too large to fit onto a single device. This could be easily solved by training in a model-parallel way, using either the CPU or other GPUs as "memory donors". However, this approach is suboptimal as the "memory donor" devices' compute is not utilized. In this repository, we use the model-parallel approach for the Embedding Tables while employing a usual data parallel approach for the more compute-intensive MLPs and Dot Interaction layer. This way, we can train models much larger than what would normally fit into a single GPU while at the same time making the training faster by using multiple GPUs. We call this approach hybrid-parallel training.
To implement this approach, we use the Merlin Distributed Embeddings library. It provides a scalable model-parallel wrapper called `distributed_embeddings.dist_model_parallel`, which automatically distributes embedding tables across multiple GPUs. This way, embeddings can be scaled beyond a single GPU's memory capacity without complex code to handle cross-worker communication.
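A minimal sketch of the wrapping step is shown below. It loosely follows the library's public examples; the import path and class name here are assumptions, so consult the Merlin Distributed Embeddings documentation for the exact, current API.

```python
import tensorflow as tf
# Assumed import path; check the Merlin Distributed Embeddings docs for your version.
from distributed_embeddings import dist_model_parallel as dmp

class EmbeddingBlock(tf.keras.Model):
    def __init__(self, table_sizes, embedding_dim):
        super().__init__()
        tables = [tf.keras.layers.Embedding(cardinality, embedding_dim)
                  for cardinality in table_sizes]
        # The wrapper shards these tables across the available GPUs and handles
        # the model-parallel to data-parallel communication internally.
        self.embeddings = dmp.DistributedEmbedding(tables)

    def call(self, categorical_inputs):
        # Returns one embedding vector per categorical input feature.
        return self.embeddings(categorical_inputs)
```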
Under the hood, Merlin Distributed Embeddings uses a specific multi-GPU communication pattern called all-to-all to transition from the model-parallel to the data-parallel paradigm. In the original DLRM whitepaper, this is referred to as the "butterfly shuffle".
An example model using Hybrid Parallelism is shown in Figure 2. The compute intensive dense layers are run in data-parallel mode. The smaller embedding tables are run model-parallel, such that each smaller table is placed entirely on a single device. This is not suitable for larger tables that need more memory than can be provided by a single device. Therefore, those large tables are split into multiple parts and each part is run on a different GPU.
Figure 2. Hybrid parallelism with Merlin Distributed Embeddings.
In this repository we train models of three sizes: "small" (15.6 GiB), "large" (84.9 GiB) and "extra large" (421 GiB). The "small" model can be trained on a single V100-32GB GPU. The "large" model needs at least 8xV100-32GB GPUs, but each of the tables it uses can fit on a single GPU.
The "extra large" model, on the other hand, contains tables that do not fit into a single device and will be automatically split and stored across multiple GPUs by Merlin Distributed Embeddings.
We tested this approach by training a DLRM model on the Criteo Terabyte dataset with the frequency limiting option turned off (set to zero). The weights of the resulting model take 421 GiB. The largest table weighs 140 GiB. Here are the commands you can use to reproduce this:
# build and run the preprocessing container as in the Quick Start Guide
# then when preprocessing set the frequency limit to 0:
./prepare_dataset.sh DGX2 0
# build and run the training container same as in the Quick Start Guide
# then append options necessary for training very large embedding tables:
horovodrun -np 8 -H localhost:8 --mpi-args=--oversubscribe numactl --interleave=all -- python -u main.py --dataset_path /data/dlrm/ --amp --xla
When using this method on a DGX A100 with 8 A100-80GB GPUs and a large-enough dataset, it is possible to train a single embedding table of up to 600 GB. You can also use multi-node training (described below) to train even larger recommender systems.
This mode was used to train the 421 GiB "extra large" model in the DGX A100-80G performance section.
Multi-node training is supported. Depending on the exact interconnect hardware and model configuration, you might experience only a modest speedup with multi-node. Multi-node training can also be used to train larger models. For example, to train a 1.68 TB variant of DLRM on multi-node, you can run:
cmd='numactl --interleave=all -- python -u main.py --dataset_path /data/dlrm/full_criteo_data --amp --xla \
--embedding_dim 512 --bottom_mlp_dims 512,256,512' \
srun_flags='--mpi=pmix' \
cont=nvidia_dlrm_tf \
mounts=/data/dlrm:/data/dlrm \
sbatch -n 32 -N 4 -t 00:20:00 slurm_multinode.sh
Refer to the "Preprocessing with Spark" section for a detailed description of the Spark 3 GPU functionality.
This section describes how you can train the DeepLearningExamples RecSys models on your own datasets without changing the model or data loader and with similar performance to the one published in each repository. This can be achieved thanks to Dataset Feature Specification, which describes how the dataset, data loader and model interact with each other during training, inference and evaluation. Dataset Feature Specification has a consistent format across all recommendation models in NVIDIA's DeepLearningExamples repository, regardless of dataset file type and the data loader, giving you the flexibility to train RecSys models on your own datasets.
The Dataset Feature Specification consists of three mandatory and one optional section:
- feature_spec provides a base of features that may be referenced in other sections, along with their metadata. Format: dictionary (feature name) => (metadata name => metadata value)
- source_spec provides information necessary to extract features from the files that store them. Format: dictionary (mapping name) => (list of chunks)
- channel_spec determines how features are used. It is a mapping (channel name) => (list of feature names). Channels are model-specific magic constants. In general, data within a channel is processed using the same logic. Example channels: model output (labels), categorical ids, numerical inputs, user data, and item data.
- metadata is a catch-all, wildcard section: if there is some information about the saved dataset that does not fit into the other sections, you can store it here.
Data flow can be described abstractly: Input data consists of a list of rows. Each row has the same number of columns; each column represents a feature. The columns are retrieved from the input files, loaded, aggregated into channels and supplied to the model/training script.
FeatureSpec contains metadata to configure this process and can be divided into three parts:

- Specification of how the data is organized on disk (source_spec). It describes which feature (from feature_spec) is stored in which file and how the files are organized on disk.
- Specification of features (feature_spec). It describes a dictionary of features, where the key is the feature name and the values are the feature's characteristics, such as dtype and other metadata (for example, cardinalities for categorical features).
- Specification of the model's inputs and outputs (channel_spec). It describes a dictionary of the model's input channels, where the keys are channel names and the values are lists of features to be loaded into that channel. A model's channels are groups of data streams to which common model logic is applied, for example categorical/continuous data or user/item ids. Required/available channels depend on the model.
The FeatureSpec is a common form of description regardless of underlying dataset format, dataset data loader form and model.
The typical data flow is as follows:
Fig.1. Data flow in Recommender models in NVIDIA Deep Learning Examples repository. Channels of the model are drawn in green.
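As a schematic illustration of this flow (not the repository's actual data loader; the helper names are hypothetical and only CSV chunks are handled), a script could consume a FeatureSpec like this:

```python
import pandas as pd
import yaml

with open("feature_spec.yaml") as f:
    spec = yaml.safe_load(f)

def load_mapping(spec, mapping_name):
    """Read every chunk of a mapping and return one column per feature."""
    columns = {}
    for chunk in spec["source_spec"][mapping_name]:
        assert chunk["type"] == "csv"  # this sketch only handles CSV chunks
        frames = [pd.read_csv(path, header=None, names=chunk["features"])
                  for path in chunk["files"]]
        data = pd.concat(frames, ignore_index=True)
        for feature in chunk["features"]:
            columns[feature] = data[feature]
    return columns

def to_channels(spec, columns):
    """Group the loaded feature columns into the channels the model expects."""
    return {channel: [columns[name] for name in features]
            for channel, features in spec["channel_spec"].items()}

channels = to_channels(spec, load_mapping(spec, "train"))
```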
As an example, let's consider a Dataset Feature Specification for a small CSV dataset for some abstract model.
feature_spec:
  user_gender:
    dtype: torch.int8
    cardinality: 3 #M,F,Other
  user_age: #treated as numeric value
    dtype: torch.int8
  user_id:
    dtype: torch.int32
    cardinality: 2655
  item_id:
    dtype: torch.int32
    cardinality: 856
  label:
    dtype: torch.float32

source_spec:
  train:
    - type: csv
      features:
        - user_gender
        - user_age
      files:
        - train_data_0_0.csv
        - train_data_0_1.csv
    - type: csv
      features:
        - user_id
        - item_id
        - label
      files:
        - train_data_1.csv
  test:
    - type: csv
      features:
        - user_id
        - item_id
        - label
        - user_gender
        - user_age
      files:
        - test_data.csv

channel_spec:
  numeric_inputs:
    - user_age
  categorical_user_inputs:
    - user_gender
    - user_id
  categorical_item_inputs:
    - item_id
  label_ch:
    - label
The data contains five features: (user_gender, user_age, user_id, item_id, label). Their data types and necessary metadata are described in the feature specification section.
In the source mapping section, two mappings are provided: one describes the layout of the training data, the other of the testing data. The layout for the training data has been chosen arbitrarily to showcase the flexibility. The train mapping consists of two chunks. The first one contains user_gender and user_age, saved as a CSV, and is further broken down into two files. For specifics of the layout, refer to the following example and consult the glossary. The second chunk contains the remaining columns and is saved in a single file. Notice that the order of columns is different in the second chunk - this is fine, as long as the order matches the order in that file (that is, the columns in the .csv are also switched).
Let's break down the train source mapping. The table contains example data color-paired to the files containing it.
The channel spec describes how the data will be consumed. Four streams will be produced and available to the script/model. The feature specification does not specify what happens further: names of these streams are only lookup constants defined by the model/script. Based on this example, we can speculate that the model has three input channels: numeric_inputs, categorical_user_inputs, categorical_item_inputs, and one output channel: label. Feature names are internal to the FeatureSpec and can be freely modified.
Any recommendation model in NVIDIA Deep Learning Examples can be trained in one of three ways:

- One delivers an already preprocessed dataset in the Intermediary Format supported by the data loader used by the training script (different models use different data loaders), together with a FeatureSpec YAML file describing at least the specification of the dataset, features, and model channels (a minimal example of writing such a file is sketched below).
- One uses a transcoding script.
- One delivers the dataset in non-preprocessed form and uses the preprocessing scripts that are part of the model repository. In order to use the existing preprocessing scripts, the format of the dataset needs to match that of the original dataset. This way, the FeatureSpec file will be generated automatically, but the user will have the same preprocessing as in the original model repository.
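For the first option, you provide the FeatureSpec YAML yourself. A minimal sketch of generating one with PyYAML is shown below; the feature names, files, dtypes, and channel names are placeholders, and the required channels depend on the model you train:

```python
import yaml

# Placeholder FeatureSpec describing one label and one categorical feature.
feature_spec = {
    "feature_spec": {
        "label": {"dtype": "float32"},
        "cat_0": {"dtype": "int32", "cardinality": 1000},
    },
    "source_spec": {
        "train": [{"type": "csv",
                   "features": ["label", "cat_0"],
                   "files": ["train_data.csv"]}],
    },
    "channel_spec": {
        "label_ch": ["label"],
        "categorical_inputs": ["cat_0"],
    },
}

with open("feature_spec.yaml", "w") as f:
    yaml.safe_dump(feature_spec, f, sort_keys=False)
```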