The Deep Learning Recommendation Model (DLRM) is a recommendation model designed to make use of both categorical and numerical inputs. It was first described in Deep Learning Recommendation Model for Personalization and Recommendation Systems. This repository provides a reimplementation of the codebase provided originally here. The scripts provided enable you to train DLRM on the Criteo Terabyte Dataset.
Using the scripts provided here, you can efficiently train models that are too large to fit into a single GPU. This is because we use a hybrid-parallel approach, which combines model parallelism for the embedding tables with data parallelism for the Top MLP. This is explained in detail in the following sections.
This model uses a slightly different preprocessing procedure than the one found in the original implementation. You can find a detailed description of the preprocessing steps in the Dataset guidelines section.
Using DLRM you can train a high-quality general model for providing recommendations.
This model is trained with mixed precision using Tensor Cores on Volta, Turing, and NVIDIA Ampere GPU architectures. Therefore, researchers can get results up to 3.3x faster than training without Tensor Cores while experiencing the benefits of mixed precision training. It is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
DLRM accepts two types of features: categorical and numerical. For each categorical feature, an embedding table is used to provide a dense representation of each unique value. The numerical (dense) features enter the model and are transformed by a simple neural network referred to as the "bottom MLP". This part of the network consists of a series of linear layers with ReLU activations. The output of the bottom MLP and the embedding vectors are then fed into the "dot interaction" operation. The output of "dot interaction" is concatenated with the output of the bottom MLP and fed into the "top MLP", which is also a series of dense layers with activations. The model outputs a single number that can be interpreted as the likelihood of a certain user clicking an ad.
Figure 1. The architecture of DLRM.
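The sketch below is a minimal single-GPU illustration of this architecture in PyTorch. The layer widths, embedding dimension, and cardinalities are placeholders; the actual model in this repository is configured by the training scripts.

```python
import torch
from torch import nn

# A minimal sketch of the DLRM architecture described above.
# All sizes here are illustrative, not the repository's configuration.
class SimpleDLRM(nn.Module):
    def __init__(self, categorical_cardinalities, num_numerical, embedding_dim=16):
        super().__init__()
        # One embedding table per categorical feature.
        self.embeddings = nn.ModuleList(
            nn.Embedding(c, embedding_dim) for c in categorical_cardinalities
        )
        # Bottom MLP: transforms numerical features into embedding_dim space.
        self.bottom_mlp = nn.Sequential(
            nn.Linear(num_numerical, 64), nn.ReLU(),
            nn.Linear(64, embedding_dim), nn.ReLU(),
        )
        num_vectors = len(categorical_cardinalities) + 1
        num_interactions = num_vectors * (num_vectors - 1) // 2
        # Top MLP: consumes the pairwise interactions concatenated with the
        # bottom MLP output and produces a single click-likelihood logit.
        self.top_mlp = nn.Sequential(
            nn.Linear(num_interactions + embedding_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, numerical, categorical):
        bottom_out = self.bottom_mlp(numerical)                   # [B, D]
        vectors = [bottom_out] + [
            emb(categorical[:, i]) for i, emb in enumerate(self.embeddings)
        ]
        stacked = torch.stack(vectors, dim=1)                     # [B, F, D]
        # "Dot interaction": pairwise dot products between all vectors.
        interactions = torch.bmm(stacked, stacked.transpose(1, 2))
        f = stacked.shape[1]
        li, lj = torch.tril_indices(f, f, offset=-1)
        flat = interactions[:, li, lj]                            # [B, F*(F-1)/2]
        return self.top_mlp(torch.cat([flat, bottom_out], dim=1))
```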
This model supports the following features:
Feature | DLRM |
---|---|
Automatic mixed precision (AMP) | yes |
CUDA Graphs | yes |
Hybrid-parallel multi-GPU with all-2-all | yes |
Preprocessing on GPU with NVTabular | yes |
Preprocessing on GPU with Spark 3 | yes |
Automatic Mixed Precision (AMP) - enables mixed precision training without any changes to the code base, with loss scaling handled automatically; it is controlled by a single command-line flag.
CUDA Graphs - This feature allows launching multiple GPU operations through a single CPU operation. The result is a vast reduction in CPU overhead. The benefits are particularly pronounced when training with relatively small batch sizes. The CUDA Graphs feature has been available through a native PyTorch API starting from PyTorch v1.10.
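The following is a minimal sketch of graph capture and replay with the native PyTorch API; the model, shapes, and data are placeholders rather than this repository's configuration.

```python
import torch
import torch.nn.functional as F

# A minimal sketch of CUDA Graph capture/replay with the native PyTorch API
# (torch.cuda.CUDAGraph, available since v1.10).
model = torch.nn.Linear(64, 1).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
static_input = torch.randn(1024, 64, device="cuda")
static_target = torch.randn(1024, 1, device="cuda")

# Warm up in a side stream before capture, as the PyTorch docs recommend.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        opt.zero_grad(set_to_none=True)
        F.mse_loss(model(static_input), static_target).backward()
        opt.step()
torch.cuda.current_stream().wait_stream(s)

# Capture one full training iteration into a graph.
g = torch.cuda.CUDAGraph()
opt.zero_grad(set_to_none=True)
with torch.cuda.graph(g):
    static_loss = F.mse_loss(model(static_input), static_target)
    static_loss.backward()
    opt.step()

# Replay: refill the static buffers, then launch the whole captured
# iteration with a single CPU-side call.
for _ in range(10):
    static_input.copy_(torch.randn(1024, 64, device="cuda"))
    static_target.copy_(torch.randn(1024, 1, device="cuda"))
    g.replay()
```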
Multi-GPU training with PyTorch distributed - our model uses `torch.distributed` to implement efficient multi-GPU training with NCCL. For details, see example sources in this repository or see the PyTorch Tutorial.
Preprocessing on GPU with NVTabular - Criteo dataset preprocessing can be conducted using NVTabular. For more information on the framework, see the Announcing the NVIDIA NVTabular Open Beta with Multi-GPU Support and New Data Loaders.
Preprocessing on GPU with Spark 3 - Criteo dataset preprocessing can be conducted using Apache Spark 3.0. For more information on the framework and how to leverage GPUs for preprocessing, see the Accelerating Apache Spark 3.0 with GPUs and RAPIDS.
Mixed precision is the combined use of different numerical precisions in a computational method. Mixed precision training offers significant computational speedup by performing operations in the half-precision floating-point format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision – up to 3.3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps:

1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
The ability to train deep learning networks with lower precision was introduced in the Pascal architecture and first supported in CUDA 8 in the NVIDIA Deep Learning SDK.
For information about:

- How to train using mixed precision, see the Mixed Precision Training paper and Training With Mixed Precision documentation.
- Techniques used for mixed precision training, see the Mixed-Precision Training of Deep Neural Networks blog.
Mixed precision training is turned off by default. To turn it on, pass the `--amp` flag to the `main.py` script.
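For illustration, the sketch below shows both steps with PyTorch's native AMP API, which is the general mechanism behind the `--amp` flag; the exact wiring inside `main.py` may differ, and the model and data are placeholders.

```python
import torch
import torch.nn.functional as F

# A sketch of mixed precision training with PyTorch native AMP.
model = torch.nn.Linear(64, 1).cuda()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()        # step 2: loss scaling

for _ in range(10):
    x = torch.randn(1024, 64, device="cuda")
    y = torch.randn(1024, 1, device="cuda")
    opt.zero_grad(set_to_none=True)
    with torch.cuda.amp.autocast():         # step 1: run eligible ops in FP16
        loss = F.mse_loss(model(x), y)
    scaler.scale(loss).backward()           # scale the loss to preserve small gradients
    scaler.step(opt)                        # unscale gradients, then step
    scaler.update()
```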
TensorFloat-32 (TF32) is the new math mode in NVIDIA A100 GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.
TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models that require a high dynamic range for weights or activations.
For more information, refer to the TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
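For reference, TF32 can also be toggled explicitly from PyTorch code; note that the defaults have varied across PyTorch versions.

```python
import torch

# Explicit TF32 toggles in PyTorch (defaults vary by PyTorch version).
torch.backends.cuda.matmul.allow_tf32 = True  # TF32 for matrix multiplications
torch.backends.cudnn.allow_tf32 = True        # TF32 for cuDNN convolutions
```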
Many recommendation models contain very large embedding tables. As a result, the model is often too large to fit onto a single device. This could be easily solved by training in a model-parallel way, using either the CPU or other GPUs as "memory donors". However, this approach is suboptimal as the "memory donor" devices' compute is not utilized. In this repository, we use the model-parallel approach for the bottom part of the model (Embedding Tables + Bottom MLP) while using a usual data parallel approach for the top part of the model (Dot Interaction + Top MLP). This way we can train models much larger than what would normally fit into a single GPU while at the same time making the training faster by using multiple GPUs. We call this approach hybrid-parallel.
The transition from model-parallel to data-parallel in the middle of the neural net requires a specific multi-GPU communication pattern called all-2-all, which is available in our PyTorch 21.04-py3 NGC Docker container. In the original DLRM whitepaper, this is also referred to as a "butterfly shuffle".
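The sketch below illustrates this exchange with `torch.distributed.all_to_all`, assuming a `torchrun` launch and, for simplicity, one embedding table per rank; shapes are illustrative only.

```python
import os
import torch
import torch.distributed as dist

# A minimal sketch of the all-to-all exchange at the model-parallel to
# data-parallel transition. Each rank enters holding embedding output for
# the FULL batch but only its OWN table, and leaves holding its OWN batch
# shard for ALL tables.
dist.init_process_group(backend="nccl")       # assumes launch via torchrun
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
world = dist.get_world_size()

batch_shard, dim = 4, 8
# Embedding output on this rank: one table, whole global batch.
local_out = torch.randn(world * batch_shard, dim, device="cuda")
inputs = list(local_out.chunk(world, dim=0))  # one batch slice per destination rank
outputs = [torch.empty(batch_shard, dim, device="cuda") for _ in range(world)]
dist.all_to_all(outputs, inputs)
# outputs[r] now holds this rank's batch shard from rank r's table.
per_table = torch.stack(outputs, dim=1)       # [batch_shard, world, dim]
```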
In the example shown in this repository we train models of three sizes: "small" (~15 GB), "large" (~82 GB), and "xlarge" (~142 GB). We use the hybrid-parallel approach for the "large" and "xlarge" models, as they do not fit in a single GPU.
We use the following heuristic for dividing the work between the GPUs:

- The Bottom MLP is placed on GPU-0 and no embedding tables are placed on this device.
- The tables are sorted from the largest to the smallest.
- Set max_tables_per_gpu := ceil(number_of_embedding_tables / number_of_available_gpus).
- Repeat until all embedding tables have an assigned device:
  - Out of all the available GPUs, find the one with the largest amount of unallocated memory.
  - Place the largest unallocated embedding table on this GPU. Raise an exception if it does not fit.
  - If the number of embedding tables on this GPU is now equal to max_tables_per_gpu, remove this GPU from the list of available GPUs so that no more embedding tables will be placed on this GPU. This ensures the all2all communication is well balanced between all devices.

Please refer to the "Preprocessing" section for a detailed description of the Apache Spark 3.0 and NVTabular GPU functionality.
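The sketch below mirrors this heuristic in Python; it follows the description above rather than the repository's exact implementation, and assumes table and memory sizes are given in the same units (for example, bytes).

```python
import math

# A sketch of the placement heuristic above. `gpu_memory` lists the
# unallocated memory of each GPU eligible to host embedding tables
# (GPU-0, which hosts the Bottom MLP, would be excluded).
def assign_tables(table_sizes, gpu_memory):
    max_tables_per_gpu = math.ceil(len(table_sizes) / len(gpu_memory))
    free = dict(enumerate(gpu_memory))   # gpu id -> unallocated memory
    counts = {gpu: 0 for gpu in free}
    assignment = {}
    # Tables are processed from the largest to the smallest.
    for table, size in sorted(enumerate(table_sizes), key=lambda t: -t[1]):
        # Pick the available GPU with the largest amount of unallocated memory.
        gpu = max(free, key=free.get)
        if size > free[gpu]:
            raise RuntimeError(f"Embedding table {table} does not fit on any GPU")
        assignment[table] = gpu
        free[gpu] -= size
        counts[gpu] += 1
        # Cap tables per GPU so the all2all stays balanced across devices.
        if counts[gpu] == max_tables_per_gpu:
            del free[gpu]
    return assignment

# Example: four tables on two GPUs with 80 units of free memory each.
print(assign_tables([40, 30, 20, 10], [80, 80]))  # {0: 0, 1: 1, 2: 1, 3: 0}
```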
This section describes how you can train the DeepLearningExamples RecSys models on your own datasets without changing the model or data loader, with performance similar to that published in each repository. This is made possible by the Dataset Feature Specification, which describes how the dataset, data loader, and model interact with each other during training, inference, and evaluation. The Dataset Feature Specification has a consistent format across all recommendation models in NVIDIA's DeepLearningExamples repository, regardless of dataset file type and data loader, giving you the flexibility to train RecSys models on your own datasets.
The Dataset Feature Specification consists of three mandatory and one optional section:
- `feature_spec` provides a base of features that may be referenced in other sections, along with their metadata.
  Format: dictionary (feature name) => (metadata name => metadata value)
- `source_spec` provides information necessary to extract features from the files that store them.
  Format: dictionary (mapping name) => (list of chunks)
- `channel_spec` determines how features are used. It is a mapping (channel name) => (list of feature names).
  Channels are model-specific magic constants. In general, data within a channel is processed using the same logic. Example channels: model output (labels), categorical ids, numerical inputs, user data, and item data.
- `metadata` is a catch-all, wildcard section: if there is some information about the saved dataset that does not fit into the other sections, you can store it here.
Data flow can be described abstractly: Input data consists of a list of rows. Each row has the same number of columns; each column represents a feature. The columns are retrieved from the input files, loaded, aggregated into channels and supplied to the model/training script.
FeatureSpec contains metadata to configure this process and can be divided into three parts:
- Specification of how data is organized on disk (source_spec). It describes which feature (from feature_spec) is stored in which file and how files are organized on disk.
- Specification of features (feature_spec). Describes a dictionary of features, where keys are feature names and values are feature characteristics such as dtype and other metadata (for example, cardinalities for categorical features).
- Specification of the model's inputs and outputs (channel_spec). Describes a dictionary of the model's inputs, where keys specify the model's channel names and values specify lists of features to be loaded into that channel. A model's channels are groups of data streams to which common model logic is applied, for example categorical/continuous data or user/item ids. Required/available channels depend on the model.
The FeatureSpec is a common form of description regardless of underlying dataset format, dataset data loader form and model.
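As an illustration, a consumer of the FeatureSpec could resolve channels against the feature specification roughly as follows; the function name is hypothetical and the section names follow the description above.

```python
import yaml

# A sketch of consuming a Dataset Feature Specification: load the yaml,
# then resolve every channel to the metadata of the features it lists.
def load_feature_spec(path):
    with open(path) as f:
        spec = yaml.safe_load(f)
    features = spec["feature_spec"]
    # channel name -> ordered list of (feature name, feature metadata)
    channels = {
        channel: [(name, features[name]) for name in names]
        for channel, names in spec["channel_spec"].items()
    }
    return spec["source_spec"], channels
```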
The typical data flow is as follows:
Fig.1. Data flow in Recommender models in NVIDIA Deep Learning Examples repository. Channels of the model are drawn in green.
As an example, let's consider a Dataset Feature Specification for a small CSV dataset for some abstract model.
```yaml
feature_spec:
  user_gender:
    dtype: torch.int8
    cardinality: 3 #M,F,Other
  user_age: #treated as numeric value
    dtype: torch.int8
  user_id:
    dtype: torch.int32
    cardinality: 2655
  item_id:
    dtype: torch.int32
    cardinality: 856
  label:
    dtype: torch.float32

source_spec:
  train:
    - type: csv
      features:
        - user_gender
        - user_age
      files:
        - train_data_0_0.csv
        - train_data_0_1.csv
    - type: csv
      features:
        - user_id
        - item_id
        - label
      files:
        - train_data_1.csv
  test:
    - type: csv
      features:
        - user_id
        - item_id
        - label
        - user_gender
        - user_age
      files:
        - test_data.csv

channel_spec:
  numeric_inputs:
    - user_age
  categorical_user_inputs:
    - user_gender
    - user_id
  categorical_item_inputs:
    - item_id
  label_ch:
    - label
```
The data contains five features: (user_gender, user_age, user_id, item_id, label). Their data types and necessary metadata are described in the feature specification section.
In the source mapping section, two mappings are provided: one describes the layout of the training data, the other of the testing data. The layout of the training data has been chosen arbitrarily to showcase flexibility. The train mapping consists of two chunks. The first one contains user_gender and user_age, saved as a CSV, and is further broken down into two files. For specifics of the layout, refer to the following example and consult the glossary. The second chunk contains the remaining columns and is saved in a single file. Notice that the order of columns is different in the second chunk - this is alright, as long as the order matches the order in that file (that is, columns in the .csv are also switched).
Let's break down the train source mapping. The table contains example data color-paired to the files containing it.
The channel spec describes how the data will be consumed. Four streams will be produced and available to the script/model. The feature specification does not specify what happens further: the names of these streams are only lookup constants defined by the model/script. Based on this example, we can speculate that the model has three input channels: numeric_inputs, categorical_user_inputs, and categorical_item_inputs, and one output channel: label_ch. Feature names are internal to the FeatureSpec and can be freely modified.
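To make the mechanics concrete, here is a hypothetical sketch of how a loader could realize the train mapping from this example, assuming the spec above is saved as feature_spec.yaml; the repository's actual data loaders may work differently.

```python
import pandas as pd
import yaml

# Hypothetical sketch: read each chunk's files, concatenate the file parts
# by rows, join chunks by columns, then slice out channels.
with open("feature_spec.yaml") as f:
    spec = yaml.safe_load(f)

chunks = []
for chunk in spec["source_spec"]["train"]:
    assert chunk["type"] == "csv"
    parts = [
        pd.read_csv(path, header=None, names=chunk["features"])
        for path in chunk["files"]
    ]
    chunks.append(pd.concat(parts, axis=0, ignore_index=True))

table = pd.concat(chunks, axis=1)  # row i of every chunk describes the same example
numeric_inputs = table[spec["channel_spec"]["numeric_inputs"]]
labels = table[spec["channel_spec"]["label_ch"]]
```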
In order to train any recommendation model in NVIDIA Deep Learning Examples, one can follow one of three possible ways:

- One delivers an already preprocessed dataset in the Intermediary Format supported by the data loader used by the training script (different models use different data loaders), together with a FeatureSpec yaml file describing at least the specification of the dataset, features, and model channels.
- One uses a transcoding script.
- One delivers a dataset in non-preprocessed form and uses the preprocessing scripts that are a part of the model repository. In order to use the already existing preprocessing scripts, the format of the dataset needs to match that of the original dataset. This way, the FeatureSpec file will be generated automatically, but the user will have the same preprocessing as in the original model repository.