
Wide & Deep for TensorFlow2

Wide & Deep Recommender model.
NVIDIA Deep Learning Examples
Latest Version: April 4, 2023
Compressed Size: 63.12 KB

This resource uses open-source code maintained on GitHub (see the Quick Start Guide section) and is available for download from NGC.

Recommendation systems drive engagement on many of the most popular online platforms. As the volume of data available to power these systems grows exponentially, data scientists are increasingly turning from traditional machine learning methods to highly expressive deep learning models to improve the quality of their recommendations.

Google's Wide & Deep Learning for Recommender Systems has emerged as a popular model for Click Through Rate (CTR) prediction tasks thanks to its power of generalization (deep part) and memorization (wide part). The difference between this Wide & Deep Recommender Model and the model from the paper is the size of the deep part of the model. Originally, in Google's paper, the fully connected part was three layers of 1,024, 512, and 256 neurons. Our model consists of five layers, each of 1,024 neurons.

This model is trained with mixed precision using Tensor Cores on NVIDIA Volta and NVIDIA Ampere GPU architectures. Therefore, researchers can get results 3.5 times faster than training without Tensor Cores while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

Model architecture

Wide & Deep refers to a class of networks that use the output of two parts working in parallel - wide model and deep model - to make a binary prediction of CTR. The wide model is a linear model of features together with their transforms. The deep model is a series of five hidden MLP layers of 1,024 neurons. The model can handle both numerical continuous features as well as categorical features represented as dense embeddings. The architecture of the model is presented in Figure 1.

Figure 1. The architecture of the Wide & Deep model.
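To make the two-branch layout concrete, here is a minimal, hedged Keras sketch of the architecture described above. The layer sizes follow this description; the input shapes and names are illustrative placeholders rather than the repository's actual API.

    import tensorflow as tf

    def build_wide_and_deep(num_numeric=13, total_embedding_dim=1235, num_wide_features=16):
        # Deep branch: numeric features concatenated with dense embeddings,
        # passed through five hidden layers of 1,024 neurons each.
        deep_in = tf.keras.Input(shape=(num_numeric + total_embedding_dim,), name="deep_features")
        x = deep_in
        for i in range(5):
            x = tf.keras.layers.Dense(1024, activation="relu", name=f"mlp_{i}")(x)
        deep_logit = tf.keras.layers.Dense(1, name="deep_logit")(x)

        # Wide branch: a linear model over the size-1 categorical embeddings.
        wide_in = tf.keras.Input(shape=(num_wide_features,), name="wide_features")
        wide_logit = tf.keras.layers.Dense(1, name="wide_logit")(wide_in)

        # The two logits are summed and squashed into a click probability.
        logit = tf.keras.layers.Add()([deep_logit, wide_logit])
        prob = tf.keras.layers.Activation("sigmoid", name="ctr")(logit)
        return tf.keras.Model(inputs=[deep_in, wide_in], outputs=prob)

    model = build_wide_and_deep()
    model.compile(optimizer="adam", loss="binary_crossentropy")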

Applications and dataset

As a reference dataset, we used a subset of the features engineered by the 19th-place finisher in the Kaggle Outbrain Click Prediction Challenge. The competition challenged participants to predict the likelihood that a particular ad on a website's display would be clicked. Competitors were given information about the user, display, document, and ad in order to train their models. More information can be found here.

Default configuration

The Outbrain dataset is preprocessed to produce the features that are input to the model. To give context to the acceleration numbers described below, some important properties of our features and model are as follows.


  • Request Level:

    • Five scalar numeric features dtype=float32
    • Eight one-hot categorical features dtype=int32
    • Three multi-hot categorical features dtype=int32
    • 11 trainable embeddings of (dimension, cardinality of categorical variable, hotness for multi-hot):
      (128, 300000), (19, 4), (128, 100000), (64, 4000), (64, 1000), (64, 2500), (64, 300), (64, 2000), (64, 350, 3), (64, 10000, 3), (64, 100, 3)
    • 11 trainable embeddings for the wide part of size 1 (serving as an embedding from the categorical to scalar space for input to the wide portion of the model)
  • Item Level:

    • Eight scalar numeric features dtype=float32
    • Five one-hot categorical features dtype=int32
    • Five trainable embeddings of (dimension, cardinality of categorical variable): (128,250000), (64,2500), (64,4000), (64,1000), (128,5000)
    • Five trainable embeddings for the wide part of size 1 (working as trainable one-hot embeddings)

Features describe both the user (Request Level features) and the item (Item Level features).
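As an illustration of how the (dimension, cardinality) pairs above can translate into trainable embeddings, the following hedged sketch builds both the dense embeddings for the deep part and the size-1 embeddings for the wide part; the variable names are ours, and only a subset of the pairs is shown.

    import tensorflow as tf

    # A subset of the (dimension, cardinality) pairs listed above.
    deep_embedding_specs = [(128, 300000), (19, 4), (128, 100000), (64, 4000)]

    # Deep part: each categorical feature gets a dense embedding table.
    deep_embeddings = [
        tf.keras.layers.Embedding(input_dim=cardinality, output_dim=dim)
        for dim, cardinality in deep_embedding_specs
    ]

    # Wide part: output dimension 1, mapping each category to a single
    # trainable scalar that feeds the linear (wide) portion of the model.
    wide_embeddings = [
        tf.keras.layers.Embedding(input_dim=cardinality, output_dim=1)
        for _, cardinality in deep_embedding_specs
    ]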

  • Model:
    • Input dimension is 29 (16 categorical and 13 numerical features)
    • Total embedding dimension is 1235
    • Five hidden layers, each with a size of 1024
    • Total number of model parameters is ~92M
    • Output dimension is 1 (y is the probability of click given Request-level and Item-level features)
    • Loss function: Binary Crossentropy

For more information about feature preprocessing, go to Dataset preprocessing.

Model accuracy metric

Model accuracy is defined with the MAP@12 metric, which follows the way model accuracy was assessed in the original Kaggle Outbrain Click Prediction Challenge. In this repository, the leaked clicked ads are not taken into account, since in an industrial setup data scientists do not have access to leaked information when training the model. For more information about the data leak in the Kaggle Outbrain Click Prediction Challenge, visit this blog post by the 19th-place finisher in that competition.

The training and evaluation scripts also report loss (BCE) values.
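For reference, here is a minimal sketch of MAP@12 in the Outbrain setting, where each display has exactly one clicked ad, so average precision at 12 reduces to the reciprocal rank of that ad within the top 12 predictions. The function name and input layout are assumptions for illustration, not the repository's implementation.

    import numpy as np

    def map_at_12(scores_per_display, clicked_index_per_display):
        """scores_per_display: list of 1-D arrays of per-ad model scores;
        clicked_index_per_display: index of the clicked ad in each array."""
        total = 0.0
        for scores, clicked in zip(scores_per_display, clicked_index_per_display):
            top12 = np.argsort(-scores)[:12]      # top-12 ads by predicted score
            hits = np.where(top12 == clicked)[0]
            if hits.size:                         # clicked ad made the top 12
                total += 1.0 / (hits[0] + 1)      # precision at its rank
        return total / len(scores_per_display)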

Feature support matrix

This model supports the following features:

Feature                              Wide & Deep
Horovod Multi-GPU (NCCL)             Yes
Accelerated Linear Algebra (XLA)     Yes
Automatic mixed precision (AMP)      Yes


Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, refer to the official Horovod repository.

Multi-GPU training with Horovod

Our model uses Horovod to implement efficient multi-GPU training with NCCL. For details, refer to the example sources in this repository or to the TensorFlow tutorial.
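The following is a minimal sketch of the standard Horovod TF2 setup (not the repository's exact training script): each worker pins one GPU, scales the learning rate by the number of workers, and wraps the optimizer so gradients are averaged with NCCL allreduce.

    import horovod.tensorflow.keras as hvd
    import tensorflow as tf

    hvd.init()

    # Pin each worker process to a single GPU.
    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

    # Scale the learning rate by the number of workers and wrap the
    # optimizer so gradients are averaged across workers via allreduce.
    opt = tf.keras.optimizers.Adam(learning_rate=0.001 * hvd.size())
    opt = hvd.DistributedOptimizer(opt)

A script set up this way is typically launched with horovodrun, with one process per GPU.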

XLA is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. Enabling XLA results in improvements to speed and memory usage: most internal benchmarks run ~1.1-1.5x faster after XLA is enabled. For more information on XLA, visit the official XLA page.
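Both standard TensorFlow 2 mechanisms for turning XLA on are sketched below; the repository may expose its own flag instead.

    import tensorflow as tf

    # Option 1: JIT-compile a specific function with XLA.
    @tf.function(jit_compile=True)
    def dense_step(x, w):
        return tf.linalg.matmul(x, w)

    # Option 2: enable XLA auto-clustering for the whole program.
    tf.config.optimizer.set_jit(True)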

Mixed precision training

Mixed precision is the combined use of different numerical precisions in a computational method. Mixed precision training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training previously required two steps:

  1. Porting the model to use the FP16 data type where appropriate.
  2. Adding loss scaling to preserve small gradient values.

The ability to train deep learning networks with lower precision was introduced in the Pascal architecture and first supported in CUDA 8 in the NVIDIA Deep Learning SDK.
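In TensorFlow 2, the Keras mixed precision API covers both steps: the global policy casts compute to FP16 while keeping variables in FP32, and the loss-scale optimizer preserves small gradient values. A minimal sketch for custom training loops (Model.fit applies loss scaling automatically once the policy is set):

    import tensorflow as tf

    # Step 1: compute in float16 where appropriate, keep variables in float32.
    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    # Step 2: dynamic loss scaling to preserve small gradient values.
    opt = tf.keras.optimizers.Adam()
    opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)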

For information on the influence of mixed precision training on model accuracy in training and inference, go to Training accuracy results.

Enabling mixed precision

To enable mixed precision for Wide & Deep training, add the --amp flag to the training script. Refer to the Quick Start Guide for more information.

Enabling TF32

TensorFloat-32 (TF32) is a math mode available on NVIDIA A100 GPUs for handling tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.

TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models requiring high dynamic ranges for weights or activations.

For more information, refer to the TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x blog post.

TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
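TensorFlow 2 exposes a standard switch for TF32 execution; a short sketch (independent of this repository):

    import tensorflow as tf

    # TF32 is on by default on Ampere GPUs; it can be toggled explicitly.
    tf.config.experimental.enable_tensor_float_32_execution(True)
    # tf.config.experimental.enable_tensor_float_32_execution(False)  # force full FP32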


Request level features: Features that describe the person and context for which we wish to make recommendations.

Item level features: Features that describe the objects we are considering recommending.

BYO dataset functionality overview

This section describes how you can train the DeepLearningExamples RecSys models on your own datasets without changing the model or data loader, and with performance similar to what is published in each repository. This can be achieved thanks to the Dataset Feature Specification, which describes how the dataset, data loader, and model interact with each other during training, inference, and evaluation. The Dataset Feature Specification has a consistent format across all recommendation models in NVIDIA's DeepLearningExamples repository, regardless of dataset file type and data loader, giving you the flexibility to train RecSys models on your own datasets.


The Dataset Feature Specification consists of three mandatory sections and one optional section:

feature_spec provides a base of features that may be referenced in other sections, along with their metadata. Format: dictionary (feature name) => (metadata name => metadata value)

source_spec provides information necessary to extract features from the files that store them. Format: dictionary (mapping name) => (list of chunks)

  • Mappings are used to represent different versions of the dataset (think: train/validation/test, k-fold splits). A mapping is a list of chunks.
  • Chunks are subsets of features that are grouped together for saving. For example, some formats may constrain data saved in one file to a single data type. In that case, each data type would correspond to at least one chunk. Another example where this might be used is to reduce file size and enable more parallel loading. Chunk description is a dictionary of three keys:
    • type provides information about the format in which the data is stored. Not all formats are supported by all models.
    • features is a list of features that are saved in a given chunk. The order of this list may matter: for some formats, it is crucial for assigning read data to the proper feature.
    • files is a list of paths to files where the data is saved. For Feature Specification in yaml format, these paths are assumed to be relative to the yaml file's directory. The order of this list matters: it is assumed that rows 1 to i appear in the first file, rows i+1 to j in the next one, etc.

channel_spec determines how features are used. It is a mapping (channel name) => (list of feature names).

Channels are model-specific magic constants. In general, data within a channel is processed using the same logic. Example channels: model output (labels), categorical ids, numerical inputs, user data, and item data.

metadata is a catch-all, wildcard section: If there is some information about the saved dataset that does not fit into the other sections, you can store it here.
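To make the four sections concrete, here is a hedged sketch that loads a Feature Specification yaml (PyYAML assumed) and walks each section; the file name is a placeholder, and the section names follow the description above.

    import yaml

    with open("feature_spec.yaml") as f:
        spec = yaml.safe_load(f)

    # feature_spec: (feature name) => (metadata name => metadata value)
    for name, metadata in spec["feature_spec"].items():
        print(name, metadata.get("dtype"), metadata.get("cardinality"))

    # source_spec: (mapping name) => (list of chunks), e.g. train/test
    for mapping_name, chunks in spec["source_spec"].items():
        for chunk in chunks:
            print(mapping_name, chunk["type"], chunk["features"], chunk["files"])

    # channel_spec: (channel name) => (list of feature names)
    for channel, features in spec["channel_spec"].items():
        print(channel, features)

    # metadata: optional catch-all section
    print(spec.get("metadata", {}))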

Dataset feature specification

Data flow can be described abstractly: Input data consists of a list of rows. Each row has the same number of columns; each column represents a feature. The columns are retrieved from the input files, loaded, aggregated into channels, and supplied to the model/training script.

FeatureSpec contains metadata to configure this process and can be divided into three parts:

  • Specification of how data is organized on disk (source_spec). It describes which feature (from feature_spec) is stored in which file and how files are organized on disk.

  • Specification of features (feature_spec). Describes a dictionary of features, where key is the feature name and values are the features' characteristics, such as dtype and other metadata (for example, cardinalities for categorical features)

  • Specification of model's inputs and outputs (channel_spec). Describes a dictionary of model's inputs where keys specify model channel's names and values specify lists of features to be loaded into that channel. Model's channels are groups of data streams to which common model logic is applied, for example, categorical/continuous data, and user/item ids. Required/available channels depend on the model

The FeatureSpec is a common form of description regardless of underlying dataset format, dataset data loader form, and model.

Data flow in NVIDIA Deep Learning Examples recommendation models

The typical data flow is as follows:

  • S.0. Original dataset is downloaded to a specific folder.
  • S.1. Original dataset is preprocessed into Intermediary Format. For each model, the preprocessing is done differently, using different tools. The Intermediary Format also varies (for example, for WideAndDeep TF2, the Intermediary Format is parquet).
  • S.2. The Preprocessing Step outputs Intermediary Format with the dataset split into training and validation/testing parts, along with the Dataset Feature Specification yaml file. Metadata in the preprocessing step is calculated automatically.
  • S.3. Intermediary Format data together with the Dataset Feature Specification are fed into the training/evaluation scripts. The data loader reads the Intermediary Format and feeds the data into the model according to the description in the Dataset Feature Specification.
  • S.4. The model is trained and evaluated.

Figure 2. Data flow in recommender models in the NVIDIA Deep Learning Examples repository. Channels of the model are drawn in green.

Example of dataset feature specification

For example, let's consider a Dataset Feature Specification for a small CSV dataset for some abstract model.

feature_spec:
  user_gender:
    dtype: torch.int8
    cardinality: 3 #M,F,Other
  user_age: #treated as numeric value
    dtype: torch.int8
  user_id:
    dtype: torch.int32
    cardinality: 2655
  item_id:
    dtype: torch.int32
    cardinality: 856
  label:
    dtype: torch.float32

source_spec:
  train:
    - type: csv
      features:
        - user_gender
        - user_age
      files:
        - train_data_0_0.csv
        - train_data_0_1.csv
    - type: csv
      features:
        - user_id
        - item_id
        - label
      files:
        - train_data_1.csv
  test:
    - type: csv
      features:
        - user_id
        - item_id
        - label
        - user_gender
        - user_age
      files:
        - test_data.csv

channel_spec:
  numeric_inputs:
    - user_age
  categorical_user_inputs:
    - user_gender
    - user_id
  categorical_item_inputs:
    - item_id
  label:
    - label
The data contains five features: (user_gender, user_age, user_id, item_id, label). Their data types and necessary metadata are described in the feature specification section.

In the source mapping section, two mappings are provided: one describes the layout of the training data, and the other the layout of the testing data. The layout of the training data has been chosen arbitrarily to showcase flexibility. The train mapping consists of two chunks. The first one contains user_gender and user_age, saved as a CSV and broken down further into two files. For specifics of the layout, refer to the example above and consult the glossary. The second chunk contains the remaining columns and is saved in a single file. Notice that the order of columns in the second chunk is different - this is fine, as long as the order matches the order in that file (that is, the columns in the .csv are also switched).


The channel spec describes how the data will be consumed. Four streams will be produced and made available to the script/model. The feature specification does not specify what happens to the data afterwards: the names of these streams are only lookup constants defined by the model/script. Based on this example, we can infer that the model has three input channels (numeric_inputs, categorical_user_inputs, categorical_item_inputs) and one output channel (label). Feature names are internal to the FeatureSpec and can be freely modified.

BYO dataset functionality

In order to train any recommendation model in NVIDIA Deep Learning Examples, one can follow one of three approaches:

  • One delivers an already preprocessed dataset in the Intermediary Format supported by the data loader used by the training script (different models use different data loaders), together with a FeatureSpec yaml file describing at least the specification of the dataset, features, and model channels.

  • One uses a transcoding script.

  • One delivers a dataset in non-preprocessed form and uses the preprocessing scripts that are part of the model repository. To use the existing preprocessing scripts, the format of the dataset needs to match that of one of the original datasets. This way, the FeatureSpec file will be generated automatically, but the user will have the same preprocessing as in the original model repository.