This resource uses open-source code maintained on GitHub (see the quick-start-guide section) and is available for download from NGC.
This repository provides a PyTorch implementation of the Transformer-XL model from the paper Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context. Transformer-XL is a transformer-based language model with segment-level recurrence and a novel relative positional encoding. These enhancements help the model capture longer-term dependencies by attending to tokens from multiple previous segments.
Our implementation is based on the codebase published by the authors of the Transformer-XL paper, but uses a modified model architecture. The modifications were made to achieve better hardware utilization and to take advantage of Tensor Cores. Similar modifications were also proposed in the implementation available at github.com/cybertronai/transformer-xl. Refer to the Model architecture section for more details.
This model is trained with mixed precision using Tensor Cores on the NVIDIA Volta and NVIDIA Ampere GPU architectures and evaluated on the Volta, Turing, and NVIDIA Ampere GPU architectures. As a result, researchers can get results up to 2.5x faster than training without Tensor Cores, while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.
The Transformer-XL "base" model for WikiText-103 dataset available in this repository was modified to use the following hyperparameter values:
Hyperparameter | Description | Original setting for the base model | Our modification for the base model |
---|---|---|---|
d_model | hidden size | 410 | 512 |
n_head | number of attention heads | 10 | 8 |
d_head | size of each attention head | 41 | 64 |
d_inner | hidden size in fully-connected layers | 2100 | 2048 |
tgt_len | number of tokens to predict during training | 150 | 192 |
mem_len | number of tokens cached from previous iterations during training | 150 | 192 |
The changes described above were made to align certain hyperparameters with powers of two; with this modification, the model achieves better hardware utilization and, therefore, higher training throughput.
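As a rough illustration of why this alignment matters, the snippet below (not part of the repository) checks that the modified base hyperparameters are multiples of eight, which is the general alignment guideline for efficient FP16 Tensor Core GEMMs; the values themselves are taken from the table above.

```python
# Illustrative only: values copied from the table above. The multiple-of-8 check reflects
# the general Tensor Core alignment guideline, not a check performed by the repository.
modified_base = {
    "d_model": 512,   # hidden size
    "n_head": 8,      # number of attention heads
    "d_head": 64,     # size of each attention head
    "d_inner": 2048,  # hidden size in fully-connected layers
    "tgt_len": 192,   # tokens to predict during training
    "mem_len": 192,   # tokens cached from previous iterations
}

for name, value in modified_base.items():
    assert value % 8 == 0, f"{name}={value} is not a multiple of 8"
print("all modified base hyperparameters are multiples of 8")
```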
The Transformer-XL "large" model for WikiText-103 dataset available in this repository uses the original hyperparameters from the reference implementation.
The following table lists the hyperparameters for the large and the base Transformer-XL models for WikiText-103 dataset available in this repository.
Hyperparameter | Description | Base model | Large model |
---|---|---|---|
n_layer | number of layers | 16 | 18 |
d_model | hidden size | 512 | 1024 |
n_head | number of attention heads | 8 | 16 |
d_head | size of each attention head | 64 | 64 |
d_inner | inner hidden size in fully-connected layers | 2048 | 4096 |
dropout | dropout | 0.1 | 0.2 |
dropatt | dropout after softmax in the attention | 0.0 | 0.2 |
lr | base learning rate | 0.01 | 0.01 |
eta_min | minimum learning rate (for cosine decay) | 0.001 | 0.0001 |
max_step | number of training steps | 40,000 | 100,000 |
warmup_step | number of learning rate warmup steps | 1,000 | 16,000 |
batch_size | training batch size | 256 | 128 |
tgt_len | number of tokens to predict during training | 192 | 384 |
mem_len | number of tokens cached from previous iterations during training | 192 | 384 |
The Transformer-XL model addresses the limitations of vanilla transformer-based language models, which can only use a relatively short context bounded by the segment length. Transformer-XL introduces a recurrence mechanism that reuses cached hidden states from previous segments. During training, the context consists of a concatenation of the current segment's hidden states and the cached states from previous iterations. Gradients are backpropagated only through the current segment, but the model can still take advantage of the extra information stored in the cache and is therefore able to model long-term dependencies.
An illustration of the recurrence mechanism taken from the Transformer-XL paper is shown below.
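In addition to the illustration, the snippet below gives a minimal sketch of this caching scheme (it is not the repository's implementation): per-layer hidden states from the current segment are appended to the cache, truncated to the memory length, and detached so that gradients never flow into previous segments.

```python
import torch

def update_memory(prev_mems, curr_hiddens, mem_len):
    """Sketch of segment-level recurrence: cache the most recent `mem_len` positions.

    prev_mems / curr_hiddens: one tensor of shape (seq_len, batch, d_model) per layer.
    """
    new_mems = []
    with torch.no_grad():
        for prev, curr in zip(prev_mems, curr_hiddens):
            cat = torch.cat([prev, curr], dim=0)
            # keep only the last `mem_len` positions; detach so no gradients reach the cache
            new_mems.append(cat[-mem_len:].detach())
    return new_mems

# During the forward pass, each layer attends over the concatenation of its cached states
# and the current segment, which is how the effective context grows beyond one segment.
```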
The following features were implemented in this model: mixed precision training with Apex AMP at the O2 optimization level and with dynamic loss scaling.
The following features are supported by this model:
Feature | Transformer-XL |
---|---|
Apex AMP | Yes |
PyTorch DistributedDataParallel | Yes |
LAMB | Yes |
Inference with TorchScript | Yes |
Multi-node training | Yes |
Apex AMP - a tool that enables Tensor Core-accelerated training. Refer to the Enabling mixed precision section for more details.
PyTorch DistributedDataParallel - a module wrapper that enables easy multiprocess distributed data-parallel training.
LAMB - Layerwise Adaptive Moments Based optimizer, a large-batch optimization technique that helps accelerate the training of deep neural networks using large minibatches.
TorchScript - a way to create serializable and optimizable models from PyTorch code. Any TorchScript program can be saved from a Python process and loaded in a process that has no Python dependency, as illustrated in the sketch after this list.
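The following is a minimal, self-contained sketch of TorchScript serialization using a toy module; the module, file name, and tensor shapes are illustrative assumptions, not the repository's export path.

```python
import torch

class TinyLM(torch.nn.Module):
    """Toy stand-in module used only to demonstrate scripting and serialization."""
    def __init__(self, vocab_size: int = 1000, d_model: int = 64):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, d_model)
        self.proj = torch.nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.proj(self.embed(tokens))

scripted = torch.jit.script(TinyLM())      # compile the module to TorchScript
torch.jit.save(scripted, "tiny_lm.pt")     # serialize it to disk
restored = torch.jit.load("tiny_lm.pt")    # can also be loaded from C++ via libtorch
logits = restored(torch.randint(0, 1000, (8,)))
```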
Mixed precision is the combined use of different numerical precisions in a computational method. Mixed precision training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in the Volta architecture, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training previously required two steps:
1. Porting the model to use the FP16 data type where appropriate.
2. Adding loss scaling to preserve small gradient values.
The ability to train deep learning networks with lower precision was introduced in the Pascal architecture and first supported in CUDA 8 in the NVIDIA Deep Learning SDK.
For more information about how to train using mixed precision, refer to the Mixed Precision Training paper and the Training With Mixed Precision documentation.
The pytorch/train.py training script launches mixed precision training with Tensor Cores if the --fp16 flag is set.
Mixed precision is enabled in PyTorch by using the Automatic Mixed Precision (AMP) library from APEX, which casts variables to half-precision upon retrieval while storing variables in single-precision format. Furthermore, to preserve small gradient magnitudes in backpropagation, a loss scaling step must be included when applying gradients. In PyTorch, loss scaling can be easily applied by using the scale_loss() method provided by AMP. The scaling value can be dynamic or fixed.
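As a sketch of how the scaling mode can be selected, amp.initialize accepts a loss_scale argument that takes either the string 'dynamic' or a fixed numeric value; the toy model and optimizer below stand in for the real Transformer-XL objects.

```python
import torch
from apex import amp  # requires NVIDIA Apex to be installed

# Sketch only: a toy model and optimizer stand in for the real Transformer-XL objects.
model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# Dynamic loss scaling (the default for opt_level='O2') adjusts the scale automatically:
model, optimizer = amp.initialize(model, optimizer, opt_level='O2', loss_scale='dynamic')

# A fixed scaling value can be used instead:
# model, optimizer = amp.initialize(model, optimizer, opt_level='O2', loss_scale=128.0)
```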
For an in-depth walkthrough of AMP, refer to the sample usage in the APEX documentation. APEX is a PyTorch extension that contains utility libraries, such as AMP, which require minimal network code changes to leverage Tensor Core performance.
The following steps were needed to enable mixed precision training in Transformer-XL:

1. Import AMP from APEX:

        from apex import amp

2. Initialize AMP and wrap the model and the optimizer before starting the training:

        model, optimizer = amp.initialize(
            model,
            optimizer,
            opt_level='O2',
            )

3. Apply the scale_loss context manager when computing gradients:

        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward()

4. Apply gradient clipping on the single-precision master weights:

        torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.clip)
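Put together, a single training iteration then looks roughly like the sketch below; model, optimizer, criterion, train_loader, and args.clip are assumed to exist, and AMP is assumed to have been initialized as in step 2.

```python
import torch
from apex import amp

# Sketch of one epoch's inner loop. `model`, `optimizer`, `criterion`, `train_loader`,
# and `args.clip` are assumed to exist and amp.initialize() has already been called.
for data, target in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(data), target)
    with amp.scale_loss(loss, optimizer) as scaled_loss:
        scaled_loss.backward()  # backpropagate on the scaled loss
    # clip gradients on the FP32 master weights maintained by AMP
    torch.nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.clip)
    optimizer.step()
```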
TensorFloat-32 (TF32) is the new math mode in NVIDIA A100 GPUs for handling the matrix math, also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.
TF32 Tensor Cores can speed up networks that use FP32, typically with no loss of accuracy. TF32 is more robust than FP16 for models that require a high dynamic range for weights or activations.
For more information, refer to the TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x blog post.
TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
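PyTorch exposes switches for TF32 execution; the snippet below shows how they can be inspected or toggled (the default values depend on the PyTorch version shipped in the container).

```python
import torch

# Inspect the current TF32 settings (effective only on Ampere or newer GPUs):
print(torch.backends.cuda.matmul.allow_tf32)  # TF32 for matrix multiplications
print(torch.backends.cudnn.allow_tf32)        # TF32 for cuDNN convolutions

# Explicitly enable TF32 for both paths:
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True
```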