
ELECTRA for TensorFlow2

Description
ELECTRA is a method of pre-training language representations that outperforms existing techniques on a wide array of NLP tasks.
Publisher
NVIDIA Deep Learning Examples
Latest Version
20.07.0
Modified
April 4, 2023
Compressed Size
62.65 KB

This resource uses open-source code maintained on GitHub (see the quick-start-guide section) and available for download from NGC.

ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) is a novel pre-training method for language representations that outperforms existing techniques, given the same compute budget, on a wide array of Natural Language Processing (NLP) tasks. This model is based on the paper ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators. NVIDIA's implementation of ELECTRA is an optimized version of the Hugging Face implementation, leveraging mixed precision arithmetic and Tensor Cores on Volta, Turing, and NVIDIA Ampere GPU architectures for faster training times with state-of-the-art accuracy.

This repository contains scripts to interactively launch data download, training, benchmarking, and inference routines in a Docker container for pre-training on your own dataset (Wikipedia and BookCorpus are shown as an example) and for fine-tuning on tasks such as question answering. The major differences between the original implementation described in the paper and this version of ELECTRA are as follows:

  • Scripts to download Wikipedia and BookCorpus datasets
  • Scripts to preprocess downloaded data or a custom corpus into inputs and targets for pre-training in a modular fashion
  • Automatic mixed precision (AMP) support and optimized for performance
  • Multi-GPU and Multi-node training support with push-button scripts to reach state-of-the-art accuracy and performance.

Other publicly available implementations of ELECTRA include:

  1. Hugging Face
  2. Google's implementation

This model is trained with mixed precision using Tensor Cores on Volta, Turing, and the NVIDIA Ampere GPU architectures. Additionally, this model provides push-button solutions to pre-training, fine-tuning, and inference on a corpus of choice. As a result, researchers can get results up to 4x faster than training without Tensor Cores. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

Model architecture

ELECTRA is a combination of two Transformer models: a generator and a discriminator. The generator's role is to replace tokens in a sequence, and is therefore trained as a masked language model. The discriminator, which is the model we are interested in, tries to identify which tokens were replaced by the generator in the sequence. Both generator and discriminator use the same architecture as the encoder of the Transformer. The encoder is simply a stack of Transformer blocks, which consist of a multi-head attention layer followed by successive stages of feed-forward networks and layer normalization. The multi-head attention layer performs self-attention on multiple input representations.

Figure 1-1
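
As a rough illustration of that layout, the sketch below builds a toy generator and discriminator from stock tf.keras layers (TensorFlow 2.4+ is assumed for MultiHeadAttention and the gelu activation). The layer counts and sizes are illustrative choices, not the exact hyperparameters or code used by this resource.

    import tensorflow as tf

    def transformer_encoder(num_layers, hidden_size, num_heads, ffn_size, vocab_size, seq_len):
        """A stack of Transformer blocks: multi-head self-attention followed by a
        feed-forward network, each with a residual connection and layer normalization."""
        token_ids = tf.keras.Input(shape=(seq_len,), dtype=tf.int32)
        x = tf.keras.layers.Embedding(vocab_size, hidden_size)(token_ids)
        for _ in range(num_layers):
            attn = tf.keras.layers.MultiHeadAttention(
                num_heads=num_heads, key_dim=hidden_size // num_heads)(x, x)
            x = tf.keras.layers.LayerNormalization()(x + attn)
            ffn = tf.keras.layers.Dense(ffn_size, activation="gelu")(x)
            ffn = tf.keras.layers.Dense(hidden_size)(ffn)
            x = tf.keras.layers.LayerNormalization()(x + ffn)
        return tf.keras.Model(token_ids, x)

    vocab_size, seq_len = 30522, 128

    # Generator: a small encoder with a masked-LM head that predicts token ids.
    generator = tf.keras.Sequential([
        transformer_encoder(12, 64, 1, 256, vocab_size, seq_len),
        tf.keras.layers.Dense(vocab_size),
    ])

    # Discriminator (the model of interest): a larger encoder with a per-token real/fake head.
    discriminator = tf.keras.Sequential([
        transformer_encoder(12, 256, 4, 1024, vocab_size, seq_len),
        tf.keras.layers.Dense(1),
    ])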

Default configuration

ELECTRA uses a new pre-training task called replaced token detection (RTD), which trains a bidirectional model (like an MLM) while learning from all input positions (like an LM). Inspired by generative adversarial networks (GANs), instead of corrupting the input by replacing tokens with "[MASK]" as in BERT, the generator is trained to corrupt the input by replacing some input tokens with incorrect, but somewhat plausible, fakes. The discriminator, in turn, is trained to distinguish between "real" and "fake" input tokens.
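
A minimal sketch of that training signal is shown below, assuming a generator that returns per-position vocabulary logits and a discriminator that returns one logit per token (for example, the toy models above). The 50x discriminator weighting follows the ELECTRA paper; the greedy argmax is a simplification of the paper's sampling step, and none of the names come from this repository's code.

    import tensorflow as tf

    def electra_pretraining_loss(generator, discriminator, masked_ids, original_ids, mask,
                                 disc_weight=50.0):
        """masked_ids: inputs with selected tokens replaced by [MASK]; mask: 1.0 at those positions."""
        gen_logits = generator(masked_ids)                        # (batch, seq, vocab)

        # Generator objective: masked-LM loss on the masked positions only.
        mlm = tf.keras.losses.sparse_categorical_crossentropy(
            original_ids, gen_logits, from_logits=True)           # (batch, seq)
        mlm_loss = tf.reduce_sum(mlm * mask) / (tf.reduce_sum(mask) + 1e-6)

        # Corrupt the input with the generator's predictions (the paper samples; argmax here).
        plausible = tf.cast(tf.argmax(gen_logits, axis=-1), original_ids.dtype)
        corrupted = tf.where(mask > 0, plausible, original_ids)

        # Discriminator objective: per-token "was this token replaced?" classification.
        disc_logits = tf.squeeze(discriminator(corrupted), axis=-1)   # (batch, seq)
        labels = tf.cast(tf.not_equal(corrupted, original_ids), tf.float32)
        rtd_loss = tf.reduce_mean(
            tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=disc_logits))

        # The discriminator loss is up-weighted relative to the generator's MLM loss.
        return mlm_loss + disc_weight * rtd_loss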

The Google ELECTRA repository reports the results for three configurations of ELECTRA, each corresponding to a unique model size. This implementation provides the same configurations by default, which are described in the table below.

Model           Hidden layers    Hidden unit size    Parameters
ELECTRA_SMALL   12 encoder       256                 14M
ELECTRA_BASE    12 encoder       768                 110M
ELECTRA_LARGE   24 encoder       1024                335M
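
Transcribed into code, these defaults correspond to roughly the following (a hand-written summary of the table above, not a configuration file shipped with this resource):

    ELECTRA_CONFIGS = {
        "electra_small": {"hidden_layers": 12, "hidden_size": 256,  "approx_parameters": "14M"},
        "electra_base":  {"hidden_layers": 12, "hidden_size": 768,  "approx_parameters": "110M"},
        "electra_large": {"hidden_layers": 24, "hidden_size": 1024, "approx_parameters": "335M"},
    }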

The following features were implemented in this model:

  • General:

    • Mixed precision support with TensorFlow Automatic Mixed Precision (TF-AMP)
    • Multi-GPU support using Horovod
    • XLA support
    • Multi-Node support
  • Training:

    • Pre-training support
    • Fine-tuning example
  • Inference:

    • Joint predictions with beam search.

Feature support matrix

The following features are supported by this model.

Feature                            ELECTRA
LAMB                               Yes
Automatic mixed precision (AMP)    Yes
XLA                                Yes
Horovod Multi-GPU                  Yes
Multi-node                         Yes

Features

Automatic Mixed Precision (AMP)

This implementation of ELECTRA uses AMP to implement mixed precision training. It allows us to use FP16 training with FP32 master weights by modifying just a few lines of code.

Horovod

Horovod is a distributed training framework for TensorFlow, Keras, PyTorch, and MXNet. The goal of Horovod is to make distributed deep learning fast and easy to use. For more information about how to get started with Horovod, see the Horovod: Official repository.

Multi-GPU training with Horovod

Our model uses Horovod to implement efficient multi-GPU training with NCCL. For details, see example sources in this repository or see the TensorFlow tutorial.
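
For orientation, the snippet below shows the standard Horovod/TensorFlow 2 training pattern referred to above; the model, learning rate, and function names are placeholders, not this repository's actual code.

    import tensorflow as tf
    import horovod.tensorflow as hvd

    hvd.init()

    # Pin each process to one GPU (Horovod runs one process per GPU).
    gpus = tf.config.list_physical_devices("GPU")
    if gpus:
        tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

    # Placeholder model and optimizer; the real scripts build ELECTRA here.
    model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
    optimizer = tf.keras.optimizers.Adam(1e-4 * hvd.size())  # scale LR with the number of workers

    @tf.function
    def train_step(features, labels, first_batch):
        with tf.GradientTape() as tape:
            logits = model(features, training=True)
            loss = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(
                labels, logits, from_logits=True))
        # Average gradients across all workers; NCCL is used for the GPU all-reduce.
        tape = hvd.DistributedGradientTape(tape)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        # After the first step, broadcast variables from rank 0 so every worker starts in sync.
        if first_batch:
            hvd.broadcast_variables(model.variables, root_rank=0)
            hvd.broadcast_variables(optimizer.variables(), root_rank=0)
        return loss

Such a script is typically launched with horovodrun -np <num_gpus> python <script>.py, or with mpirun.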

XLA support (experimental)

XLA is a domain-specific compiler for linear algebra that can accelerate TensorFlow models with potentially no source code changes. The result is improved speed and memory usage: most internal benchmarks run ~1.1-1.5x faster after XLA is enabled.
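
For reference, XLA can typically be enabled in TensorFlow 2 in one of two ways; the snippet below illustrates the mechanism and is not necessarily how these scripts wire it up (jit_compile requires a recent TF 2.x release; older versions used experimental_compile).

    import tensorflow as tf

    # Option 1: enable XLA auto-clustering globally.
    tf.config.optimizer.set_jit(True)

    # Option 2: compile a single function with XLA explicitly.
    @tf.function(jit_compile=True)
    def fused_step(x, w):
        return tf.nn.relu(tf.matmul(x, w))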

Multi-node Training

Multi-node training is supported on a Pyxis/Enroot Slurm cluster.

Mixed precision training

Mixed precision is the combined use of different numerical precisions in a computational method. Mixed precision training offers significant computational speedup by performing operations in half-precision format, while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training requires two steps:

  1. Porting the model to use the FP16 data type where appropriate.
  2. Adding loss scaling to preserve small gradient values.

This can now be achieved using Automatic Mixed Precision (AMP) for TensorFlow to enable the full mixed precision methodology in your existing TensorFlow model code. AMP enables mixed precision training on Volta, Turing, and NVIDIA Ampere GPU architectures automatically. The TensorFlow framework code makes all necessary model changes internally.

In TF-AMP, the computational graph is optimized to use as few casts as necessary and maximize the use of FP16, and the loss scaling is automatically applied inside of supported optimizers. AMP can be configured to work with the existing tf.contrib loss scaling manager by disabling the AMP scaling with a single environment variable to perform only the automatic mixed-precision optimization. It accomplishes this by automatically rewriting all computation graphs with the necessary operations to enable mixed precision training and automatic loss scaling.
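
As an illustration of that graph-rewrite mechanism, NVIDIA's TensorFlow containers honor environment variables along these lines; the scripts in this resource instead use the --amp flag and the Keras mixed-precision policy shown in the next section.

    import os

    # Enable the automatic mixed-precision graph rewrite.
    os.environ["TF_ENABLE_AUTO_MIXED_PRECISION"] = "1"
    # Keep an existing loss-scale manager by turning off AMP's own loss scaling.
    os.environ["TF_ENABLE_AUTO_MIXED_PRECISION_LOSS_SCALING"] = "0"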

Enabling mixed precision

This implementation exploits the TensorFlow Automatic Mixed Precision feature. To enable AMP, you simply need to supply the --amp flag to the run_pretraining.py or run_tf_squad.py script. For reference, enabling AMP required us to apply the following changes to the code:

  1. Set the Keras mixed precision policy:

    if config.amp:
        policy = tf.keras.mixed_precision.experimental.Policy("mixed_float16", loss_scale="dynamic")
        tf.keras.mixed_precision.experimental.set_policy(policy)
    
  2. Use the loss scaling wrapper on the optimizer:

    if config.amp:
        optimizer = tf.keras.mixed_precision.experimental.LossScaleOptimizer(optimizer, "dynamic")
    
  3. Use scaled loss to calculate the gradients:

    # Scale the loss so that small gradient values survive the FP16 representation
    if config.amp:
        total_loss = optimizer.get_scaled_loss(total_loss)
    gradients = tape.gradient(total_loss, model.trainable_variables)
    # Unscale the gradients before applying them when AMP is enabled
    if config.amp:
        gradients = optimizer.get_unscaled_gradients(gradients)
    

Enabling TF32

TensorFloat-32 (TF32) is the math mode in NVIDIA A100 GPUs for handling matrix math, also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.

TF32 Tensor Cores can speed up networks that use FP32, typically with no loss of accuracy. TF32 is more robust than FP16 for models that require a high dynamic range for weights or activations.

For more information, refer to the TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x blog post.

TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
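
From TensorFlow 2.4 onward, TF32 execution can be inspected or turned off programmatically; the snippet below is illustrative and is not something these scripts require you to set.

    import tensorflow as tf

    # TF32 is on by default on Ampere GPUs.
    print(tf.config.experimental.tensor_float_32_execution_enabled())

    # Fall back to full FP32 matmul/convolution precision if bitwise parity with
    # pre-Ampere hardware is needed.
    tf.config.experimental.enable_tensor_float_32_execution(False)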

Glossary

Fine-tuning
Training an already pre-trained model further on a task-specific dataset for subject-specific refinements, adding task-specific layers on top if required.

Language Model
Assigns a probability distribution over a sequence of words. Given a sequence of words, it assigns a probability to the whole sequence.
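
For example, an autoregressive language model factorizes that probability as

    P(w_1, \dots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \dots, w_{i-1})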

Pre-training
Training a model on vast amounts of data for the same (or a different) task to build a general understanding of language.

Transformer
The paper Attention Is All You Need introduces a novel architecture called Transformer that uses an attention mechanism and transforms one sequence into another.

Phase 1
Pre-training on samples of sequence length 128 with at most 15% masked predictions per sequence.

Phase 2
Pre-training on samples of sequence length 512 with at most 15% masked predictions per sequence.