
ResNet with bottleneck 3x3 Convolutions substituted by 3x3 Grouped Convolutions.

NVIDIA Deep Learning Examples

20.12.0

November 4, 2022

45.85 KB

This resource uses open-source code maintained on GitHub (see the quick-start-guide section) and is available for download from NGC.

The ResNeXt101-32x4d is a model introduced in the Aggregated Residual Transformations for Deep Neural Networks paper.

It is based on a regular ResNet model, in which the 3x3 convolutions inside the bottleneck blocks are replaced with 3x3 grouped convolutions.

The following performance optimizations were implemented in this model: multi-GPU training with Horovod, accelerated data loading with NVIDIA DALI, and automatic mixed precision (AMP); each is described in the feature support section below.

This model is trained with mixed precision using Tensor Cores on Volta, Turing, and the NVIDIA Ampere GPU architectures. Therefore, researchers can get results 3x faster than training without Tensor Cores, while experiencing the benefits of mixed precision training. This model is tested against each NGC monthly container release to ensure consistent accuracy and performance over time.

*Image source: Aggregated Residual Transformations for Deep Neural Networks*

Image shows difference between ResNet bottleneck block and ResNeXt bottleneck block. ResNeXt bottleneck block splits single convolution into multiple smaller, parallel convolutions.

The ResNeXt101-32x4d model has a cardinality of 32 and a bottleneck width of 4. This means that, instead of a single convolution with 64 filters, 32 parallel convolutions with only 4 filters each are used.
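As an illustration, a block with this structure can be expressed as a single grouped convolution. The following is a minimal sketch only (not the repository's implementation), assuming the `groups` argument of `tf.keras.layers.Conv2D` available in TensorFlow 2.3 and later:

```python
import tensorflow as tf

def resnext_bottleneck(x, cardinality=32, bottleneck_width=4, stride=1):
    """Sketch of a ResNeXt bottleneck block in the 32x4d configuration."""
    group_channels = cardinality * bottleneck_width  # 32 groups of 4 channels each
    out_channels = x.shape[-1]                       # identity shortcut assumed (stride 1)

    shortcut = x
    # 1x1 convolution reducing to the grouped width
    y = tf.keras.layers.Conv2D(group_channels, 1, use_bias=False)(x)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    # 3x3 grouped convolution: equivalent to 32 parallel 3x3 convolutions
    # with 4 filters each
    y = tf.keras.layers.Conv2D(group_channels, 3, strides=stride, padding="same",
                               groups=cardinality, use_bias=False)(y)
    y = tf.keras.layers.BatchNormalization()(y)
    y = tf.keras.layers.ReLU()(y)
    # 1x1 convolution expanding back to the block's output width
    y = tf.keras.layers.Conv2D(out_channels, 1, use_bias=False)(y)
    y = tf.keras.layers.BatchNormalization()(y)
    return tf.keras.layers.ReLU()(y + shortcut)
```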

The following sections highlight the default configuration for the ResNeXt101-32x4d model.

This model uses the SGD optimizer with the following hyperparameters:

- Momentum: 0.875.
- Learning rate (LR) = 0.256 for a batch size of 256; for other batch sizes we linearly scale the learning rate.
- Learning rate schedule: we use a cosine LR schedule.
- For bigger batch sizes (512 and up) we use linear warmup of the learning rate during the first 5 epochs, following Training ImageNet in 1 hour (see the schedule sketch after this list).
- Weight decay: 6.103515625e-05 (1/16384).
- We do not apply weight decay on batch norm trainable parameters (gamma/bias).
- Label smoothing: 0.1.
- We train for:
  - 90 epochs -> 90 epochs is a standard for ResNet family networks.
  - 250 epochs -> best possible accuracy.
- For 250 epoch training we also use MixUp regularization.
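The schedule above can be summarized in a small helper. This is only an illustrative sketch of the rules described in the list; the function name and exact warmup/decay boundaries are hypothetical, not the repository's code:

```python
import math

def learning_rate(epoch, batch_size, total_epochs=90, base_lr=0.256,
                  base_batch_size=256, warmup_epochs=5):
    # Linear scaling rule: the LR grows proportionally with the batch size.
    lr = base_lr * batch_size / base_batch_size
    # Linear warmup during the first 5 epochs for batch sizes of 512 and up.
    if batch_size >= 512 and epoch < warmup_epochs:
        return lr * (epoch + 1) / warmup_epochs
    # Cosine schedule over the training run.
    return 0.5 * lr * (1.0 + math.cos(math.pi * epoch / total_epochs))
```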

This model uses the following data augmentation (a minimal preprocessing sketch follows these lists):

- For training:
  - Normalization.
  - Random resized crop to 224x224.
    - Scale from 8% to 100%.
    - Aspect ratio from 3/4 to 4/3.
  - Random horizontal flip.

- For inference:
  - Normalization.
  - Scale to 256x256.
  - Center crop to 224x224.
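As an illustration of the inference path, a minimal preprocessing sketch with `tf.image` might look as follows. The normalization constants are the commonly used ImageNet channel statistics and are an assumption here, not values quoted from the repository:

```python
import tensorflow as tf

# Assumed ImageNet channel statistics; the repository defines its own constants.
_MEAN = tf.constant([0.485, 0.456, 0.406])
_STD = tf.constant([0.229, 0.224, 0.225])

def preprocess_for_inference(image):
    """Scale to 256x256, center crop to 224x224, then normalize."""
    image = tf.image.resize(image, (256, 256))                  # scale
    image = tf.image.resize_with_crop_or_pad(image, 224, 224)   # center crop
    image = image / 255.0                                       # to [0, 1]
    return (image - _MEAN) / _STD                               # normalization
```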

The following features are supported by this model.

| Feature | ResNeXt101-32x4d TensorFlow |
|---|---|
| Multi-GPU training with Horovod | Yes |
| NVIDIA DALI | Yes |
| Automatic mixed precision (AMP) | Yes |

Multi-GPU training with Horovod - Our model uses Horovod to implement efficient multi-GPU training with NCCL. For details, refer to the example sources in this repository or the TensorFlow tutorial.
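A minimal TensorFlow 2 style sketch of this setup is shown below; the repository's own training loop differs, so treat this only as an outline of the Horovod calls involved:

```python
import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()
# Pin each worker process to a single GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

model = tf.keras.applications.ResNet50(weights=None)  # stand-in for the real network
optimizer = tf.keras.optimizers.SGD(learning_rate=0.256, momentum=0.875)  # scale per the LR rules above

@tf.function
def train_step(images, labels, first_batch):
    with tf.GradientTape() as tape:
        probs = model(images, training=True)
        loss = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(labels, probs))
    # Average gradients across workers with NCCL allreduce.
    tape = hvd.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:
        # Make sure every worker starts from the same weights and optimizer state.
        hvd.broadcast_variables(model.variables, root_rank=0)
        hvd.broadcast_variables(optimizer.variables(), root_rank=0)
    return loss
```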

NVIDIA DALI - DALI is a library that accelerates the data preparation pipeline. To accelerate your input pipeline, you only need to define your data loader with the DALI library. For details, refer to the example sources in this repository or the DALI documentation.
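For orientation, a minimal DALI training pipeline for this kind of model could be defined as below; the paths, batch size, and normalization constants are illustrative, and the repository's actual pipeline remains the reference:

```python
from nvidia.dali import fn, types
from nvidia.dali.pipeline import pipeline_def

@pipeline_def
def train_pipeline(data_dir):
    jpegs, labels = fn.readers.file(file_root=data_dir, random_shuffle=True)
    images = fn.decoders.image(jpegs, device="mixed")          # decode JPEGs on the GPU
    images = fn.random_resized_crop(images, size=(224, 224))   # random resized crop
    images = fn.crop_mirror_normalize(                         # flip + normalize in one op
        images,
        dtype=types.FLOAT,
        output_layout="NHWC",
        mean=[0.485 * 255, 0.456 * 255, 0.406 * 255],
        std=[0.229 * 255, 0.224 * 255, 0.225 * 255],
        mirror=fn.random.coin_flip(),
    )
    return images, labels

# Hypothetical path; batch_size/num_threads/device_id are pipeline_def keyword arguments.
pipe = train_pipeline(data_dir="/data/imagenet/train",
                      batch_size=128, num_threads=4, device_id=0)
pipe.build()
```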

Automatic mixed precision (AMP) - The computation graph can be modified by TensorFlow at runtime to support mixed precision training. A detailed explanation of mixed precision can be found in the next section.

Mixed precision is the combined use of different numerical precisions in a computational method. Mixed precision training offers significant computational speedup by performing operations in half-precision format while storing minimal information in single-precision to retain as much information as possible in critical parts of the network. Since the introduction of Tensor Cores in Volta, and following with both the Turing and Ampere architectures, significant training speedups are experienced by switching to mixed precision -- up to 3x overall speedup on the most arithmetically intense model architectures. Using mixed precision training previously required two steps:

- Porting the model to use the FP16 data type where appropriate.
- Adding loss scaling to preserve small gradient values.

This can now be achieved using Automatic Mixed Precision (AMP) for TensorFlow to enable the full mixed precision methodology in your existing TensorFlow model code. AMP enables mixed precision training on Volta and Turing GPUs automatically. The TensorFlow framework code makes all necessary model changes internally.

In TF-AMP, the computational graph is optimized to use as few casts as necessary and maximize the use of FP16, and the loss scaling is automatically applied inside of supported optimizers. AMP can be configured to work with the existing tf.contrib loss scaling manager by disabling the AMP scaling with a single environment variable to perform only the automatic mixed-precision optimization. It accomplishes this by automatically rewriting all computation graphs with the necessary operations to enable mixed precision training and automatic loss scaling.

For information about:

- How to train using mixed precision, see the Mixed Precision Training paper and Training With Mixed Precision documentation.
- Techniques used for mixed precision training, see the Mixed-Precision Training of Deep Neural Networks blog.
- How to access and enable AMP for TensorFlow, see Using TF-AMP from the TensorFlow User Guide.

Mixed precision is enabled in TensorFlow by using the Automatic Mixed Precision (TF-AMP) extension which casts variables to half-precision upon retrieval, while storing variables in single-precision format. Furthermore, to preserve small gradient magnitudes in backpropagation, a loss scaling step must be included when applying gradients. In TensorFlow, loss scaling can be applied statically by using simple multiplication of loss by a constant value or automatically, by TF-AMP. Automatic mixed precision makes all the adjustments internally in TensorFlow, providing two benefits over manual operations. First, programmers need not modify network model code, reducing development and maintenance effort. Second, using AMP maintains forward and backward compatibility with all the APIs for defining and running TensorFlow models.
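As a sketch of the static alternative mentioned above (a constant multiplier, here 128, chosen purely for illustration), loss scaling amounts to scaling the loss before computing gradients and unscaling the gradients before the optimizer step:

```python
import tensorflow as tf

LOSS_SCALE = 128.0  # illustrative constant; a real run would tune or automate this

def train_step(model, optimizer, images, labels):
    with tf.GradientTape() as tape:
        probs = model(images, training=True)
        loss = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(labels, probs))
        # Scale the loss so that small FP16 gradient values stay representable.
        scaled_loss = loss * LOSS_SCALE
    scaled_grads = tape.gradient(scaled_loss, model.trainable_variables)
    # Unscale the gradients before applying the update.
    grads = [g / LOSS_SCALE for g in scaled_grads]
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```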

To enable mixed precision, you can simply set the following environment variables inside your training script:

Enable TF-AMP graph rewrite:

`os.environ["TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE"] = "1"`

Enable Automated Mixed Precision:

`os.environ['TF_ENABLE_AUTO_MIXED_PRECISION'] = '1'`
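For example, a minimal way to set both variables at the top of a training script (before the model and session are created) might be:

```python
import os

# Set both variables before TensorFlow builds the training graph.
os.environ["TF_ENABLE_AUTO_MIXED_PRECISION"] = "1"
os.environ["TF_ENABLE_AUTO_MIXED_PRECISION_GRAPH_REWRITE"] = "1"
```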

TensorFloat-32 (TF32) is the new math mode in NVIDIA A100 GPUs for handling the matrix math also called tensor operations. TF32 running on Tensor Cores in A100 GPUs can provide up to 10x speedups compared to single-precision floating-point math (FP32) on Volta GPUs.

TF32 Tensor Cores can speed up networks using FP32, typically with no loss of accuracy. It is more robust than FP16 for models which require high dynamic range for weights or activations.

For more information, refer to the TensorFloat-32 in the A100 GPU Accelerates AI Training, HPC up to 20x blog post.

TF32 is supported in the NVIDIA Ampere GPU architecture and is enabled by default.
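If needed, TF32 can be queried and turned off from TensorFlow (API available in TF 2.4 and later), for example to compare against full FP32 math:

```python
import tensorflow as tf

# TF32 is enabled by default on Ampere GPUs.
print(tf.config.experimental.tensor_float_32_execution_enabled())   # True by default
tf.config.experimental.enable_tensor_float_32_execution(False)      # fall back to FP32 math
```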