BERT, or Bidirectional Encoder Representations from Transformers, is a method of pre-training language representations that obtains state-of-the-art results on a wide array of Natural Language Processing (NLP) tasks. NVIDIA's BERT is an optimized version of Google's official implementation, leveraging mixed-precision arithmetic and Tensor Cores on A100, V100, and T4 GPUs for faster training times while maintaining target accuracy.
This resource contains a Dockerfile that extends the TensorFlow NGC container and encapsulates some dependencies. Aside from these, make sure you have the following components: NVIDIA Docker, the TensorFlow NGC container, and a supported GPU (A100, V100, or T4).
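For illustration, a Dockerfile of this shape might look like the minimal sketch below; the base-image tag, installed packages, and directory layout are assumptions for the example, not the contents of the actual Dockerfile shipped with this resource:

```dockerfile
# Sketch only -- the real Dockerfile ships with this resource.
# The base-image tag and package list here are assumptions.
FROM nvcr.io/nvidia/tensorflow:21.02-tf1-py3

# Extra Python dependencies the notebooks might rely on (illustrative)
RUN pip install --no-cache-dir jupyterlab tqdm

# Copy the notebooks and fine-tuning scripts into the image (assumed layout)
COPY . /workspace/bert
WORKDIR /workspace/bert
```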
In the File Browser section, you can preview and download the Jupyter notebook. You can also download the zip file, which contains the Dockerfile you will need to set up the container in which to run the notebook.
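As a hedged sketch of that workflow (the zip, image, and directory names below are placeholders rather than the resource's actual file names), building and starting the container could look like this:

```bash
# Unzip the download and build the image from the bundled Dockerfile
# (file and image names are assumptions)
unzip files.zip -d bert && cd bert
docker build -t bert_notebook .

# Start the container with GPU access, mapping Jupyter's default port
docker run --gpus all -it --rm -p 8888:8888 bert_notebook
```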
In the Setup section, you can find a link to the main repository, which contains the Dockerfile and the notebooks. This is an alternative to downloading the main repository as a zip file from the File Browser section.
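If you prefer the repository route, cloning it looks like the sketch below; the URL and path are assumptions based on NVIDIA's public DeepLearningExamples repository, which hosts this BERT implementation, rather than being taken from the Setup link itself:

```bash
# Clone the main repository instead of downloading the zip
# (URL and path are assumptions, not taken from the Setup link)
git clone https://github.com/NVIDIA/DeepLearningExamples.git
cd DeepLearningExamples/TensorFlow/LanguageModeling/BERT
```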
In the Quick Start Guide section, you can see how to prepare the dataset, download pretrained NVIDIA BERT models, and perform fine-tuning with mixed precision for the Question Answering task by running the Jupyter notebooks inside the container.
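As a final sketch, once inside the running container you would typically start Jupyter and open the fine-tuning notebook in your browser; the notebook filename shown in the comment is hypothetical:

```bash
# Launch Jupyter inside the container so the notebooks are reachable
# from the host browser via the mapped port
jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root --no-browser
# Then follow the printed URL and open the fine-tuning notebook,
# e.g. bert_squad_tf_finetuning.ipynb (placeholder name)
```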