
BERT-Large (fine-tuning) - SQuAD 1.1, seqLen=384


Description

Pretrained weights for the BERT-Large (fine-tuning) model (Large, SQuAD 1.1, seqLen=384).

Publisher

NVIDIA

Use Case

Language Modelling

Framework

TensorFlow

Latest Version

2

Modified

November 3, 2021

Size

3.75 GB

BERT-Large (fine-tuning) for TensorFlow

Pretrained weights for the BERT-Large (fine-tuning) model (Large, SQuAD 1.1, seqLen=384).

Using the Model

Training

Model scripts are available in the NGC model scripts registry.

Re-training

The model was trained following the approach described in the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding, using scripts from the NGC model scripts registry. For researchers aiming to improve upon or tailor the model, we recommend starting with the README, which covers the architecture, accuracy and performance results, and the corresponding scripts.

Inferencing

For a quick start, follow the sections on inference in the model-scripts quick start guide.
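For SQuAD-style question answering, the model emits a start logit and an end logit per token, and inference post-processing picks the highest-scoring valid answer span. A minimal sketch of that span selection, with an illustrative function name that is not part of the NGC scripts:

```python
# Hedged sketch of SQuAD answer extraction from per-token start/end logits.

def best_answer_span(start_logits, end_logits, max_answer_len=30):
    """Pick (start, end) maximizing start_logits[s] + end_logits[e], s <= e."""
    best = (0, 0)
    best_score = float("-inf")
    for s, s_logit in enumerate(start_logits):
        # Only consider ends at or after the start, within the length limit.
        for e in range(s, min(s + max_answer_len, len(end_logits))):
            score = s_logit + end_logits[e]
            if score > best_score:
                best_score = score
                best = (s, e)
    return best

# Toy logits: the best span starts at token 1 and ends at token 2.
print(best_answer_span([0.1, 2.0, 0.3, 0.0], [0.0, 0.5, 3.0, 0.1]))  # (1, 2)
```

The production scripts additionally handle unanswerable questions and map token indices back to character offsets in the original context.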

Datasets