Pretrained weights for the BERT-Large (fine-tuning) model (Large, SQuAD 2.0, seqLen=384).
Model scripts are available in the NGC model scripts registry.
The model was trained using the scripts from the NGC model scripts registry, following the approach described in the paper BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. For researchers aiming to improve upon or tailor the model, we recommend starting with the README, which captures details about the architecture, accuracy and performance results, and the corresponding scripts.
For a quick start, follow the inference sections in the model scripts quick start guide.
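
To illustrate what SQuAD-style inference with a checkpoint like this involves, below is a minimal sketch of extractive question answering using the Hugging Face transformers library rather than the NGC scripts themselves. The local model directory is a placeholder and assumes the weights have been converted to a transformers-compatible format; only the sequence length of 384 comes from this model card.

```python
# Minimal sketch: extractive QA inference with a BERT-Large SQuAD 2.0
# checkpoint. Assumes the NGC weights have been converted to a Hugging
# Face compatible format; MODEL_DIR is a placeholder, not part of the
# NGC package.
import torch
from transformers import BertForQuestionAnswering, BertTokenizerFast

MODEL_DIR = "./bert-large-squad2"  # placeholder path to converted weights

tokenizer = BertTokenizerFast.from_pretrained(MODEL_DIR)
model = BertForQuestionAnswering.from_pretrained(MODEL_DIR)
model.eval()

question = "What does BERT stand for?"
context = (
    "BERT stands for Bidirectional Encoder Representations from "
    "Transformers. It is pretrained on unlabeled text and fine-tuned "
    "on downstream tasks such as SQuAD."
)

# max_length=384 matches the sequence length this checkpoint was
# fine-tuned with (seqLen=384).
inputs = tokenizer(
    question, context, max_length=384, truncation=True, return_tensors="pt"
)

with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring start/end token positions and decode the span.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer = tokenizer.decode(inputs["input_ids"][0][start : end + 1])
print(answer)
```

Note that SQuAD 2.0 includes unanswerable questions; a full implementation would also compare the best span score against the no-answer score at the [CLS] position, which this sketch omits for brevity.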