
BioBERT TF checkpoint (Base, Uncased, pretraining, AMP, LAMB)


Description: BioBERT Base Uncased TensorFlow checkpoint pretrained using the LAMB optimizer
Publisher: NVIDIA Deep Learning Examples
Use Case: Language Modeling
Framework: TensorFlow
Latest Version: 19.08.1
Modified: October 29, 2021
Size: 1.65 GB

Model Overview

BioBERT: a BERT model additionally pre-trained on large biomedical corpora for biomedical text-mining tasks.

Model Architecture

In the original BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding paper, pre-training is done on Wikipedia and the BooksCorpus, with state-of-the-art results demonstrated on the SQuAD (Stanford Question Answering Dataset) benchmark.

Meanwhile, many works, including BioBERT, SciBERT, NCBI-BERT, ClinicalBERT (MIT), ClinicalBERT (NYU, Princeton), and others presented at the BioNLP'19 workshop, show that additional pre-training of BERT on a large biomedical text corpus such as PubMed results in better performance on biomedical text-mining tasks.

This repository provides scripts and recipes that adapt the NVIDIA BERT code-base to achieve state-of-the-art results on biomedical text-mining benchmark tasks such as named-entity recognition and relation extraction.
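
Because the checkpoint follows the standard BERT-Base layout (12 transformer layers, hidden size 768), it can be inspected directly with TensorFlow. The following is a minimal sketch, assuming a hypothetical path for the extracted NGC download:

    # Minimal sketch: list variable names and shapes in the checkpoint.
    # The checkpoint prefix is an assumption; point it at the files
    # extracted from the NGC download (model.ckpt.index, model.ckpt.data-*).
    import tensorflow as tf

    CKPT_PREFIX = "biobert_ckpt/model.ckpt"  # hypothetical path

    reader = tf.train.load_checkpoint(CKPT_PREFIX)
    shapes = reader.get_variable_to_shape_map()

    # A BERT-Base checkpoint exposes layers bert/encoder/layer_0 through
    # layer_11, each with 768-dim hidden states.
    for name in sorted(shapes):
        print(name, shapes[name])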

Training

This model was trained using the scripts available on NGC and in the GitHub repository.
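
The checkpoint can also serve as the starting point for fine-tuning or continued pre-training. The TF1-style sketch below (matching the 19.08-era containers the scripts target) assumes the model's variables are created under the standard bert/ scope, as in the BERT code-base; the checkpoint path is again hypothetical:

    # Minimal sketch: warm-start a BERT graph from the BioBERT checkpoint
    # before fine-tuning. Assumes variables live under the standard "bert/"
    # scope, as in the BERT code-base; the path is hypothetical.
    import tensorflow.compat.v1 as tf

    tf.disable_v2_behavior()

    CKPT_PREFIX = "biobert_ckpt/model.ckpt"  # hypothetical path

    # ... build the model graph here, e.g. modeling.BertModel(...) from the
    # repository, which creates its variables under the "bert/" scope ...

    # Map every graph variable under "bert/" to its checkpoint counterpart,
    # so initialization loads BioBERT weights instead of random values.
    tf.train.init_from_checkpoint(CKPT_PREFIX, {"bert/": "bert/"})

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())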

Dataset

The following datasets were used to train this model:

  • PubMed - a database containing more than 33 million citations and abstracts of biomedical literature (the uncased tokenization applied to this text is sketched after this list).
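
As a concrete illustration of the uncased pre-training input, the sketch below tokenizes a PubMed-style sentence with the WordPiece tokenizer (tokenization.py) from the BERT code-base used by the NVIDIA scripts; the vocabulary path is an assumption:

    # Minimal sketch: tokenize a PubMed-style sentence the way the uncased
    # pre-training pipeline does. tokenization.py comes from the BERT
    # code-base used by the NVIDIA scripts; the vocab path is hypothetical.
    import tokenization

    tokenizer = tokenization.FullTokenizer(
        vocab_file="biobert_ckpt/vocab.txt",  # hypothetical path
        do_lower_case=True,  # "Uncased": text is lowercased before WordPiece
    )

    text = "Aspirin irreversibly inhibits cyclooxygenase-1 (COX-1)."
    print(tokenizer.tokenize(text))  # subword pieces fed to the masked-LM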

Performance

Performance numbers for this model are available on NGC.

License

This model was trained using open-source software available in the Deep Learning Examples repository. For terms of use, please refer to the licenses of the scripts and of the datasets the model was derived from.