This is a checkpoint for the BioBERT Large Cased model, compatible with NeMo, converted from https://github.com/dmis-lab/biobert#download. This model has the same network architecture as the original BERT, but instead of Wikipedia and BookCorpus it is pretrained on PubMed, a large biomedical text corpus, and uses a larger vocabulary, which achieves better performance on biomedical downstream tasks such as question answering (QA), named entity recognition (NER), and relation extraction (RE). This model was trained for 1M steps. For more information please refer to the original paper at https://academic.oup.com/bioinformatics/article/36/4/1234/5566506.
The model achieves weighted strict accuracy (SAcc) / mean reciprocal rank (MRR) / lenient accuracy (LAcc) of 45.24/51.02/59.70 on the BioASQ-7b-factoid test set (after being finetuned on the SQuAD v1.1 dataset), and macro precision/recall/F1 of 81.51/77.74/79.53 on the RE dataset ChemProt.
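To make the ChemProt numbers concrete, the sketch below shows how macro-averaged precision, recall, and F1 are computed: each metric is calculated per relation class and then averaged with equal weight per class. This is a minimal illustration in plain Python; the label names in the usage example are hypothetical and are not the actual ChemProt classes or model predictions.

```python
def macro_prf1(y_true, y_pred):
    """Macro-averaged precision, recall, and F1 over all labels.

    Each label's precision/recall/F1 is computed independently,
    then averaged with equal weight per label (not per example).
    """
    labels = sorted(set(y_true) | set(y_pred))
    precisions, recalls, f1s = [], [], []
    for label in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == label)
        pred_pos = sum(1 for p in y_pred if p == label)    # tp + fp
        actual_pos = sum(1 for t in y_true if t == label)  # tp + fn
        prec = tp / pred_pos if pred_pos else 0.0
        rec = tp / actual_pos if actual_pos else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = len(labels)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n


# Hypothetical labels, purely for illustration.
p, r, f = macro_prf1(["CPR:3", "CPR:3", "CPR:4", "CPR:4"],
                     ["CPR:3", "CPR:4", "CPR:4", "CPR:4"])
```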
Please be sure to download the latest version in order to ensure compatibility with the latest NeMo release.
For more details regarding BERT and pretraining please refer to https://ngc.nvidia.com/catalog/models/nvidia:bertlargeuncasedfornemo. For more details about BioBERT and training setup please refer to https://academic.oup.com/bioinformatics/article/36/4/1234/5566506.
Source code and the developer guide are available at https://github.com/NVIDIA/NeMo, which also contains the code to pretrain and reproduce this model checkpoint. Documentation is available at https://docs.nvidia.com/deeplearning/nemo/neural-modules-release-notes/index.html.
This model checkpoint can be used either for finetuning BioBERT on your own custom dataset or for finetuning it on downstream tasks. Scripts for all of these tasks can be found at https://github.com/NVIDIA/NeMo.
In the following, we show examples of how to finetune BioBERT on different downstream tasks.