This is a checkpoint for BioMegatron 345m with a biomedical domain vocabulary (50k tokens), cased.
Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA, trained with multi-node, mixed-precision training. Unlike BERT, the positions of the layer normalization and the residual connection in the model architecture (similar to the GPT-2 architecture) are swapped, which allows the model to keep improving as it is scaled up. As a result, Megatron reaches higher scores than BERT on a range of Natural Language Processing (NLP) tasks. BioMegatron has the same network architecture as Megatron but is pretrained on a different dataset, PubMed, a large biomedical text corpus, and achieves better performance than the original Megatron on biomedical downstream tasks.
It contains 345 million parameters.
More details about the model can be found in the BioMegatron paper: https://arxiv.org/abs/2010.06060
The source code and developer guide are available at https://github.com/NVIDIA/NeMo and https://github.com/NVIDIA/Megatron-LM
This model checkpoint can be fine-tuned on downstream biomedical NLP tasks, such as named entity recognition (NER), question answering (QA), or relation extraction (RE).
The following tutorials show how to fine-tune BioMegatron on different downstream tasks; a minimal code sketch follows the links below.
https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Token_Classification-BioMegatron.ipynb
https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Relation_Extraction-BioMegatron.ipynb
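As a rough illustration, the sketch below shows what fine-tuning this checkpoint for token classification (NER) with NeMo can look like, following the structure of the token-classification tutorial linked above. The config file path, data directory, and pretrained model name string are placeholders/assumptions; refer to the notebooks for the exact, version-specific steps.

```python
# Minimal sketch: fine-tune BioMegatron for NER with NeMo.
# Paths and the pretrained model name below are placeholders, not guaranteed values.
from omegaconf import OmegaConf
import pytorch_lightning as pl

from nemo.collections import nlp as nemo_nlp
from nemo.utils.exp_manager import exp_manager

# Load NeMo's token-classification config (path is a placeholder).
config = OmegaConf.load("token_classification_config.yaml")

# Point the config at the NER dataset and the BioMegatron checkpoint
# (the model name string here is an assumption; check the tutorial for the exact name).
config.model.dataset.data_dir = "/path/to/ner_data"
config.model.language_model.pretrained_model_name = "biomegatron345m-biovocab-50k-cased"

# Build the trainer and experiment manager from the config.
trainer = pl.Trainer(**config.trainer)
exp_manager(trainer, config.get("exp_manager", None))

# Instantiate the token-classification model on top of BioMegatron and fine-tune it.
model = nemo_nlp.models.TokenClassificationModel(cfg=config.model, trainer=trainer)
trainer.fit(model)
```

The relation-extraction tutorial follows the same pattern with a different config and dataset; see the second notebook above for details.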