
Punctuation En Bert

Description: Punctuation and Capitalization model with BERT
Publisher: NVIDIA
Latest Version: 1.0.0rc1
Modified: April 4, 2023
Size: 387.1 MB

Model Overview

Automatic Speech Recognition (ASR) systems typically generate text without punctuation or capitalization. Besides being hard to read, such output is often fed into downstream models for named entity recognition, machine translation, or text-to-speech. If the input text has punctuation and correctly capitalized words, the performance of these downstream models can improve.

For each word in the input text, the model:

  1. predicts the punctuation mark, if any, that should follow the word (commas, periods, and question marks are supported), and
  2. predicts whether the word should be capitalized (a worked illustration follows this list).
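
As a concrete illustration, here is what the two per-word decisions look like for a short query. The tuples below are illustrative only; they are not the model's internal label set.

# Illustrative only: per-word decisions for the query "how are you"
# (word, punctuation to append, capitalize?)
predictions = [
    ("how", "",  True),   # -> "How"
    ("are", "",  False),  # -> "are"
    ("you", "?", False),  # -> "you?"
]
restored = " ".join(
    (w.capitalize() if cap else w) + p for w, p, cap in predictions
)
print(restored)  # How are you?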

Trained or fine-tuned NeMo models (with the file extension .nemo) can be converted to Riva models (with the file extension .riva) and then deployed. This page provides a pre-trained Riva Punctuation and Capitalization model for English built on BERT.
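
As a sketch of that workflow: a trained model is first saved as a .nemo archive from Python, and the conversion to .riva is then done with the separate nemo2riva command-line tool (exact flags may vary by version, so treat the command below as an assumption).

# Save a trained or fine-tuned NeMo model as a .nemo archive
model.save_to("punctuation_en_bert.nemo")

# The .nemo -> .riva conversion happens outside Python with the
# nemo2riva command-line tool, roughly:
#   nemo2riva --out punctuation_en_bert.riva punctuation_en_bert.nemo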

Model Architecture

The Punctuation and Capitalization model consists of the pre-trained Bidirectional Encoder Representations from Transformers (BERT) [1] followed by two token-classification heads: one head is responsible for the punctuation task, and the other handles the capitalization task. Both heads take the BERT-encoded representation of each token as input, so the model solves the two tasks at once with a single pass through BERT. Finally, all the parameters, including those of BERT, are fine-tuned jointly on this combined task.
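
A minimal PyTorch sketch of this two-head architecture follows. The class name, label counts, and use of the Hugging Face transformers library are assumptions for illustration; this is not the NeMo implementation.

import torch.nn as nn
from transformers import AutoModel

class PunctCapModel(nn.Module):
    def __init__(self, num_punct_labels=4, num_cap_labels=2):
        super().__init__()
        # Shared pre-trained BERT encoder
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        hidden = self.bert.config.hidden_size
        # Two independent token-classification heads on top of BERT
        self.punct_head = nn.Linear(hidden, num_punct_labels)  # none , . ?
        self.cap_head = nn.Linear(hidden, num_cap_labels)      # lower / Upper

    def forward(self, input_ids, attention_mask):
        # A single pass through BERT serves both tasks
        hidden_states = self.bert(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state  # (batch, seq_len, hidden)
        return self.punct_head(hidden_states), self.cap_head(hidden_states)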

Training

The model was trained starting from the NeMo BERT base uncased checkpoint.

Datasets

The model was trained on a subset of data from the following sources:

Performance

Each word in the input sequence can be split into one or more sub-tokens; as a result, there are two possible ways to evaluate the model:

  • mark the whole word with a single label, or
  • perform the evaluation at the sub-token level.

During training, the first approach was applied: the prediction for the first sub-token of each word was used to label the whole word. Each task is evaluated separately. Due to the high class imbalance, the suggested metric for this model is the F1 score with macro averaging.
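
A sketch of that word-level evaluation, assuming the gold and predicted labels have already been aligned to the first sub-token of each word (scikit-learn and the label strings here are used purely for illustration):

from sklearn.metrics import f1_score

# One punctuation label per word, taken from each word's first sub-token
gold_punct = ["O", "O", "?", "O", ",", "O", "O", "?"]
pred_punct = ["O", "O", "?", "O", "O", "O", "O", "?"]

# Macro averaging weights every class equally, which matters here
# because "O" (no punctuation) dominates the label distribution.
print(f1_score(gold_punct, pred_punct, average="macro"))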

This model was evaluated on an internal dataset, where it reached an F1 score of 77%.

How to use this model

The model is available for use in the NeMo toolkit [2] and can serve as a pre-trained checkpoint for inference or for fine-tuning on another dataset.

Automatically load the model from NGC

import nemo.collections.nlp as nemo_nlp

# Download the checkpoint from NGC (cached locally) and restore the model
model = nemo_nlp.models.PunctuationCapitalizationModel.from_pretrained(model_name="punctuation_en_bert")

Use the model to add punctuation and capitalization

# Queries should be lower-cased and unpunctuated; a list of restored strings is returned
model.add_punctuation_capitalization(['how are you', 'great how about you'])
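
For the queries above, the returned list should look roughly like the following (the exact output may vary across model versions):

['How are you?', 'Great, how about you?']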

Input

The model accepts lower-cased English text without punctuation marks.
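
If your source text may already contain casing or punctuation, one simple way to normalize it before inference is sketched below; this is an illustrative helper, not a NeMo utility.

import re

def normalize(text):
    # Lower-case and strip the punctuation marks the model predicts
    return re.sub(r"[.,?]", "", text).lower()

queries = [normalize(s) for s in ["How are you?", "Great, how about you?"]]
# -> ['how are you', 'great how about you']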

Output

Text with punctuation and capitalization restored.

Limitations

The length of the input text is currently constrained by the maximum sequence length of the BERT base uncased model, which is 512 tokens after tokenization. The punctuation model supports commas, periods, and question marks.
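
For longer inputs, one common workaround is to split the text into chunks under the limit, run them separately, and rejoin the results. The word-based sketch below assumes the model loaded earlier; the 300-word margin is an assumption, since actual token counts depend on the model's tokenizer.

def chunk_words(text, max_words=300):
    # 300 words is a conservative, assumed margin under 512 sub-tokens,
    # because a single word can expand into several sub-tokens.
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]

long_text = "..."  # a long, unpunctuated transcript
chunks = chunk_words(long_text)
restored = " ".join(model.add_punctuation_capitalization(chunks))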

References

[1] Devlin, Jacob, et al. "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding." arXiv preprint arXiv:1810.04805 (2018).

[2] NVIDIA NeMo Toolkit, https://github.com/NVIDIA/NeMo

License

License to use this model is covered by the NGC TERMS OF USE unless another License/Terms Of Use/EULA is clearly specified. By downloading the public and release version of the model, you accept the terms and conditions of the NGC TERMS OF USE.