This model translates text from the source language, German (De), into the target language, English (En).
The model is based on the Transformer "big" architecture originally presented in the "Attention Is All You Need" paper [1]. In this particular instance, the model has 12 layers in the encoder and 2 layers in the decoder. It uses the YouTokenToMe tokenizer.
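As a rough sketch, this asymmetric encoder/decoder depth could be expressed in a NeMo-style YAML fragment like the one below. The field names here are illustrative assumptions, not the exact NeMo config schema; a common motivation for a deep encoder with a shallow decoder is faster autoregressive inference at similar quality.

```yaml
# Illustrative sketch only -- field names are assumptions, not the exact NeMo config schema.
model:
  encoder:
    num_layers: 12   # deep encoder
  decoder:
    num_layers: 2    # shallow decoder, cheaper per generated token
```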
These models were trained on a collection of many publicly available datasets comprising millions of parallel sentences. The NeMo toolkit was used to train this model for roughly 300k steps.
While training this model, we used the following datasets:
We used the YouTokenToMe tokenizer with a BPE vocabulary shared between the encoder and decoder.
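YouTokenToMe implements byte-pair encoding (BPE). As an illustration of the underlying idea only (not YouTokenToMe's actual implementation), a minimal BPE merge-learning loop might look like this:

```python
from collections import Counter

def get_pairs(word):
    # All adjacent symbol pairs in a word (word is a tuple of symbols).
    return [(word[i], word[i + 1]) for i in range(len(word) - 1)]

def bpe_train(corpus_words, num_merges):
    # Start from character-level symbols; repeatedly merge the most frequent pair.
    vocab = Counter(tuple(w) for w in corpus_words)
    merges = []
    for _ in range(num_merges):
        pair_counts = Counter()
        for word, freq in vocab.items():
            for pair in get_pairs(word):
                pair_counts[pair] += freq
        if not pair_counts:
            break
        best = pair_counts.most_common(1)[0][0]
        merges.append(best)
        new_vocab = Counter()
        for word, freq in vocab.items():
            merged, i = [], 0
            while i < len(word):
                if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                    merged.append(word[i] + word[i + 1])
                    i += 2
                else:
                    merged.append(word[i])
                    i += 1
            new_vocab[tuple(merged)] += freq
        vocab = new_vocab
    return merges, vocab
```

Sharing one BPE vocabulary between encoder and decoder means both sides segment text with the same learned merge table.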
The accuracy of translation models is often measured using BLEU scores. On the WMT20 test set, this model achieves a BLEU score of 36.4, measured with the SacreBLEU package:

```
BLEU+case.mixed+lang.de-en+numrefs.1+smooth.exp+test.wmt20+tok.13a+version.1.5.0 = 36.4 71.7/46.7/32.6/23.3 (BP = 0.912 ratio = 0.915 hyp_len = 34980 ref_len = 38220)
```
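The SacreBLEU signature above can be sanity-checked by hand: BLEU is the geometric mean of the four n-gram precisions scaled by the brevity penalty. The sketch below recomputes the reported numbers from those statistics using the standard BLEU formula (not SacreBLEU itself):

```python
import math

# n-gram precisions (%) and length statistics from the SacreBLEU signature above
precisions = [71.7, 46.7, 32.6, 23.3]
hyp_len, ref_len = 34980, 38220

# Brevity penalty: exp(1 - ref/hyp) when the hypothesis is shorter than the reference
bp = math.exp(1 - ref_len / hyp_len) if hyp_len < ref_len else 1.0

# BLEU = BP * geometric mean of the n-gram precisions
bleu = bp * math.exp(sum(math.log(p / 100) for p in precisions) / len(precisions)) * 100

print(round(bp, 3), round(bleu, 1))  # 0.912 36.4
```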
The model is available for use in the NeMo toolkit , and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
```python
import nemo
import nemo.collections.nlp as nemo_nlp

nmt_model = nemo_nlp.models.machine_translation.MTEncDecModel.from_pretrained(model_name="nmt_de_en_transformer12x2")
```
```shell
python [NEMO_GIT_FOLDER]/examples/nlp/machine_translation/nmt_transformer_infer.py \
    --model=nmt_de_en_transformer12x2.nemo \
    --srctext=[TEXT_IN_SRC_LANGUAGE] \
    --tgtout=[WHERE_TO_SAVE_TRANSLATION] \
    --target_lang en \
    --source_lang de
```
The model's translate method accepts a list of de-tokenized strings in the source language and returns a list of de-tokenized strings in the target language.
No known limitations at this time.
[1] Vaswani, Ashish, et al. "Attention Is All You Need." arXiv preprint arXiv:1706.03762 (2017).