BART is a denoising autoencoder for pretraining sequence-to-sequence models.
BART uses a standard sequence-to-sequence Transformer architecture with GeLU activations. The base model has 6 layers in both the encoder and the decoder, while the large model has 12 layers in each. Overall, the architecture has roughly 10% more parameters than BERT.
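As a minimal sketch of these two configurations (assuming the Hugging Face `transformers` package rather than the training scripts referenced below; hidden sizes and head counts follow the BART paper):

```python
from transformers import BartConfig, BartModel

# BART-base: 6 encoder and 6 decoder layers, GeLU activations.
base_config = BartConfig(
    encoder_layers=6,
    decoder_layers=6,
    d_model=768,
    encoder_attention_heads=12,
    decoder_attention_heads=12,
    activation_function="gelu",
)

# BART-large: 12 encoder and 12 decoder layers.
large_config = BartConfig(
    encoder_layers=12,
    decoder_layers=12,
    d_model=1024,
    encoder_attention_heads=16,
    decoder_attention_heads=16,
    activation_function="gelu",
)

model = BartModel(large_config)
print(sum(p.numel() for p in model.parameters()))  # rough parameter count
```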
BART is trained by corrupting documents and then optimizing a reconstruction loss between the decoder's output and the original document. The pretraining task combines randomly shuffling the order of the original sentences with a novel in-filling scheme, in which spans of text are each replaced with a single mask token.
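The sketch below illustrates (but is not the exact pretraining code) the two corruptions just described: sentence permutation and text in-filling, where each sampled span is replaced by a single mask token. Drawing span lengths from a Poisson distribution follows the BART paper; the mask ratio and token-level handling here are simplified for illustration.

```python
import random
import numpy as np

MASK = "<mask>"

def permute_sentences(sentences):
    """Sentence permutation: randomly shuffle the order of the original sentences."""
    shuffled = list(sentences)
    random.shuffle(shuffled)
    return shuffled

def text_infill(tokens, mask_ratio=0.3, poisson_lam=3.0):
    """Text in-filling: replace sampled spans of tokens with a single mask token each.

    Span lengths are drawn from Poisson(poisson_lam); a length-0 span inserts a
    mask token without removing anything.
    """
    tokens = list(tokens)
    budget = int(len(tokens) * mask_ratio)  # approximate number of tokens to remove
    while budget > 0 and tokens:
        span_len = min(int(np.random.poisson(poisson_lam)), budget, len(tokens))
        start = random.randrange(len(tokens) - span_len + 1)
        tokens[start:start + span_len] = [MASK]  # whole span -> one mask token
        budget -= max(span_len, 1)  # ensure progress even for length-0 spans
    return tokens

# Example usage
doc = ["the cat sat on the mat .", "it was warm .", "the dog barked ."]
print(permute_sentences(doc))
print(text_infill("the quick brown fox jumps over the lazy dog".split()))
```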
This model was trained using scripts available on NGC and in the GitHub repository.
The following datasets were used to train this model:
Performance numbers for this model are available on NGC.