Speaker recognition is a broad research area that covers two major tasks: speaker identification (who is speaking?) and speaker verification (is the speaker who they claim to be?). In this work, we focus on far-field, text-independent speaker recognition, where the identity of the speaker is determined by how the speech is spoken rather than by what is being said. Such SR systems typically operate on unconstrained speech utterances, which are converted into fixed-length vectors called speaker embeddings. Speaker embeddings are also used in automatic speech recognition (ASR) and speech synthesis.
This model is trained end-to-end with an angular softmax loss for speaker verification and for extracting speaker embeddings.
SpeakerNet models consist of 1D depth-wise separable convolutional layers. The encoded information is then aggregated by a statistics pooling layer based on mean and variance, as described in the paper.
These models were trained on a composite dataset comprising several thousand hours of speech compiled from various publicly available sources. The NeMo toolkit was used to train this model for a few hundred epochs on multiple GPUs.
The following datasets were used for training:
This SpeakerNet-M model, based on the QuartzNet encoder architecture with 5M parameters, achieves 1.93% EER on the VoxCeleb clean test trial file.
For a single audio file, one can also extract embeddings inline using:

```python
import nemo.collections.asr as nemo_asr

speaker_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained(
    model_name="speakerverification_speakernet"
)
embs = speaker_model.get_embedding('audio_path')
```
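Once embeddings are extracted, speaker verification typically reduces to comparing two embeddings with cosine similarity and applying a decision threshold. A minimal sketch with NumPy; the embeddings and threshold below are illustrative stand-ins, not values produced by this model:

```python
import numpy as np

def cosine_score(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return float(np.dot(a, b))

# Illustrative 256-dim embeddings standing in for get_embedding() outputs.
rng = np.random.default_rng(0)
emb1 = rng.standard_normal(256)
emb2 = emb1 + 0.1 * rng.standard_normal(256)  # near-duplicate: "same speaker"
emb3 = rng.standard_normal(256)               # independent: "different speaker"

same = cosine_score(emb1, emb2)
diff = cosine_score(emb1, emb3)
THRESHOLD = 0.7  # illustrative; tune on a held-out trial set for a target EER
print(same > THRESHOLD, diff > THRESHOLD)
```

In practice, the threshold is chosen on a development trial set, e.g. at the equal error rate (EER) operating point reported above.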
This model accepts 16 kHz mono-channel audio (WAV files) as input.
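Audio at other sample rates or with multiple channels should be converted before inference. A simple sketch of downmixing and resampling with linear interpolation (for production use, prefer a proper polyphase resampler such as the ones in torchaudio or librosa; the function name here is hypothetical):

```python
import numpy as np

def to_16k_mono(samples: np.ndarray, orig_sr: int, target_sr: int = 16000) -> np.ndarray:
    """Downmix (num_frames, num_channels) audio to mono and resample to
    target_sr by linear interpolation. Sketch only, not production-quality."""
    if samples.ndim == 2:
        samples = samples.mean(axis=1)  # average channels to mono
    if orig_sr == target_sr:
        return samples
    duration = len(samples) / orig_sr
    n_out = int(round(duration * target_sr))
    t_out = np.linspace(0.0, duration, n_out, endpoint=False)
    t_in = np.arange(len(samples)) / orig_sr
    return np.interp(t_out, t_in, samples)
```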
This model produces a speaker embedding of size 256 for a given audio sample.
This model is trained on non-telephonic speech from the VoxCeleb datasets, and hence may not perform as well on telephonic speech. In that case, consider fine-tuning the model for that speech domain.
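Fine-tuning in NeMo is driven by JSON-lines manifest files that point at the in-domain audio. A small helper for writing one; the field names follow NeMo's speaker-recognition convention, but verify them against the NeMo version you use before training:

```python
import json

def write_manifest(entries, path):
    """Write a NeMo-style JSON-lines manifest: one JSON object per line with
    audio_filepath, duration (seconds), and label (speaker ID) fields.
    Field names assumed from NeMo's speaker-recognition convention."""
    with open(path, "w") as f:
        for audio_filepath, duration, label in entries:
            record = {
                "audio_filepath": audio_filepath,
                "duration": duration,
                "label": label,
            }
            f.write(json.dumps(record) + "\n")
```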
License to use this model is covered by the license of the NeMo toolkit. By downloading the public and release version of the model, you accept the terms and conditions of this license.