clara_train_covid19_3d_ct_classification
The model is trained using a 3D version of a densenet121 model



Latest Version

August 17, 2021 (276.43 MB)


This training and inference pipeline was developed by NVIDIA. It is based on a segmentation and classification model developed by NVIDIA researchers in conjunction with the NIH. The Software is for Research Use Only. Software’s recommendation should not be solely or primarily relied upon to diagnose or treat COVID-19 by a Healthcare Professional. This research use only software has not been cleared or approved by FDA or any regulatory agency.

Model Overview

The model described in this card classifies the lung region of 3D chest CT images as COVID or non-COVID. For a detailed description, please see Harmon et al. [1].

Model Architecture

The model is trained using a 3D version of the DenseNet-121 architecture [2] with Clara Train SDK v3.0.

Training Algorithm

This model was developed by NVIDIA researchers in conjunction with the NIH. Training was performed with the config_train_naturecomm.json configuration and required two 32 GB NVIDIA Tesla V100 GPUs; the training pipeline was developed with NVIDIA Clara Train.

Training graph input shape: 192 x 192 x 64

Input and Output formats

Input: a one-channel CT image with intensity in Hounsfield units (HU) and one lung segmentation image. The CT image is cropped and resized to fit the model input based on the provided lung segmentation; the lung segmentation image must have the same dimensions as the CT image.

For example, you can use Clara_Train_COVID19_CT_Lung_Seg to provide a segmentation mask (binary, 1 for lung, 0 for background) of each lung or provide your own lung segmentation which is used for cropping the region to be classified.
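As a rough illustration of this preprocessing, here is a minimal NumPy sketch (the function name and the nearest-neighbour resampling are placeholders, not the actual MMAR pre_transforms) that crops a CT volume to the lung bounding box given by the mask and resizes the crop to the 192 x 192 x 64 model input:

```python
import numpy as np

def crop_and_resize(ct, lung_mask, out_shape=(192, 192, 64)):
    """Crop the CT to the bounding box of the lung mask, then resize to
    the model input shape with nearest-neighbour sampling.
    Illustrative only; the real pipeline uses the MMAR pre_transforms."""
    # Bounding box of the non-zero (lung) voxels.
    coords = np.argwhere(lung_mask > 0)
    lo, hi = coords.min(axis=0), coords.max(axis=0) + 1
    cropped = ct[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    # Nearest-neighbour index maps, one per axis.
    idx = [
        np.clip((np.arange(n) * cropped.shape[d] / n).astype(int),
                0, cropped.shape[d] - 1)
        for d, n in enumerate(out_shape)
    ]
    return cropped[np.ix_(idx[0], idx[1], idx[2])]
```

The crop step is why the mask and CT must share the same dimensions: the bounding box is computed in mask voxel coordinates and applied directly to the CT.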

Output: two class probabilities (0: non-COVID; 1: COVID)
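Downstream code typically maps the two probabilities to a predicted class by argmax. A minimal sketch (the function and variable names are hypothetical, not part of the SDK):

```python
import numpy as np

# Class indices as defined by the model output above.
CLASS_NAMES = {0: "non-COVID", 1: "COVID"}

def classify(probs):
    """Map the model's two-class probability vector to (label, confidence)."""
    probs = np.asarray(probs, dtype=float)
    pred = int(probs.argmax())
    return CLASS_NAMES[pred], float(probs[pred])
```

For example, `classify([0.12, 0.88])` returns `("COVID", 0.88)`.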

The dataset.json needs to provide the paths to both the image and the lung mask image, plus the target label, for each training case. For example (the label values here are illustrative; 0 = non-COVID, 1 = COVID per the output classes):

"training": [
    {
        "image": "/workspace/Data/COVID/COVID_Test_Data/LIDC-IDRI-0095_1.nii.gz",
        "label_image": "/workspace/Data/COVID/COVID_Test_Data/Mask/0095.nii.gz",
        "label": [0]
    },
    {
        "image": "/workspace/Data/COVID/COVID_Test_Data/LIDC-IDRI-0050_1.nii.gz",
        "label_image": "/workspace/Data/COVID/COVID_Test_Data/Mask/0050.nii.gz",
        "label": [1]
    }
]

Training / Validation Data

This model was trained and evaluated on a global dataset of experimental cohorts, comprising thousands of cases collected from across the globe.

Example training data used in this MMAR is a subset of the LIDC-IDRI dataset. The DICOM images must be converted to NIfTI format before training:

nvmidl-dataconvert -d ${SOURCE_IMAGE_ROOT} -s .dcm -e .nii.gz -o ${DESTINATION_IMAGE_ROOT}

Note: To match the default settings, we suggest that ${DESTINATION_IMAGE_ROOT} match DATA_ROOT as defined in environment.json in this MMAR's config folder.

Performance KPI

This classification model achieved an accuracy of greater than 90% on a test set consisting of more than one thousand CT images collected across the globe. Specifically, the model achieved an AUC value of 0.953 for predicting COVID positive cases with the provided MMAR configuration.

Note: The AUC of 0.949 reported in [1] was achieved with the same pre-trained checkpoint when the order of "ScaleIntensityRange" and "CropForegroundObject" in pre_transforms is switched. Configuration examples using the settings as in the paper are available as config_*_naturecomm.json.
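An AUC like the one reported above can be reproduced from per-case COVID probabilities and ground-truth labels. A minimal pure-Python sketch using the Mann-Whitney formulation (the probability that a randomly chosen positive case scores higher than a negative one; function name and data are hypothetical):

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic.

    labels: 1 for COVID-positive, 0 for negative.
    scores: model probability for the COVID class.
    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, `auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.2])` returns `1.0`, since every positive case outscores every negative one.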


End User License Agreement is included with the product. Licenses are also available along with the model application zip file. By pulling and using the Clara Train SDK container and downloading models, you accept the terms and conditions of these licenses.


[1] Harmon, Stephanie A., et al. "Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets." Nature communications 11.1 (2020): 1-7.

[2] Huang, Gao, et al. "Densely connected convolutional networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.