NGC | Catalog

clara_train_covid19_ct_lesion_seg

Description: A pre-trained model for volumetric (3D) segmentation of the COVID-19 affected region from CT images

Publisher: NVIDIA

Use Case: Image Segmentation

Framework: Medical

Latest Version: 1

Modified: August 20, 2021

Size: 93.85 MB

Disclaimer

This training and inference pipeline was developed by NVIDIA. It is based on a segmentation and classification model developed by NVIDIA researchers in conjunction with the NIH. The Software is for Research Use Only. The Software's recommendations should not be solely or primarily relied upon by a healthcare professional to diagnose or treat COVID-19. This research-use-only software has not been cleared or approved by the FDA or any other regulatory agency.

Model Overview

The model described in this card segments the COVID-19 affected region from 3D chest CT images.

Model Architecture

The model is a deep neural network based on the 3D SegResNet architecture [2].

Training Algorithm

This model was developed by NVIDIA researchers in conjunction with the NIH. Segmentation of COVID-19 affected regions is formulated as voxel-wise binary classification: each voxel is predicted as either foreground (COVID-19 affected region) or background. The model is optimized with a gradient-descent method that minimizes the soft Dice loss [3] plus a voxel-wise cross-entropy loss between the predicted mask and the ground-truth segmentation. The model was trained on eight 32 GB NVIDIA Tesla V100 GPUs, and its pipeline was developed with NVIDIA Clara Train.
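As a sketch of the training objective described above (a simplified NumPy illustration, not the Clara Train implementation; the function names and epsilon values are assumptions), the combined soft Dice + cross-entropy loss for binary segmentation could look like:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss between predicted probabilities and a binary target mask."""
    intersection = np.sum(pred * target)
    denom = np.sum(pred) + np.sum(target)
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)

def binary_cross_entropy(pred, target, eps=1e-7):
    """Voxel-wise binary cross-entropy, averaged over all voxels."""
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    return float(np.mean(-(target * np.log(pred) + (1 - target) * np.log(1 - pred))))

def combined_loss(pred, target):
    """Sum of soft Dice and cross-entropy terms, as in the description above."""
    return soft_dice_loss(pred, target) + binary_cross_entropy(pred, target)
```

A perfect prediction drives both terms to (near) zero, while the Dice term keeps the gradient informative when foreground voxels are rare, which is the usual motivation for combining the two.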

Intended Use

The primary intended use case for this model is segmentation of COVID-19 affected regions in CT images that exhibit certain disease-related patterns. The model is for research purposes only.

Input

A 3D CT volume with intensity in Hounsfield units (HU). Within the Clara Train pipeline, for both training and inference, the images are preprocessed by "pre_transforms" [1] so that: 1) images are resampled to a resolution of 0.8 mm x 0.8 mm x 5.0 mm, and 2) intensity is clipped to [-1000, 500] HU. For output, the mask predictions are resampled back to the original resolution using "post_transforms" [1]. The actual input to the model is a cropped region of interest (ROI) with a fixed size of 384 x 384 x 32. Patches sampled from the CT volumes are fed into the network during training, and a sliding-window scheme is used to segment the entire CT volume at inference.
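The intensity clipping and sliding-window tiling described above can be sketched as follows. This is a simplified NumPy illustration, not the actual "pre_transforms" code: the resampling step is omitted, and the `overlap` parameter and normalization to [0, 1] are assumptions.

```python
import numpy as np

def preprocess_ct(volume_hu):
    """Clip CT intensities to [-1000, 500] HU and scale to [0, 1].
    Resampling to 0.8 x 0.8 x 5.0 mm spacing would happen before this step."""
    clipped = np.clip(volume_hu, -1000.0, 500.0)
    return (clipped + 1000.0) / 1500.0

def sliding_window_starts(vol_shape, roi=(384, 384, 32), overlap=0.25):
    """Enumerate start corners of each ROI for sliding-window inference,
    making sure the final window touches the volume boundary on each axis."""
    per_axis = []
    for dim, size in zip(vol_shape, roi):
        step = max(1, int(size * (1 - overlap)))
        last = max(dim - size, 0)
        axis = list(range(0, last + 1, step))
        if axis[-1] != last:
            axis.append(last)
        per_axis.append(axis)
    return [(x, y, z) for x in per_axis[0] for y in per_axis[1] for z in per_axis[2]]
```

At inference, the model would be run on the window at each start corner and the per-window predictions stitched (e.g., averaged in overlapping regions) back into a full-volume mask.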

Output

A binary mask of the COVID-19 affected region in the input image. The model also predicts the ratio between the volume of the COVID-19 affected area and the volume of the lung mask to indicate disease severity (if the lung mask is prepared in advance).
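The severity ratio described above amounts to a simple computation over two binary masks. The sketch below is an illustration only, not the MMAR's metric code, and the function name is hypothetical:

```python
import numpy as np

def severity_ratio(lesion_mask, lung_mask):
    """Ratio of COVID-19 affected volume to total lung volume.
    Both inputs are binary (boolean) voxel masks on the same grid."""
    lung_voxels = np.count_nonzero(lung_mask)
    if lung_voxels == 0:
        raise ValueError("lung mask is empty")
    # Count only affected voxels that fall inside the lung mask.
    affected = np.count_nonzero(np.logical_and(lesion_mask, lung_mask))
    return affected / lung_voxels
```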

How to use this model

This model must be used with NVIDIA hardware and software. For hardware, the model can run on any NVIDIA GPU with more than 12 GB of memory. For software, this model can be used with NVIDIA Clara Train. Users can test the pre-trained model on chest CT data from positive COVID-19 cases in this public dataset.

Computing ratios of COVID-19 affected area to total lung volume

In addition to the typical validate script included with a Clara Train MMAR to generate model predictions and mean dice results, this MMAR contains scripts configured with custom components to calculate ratios and volumes.

The command validate_ratio.sh can be used to calculate the COVID-19 affected area as a ratio of the total lung volume with the custom metric code included. The command validate_ratio_volume.sh additionally calculates and outputs the COVID-19 affected and total lung volumes with the SegmentationVolume custom metric.

To use these commands, lung masks for the images are required and must be configured in the data list, as done in the provided dataset_ratio_example.json and explained in the Readme in the docs directory of this MMAR. The masks can be pre-computed as the output of the clara_train_covid19_ct_lung_seg model on the same images, and the paths should be modified to point to each image's corresponding mask.
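An entry in such a data list might look like the following. The field names (e.g., "lung_mask") and file paths here are illustrative assumptions only; consult the provided dataset_ratio_example.json and the Readme for the authoritative schema:

```json
{
  "validation": [
    {
      "image": "volumes/case_001_ct.nii.gz",
      "label": "labels/case_001_lesion.nii.gz",
      "lung_mask": "lung_masks/case_001_lung.nii.gz"
    }
  ]
}
```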

Training / Validation Data

This model was trained on a large cohort collected from across the globe. CT volumes of 913 independent subjects, with expert annotations of COVID-19 affected regions, were split into training, validation, and test sets. The images and labels are from an in-house database that is not publicly available.

Performance KPI

The Dice score is used to evaluate model performance. On the internal validation set, the trained model achieved a mean Dice score of 0.713 (range 0.565 - 0.899, standard deviation 0.114).
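For reference, the (hard) Dice score between two binary masks, as distinct from the soft Dice loss used for training, can be computed as follows. This is a generic illustration, not the MMAR's evaluation code:

```python
import numpy as np

def dice_score(pred, target, eps=1e-6):
    """Dice coefficient between two binary masks: 2|A & B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.count_nonzero(pred & target)
    total = np.count_nonzero(pred) + np.count_nonzero(target)
    return (2.0 * intersection + eps) / (total + eps)
```

A score of 1.0 means perfect overlap with the expert annotation; 0.0 means no overlap at all.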

Limitations

The model is trained specifically to address GGO (ground-glass opacity) and consolidation patterns, and thus may not cover all conditions or work well for other types of disease patterns.

License

The End User License Agreement is included with the product. Licenses are also available with the model application zip file. By pulling and using the Clara Train SDK container and downloading models, you accept the terms and conditions of these licenses.

References

[1] https://developer.nvidia.com/clara-medical-imaging

[2] Myronenko, A., 2018, September. 3D MRI brain tumor segmentation using autoencoder regularization. In International MICCAI Brainlesion Workshop (pp. 311-320). Springer, Cham. https://arxiv.org/pdf/1810.11654.pdf

[3] Milletari, F., Navab, N. and Ahmadi, S.A., 2016, October. V-net: Fully convolutional neural networks for volumetric medical image segmentation. In 2016 Fourth International Conference on 3D Vision (3DV) (pp. 565-571). IEEE.