clara_pt_liver_and_tumor_ct_segmentation

Description

A pre-trained model for volumetric (3D) segmentation of the liver and tumour from CT images.

Publisher

NVIDIA

Use Case

Segmentation

Framework

PyTorch

Latest Version

4.1

Modified

March 25, 2022

Size

36.98 MB

Model Overview

A pre-trained model for volumetric (3D) segmentation of the liver and tumour from CT images.

Note: Version 4.1 of this model is only compatible with version 4.1 of the Clara Train SDK container.

Model Architecture

This model is trained using the U-Net architecture [1].

Diagram showing the flow from model input, through the model architecture, and to model output

Segmentation of the liver and tumour regions is formulated as a voxel-wise 3-class classification: each voxel is predicted as background, liver body, or liver tumour. The model is optimized with a gradient-descent method that minimizes the soft Dice loss [2] between the predicted mask and the ground-truth segmentation.
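
As a rough, non-authoritative illustration of this pairing, the sketch below builds a 3-output-channel 3D U-Net and a soft Dice loss with MONAI; the channel widths, strides and residual-unit count are assumed values, not the released training configuration.

```python
# Minimal sketch of the architecture/loss pairing described above.
# Channel widths, strides and num_res_units are illustrative assumptions.
import torch
from monai.networks.nets import UNet
from monai.losses import DiceLoss

model = UNet(
    spatial_dims=3,                     # volumetric (3D) segmentation
    in_channels=1,                      # single-channel CT input
    out_channels=3,                     # background, liver body, tumour
    channels=(16, 32, 64, 128, 256),    # assumed feature sizes
    strides=(2, 2, 2, 2),
    num_res_units=2,
)

# Soft Dice loss [2] between the predicted mask and the ground-truth labels.
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)

x = torch.zeros(1, 1, 96, 96, 96)                    # one training patch
y = torch.zeros(1, 1, 96, 96, 96, dtype=torch.long)  # voxel-wise labels 0/1/2
loss = loss_fn(model(x), y)
```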

Training

The training was performed with the following:

  • Script: train_multi_gpu.sh
  • GPU: 4 GPUs, each with at least 16 GB of memory
  • Actual Model Input: 96 x 96 x 96 for training, 160 x 160 x 160 for validation/testing
  • AMP: True
  • Optimizer: Adam
  • Learning Rate: 5e-4
  • Loss: DiceLoss
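
A hedged sketch of one training step under this configuration (Adam at 5e-4, Dice loss, automatic mixed precision) is shown below; `model`, `train_loader` and `device` are placeholders, and this is not the released train_multi_gpu.sh pipeline.

```python
# Illustrative AMP training step matching the listed settings.
# `model`, `train_loader` and `device` are assumed to be defined elsewhere.
import torch
from monai.losses import DiceLoss

optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)   # Adam, lr 5e-4
loss_fn = DiceLoss(to_onehot_y=True, softmax=True)          # DiceLoss
scaler = torch.cuda.amp.GradScaler()                        # AMP: True

model.train()
for batch in train_loader:
    image = batch["image"].to(device)   # (B, 1, 96, 96, 96) training patches
    label = batch["label"].to(device)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = loss_fn(model(image), label)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```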

If an out-of-memory error or a crash occurs while caching the dataset, lower the cache_rate of CacheDataset to a value in the range (0, 1).
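
For example, with MONAI's CacheDataset the cached fraction can be lowered roughly as follows; `train_files` and `train_transforms` are placeholders, and 0.5 is only an example value.

```python
# Cache only part of the dataset in memory instead of all of it.
from monai.data import CacheDataset

train_ds = CacheDataset(
    data=train_files,           # placeholder list of {"image": ..., "label": ...}
    transform=train_transforms,
    cache_rate=0.5,             # lower this value if caching runs out of memory
    num_workers=4,
)
```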

Dataset

The training data is from the Medical Decathlon.

  • Target: Liver and tumour
  • Task: Segmentation
  • Modality: CT
  • Size: 131 3D volumes (91 Training, 26 Validation, 14 Testing)
  • Challenge: Large variation in foreground size

The training dataset contains 91 images, while the validation and testing datasets contain 26 and 14 images, respectively.

Performance

The Dice score is used to evaluate the performance of the model. The trained model achieved an average Dice score of 0.8053 over the 14 test volumes (0.9281 for class 1, liver, and 0.6826 for class 2, tumour).
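
A minimal sketch of computing the per-class Dice score with MONAI's DiceMetric is shown below; the post-processing details are assumptions, and `val_outputs`/`val_labels` are placeholder lists of per-volume tensors.

```python
# Illustrative per-class Dice evaluation for the two foreground classes.
# `val_outputs` holds per-volume logits (3, D, H, W) and `val_labels` holds
# integer masks (1, D, H, W); both are placeholders.
from monai.metrics import DiceMetric
from monai.transforms import AsDiscrete

dice_metric = DiceMetric(include_background=False, reduction="mean_batch")
post_pred = AsDiscrete(argmax=True, to_onehot=3)
post_label = AsDiscrete(to_onehot=3)

dice_metric(
    y_pred=[post_pred(p) for p in val_outputs],   # one-hot predictions
    y=[post_label(l) for l in val_labels],        # one-hot ground truth
)
per_class_dice = dice_metric.aggregate()          # Dice for class 1 and class 2
dice_metric.reset()
```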

Training

Training loss over 1000 epochs.

Validation

Validation mean Dice score over 1000 epochs.

How to Use this Model

The model was validated with NVIDIA hardware and software. For hardware, the model can run on any NVIDIA GPU with more than 16 GB of memory. For software, this model is usable only as part of the Transfer Learning & Annotation Tools in the Clara Train SDK container. Find out more about Clara Train in the Clara Train Collections on NGC.

Full instructions for the training and validation workflow can be found in our documentation.

Input

Input: single-channel CT intensity image

Preprocessing:

  1. Converting to channel first
  2. Normalizing intensities to range [0, 1]
  3. Cropping to the foreground and its surrounding region

Augmentation for training (see the combined transform sketch after this list):

  1. Cropping random fixed-size regions of 96 x 96 x 96, with the center being a foreground or background voxel at a 1:1 ratio
  2. Randomly flipping volumes
  3. Randomly rotating volumes
  4. Randomly shifting intensity of the volume
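
A combined, non-authoritative sketch of these preprocessing and augmentation steps as a MONAI transform chain is given below; the intensity window, probabilities, number of crop samples and the use of 90-degree rotations are assumptions rather than the released configuration.

```python
# Illustrative transform chain for the steps listed above.
# Intensity window (a_min/a_max), probabilities, num_samples and the choice of
# 90-degree rotations are assumptions, not the released configuration.
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, ScaleIntensityRanged,
    CropForegroundd, RandCropByPosNegLabeld, RandFlipd, RandRotate90d,
    RandShiftIntensityd,
)

train_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),            # channel first
    ScaleIntensityRanged(keys=["image"], a_min=-200, a_max=250,
                         b_min=0.0, b_max=1.0, clip=True),   # intensities -> [0, 1]
    CropForegroundd(keys=["image", "label"], source_key="image"),
    RandCropByPosNegLabeld(keys=["image", "label"], label_key="label",
                           spatial_size=(96, 96, 96),        # 96 x 96 x 96 patches
                           pos=1, neg=1,                     # foreground:background 1:1
                           num_samples=2, image_key="image"),
    RandFlipd(keys=["image", "label"], prob=0.5, spatial_axis=[0, 1, 2]),
    RandRotate90d(keys=["image", "label"], prob=0.5, max_k=3),
    RandShiftIntensityd(keys=["image"], offsets=0.1, prob=0.5),
])
```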

Output

Output: 3 channels

  • Label 0: background
  • Label 1: liver body
  • Label 2: liver tumour
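
To obtain a single label volume from the three output channels, a channel-wise argmax can be taken, as in this minimal sketch (the tensor shown is a placeholder):

```python
# Convert the 3-channel network output into one label volume with values
# 0 (background), 1 (liver body) and 2 (liver tumour).
import torch

logits = torch.randn(1, 3, 160, 160, 160)              # placeholder network output
label_map = torch.argmax(logits, dim=1, keepdim=True)  # (1, 1, 160, 160, 160)
```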

Sliding-window Inference

Inference is performed on 3D volumes in a sliding window manner with a specified stride.
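
A minimal sketch using MONAI's sliding-window inferer with the 160 x 160 x 160 validation window is shown below; `model` and `ct_volume` are placeholders, and the overlap and window batch size are assumed values.

```python
# Illustrative sliding-window inference over a full CT volume.
# `model` and `ct_volume` (shape (1, 1, D, H, W)) are placeholders.
import torch
from monai.inferers import sliding_window_inference

model.eval()
with torch.no_grad():
    logits = sliding_window_inference(
        inputs=ct_volume,
        roi_size=(160, 160, 160),   # validation/testing window size
        sw_batch_size=4,            # assumed number of windows per forward pass
        predictor=model,
        overlap=0.25,               # assumed overlap; stride = roi_size * (1 - overlap)
    )
```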

Limitations

This training and inference pipeline was developed by NVIDIA. It is based on a segmentation model developed by NVIDIA researchers. This research-use-only software has not been cleared or approved by the FDA or any other regulatory agency. Clara pre-trained models are for developmental purposes only and cannot be used directly for clinical procedures.

References

[1] Çiçek, Özgün, et al. "3D U-Net: learning dense volumetric segmentation from sparse annotation." International conference on medical image computing and computer-assisted intervention. Springer, Cham, 2016. https://arxiv.org/abs/1606.06650.

[2] Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. "V-Net: Fully convolutional neural networks for volumetric medical image segmentation." 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016. https://arxiv.org/abs/1606.04797.

License

End User License Agreement is included with the product. Licenses are also available along with the model application zip file. By pulling and using the Clara Train SDK container and downloading models, you accept the terms and conditions of these licenses.