The model described on this page segments the heart and aorta on ECG-gated/non-gated chest CT scans with/without contrast. This training and inference pipeline was developed by NVIDIA. It is based on a segmentation model developed by NVIDIA Solution Architects in conjunction with the Medical Data Analytics Laboratory (MeDA Lab) and the Taiwan Cardiovascular AI Consortium (TWCVAI Consortium) team. The software is for research use only. The software's recommendations should not be solely or primarily relied upon by a healthcare professional to diagnose or treat any cardiac disease. This Research Use Only software has not been cleared or approved by the FDA or any other regulatory agency. Information about MeDA Lab can be found here. Information about the TWCVAI Consortium team can be found here.
Inspired by SegResNet [1], we modified the model structure by using bottleneck residual blocks and adding attention gates. This model is called HeaortaNet.
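As an illustration of the two building blocks named above, here is a minimal PyTorch sketch of a 3D bottleneck residual block and an additive attention gate. The channel sizes, normalization choice, and wiring are illustrative assumptions, not the exact HeaortaNet definition.

```python
import torch
import torch.nn as nn

class BottleneckResBlock(nn.Module):
    """3D bottleneck residual block: 1x1x1 reduce -> 3x3x3 -> 1x1x1 expand."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv3d(channels, mid, kernel_size=1, bias=False),
            nn.InstanceNorm3d(mid), nn.ReLU(inplace=True),
            nn.Conv3d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.InstanceNorm3d(mid), nn.ReLU(inplace=True),
            nn.Conv3d(mid, channels, kernel_size=1, bias=False),
            nn.InstanceNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # residual (skip) connection


class AttentionGate(nn.Module):
    """Additive attention gate: weight skip-connection features by a gating
    signal so irrelevant regions are suppressed before they are merged."""
    def __init__(self, in_channels: int, gating_channels: int, inter_channels: int):
        super().__init__()
        self.theta = nn.Conv3d(in_channels, inter_channels, kernel_size=1)
        self.phi = nn.Conv3d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv3d(inter_channels, 1, kernel_size=1)

    def forward(self, x, g):
        # x: skip-connection features; g: gating signal, assumed here to be
        # resampled to the same spatial size as x (a simplifying assumption).
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(x) + self.phi(g))))
        return x * attn
```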
This model was developed by NVIDIA Solution Architects in conjunction with the MeDA Lab and TWCVAI team. Training was performed with a Python script written with the ai4med library, the API of NVIDIA Clara Train. It requires one NVIDIA Quadro RTX 8000 GPU with 48 GB of memory.
The model is trained on a subset of a large-scale cardiovascular image database collected by the Taiwan Cardiovascular AI Consortium (TWCVAI Consortium), combined with reworked annotations based on the SegTHOR dataset (link). Each case in the dataset belongs to one of the following CT scan types:
(Left: Our annotation; Right: heart and aorta annotation in SegTHOR)
Clinical DICOM images are converted to NIfTI (.nii) format before being imported for training.
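A minimal sketch of this conversion step is shown below using SimpleITK; the library choice and file paths are assumptions, and any DICOM-to-NIfTI converter can be used instead.

```python
import SimpleITK as sitk

def dicom_series_to_nifti(dicom_dir: str, output_path: str) -> None:
    """Read a clinical DICOM series and write it out as a NIfTI volume."""
    reader = sitk.ImageSeriesReader()
    series_files = reader.GetGDCMSeriesFileNames(dicom_dir)
    reader.SetFileNames(series_files)
    image = reader.Execute()
    sitk.WriteImage(image, output_path)

# Hypothetical paths for illustration only.
dicom_series_to_nifti("/data/case_001/dicom", "/data/case_001/case_001.nii.gz")
```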
This segmentation model achieved Dice similarity coefficients greater than 0.93 for the heart, 0.90 for the ascending aorta, and 0.75 for the descending aorta on the test set.
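For reference, the sketch below shows how the Dice similarity coefficient is commonly computed for a pair of binary masks; it is not necessarily the exact evaluation script used for the numbers above.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient: 2 * |P ∩ T| / (|P| + |T|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))
```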
This model needs to be used with NVIDIA hardware and software. For hardware, the model can run inference on a Pascal or newer NVIDIA GPU with more than 12 GB of memory. For software, this model can be used with NVIDIA Clara Train v2.0, v3.0, and v3.1. For inference, please use the window-scanned (sliding-window) method. Users can test the pre-trained model with their own ECG-gated/non-gated chest CT scans with/without contrast, or with open datasets such as SegTHOR. Details about the data pre-processing and model outputs are given below.
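Clara Train provides its own scanning-window inference; purely as an illustration of the same idea, here is a minimal sketch using MONAI's sliding_window_inference. MONAI, the window size, and the overlap value are assumptions and are not part of this pipeline.

```python
import torch
from monai.inferers import sliding_window_inference

def segment_chest_ct(model: torch.nn.Module, ct_volume: torch.Tensor) -> torch.Tensor:
    """Run window-scanned inference over a pre-processed chest CT.

    ct_volume: tensor of shape (1, 1, D, H, W), intensities in HU,
    resampled to 1 x 1 x 1 mm spacing as described in this section.
    """
    model.eval()
    with torch.no_grad():
        logits = sliding_window_inference(
            inputs=ct_volume,
            roi_size=(160, 160, 160),  # hypothetical window matching the output size below
            sw_batch_size=1,
            predictor=model,
            overlap=0.25,
        )
    return logits  # 3 channels: heart, ascending aorta, descending aorta
```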
Input: a 1-channel CT image with intensity in HU, fixed spacing (1 x 1 x 1 mm), and non-fixed image size (approximately [1xx-3xx, 1xx-3xx, 2xx-5xx] voxels). The following pre-processing is applied to the CT image:
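The full pre-processing list is not reproduced here. As a minimal illustration of only the fixed-spacing requirement stated above, the sketch below resamples a CT volume to 1 x 1 x 1 mm with SimpleITK; the tool and the default HU fill value are assumptions, not necessarily what the pipeline uses.

```python
import SimpleITK as sitk

def resample_to_isotropic(image: sitk.Image, spacing=(1.0, 1.0, 1.0)) -> sitk.Image:
    """Resample a CT volume to a fixed isotropic spacing (default 1 x 1 x 1 mm)."""
    original_spacing = image.GetSpacing()
    original_size = image.GetSize()
    new_size = [int(round(sz * osp / nsp))
                for sz, osp, nsp in zip(original_size, original_spacing, spacing)]
    resampler = sitk.ResampleImageFilter()
    resampler.SetOutputSpacing(spacing)
    resampler.SetSize(new_size)
    resampler.SetOutputOrigin(image.GetOrigin())
    resampler.SetOutputDirection(image.GetDirection())
    resampler.SetInterpolator(sitk.sitkLinear)
    resampler.SetDefaultPixelValue(-1024)  # air in HU (assumed fill value)
    return resampler.Execute(image)
```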
Output: a 3-channel segmentation with size (160, 160, 160): channel 0: heart; channel 1: ascending aorta; channel 2: descending aorta.
This model was trained on chest CT. It may not give accurate predictions on CT scans that include the abdomen or neck region.
[1] Myronenko, Andriy. "3D MRI brain tumor segmentation using autoencoder regularization." International MICCAI Brainlesion Workshop. Springer, Cham, 2018. https://arxiv.org/pdf/1810.11654.pdf.
These deliverables are NVIDIA Containerized Software governed by the NVIDIA GPU Cloud terms of use. Otherwise, you have no rights to access or use these deliverables in any manner. Read the NVIDIA GPU Cloud terms of use here: https://ngc.nvidia.com/legal/terms
---------------------------------------------------------------------------
NVIDIA Proprietary License
---------------------------------------------------------------------------
* Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
* NVIDIA CORPORATION and its licensors retain all intellectual property
* and proprietary rights in and to this software, related documentation
* and any modifications thereto. Any use, reproduction, disclosure or
* distribution of this software and related documentation without an express
* license agreement from NVIDIA CORPORATION is strictly prohibited.
---------------------------------------------------------------------------
The model is trained with the National Taiwan University Hospital dataset and is licensed under CC BY-NC-ND for the following purpose only: use in research and development, including the development of AI-based imaging workflows, and not for instruments deployed with patients or for patient diagnostics.
===========================================================================