This resource contains a sample video and three ultrasound machine learning models developed by iCardio.ai to be used concurrently in the Multi-AI Inference pipeline released in Holoscan SDK v0.4.
Note: The provided models are in ONNX format. They will automatically be converted into TensorRT models (.engine) the first time they're processed by a Holoscan application.
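As a rough sketch of how the three ONNX files might be handed to the SDK's inference operator, the example below uses the Python API of recent Holoscan SDK releases rather than v0.4 specifically; the model keys, file paths, and tensor names are illustrative assumptions, not values defined by this resource. With the TensorRT backend selected, the operator performs the ONNX-to-engine conversion and caches the result on first use.

```python
# Sketch only: passing the three ONNX models to Holoscan's inference operator.
# Model keys, paths, and tensor names are illustrative; with backend="trt",
# each ONNX file is converted to a TensorRT engine the first time it is used.
from holoscan.core import Application
from holoscan.operators import InferenceOp
from holoscan.resources import UnboundedAllocator


class MultiAIUltrasoundApp(Application):
    def compose(self):
        pool = UnboundedAllocator(self, name="pool")
        inference = InferenceOp(
            self,
            name="multi_ai_inference",
            backend="trt",
            allocator=pool,
            model_path_map={
                "plax_chamber": "models/plax_chamber.onnx",
                "aortic_stenosis": "models/aortic_stenosis.onnx",
                "bmode_perspective": "models/bmode_perspective.onnx",
            },
            pre_processor_map={
                "plax_chamber": ["plax_cham_pre_proc"],
                "aortic_stenosis": ["aortic_pre_proc"],
                "bmode_perspective": ["bmode_pre_proc"],
            },
            inference_map={
                "plax_chamber": ["plax_cham_infer"],
                "aortic_stenosis": ["aortic_infer"],
                "bmode_perspective": ["bmode_infer"],
            },
        )
        # Upstream preprocessing and downstream visualization operators would
        # be added and connected here with add_flow(...).
```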
In clinical practice, echocardiograms are produced by placing a transducer at various points on a patient's body, at different axes of rotation and degrees of skew. Used around the world, this standard procedure for generating an ultrasound examination produces over 20 "views" of the heart. Each view corresponds to a particular angle on the heart and contains an expected set of cardiac anatomical components. Using the view classification model, we can automatically determine the class of a given view, and therefore also infer the cardiac anatomical components it contains. Determining the view is the first, upstream step in processing for both humans and machines.
The model output: The model produces a confidence, for each frame, for each of 28 cardiac views as defined by the guidelines of the American Society of Echocardiography. The confidence of the most prominent class for each frame is averaged over the length of the video to arrive at a definitive perspective classification. For instance, a confidence of 100% for “PLAX Standard” indicates that, based on each individual frame in that video, the model considers the overall video most likely to show the “PLAX Standard” perspective of the heart.
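As a rough illustration of one plausible reading of that averaging step (not part of this resource; the frame-level output shape and the softmax normalization are assumptions), the sketch below aggregates per-frame scores into a single video-level view prediction.

```python
# Illustrative sketch: aggregate per-frame view scores into one video-level
# prediction. Assumes the model yields one 28-element score vector per frame.
import numpy as np

NUM_VIEWS = 28  # cardiac views per the ASE guidelines referenced above

def classify_video(frame_scores: np.ndarray) -> tuple[int, float]:
    """frame_scores: array of shape (num_frames, NUM_VIEWS) with raw scores."""
    # Softmax each frame's scores into per-view confidences (assumed here;
    # the model may already emit normalized confidences).
    exp = np.exp(frame_scores - frame_scores.max(axis=1, keepdims=True))
    confidences = exp / exp.sum(axis=1, keepdims=True)
    # Average confidences over the length of the video, then take the most
    # prominent class as the definitive perspective for the whole clip.
    mean_conf = confidences.mean(axis=0)
    view_idx = int(mean_conf.argmax())
    return view_idx, float(mean_conf[view_idx])

# Example: 120 frames of random scores stand in for real model output.
scores = np.random.rand(120, NUM_VIEWS)
view, conf = classify_video(scores)
print(f"Predicted view index {view} with average confidence {conf:.2f}")
```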
This model identifies four crucial linear measurements of the heart. From top to bottom, the chamber model automatically generates an estimated caliper placement to measure the diameter of the Right Ventricle, the thickness of the Interventricular Septum, the diameter of the Left Ventricle, and the thickness of the Posterior Wall. These measurements are central to diagnosing the most common cardiac abnormalities and diseases. For instance, if the diameter of the Left Ventricle is determined to be larger than expected for that patient (after accounting for gender and patient constitution), this can be a telling sign of diastolic dysfunction or, more broadly, various forms of heart failure.
The model output: This model predicts the most likely pixel location for each class. The distance between those pixels can be taken as the length of the underlying anatomical component. By drawing the distance between the points, we can watch the model track the Left and Right Ventricular chamber sizes over the course of the cardiac cycle.
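The sketch below is a hypothetical illustration of that distance calculation: given predicted pixel coordinates for each caliper endpoint, it converts pixel distances into physical lengths using an assumed pixel spacing. The keypoint names, coordinates, and spacing are examples, not values produced by this resource.

```python
# Illustrative sketch: turn predicted caliper endpoints into linear
# measurements. Keypoints and pixel spacing are hypothetical examples.
import math

PIXEL_SPACING_CM = 0.035  # assumed physical size of one pixel, in cm

def caliper_length(p1, p2, spacing_cm=PIXEL_SPACING_CM):
    """Euclidean distance between two (x, y) pixel locations, in cm."""
    return math.dist(p1, p2) * spacing_cm

# Hypothetical keypoints for one frame, ordered top to bottom as described
# above: RV diameter, interventricular septum, LV diameter, posterior wall.
keypoints = {
    "rv_diameter": ((112, 80), (112, 158)),
    "ivs_thickness": ((112, 158), (112, 182)),
    "lv_diameter": ((112, 182), (112, 310)),
    "pw_thickness": ((112, 310), (112, 334)),
}

for name, (p1, p2) in keypoints.items():
    print(f"{name}: {caliper_length(p1, p2):.2f} cm")
```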
Aortic stenosis is a well-studied heart disease affecting the function of the aortic valve. It can impede the flow of blood from the Left Ventricle into the rest of the body. A patient with severe aortic stenosis may suffer from cardiac dysfunction, so early detection of the disease is crucial. Aortic stenosis is known to be underdiagnosed, resulting in suboptimal treatment for many patients. The challenge is that determining aortic stenosis can involve measuring multiple parameters, which makes the diagnosis potentially very tricky. Unlike traditional diagnosis of aortic stenosis, this model provides a propensity for the presence of aortic stenosis directly from standard ultrasound images, making it suitable for use in a variety of real-world settings.
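As a hedged sketch of how such a propensity might be consumed downstream (the sigmoid output range and the decision threshold below are assumptions for demonstration, not specifications of this model or clinical guidance):

```python
# Illustrative sketch: interpret a raw model output as an aortic stenosis
# propensity. The sigmoid normalization and the 0.5 threshold are assumed
# for demonstration only.
import math

def stenosis_propensity(raw_score: float) -> float:
    """Map a raw score to a 0-1 propensity (assumed sigmoid output head)."""
    return 1.0 / (1.0 + math.exp(-raw_score))

def flag_for_review(raw_score: float, threshold: float = 0.5) -> bool:
    """Flag studies whose propensity exceeds an illustrative threshold."""
    return stenosis_propensity(raw_score) >= threshold

print(flag_for_review(1.3))   # True  -> propensity ~0.79
print(flag_for_review(-2.0))  # False -> propensity ~0.12
```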
The sample data is an .avi video that consists of echocardiogram frames with features identifiable by the three models above.
Note: the .avi file must be converted into a GXF tensor file using the convert_video_to_gxf_entities.py script (available on GitHub) before it can be used with the VideoStreamReplayer Holoscan operator.
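As a hedged sketch of that replay step (the directory and basename below are assumptions that should match the output prefix used when running convert_video_to_gxf_entities.py, which writes a <basename>.gxf_entities / <basename>.gxf_index file pair), the converted video can be read back with the replayer operator:

```python
# Sketch only: replaying the converted sample video with Holoscan's
# VideoStreamReplayerOp. Directory and basename are illustrative.
from holoscan.core import Application
from holoscan.operators import VideoStreamReplayerOp


class ReplayApp(Application):
    def compose(self):
        replayer = VideoStreamReplayerOp(
            self,
            name="replayer",
            directory="data/ultrasound",  # assumed location of the converted files
            basename="icardio_input1",    # assumed basename used during conversion
            frame_rate=0,                 # 0 = use the recorded timestamps
            repeat=True,
            realtime=True,
            count=0,                      # 0 = replay all frames
        )
        # Downstream operators (format conversion, inference, visualization)
        # would be connected to `replayer` here with add_flow(...).
```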
Refer to the NVIDIA Holoscan Software license file supplied within for use of the models and data, and to the Holoscan SDK EULA.