August 20, 2014
Omar Arif, a postdoctoral researcher in the Computational Vision Lab headed by Prof. Ganesh Sundaramoorthi, has created, along with collaborators, new interactive segmentation and tracking tools specifically tailored for detecting and tracking structures in medical images (MRI, CT). Many cardiovascular clinical applications (e.g., detection of arrhythmias, a heart rhythm disorder, and surgical planning for inserting pacemakers) require accurate shape estimates of the heart muscle (myocardium) and ventricles as they deform in time. These can be estimated non-invasively from cardiac MRI and CT. Creating segmentation tools for these images is challenging: the structures have non-homogeneous appearance and complex shape, faint boundaries and non-salient features, and the shape, appearance and motion of these structures vary drastically across patients.
While fully automatic segmentation of these 4D (3D in space + time) images is the ultimate goal, most clinical applications require a degree of segmentation accuracy that fully automated methods, still at the research stage, do not offer. Therefore, most commercial applications require clinicians to manually segment the entire 4D image, an extremely time-consuming and tedious process. Interactive techniques, in which a user aids the computer, offer a tradeoff between fully automatic and fully manual segmentation, but current interactive techniques demand too much interaction, costing user time. Dr. Arif and collaborators have constructed a method that quickly propagates a segmentation from one slice to the remaining slices and, across time, to the rest of the image. The method is based on new insights into motion estimation noticed by the group: it takes into account physical constraints from fluid mechanics that interacting structures must satisfy. What is interesting is that these rather generic constraints lead to precise segmentations, and even to much more accurate results than the traditional approach of incorporating large amounts of manually annotated training images. Compared to recent and widely used commercial software for cardiac analysis, the techniques created by Dr. Arif and collaborators are also much more accurate, saving the user considerable interaction time.
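To give a flavor of the general idea of flow-constrained label propagation, the sketch below projects a 2D motion field onto the space of divergence-free (incompressible) fields, one of the generic fluid-mechanics constraints an incompressible tissue must satisfy, and then uses the flow to carry a segmentation mask from one frame to the next. This is a minimal illustration only: the function names, the FFT-based Helmholtz projection, and the nearest-neighbor warp are our own illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def project_divergence_free(u, v):
    """Remove the divergent part of a 2D flow field (u, v) via an
    FFT-based Helmholtz projection, enforcing div(flow) = 0.
    Illustrative assumption: periodic boundaries."""
    h, w = u.shape
    ky = np.fft.fftfreq(h)[:, None]   # frequencies along y
    kx = np.fft.fftfreq(w)[None, :]   # frequencies along x
    U, V = np.fft.fft2(u), np.fft.fft2(v)
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                    # avoid 0/0; the mean flow is untouched
    div = kx * U + ky * V             # spectral divergence (up to a constant)
    U -= kx * div / k2                # subtract the curl-free component
    V -= ky * div / k2
    return np.real(np.fft.ifft2(U)), np.real(np.fft.ifft2(V))

def warp_mask(mask, u, v):
    """Propagate a binary segmentation mask along the flow (u, v)
    by nearest-neighbor backward warping."""
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(yy - v).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx - u).astype(int), 0, w - 1)
    return mask[src_y, src_x]
```

For example, warping a small square mask with a constant unit flow in x shifts it one pixel to the right, and a constant (already divergence-free) field passes through the projection unchanged.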
Initial results of this work have been accepted for publication in 2014 in the IEEE Transactions on Medical Imaging, the top venue in medical imaging. A preprint of the publication is available.
The authors are continuing the work to create a full interactive software suite for cardiac analysis.
Omar Arif received his PhD in Electrical & Computer Engineering from the Georgia Institute of Technology in 2010. After his PhD, he was a research scientist at the Georgia Tech Research Institute. He joined KAUST in mid-2012 as a postdoctoral researcher. His research interests are in video target tracking and analysis, with particular emphasis on statistical and machine learning methods.
IEEE Transactions on Medical Imaging is a top venue in medical imaging. It publishes original contributions on medical imaging achieved by various modalities, such as ultrasound, X-rays (including CT), magnetic resonance, radionuclides, microwaves, and light, as well as medical image processing and analysis, visualization, pattern recognition, and related methods. The journal focuses on a unified common ground where instrumentation, systems, components, hardware and software, mathematics and physics contribute to the studies.