Paper ID | BIO-9.4
Paper Title | Unsupervised Multimodal Image Registration with Adaptative Gradient Guidance
Authors | Zhe Xu, Jiangpeng Yan, Tsinghua University, China; Jie Luo, Harvard Medical School, United States; Xiu Li, Tsinghua University, China; Jagadeesan Jayender, Harvard Medical School, United States
Session | BIO-9: Medical Image Analysis
Location | Gather.Town
Session Time | Wednesday, 09 June, 14:00 - 14:45
Presentation Time | Wednesday, 09 June, 14:00 - 14:45
Presentation | Poster
Topic | Biomedical Imaging and Signal Processing: [BIO-MIA] Medical image analysis
Abstract | Multimodal image registration is a fundamental procedure in many image-guided therapies. Recently, unsupervised learning-based methods have demonstrated promising accuracy and efficiency in deformable image registration. However, existing methods estimate the deformation field solely from the to-be-registered image pair, so the network is rarely aware of mismatched boundaries, resulting in unsatisfactory organ boundary alignment. In this paper, we propose a novel multimodal registration framework that leverages the deformation fields estimated from both (i) the original to-be-registered image pair and (ii) their corresponding gradient intensity maps, and adaptively fuses them with the proposed gated fusion module. With the help of auxiliary gradient-space guidance, the network can concentrate more on the spatial relationship of the organ boundary. Experimental results on two clinically acquired CT-MRI datasets demonstrate the effectiveness of the proposed approach.
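The two ideas in the abstract — gradient intensity maps as an auxiliary input, and a gated module that adaptively fuses two deformation fields — can be sketched numerically. This is a minimal NumPy illustration, not the paper's implementation: the paper does not specify its gradient operator here, and in the actual model the gate would be produced by a small learned (convolutional) network rather than passed in as an argument.

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient intensity map of a 2-D image.

    Uses central finite differences; the paper's exact gradient
    operator is an assumption here.
    """
    gy, gx = np.gradient(img.astype(float))  # derivatives along rows, cols
    return np.hypot(gx, gy)                  # gradient magnitude

def gated_fuse(field_img, field_grad, gate_logits):
    """Convex per-voxel combination of two deformation fields.

    field_img  : field estimated from the original image pair
    field_grad : field estimated from the gradient intensity maps
    gate_logits: in the real model these would come from a learned
                 gating network; here they are supplied directly.
    """
    gate = 1.0 / (1.0 + np.exp(-gate_logits))       # sigmoid -> (0, 1)
    return gate * field_img + (1.0 - gate) * field_grad
```

With zero logits the gate is 0.5 everywhere, so the fused field is the plain average of the two inputs; a trained gate would instead weight the gradient-guided field more strongly near organ boundaries.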