Multi-modal image registration has received significant research attention over the past decade. Symmetric-SIFT is a recently proposed local description technique that can be used for registering multi-modal images. It is based on the Scale-Invariant Feature Transform (SIFT), a well-known technique for detecting and describing local image features. Symmetric-SIFT, however, achieves its invariance to multi-modality at the cost of losing important information. In this paper, we show how this loss may adversely affect the accuracy of registration results. We then propose an improvement to Symmetric-SIFT to overcome the problem. Our experimental results show that the proposed technique can increase the number of true matches by up to a factor of 10 and overall matching accuracy by up to 30%.
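Symmetric-SIFT itself is not available in standard vision libraries, but the SIFT pipeline it builds on is. The following minimal sketch (an illustration of the underlying pipeline only, not the method proposed here) uses OpenCV's SIFT implementation to detect keypoints, compute descriptors, and match them with the standard ratio test; the image filenames are placeholders.

```python
import cv2

# Load the two images to be registered (paths are placeholders).
img_a = cv2.imread("modality_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("modality_b.png", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute standard SIFT descriptors.
sift = cv2.SIFT_create()
kp_a, desc_a = sift.detectAndCompute(img_a, None)
kp_b, desc_b = sift.detectAndCompute(img_b, None)

# Match descriptors by brute force, then apply Lowe's ratio test
# to keep only distinctive correspondences.
matcher = cv2.BFMatcher()
candidates = matcher.knnMatch(desc_a, desc_b, k=2)
good = [m for m, n in candidates if m.distance < 0.75 * n.distance]

print(f"{len(good)} putative matches out of {len(candidates)} candidates")
```

Plain SIFT descriptors, as above, are not invariant to the contrast reversals common across modalities; Symmetric-SIFT modifies the descriptor construction to obtain that invariance, and the information lost in doing so is the problem the proposed improvement addresses.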
Multimodal image registration (MMIR) is the alignment of contents in images captured with different sensors or instruments. MMIR is important in medical applications because it enables the visualization of complementary contents in biomedical microscopic images. Registering such images can be challenging because the structures of their contents are usually only partially similar. In this paper, we therefore propose a new method that maximizes the structural similarity of the contents of such images by exploiting intensity relationships among the red, green, and blue (RGB) color channels. Our experimental results demonstrate that the proposed method substantially improves registration accuracy for such images compared with state-of-the-art methods.
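The abstract does not specify how the cross-channel intensity relationships are used, so the sketch below is only an assumption-laden illustration of the raw ingredients: it separates the RGB channels of a microscopy image and quantifies one simple pairwise relationship (Pearson correlation) between their intensities. The filename and the choice of correlation as the relationship measure are hypothetical.

```python
import cv2
import numpy as np

# Load a color image (path is a placeholder); OpenCV stores channels as BGR.
img = cv2.imread("stained_section.png", cv2.IMREAD_COLOR)
blue, green, red = cv2.split(img)

def channel_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between the intensities of two channels."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    return float(np.corrcoef(a, b)[0, 1])

# One simple way to quantify pairwise intensity relationships
# (illustrative only; not necessarily the measure used in the paper).
pairs = {"R-G": (red, green), "R-B": (red, blue), "G-B": (green, blue)}
for name, (c1, c2) in pairs.items():
    print(f"{name}: correlation = {channel_correlation(c1, c2):.3f}")
```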