Multi-modal image registration has received significant research attention over the past decade. Symmetric-SIFT is a recently proposed local description technique that can be used for registering multi-modal images. It is based on the well-known general image registration technique named Scale Invariant Feature Transform (SIFT). Symmetric-SIFT, however, achieves its invariance to multi-modality at the cost of losing important information. In this paper, we show how this loss can adversely affect the accuracy of registration results. We then propose an improvement to Symmetric-SIFT that overcomes the problem. Our experimental results show that the proposed technique can increase the number of true matches by up to 10 times and improve overall matching accuracy by up to 30%.
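Symmetric-SIFT's invariance to multi-modality rests on treating opposite gradient directions as equivalent, since the same edge can appear with reversed contrast in different modalities. The following is a minimal sketch of that orientation-folding idea, assuming a fold into the half-circle [0, π); the function name and interface are ours, not from the paper.

```python
import numpy as np

def fold_orientation(theta):
    """Map gradient orientations (radians) into [0, pi).

    Illustrative sketch: orientations theta and theta + pi are treated
    as equivalent, so a contrast reversal between modalities no longer
    changes the orientation assigned to an edge.
    """
    return np.mod(theta, np.pi)

# Two gradients pointing in opposite directions fold to the same value.
folded = fold_orientation(np.array([0.3, 0.3 + np.pi]))
```

Note that this folding is exactly the information loss the abstract refers to: after it, genuinely different gradient directions can no longer be distinguished.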
Scale Invariant Feature Transform (SIFT) has been applied in numerous applications, especially in the domain of computer vision. In these applications, the image information used to build the SIFT descriptor can have a significant impact on its performance. When building the orientation histograms of a descriptor, a critical step is how the values in the orientation bins are incremented. The original SIFT scheme for this step was improved in subsequent work. Two different types of gradient information are used for building orientation histograms. In this paper, we identify the limitations of these two schemes and then propose three new schemes that use both types of gradient information in the feature description and matching stages. Our experimental results show that the proposed schemes can achieve better registration performance than the schemes of SIFT and its improved variant.
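The binning step discussed above can be sketched as follows: each pixel in a patch votes into the bin of its gradient orientation, with the vote weighted by gradient magnitude. This is a simplified illustration, not the exact SIFT scheme (which additionally applies Gaussian spatial weighting and trilinear interpolation between bins); the function name and the nearest-bin voting are our assumptions.

```python
import numpy as np

def orientation_histogram(gx, gy, n_bins=8):
    """Magnitude-weighted orientation histogram over a patch.

    gx, gy: arrays of horizontal and vertical gradients.
    Each pixel contributes its gradient magnitude to the bin
    covering its gradient orientation in [0, 2*pi).
    """
    mag = np.hypot(gx, gy)
    theta = np.mod(np.arctan2(gy, gx), 2 * np.pi)          # orientation in [0, 2*pi)
    bins = np.minimum((theta / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())             # magnitude-weighted votes
    return hist

# Four unit gradients pointing right, up, left, and down
# fall into four distinct bins of a 4-bin histogram.
gx = np.array([[1.0, 0.0], [-1.0, 0.0]])
gy = np.array([[0.0, 1.0], [0.0, -1.0]])
h = orientation_histogram(gx, gy, n_bins=4)
```

How the vote weight is chosen at this step (e.g. magnitude versus other gradient statistics) is precisely the design choice whose variants the paper compares.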