A novel strategy to balance the results of cross-modal hashing
- Authors: Zhong, Fangming; Chen, Zhikui; Min, Geyong; Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 107 (2020)
- Full Text: false
- Reviewed:
- Description: Hashing methods for cross-modal retrieval have drawn increasing research interest and have been widely studied in recent years due to the explosive growth of multimedia big data. However, a significant phenomenon that has been largely ignored is the gap between the results of the two retrieval directions in cross-modal hashing: for example, the results of Text-to-Image frequently outperform those of Image-to-Text by a large margin. In this paper, we propose a strategy named semantic augmentation to improve and balance the results of cross-modal hashing. An intermediate semantic space is constructed to re-align feature representations that are embedded with weak semantic information. Through this intermediate semantic space, the semantic information of visual features can be further augmented before the features are fed to cross-modal hashing algorithms. Extensive experiments are carried out on four datasets with seven state-of-the-art cross-modal hashing methods. Compared with the results without semantic augmentation, the Image-to-Text results of these methods are improved considerably, which demonstrates the effectiveness of the proposed semantic augmentation strategy in bridging the gap between the results of cross-modal retrieval. Additional experiments on real-valued, semi-supervised, semi-paired, partial-paired, and unpaired cross-modal retrieval methods further indicate the effectiveness of our strategy in improving cross-modal retrieval performance. © 2020 Elsevier Ltd
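The semantic-augmentation idea in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it assumes the intermediate semantic space is label-derived and that the mapping from visual features into it is a ridge-regularised least-squares projection; the function name and all parameters are hypothetical.

```python
import numpy as np

def augment_visual_features(X_img, Y_labels, reg=1e-3):
    """Project visual features into an intermediate semantic space.

    X_img:    (n, d) visual feature matrix
    Y_labels: (n, c) multi-hot label matrix supplying semantic supervision
    Returns (n, c) augmented features carrying stronger semantic information,
    which can then be fed to any cross-modal hashing algorithm.
    """
    d = X_img.shape[1]
    # Ridge-regularised least squares: W = (X^T X + reg*I)^{-1} X^T Y
    W = np.linalg.solve(X_img.T @ X_img + reg * np.eye(d), X_img.T @ Y_labels)
    return X_img @ W

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))                # toy visual features
Y = (rng.random((100, 10)) > 0.7).astype(float)   # toy multi-label matrix
X_aug = augment_visual_features(X, Y)
print(X_aug.shape)  # (100, 10)
```

The augmented features replace the raw visual features on the image side only, which is what lets the strategy narrow the Image-to-Text gap without touching the text modality.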
MESH: a flexible manifold-embedded semantic hashing for cross-modal retrieval
- Authors: Zhong, Fangming; Wang, Guangze; Chen, Zhikui; Xia, Feng
- Date: 2020
- Type: Text , Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 147569-147579
- Full Text:
- Reviewed:
- Description: Hashing based methods for cross-modal retrieval has been widely explored in recent years. However, most of them mainly focus on the preservation of neighborhood relationship and label consistency, while ignore the proximity of neighbors and proximity of classes, which degrades the discrimination of hash codes. And most of them learn hash codes and hashing functions simultaneously, which limits the flexibility of algorithms. To address these issues, in this article, we propose a two-step cross-modal retrieval method named Manifold-Embedded Semantic Hashing (MESH). It exploits Local Linear Embedding to model the neighborhood proximity and uses class semantic embeddings to consider the proximity of classes. By so doing, MESH can not only extract the manifold structure in different modalities, but also can embed the class semantic information into hash codes to further improve the discrimination of learned hash codes. Moreover, the two-step scheme makes MESH flexible to various hashing functions. Extensive experimental results on three datasets show that MESH is superior to 10 state-of-the-art cross-modal hashing methods. Moreover, MESH also demonstrates superiority on deep features compared with the deep cross-modal hashing method. © 2013 IEEE.