Automatic image search based on improved feature descriptors and decision tree
- Authors: Hou, Jin; Chen, Zeng; Qin, Xue; Zhang, Dengsheng
- Date: 2011
- Type: Text, Journal article
- Relation: Integrated Computer-Aided Engineering Vol. 18, no. 2 (2011), p. 167-180
- Full Text: false
- Reviewed:
- Description: There has been growing interest in implementing image search engines at the semantic level. However, most existing practical systems, including popular commercial image search engines such as Google and Yahoo!, are either text-based or a simple hybrid of text and visual features. This paper proposes a novel image search system based on automatic image annotation. We develop a technology that learns semantic image concepts from image content and transforms unstructured images into textual documents, so that images are indexed and retrieved in the same way as textual documents. Existing database management systems can then be used to manage image content effectively, and image search can be as efficient as text search. Experiments on both the Corel dataset and a real Web dataset validate the system, and the results are promising. The system suggests a new combination of text and visual features for semantic image search, and is expected to serve as a re-ranking stage for existing Internet image search results.
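The core idea of the abstract, indexing annotated images exactly like text documents, can be sketched with a toy inverted index. This is an illustrative sketch only; the function names, toy annotations and AND-style query semantics are my assumptions, not the paper's implementation:

```python
# Sketch: once images carry concept keywords, a plain text-style inverted
# index makes image search as cheap as text search. All names here are
# illustrative, not the paper's API.

from collections import defaultdict

def build_text_index(image_annotations):
    """image_annotations: {image_id: [keyword, ...]} -> {keyword: {image_ids}}."""
    index = defaultdict(set)
    for image_id, keywords in image_annotations.items():
        for kw in keywords:
            index[kw].add(image_id)
    return index

def search(index, query_terms):
    """Return images containing every query keyword (simple AND semantics)."""
    result = None
    for term in query_terms:
        hits = index.get(term, set())
        result = hits if result is None else result & hits
    return result or set()

annotations = {
    "img1": ["sky", "beach", "sea"],
    "img2": ["sky", "mountain"],
    "img3": ["beach", "sea"],
}
index = build_text_index(annotations)
print(sorted(search(index, ["sky", "beach"])))  # -> ['img1']
```

Once images are in this form, any text retrieval machinery (ranking, Boolean queries, database indexes) applies unchanged, which is the efficiency argument the abstract makes.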
Composite feature modeling and retrieval
- Authors: Hou, Jin; Zhang, Dengsheng; Chen, Zeng; Xu, Xuerong; Nakamura, Takahiro
- Date: 2008
- Type: Text, Conference paper
- Relation: Proceedings of the 2008 10th International Conference on Control, Automation, Robotics & Vision p. 2176-2181
- Full Text: false
- Reviewed:
- Description: Feature-based intelligent design and manufacturing systems in the Internet environment are an evolution of traditional geometric and solid modeling systems. This paper presents several novel algorithms, including a new face-based representation, a composite feature modeling and retrieval technology, and an efficient communication mechanism, to construct an interactive framework for composite feature modeling and retrieval. The proposed system consists of a feature modeler developed on Wolfram Research Mathematica, a GUI (graphical user interface) enabled by Java and Java 3D, and a database (DB). Experiments demonstrate that the system reflects designers' intent properly and is user-friendly to experts from various technical backgrounds. The paper provides fundamental principles for composite feature modeling and retrieval in a web-based distributed environment.
Automatic image annotation based on decision tree machine learning
- Authors: Jiang, Lixing; Hou, Jin; Zeng, Chen; Zhang, Dengsheng
- Date: 2009
- Type: Text, Conference paper
- Relation: Proceedings of the International Conference on Cyber-Enabled Distributed Computing and Knowledge Discovery p. 170-175
- Full Text: false
- Reviewed:
- Description: With the rapid development of digital imaging technology, image annotation has become an important and challenging task in image retrieval. Many machine learning methods have been applied to the problem of automatic image annotation (AIA). However, an enormous semantic gap remains between low-level image features and high-level semantic concepts, so the annotation performance of existing methods is not satisfactory and needs further improvement. This paper proposes an automatic annotation framework based on a novel decision tree-based Bayesian (DTB) machine learning algorithm, a hybrid approach that combines the advantages of decision trees (DT) and Naive Bayes (NB). We first segment an image into regions and extract low-level features from each region; from these features, high-level semantic concepts are obtained using the DTB learning algorithm. Experiments conducted on the Corel dataset demonstrate the effectiveness of DTB learning: it not only enhances classification accuracy but also associates low-level region features with high-level image concepts. Moreover, its semantic interpretation capability is a natural simulation of human learning.
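One common way to combine a decision tree with Naive Bayes, which seems consistent with the DTB description, is to let a shallow tree partition the feature space and fit a Naive Bayes model inside each impure leaf. The sketch below is an assumed reading using scikit-learn stand-ins; the class name, toy data and depth choice are mine, not the authors':

```python
# Illustrative sketch: a shallow decision tree partitions the region-feature
# space, and a Gaussian Naive Bayes model is fit inside each impure leaf.
# This is an assumed reconstruction of the DTB idea, not the paper's code.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

class DecisionTreeBayes:
    def __init__(self, max_depth=2):
        self.tree = DecisionTreeClassifier(max_depth=max_depth, random_state=0)
        self.leaf_models = {}

    def fit(self, X, y):
        self.tree.fit(X, y)
        leaves = self.tree.apply(X)            # leaf id of each training sample
        for leaf in np.unique(leaves):
            mask = leaves == leaf
            if len(np.unique(y[mask])) > 1:    # impure leaf: fit NB inside it
                self.leaf_models[leaf] = GaussianNB().fit(X[mask], y[mask])
            else:                              # pure leaf: constant label
                self.leaf_models[leaf] = y[mask][0]
        return self

    def predict(self, X):
        leaves = self.tree.apply(X)
        preds = []
        for x, leaf in zip(X, leaves):
            model = self.leaf_models[leaf]
            if isinstance(model, GaussianNB):
                preds.append(model.predict(x.reshape(1, -1))[0])
            else:
                preds.append(model)
        return np.array(preds)

# Toy "region features" with two well-separated concepts
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
              [5, 5], [5, 6], [6, 5], [6, 6]], dtype=float)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model = DecisionTreeBayes(max_depth=1).fit(X, y)
print(model.predict(np.array([[0.5, 0.5], [5.5, 5.5]])))  # -> [0 1]
```

The tree contributes readable, rule-like structure while the per-leaf NB models handle regions the splits cannot separate cleanly, which matches the abstract's claim of combining DT interpretability with Bayesian classification.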
Digital image retrieval using intermediate semantic features and multistep search
- Authors: Zhang, Dengsheng; Liu, Ying; Hou, Jin
- Date: 2008
- Type: Text, Conference paper
- Relation: Proceedings of the Digital Image Computing: Techniques and Applications p. 513-518
- Full Text: false
- Reviewed:
- Description: Semantic image retrieval has recently attracted a large amount of interest due to the rapid growth of digital image storage, but existing approaches have severe limitations. This paper proposes a new approach to digital image retrieval using intermediate semantic features and multistep search. Instead of pursuing human-level semantics, which is too challenging at this stage, the research looks for heuristic information and intermediate semantic features that describe image content objectively. Unlike conventional approaches, the intermediate features are used as filters to eliminate a large number of irrelevant images; conventional content-based image retrieval techniques and relevance feedback (RF) are then applied to improve retrieval accuracy. The proposed system captures both regional and global features, and makes use of both semantic and low-level features. It also provides a powerful user interface with convenient retrieval mechanisms, including SQL, RF and query by example. Results show the system has a significant gain over existing region-based and global image retrieval approaches.
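The two-step pattern described above, a cheap semantic filter followed by low-level ranking, can be illustrated minimally. The field names, toy concept sets and Euclidean distance below are illustrative assumptions, not the paper's actual feature set:

```python
# Minimal sketch of multistep search: intermediate semantic features act as
# a filter that discards irrelevant images, and low-level feature distance
# ranks only the survivors. All data and names here are illustrative.

def multistep_search(query, database, top_k=3):
    """query/database entries: {'semantics': set_of_labels, 'features': [floats]}."""
    # Step 1: filter -- keep images sharing at least one intermediate concept
    candidates = [img for img in database
                  if img["semantics"] & query["semantics"]]

    # Step 2: rank survivors by Euclidean distance on low-level features
    def dist(img):
        return sum((a - b) ** 2
                   for a, b in zip(img["features"], query["features"])) ** 0.5

    return sorted(candidates, key=dist)[:top_k]

db = [
    {"id": "sunset", "semantics": {"sky", "warm"}, "features": [0.9, 0.1]},
    {"id": "forest", "semantics": {"green"},       "features": [0.2, 0.8]},
    {"id": "beach",  "semantics": {"sky", "sand"}, "features": [0.5, 0.5]},
]
q = {"semantics": {"sky"}, "features": [0.8, 0.2]}
print([img["id"] for img in multistep_search(q, db)])  # -> ['sunset', 'beach']
```

The filtering step is what gives the approach its efficiency: expensive feature comparisons run only over the small candidate set that survives the semantic filter.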
An annotation rule extraction algorithm for image retrieval
- Authors: Chen, Zeng; Hou, Jin; Zhang, Dengsheng; Qin, Xue
- Date: 2012
- Type: Text, Journal article
- Relation: Pattern Recognition Letters Vol. 33, no. 10 (2012), p. 1257-1268
- Full Text: false
- Reviewed:
- Description: Automatic image annotation can be used to facilitate semantic search in large image databases. However, the retrieval performance of existing annotation schemes falls far short of users' expectations. In this paper, we propose a novel method to automatically annotate images using rules generated by support vector machines and decision trees. To obtain the rules, we collect a set of training regions through image segmentation, feature extraction and discretization. We first employ a support vector machine as a preprocessing technique to refine the input training data, and then use it to improve the rules generated by decision tree learning; the preprocessing also deals effectively with similar regions in an image. Moreover, we integrate the original rules with the modified ones to formulate complete and effective annotation rules. The algorithm can translate an unknown image into text, so the proposed system can retrieve images queried by both images and keywords. Experiments on the standard Corel dataset and on images collected from the Web test the accuracy and robustness of the proposed system; results show the proposed algorithm annotates and retrieves images more efficiently than traditional learning algorithms.
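A hedged sketch of the pipeline's spirit, using scikit-learn stand-ins rather than the paper's own refinement procedure: an SVM screens the training regions, samples it cannot classify consistently are dropped, and a decision tree fit on the refined set yields branches that read off as annotation rules:

```python
# Hedged sketch: SVM-based refinement of training data followed by decision
# tree rule extraction. The refinement criterion (drop samples the SVM
# misclassifies) and the toy features are illustrative assumptions, not the
# paper's exact algorithm.

import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier, export_text

def refine_and_extract_rules(X, y, feature_names):
    svm = SVC(kernel="rbf", gamma="scale").fit(X, y)
    keep = svm.predict(X) == y          # keep only consistently classified samples
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(X[keep], y[keep])
    return export_text(tree, feature_names=feature_names)

# Toy discretized region features (e.g. colour and texture bins), one noisy label
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1],
              [5, 5], [5, 6], [6, 5], [6, 6]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1, 1, 1])  # the fourth label is noise
rules = refine_and_extract_rules(X, y, ["colour", "texture"])
print(rules)  # human-readable if/then annotation rules
```

Printing the tree as text mirrors the abstract's goal: the learned model becomes an explicit rule set ("if colour <= t then concept c") that can annotate unseen regions directly.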
Semantic image retrieval using region based inverted file
- Authors: Zhang, Dengsheng; Islam, Md; Lu, Guojun; Hou, Jin
- Date: 2009
- Type: Text, Journal article
- Relation: Journal of Visual Communication and Image Representation Vol. 24, no. 7 (2009), p. 242-249
- Full Text: false
- Reviewed:
- Description: Image retrieval has lagged far behind text retrieval despite more than two decades of intensive research effort. Most of the research on image retrieval over that period has focused on content-based image retrieval using low-level features, while recent work focuses on semantic image retrieval through automatic image annotation. Most semantic image retrieval techniques in the literature, however, treat an image as a bag of features/words and ignore the structural or spatial information in the image. In this paper, we propose a structural image retrieval method based on automatic image annotation and a region-based inverted file. In the proposed system, regions in an image are treated the same way as keywords in a structured text document: semantic concepts are learnt from image data to label image regions as keywords, and a weight is assigned to each keyword according to its spatial position and relationships. As a result, images are indexed and retrieved in the same way as structured documents. Specifically, images are broken down into regions represented by colour, texture and shape features. Region features are then quantized to create visual dictionaries, analogous to monolingual dictionaries such as English or Chinese dictionaries. Next, a semantic dictionary, analogous to a bilingual dictionary such as an English–Chinese dictionary, is learnt to map image regions to semantic concepts. Finally, images are indexed and retrieved using a novel region-based inverted file data structure. Results show the proposed method has a significant advantage over the widely used Bayesian annotation models.
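The region-based inverted file described above can be sketched as postings of (concept, weight) pairs, where the weight stands in for spatial prominence. The weighting scheme and toy data below are illustrative assumptions, not the paper's exact formula:

```python
# Minimal sketch of a region-based inverted file: each image contributes
# (concept, weight) pairs, where the weight reflects the region's spatial
# prominence (e.g. central regions count more). The weights here are
# illustrative assumptions.

from collections import defaultdict

def build_inverted_file(images):
    """images: {image_id: [(concept, weight), ...]} -> {concept: {image_id: weight}}."""
    postings = defaultdict(dict)
    for image_id, regions in images.items():
        for concept, weight in regions:
            postings[concept][image_id] = postings[concept].get(image_id, 0.0) + weight
    return postings

def query(postings, concepts):
    """Score images by the summed weights of their matching concepts."""
    scores = defaultdict(float)
    for concept in concepts:
        for image_id, weight in postings.get(concept, {}).items():
            scores[image_id] += weight
    return sorted(scores.items(), key=lambda kv: -kv[1])

images = {
    "img1": [("sky", 0.75), ("sea", 0.25)],      # sky occupies a central region
    "img2": [("sky", 0.25), ("mountain", 0.75)],
    "img3": [("sea", 0.5)],
}
postings = build_inverted_file(images)
print(query(postings, ["sky", "sea"]))  # img1 ranks first (0.75 + 0.25)
```

Because lookups go concept-by-concept through short posting lists, query cost scales with the number of query concepts rather than the size of the image collection, which is the same efficiency argument that motivates inverted files in text retrieval.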