Automatic image annotation can facilitate semantic search in large image databases, but the retrieval performance of existing annotation schemes falls far short of users’ expectations. In this paper, we propose a novel method that automatically annotates images using rules generated by support vector machines and decision trees. To obtain the rules, we collect a set of training regions through image segmentation, feature extraction, and discretization. We first employ a support vector machine as a preprocessing step to refine the input training data, and then use it to improve the rules generated by decision tree learning. This preprocessing also deals effectively with similar regions within an image. Moreover, we integrate the original rules with the modified ones to formulate complete and effective annotation rules. The algorithm translates an unknown image into text, and the proposed system can retrieve images queried by either images or keywords. Experiments are carried out on the standard Corel dataset and on images collected from the Web to test the accuracy and robustness of the proposed system. The results show that the proposed algorithm annotates and retrieves images more efficiently than traditional learning algorithms.
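The rule-generation pipeline above can be sketched roughly as follows, assuming region features have already been extracted and discretized. In this sketch the SVM acts as a preprocessing filter: training regions that the SVM itself misclassifies are treated as noisy and dropped before the decision tree learns the annotation rules. The synthetic data, class names, and parameter choices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for discretized region features: two concept classes
# (e.g. "sky" = 0, "grass" = 1) with a few mislabeled regions mixed in.
X = rng.normal(loc=np.repeat([[0.0], [2.0]], 100, axis=0),
               scale=0.5, size=(200, 4))
y = np.repeat([0, 1], 100)
y[:5] = 1  # simulate label noise from imperfect segmentation

# Step 1: SVM preprocessing -- keep only regions the SVM agrees with.
svm = SVC(kernel="rbf").fit(X, y)
keep = svm.predict(X) == y
X_clean, y_clean = X[keep], y[keep]

# Step 2: learn annotation rules from the refined training data.
tree = DecisionTreeClassifier(max_depth=3).fit(X_clean, y_clean)

# An unseen region is annotated with the concept the rules predict.
concepts = {0: "sky", 1: "grass"}
new_region = np.array([[2.1, 1.9, 2.0, 2.2]])
print(concepts[int(tree.predict(new_region)[0])])
```

The decision tree's learned splits play the role of the human-readable annotation rules; the SVM pass removes training regions that would otherwise produce spurious branches.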
There has been growing interest in implementing image search engines at the semantic level. However, most existing practical systems, including popular commercial image search engines such as Google and Yahoo!, are either text-based or a simple hybrid of text and visual features. This paper proposes a novel image search system based on automatic image annotation. We develop a technique that learns semantic image concepts from image content and transforms unstructured images into textual documents, so that images are indexed and retrieved in the same way as text. By transforming images into textual documents through machine learning, existing database management systems can manage image content effectively, and image search becomes as efficient as text search. Experiments on both the Corel dataset and a real Web dataset validate the system, and the results are promising. The system suggests a new way of combining text and visual features to achieve semantic image search, and is expected to serve as a re-ranking component for existing Internet image search results.
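The "images as textual documents" idea can be illustrated with a minimal sketch: once each image has been auto-annotated with keywords, a standard text index answers keyword queries. Here a TF-IDF index stands in for whatever text engine a production system would use, and the image names and annotations are made up for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical auto-generated annotations, one "document" per image.
image_docs = {
    "img_001.jpg": "sky beach sea sand",
    "img_002.jpg": "grass tiger forest",
    "img_003.jpg": "sky mountain snow",
}
names = list(image_docs)

# Index the annotation documents exactly like ordinary text.
vectorizer = TfidfVectorizer()
index = vectorizer.fit_transform(image_docs.values())

def search(query, k=2):
    """Rank images by cosine similarity between query and annotations."""
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, index).ravel()
    return [names[i] for i in scores.argsort()[::-1][:k] if scores[i] > 0]

print(search("sky"))
```

Because the index holds only text, any off-the-shelf database or search engine can store and query it, which is what makes image search as efficient as text search in this scheme.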