Integrating object ontology and region semantic template for crime scene investigation image retrieval
- Authors: Liu, Ying , Huang, Yuan , Zhang, Shuai , Zhang, Dengsheng , Ling, Nam
- Date: 2017
- Type: Text , Conference proceedings
- Relation: 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA); Siem Reap, Cambodia; 18th-20th June 2017 p. 149-153
- Full Text: false
- Reviewed:
- Description: Crime Scene Investigation (CSI) image retrieval plays an important role in solving crimes by providing useful clues to the police force. However, little work has been done in this area because researchers have limited access to public data. Tests on real-world CSI images show that existing content-based image retrieval (CBIR) methods do not perform as effectively on CSI image databases as on other general image databases. Hence, it is important to design a CBIR algorithm tuned to CSI image databases. This paper proposes a region-based semantic learning method based on object ontology, which associates image categories with 'objects' in CSI images. Each object corresponds to a pre-defined semantic template (ST), defined as the average color and texture feature of a set of sample regions. In this way, the low-level features of each region in a CSI image can be converted to an 'object' by comparing the region features with the set of pre-defined STs. The 'objects' in an image then categorize the image according to the object ontology. This process is referred to as 'On-Set'. To further improve the retrieval performance of On-Set, a weighting strategy named object-frequency-based weighting (OFW) is designed, inspired by term frequency-inverse document frequency (TF-IDF). In OFW, heavier weights are assigned to regions that appear more often in one class and less often in other classes. Experimental results on real-world image data demonstrate the effectiveness of the proposed method for CSI image database retrieval.
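The two core ideas of this abstract can be sketched in a few lines: mapping a region's low-level features to the nearest pre-defined semantic template, and a TF-IDF-style object weight. The template labels and feature values below are illustrative assumptions, not taken from the paper.

```python
import math

# Hypothetical semantic templates: object label -> average (color, texture)
# feature vector of sample regions. Names and values are illustrative only.
TEMPLATES = {
    "knife":  [0.2, 0.1, 0.7],
    "blood":  [0.9, 0.1, 0.2],
    "carpet": [0.4, 0.5, 0.3],
}

def nearest_object(region_feature):
    """Convert a region's low-level feature vector into an 'object' label
    by finding the closest pre-defined semantic template (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda label: dist(TEMPLATES[label], region_feature))

def ofw_weight(obj, class_images, all_classes):
    """TF-IDF-style weight: heavier for objects frequent in one class and
    rare across other classes.

    class_images: images (each a list of object labels) of the target class.
    all_classes:  dict mapping class name -> list of such images.
    """
    # 'Term frequency': share of regions in the target class that are obj.
    tf = sum(img.count(obj) for img in class_images) / max(
        1, sum(len(img) for img in class_images))
    # 'Inverse document frequency': penalize objects present in many classes.
    n_with_obj = sum(1 for imgs in all_classes.values()
                     if any(obj in img for img in imgs))
    idf = math.log(len(all_classes) / max(1, n_with_obj)) + 1.0  # smoothed
    return tf * idf
```

This treats each image as a 'document' of object labels, in direct analogy with TF-IDF; the paper's exact weighting formula may differ.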
Rotation invariant curvelet features for region based image retrieval
- Authors: Zhang, Dengsheng , Islam, Md , Lu, Guojun , Sumana, Ishrat
- Date: 2011
- Type: Text , Journal article
- Relation: International Journal of Computer Vision Vol. 98, no. 2 (2011), p. 187-201
- Full Text: false
- Reviewed:
- Description: There has been much interest in, and a large amount of research on, content-based image retrieval (CBIR) in recent years due to the ever-increasing number of digital images. Texture features play a key role in CBIR. Many texture features exist in the literature; however, most of them are neither rotation invariant nor robust to scale and other variations. Texture features based on Gabor filters have been shown to have significant advantages over other methods, and they are adopted by MPEG-7 as one of the texture descriptors for image retrieval. In this paper, we propose rotation-invariant curvelet features for texture representation. With systematic analysis and rigorous experiments, we show that the proposed curvelet texture features significantly outperform the widely used Gabor texture features. A novel region padding method is also proposed to apply the curvelet transform to region-based image retrieval. Retrieval results on standard image databases show that curvelet features are promising for both texture and region representation.
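A common way to make orientation-binned texture features (curvelet or Gabor subband energies) rotation invariant is to cyclically shift the orientation vector so the dominant orientation comes first; an in-plane rotation of the image then only permutes the bins and the normalized vectors agree. This is a generic normalization sketch, not the paper's exact curvelet construction.

```python
def rotation_invariant(orientation_energies):
    """Cyclically shift an orientation-binned energy vector so that the
    dominant orientation occupies position 0. Features of two images that
    differ by an in-plane rotation then coincide (up to bin resolution).
    Generic sketch; the paper's curvelet features may normalize differently."""
    k = orientation_energies.index(max(orientation_energies))
    return orientation_energies[k:] + orientation_energies[:k]
```

For example, `[1, 5, 2, 3]` and its cyclic rotation `[3, 1, 5, 2]` both normalize to `[5, 2, 3, 1]`.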
Corners-based composite descriptor for shapes
- Authors: Sajjanhar, Atul , Lu, Guojun , Zhang, Dengsheng , Zhou, Wanlei
- Date: 2008
- Type: Text , Conference paper
- Relation: Proceedings of the First International Congress on Image and Signal Processing CISP2008 p. 714-718
- Full Text: false
- Reviewed:
- Description: In this paper, a composite descriptor for shape retrieval is proposed. The composite descriptor is obtained from corner points and the shape region. In an earlier paper, we proposed a composite descriptor based on the shape region and the shape contour; however, that descriptor was not effective for all perspective and geometric transformations. Hence, we modify the composite descriptor by replacing the contour features with corner-point features. The proposed descriptor is obtained from the Generic Fourier Descriptor (GFD) of the shape region and the GFD of the corner points. We study the performance of the proposed composite descriptor, evaluating it using Item S8 within the MPEG-7 Still Images Content Set. Experimental results show that the proposed descriptor is effective.
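One plausible way to combine the two GFD components into a single composite descriptor is to L2-normalize each part before concatenating, so that neither the region part nor the corner-point part dominates the distance computation. The combination scheme below is an assumption for illustration; the paper derives both parts from the Generic Fourier Descriptor but does not state this exact fusion.

```python
import math

def composite_descriptor(region_feats, corner_feats):
    """Fuse a region-based descriptor and a corner-point descriptor by
    L2-normalizing each part and concatenating. Hypothetical fusion sketch;
    in the paper both parts are GFD vectors of the shape."""
    def l2norm(v):
        n = math.sqrt(sum(x * x for x in v)) or 1.0  # guard all-zero vectors
        return [x / n for x in v]
    return l2norm(region_feats) + l2norm(corner_feats)
```

Matching two shapes then reduces to a single distance over the concatenated vector.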
Region-based image retrieval with high-level semantics using decision tree learning
- Authors: Liu, Ying , Zhang, Dengsheng , Lu, Guojun
- Date: 2008
- Type: Text , Journal article
- Relation: Pattern Recognition Vol. 41, no. 8 (2008), p. 2554-2570
- Full Text: false
- Reviewed:
- Description: Semantic-based image retrieval has attracted great interest in recent years. This paper proposes a region-based image retrieval system with high-level semantic learning. The key features of the system are: (1) It supports both query by keyword and query by region of interest. The system segments an image into different regions and extracts the low-level features of each region. From these features, high-level concepts are obtained using a proposed decision-tree-based learning algorithm named DT-ST. During retrieval, a set of images whose semantic concepts match the query is returned. Experiments on a standard real-world image database confirm that the proposed system significantly improves retrieval performance compared with a conventional content-based image retrieval system. (2) The proposed decision tree induction method DT-ST for image semantic learning differs from other decision tree induction algorithms in that it makes use of semantic templates to discretize continuous-valued region features, avoiding the difficult image feature discretization problem. Furthermore, it introduces a hybrid tree simplification method to handle the noise and tree fragmentation problems, thereby improving the classification performance of the tree. Experimental results indicate that DT-ST outperforms two well-established decision tree induction algorithms, ID3 and C4.5, in image semantic learning.
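Once region features have been discretized into semantic-template labels, tree induction reduces to choosing categorical splits. The one-level sketch below picks the object label whose presence/absence best separates the image classes by information gain, which is the basic split criterion ID3-family trees use; DT-ST's ST-based discretization and hybrid tree simplification are omitted, and the example images/classes are invented for illustration.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_split_object(images, classes):
    """Pick the ST-derived object label whose presence/absence yields the
    highest information gain over the image classes. A one-level sketch of
    decision-tree induction over discretized (categorical) region features."""
    objects = sorted({o for img in images for o in img})
    base = entropy(classes)
    def gain(obj):
        with_o  = [c for img, c in zip(images, classes) if obj in img]
        without = [c for img, c in zip(images, classes) if obj not in img]
        remainder = sum(len(s) / len(classes) * entropy(s)
                        for s in (with_o, without) if s)
        return base - remainder
    return max(objects, key=gain)
```

For instance, with outdoor images containing 'grass' and beach images containing 'sand', either label splits the two classes perfectly, while a shared label like 'sky' has zero gain.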