Improved kernel descriptors for effective and efficient image classification
- Authors: Karmakar, Priyabrata; Teng, Shyh; Zhang, Dengsheng; Liu, Ying; Lu, Guojun
- Date: 2017
- Type: Text, Conference proceedings
- Relation: 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA); Sydney, Australia; 29th November-1st December 2017 p. 195-202
- Full Text: false
- Reviewed:
- Description: Kernel descriptors have been shown to outperform existing histogram-based local descriptors because they are extracted from match kernels, which measure similarities between image patches using different pixel attributes (gradient, colour or LBP pattern). The extraction of kernel descriptors does not require coarse quantization of pixel attributes; instead, each pixel participates equally in the matching between two image patches. In this paper, by leveraging the kernel properties, we propose an approach which simultaneously increases the effectiveness and efficiency of the existing kernel descriptors. Specifically, this is done by improving the similarity measure between two different patches with respect to any pixel attribute. The proposed kernel descriptors are more discriminative, faster to extract, and have much lower dimensions. Our experiments on the Scene Categories and Caltech 101 databases show that our proposed approach outperforms the existing kernel descriptors.
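As a rough illustration of the match-kernel idea described above (not the authors' implementation; the kernel parameters and patch encoding here are assumptions), a gradient match kernel sums an attribute similarity and a position similarity over every pixel pair, weighted by gradient magnitudes, so no orientation histogram binning is needed:

```python
import numpy as np

def gaussian_kernel(x, y, gamma):
    """RBF similarity between two attribute vectors."""
    return float(np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2)))

def gradient_match_kernel(patch_p, patch_q, gamma_o=5.0, gamma_z=3.0):
    """Toy gradient match kernel between two image patches.

    Each patch is a list of (magnitude, orientation_vector, position)
    tuples. Every pixel pair contributes to the similarity, weighted
    by the gradient magnitudes -- no coarse quantization of the
    orientations into histogram bins is required.
    """
    total = 0.0
    for m_p, o_p, z_p in patch_p:
        for m_q, o_q, z_q in patch_q:
            total += (m_p * m_q
                      * gaussian_kernel(o_p, o_q, gamma_o)   # orientation kernel
                      * gaussian_kernel(z_p, z_q, gamma_z))  # position kernel
    return total
```

In the full KDES framework this kernel is then approximated with a finite set of basis vectors to yield a fixed-length descriptor; the sketch only shows the patch-level similarity.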
A kernel-based approach for content-based image retrieval
- Authors: Karmakar, Priyabrata; Teng, Shyh; Lu, Guojun; Zhang, Dengsheng
- Date: 2018
- Type: Text, Conference proceedings, Conference paper
- Relation: 2018 International Conference on Image and Vision Computing New Zealand; Auckland, New Zealand; 19th-21st November 2018 p. 1-6
- Full Text: false
- Reviewed:
- Description: Content-based image retrieval (CBIR) is a popular approach to retrieving images based on a query. In CBIR, retrieval is performed based on the properties of image contents (e.g. gradient, shape, color, texture), which are generally encoded into image descriptors. Among the various image descriptors, histogram-based descriptors are very popular; however, they suffer from the limitation of coarse quantization. In contrast, kernel descriptors (KDES) have proven more effective than histogram-based descriptors in other applications, e.g. image classification. This is because, in the KDES framework, instead of quantizing pixel attributes, each pixel participates equally in the similarity measurement between two images. In this paper, we propose an approach to using the conventional KDES and its improved version for CBIR. In addition, we provide a detailed insight into the effectiveness of the improved kernel descriptors. Finally, our experimental results show that kernel descriptors are significantly more effective than histogram-based descriptors in CBIR.
Combining pyramid match kernel and spatial pyramid for image classification
- Authors: Karmakar, Priyabrata; Teng, Shyh; Zhang, Dengsheng; Lu, Guojun; Liu, Ying
- Date: 2016
- Type: Text
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA); Gold Coast, Australia; 30th November-2nd December 2016 p. 486-493
- Full Text: false
- Reviewed:
- Description: This paper proposes a new approach for image classification by combining the pyramid match kernel (PMK) with the spatial pyramid. Unlike the conventional spatial pyramid matching (SPM) approach, which uses only a single-resolution feature vector to represent an image, we use a multi-resolution feature vector to represent an image for SPM. We then calculate the match scores at each resolution of the SPM representation and finally compute the matching between two images by applying the concept of PMK to the match scores obtained from the multiple resolutions. Our experimental results show that the proposed combined pyramid matching achieves a significant improvement in classification performance.
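As a simplified sketch of the pyramid-match weighting used in PMK (not the paper's exact formulation, which combines SPM match scores), matches newly found at a coarser level are down-weighted by a factor of two per level:

```python
def pyramid_match(scores):
    """Combine per-level match scores with pyramid-match-style weights.

    `scores` lists match scores from the coarsest to the finest level;
    coarser levels are assumed to yield at least as many matches as
    finer ones. Matches newly found only at a coarser level are
    down-weighted by a factor of 2 per level.
    """
    finest = len(scores) - 1
    sim = scores[finest]                         # finest-level matches count fully
    for level in range(finest):
        new = scores[level] - scores[level + 1]  # matches found only at this level
        sim += new / (2 ** (finest - level))
    return sim
```

For example, per-level scores of 4, 2 and 1 (coarse to fine) combine to 1 + 1/2 + 2/4 = 2.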
An enhancement to the spatial pyramid matching for image classification and retrieval
- Authors: Karmakar, Priyabrata; Teng, Shyh; Lu, Guojun; Zhang, Dengsheng
- Date: 2020
- Type: Text, Journal article
- Relation: IEEE Access Vol. 8 (2020), p. 22463-22472
- Full Text:
- Reviewed:
- Description: Spatial pyramid matching (SPM) is one of the most widely used methods for incorporating spatial information into an image representation. Despite its effectiveness, the traditional SPM is not rotation invariant. A rotation-invariant SPM has been proposed in the literature, but it has several limitations regarding effectiveness. In this paper, we investigate how to make SPM robust to rotation by addressing those limitations. In an SPM framework, an image is divided into an increasing number of partitions at successive pyramid levels. Our main focus is on partitioning images in such a way that the resulting structure can deal with image-level rotations; to that end, we investigate three concentric-ring partitioning schemes. Apart from image partitioning, another important component of the SPM framework is the weight function, which apportions the contribution of each pyramid level to the final matching between two images. We propose a new weight function that is suitable for the rotation-invariant SPM structure. Experiments on image classification and retrieval are performed on five image databases. The detailed result analysis shows that we are successful in enhancing the effectiveness of SPM for image classification and retrieval.
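A minimal sketch of why concentric rings give rotation robustness (the paper's three partitioning schemes may differ in how ring boundaries are chosen): a pixel's ring label depends only on its distance from the image centre, which a rotation about the centre does not change.

```python
import numpy as np

def ring_labels(h, w, n_rings):
    """Assign each pixel of an h x w image to one of n_rings concentric
    rings around the image centre. Rotating the image about its centre
    leaves every pixel's distance to the centre -- and hence its ring
    label -- unchanged, unlike a rectangular grid partition."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    # Split [0, r_max] into n_rings equal-width bins (one possible scheme).
    labels = (r / (r.max() + 1e-9) * n_rings).astype(int)
    return np.minimum(labels, n_rings - 1)
```

Per-ring histograms of visual words can then be concatenated level by level, exactly as in a standard SPM pipeline, with the new weight function applied per level.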
Rotation invariant spatial pyramid matching for image classification
- Authors: Karmakar, Priyabrata; Teng, Shyh; Lu, Guojun; Zhang, Dengsheng
- Date: 2015
- Type: Text, Conference proceedings
- Full Text: false
- Description: This paper proposes a new spatial pyramid representation approach for image classification. Unlike the conventional spatial pyramid, the proposed method is invariant to rotation changes in the images. The method works by partitioning an image into concentric rectangles and organizing them into a pyramid. Each pyramidal region is then represented using a histogram of visual words. Our experimental results show that our proposed method significantly outperforms the conventional method.
Improved Tamura features for image classification using kernel based descriptors
- Authors: Karmakar, Priyabrata; Teng, Shyh; Zhang, Dengsheng; Liu, Ying; Lu, Guojun
- Date: 2017
- Type: Text, Conference proceedings
- Relation: 2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA); Sydney, Australia; 29th November-1st December 2017 p. 461-467
- Full Text: false
- Reviewed:
- Description: Tamura features are based on human visual perception and have great potential for image representation. However, conventional Tamura features only work well on homogeneous texture images and perform poorly on generic images. Many researchers have therefore attempted to improve Tamura features, and most of the improvements rely on histogram-based representations. Kernel descriptors have been shown to outperform existing histogram-based local features because they do not require coarse quantization of pixel attributes; instead, in the kernel descriptor framework, each pixel participates equally in the matching between two image patches. In this paper, we propose a set of kernel descriptors based on Tamura features. Additionally, the proposed descriptors are invariant to local rotations. Experimental results show that our proposed approach significantly outperforms the conventional Tamura features.
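For reference, one of the classic Tamura features, contrast, is defined as sigma / alpha4^(1/4), where sigma is the grey-level standard deviation and alpha4 = mu4 / sigma^4 is the kurtosis. A minimal implementation (the global, conventional form, not the kernelised per-pixel attributes proposed in the paper):

```python
import numpy as np

def tamura_contrast(gray):
    """Conventional Tamura contrast of a grey-level image:
    sigma / kurtosis**(1/4), with kurtosis = mu4 / sigma**4."""
    g = np.asarray(gray, dtype=float).ravel()
    mu = g.mean()
    sigma2 = ((g - mu) ** 2).mean()      # variance
    if sigma2 == 0:
        return 0.0                       # flat image has no contrast
    mu4 = ((g - mu) ** 4).mean()         # fourth central moment
    alpha4 = mu4 / (sigma2 ** 2)         # kurtosis
    return np.sqrt(sigma2) / (alpha4 ** 0.25)
```

Such a single global statistic is exactly what fails on non-homogeneous images; the paper's contribution is to turn Tamura-style measurements into per-pixel attributes inside match kernels instead.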
A novel fusion approach in the extraction of kernel descriptor with improved effectiveness and efficiency
- Authors: Karmakar, Priyabrata; Teng, Shyh; Lu, Guojun; Zhang, Dengsheng
- Date: 2021
- Type: Text, Journal article
- Relation: Multimedia Tools and Applications Vol. 80, no. 10 (Apr 2021), p. 14545-14564
- Full Text:
- Reviewed:
- Description: Image representation using feature descriptors is crucial. A number of histogram-based descriptors are widely used for this purpose; however, they have certain limitations, and kernel descriptors (KDES) have been shown to overcome them. Moreover, a combination of more than one KDES performs better than an individual KDES. Conventionally, KDES fusion is performed by concatenating the gradient, colour and shape descriptors after they have been extracted. This approach has limitations in terms of both efficiency and effectiveness. In this paper, we propose a novel approach that fuses different image features before descriptor extraction, resulting in a compact descriptor which is both efficient and effective. In addition, we investigate the effect on the proposed descriptor when texture-based features are fused along with the conventionally used features. Our proposed descriptor is evaluated on two publicly available image databases and shown to provide outstanding performance.
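A schematic contrast between the two fusion strategies (function names, shapes and attribute layout are illustrative assumptions, not the paper's implementation): late fusion concatenates already-extracted descriptors, so dimensionality grows with every descriptor added, while fusing per-pixel attributes before extraction yields a single compact descriptor.

```python
import numpy as np

def late_fusion(descriptors):
    """Conventional fusion: extract each KDES separately, then
    concatenate them. The final dimensionality is the sum of the
    individual descriptor dimensions."""
    return np.concatenate([np.asarray(d) for d in descriptors])

def early_fusion(attribute_maps):
    """Pre-extraction fusion (the direction proposed in the paper):
    stack the per-pixel attribute maps (e.g. gradient, colour, shape)
    into one attribute vector per pixel, so that only one descriptor
    is extracted afterwards."""
    return np.concatenate([np.asarray(a) for a in attribute_maps], axis=-1)
```

With three 200-dimensional descriptors, late fusion yields a 600-dimensional vector, whereas early fusion produces one fused attribute map from which a single, smaller descriptor can be computed.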