Kernel descriptors have been shown to outperform existing histogram-based local descriptors because they are derived from match kernels that measure the similarity between image patches using different pixel attributes (gradient, colour, or LBP pattern). Extracting kernel descriptors does not require coarse quantization of pixel attributes; instead, every pixel participates equally in the matching between two image patches. In this paper, by leveraging the kernel properties, we propose a novel approach that simultaneously increases the effectiveness and the efficiency of existing kernel descriptors. Specifically, we improve the similarity measure between two patches for any pixel attribute. The proposed kernel descriptors are more discriminative, faster to extract, and of much lower dimensionality. Our experiments on the Scene Categories and Caltech 101 databases show that the proposed approach outperforms the existing kernel descriptors.
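To make the match-kernel idea concrete, the following is a minimal sketch of a gradient match kernel in which every pixel pair between two patches contributes to the similarity, so no quantization of the attribute is needed. The function names, patch representation, and parameter values are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def gaussian_kernel(x, y, gamma):
    """Gaussian (RBF) kernel between two attribute vectors."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

def gradient_match_kernel(patch_p, patch_q, gamma_o=5.0, gamma_p=3.0):
    """Match-kernel similarity between two image patches.

    Each patch is a list of (magnitude, orientation_vector, position)
    triples, one per pixel. Every pixel pair contributes, weighted by
    gradient magnitudes and by Gaussian kernels over orientation and
    position; gamma values here are illustrative, not tuned.
    """
    total = 0.0
    for m_p, o_p, pos_p in patch_p:
        for m_q, o_q, pos_q in patch_q:
            total += (m_p * m_q
                      * gaussian_kernel(o_p, o_q, gamma_o)       # orientation kernel
                      * gaussian_kernel(pos_p, pos_q, gamma_p))  # position kernel
    return total
```

Because the kernel is symmetric and built from products of positive-definite Gaussian kernels, the resulting similarity is itself a valid positive-definite match kernel over patches.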
Tamura features are based on human visual perception and have great potential for image representation. Conventional Tamura features work only on homogeneous texture images and perform poorly on generic images. Many researchers have therefore attempted to improve Tamura features, and most of the improvements rely on histogram-based representations. Kernel descriptors have been shown to outperform existing histogram-based local features because they do not require coarse quantization of pixel attributes; instead, in the kernel descriptor framework, every pixel participates equally in the matching between two image patches. In this paper, we propose a set of kernel descriptors based on Tamura features. The proposed descriptors are additionally invariant to local rotations. Experimental results show that our approach significantly outperforms the conventional Tamura features.
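For readers unfamiliar with Tamura features, the conventional Tamura contrast statistic can be sketched as below. This is the standard global form, contrast = sigma / kurtosis^(1/4); it is shown only as background for the descriptors discussed above, and is not the paper's proposed method.

```python
import numpy as np

def tamura_contrast(image):
    """Conventional (global) Tamura contrast of a grayscale image.

    contrast = sigma / kurtosis^(1/4), where kurtosis = mu4 / sigma^4.
    Histogram-based variants compute this over local windows instead of
    the whole image.
    """
    pixels = np.asarray(image, dtype=np.float64).ravel()
    mu = pixels.mean()
    sigma2 = pixels.var()
    if sigma2 == 0.0:
        return 0.0  # a flat image has no contrast
    mu4 = np.mean((pixels - mu) ** 4)   # fourth central moment
    kurtosis = mu4 / sigma2 ** 2
    return np.sqrt(sigma2) / kurtosis ** 0.25
```

Being a single global statistic, this form describes homogeneous textures well but loses spatial detail on generic images, which is the limitation the abstract refers to.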