Enhancing the effectiveness of local descriptor based image matching
- Authors: Hossain, Md Tahmid , Teng, Shyh Wei , Zhang, Dengsheng , Lim, Suryani , Lu, Guojun
- Date: 2018
- Type: Text , Conference proceedings , Conference paper
- Relation: 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018; Canberra, Australia; 10th-13th December 2018 p. 1-8
- Full Text: false
- Reviewed:
- Description: Image registration has received great attention from researchers over the last few decades. SIFT (Scale Invariant Feature Transform), a local descriptor-based technique, is widely used for registering and matching images. To establish correspondences between images, SIFT uses a Euclidean distance ratio metric. However, this approach produces many incorrect matches, and eliminating these inaccurate matches has remained a challenge. Various methods have been proposed to mitigate this problem. In this paper, we propose a scale- and orientation-harmony-based pruning method that improves the image matching process by successfully eliminating incorrect SIFT descriptor matches. Moreover, our technique can predict the image transformation parameters based on a novel adaptive clustering method with much higher matching accuracy. Our experimental results show that the proposed method achieves, on average, approximately 16% and 10% higher matching accuracy compared to traditional SIFT and a contemporary method, respectively.
- Description: 2018 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2018
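The Euclidean distance ratio metric mentioned in the abstract is commonly implemented as Lowe's ratio test, which is also the step that produces the incorrect matches the paper targets. A minimal NumPy sketch (toy descriptors and the `ratio` value are illustrative; this is the baseline test, not the paper's pruning method):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Lowe's ratio test: keep a match only when the nearest neighbour
    in desc_b is clearly closer than the second-nearest neighbour."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        j, k = np.argsort(dists)[:2]                # two nearest neighbours
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches

# Toy descriptors: row 0 of A is close to row 1 of B and far from the rest.
A = np.array([[1.0, 0.0]])
B = np.array([[5.0, 5.0], [1.1, 0.0], [9.0, 9.0]])
print(ratio_test_matches(A, B))  # [(0, 1)]
```

Ambiguous matches (two near-equal nearest neighbours) fail the ratio check and are discarded, which is exactly where pruning methods like the one proposed here come in.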
Distortion robust image classification using deep convolutional neural network with discrete cosine transform
- Authors: Hossain, Md Tahmid , Teng, Shyh Wei , Zhang, Dengsheng , Lim, Suryani , Lu, Guojun
- Date: 2019
- Type: Text , Conference proceedings
- Relation: 2019 IEEE International Conference on Image Processing (ICIP);Taipei, Taiwan; 22-25 Sept, 2019 p. 659-663
- Full Text: false
- Reviewed:
- Description: Convolutional Neural Networks (CNNs) are highly effective for image classification. However, they remain vulnerable to image distortion: even a small amount of noise or blur can severely hamper their performance. Most work in the literature strives to mitigate this problem simply by fine-tuning a pre-trained CNN on mutually exclusive sets, or a union set, of distorted training data. This iterative fine-tuning process with all known types of distortion is exhaustive, and the network struggles to handle unseen distortions. In this work, we propose the distortion-robust DCT-Net, a Discrete Cosine Transform based module integrated into a deep network built on top of VGG16 [1]. Unlike other works in the literature, DCT-Net is "blind" to the distortion type and level in an image during both training and testing. DCT-Net is trained only once and can be applied in more generic situations without further retraining. We also extend the idea of dropout and present a training-adaptive version of it. We evaluate our proposed DCT-Net on a number of benchmark datasets. Our experimental results show that, once trained, DCT-Net not only generalizes well to a variety of unseen distortions but also outperforms other comparable networks in the literature.
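As background to the abstract above, a 2D Discrete Cosine Transform decomposes an image into frequency coefficients, with low frequencies concentrated in the top-left corner. A minimal SciPy sketch (the `dct_lowpass` helper and `keep` parameter are illustrative, not the DCT-Net module itself):

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_lowpass(img, keep=8):
    """Keep only the lowest `keep` x `keep` DCT coefficients
    (the top-left corner holds the low frequencies) and invert."""
    coeffs = dctn(img, norm='ortho')
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return idctn(coeffs * mask, norm='ortho')

img = np.random.rand(32, 32)
smooth = dct_lowpass(img, keep=4)
print(smooth.shape)  # (32, 32)
```

Keeping all coefficients (`keep=32` here) reconstructs the image exactly; discarding high-frequency coefficients suppresses the noise-like content that HF distortions add.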
BackNet: An enhanced backbone network for accurate detection of objects with large scale variations
- Authors: Hossain, Md Tahmid , Teng, Shyh Wei , Lu, Guojun
- Date: 2019
- Type: Text , Book chapter
- Relation: Image and Video Technology. PSIVT 2019 p. 52-64
- Full Text: false
- Reviewed:
- Description: Deep Convolutional Neural Networks (CNNs) have driven significant progress in the field of computer vision, including object detection and classification. Two-stage detectors like Faster RCNN and its variants are found to be more accurate than their one-stage counterparts. Faster RCNN combines an ImageNet-pretrained backbone network (e.g., VGG16) with a Region Proposal Network (RPN) for object detection. Although Faster RCNN performs well on medium- and large-scale objects, detecting smaller ones with high accuracy while maintaining stable performance on larger objects remains a challenging task. In this work, we focus on designing a robust backbone network for Faster RCNN that is capable of detecting objects with large variations in scale. Considering the difficulties posed by small objects, our aim is to design a backbone network that allows signals extracted from small objects to propagate right through to the deepest layers of the network. With this motivation, we propose a robust network, BackNet, which can be integrated as a backbone into any two-stage detector. We evaluate the performance of BackNet-Faster RCNN on the MS COCO dataset and show that the proposed method outperforms five contemporary methods.
Towards robust convolutional neural networks in challenging environments
- Authors: Hossain, Md Tahmid
- Date: 2021
- Type: Text , Thesis , PhD
- Full Text:
- Description: Image classification is one of the fundamental tasks in the field of computer vision. Although Artificial Neural Networks (ANNs) showed a lot of promise in this field, the lack of efficient computer hardware subdued their potential to a great extent. In the early 2000s, advances in hardware coupled with better network design saw the dramatic rise of the Convolutional Neural Network (CNN). Deep CNNs pushed the State-of-The-Art (SOTA) in a number of vision tasks, including image classification, object detection, and segmentation. Presently, CNNs dominate these tasks. Although CNNs exhibit impressive classification performance on clean images, they are vulnerable to distortions, such as noise and blur. Fine-tuning a pre-trained CNN on mutually exclusive sets, or a union set, of distortions is a brute-force solution. This iterative fine-tuning process with all known types of distortion is, however, exhaustive, and the network struggles to handle unseen distortions. CNNs are also vulnerable to image translation or shift, partly due to common Down-Sampling (DS) layers, e.g., max-pooling and strided convolution. These operations violate the Nyquist sampling rate and cause aliasing. The textbook solution is low-pass filtering (blurring) before down-sampling, which can benefit deep networks as well. Even so, non-linearity units, such as ReLU, often re-introduce the problem, suggesting that blurring alone may not suffice. Another important but under-explored issue for CNNs is unknown or Open Set Recognition (OSR). CNNs are commonly designed for closed-set arrangements, where test instances belong only to 'Known Known' (KK) classes used in training. As such, they predict a class label for a test sample based on the distribution of the KK classes. However, when used under the OSR setup (where an input may belong to an 'Unknown Unknown' or UU class), such a network will always classify a test instance as one of the KK classes even if it is from a UU class.
Historically, CNNs have struggled to detect objects in images with large differences in scale, especially small objects. This is because the DS layers inside a CNN often progressively wipe out the signal from small objects, leaving the final layers with no signature from these objects and leading to degraded performance. In this work, we propose solutions to the above four problems. First, we improve CNN robustness against distortion by proposing DCT-based augmentation, adaptive regularisation, and noise-suppressing Activation Functions (AFs). Second, to ensure further performance gains and robustness to image transformations, we introduce anti-aliasing properties inside the AF and propose a novel DS method called blurpool. Third, to address the OSR problem, we propose a novel training paradigm that ensures detection of UU classes and accurate classification of the KK classes. Finally, we introduce a novel CNN that enables a deep detector to identify small objects with high precision and recall. We evaluate our methods on a number of benchmark datasets and demonstrate that they outperform contemporary methods in the respective problem set-ups.
- Description: Doctor of Philosophy
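The closed-set failure mode described in the thesis abstract is easy to illustrate: a softmax classifier always returns one of the known classes, no matter how unfamiliar the input. A toy sketch of that behaviour plus a simple confidence-threshold baseline for rejecting unknowns (this is a common baseline, not the thesis's proposed training paradigm):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def predict_open_set(logits, threshold=0.9):
    """Closed-set softmax always picks a Known Known (KK) class;
    a simple open-set baseline rejects low-confidence inputs as unknown."""
    p = softmax(logits)
    k = int(np.argmax(p))
    return k if p[k] >= threshold else -1  # -1 flags an Unknown Unknown (UU)

print(predict_open_set(np.array([9.0, 0.1, 0.2])))  # confident -> class 0
print(predict_open_set(np.array([1.0, 0.9, 1.1])))  # ambiguous -> -1
```

Without the threshold, the second input would still be assigned a KK label, which is precisely the OSR problem the thesis addresses with a dedicated training paradigm.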
Robust image classification using a low-pass activation function and DCT augmentation
- Authors: Hossain, Md Tahmid , Teng, Shyh Wei , Sohel, Ferdous , Lu, Guojun
- Date: 2021
- Type: Text , Journal article
- Relation: IEEE Access Vol. 9 (2021), p. 86460-86474
- Full Text:
- Reviewed:
- Description: Convolutional Neural Networks' (CNNs') performance disparity on clean and corrupted datasets has recently come under scrutiny. In this work, we analyse common corruptions in the frequency domain, i.e., High-Frequency corruptions (HFc, e.g., noise) and Low-Frequency corruptions (LFc, e.g., blur). Although a simple solution to HFc is low-pass filtering, ReLU, a widely used Activation Function (AF), does not have any filtering mechanism. In this work, we instill low-pass filtering into the AF (LP-ReLU) to improve robustness against HFc. To deal with LFc, we complement LP-ReLU with Discrete Cosine Transform based augmentation. LP-ReLU, coupled with DCT augmentation, enables a deep network to tackle the entire spectrum of corruption. We use CIFAR-10-C and Tiny ImageNet-C for evaluation and demonstrate improvements of 5% and 7.3% in accuracy, respectively, compared to the State-Of-The-Art (SOTA). We further evaluate our method's stability on a variety of perturbations in CIFAR-10-P and Tiny ImageNet-P, achieving new SOTA in these experiments as well. To further strengthen our understanding of CNNs' lack of robustness, a decision space visualisation process is proposed and presented in this work.
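The idea of instilling low-pass filtering into the activation can be sketched in one dimension, assuming a plain ReLU followed by a 3-tap binomial filter; the actual LP-ReLU design in the paper may differ:

```python
import numpy as np

def lp_relu_1d(x, kernel=(0.25, 0.5, 0.25)):
    """Hedged sketch of a low-pass activation: standard ReLU
    rectification followed by a small binomial smoothing filter."""
    rect = np.maximum(x, 0.0)                      # standard ReLU
    return np.convolve(rect, kernel, mode='same')  # 3-tap low-pass filter

x = np.array([0.0, 1.0, -1.0, 1.0, 0.0])  # high-frequency pattern
y = lp_relu_1d(x)
print(y)  # [0.25 0.5  0.5  0.5  0.25]
```

The rectified signal `[0, 1, 0, 1, 0]` alternates at the highest representable frequency; the filter flattens it, which is the kind of high-frequency suppression that helps against HFc such as noise.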
Anti-aliasing deep image classifiers using novel depth adaptive blurring and activation function
- Authors: Hossain, Md Tahmid , Teng, Shyh Wei , Lu, Guojun , Rahman, Mohammad Arifur , Sohel, Ferdous
- Date: 2023
- Type: Text , Journal article
- Relation: Neurocomputing Vol. 536 (2023), p. 164-174
- Full Text: false
- Reviewed:
- Description: Deep convolutional networks are vulnerable to image translation or shift, partly due to common down-sampling layers, e.g., max-pooling and strided convolution. These operations violate the Nyquist sampling rate and cause aliasing. The textbook solution is low-pass filtering (blurring) before down-sampling, which can benefit deep networks as well. Even so, non-linearity units, such as ReLU, often re-introduce the problem, suggesting that blurring alone may not suffice. In this work, first, we analyse deep features with the Fourier transform and show that Depth Adaptive Blurring is more effective than monotonic blurring. To this end, we propose a novel Depth Adaptive Blur-pool (DAB-pool) module to replace existing down-sampling methods. Second, we introduce a novel activation function with a built-in low-pass filter, as an additional measure, to keep the problem from reappearing. From experiments, we observe generalisation to other forms of transformations and corruptions as well, e.g., rotation, scale, and noise. We evaluate our method under three challenging settings: (1) a variety of image translations; (2) adversarial attacks – both
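The textbook anti-aliasing step referenced above (blur before down-sampling) can be sketched in one dimension; the 3-tap binomial kernel is illustrative, and this is the plain blur-pool baseline rather than the proposed depth-adaptive DAB-pool:

```python
import numpy as np

def blur_downsample_1d(x):
    """Textbook anti-aliasing: low-pass filter the signal before
    stride-2 subsampling, instead of naive subsampling x[::2]."""
    kernel = np.array([0.25, 0.5, 0.25])           # binomial low-pass filter
    blurred = np.convolve(x, kernel, mode='same')
    return blurred[::2]                            # stride-2 down-sampling

# Naive subsampling of an alternating signal aliases badly: a one-pixel
# shift flips the result from all zeros to all ones.
x = np.array([0., 1., 0., 1., 0., 1., 0., 1.])
print(x[::2])                  # naive: [0. 0. 0. 0.]
print(blur_downsample_1d(x))   # stable: [0.25 0.5  0.5  0.5 ]
```

Blurring first removes the above-Nyquist content, so the down-sampled output stays nearly unchanged under small shifts, which is the shift-robustness property the paper builds on.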