A robust forgery detection method for copy-move and splicing attacks in images
- Authors: Islam, Mohammad; Karmakar, Gour; Kamruzzaman, Joarder; Murshed, Manzur
- Date: 2020
- Type: Text, Journal article
- Relation: Electronics Vol. 9, no. 9 (2020), p. 1-22
- Description: Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. The easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrime to rise. While several models have been proposed in the literature for detecting these attacks, their robustness has not been investigated when (i) only a small number of tampered images are available for model building or (ii) images from IoT sensors are distorted by rotation or scaling caused by unwanted or unexpected changes in the sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, this paper proposes an image forgery detection method based on the Discrete Cosine Transform (DCT) and the Local Binary Pattern (LBP), together with a new feature extraction step using the mean operator. First, images are divided into non-overlapping fixed-size blocks and a 2D block DCT is applied to capture the changes introduced by forgery. LBP is then applied to the magnitude of the DCT array to enhance the forgery artifacts. Finally, the mean value of each cell position across all LBP blocks is computed, which yields a fixed number of features and a more computationally efficient method. Using a Support Vector Machine (SVM) classifier, the proposed method has been extensively tested on four well-known, publicly available grayscale and color image forgery datasets, as well as on an IoT-based image forgery dataset that we built. Experimental results show the superiority of the proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.
- Description: This research was funded by Research Priority Area (RPA) scholarship of Federation University Australia.
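The pipeline described in the abstract (block DCT, then LBP on the coefficient magnitudes, then the per-cell mean across blocks) can be sketched as below. This is an illustrative reconstruction, not the authors' code: the 8x8 block size, the basic 8-neighbour LBP variant, and all function names are assumptions, and the final SVM classification stage is omitted.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block (list of lists of pixel values)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out

def lbp(mag):
    """Basic 8-neighbour LBP codes over the interior of a 2-D array."""
    n = len(mag)
    codes = [[0] * n for _ in range(n)]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            centre = mag[i][j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if mag[i + di][j + dj] >= centre:
                    code |= 1 << bit
            codes[i][j] = code
    return codes

def extract_features(image, block=8):
    """Mean of each LBP cell position across all blocks: a fixed-length vector."""
    h, w = len(image), len(image[0])
    sums = [0.0] * (block * block)
    count = 0
    for bi in range(0, h - block + 1, block):
        for bj in range(0, w - block + 1, block):
            sub = [row[bj:bj + block] for row in image[bi:bi + block]]
            mag = [[abs(c) for c in row] for row in dct2(sub)]  # DCT magnitudes
            codes = lbp(mag)
            for i in range(block):
                for j in range(block):
                    sums[i * block + j] += codes[i][j]
            count += 1
    return [s / count for s in sums]  # 64 features for 8x8 blocks
```

Whatever the image size, the feature vector length stays fixed at block*block (64 here), which is what keeps the classifier-stage cost independent of image resolution.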
Efficient coding strategy for HEVC performance improvement by exploiting motion features
- Authors: Podder, Pallab; Paul, Manoranjan; Murshed, Manzur
- Date: 2015
- Type: Text, Conference paper
- Relation: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, QLD, 19-24 April 2015, p. 1414-1418
- Full Text: false
- Description: The defining feature of the High Efficiency Video Coding (HEVC) standard is a roughly 50% bit-rate reduction over its predecessor, H.264/AVC, at the same perceptual image quality. This compression gain, however, comes with a substantial increase in encoding time complexity, so reducing encoding time while preserving the expected quality of the video sequences remains a demanding task. Our contribution reduces computational time through efficient selection of appropriate block-partitioning modes in HEVC using motion features based on phase correlation. In this paper, we use the phase correlation between the current and reference blocks to extract three motion features and combine them to determine a binary motion pattern for the current block. The motion pattern is then matched against a codebook of predefined pattern templates to determine a subset of the inter-modes. Only the selected modes are exhaustively motion estimated and compensated for a coding unit. Experimental results demonstrate that the average computational time can be reduced by about 30% compared to the HEVC reference while providing improved rate-distortion performance.
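The phase-correlation step that underpins the motion features can be sketched as follows. This is a minimal illustration under assumptions: square blocks, a naive DFT in place of the FFT a real encoder would use, and hypothetical function names. The peak of the correlation surface indicates the dominant shift between the blocks; the paper's three derived motion features and the codebook matching are not reproduced here.

```python
import cmath

def dft2(a, inverse=False):
    """Naive 2-D DFT of an n x n array (an FFT would replace this in practice)."""
    n = len(a)
    sign = 1 if inverse else -1
    out = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += a[x][y] * cmath.exp(sign * 2j * cmath.pi
                                             * (u * x + v * y) / n)
            out[u][v] = s / (n * n) if inverse else s
    return out

def phase_correlation(cur, ref):
    """Normalized cross-power spectrum -> inverse DFT; the peak marks the shift."""
    n = len(cur)
    F = dft2(cur)
    G = dft2(ref)
    cross = [[0j] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            p = F[u][v] * G[u][v].conjugate()
            m = abs(p)
            cross[u][v] = p / m if m > 1e-12 else 0j  # keep phase, drop magnitude
    return [[c.real for c in row] for row in dft2(cross, inverse=True)]

def dominant_shift(surface):
    """Location of the correlation peak, i.e. the dominant (dy, dx) motion."""
    n = len(surface)
    _, dy, dx = max((surface[i][j], i, j)
                    for i in range(n) for j in range(n))
    return dy, dx
```

For a block that is a circular shift of its reference, the surface is (up to floating-point error) a delta function at the shift, which is why the peak location and peak sharpness are natural raw material for motion features.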
On temporal order invariance for view-invariant action recognition
- Authors: Ul-Haq, Anwaar; Gondal, Iqbal; Murshed, Manzur
- Date: 2013
- Type: Text, Journal article
- Relation: IEEE Transactions on Circuits and Systems for Video Technology Vol. 23, no. 2 (2013), p. 203-211
- Full Text: false
- Description: View-invariant action recognition is one of the most challenging problems in computer vision. Various representations have been devised for matching actions across different viewpoints to achieve view invariance. In this paper, we explore the invariance of the temporal order of action instances during action execution and use it to devise a new view-invariant action recognition approach. To enforce temporal order during matching, we combine spatiotemporal features, feature fusion, and a temporal order consistency constraint. We start by extracting spatiotemporal cuboid features from video sequences and applying feature fusion to encapsulate within-class similarity for the same viewpoints. For each action class, we construct a feature fusion table to facilitate feature matching across different views. An action matching score is then calculated from the global temporal order constraint and the number of matching features. Finally, the query action is assigned the label of the class with the maximum matching score. Experiments on the multi-view INRIA Xmas Motion Acquisition Sequences (IXMAS) and West Virginia University action datasets show encouraging results that are comparable to existing view-invariant action recognition techniques.
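One simple way to score a match under a global temporal order constraint is a longest-common-subsequence dynamic program over quantized feature labels: only matches that preserve the order of occurrence contribute to the score. This sketch is a hedged stand-in for the paper's matching score (which works on fused cuboid features, not label strings), and the normalization by the longer sequence is an assumption.

```python
def temporal_order_score(query, reference):
    """Length of the longest order-preserving common subsequence, normalized."""
    m, n = len(query), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if query[i - 1] == reference[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # match extends the ordered chain
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / max(m, n)

def classify(query, class_models):
    """Assign the label of the class whose model sequence scores highest."""
    return max(class_models,
               key=lambda c: temporal_order_score(query, class_models[c]))
```

Because out-of-order matches never extend the chain, two viewpoints of the same action score highly as long as their instances occur in the same order, which is exactly the invariance the abstract exploits.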
Panic-driven event detection from surveillance video stream without track and motion features
- Authors: Haque, Mohammad; Murshed, Manzur
- Date: 2010
- Type: Text, Conference paper
- Relation: 2010 IEEE International Conference on Multimedia & Expo, p. 173-178
- Full Text: false
- Description: Modern surveillance systems are becoming highly automated in terms of scene understanding and event detection capabilities, and most existing methods rely on track- and motion-based features for event classification and anomaly detection. However, trajectory-based methods fail in public scenarios because object tracks are frequently lost, while motion-based methods are limited in detecting direction- and velocity-related anomalies. In this paper, a novel feature extraction and event detection method is presented that uses no track or motion features: event-discriminating characteristics are discovered from the dynamics of multiple temporal features extracted from foreground blobs and then captured in Support Vector Machine (SVM)-based models for real-time event detection. Experimental results on benchmark datasets show that the proposed method can successfully discriminate panic-driven events such as sudden split, runaway, and fighting from usual events.
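A minimal sketch of what tracking-free temporal features from foreground blobs might look like, assuming per-frame blob areas are already available from background subtraction. The window size, the particular statistics, and the function name are assumptions rather than the paper's actual feature set, and the SVM training stage is omitted.

```python
def temporal_features(frames, window=5):
    """Windowed dynamics of foreground-blob statistics (no tracking, no motion).

    frames: list of per-frame lists of blob areas from background subtraction.
    Returns one feature vector per frame after the warm-up window.
    """
    stats = [(len(blobs), sum(blobs)) for blobs in frames]  # (count, total area)
    feats = []
    for t in range(window, len(stats)):
        counts = [stats[k][0] for k in range(t - window, t)]
        areas = [stats[k][1] for k in range(t - window, t)]
        mean_count = sum(counts) / window
        mean_area = sum(areas) / window
        var_area = sum((a - mean_area) ** 2 for a in areas) / window
        d_count = stats[t][0] - stats[t - 1][0]  # spikes on a sudden split
        d_area = stats[t][1] - stats[t - 1][1]   # spikes when blobs appear/vanish
        feats.append([mean_count, mean_area, var_area, d_count, d_area])
    return feats
```

Features like a sudden jump in blob count or area variance discriminate panic-driven events without ever associating blobs across frames, which is the point of avoiding trajectory features.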