A Centroid Algorithm for Stabilization of Turbulence-Degraded Underwater Videos
- Authors: Halder, Kalyan Kumar , Paul, Manoranjan , Tahtali, Murat , Anavatti, Sreenatha G. , Murshed, Manzur
- Date: 2016
- Type: Text , Conference paper
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2016, p. 1-6
- Full Text: false
- Reviewed:
- Description: This paper addresses the problem of stabilizing underwater videos with non-uniform geometric deformations or warping due to a wavy water surface. It presents an improved method to correct these geometric deformations of the frames, providing a high-quality stabilized video output. For this purpose, a non-rigid image registration technique is employed to accurately align the warped frames with respect to a prototype frame and to estimate the deformation parameters, which, in turn, are applied in an image dewarping technique. The prototype frame is chosen from the video sequence based on a sharpness assessment. The effectiveness of the proposed method is validated by applying it to both synthetic and real-world sequences using various quality metrics. A performance comparison with an existing method confirms the higher efficacy of the proposed method.
A coarse representation of frames oriented video coding by leveraging cuboidal partitioning of image data
- Authors: Ahmmed, Ashe , Paul, Manoranjan , Murshed, Manzur , Taubman, David
- Date: 2020
- Type: Text , Conference paper
- Relation: 22nd IEEE International Workshop on Multimedia Signal Processing, MMSP 2020, Virtual Tampere, Finland 21-24 September 2020
- Full Text:
- Reviewed:
- Description: Video coding algorithms attempt to minimize the significant commonality that exists within a video sequence. Each new video coding standard contains tools that can perform this task more efficiently compared to its predecessors. In this work, we form a coarse representation of the current frame by minimizing commonality within that frame while preserving important structural properties of the frame. The building blocks of this coarse representation are rectangular regions called cuboids, which are computationally simple and have compact descriptions. We then propose to employ the coarse frame as an additional source for predictive coding of the current frame. Experimental results show an improvement in bit-rate savings over a reference HEVC codec, with a minor increase in the codec's computational complexity. © 2020 IEEE.
A commonality modeling framework for enhanced video coding leveraging on the cuboidal partitioning based representation of frames
- Authors: Ahmmed, Ashek , Murshed, Manzur , Paul, Manoranjan , Taubman, David
- Date: 2022
- Type: Text , Journal article
- Relation: IEEE Transactions on Multimedia Vol. 24 (2022), p. 4446-4457
- Full Text: false
- Reviewed:
- Description: Video coding algorithms attempt to minimize the significant commonality that exists within a video sequence. Each new video coding standard contains tools that can perform this task more efficiently compared to its predecessors. Modern video coding systems are block-based, wherein commonality modeling is carried out only from the perspective of the block that is to be coded next. In this work, we argue for a commonality modeling approach that can provide a seamless blending of global and local homogeneity information. For this purpose, the frame to be coded is first recursively partitioned into rectangular regions based on the homogeneity information of the entire frame. Each obtained rectangular region's feature descriptor is then taken to be the average intensity of all the pixels within the region. In this way, the proposed approach generates a coarse representation of the current frame by minimizing both global and local commonality. This coarse frame is computationally simple and has a compact representation. It attempts to preserve important structural properties of the current frame, which can be assessed subjectively as well as from the improved rate-distortion performance of a reference scalable HEVC coder that employs the coarse frame as a reference frame for encoding the current frame. © 1999-2012 IEEE.
A hybrid object detection technique from dynamic background using Gaussian mixture models
- Authors: Haque, Mohammad , Murshed, Manzur , Paul, Manoranjan
- Date: 2008
- Type: Text , Conference paper
- Relation: 2008 IEEE 10th Workshop on Multimedia Signal Processing p. 915-920
- Full Text: false
- Reviewed:
- Description: Adaptive background modelling based object detection techniques are widely used in machine vision applications for handling the challenges of real-world multimodal backgrounds. However, they are constrained to specific environments because they rely on environment-specific parameters, and their performance also fluctuates across different operating speeds. On the other hand, basic background subtraction (BBS) is not suitable for real applications due to its manual background initialization requirement and its inability to handle repetitive multimodal backgrounds. However, it shows better stability across different operating speeds and can better eliminate noise, shadow, and trailing effects than adaptive techniques, as no model adaptability or environment-related parameters are involved. In this paper, we propose a hybrid object detection technique that incorporates the strengths of both approaches. In our technique, a Gaussian mixture model (GMM) is used for maintaining an adaptive background model, and both probabilistic and basic subtraction decisions are utilized to calculate inexpensive neighbourhood statistics that guide the final object detection decision. Experimental results with two benchmark datasets and comparative analysis with a recent adaptive object detection technique show the strength of the proposed technique in eliminating noise, shadow, and trailing effects while maintaining better stability across variable operating speeds.
A hybrid wireless sensor network framework for range-free event localization
- Authors: Iqbal, Anindya , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Ad Hoc Networks Vol. 27 (2015), p. 81-98
- Full Text: false
- Reviewed:
- Description: In event localization, wireless sensors try to locate the source of an event from its emitted power. This is more challenging than sensor localization, as the power level at the source of an event can neither be predicted with precision nor controlled. Considering the emerging trend of long sensing range for cost-effective sensor deployment, locating events within a region much smaller than the sensing area of a single sensor has gained research interest. This paper proposes the first range-free event localization framework, which avoids the expensive hardware needed by range-based counterparts. Our approach first develops a sensing range model from the statistical information on the emitted power of a type of event so that user-defined event-detection quality can be provisioned using a minimal network of static sensors. Then an accurate event location boundary estimation technique is developed from the sensing feedbacks, which also facilitates guided expansion of the area of possible event location (APEL) to deal with sensing errors. Finally, a user-defined event-localization quality guarantee is provisioned cost-effectively by inviting mobile sensors on demand to target positions. Analytical solutions are provided wherever appropriate, and comprehensive simulations are carried out to evaluate localization performance. The proposed event localization technique outperforms the state-of-the-art range-based counterpart (Xu et al., 2011) in realistic environments with path loss, shadow fading, and sensor positioning errors.
A motion-based approach for segmenting dynamic textures
- Authors: Rahman, Ashfaqur , Murshed, Manzur
- Date: 2009
- Type: Text , Journal article
- Relation: International Journal of Signal and Imaging Systems Engineering Vol. 2 (2009), p. 88-96
- Full Text: false
- Reviewed:
A novel anonymization technique to trade off location privacy and data integrity in participatory sensing systems
- Authors: Murshed, Manzur , Sabrina, Tishna , Iqbal, Anindya , Alam, K.
- Date: 2010
- Type: Text , Conference paper
- Relation: 2010 Fourth International Conference on Network and System Security p. 345-350
- Full Text: false
- Reviewed:
- Description: Preserving privacy in participatory sensing systems has recently gained research interest, as voluntary contribution in such systems is not worthwhile if the privacy of the participants is not protected. On the other hand, data integrity is imperatively desired to make the service trustworthy and user-friendly. In this paper, we propose an adaptive location anonymization technique that is capable of retaining an acceptable level of data integrity while keeping its vulnerability to eavesdropping adversaries low. Experimental results establish the proposed concept as a superior approach for balancing the somewhat orthogonal goals of user privacy and data integrity.
A novel color image fusion QoS measure for multi-sensor night vision applications
- Authors: Ul-Haq, Anwaar , Gondal, Iqbal , Murshed, Manzur
- Date: 2010
- Type: Text , Conference proceedings
- Full Text: false
- Description: Color image fusion of visible and infra-red imagery can play an important role in multi-sensor night vision systems that are an integral part of modern warfare. Image fusion minimizes the amount of required bandwidth by transmitting the fused image rather than multiple sensor images. Color image fusion can be achieved by combining inputs from original colored sensors or by employing pseudo-colorization and color transfer to grayscale images. Various quality measures have been proposed for multi-sensor grayscale image fusion techniques, but no appropriate quality measure has been devised for the quality evaluation of multi-sensor color image fusion. In this paper, we propose a novel color image fusion quality measure, the Color Fusion Objective Index (CFOI), based on colorfulness, gradient similarity, and mutual information techniques. Experimental results show the effectiveness of CFOI in evaluating the color and salient feature extraction introduced by color fusion techniques into the final fused imagery, as well as its consistency with subjective evaluation.
A novel depth edge prioritization based coding technique to boost-UP HEVC performance
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2016
- Type: Text , Conference paper
- Relation: 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
- Full Text: false
- Reviewed:
- Description: In addition to texture, multiview video employs depth coding for the reconstruction of 3D video and free viewpoint video. Based on texture-depth correlations, a number of methods in the literature reuse texture motion vectors for the corresponding depth coding to reduce encoding time by avoiding the costly motion estimation process. However, a texture similarity metric is not always equivalent to the corresponding depth similarity metric, especially at edges. Since those approaches cannot explicitly detect and encode acute edge motions of depth objects, they could not reach similar or improved rate-distortion (RD) performance against the High Efficiency Video Coding (HEVC) reference test model (HM). For more accurate motion detection and modeling, the proposed technique exploits an extra Pattern Mode comprising a group of pattern templates (GPTs) with different rectangular and non-rectangular object shapes and edges compared to the existing HEVC block partitioning modes. Moreover, the proposed Pattern Mode encodes only the motion areas and skips the background areas. The experimental results show that the proposed technique saves 30% encoding time and improves the Bjontegaard delta peak signal-to-noise ratio (BD-PSNR) by 0.1 dB on average compared to the HM.
A novel depth motion vector coding exploiting spatial and inter-component clustering tendency
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: Visual Communications and Image Processing, VCIP 2015; Singapore; 13th-16th December 2015 p. 1-4
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: Motion vectors of depth-maps in multiview and free-viewpoint videos exhibit strong spatial as well as inter-component clustering tendency. This paper presents a novel coding technique that first compresses the multidimensional bitmaps of macroblock modes and then encodes only the non-zero components of motion vectors. The bitmaps are partitioned into disjoint cuboids using binary tree based decomposition so that the 0's and 1's are either highly polarized or further sub-partitioning is unlikely to achieve any compression. Each cuboid is entropy-coded as a unit using binary arithmetic coding. This technique is capable of exploiting the spatial and inter-component correlations efficiently without the restriction of scanning the bitmap in any specific linear order, as run-length coding requires. As encoding of non-zero component values no longer requires denoting the zero value, further compression efficiency is achieved. Experimental results on standard multiview test video sequences have comprehensively demonstrated the superiority of the proposed technique, achieving overall coding gain against the state-of-the-art in the range [22%, 54%] and 38% on average. © 2015 IEEE.
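The binary-tree cuboid decomposition this abstract describes can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact algorithm: the `purity` stopping threshold and the split-along-the-longer-side rule are assumptions made here for the sketch.

```python
import numpy as np

def cuboid_partition(bitmap, purity=1.0):
    # Recursively split a binary map into disjoint rectangles ("cuboids")
    # until each one is sufficiently polarized (the majority bit covers at
    # least `purity` of the cells) or cannot be split further.
    out = []
    def split(y, x, h, w):
        region = bitmap[y:y+h, x:x+w]
        ones = int(region.sum())
        majority = max(ones, h * w - ones) / (h * w)
        if majority >= purity or (h == 1 and w == 1):
            # record the cuboid with its dominant bit value
            out.append((y, x, h, w, int(ones * 2 >= h * w)))
        elif h >= w:  # binary-tree split along the longer side
            split(y, x, h // 2, w)
            split(y + h // 2, x, h - h // 2, w)
        else:
            split(y, x, h, w // 2)
            split(y, x + w // 2, h, w - w // 2)
    split(0, 0, bitmap.shape[0], bitmap.shape[1])
    return out
```

A bitmap whose top half is all 1's and bottom half all 0's, for example, collapses into just two cuboids, which is where the compression gain over linear-scan run-length coding comes from.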
A novel motion classification based intermode selection strategy for HEVC performance improvement
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Neurocomputing Vol. 173, Part 3 (2015), p. 1211-1220
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: The High Efficiency Video Coding (HEVC) standard adopts several new approaches to achieve higher coding efficiency (approximately 50% bit-rate reduction) compared to its predecessor H.264/AVC at the same perceptual image quality. However, computational time has also increased substantially due to the algorithmic complexity of HEVC compared to H.264/AVC, and it is a demanding task to reduce the encoding time while preserving similar quality of the video sequences. In this paper, we propose a novel and efficient intermode selection technique, incorporated into the HEVC framework, to predict motion estimation and motion compensation modes between current and reference blocks and perform faster inter-mode selection based on three dissimilar motion types in divergent video sequences. Instead of exploring all the modes exhaustively, we select only a subset of candidate modes; the final mode is determined from the selected subset based on the lowest Lagrangian cost. The experimental results reveal that the average encoding time can be reduced by 40% with similar rate-distortion performance compared to the exhaustive mode selection strategy in HEVC.
A novel multichannel cognitive radio network with throughput analysis at saturation load
- Authors: Hasan, Rashidul , Murshed, Manzur
- Date: 2011
- Type: Text , Conference proceedings
- Relation: 10th IEEE International Symposium on Network Computing and Applications (NCA), 25-27th August, 2011 Cambridge, MA, p. 1-6
- Full Text: false
- Reviewed:
- Description: Opportunistic access of licensed spectrum using a cognitive radio network (CRN) is attracting research interest due to its ability to improve utilisation of this scarce resource without affecting the primary users (PUs). To improve the wide acceptability of CRNs, they must be equipped with efficient protocols to deal with multiple primary networks and to provision QoS guarantees for demand-driven applications by the secondary users (SUs). In this paper, a novel CSMA/CA-based multichannel cognitive radio medium access control (MCR-MAC) protocol is developed by modifying the 4-way-handshaking-based IEEE 802.11 DCF to dynamically assign contending SUs to free channels using an innovative random arbitration scheme. The paper also presents a detailed analytical model for cognitive interference to the PUs and SUs. The proposed protocol is designed to keep the interference level in check so as to remain transparent to the PUs. A throughput analysis at saturation load reveals that this fully ad hoc MCR-MAC is capable of achieving throughput comparable to the ideal scenario (when SUs are equally divided among the channels) without using any centralised infrastructure or dedicated control channel. Extensive simulation results validate the accuracy of the theoretical analysis and establish MCR-MAC as a highly practical solution for constructing a CRN in a region overlapped with multiple primary networks to offer data-rate-sensitive applications to the SUs.
A novel no-reference subjective quality metric for free viewpoint video using human eye movement
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 8th Pacific-Rim Symposium on Image and Video Technology, PSIVT 2017; Wuhan, China; 20th-24th November 2017; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 10749 LNCS, p. 237-251
- Full Text:
- Reviewed:
- Description: Free viewpoint video (FVV) allows users to interactively control the viewpoint and generate new views of a dynamic scene from any 3D position for a better 3D visual experience with depth perception. Multiview video coding exploits both texture and depth video information from various angles to encode a number of views to facilitate FVV. The usual practice for single-view or multiview quality assessment relies on objective quality metrics such as the peak signal-to-noise ratio (PSNR) or the structural similarity index (SSIM), due to their simplicity and suitability for real-time applications. However, PSNR and SSIM require a reference image for quality evaluation and cannot be successfully employed in FVV, as a newly synthesized view has no reference view to compare with. Conversely, the widely used subjective estimator, the mean opinion score (MOS), is often biased by the testing environment, viewers' mood, domain knowledge, and many other factors that may actively influence the actual assessment. To address this limitation, in this work we devise a no-reference subjective quality assessment metric by exploiting the pattern of human eye browsing on FVV. Over FVV contents of different quality, the participants' eye-tracker-recorded spatio-temporal gaze data indicate a more concentrated eye-traversing approach for relatively better quality. We therefore calculate Length, Angle, Pupil-size, and Gaze-duration features from the recorded gaze trajectories. A content- and resolution-invariant operation is carried out prior to synthesizing them using an adaptive weighted function to develop a new quality metric using eye traversal (QMET). Test results reveal that the proposed QMET performs better than the SSIM and MOS in terms of assessing different aspects of coded video quality for a wide range of FVV contents.
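The feature stage of the QMET metric described above (Length, Angle, Pupil-size, Gaze-duration features from a gaze trajectory, synthesized by a weighted function) can be sketched as follows. The sample layout `(x, y, pupil_size, duration)`, the use of simple averages, and the linear weighting are assumptions for illustration; the paper's actual adaptive weighting function is not reproduced here.

```python
import math

def gaze_features(trajectory):
    # trajectory: list of (x, y, pupil_size, duration) eye-tracker samples;
    # field names and units are illustrative assumptions.
    lengths, angles = [], []
    for (x0, y0, _, _), (x1, y1, _, _) in zip(trajectory, trajectory[1:]):
        lengths.append(math.hypot(x1 - x0, y1 - y0))   # saccade length
        angles.append(math.atan2(y1 - y0, x1 - x0))    # saccade direction
    pupils = [p for _, _, p, _ in trajectory]
    durs = [d for _, _, _, d in trajectory]
    return {
        "length": sum(lengths) / len(lengths),
        "angle": sum(abs(a) for a in angles) / len(angles),
        "pupil": sum(pupils) / len(pupils),
        "duration": sum(durs) / len(durs),
    }

def qmet_score(feats, weights):
    # weighted synthesis of the four features into one score
    # (weights assumed pre-normalized for this sketch)
    return sum(weights[k] * feats[k] for k in feats)
```

The intuition from the abstract is that a better-quality view produces shorter, more concentrated traversals, so these averages shift measurably with content quality.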
A novel pattern identification scheme using distributed video coding concepts
- Authors: Paul, Manoranjan , Murshed, Manzur
- Date: 2009
- Type: Text , Conference paper
- Relation: 2009 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2009) p. 729-732
- Full Text: false
- Reviewed:
- Description: Pattern-based video coding focusing on the moving region in a macroblock has already established its superiority over the recent H.264 video coding standard at very low bit rates. A larger number of pattern templates approximates the moving regions better; however, beyond a certain limit no coding gain is observed due to the increased number of pattern identification bits. Recently, distributed video coding schemes have used syndrome coding to predict the original information at the decoder using side information. In this paper, a novel pattern identification scheme is proposed that predicts the pattern from the syndrome codes and side information at the decoder, so that the actual pattern identification number is not needed in the bitstream. The experimental results confirm that this new scheme successfully improves the rate-distortion performance compared to the existing pattern-based video coding as well as the H.264 standard. This new scheme also opens another avenue for syndrome coding applications.
A novel quality metric using spatiotemporal correlational data of human eye maneuver
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2017
- Type: Text , Conference proceedings
- Relation: 2017 International Conference on Digital Image Computing : Techniques and Applications, DICTA 2017; Sydney, Australia; 29th November-1st December 2017 Vol. 2017-December, p. 1-8
- Full Text:
- Reviewed:
- Description: The popularly used subjective estimator, the mean opinion score (MOS), is often biased by the testing environment, viewers' mood, domain expertise, and many other factors that may actively influence the actual assessment. We therefore devise a no-reference subjective quality assessment metric by exploiting the nature of human eye browsing on videos. The participants' eye-tracker-recorded gaze data indicate a more concentrated eye-traversing approach for relatively better quality. We calculate Length, Angle, Pupil-size, and Gaze-duration features from the recorded gaze trajectories. A content- and resolution-invariant operation is carried out prior to synthesizing them using an adaptive weighted function to develop a new quality metric using eye traversal (QMET). Test results reveal that the quality evaluation carried out by QMET demonstrates a strong correlation with the most widely used peak signal-to-noise ratio (PSNR), the structural similarity index (SSIM), and the MOS.
A novel video coding scheme using a scene adaptive non-parametric background model
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference paper
- Relation: 16th IEEE International Workshop on Multimedia Signal Processing, MMSP 2014 p. 1-6
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: Video coding techniques utilising background frames provide better rate-distortion performance than the latest video coding standard by exploiting coding efficiency in uncovered background areas. Parametric approaches such as mixture of Gaussians (MoG) based background modeling have been widely used; however, they require prior knowledge about the test videos for parameter estimation. Recently introduced non-parametric (NP) background modeling techniques successfully improved video coding performance through a HEVC-integrated coding scheme. The inherent nature of the NP technique naturally exhibits superior performance in dynamic background scenarios compared to the MoG based technique, without a priori knowledge of the video data distribution. Although NP based coding schemes showed promising coding performance, they suffer from a number of key challenges: (a) determining the optimal subset of training frames for generating a suitable background that can be used as a reference frame during coding, (b) incorporating dynamic changes in the background effectively after the initial background frame is generated, (c) managing frequent scene changes leading to performance degradation, and (d) optimizing the coding quality ratio between an I-frame and other frames under bit-rate constraints. In this study, we develop a new scene-adaptive coding scheme using the NP based technique, capable of addressing these challenges by incorporating a continuously updating background generation process. Extensive experimental results are also provided to validate the effectiveness of the new scheme.
A robust forgery detection method for copy-move and splicing attacks in images
- Authors: Islam, Mohammad , Karmakar, Gour , Kamruzzaman, Joarder , Murshed, Manzur
- Date: 2020
- Type: Text , Journal article
- Relation: Electronics Vol. 9, no. 9 (2020), p. 1-22
- Full Text:
- Reviewed:
- Description: Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. The easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrimes to be on the rise. While several models have been proposed in the literature for detecting these attacks, the robustness of those models has not been investigated when (i) a low number of tampered images are available for model building or (ii) images from IoT sensors are distorted due to image rotation or scaling caused by unwanted or unexpected changes in the sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, in this paper an innovative image forgery detection method is proposed based on Discrete Cosine Transformation (DCT) and Local Binary Pattern (LBP), with a new feature extraction method using the mean operator. First, images are divided into non-overlapping fixed-size blocks and a 2D block DCT is applied to capture changes due to image forgery. Then LBP is applied to the magnitude of the DCT array to enhance forgery artifacts. Finally, the mean value of a particular cell across all LBP blocks is computed, which yields a fixed number of features and presents a more computationally efficient method. Using a Support Vector Machine (SVM), the proposed method has been extensively tested on four well-known, publicly available grayscale and color image forgery datasets, and additionally on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of our proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.
- Description: This research was funded by Research Priority Area (RPA) scholarship of Federation University Australia.
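The feature-extraction pipeline this abstract spells out (non-overlapping blocks, 2D block DCT, LBP on the DCT magnitudes, then a per-cell mean across all blocks) can be sketched as below. This is a minimal reconstruction under stated assumptions: 8x8 blocks, a basic 8-neighbour LBP, and a hand-rolled orthonormal DCT-II; the paper's exact block size and LBP variant may differ.

```python
import numpy as np

def dct2(block):
    # 2D DCT-II via a separable orthonormal transform matrix
    n = block.shape[0]
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] *= 1 / np.sqrt(2)
    C *= np.sqrt(2 / n)
    return C @ block @ C.T

def lbp(img):
    # basic 8-neighbour local binary pattern on the interior pixels
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def forgery_features(gray, bs=8):
    # blocks -> |DCT| -> LBP -> mean of each cell across all blocks,
    # giving a fixed-length feature vector regardless of image size
    h, w = gray.shape
    feats = []
    for y in range(0, h - h % bs, bs):
        for x in range(0, w - w % bs, bs):
            block = gray[y:y + bs, x:x + bs].astype(float)
            feats.append(lbp(np.abs(dct2(block))).ravel())
    return np.mean(feats, axis=0)
```

Because the mean is taken per cell across blocks, the feature length depends only on the block size, which is what keeps the descriptor compact for the downstream SVM classifier.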
A robust local texture descriptor in the parametric space of the Weibull distribution
- Authors: Tania, Sheikh , Karmakar, Gour , Teng, Shyh , Murshed, Manzur
- Date: 2023
- Type: Text , Journal article
- Relation: IEEE Transactions on Multimedia Vol. 25 (2023), p. 6053-6066
- Full Text: false
- Reviewed:
- Description: Research in texture feature approximation is still in the embryonic stage because of difficulties in developing a sound theoretical model to express the unique pattern in the intensity-variation of pixels in the neighbourhood of the pixel-of-interest so that it can sufficiently discriminate different textures. Local texture descriptors are widely used in image segmentation as they comprise pixel-wise features. The Weber local descriptor (WLD) with differential excitation and gradient orientation components, inspired by Weber's Law, has been leveraged in the state-of-the-art iterative contraction and merging (ICM) image segmentation technique. However, WLD has inherent drawbacks in the formulation of the components that limit its discriminatory capability. This paper introduces a novel texture descriptor by directly modelling the distribution of intensity-variation in the parametric space of the Weibull distribution using its shape and scale parameters. A unified 'joint scale' texture property is introduced, which can discriminate textures better than the individual parameters while keeping the length of the descriptor shorter. Additionally, the accuracy of WLD's gradient orientation component is improved by using an extended Sobel operator and expressing gradients in -
A subset coding based k-anonymization technique to trade-off location privacy and data integrity in participatory sensing systems
- Authors: Murshed, Manzur , Iqbal, Anindya , Sabrina, Tishna , Alam, K.
- Date: 2011
- Type: Text , Conference paper
- Full Text: false
- Reviewed:
Abnormal event detection in unseen scenarios
- Authors: Haque, Mahfuzul , Murshed, Manzur
- Date: 2012
- Type: Text , Conference proceedings
- Relation: 2012 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Melbourne, 9-13th July, 2012. pg 1-6
- Full Text: false
- Reviewed:
- Description: Event detection in unseen scenarios is a challenging problem due to the high variability of scene type, viewing direction, nature of scene entities, and environmental conditions. Existing event detection approaches mostly rely on context-specific tuning and training. Consequently, these techniques fail to achieve high scalability in a large surveillance network with hundreds of video feeds, where scenario-specific tuning and training are impossible. In this paper, we present a generic event detection approach where the extracted low-level features represent the global characteristics of the target scene instead of any context-specific information. From the temporal evolution of these context-invariant features over a timeframe, a fixed number of temporal features are extracted based on the periodicity of significant transition points and associated temporal orders. Finally, top-ranked temporal features are used to train binary classifier-based event models. In this approach, supervised training and exhaustive feature extraction are required only once, while building the target event models. During real-time operation in unseen scenarios, event detection is performed based on the trained event models by extracting only the required features. The proposed event detection approach has been demonstrated for abnormal event detection in completely unseen public-place scenarios from benchmark datasets without additional training and tuning. Furthermore, it has also outperformed a recent optical-flow-based event detection technique.