Predicting Hot-Spots in distributed cloud databases using association rule mining
- Authors: Kamal, Joarder , Murshed, Manzur , Gaber, Mohamed
- Date: 2014
- Type: Text , Conference paper
- Relation: IEEE/ACM 7th International Conference on Utility and Cloud Computing (UCC), 2014; London; 8-11th December, 2014 p. 800-805
- Full Text: false
- Reviewed:
- Description: Data partitioning is a popular technique to horizontally or vertically split the tables of a Cloud database cluster to evenly distribute increasing workloads. However, hot-spots can be created by an inappropriate partitioning scheme and by static partition management that ignores dynamic workload characteristics. In this paper, an automatic database partition management scheme - APM - is proposed which periodically analyses workload logs to predict the formation of any potential hot-spot using association rule mining. A detailed illustration of the proposed scheme is presented with examples, along with a cost model, followed by experimental observations from running an HBase cluster with YCSB workloads in AWS.
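APM's exact mining procedure is given in the paper; as a rough illustration of the general idea only, pairwise association rules mined from partition-access logs (the partition names and thresholds below are hypothetical) could flag partitions that are frequently accessed together and are hence candidate hot-spots:

```python
from collections import Counter
from itertools import combinations

def mine_rules(transactions, min_support=0.4, min_confidence=0.7):
    """Toy Apriori-style mining of pairwise association rules
    (A -> B) over partition-access transactions."""
    n = len(transactions)
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        items = set(t)
        item_count.update(items)
        pair_count.update(combinations(sorted(items), 2))
    rules = []
    for (a, b), c in pair_count.items():
        if c / n < min_support:
            continue  # pair not frequent enough
        for lhs, rhs in ((a, b), (b, a)):
            conf = c / item_count[lhs]
            if conf >= min_confidence:
                rules.append((lhs, rhs, c / n, conf))
    return rules

# Workload log: each transaction lists the partitions it touched.
log = [["P1", "P2"], ["P1", "P2", "P3"], ["P1", "P2"], ["P3"], ["P1", "P2", "P4"]]
for lhs, rhs, sup, conf in mine_rules(log):
    print(f"{lhs} -> {rhs}  support={sup:.2f} confidence={conf:.2f}")
```

A rule such as `P1 -> P2` with high support and confidence suggests the two partitions attract correlated load and may jointly form a hot-spot.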
Progressive data stream mining and transaction classification for workload-aware incremental database repartitioning
- Authors: Kamal, Joarder , Murshed, Manzur , Gaber, Mohamed
- Date: 2014
- Type: Text , Conference proceedings
- Relation: IEEE/ACM International Symposium on Big Data Computing, BDC 2014; London, United Kingdom; 8th-11th December 2014; p. 8-15
- Full Text:
- Reviewed:
- Description: Minimising the impact of distributed transactions (DTs) in a shared-nothing distributed database is extremely challenging for transactional workloads. With the dynamic nature of workloads and rapid growth in data volume, the underlying database requires incremental repartitioning to maintain an acceptable level of DTs and data load balance with minimum physical data migrations. In a workload-aware repartitioning scheme, the transactional workload is modelled as a graph or hypergraph; performing k-way min-cut clustering, which guarantees minimum edge cuts, can then significantly reduce the impact of DTs by mapping the workload clusters onto logical database partitions. However, without exploiting the inherent workload characteristics, the overall processing and computing times for large-scale workload networks grow polynomially. In this paper, a workload-aware incremental database repartitioning technique is proposed which effectively exploits proactive transaction classification and workload stream mining techniques. Workload batches are modelled as graph, hypergraph, and compressed hypergraph representations, then repartitioned to produce a fresh tuple-to-partition data migration plan for every incremental cycle. Experimental studies in a simulated TPC-C environment demonstrate that the proposed model can be effectively adopted in managing rapid data growth and dynamic workloads, progressively reducing the overall processing time required to operate over the workload networks.
Very low bit rate video coding
- Authors: Paul, Manoranjan , Murshed, Manzur
- Date: 2014
- Type: Text , Book
- Full Text: false
- Reviewed:
Virtual machine consolidation in cloud data centers using ACO metaheuristic
- Authors: Ferdaus, Md Hasanul , Murshed, Manzur , Calheiros, Rodrigo , Buyya, Rajkumar
- Date: 2014
- Type: Text , Conference paper
- Relation: 20th International Conference on Parallel Processing, Euro-Par 2014 Vol. 8632 LNCS, p. 306-317
- Full Text: false
- Reviewed:
- Description: In this paper, we propose the AVVMC VM consolidation scheme that focuses on balanced resource utilization of servers across different computing resources (CPU, memory, and network I/O) with the goal of minimizing power consumption and resource wastage. Since the VM consolidation problem is strictly NP-hard and computationally infeasible for large data centers, we propose adaptation and integration of the Ant Colony Optimization (ACO) metaheuristic with balanced usage of computing resources based on vector algebra. Our simulation results show that AVVMC outperforms existing methods and achieves improvement in both energy consumption and resource wastage reduction.
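AVVMC's full ACO search is beyond a short sketch, but the vector-algebra notion of balanced resource usage can be illustrated with a toy greedy placer that scores each feasible server by how even its CPU/memory/network utilization vector would be after placement. Server names, capacities, and VM demands below are made up, and the greedy loop stands in for the ant colony:

```python
import math

def balance_score(used, cap):
    """Imbalance of a server's utilization vector (CPU, mem, net I/O):
    deviation of per-resource utilizations from their mean. Lower is
    more balanced."""
    util = [u / c for u, c in zip(used, cap)]
    mean = sum(util) / len(util)
    return math.sqrt(sum((x - mean) ** 2 for x in util))

def place_vms(vms, servers):
    """Greedy placement: each VM goes to the feasible server whose
    post-placement utilization vector is most balanced."""
    placement = {}
    for name, demand in vms:
        best, best_score = None, None
        for s, (used, cap) in servers.items():
            new_used = [u + d for u, d in zip(used, demand)]
            if any(u > c for u, c in zip(new_used, cap)):
                continue  # capacity would be violated
            score = balance_score(new_used, cap)
            if best is None or score < best_score:
                best, best_score = s, score
        if best is None:
            raise RuntimeError(f"no feasible server for {name}")
        servers[best][0][:] = [u + d for u, d in zip(servers[best][0], demand)]
        placement[name] = best
    return placement

servers = {"s1": ([0, 0, 0], [10, 10, 10]), "s2": ([0, 0, 0], [10, 10, 10])}
vms = [("vm1", (4, 2, 1)), ("vm2", (2, 4, 1)), ("vm3", (1, 1, 8))]
print(place_vms(vms, servers))
```

A complementary CPU/memory pair ends up on the same server (their summed vector is balanced), which is exactly the wastage-reducing behaviour the paper's vector-based heuristic encourages.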
Workload-aware incremental repartitioning of shared-nothing distributed databases for scalable cloud applications
- Authors: Kamal, Joarder , Murshed, Manzur , Buyya, Rajkumar
- Date: 2014
- Type: Text , Conference paper
- Relation: 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing (UCC) p. 213-222
- Full Text: false
- Reviewed:
- Description: Cloud applications often rely on shared-nothing distributed databases that can sustain rapid growth in data volume. Distributed transactions (DTs) that involve data tuples from multiple geo-distributed servers can adversely impact the performance of such databases, especially when the transactions are short-lived and require immediate response. The k-way min-cut graph clustering algorithm has been found effective in reducing the number of DTs with an acceptable level of load balancing. The benefits of such a static partitioning scheme, however, are short-lived in Cloud applications with dynamically varying workload patterns, where the DT profile changes over time. This paper addresses this emerging challenge by introducing incremental repartitioning. In each repartitioning cycle, the DT profile is learnt online and the k-way min-cut clustering algorithm is applied on a special sub-graph representing all DTs as well as those non-DTs that have at least one tuple in a DT. The latter ensures that the min-cut algorithm minimally reintroduces new DTs from the non-DTs while maximally transforming existing DTs into non-DTs in the new partitioning. Potential load imbalance risk is mitigated by applying the graph clustering algorithm on the finer logical partitions instead of the servers and relying on random one-to-one cluster-to-partition mapping that naturally balances out loads. Inter-server data migration due to repartitioning is kept in check with two special mappings favouring the current partition of the majority of tuples in a cluster -- the many-to-one version minimising data migrations alone and the one-to-one version reducing data migration without affecting load balancing. A distributed data lookup process, inspired by the roaming protocol in mobile networks, is introduced to efficiently handle data migration without affecting scalability.
The effectiveness of the proposed framework is evaluated comprehensively on realistic TPC-C workloads using the graph, hypergraph, and compressed hypergraph representations used in the literature. Simulation results convincingly support incremental repartitioning against static partitioning.
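The paper applies proper k-way min-cut clustering and maps the resulting clusters onto logical partitions; the toy greedy assignment below only sketches the underlying idea of grouping co-accessed tuples to reduce DTs, with a simple balance cap standing in for load balancing (all transaction data is invented):

```python
from collections import defaultdict

def repartition(transactions, k):
    """Toy stand-in for k-way min-cut clustering: greedily assign each
    tuple to the partition holding most of its co-accessed neighbours,
    under a simple balance cap. (The paper uses proper min-cut
    clustering; this only illustrates the idea.)"""
    # Edge weights: how often two tuples appear in the same transaction.
    weight = defaultdict(int)
    tuples = set()
    for t in transactions:
        tuples.update(t)
        for i, a in enumerate(t):
            for b in t[i + 1:]:
                if a != b:
                    weight[frozenset((a, b))] += 1
    cap = -(-len(tuples) // k)  # ceil: per-partition balance cap
    part, size = {}, [0] * k
    deg = {u: sum(w for e, w in weight.items() if u in e) for u in tuples}
    # Visit heaviest (most co-accessed) tuples first, deterministically.
    for u in sorted(tuples, key=lambda u: (-deg[u], u)):
        gain = [0] * k
        for e, w in weight.items():
            if u in e:
                (v,) = e - {u}
                if v in part:
                    gain[part[v]] += w
        p = max((p for p in range(k) if size[p] < cap), key=lambda p: gain[p])
        part[u], size[p] = p, size[p] + 1
    return part

def distributed(transactions, part):
    """Count transactions spanning more than one partition (DTs)."""
    return sum(len({part[x] for x in t}) > 1 for t in transactions)

txns = [["a", "b"], ["a", "b", "c"], ["c", "d"], ["d", "e"], ["e", "f"], ["a", "b"]]
part = repartition(txns, 2)
print(part, "DTs:", distributed(txns, part))
```

Frequently co-accessed tuples land in the same partition, leaving only the rarely co-accessed pair split across partitions as a DT.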
Disparity-adjusted 3D multi-view video coding with dynamic background modelling
- Authors: Paul, Manoranjan , Evans, Christopher , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: Proceedings of IEEE International Conference on Image Processing (ICIP 2013). 15th-18th Sept, Melbourne, Vic. p.1719-1723
- Full Text: false
- Reviewed:
- Description: Capturing a scene using multiple cameras from different angles is expected to provide the necessary interactivity in the 3D space to satisfy end-users' demands for observing objects and actions from different angles and depths. Existing multiview video coding (MVC) technologies are not sufficiently agile to exploit this interactivity and are inefficient in terms of image quality and computational time. In this paper, a novel technique is proposed using disparity-adjusted 3D MVC (DA-3D-MVC) with 3D motion estimation (ME) and 3D coding to overcome these problems. In the proposed scheme, a 3D frame is formed using the same temporal frames of all disparity-adjusted views, and ME is carried out for the current 3D macroblock using the immediately previous 3D frame as a reference frame. Then, a 3D coding technique is used for better compression. As the same-temporal-position frames of all views are encoded at the same time, the proposed scheme provides better interactivity and reduced computational time compared to H.264/MVC. To improve the rate-distortion (RD) performance of the proposed technique, an additional reference frame comprising a dynamic background is also used. Experimental results reveal that the proposed scheme outperforms H.264/MVC in terms of RD performance, computational time, and interactivity.
Exploiting spatial smoothness to recover undecoded coefficients for transform domain distributed video coding
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: IEEE International Conference on Image Processing; Melbourne, Australia; 15th-18th September 2013, p. 1782-1786
- Relation: http://purl.org/au-research/grants/arc/DP1095487
- Full Text: false
- Reviewed:
- Description: In a transform domain distributed video coding scheme, the correlation between the current encoding unit, e.g. block and slice, and the corresponding side-information is modeled using a virtual channel. This correlation model is then used for rate allocation, quantization, and Wyner-Ziv coding. Since the encoder can only have an estimate of the correlation instead of the exact knowledge of the side-information, the decoder will fail to recover the quantized transformed coefficients with a nonzero probability. In this paper, we propose to integrate a scheme at the decoder to recover the undecoded coefficients using the spatial smoothness property of individual video frames. Simulation results demonstrated that, at different decoding failure probabilities, a transformed coefficient recovery scheme can significantly improve the quality of videos in terms of both PSNR and SSIM.
High quality region-of-interest coding for video conferencing based remote general practitioner training
- Authors: Murshed, Manzur , Siddique, Md Atiur Rahman , Islam, Saikat , Ali, Mortuza , Lu, Guojun , Villanueva, Elmer , Brown, James
- Date: 2013
- Type: Text , Conference paper
- Relation: Proceedings of the International Conference on eHealth, Telemedicine, and Social Medicine (eTELEMED 2013), Wilmington, DE, 1st October 2013, p. 240-245
- Full Text: false
- Reviewed:
On temporal order invariance for view-invariant action recognition
- Authors: Ul-Haq, Anwaar , Gondal, Iqbal , Murshed, Manzur
- Date: 2013
- Type: Text , Journal article
- Relation: IEEE Transactions on Circuits and Systems for Video Technology Vol. 23, no. 2 (2013), p. 203-211
- Full Text: false
- Reviewed:
- Description: View-invariant action recognition is one of the most challenging problems in computer vision. Various representations are being devised for matching actions across different viewpoints to achieve view invariance. In this paper, we explore the invariance property of the temporal order of action instances during action execution and utilize it to devise a new view-invariant action recognition approach. To ensure temporal order during matching, we utilize spatiotemporal features, feature fusion, and a temporal order consistency constraint. We start by extracting spatiotemporal cuboid features from video sequences and applying feature fusion to encapsulate within-class similarity for the same viewpoints. For each action class, we construct a feature fusion table to facilitate feature matching across different views. An action matching score is then calculated based on the global temporal order constraint and the number of matching features. Finally, the action label of the class with the maximum matching score is assigned to the query action. Experimentation is performed on the multiple-view Inria Xmas motion acquisition sequences and West Virginia University action datasets, with encouraging results that are comparable to existing view-invariant action recognition techniques.
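One simple way to realise a global temporal order constraint when matching feature sequences (not necessarily the authors' exact formulation) is the longest common subsequence, which counts feature matches that preserve temporal order. The feature labels and class templates below are invented for illustration:

```python
def ordered_matches(query, reference):
    """Longest common subsequence length: the largest number of feature
    matches between two action sequences that preserves temporal order."""
    m, n = len(query), len(reference)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if query[i] == reference[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

# Feature labels observed over time for a query and two class templates.
query = ["raise", "swing", "step", "lower"]
wave = ["raise", "swing", "swing", "lower"]
kick = ["step", "raise", "kick"]
scores = {"wave": ordered_matches(query, wave), "kick": ordered_matches(query, kick)}
print(max(scores, key=scores.get), scores)
```

Matches that occur out of order (e.g. "step" before "raise" in the kick template) do not contribute, which is the point of the consistency constraint.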
Perception-inspired background subtraction
- Authors: Haque, Mahfuzul , Murshed, Manzur
- Date: 2013
- Type: Text , Journal article
- Relation: IEEE Transactions on Circuits and Systems for Video Technology Vol. 23, no. 12 (2013), p. 2127-2140
- Full Text: false
- Reviewed:
- Description: Developing universal and context-invariant methods is one of the hardest challenges in computer vision. Background subtraction (BS), an essential precursor in most machine vision applications used for foreground detection, is no exception. Due to overreliance on statistical observations, most BS techniques show unpredictable behavior in dynamic unconstrained scenarios in which the characteristics of the operating environment are either unknown or change drastically. To achieve superior foreground detection quality across unconstrained scenarios, we propose a new technique, called perception-inspired background subtraction (PBS), which avoids overreliance on statistical observations by making key modeling decisions based on the characteristics of human visual perception. PBS exploits the human perception-inspired confidence interval to associate an observed intensity value with another intensity value during both model learning and background-foreground classification. The concept of perception-inspired confidence interval is also used for identifying redundant samples, thus ensuring the optimal number of samples in the background model. Furthermore, PBS dynamically varies the model adaptation speed (learning rate) at pixel level based on observed scene dynamics to ensure faster adaptation of changed background regions, as well as longer retention of stationary foregrounds. Extensive experimental evaluations on a wide range of benchmark datasets validate the efficacy of PBS compared to the state of the art for unconstrained video analytics.
Performance scalable motion estimation for video coding : An overview of current status and a promising approach
- Authors: Sorwar, Golam , Murshed, Manzur
- Date: 2013
- Type: Text , Book chapter
- Relation: Multimedia networking and coding Chapter 3 p. 50-75
- Full Text: false
- Reviewed:
- Description: Motion estimation is one of the major bottlenecks in real-time performance scalable video coding applications due to the high computational complexity of exhaustive search. To address this, researchers have so far focused on low-complexity motion estimation and rate-distortion optimization in isolation. Proliferation of power-constrained handheld devices with image capturing capability has created demand for a much smarter approach where motion estimation is integrated with rate control such that rate-distortion-complexity optimization can be effectively achieved. It is indeed crucial to provide such performance scalability in motion estimation to facilitate complexity management in such devices. This chapter presents an overview of motion estimation. Beginning with an introduction to the importance of motion estimation, it systematically examines various motion estimation techniques and their strengths and weaknesses, focussing primarily on block-based motion search. It then examines the limitation of the existing techniques in accommodating performance scalability, introduces a promising approach, Distance-dependent Thresholding Search (DTS) motion search, to fill in this gap, and concludes with future research directions in the field. The authors suggest that the content of the chapter will make a significant contribution and serve as a reference for multimedia signal processing research at postgraduate level.
Predictive coding of integers with real-valued predictions
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: DCC 2013 Data Compression Conference; Snowbird, USA; 20th-22nd March 2013; p. 431-440
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: In this paper, we have extended the Rice-Golomb code so that it can operate at fractional precision to efficiently exploit the real-valued predictions. Coding at infinitesimal precision allows the residuals to be modeled with the Laplace distribution. Unlike the Rice-Golomb code, which maps equally probable opposite-signed residuals to different integers, the proposed coding scheme is symmetric in the sense that, at infinitesimal precision, it assigns code words of equal length to equally probable residual intervals. The symmetry of both the Laplace distribution and the coding scheme facilitates the analysis of the proposed coding scheme to determine the average code-length and the optimal value of the associated coding parameter.
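The fractional-precision code itself is defined in the paper; for context only, here is a minimal sketch of the classical Rice-Golomb code and the signed-to-unsigned (zigzag) mapping whose asymmetry the paper's symmetric scheme addresses:

```python
def rice_encode(n, k):
    """Classical Rice code with divisor 2**k: unary quotient,
    terminating '0', then k-bit binary remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

def zigzag(x):
    """Map signed residuals to non-negative integers
    (0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...). Note that x and -x
    get different codeword lengths -- the asymmetry the paper removes."""
    return (x << 1) if x >= 0 else (-x << 1) - 1

print(rice_encode(zigzag(-3), 2))
```

With k = 2, the residual -3 zigzags to 5, giving quotient 1 and remainder 1, hence the codeword `1001`; the opposite-signed residual +3 zigzags to 6 and gets a different codeword, illustrating the sign asymmetry.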
Preface
- Authors: Karim, Mohammad , Kaykobad, Mohammad , Murshed, Manzur
- Date: 2013
- Type: Text , Book chapter
- Relation: Technical challenges and design issues in Bangla language processing Preface p. xiv-xvii
- Full Text: false
- Reviewed:
Privacy in participatory sensing systems
- Authors: Sabrina, Tishna , Murshed, Manzur
- Date: 2013
- Type: Text , Book chapter , Book
- Relation: Network and Traffic Engineering in Emerging Distributed Computing Applications. Chapter 6. p. 124-143
- Full Text: false
- Reviewed:
- Description: Participatory sensing is a revolutionary new paradigm where ordinary citizens voluntarily sense their environment using readily available sensor devices such as mobile phones, systematically study, reflect on, and share this information using existing wireless networks. It provides data collection, processing, and dissemination opportunities for socially-responsible applications spanning environmental monitoring, intelligent transportation, and public health, which are often not cost-viable using dedicated sensing infrastructure. The uniqueness of the participatory sensing system lies in its data communication infrastructure, which is constituted by the deliberate participation of community people. However, the potential lack of privacy of the participants in such a system makes it harder to ensure their voluntary contribution. Thus, preserving the privacy of the individuals contributing data is a key challenge in this area. On the other hand, data integrity is imperative to make the service trustworthy and user-friendly. Several interesting approaches have been proposed so far to protect privacy, which in turn encourages the owners of data sources to participate.
Technical challenges and design issues in Bangla language processing
- Authors: Karim, Mohammad , Kaykobad, Mohammad , Murshed, Manzur
- Date: 2013
- Type: Text , Book
- Full Text: false
- Reviewed:
Undecoded coefficients recovery in distributed video coding by exploiting spatio-temporal correlation: a linear programming approach
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference proceedings
- Relation: Proceedings of IEEE International Conference on Digital Image Computing: Techniques and Applications (DICTA 2013), Hobart, November 26-28th, 2013, p 1-7
- Full Text: false
- Reviewed:
- Description: Distributed video coding (DVC) aims at achieving low-complexity encoding in contrast to the existing video coding standards' high complexity encoding. According to the Wyner-Ziv theorem this can be achieved, under certain conditions, by independent encoding of the frames while resorting to joint decoding. However, the performance of a Wyner-Ziv coding scheme significantly depends on its knowledge about the spatio-temporal correlation of the video. Unfortunately, correlation statistics in a video widely varies both along the spatial and temporal directions. Therefore, we argue that in a feedback free transform domain DVC scheme the decoder will fail to recover all the transform coefficients with a nonzero probability. Thus, we suggest to integrate a recovery method with the decoder that aims at recovering the undecoded coefficients by exploiting the spatio-temporal correlation of the video. Besides, we extend and modify a recovery scheme, recently proposed in the context of images, for DVC so that it exploits both spatial and temporal correlations in recovering the undecoded coefficients. The essential idea of this scheme is to formulate the recovery problem as a linear optimization problem which can be solved efficiently using linear programming. Our simulation results demonstrated that the proposed scheme can significantly improve the PSNR and visual quality of the erroneous video frames produced by a DVC decoder.
Verifiable and privacy preserving electronic voting with untrusted machines
- Authors: Murshed, Manzur , Sabrina, Tishna , Iqbal, Anindya , Ali, Mortuza
- Date: 2013
- Type: Text , Conference proceedings
- Relation: Proceedings of the 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom 2013) Melbourne, Vic, 16-18th July, 2013 p. 798-804
- Full Text: false
- Reviewed:
- Description: Designing a trustworthy voting system that uses electronic voting machines (EVMs) for efficiency and accuracy is a challenging task. It is difficult, if not impossible, to ensure the trustworthiness of EVMs that possess computation, storage, and communication capabilities. Thus, an electronic voting system that does not assume trusted EVMs is clearly desirable. In this paper, we have proposed a k-anonymized electronic voting scheme that achieves this goal by assuming a hardware-controlled trusted random number generator external to the EVM. The proposed scheme relies on a k-anonymization technique to protect privacy and resorts to joint de-anonymization of the votes for counting. Since the joint de-anonymization takes into account all the votes, it is difficult to manipulate an individual vote, even by the EVM, without being detected. Besides the anonymization technique, the proposed scheme relies on standard cryptographic hashing and the concept of floating receipt to provide end-to-end verifiability that prevents coercion or vote trading.
Abnormal event detection in unseen scenarios
- Authors: Haque, Mahfuzul , Murshed, Manzur
- Date: 2012
- Type: Text , Conference proceedings
- Relation: 2012 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), Melbourne, 9-13th July, 2012. p. 1-6
- Full Text: false
- Reviewed:
- Description: Event detection in unseen scenarios is a challenging problem due to high variability of scene type, viewing direction, nature of scene entities, and environmental conditions. Existing event detection approaches mostly rely on context-specific tuning and training. Consequently, these techniques fail to achieve high scalability in a large surveillance network with hundreds of video feeds where scenario-specific tuning/training is impossible. In this paper, we present a generic event detection approach where the extracted low-level features represent the global characteristics of the target scene instead of any context-specific information. From the temporal evolution of these context-invariant features over a timeframe, a fixed number of temporal features are extracted based on the periodicity of significant transition points and associated temporal orders. Finally, top-ranked temporal features are used to train binary classifier-based event models. In this approach, supervised training and exhaustive feature extraction are required only once while building the target event models. During real-time operation in unseen scenarios, event detection is performed based on the trained event models by extracting the required features only. The proposed event detection approach has been demonstrated for abnormal event detection in completely unseen public place scenarios from benchmark datasets without additional training and tuning. Furthermore, the proposed approach has also outperformed a recent optical-flow-based event detection technique.
Analysis of location privacy risk in a plain-text communication based participatory sensing system using subset coding and mix network
- Authors: Sabrina, Tishna , Murshed, Manzur
- Date: 2012
- Type: Text , Conference proceedings
- Full Text: false
Background subtraction for real-time video analytics based on multi-hypothesis mixture-of-Gaussians
- Authors: Haque, Mahfuzul , Murshed, Manzur
- Date: 2012
- Type: Text , Conference proceedings
- Relation: 2012 IEEE Ninth International Conference on Advanced Video and Signal-Based Surveillance (AVSS), 18th-21st Sept, 2012. p. 1-6
- Full Text: false
- Reviewed:
- Description: Robust background subtraction (BS) is essential for high quality foreground detection in most video analytics systems. Recent BS techniques achieve superior detection quality mostly by exploiting the complementary strengths of multiple background models or processing stages. Consequently, these techniques fail to meet the operational requirements of real-time video analytics due to high computational overhead where BS is just the primary processing task. In this paper, we propose a new BS technique, named multi-hypothesis mixture-of-Gaussians (MH-MOG), suitable for real-time video analytics. The essential idea is to maintain a single background model based on perception-aware mixture-of-Gaussians and then generate multiple detection hypotheses with different processing bases. Finally, the complementary strengths of the hypotheses are exploited only during the detection stage to achieve superior detection quality without significant computational overhead. Comprehensive experimental evaluation validates the efficacy of MH-MOG.
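MH-MOG's perception-aware mixture and multiple hypotheses are beyond a short sketch, but the basic match-then-update loop that mixture-of-Gaussians background subtraction builds on can be illustrated with a single running Gaussian per pixel. The parameters below (learning rate, initial variance, match threshold) are illustrative, not the paper's:

```python
class PixelModel:
    """Single running Gaussian per pixel: a drastic simplification of
    mixture-of-Gaussians background modelling, enough to show the
    match-then-update loop that MH-MOG builds on."""
    def __init__(self, init, alpha=0.05, var=36.0, k=2.5):
        self.mean, self.var, self.alpha, self.k = float(init), var, alpha, k

    def observe(self, x):
        d = x - self.mean
        # Foreground if the sample falls outside k standard deviations.
        foreground = d * d > (self.k ** 2) * self.var
        if not foreground:  # matched: blend the observation into the model
            self.mean += self.alpha * d
            self.var += self.alpha * (d * d - self.var)
        return foreground

pix = PixelModel(100)
stream = [101, 99, 100, 180, 100]  # a brief bright object passes the pixel
print([pix.observe(x) for x in stream])
```

Small fluctuations are absorbed into the background model while the sudden bright sample is flagged as foreground; MH-MOG derives several such detection hypotheses from one model and fuses them at the detection stage.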