Workload-aware incremental repartitioning of shared-nothing distributed databases for scalable OLTP applications
- Authors: Kamal, Joarder , Murshed, Manzur , Buyya, Rajkumar
- Date: 2016
- Type: Text , Journal article
- Relation: Future Generation Computer Systems Vol. 56, no. March (2016), p. 421-436
- Full Text: false
- Reviewed:
- Description: On-line Transaction Processing (OLTP) applications often rely on shared-nothing distributed databases that can sustain rapid growth in data volume. Distributed transactions (DTs) that involve data tuples from multiple geo-distributed servers can adversely impact the performance of such databases, especially when the transactions are short-lived and require immediate responses. The k-way min-cut graph clustering based database repartitioning algorithms can be used to reduce the number of DTs with an acceptable level of load balancing. In Web applications, where the DT profile changes over time due to dynamically varying workload patterns, frequent database repartitioning is needed to keep up with the change. This paper addresses this emerging challenge by introducing incremental repartitioning. In each repartitioning cycle, the DT profile is learnt online and a k-way min-cut clustering algorithm is applied on a special sub-graph representing all DTs as well as those non-DTs that have at least one tuple in a DT. The latter ensures that the min-cut algorithm minimally reintroduces new DTs from the non-DTs while maximally transforming existing DTs into non-DTs in the new partitioning. Potential load imbalance risk is mitigated by applying the graph clustering algorithm on the finer logical partitions instead of the servers and relying on random one-to-one cluster-to-partition mapping that naturally balances out loads. Inter-server data migration due to repartitioning is kept in check with two special mappings favouring the current partition of the majority of tuples in a cluster: the many-to-one version minimising data migrations alone and the one-to-one version reducing data migration without affecting load balancing. A distributed data lookup process, inspired by the roaming protocol in mobile networks, is introduced to efficiently handle data migration without affecting scalability. The effectiveness of the proposed framework is evaluated on realistic TPC-C workloads comprehensively using graph, hypergraph, and compressed hypergraph representations used in the literature. To compare the performance of any incremental repartitioning framework without any bias of the external min-cut algorithm due to graph size variations, a transaction generation model is developed that can maintain a target number of unique transactions in any arbitrary observation window, irrespective of the new transaction arrival rate. The overall impact of DTs at any instance is estimated from the exponential moving average of the recurrence period of unique transactions to avoid transient fluctuations. The effectiveness and adaptability of the proposed incremental repartitioning framework for transactional workloads have been established with extensive simulations on both range partitioned and consistent hash partitioned databases. © 2015 Elsevier B.V.
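The mechanics of one repartitioning cycle can be sketched in a few lines. The following is a minimal illustration, assuming a hypothetical data model in which each transaction is a set of (server, tuple) pairs; the k-way min-cut step itself would be handed to an external graph partitioner (e.g., METIS) and is not reproduced here.

```python
import random
from collections import defaultdict

def build_repartitioning_subgraph(transactions):
    """Keep all DTs plus every non-DT that shares at least one tuple with a DT."""
    dts = [t for t in transactions if len({srv for srv, _ in t}) > 1]
    dt_tuples = {tup for t in dts for _, tup in t}
    movable_non_dts = [t for t in transactions
                       if len({srv for srv, _ in t}) == 1
                       and any(tup in dt_tuples for _, tup in t)]
    # Edge weights: tuple co-occurrence counts over the retained transactions.
    edges = defaultdict(int)
    for t in dts + movable_non_dts:
        tuples = sorted({tup for _, tup in t})
        for i in range(len(tuples)):
            for j in range(i + 1, len(tuples)):
                edges[(tuples[i], tuples[j])] += 1
    return edges  # feed to an external k-way min-cut partitioner

def random_one_to_one_mapping(clusters, partitions):
    """Random one-to-one cluster-to-partition mapping that balances load."""
    assert len(clusters) == len(partitions)
    shuffled = list(partitions)
    random.shuffle(shuffled)
    return dict(zip(clusters, shuffled))
```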
Workload-aware incremental repartitioning of shared-nothing distributed databases for scalable cloud applications
- Authors: Kamal, Joarder , Murshed, Manzur , Buyya, Rajkumar
- Date: 2014
- Type: Text , Conference paper
- Relation: 2014 IEEE/ACM 7th International Conference on Utility and Cloud Computing (UCC) p. 213-222
- Full Text: false
- Reviewed:
- Description: Cloud applications often rely on shared-nothing distributed databases that can sustain rapid growth in data volume. Distributed transactions (DTs) that involve data tuples from multiple geo-distributed servers can adversely impact the performance of such databases, especially when the transactions are short-lived and require immediate responses. The k-way min-cut graph clustering algorithm has been found effective in reducing the number of DTs with an acceptable level of load balancing. The benefits of such a static partitioning scheme, however, are short-lived in Cloud applications with dynamically varying workload patterns, where the DT profile changes over time. This paper addresses this emerging challenge by introducing incremental repartitioning. In each repartitioning cycle, the DT profile is learnt online and a k-way min-cut clustering algorithm is applied on a special sub-graph representing all DTs as well as those non-DTs that have at least one tuple in a DT. The latter ensures that the min-cut algorithm minimally reintroduces new DTs from the non-DTs while maximally transforming existing DTs into non-DTs in the new partitioning. Potential load imbalance risk is mitigated by applying the graph clustering algorithm on the finer logical partitions instead of the servers and relying on random one-to-one cluster-to-partition mapping that naturally balances out loads. Inter-server data migration due to repartitioning is kept in check with two special mappings favouring the current partition of the majority of tuples in a cluster: the many-to-one version minimising data migrations alone and the one-to-one version reducing data migration without affecting load balancing. A distributed data lookup process, inspired by the roaming protocol in mobile networks, is introduced to efficiently handle data migration without affecting scalability. The effectiveness of the proposed framework is evaluated on realistic TPC-C workloads comprehensively using graph, hypergraph, and compressed hypergraph representations used in the literature. Simulation results convincingly support incremental repartitioning against static partitioning.
VSAMS : Video stabilization approach for multiple sensors
- Authors: Ul-Haq, Anwaar , Gondal, Iqbal , Murshed, Manzur
- Date: 2010
- Type: Text , Conference proceedings
- Relation: 2010 International Conference on Digital Image Computing: Techniques and Applications, Dec. 2010, pp.411-416
- Full Text: false
- Description: Video stabilization is often regarded as a largely solved problem, but several related issues still need research attention. One such issue arises when multiple unstable video streams, which often contain complementary information, come from multiple sensors. To enhance system performance, the instability should be removed in a single pass rather than by stabilizing each sensor individually. This paper proposes a cooperative video stabilization framework, VSAMS, for multisensory aerial data based on robust boosting curves that encapsulate the stability of high spatial frequency information, as used by flying parakeets (budgerigars). To reduce shake and jitter while preserving the actual camera path, a multistage smoothing approach is devised. Experiments are performed on multisensory UAV data containing infrared and electro-optical video streams. Subjective and objective quality evaluation demonstrates the effectiveness of the proposed cooperative stabilization framework.
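As background on how a multistage smoothing stage typically operates, the sketch below accumulates per-frame motion into a camera path and smooths it with progressively wider moving averages. The function names and radii are illustrative assumptions; the boosting-curve stability criterion that VSAMS actually uses is not reproduced here.

```python
import numpy as np

def moving_average(trajectory, radius):
    """Box-filter a (n_frames, n_params) camera trajectory."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    padded = np.pad(trajectory, ((radius, radius), (0, 0)), mode="edge")
    return np.stack([np.convolve(padded[:, k], kernel, mode="valid")
                     for k in range(trajectory.shape[1])], axis=1)

def multistage_smooth(per_frame_motion, radii=(5, 15, 30)):
    """Smooth the cumulative camera path in several passes, then re-derive
    the per-frame correction that removes shake but keeps the intended path."""
    path = np.cumsum(per_frame_motion, axis=0)
    smoothed = path
    for r in radii:
        smoothed = moving_average(smoothed, r)
    return per_frame_motion + (smoothed - path)
```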
Virtual machine consolidation in cloud data centers using ACO metaheuristic
- Authors: Ferdaus, Md Hasanul , Murshed, Manzur , Calheiros, Rodrigo , Buyya, Rajkumar
- Date: 2014
- Type: Text , Conference paper
- Relation: 20th International Conference on Parallel Processing, Euro-Par 2014 Vol. 8632 LNCS, p. 306-317
- Full Text: false
- Reviewed:
- Description: In this paper, we propose the AVVMC VM consolidation scheme that focuses on balanced resource utilization of servers across different computing resources (CPU, memory, and network I/O) with the goal of minimizing power consumption and resource wastage. Since the VM consolidation problem is strictly NP-hard and computationally infeasible for large data centers, we propose adaptation and integration of the Ant Colony Optimization (ACO) metaheuristic with balanced usage of computing resources based on vector algebra. Our simulation results show that AVVMC outperforms existing methods and achieves improvement in both energy consumption and resource wastage reduction.
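The balanced-utilization idea can be made concrete with a small scoring sketch: each server's normalized (CPU, memory, network I/O) usage is treated as a vector, and placements that keep it close to the all-equal diagonal are preferred. The scoring function below is an illustrative stand-in for the paper's vector-algebra formulation, not its exact formula.

```python
import math

def imbalance(usage):
    """Distance of a normalized usage vector from the balanced diagonal."""
    mean = sum(usage) / len(usage)
    return math.sqrt(sum((u - mean) ** 2 for u in usage))

def placement_score(server_usage, vm_demand, capacity):
    """Lower is better: reject overflow, then prefer balanced, well-used servers."""
    if any(s + d > c for s, d, c in zip(server_usage, vm_demand, capacity)):
        return float("inf")  # infeasible placement
    new = [(s + d) / c for s, d, c in zip(server_usage, vm_demand, capacity)]
    return imbalance(new) - sum(new) / len(new)

# An ACO ant would sample among feasible servers with probability biased by
# pheromone intensity and a heuristic score such as this one.
servers = [([0.5, 0.2, 0.3], [1.0, 1.0, 1.0]), ([0.4, 0.4, 0.4], [1.0, 1.0, 1.0])]
vm = [0.2, 0.2, 0.1]
scores = [placement_score(u, vm, c) for u, c in servers]
```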
Video coding using arbitrarily shaped block partitions in globally optimal perspective
- Authors: Paul, Manoranjan , Murshed, Manzur
- Date: 2011
- Type: Text , Journal article
- Relation: EURASIP Journal on Advances in Signal Processing Vol. 16 (2011)
- Full Text:
- Reviewed:
- Description: Algorithms using content-based patterns to segment moving regions at the macroblock (MB) level have exhibited good potential for improved coding efficiency when embedded into the H.264 standard as an extra mode. The content-based pattern generation (CPG) algorithm provides a locally optimal result, as only one pattern can be optimally generated from a given set of moving regions; it fails to provide optimal results when multiple patterns are to be generated from the entire set. Obviously, a globally optimal solution that clusters the set and then generates multiple patterns would enhance the performance further, but such a solution is not achievable due to the non-polynomial nature of the clustering problem. In this paper, we propose a near-optimal content-based pattern generation (OCPG) algorithm that outperforms the existing approach. Coupling OCPG, which generates a set of patterns after clustering the MBs into several disjoint sets, with a direct pattern selection algorithm that allows all the MBs in multiple pattern modes outperforms the existing pattern-based coding when embedded into H.264.
Video coding focusing on block partitioning and occlusion
- Authors: Paul, Manoranjan , Murshed, Manzur
- Date: 2010
- Type: Text , Journal article
- Relation: IEEE Transactions on Image Processing Vol. 19, no. 3 (2010), p. 691-701
- Full Text: false
- Reviewed:
- Description: Among the existing block partitioning schemes, pattern-based video coding (PVC) has already established its superiority at low bit-rates. Its innovative segmentation process with regular-shaped pattern templates is very fast, as it avoids handling the exact shape of the moving objects. It also judiciously encodes the pattern-uncovered background segments, capturing a high level of interblock temporal redundancy without any motion compensation, which is favoured by the rate-distortion optimizer at low bit-rates. The existing PVC technique, however, uses a number of content-sensitive thresholds, and setting them to predefined values risks ignoring some of the macroblocks that would otherwise be encoded with patterns. Furthermore, occluded background can potentially degrade the performance of this technique. In this paper, a robust PVC scheme is proposed by removing all the content-sensitive thresholds, introducing a new similarity metric, considering multiple top-ranked patterns by the rate-distortion optimizer, and refining the Lagrangian multiplier of the H.264 standard for efficient embedding. A novel pattern-based residual encoding approach is also integrated to address the occlusion issue. Once embedded into the H.264 Baseline profile, the proposed PVC scheme improves perceptual image quality significantly, by at least 0.5 dB, in low bit-rate video coding applications. A similar trend is observed for moderate to high bit-rate applications when the proposed scheme replaces the bi-directional predictive mode in the H.264 High profile.
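The pattern-ranking step lends itself to a short sketch: the moving region of a macroblock is binarized and each pattern template is scored, after which the top-ranked templates compete in the rate-distortion optimizer as extra modes. The overlap-minus-waste score below is a generic stand-in; the paper's actual similarity metric is not reproduced.

```python
import numpy as np

def moving_region(current_mb, reference_mb, threshold=2):
    """Binary map of pixels that changed between the reference and current MB."""
    return np.abs(current_mb.astype(int) - reference_mb.astype(int)) > threshold

def rank_patterns(region, patterns):
    """Sort pattern template indices by how well each covers the moving region."""
    scores = []
    for idx, p in enumerate(patterns):
        covered = np.logical_and(region, p).sum()  # moving pixels the pattern keeps
        wasted = np.logical_and(~region, p).sum()  # pattern pixels spent on static area
        scores.append((covered - wasted, idx))
    return [idx for _, idx in sorted(scores, reverse=True)]
```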
Very low bit rate video coding
- Authors: Paul, Manoranjan , Murshed, Manzur
- Date: 2014
- Type: Text , Book
- Full Text: false
- Reviewed:
Verifiable and privacy preserving electronic voting with untrusted machines
- Authors: Murshed, Manzur , Sabrina, Tishna , Iqbal, Anindya , Ali, Mortuza
- Date: 2013
- Type: Text , Conference proceedings
- Relation: Proceedings of the 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom 2013) Melbourne, Vic, 16-18th July, 2013 p. 798-804
- Full Text: false
- Reviewed:
- Description: Designing a trustworthy voting system that uses electronic voting machines (EVMs) for efficiency and accuracy is a challenging task. It is difficult, if not impossible, to ensure the trustworthiness of EVMs that possess computation, storage, and communication capabilities. Thus, an electronic voting system that does not assume trusted EVMs is clearly desirable. In this paper, we propose a k-anonymized electronic voting scheme that achieves this goal by assuming a hardware-controlled trusted random number generator external to the EVM. The proposed scheme relies on a k-anonymization technique to protect privacy and resorts to joint de-anonymization of the votes for counting. Since the joint de-anonymization takes into account all the votes, it is difficult to manipulate an individual vote, even by the EVM, without being detected. Besides the anonymization technique, the proposed scheme relies on standard cryptographic hashing and the concept of a floating receipt to provide end-to-end verifiability that prevents coercion and vote trading.
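To illustrate the role of hashing and receipts in end-to-end verifiability, the sketch below commits each anonymized vote with a salted hash and chains the commitments so that tampering with any single vote invalidates every later receipt. This is a generic commitment-chain sketch under assumed primitives, not the paper's full k-anonymization protocol.

```python
import hashlib, os

def commit(vote: bytes):
    """Salted hash commitment; the salt stays with the voter's receipt."""
    salt = os.urandom(16)
    return salt, hashlib.sha256(salt + vote).digest()

def receipt_chain(commitments):
    """Fold commitments into a hash chain published for public verification."""
    link = b"\x00" * 32
    links = []
    for digest in commitments:
        link = hashlib.sha256(link + digest).digest()
        links.append(link)
    return links

# A voter can recompute their salted commitment and check that it is embedded
# in the published chain, without the receipt revealing the chosen candidate.
salts_and_digests = [commit(v) for v in (b"A", b"B", b"A")]
chain = receipt_chain([d for _, d in salts_and_digests])
```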
Unsaturated throughput analysis of a novel interference-constrained multi-channel random access protocol for cognitive radio networks
- Authors: Hasan, Rashidul , Murshed, Manzur
- Date: 2012
- Type: Text , Conference proceedings
- Relation: Proceedings of the 23rd IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2012) Sydney 9-12th September, 2012 p. 178-194
- Full Text: false
- Reviewed:
- Description: Opportunistic access of licensed spectrum using a cognitive radio network (CRN) is attracting research attention due to its ability to improve the utilisation of this scarce resource without affecting the primary users (PUs). To improve the wide acceptability of CRNs, they must be equipped with efficient protocols to deal with multiple primary networks and to provision QoS guarantees for demand-driven applications of the secondary users (SUs). In this paper, an accurate unsaturated throughput analysis is presented for our novel CSMA/CA-based multi-channel cognitive radio medium access control (MCR-MAC) protocol. Developed by modifying the 4-way handshaking-based IEEE 802.11 DCF, MCR-MAC dynamically assigns contending SUs to free channels using an innovative random arbitration scheme while keeping cognitive interference to the PUs in check by reducing the packet size. Not only does the analytical model cover the full range of loads, from very light load to saturation, but extensive simulation results also validate the accuracy of the analysis.
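For readers unfamiliar with this style of analysis, throughput models of CSMA/CA protocols are usually built around a fixed point between the per-station transmission probability and the conditional collision probability. The sketch below solves the classical saturated single-channel fixed point of Bianchi's IEEE 802.11 DCF model, the kind of model that analyses like this one extend to unsaturated, multi-channel settings; it is background material, not the paper's model.

```python
def bianchi_fixed_point(n, W=32, m=5, iters=500):
    """n contending stations, minimum window W, m backoff stages (saturated)."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)  # conditional collision probability
        tau_new = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * tau_new   # damped iteration for stability
    return tau, p

tau, p = bianchi_fixed_point(n=10)
```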
Undecoded coefficients recovery in distributed video coding by exploiting spatio-temporal correlation: a linear programming approach
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference proceedings
- Relation: Proceedings of IEEE International Conference on Digital Image Computing: Techniques and Applications (DICTA 2013), Hobart, November 26-28th, 2013, p. 1-7
- Full Text: false
- Reviewed:
- Description: Distributed video coding (DVC) aims at achieving low-complexity encoding, in contrast to the high-complexity encoding of the existing video coding standards. According to the Wyner-Ziv theorem, this can be achieved, under certain conditions, by independent encoding of the frames while resorting to joint decoding. However, the performance of a Wyner-Ziv coding scheme depends significantly on its knowledge of the spatio-temporal correlation of the video. Unfortunately, correlation statistics in a video vary widely along both the spatial and temporal directions. Therefore, we argue that in a feedback-free transform-domain DVC scheme the decoder will fail, with nonzero probability, to recover all the transform coefficients. Thus, we suggest integrating a recovery method with the decoder that aims at recovering the undecoded coefficients by exploiting the spatio-temporal correlation of the video. Furthermore, we extend and modify a recovery scheme, recently proposed in the context of images, for DVC so that it exploits both spatial and temporal correlations in recovering the undecoded coefficients. The essential idea of this scheme is to formulate the recovery problem as a linear optimization problem, which can be solved efficiently using linear programming. Our simulation results demonstrate that the proposed scheme can significantly improve the PSNR and visual quality of the erroneous video frames produced by a DVC decoder.
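The linearization at the heart of such a formulation is standard: L1 data and smoothness terms become linear objectives with slack variables, and quantization intervals become box constraints. The sketch below, with illustrative variable names and weighting, recovers a 1-D run of coefficients with scipy's LP solver; the paper's exact objective is not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def recover(side_info, lo, hi, lam=0.5):
    """Minimize sum|x - y| + lam * sum|x_i - x_{i+1}| subject to lo <= x <= hi."""
    n = len(side_info)
    # Variables: x (n), u (n) data-term slacks, v (n-1) smoothness slacks.
    c = np.concatenate([np.zeros(n), np.ones(n), lam * np.ones(n - 1)])
    A, b = [], []
    for i in range(n):                    # |x_i - y_i| <= u_i
        for sign in (1, -1):
            row = np.zeros(3 * n - 1)
            row[i], row[n + i] = sign, -1
            A.append(row); b.append(sign * side_info[i])
    for i in range(n - 1):                # |x_i - x_{i+1}| <= v_i
        for sign in (1, -1):
            row = np.zeros(3 * n - 1)
            row[i], row[i + 1], row[2 * n + i] = sign, -sign, -1
            A.append(row); b.append(0.0)
    bounds = list(zip(lo, hi)) + [(0, None)] * (2 * n - 1)
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
    return res.x[:n]

# An undecoded middle coefficient is pulled toward the side information while
# every coefficient stays inside its known quantization interval.
x = recover(side_info=[1.2, 1.0, 0.7], lo=[0.5, -4.0, 0.5], hi=[1.5, 4.0, 1.5])
```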
Threshold-free pattern-based low bit rate video coding
- Authors: Paul, Manoranjan , Murshed, Manzur
- Date: 2008
- Type: Text , Conference paper
- Relation: 2008 15th IEEE International Conference on Image Processing p. 1584-1587
- Full Text: false
- Reviewed:
- Description: Pattern-based video coding (PVC) has already established its superiority over the recent video coding standard H.264 at low bit rates because of an extra pattern-mode that segments out the arbitrary shape of the moving region within the macroblock (MB). To determine the pattern-mode, however, PVC uses three thresholds to reduce the number of MBs coded using this mode. By setting these content-sensitive thresholds to predefined values, the technique risks ignoring some MBs that would otherwise be selected by the rate-distortion optimization function for this mode. Consequently, the ultimate achievable performance is sacrificed to save motion estimation time. In this paper, a novel PVC scheme is proposed that removes all thresholds in determining this mode and hence achieves more efficient performance without knowing the content of the video sequences. To keep computational complexity in check, pattern motion is approximated from the motion vector of the MB. In addition, an efficient pattern similarity metric and new Lagrangian multipliers are also developed. The experimental results confirm that this new scheme improves the image quality by at least 0.5 dB and 1.0 dB compared to the existing PVC and H.264, respectively.
Texture based vein biometrics for human identification : A comparative study
- Authors: Bashar, Khayrul , Murshed, Manzur
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 42nd IEEE Computer Software and Applications Conference, COMPSAC 2018; Tokyo, Japan; 23rd-27th July 2018 Vol. 2, p. 571-576
- Full Text:
- Reviewed:
- Description: Hand vein biometrics is an important modality for human authentication and liveness detection in many applications. Reliable feature extraction is vital to any biometric system. Over the past years, two major categories of vein features, namely vein structures and vein image textures, have been proposed for hand dorsal vein based biometric identification. Of them, texture features seem important as they can combine skin micro-textures with vein properties. In this study, we have performed a comparative study to identify potential texture features and feature-classifier combinations that produce efficient vein biometric systems. Seven texture features (HOG, GABOR, GLCM, SSF, DWT, WPT, and LBP) and three multiclass classifiers (LDA, ESVM, and KNN) were explored towards the supervised identification of humans from vein images. An experiment with 400 infrared (IR) hand images from 40 adults indicates the superior performance of the histogram of oriented gradients (HOG) and simple local statistical feature (SSF) with LDA and ESVM classifiers in terms of average accuracy (> 90%), average F-score (> 58%), and average specificity (> 93%). The decision-level fusion of the LDA and ESVM classifiers with single texture features showed improved performance (by 2.2 to 13.2% of average F-score) over individual classifiers for human identification with IR hand vein images.
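One of the feature-classifier combinations above, HOG with LDA, can be sketched directly with scikit-image and scikit-learn. The image loading and HOG parameters below are placeholder assumptions; only the general pipeline shape follows the study.

```python
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def hog_features(images):
    """Histogram-of-oriented-gradients descriptor per grayscale hand image."""
    return np.array([hog(img, orientations=9, pixels_per_cell=(16, 16),
                         cells_per_block=(2, 2)) for img in images])

# Placeholder data: in the study, 400 IR hand images from 40 subjects.
images = [np.random.rand(128, 128) for _ in range(40)]
labels = np.repeat(np.arange(10), 4)  # hypothetical subject IDs

X = hog_features(images)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, labels, cv=4)
```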
Temporal texture characterization : A review
- Authors: Rahman, Ashfaqur , Murshed, Manzur
- Date: 2008
- Type: Text , Book chapter
- Relation: Computational Intelligence in Multimedia Processing: Recent Advances p. 291-316
- Full Text: false
- Reviewed:
- Description: A large class of objects commonly experienced in real-world scenarios exhibits characteristic motion with certain forms of regularity. Contemporary literature coined the term "temporal texture" to identify image sequences of such motion patterns that exhibit spatiotemporal regularity. The study of temporal textures dates back to the early nineties, and many researchers in the computer vision community have formulated techniques to analyse them. This chapter aims to provide a comprehensive literature survey of the existing temporal texture characterization techniques.
Technical challenges and design issues in Bangla language processing
- Authors: Karim, Mohammad , Kaykobad, Mohammad , Murshed, Manzur
- Date: 2013
- Type: Text , Book
- Full Text: false
- Reviewed:
Symbol coding of Laplacian distributed prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Digital Signal Processing: A Review Journal Vol. 44, no. 1 (2015), p. 76-87
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: Predictive coding schemes proposed in the literature essentially model the residuals with discrete distributions. However, real-valued residuals can arise in predictive coding, for example, from the use of an r-order linear predictor specified by r real-valued coefficients. In this paper, we propose a symbol-by-symbol coding scheme for the Laplace distribution, which closely models the distribution of real-valued residuals in practice. To efficiently exploit the real-valued predictions at a given precision, the proposed scheme combines the processes of residual computation and coding, in contrast to conventional schemes that separate the two. In the context of an adaptive predictive coding framework, where the source statistics must be learnt from the data, the proposed scheme has the advantage of a lower 'model cost' as it involves learning only one parameter. We also analyze the proposed parametric coding scheme to establish the relationship between the optimal value of the coding parameter and the scale parameter of the Laplace distribution. Our experimental results demonstrate the compression efficiency and computational simplicity of the proposed scheme in adaptive coding of residuals against the widely used arithmetic coding, Rice-Golomb coding, and the Merhav-Seroussi-Weinberger scheme adopted in JPEG-LS. © 2015 Elsevier Inc. All rights reserved.
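As context for the comparison, the Rice-Golomb baseline mentioned above codes each zigzag-mapped residual as a unary quotient plus a k-bit remainder. The sketch below is that classical baseline, not the proposed Laplacian symbol coder.

```python
def zigzag(r):
    """Map signed residual to a non-negative symbol: 0,-1,1,-2,... -> 0,1,2,3,..."""
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(residuals, k):
    bits = []
    for r in residuals:
        s = zigzag(r)
        q, rem = s >> k, s & ((1 << k) - 1)
        bits.append("1" * q + "0" + (format(rem, f"0{k}b") if k else ""))
    return "".join(bits)

def rice_decode(bitstring, k, count):
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bitstring[i] == "1":
            q += 1; i += 1
        i += 1                            # skip the terminating 0
        rem = int(bitstring[i:i + k], 2) if k else 0
        i += k
        s = (q << k) | rem
        out.append(s // 2 if s % 2 == 0 else -(s + 1) // 2)
    return out

coded = rice_encode([0, -1, 3, 2, -4], k=2)
assert rice_decode(coded, k=2, count=5) == [0, -1, 3, 2, -4]
```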
Soil moisture, organic carbon, and nitrogen content prediction with hyperspectral data using regression models
- Authors: Datta, Dristi , Paul, Manoranjan , Murshed, Manzur , Teng, Shyh Wei , Schmidtke, Leigh
- Date: 2022
- Type: Text , Journal article
- Relation: Sensors (Basel, Switzerland) Vol. 22, no. 20 (2022)
- Full Text:
- Reviewed:
- Description: Soil moisture, soil organic carbon, and nitrogen content prediction are significant fields of study as they are directly related to plant health and food production. Direct estimation of these soil properties with traditional methods, for example, the oven-drying technique and chemical analysis, is a time- and resource-consuming approach that can cover only small areas. With the significant development of remote sensing and hyperspectral (HS) imaging technologies, soil moisture, carbon, and nitrogen can be estimated over vast areas. This paper presents a generalized approach to predicting these three essential soil contents through a comprehensive study of various machine learning (ML) models, taking dimensionality reduction of the feature space into account. In this study, we have used three popular benchmark HS datasets captured in Germany and Sweden. The efficacy of different ML algorithms is evaluated for predicting soil content, and significant improvement is obtained when a specific range of bands is selected. The performance of the ML models is further improved by applying principal component analysis (PCA), an unsupervised dimensionality-reduction method. The effect of soil temperature on soil moisture prediction is also evaluated, and the results show that considering soil temperature alongside the HS bands does not improve prediction accuracy. However, the combined effect of band selection and feature transformation using PCA significantly enhances the prediction accuracy for soil moisture, carbon, and nitrogen content. This study represents a comprehensive analysis of a wide range of established ML regression models using data preprocessing, effective band selection, and data dimension reduction, and attempts to identify the feature combinations that provide the best accuracy. The outcomes of the ML models are verified with validation techniques, and the best- and worst-case scenarios in terms of soil content are noted. The proposed approach outperforms existing estimation techniques.
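The band-selection, PCA, and regression stages described above compose naturally into a scikit-learn pipeline. The band window, component count, and regressor below are illustrative placeholders run on dummy data, not the paper's tuned configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Dummy stand-in for (n_samples, n_bands) hyperspectral reflectance and a
# soil-moisture target correlated with a band window.
rng = np.random.default_rng(0)
X = rng.random((200, 250))
y = X[:, 60:120].mean(axis=1) + 0.05 * rng.standard_normal(200)

X_sel = X[:, 60:120]  # hypothetical informative band range
model = make_pipeline(PCA(n_components=10),
                      RandomForestRegressor(n_estimators=200, random_state=0))
r2_scores = cross_val_score(model, X_sel, y, cv=5, scoring="r2")
```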
Search and tracking algorithms for swarms of robots: A survey
- Authors: Senanayake, Madhubhashi , Senthooran, Ilankaikaone , Barca, Jan , Chung, Hoam , Kamruzzaman, Joarder , Murshed, Manzur
- Date: 2016
- Type: Text , Journal article
- Relation: Robotics and Autonomous Systems Vol. 75, no. Part B (2016), p. 422-434
- Full Text: false
- Reviewed:
- Description: Target search and tracking is a classical but difficult problem in many research domains, including computer vision, wireless sensor networks and robotics. We review the seminal works that addressed this problem in the area of swarm robotics, which is the application of swarm intelligence principles to the control of multi-robot systems. Robustness, scalability and flexibility, as well as distributed sensing, make swarm robotic systems well suited for the problem of target search and tracking in real-world applications. We classify the works we review according to the variations and aspects of the search and tracking problems they addressed. As this is a particularly application-driven research area, the adopted taxonomy makes this review serve as a quick reference guide to our readers in identifying related works and approaches according to their problem at hand. By no means is this an exhaustive review, but an overview for researchers who are new to the swarm robotics field, to help them easily start off their research. © 2015 Elsevier B.V.
Scarf : Semi-automatic colorization and reliable image fusion
- Authors: Ul-Haq, Anwaar , Gondal, Iqbal , Murshed, Manzur
- Date: 2010
- Type: Text , Conference paper
- Relation: 2010 Digital Image Computing: Techniques and Applications p. 435-440
- Full Text: false
- Reviewed:
- Description: Nighttime imagery poses significant challenges to enhancement due to the loss of color information and the inability of a single sensor to capture complete visual information at night. To cope with this challenge, multiple sensors are used to capture reliable nighttime imagery, which places additional demands on reliable visual information fusion. In this paper, we present Scarf, a system that performs reliable image fusion using advanced feature extraction techniques and a novel optimization-based semi-automatic colorization that conforms to the human visual system. Subjective and objective quality evaluation demonstrates the effectiveness of the proposed system.
Robust background subtraction based on perceptual mixture-of-Gaussians with dynamic adaptation speed
- Authors: Haque, Mahfuzul , Murshed, Manzur
- Date: 2012
- Type: Text , Conference proceedings
- Relation: 2012 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 9th-13th July, Melbourne, 2012
- Full Text: false
- Reviewed:
- Description: In this paper, we propose a new background subtraction technique based on perceptual mixture-of-Gaussians (PMOG). Unlike numerous variants of the classical MOG based approach [1], which can ensure reliable detection only in known operating environments through proper parameter tuning, PMOG shows superior detection performance across dynamic unconstrained scenarios without any tuning. This is due to PMOG's intrinsic capability of exploiting several perceptual characteristics of the human visual system for a better understanding of the operating environment, avoiding blind reliance on statistical observations. Furthermore, the proposed technique dynamically varies the model adaptation speed, i.e., the learning rate, based on observed scene statistics for faster adaptation to background changes and better persistence of detected foreground entities. Comprehensive experimental evaluation on a number of standard datasets validates the robustness of the technique compared to the state-of-the-art.
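For orientation, the classical MOG baseline that PMOG builds on is available in OpenCV, and a per-frame learning rate can be passed explicitly. The foreground-ratio rule below is a crude illustrative stand-in for PMOG's scene-statistics-driven adaptation speed, and the file name is a placeholder.

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
cap = cv2.VideoCapture("surveillance.avi")  # hypothetical input clip
rate = 0.005
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame, learningRate=rate)
    fg_ratio = cv2.countNonZero(mask) / mask.size
    # Adapt faster when most of the scene changed (e.g., a lighting shift);
    # adapt slowly otherwise so genuine foreground persists.
    rate = 0.05 if fg_ratio > 0.5 else 0.005
cap.release()
```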
Range-free passive localization using static and mobile sensors
- Authors: Iqbal, Anindya , Murshed, Manzur
- Date: 2012
- Type: Text , Conference proceedings
- Relation: 2012 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), San Francisco, CA, 25th-28th June, 2012 p. 1-6
- Full Text: false
- Reviewed:
- Description: In passive localization, sensors try to locate an event without any knowledge of the event's emitted power, making it a more challenging problem than active localization. Existing passive localization schemes use expensive and noise-vulnerable range-based techniques. In this paper, we propose, to the best of our knowledge for the first time, a cost-effective range-free passive localization scheme exploiting a hybrid sensor network model where mobile sensors are deployed on demand once an event is sensed by a static sensor. Efficient use of mobile sensors leads to two concomitant optimization problems: (1) positioning the mobile sensors so that the expected possible event location area is minimized; and (2) minimizing their overall traversed distance. To solve the first problem, we have developed a novel arc-coding based range-free localization technique that can accurately define the area of possible event locations from the feedback of arbitrarily placed sensors, without relying on expensive hardware to estimate the range of signals. We have achieved significantly high localization accuracy with a small number of mobile sensors even under significant environmental noise. To solve the second problem, three alternative deployment strategies for the mobile sensors were simulated to recommend the best.
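The range-free principle is easy to demonstrate in its simplest binary form: every sensor reports only detect/no-detect, and the possible event region is the intersection of the detecting sensors' sensing discs minus the silent sensors' discs. The grid-based sketch below illustrates that principle only; the paper's arc-coding refinement is not reproduced.

```python
import numpy as np

def feasible_region(sensors, detections, radius, grid):
    """grid: (m, 2) candidate points; sensors: (k, 2); detections: (k,) booleans."""
    mask = np.ones(len(grid), dtype=bool)
    for s, detected in zip(sensors, detections):
        inside = np.linalg.norm(grid - s, axis=1) <= radius
        mask &= inside if detected else ~inside
    return grid[mask]

xs, ys = np.meshgrid(np.linspace(0, 100, 201), np.linspace(0, 100, 201))
grid = np.column_stack([xs.ravel(), ys.ravel()])
sensors = np.array([[30.0, 40.0], [50.0, 45.0], [40.0, 70.0]])
region = feasible_region(sensors, [True, True, False], radius=25.0, grid=grid)
# Mobile sensors would then be positioned where new feedback shrinks this
# region fastest, before being dispatched along minimal-distance routes.
```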