A novel depth edge prioritization based coding technique to boost-UP HEVC performance
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2016
- Type: Text , Conference paper
- Relation: 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
- Full Text: false
- Reviewed:
- Description: In addition to texture, multiview video employs depth coding for the reconstruction of 3D video and free-viewpoint video. Exploiting texture-depth correlations, a number of methods in the literature reuse texture motion vectors for the corresponding depth coding to reduce encoding time by avoiding the costly motion estimation process. However, texture similarity is not always equivalent to the corresponding depth similarity, especially at edges. Since these approaches cannot explicitly detect and encode acute edge motions of depth objects, they fail to reach similar or improved rate-distortion (RD) performance against the High Efficiency Video Coding (HEVC) reference test model (HM). For more accurate motion detection and modeling, the proposed technique exploits an extra Pattern Mode comprising a group of pattern templates (GPTs) with rectangular and non-rectangular object shapes and edges beyond the existing HEVC block partitioning modes. Moreover, the proposed Pattern Mode encodes only the motion areas and skips the background areas. The experimental results show that the proposed technique saves 30% encoding time and improves the Bjontegaard delta peak signal-to-noise ratio (BD-PSNR) by an average of 0.1 dB compared to the HM.
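One way to picture the Pattern Mode's template selection: a binarized motion map of a block is compared against each template in the codebook and the closest one is kept, so only the moving region it covers needs encoding. This is an illustrative sketch, not the paper's implementation; the 4x4 templates, the Hamming-distance match and the block size are assumptions.

```python
import numpy as np

def best_pattern_template(motion_map, templates):
    """Pick the pattern template closest to a binary motion map.

    motion_map: (N, N) binary array, 1 = moving pixel.
    templates:  list of (N, N) binary arrays (a hypothetical GPT codebook).
    Returns (index of best template, its Hamming distance).
    """
    distances = [int(np.sum(motion_map != t)) for t in templates]
    best = int(np.argmin(distances))
    return best, distances[best]

# Hypothetical 4x4 codebook: one vertical-edge and one diagonal template.
t_vertical = np.array([[1, 1, 0, 0]] * 4)
t_diagonal = np.tril(np.ones((4, 4), dtype=int))
codebook = [t_vertical, t_diagonal]

block_motion = np.array([[1, 1, 0, 0],
                         [1, 1, 0, 0],
                         [1, 0, 0, 0],
                         [1, 1, 0, 0]])
idx, dist = best_pattern_template(block_motion, codebook)
print(f"template {idx} selected (Hamming distance {dist})")
```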
Anonymization techniques for preserving data quality in participatory sensing
- Authors: Sabrina, Tishna , Murshed, Manzur , Iqbal, Anindya
- Date: 2016
- Type: Text , Conference paper
- Relation: 2016 IEEE 41st Conference on Local Computer Networks (LCN) p. 607-610
- Full Text: false
- Reviewed:
- Description: Participatory sensing is a revolutionary new paradigm in which citizens voluntarily sense their surroundings using readily available sensing devices such as mobile phones and share this information for the mutual benefit of community members. To encourage ample user participation, ensuring their privacy is essential. Existing techniques that attempt to protect location privacy with spatial cloaking suffer from irrecoverable data quality degradation. To the best of our knowledge, very few works have provided a solution that preserves high data quality/utility at the destination server, and those that did suffered from unacceptable computational overhead. This paper presents an improved deterministic alternative and also a faster variant that exploits several optimization opportunities. Theoretical formulations and extensive simulation results are presented to establish the applicability of our proposed techniques.
Lossless depth map coding using binary tree based decomposition and context-based arithmetic coding
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2016
- Type: Text , Conference proceedings , Conference paper
- Relation: 2016 IEEE International Conference on Multimedia and Expo, ICME 2016; Seattle, United States; 11th-15th July 2016; published in Proceedings of the 2016 IEEE International Conference on Multimedia and Expo Vol. 2016-August, p. 1-6
- Full Text: false
- Reviewed:
- Description: Depth maps are becoming increasingly important in the context of emerging video coding and processing applications. Depth images represent the scene surface and are characterized by areas of smoothly varying grey levels separated by sharp edges at object boundaries. To enable high-quality view rendering at the receiver side, preservation of these characteristics is important, and lossless coding avoids the rendering artifacts in synthesized views that depth compression artifacts would cause. In this paper, we propose a binary tree based lossless depth coding scheme that arranges the residual frame into integer or binary residual bitmaps. High spatial correlation in the depth residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as units using context-based arithmetic coding. On the standard 3D video sequences, the proposed lossless depth coding achieves compression ratios in the range of 20 to 80. © 2016 IEEE.
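A minimal sketch of the binary tree decomposition idea: a binary residual bitmap is split recursively, alternating axes, until a block is homogeneous (all 0s or all 1s) or too small to be worth splitting; each resulting block would then be handed to the arithmetic coder as a unit. The block sizes and split rule here are illustrative assumptions, not the paper's exact criteria.

```python
import numpy as np

def decompose(bitmap, min_size=2):
    """Recursively split a binary bitmap into homogeneous blocks.

    Returns (row, col, height, width) blocks that are either all-0/all-1
    or have reached the minimum size; in the actual scheme each block
    would then be entropy-coded as a unit.
    """
    def rec(r, c, h, w, axis):
        block = bitmap[r:r + h, c:c + w]
        if block.min() == block.max() or min(h, w) <= min_size:
            return [(r, c, h, w)]          # homogeneous or tiny: a leaf
        if axis == 0:                      # split rows, alternate axis
            m = h // 2
            return rec(r, c, m, w, 1) + rec(r + m, c, h - m, w, 1)
        m = w // 2                         # split columns
        return rec(r, c, h, m, 0) + rec(r, c + m, h, w - m, 0)
    h, w = bitmap.shape
    return rec(0, 0, h, w, 0)

bmp = np.zeros((16, 16), dtype=int)
bmp[4:8, 4:12] = 1                         # one rectangular residual region
print(len(decompose(bmp)), "homogeneous blocks")
```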
Poster : Privacy protection for real world participatory sensing system
- Authors: Abrar, Nafeez , Iqbal, Anindya , Zaman, Shaolin , Murshed, Manzur
- Date: 2016
- Type: Text , Conference paper
- Relation: 14th Annual International Conference on Mobile Systems, Applications, and Services Companion
- Full Text: false
- Reviewed:
- Description: The Participatory Sensing System (PSS) is an emerging technology for collecting useful information, driven by the growing use of smartphones among community members. It has a wide range of applications, such as environmental monitoring, product price sharing and health monitoring. However, participants must share their location and other information, which poses a high privacy risk. The main contribution of this work is a technique for PSS that provides privacy protection for the participants within manageable time in the real world.
Search and tracking algorithms for swarms of robots: A survey
- Authors: Senanayake, Madhubhashi , Senthooran, Ilankaikaone , Barca, Jan , Chung, Hoam , Kamruzzaman, Joarder , Murshed, Manzur
- Date: 2016
- Type: Text , Journal article
- Relation: Robotics and Autonomous Systems Vol. 75, no. Part B (2016), p. 422-434
- Full Text: false
- Reviewed:
- Description: Target search and tracking is a classical but difficult problem in many research domains, including computer vision, wireless sensor networks and robotics. We review the seminal works that address this problem in the area of swarm robotics, the application of swarm intelligence principles to the control of multi-robot systems. Robustness, scalability and flexibility, as well as distributed sensing, make swarm robotic systems well suited to target search and tracking in real-world applications. We classify the reviewed works according to the variations and aspects of the search and tracking problems they address. As this is a particularly application-driven research area, the adopted taxonomy makes this review a quick reference guide for identifying related works and approaches for the problem at hand. This is by no means an exhaustive review, but an overview to help researchers who are new to the swarm robotics field start off their research. © 2015 Elsevier B.V.
Workload-aware incremental repartitioning of shared-nothing distributed databases for scalable OLTP applications
- Authors: Kamal, Joarder , Murshed, Manzur , Buyya, Rajkumar
- Date: 2016
- Type: Text , Journal article
- Relation: Future Generation Computer Systems Vol. 56, no. March (2016), p. 421-436
- Full Text: false
- Reviewed:
- Description: On-line Transaction Processing (OLTP) applications often rely on shared-nothing distributed databases that can sustain rapid growth in data volume. Distributed transactions (DTs) that involve data tuples from multiple geo-distributed servers can adversely impact the performance of such databases, especially when the transactions are short-lived and require immediate responses. k-way min-cut graph clustering based database repartitioning algorithms can be used to reduce the number of DTs with an acceptable level of load balancing. In Web applications, where the DT profile changes over time due to dynamically varying workload patterns, frequent database repartitioning is needed to keep up with the change. This paper addresses this emerging challenge by introducing incremental repartitioning. In each repartitioning cycle, the DT profile is learnt online and a k-way min-cut clustering algorithm is applied on a special sub-graph representing all DTs as well as those non-DTs that have at least one tuple in a DT. The latter ensures that the min-cut algorithm minimally reintroduces new DTs from the non-DTs while maximally transforming existing DTs into non-DTs in the new partitioning. The potential load-imbalance risk is mitigated by applying the graph clustering algorithm on finer logical partitions instead of the servers and relying on a random one-to-one cluster-to-partition mapping that naturally balances out loads. Inter-server data migration due to repartitioning is kept in check with two special mappings favouring the current partition of the majority of tuples in a cluster: the many-to-one version minimises data migrations alone, while the one-to-one version reduces data migration without affecting load balancing. A distributed data lookup process, inspired by the roaming protocol in mobile networks, is introduced to efficiently handle data migration without affecting scalability. The effectiveness of the proposed framework is evaluated comprehensively on realistic TPC-C workloads using the graph, hypergraph, and compressed hypergraph representations used in the literature. To compare the performance of any incremental repartitioning framework without the bias an external min-cut algorithm introduces through graph size variations, a transaction generation model is developed that can maintain a target number of unique transactions in any arbitrary observation window, irrespective of the new transaction arrival rate. The overall impact of DTs at any instant is estimated from the exponential moving average of the recurrence period of unique transactions to avoid transient fluctuations. The effectiveness and adaptability of the proposed incremental repartitioning framework for transactional workloads are established with extensive simulations on both range-partitioned and consistent-hash-partitioned databases. © 2015 Elsevier B.V.
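The smoothing of DT impact can be pictured with a short sketch: each unique transaction's recurrence period (time since it was last observed) is folded into an exponential moving average so that transient bursts do not dominate the impact estimate. The smoothing factor and the impact formula below are illustrative assumptions, not the paper's exact model.

```python
class RecurrenceEMA:
    """Exponential moving average of a transaction's recurrence period.

    A shorter smoothed period means the (distributed) transaction recurs
    often and therefore has a higher ongoing impact.
    """

    def __init__(self, alpha=0.2):
        self.alpha = alpha          # smoothing factor (assumed value)
        self.last_seen = {}         # txn id -> last observation time
        self.ema_period = {}        # txn id -> smoothed recurrence period

    def observe(self, txn_id, now):
        if txn_id in self.last_seen:
            period = now - self.last_seen[txn_id]
            prev = self.ema_period.get(txn_id, period)
            self.ema_period[txn_id] = (
                self.alpha * period + (1 - self.alpha) * prev
            )
        self.last_seen[txn_id] = now

    def impact(self, txn_id):
        """Illustrative impact score: inverse of the smoothed period."""
        period = self.ema_period.get(txn_id)
        return 0.0 if not period else 1.0 / period

tracker = RecurrenceEMA()
for t in (0, 5, 9, 14, 40):        # one transaction observed repeatedly
    tracker.observe("T1", t)
print(round(tracker.impact("T1"), 3))
```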
A hybrid wireless sensor network framework for range-free event localization
- Authors: Iqbal, Anindya , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Ad Hoc Networks Vol. 27, no. (2015), p. 81-98
- Full Text: false
- Reviewed:
- Description: In event localization, wireless sensors try to locate the source of an event from its emitted power. This is more challenging than sensor localization as the power level at the source of an event can neither be predicted with precision nor controlled. Considering the emerging trend of long sensing ranges for cost-effective sensor deployment, locating events within a region much smaller than the sensing area of a single sensor has gained research interest. This paper proposes the first range-free event localization framework, which avoids the expensive hardware needed by range-based counterparts. Our approach first develops a sensing range model from statistical information on the emitted power of a type of event so that user-defined event-detection quality can be provisioned using a minimal network of static sensors. Then an accurate event location boundary estimation technique is developed from the sensing feedback, which also facilitates guided expansion of the area of possible event location (APEL) to deal with sensing errors. Finally, a user-defined event-localization quality guarantee is provisioned cost-effectively by inviting mobile sensors on demand to target positions. Analytical solutions are provided wherever appropriate and comprehensive simulations are carried out to evaluate localization performance. The proposed event localization technique outperforms the state-of-the-art range-based counterpart (Xu et al., 2011) in realistic environments with path loss, shadow fading, and sensor positioning errors.
A novel depth motion vector coding exploiting spatial and inter-component clustering tendency
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: Visual Communications and Image Processing, VCIP 2015; Singapore; 13th-16th December 2015 p. 1-4
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: Motion vectors of depth-maps in multiview and free-viewpoint videos exhibit strong spatial as well as inter-component clustering tendency. This paper presents a novel coding technique that first compresses the multidimensional bitmaps of macroblock mode and then encodes only the non-zero components of motion vectors. The bitmaps are partitioned into disjoint cuboids using binary tree based decomposition so that the 0's and 1's are either highly polarized or further sub-partitioning is unlikely to achieve any compression. Each cuboid is entropy-coded as a unit using binary arithmetic coding. This technique is capable of exploiting the spatial and inter-component correlations efficiently without the restriction of scanning the bitmap in any specific linear order as needed by run-length coding. As encoding of non-zero component values no longer requires denoting the zero value, further compression efficiency is achieved. Experimental results on standard multiview test video sequences have comprehensively demonstrated the superiority of the proposed technique, achieving overall coding gain against the state-of-the-art in the range [22%, 54%] and on average 38%. © 2015 IEEE.
A novel motion classification based intermode selection strategy for HEVC performance improvement
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Neurocomputing Vol. 173, no. Part 3 (2015), p. 1211-1220
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: The High Efficiency Video Coding (HEVC) standard adopts several new approaches to achieve higher coding efficiency (approximately 50% bit-rate reduction) than its predecessor H.264/AVC at the same perceptual image quality. However, HEVC's algorithmic complexity has also hugely increased computational time compared to H.264/AVC, and reducing the encoding time while preserving similar video quality is a demanding task. In this paper, we propose a novel, efficient intermode selection technique, incorporated into the HEVC framework, that predicts motion estimation and motion compensation modes between the current and reference blocks and performs faster inter-mode selection based on three dissimilar motion types across divergent video sequences. Instead of exploring all the modes exhaustively, we select a subset of candidate modes, and the final mode is determined from the selected subset by the lowest Lagrangian cost. The experimental results reveal that average encoding time can be reduced by 40% with similar rate-distortion performance compared to the exhaustive mode selection strategy in HEVC.
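The final-mode decision is standard rate-distortion optimisation: among the candidate subset, pick the mode minimising the Lagrangian cost J = D + λR. A minimal sketch; the mode names, distortion and rate values, and λ below are assumed, not taken from the paper.

```python
def select_mode(candidates, lmbda):
    """Return the mode minimising the Lagrangian cost J = D + lambda * R."""
    return min(candidates, key=lambda m: m["D"] + lmbda * m["R"])

# Hypothetical distortion (SSD) and rate (bits) for a candidate subset.
subset = [
    {"mode": "SKIP",  "D": 950.0, "R": 2},
    {"mode": "2Nx2N", "D": 610.0, "R": 38},
    {"mode": "2NxN",  "D": 575.0, "R": 61},
]
best = select_mode(subset, lmbda=10.0)
print(best["mode"])   # the subset member with the lowest J
```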
An analysis of human engagement behaviour using descriptors from human feedback, eye tracking, and saliency modelling
- Authors: Podder, Pallab , Paul, Manoranjan , Debnath, Tanmoy , Murshed, Manzur
- Date: 2015
- Type: Text , Conference proceedings
- Relation: 2015 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2015; Adelaide; 23rd-25th November 2015; published in Digital Image Computing: Techniques and Applications (DICTA), 2015 International Conference
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: In this paper, an analysis of human engagement behaviour with video is presented based on real-life experiments. An engagement model could be employed in classroom education, enhancing programming skills, reading, etc. Two groups of people, independent of one another, watched eighteen video clips separately at different times. The first group's eye gaze locations, right and left pupil sizes, and eye blinking patterns were recorded by a state-of-the-art Tobii eye tracker. The second group, comprising video experts, identified the most significant attention points of the videos. A well-known bottom-up visual saliency model, Graph-Based Visual Saliency (GBVS), was also utilized to generate salient points for the videos. Combining all of these descriptors, the introduced behaviour analysis demonstrates the level of the participants' concentration on the videos.
An efficient cooperative lane-changing algorithm for sensor- and communication-enabled automated vehicles
- Authors: Awal, Tanveer , Murshed, Manzur , Ali, Mortuza
- Date: 2015
- Type: Text , Conference proceedings
- Full Text: false
- Description: A key goal in transportation systems is to attain efficient road traffic by minimizing trip time, fuel consumption and pollutant emission without compromising safety. In dense traffic, lane changes and merging are often key causes of safety hazards, traffic breakdowns and travel delays. In this paper, we propose an efficient cooperative lane-changing algorithm, CLA, for sensor- and communication-enabled automated vehicles to reduce lane-changing bottlenecks. For discretionary lane-changing, we consider the advantages of the subject vehicle, the follower in the current lane and k (an integer) lag vehicles in the target lane to maximize speed gains. Our algorithm simultaneously minimizes the impact of a lane change on traffic flow and the overall trip time, fuel consumption and pollutant emission. For mandatory lane-changing, CLA dissociates the decision-making point from the actual mandatory lane-changing point and computes a suitable lane-changing slot to minimize lane-changing (merging) time. Our algorithm outperforms the potential cooperative lane-changing algorithm MOBIL proposed by Kesting et al. [1] in terms of merging time and rate, waiting time, fuel consumption, average velocity and flow (especially at the point just ahead of the merging point), at the cost of slightly increased average trip time for the main-road vehicles compared to MOBIL. We also highlight important directions for further research. © 2015 IEEE.
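For reference, the MOBIL baseline that CLA is compared against decides a lane change from anticipated accelerations: the subject's gain, weighted by a politeness factor against the losses it imposes on the new and old followers, must exceed a threshold, subject to a safety limit on the new follower's deceleration. A minimal MOBIL-style sketch with assumed parameter values:

```python
def mobil_lane_change_ok(a_c, a_c_new, a_n, a_n_new, a_o, a_o_new,
                         politeness=0.3, a_thr=0.1, b_safe=4.0):
    """MOBIL-style lane-change test (after Kesting et al.).

    a_*     : current accelerations of the subject (c), the new follower
              in the target lane (n) and the old follower (o), in m/s^2.
    a_*_new : anticipated accelerations after the lane change.
    Returns True if the change is both safe and advantageous.
    """
    # Safety criterion: new follower must not brake harder than b_safe.
    if a_n_new < -b_safe:
        return False
    # Incentive criterion: own gain outweighs the imposed losses.
    incentive = (a_c_new - a_c) + politeness * (
        (a_n_new - a_n) + (a_o_new - a_o)
    )
    return incentive > a_thr

# Example: subject gains 0.8 m/s^2, followers lose only a little.
print(mobil_lane_change_ok(0.2, 1.0, 0.5, 0.3, 0.0, 0.1))  # True
```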
Cuboid coding of depth motion vectors using binary tree based decomposition
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference paper
- Relation: Data Compression Conference (DCC), 2015 p. 469
- Full Text: false
- Reviewed:
- Description: Motion vectors of depth-maps in multiview and free-viewpoint videos exhibit strong spatial as well as inter-component clustering tendency. This paper presents a novel motion vector coding technique that first compresses the multidimensional bitmaps of macroblock mode information and then encodes only the non-zero components of motion vectors. The bitmaps are partitioned into disjoint cuboids using binary tree based decomposition so that the 0's and 1's are either highly polarized or further sub-partitioning is unlikely to achieve any compression. Each cuboid is entropy-coded as a unit using binary arithmetic coding. This technique is capable of exploiting the spatial and inter-component correlations efficiently without the restriction of scanning the bitmap in any specific linear order as needed by run-length coding. As encoding of non-zero component values no longer requires denoting the zero value, further compression efficiency is achieved. Experimental results on standard multiview test video sequences have comprehensively demonstrated the superiority of the proposed technique, achieving overall coding gains against the state-of-the-art in the range [17%, 51%] and on average 31%.
Efficient coding strategy for HEVC performance improvement by exploiting motion features
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference paper
- Relation: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, QLD, 19th-24th April, 2015 p. 1414-1418
- Full Text: false
- Reviewed:
- Description: The striking feature of the High Efficiency Video Coding (HEVC) standard is a 50% bit-rate reduction compared to its predecessor H.264/AVC at the same perceptual image quality. The time complexity - a congenital issue of HEVC - has also increased to attain that compression ratio. Reducing the encoding time while preserving the expected video quality is therefore a demanding task. Our contribution is to cut the computational time by efficiently selecting appropriate block-partitioning modes in HEVC using motion features based on phase correlation. In this paper, we use phase correlation between the current and reference blocks to extract three motion features and combine them to determine the binary motion pattern of the current block. The motion pattern is then matched against a codebook of predefined pattern templates to determine a subset of the inter modes. Only the selected modes are exhaustively motion estimated and compensated for a coding unit. The experimental outcomes demonstrate that the average computational time can be reduced by 30% relative to HEVC while providing improved rate-distortion performance.
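Phase correlation itself is the standard frequency-domain motion estimator: normalise the cross-power spectrum of the two blocks and locate the inverse-FFT peak, whose position gives the dominant displacement and whose sharpness hints at how coherent the motion is. A minimal sketch of that building block; the three features derived from it in the paper are not reproduced here.

```python
import numpy as np

def phase_correlation(current, reference):
    """Return the dominant (dy, dx) shift between two equal-size blocks."""
    F1 = np.fft.fft2(current)
    F2 = np.fft.fft2(reference)
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12          # keep phase, drop magnitude
    surface = np.real(np.fft.ifft2(cross))  # correlation surface
    peak = np.unravel_index(np.argmax(surface), surface.shape)
    # Map peak indices to signed shifts (FFT wrap-around).
    shifts = [p if p <= s // 2 else p - s
              for p, s in zip(peak, surface.shape)]
    return tuple(shifts), surface.max()     # displacement, peak strength

ref = np.zeros((16, 16))
ref[4:8, 4:8] = 1.0
cur = np.roll(ref, shift=(2, 3), axis=(0, 1))   # block moved by (2, 3)
print(phase_correlation(cur, ref))              # dominant shift (2, 3)
```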
Fast inter-mode decision strategy for HEVC on depth videos
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference paper
- Relation: 2015 18th International Conference on Computer and Information Technology (ICCIT) p. 288-293
- Full Text: false
- Reviewed:
- Description: Multiview video utilizes both texture and depth video information from different angles to create a 3D video for a more realistic view of a scene. Unlike texture, a depth video is a grey-scale map that represents the distance between the camera and 3D points in a scene. Existing multiview video coding (MVC) techniques, including the 3D-High Efficiency Video Coding (HEVC) standard, encode texture and depth videos jointly, exploiting texture video information for the corresponding depth video coding (DVC) to reduce computational time, as the texture and depth videos have motion similarity in representing the same scene. This strategy has two limitations: (i) more bits and computational time might be required due to the large residuals caused by misalignment between depth and texture edges, and (ii) switching between different views may take longer due to the increased dependency between texture and depth. In this paper, we propose an independent DVC technique using HEVC (a single-view coding standard) to improve rate-distortion (RD) performance and to reduce computational time by improving switching speed. We use motion features - the underlying criteria for selecting different modes in the standard - to reduce the number of motion estimation (ME) and motion compensation (MC) modes in HEVC, selecting a subset of modes that provides almost the same RD performance. Experimental outcomes reveal a 48% reduction in the encoding time of the HEVC encoder with similar RD performance and better interactivity.
Foreground motion and spatial saliency-based efficient HEVC Video Coding
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference paper
- Relation: 2015 International Conference on Image and Vision Computing New Zealand (IVCNZ)
- Full Text: false
- Reviewed:
- Description: High Efficiency Video Coding (HEVC) cannot provide real-time facilities on processing- and battery-constrained electronic devices, as its encoding time complexity increases several-fold compared to its predecessor. Numerous researchers address this limitation by reducing the number of motion estimation (ME) modes, analysing homogeneity, residuals and statistical correlation among different modes. Although these approaches save some encoding time, they cannot reach rate-distortion (RD) performance similar to the HEVC encoder as they depend solely on the existing Lagrangian cost function (LCF) within the HEVC framework. To overcome this limitation, in this paper we capture visually attentive foreground motion and salient regions (FMSR), which are sensitive to the human visual system for quality assessment. The FMSR features, captured by visual attention and dynamic background modelling, are adaptively synthesized to determine a subset of candidate modes. This preprocessing phase is independent of the LCF. Since the proposed technique avoids exhaustive exploration of all modes with simple criteria, it reduces encoding time by 27% on average. With efficient selection of FMSR-based block-partitioning modes, it can also improve the peak signal-to-noise ratio (PSNR) by up to 1.0 dB.
From Tf-Idf to learning-to-rank : An overview
- Authors: Ibrahim, Yousef , Murshed, Manzur
- Date: 2015
- Type: Text , Book chapter
- Relation: Handbook of research on innovations in information retrieval, analysis, and management Chapter 3 p. 62-109
- Full Text: false
- Reviewed:
- Description: Ranking a set of documents based on their relevance with respect to a given query is a central problem of information retrieval (IR). Traditionally, unsupervised scoring methods such as tf-idf, BM25 and language models have been used, but recently the supervised machine learning framework has been successfully applied to learn a ranking function; this is called the learning-to-rank (LtR) problem. There are a few surveys on LtR in the literature, but these reviews provide very little assistance to someone who, before delving into the technical details of different algorithms, wants a broad understanding of LtR systems and their evolution from, and relation to, traditional IR methods. This chapter tries to address this gap in the literature. Mainly the following aspects are discussed: the fundamental concepts of IR, the motivation behind LtR, the evolution of LtR from and its relation to the traditional methods, the relationship between LtR and other supervised machine learning tasks, the general issues pertaining to an LtR algorithm, and the theory of LtR. © 2016 by IGI Global. All rights reserved.
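As a concrete anchor for the traditional side of that evolution, a minimal tf-idf scorer (one of several common variants; the chapter also covers BM25 and language models) might look like this sketch:

```python
import math
from collections import Counter

def tf_idf_score(query_terms, doc_terms, doc_freq, num_docs):
    """Score one document for a query with a basic log-scaled tf-idf.

    query_terms / doc_terms: lists of tokens.
    doc_freq: term -> number of documents containing it.
    num_docs: total documents in the collection.
    """
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        if tf[term] == 0 or doc_freq.get(term, 0) == 0:
            continue
        idf = math.log(num_docs / doc_freq[term])
        score += (1 + math.log(tf[term])) * idf   # tf weight * idf weight
    return score

docs = {"d1": "cheap flights to rome".split(),
        "d2": "rome travel guide rome hotels".split()}
df = Counter(t for words in docs.values() for t in set(words))
query = "rome flights".split()
ranked = sorted(docs, reverse=True,
                key=lambda d: tf_idf_score(query, docs[d], df, len(docs)))
print(ranked)   # d1 first: "flights" is rare, "rome" is everywhere
```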
Lossless image coding using binary tree decomposition of prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur , Shahriyar, Shampa , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference proceedings
- Full Text: false
- Description: State-of-the-art lossless image compression schemes, such as JPEG-LS and CALIC, have been proposed in the context-adaptive predictive coding framework. These schemes involve a prediction step followed by context-adaptive entropy coding of the residuals. Significant spatial correlation remains among the residuals after prediction, and the efficient schemes proposed in the literature rely on context-adaptive entropy coding to exploit it. In this paper, we propose an alternative approach to exploiting this spatial correlation. The proposed scheme also involves a prediction stage; however, we resort to a binary tree based hierarchical decomposition technique to exploit the spatial correlation efficiently. On a set of standard test images, the proposed scheme, using the same predictor as JPEG-LS, achieved an overall compression gain of 2.1% over JPEG-LS. © 2015 IEEE.
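The JPEG-LS predictor the scheme reuses is the well-known median edge detector (MED): from the left (a), above (b) and above-left (c) neighbours, it predicts min(a, b) or max(a, b) at an edge and a + b − c on a smooth gradient. A minimal sketch with assumed pixel values:

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector (MED) prediction.

    a: left neighbour, b: above neighbour, c: above-left neighbour.
    """
    if c >= max(a, b):
        return min(a, b)        # horizontal or vertical edge detected
    if c <= min(a, b):
        return max(a, b)
    return a + b - c            # smooth-gradient plane

# Residual for one pixel: actual value minus MED prediction.
actual = 120
residual = actual - med_predict(a=118, b=121, c=117)
print(residual)   # small residual on smooth content
```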
Network-aware virtual machine placement and migration in cloud data centres
- Authors: Ferdaus, Md Hasanul , Murshed, Manzur , Calheiros, Rodrigo , Buyya, Rajkumar
- Date: 2015
- Type: Text , Book chapter
- Relation: Emerging research in cloud distributed computing systems (Advances in systems analysis, software engineering, and high performance computing (ASASEHPC) book series) Chapter 2 p. 42-91
- Full Text: false
- Reviewed:
- Description: With the pragmatic realization of computing as a utility, Cloud Computing has recently emerged as a highly successful alternative IT paradigm. Cloud providers are deploying large-scale data centers across the globe to meet Cloud customers' compute, storage, and network resource demands. The efficiency and scalability of these data centers, as well as the performance of the hosted applications, highly depend on the allocation of data center resources. Very recently, network-aware Virtual Machine (VM) placement and migration has been developing as a very promising technique for optimizing compute-network resource utilization and energy consumption and minimizing network traffic. This chapter presents the relevant background information and a detailed taxonomy that characterizes and classifies the various components of VM placement and migration techniques, as well as an elaborate survey and comparative analysis of the state-of-the-art techniques. Besides highlighting the various aspects and insights of the network-aware VM placement and migration strategies and algorithms proposed by the research community, the survey identifies the benefits and limitations of the existing techniques and discusses future research directions.
Symbol coding of Laplacian distributed prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Digital Signal Processing: A Review Journal Vol. 44, no. 1 (2015), p. 76-87
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: Predictive coding schemes proposed in the literature essentially model the residuals with discrete distributions. However, real-valued residuals can arise in predictive coding, for example from the use of an r-th order linear predictor specified by r real-valued coefficients. In this paper, we propose a symbol-by-symbol coding scheme for the Laplace distribution, which closely models the distribution of real-valued residuals in practice. To efficiently exploit the real-valued predictions at a given precision, the proposed scheme essentially combines the processes of residual computation and coding, in contrast to conventional schemes that separate the two. In the adaptive predictive coding framework, where the source statistics must be learnt from the data, the proposed scheme has the advantage of lower 'model cost' as it involves learning only one parameter. We also analyse the proposed parametric coding scheme to establish the relationship between the optimal value of the coding parameter and the scale parameter of the Laplace distribution. Our experimental results demonstrate the compression efficiency and computational simplicity of the proposed scheme in adaptive coding of residuals against the widely used arithmetic coding, Rice-Golomb coding, and the Merhav-Seroussi-Weinberger scheme adopted in JPEG-LS.
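For context, the Rice-Golomb baseline the paper compares against codes a non-negative integer n with parameter k as a unary quotient n >> k followed by k remainder bits; signed residuals are commonly zigzag-mapped to non-negative integers first. A minimal sketch (the zigzag mapping and the value of k here are illustrative choices):

```python
def zigzag(v):
    """Map a signed residual to a non-negative integer: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (v << 1) if v >= 0 else ((-v << 1) - 1)

def rice_encode(n, k):
    """Golomb-Rice code of non-negative n with parameter k, as a bit string.

    Quotient n >> k in unary (q ones then a terminating zero),
    followed by the k low-order remainder bits in binary.
    """
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

# Encode a few Laplacian-like residuals with k = 2.
for v in (0, -1, 3, -6):
    print(v, "->", rice_encode(zigzag(v), k=2))
```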
Distributed database management systems : Architectural design choices for the cloud
- Authors: Kamal, Joarder , Murshed, Manzur
- Date: 2014
- Type: Text , Book chapter
- Relation: Cloud Computing : Challenges, Limitations and R&D Solutions (Computer Communications and Networks series) Chapter 2 p. 23-50
- Full Text: false
- Reviewed:
- Description: Cloud computing has changed the way we exploit software and systems. The two decades' practice of architecting solutions and services over the Internet has been transformed within the past few years. End users now rely more on paying for what they use instead of purchasing a full-phase licence. System owners are also in a rapid hunt for business profits by deploying their services in the Cloud, thus maximising global outreach and minimising overall management costs. However, deploying and scaling Cloud applications regionally and globally is highly challenging. In this context, distributed data management systems in the Cloud promise rapid elasticity and horizontal scalability so that Cloud applications can sustain enormous growth in data volume, velocity, and value. Distributed data replication and rapid partitioning are the two fundamental hammers to nail down these challenges: while replication ensures database read scalability and geo-reachability, data partitioning favours database write scalability and system-level load balance. System architects and administrators often face difficulties in managing a multi-tenant distributed database system at Cloud scale as the underlying workload characteristics change frequently. In this chapter, the inherent challenges of such phenomena are discussed in detail alongside their historical backgrounds. Finally, potential ways to overcome such architectural barriers are presented in the light of recent research and development in this area.