An efficient cooperative lane-changing algorithm for sensor- and communication-enabled automated vehicles
- Authors: Awal, Tanveer , Murshed, Manzur , Ali, Mortuza
- Date: 2015
- Type: Text , Conference proceedings
- Full Text: false
- Description: A key goal in transportation systems is to attain efficient road traffic by minimizing trip time, fuel consumption and pollutant emission without compromising safety. In dense traffic, lane changes and merging are frequent causes of safety hazards, traffic breakdowns and travel delays. In this paper, we propose an efficient cooperative lane-changing algorithm, CLA, for sensor- and communication-enabled automated vehicles to reduce lane-changing bottlenecks. For discretionary lane-changing, we consider the advantages of the subject vehicle, the follower in the current lane and k (an integer) lag vehicles in the target lane to maximize speed gains. Our algorithm simultaneously minimizes the impact of a lane change on traffic flow and the overall trip time, fuel consumption and pollutant emission. For mandatory lane-changing, CLA dissociates the decision-making point from the actual lane-changing point and computes a suitable lane-changing slot to minimize lane-changing (merging) time. Our algorithm outperforms the prominent cooperative lane-changing algorithm MOBIL, proposed by Kesting et al. [1], in terms of merging time and rate, waiting time, fuel consumption, average velocity and flow (especially just upstream of the merging point), at the cost of a slightly increased average trip time for main-road vehicles compared to MOBIL. We also highlight important directions for further research. © 2015 IEEE.
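The MOBIL baseline that CLA is compared against decides a lane change from acceleration gains. A minimal sketch of its incentive and safety criteria follows (illustrative parameter values; this is the comparison baseline, not the CLA algorithm itself):

```python
def mobil_lane_change(gain_subject, gain_new_follower, gain_old_follower,
                      new_follower_acc, politeness=0.5,
                      threshold=0.1, b_safe=4.0):
    """MOBIL decision rule (Kesting et al.): change lanes when the subject's
    acceleration gain plus the politeness-weighted gains of the affected
    followers exceeds a threshold, provided the new follower in the target
    lane would not have to brake harder than the safe limit b_safe (m/s^2)."""
    if new_follower_acc < -b_safe:   # safety criterion: no dangerous braking
        return False
    incentive = gain_subject + politeness * (gain_new_follower + gain_old_follower)
    return incentive > threshold
```

The politeness factor is what makes the rule cooperative: gains and losses of the surrounding vehicles enter the subject's own decision.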
Cuboid coding of depth motion vectors using binary tree based decomposition
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference paper
- Relation: Data Compression Conference (DCC), 2015 p. 469
- Full Text: false
- Reviewed:
- Description: Motion vectors of depth maps in multiview and free-viewpoint videos exhibit a strong spatial as well as inter-component clustering tendency. This paper presents a novel motion vector coding technique that first compresses the multidimensional bitmaps of macroblock mode information and then encodes only the non-zero components of the motion vectors. The bitmaps are partitioned into disjoint cuboids using binary tree based decomposition so that the 0s and 1s are either highly polarized or further sub-partitioning is unlikely to achieve any compression. Each cuboid is entropy-coded as a unit using binary arithmetic coding. This technique exploits the spatial and inter-component correlations efficiently without the restriction of scanning the bitmap in a specific linear order, as run-length coding requires. As encoding of the non-zero component values no longer requires denoting the zero value, further compression efficiency is achieved. Experimental results on standard multiview test video sequences comprehensively demonstrate the superiority of the proposed technique, achieving an overall coding gain over the state of the art in the range 17%-51%, 31% on average.
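The partitioning step described above can be illustrated with a small sketch, reconstructed from the abstract rather than the authors' code: recursively halve a 2D bitmap along its longer axis until every leaf rectangle is uniformly 0 or 1, or too small to be worth splitting further.

```python
def decompose(bitmap, r0, r1, c0, c1, min_size=2):
    """Recursively split a 2D 0/1 bitmap into rectangles whose bits are
    polarized (here: fully uniform), in the spirit of binary tree based
    decomposition. Returns a list of (r0, r1, c0, c1, value) leaves,
    where value is None for a small mixed leaf."""
    bits = [bitmap[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    if all(b == bits[0] for b in bits):               # polarized: stop here
        return [(r0, r1, c0, c1, bits[0])]
    if r1 - r0 <= min_size and c1 - c0 <= min_size:   # not worth splitting
        return [(r0, r1, c0, c1, None)]
    if r1 - r0 >= c1 - c0:                            # halve the longer axis
        mid = (r0 + r1) // 2
        return (decompose(bitmap, r0, mid, c0, c1, min_size) +
                decompose(bitmap, mid, r1, c0, c1, min_size))
    mid = (c0 + c1) // 2
    return (decompose(bitmap, r0, r1, c0, mid, min_size) +
            decompose(bitmap, r0, r1, mid, c1, min_size))
```

Each leaf would then be entropy-coded as a unit; the arithmetic-coding stage is not reproduced here.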
Efficient coding strategy for HEVC performance improvement by exploiting motion features
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference paper
- Relation: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Brisbane, QLD, 19th-24th April, 2015 p. 1414-1418
- Full Text: false
- Reviewed:
- Description: The defining feature of the High Efficiency Video Coding (HEVC) standard is an approximately 50% bit-rate reduction over its predecessor H.264/AVC at the same perceptual image quality. This compression gain, however, comes with a substantial increase in time complexity, so reducing encoding time while preserving the expected quality of the video sequences is a demanding task for researchers. Our contribution trims the computational time by efficiently selecting appropriate block-partitioning modes in HEVC using motion features based on phase correlation. In this paper, we use phase correlation between the current and reference blocks to extract three motion features and combine them to determine a binary motion pattern for the current block. The motion pattern is then matched against a codebook of predefined pattern templates to determine a subset of the inter-modes. Only the selected modes undergo exhaustive motion estimation and compensation for a coding unit. The experimental outcomes demonstrate that the average computational time can be reduced by 30% relative to HEVC while providing improved rate-distortion performance.
Fast coding strategy for HEVC by motion features and saliency applied on difference between successive image blocks
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference proceedings
- Relation: Pacific-Rim Symposium on Image and Video Technology (PSIVT), Auckland, 23rd-27th Nov 2015, in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 9431, p. 175-186
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: Introducing a number of innovative and powerful coding tools, the High Efficiency Video Coding (HEVC) standard promises double the compression efficiency of its predecessor H.264 with similar perceptual quality. The increased computational time complexity, however, is an important issue for the video coding research community. This paper attempts to reduce that complexity of HEVC by efficient selection of appropriate block-partitioning modes based on motion features and saliency applied to the difference between successive image blocks. As this difference exposes the visible motion and salient information explicitly, we develop a cost function combining the motion features and the image-difference saliency feature. The combined features are then converted into an area-of-interest (AOI) based binary pattern for the current block. This pattern is compared with a predefined codebook of binary pattern templates to select a subset of modes. Motion estimation (ME) and motion compensation (MC) are performed only on the selected subset, without exhaustive exploration of all modes available in HEVC. The experimental results reveal a 42% reduction in the encoding time complexity of the HEVC encoder with similar subjective and objective image quality.
Fast inter-mode decision strategy for HEVC on depth videos
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference paper
- Relation: 2015 18th International Conference on Computer and Information Technology (ICCIT) p. 288-293
- Full Text: false
- Reviewed:
- Description: Multiview video uses both texture and depth information captured from different angles to create a 3D video for a more realistic view of a scene. Unlike texture, a depth video is a grey-scale map representing the distance between the camera and 3D points in the scene. Existing multiview video coding (MVC) techniques, including the 3D-High Efficiency Video Coding (3D-HEVC) standard, encode texture and depth videos jointly, exploiting texture information for the corresponding depth video coding (DVC) to reduce computational time, since texture and depth share motion similarity in representing the same scene. This strategy has two limitations: (i) more bits and computational time may be required due to large residuals from the misalignment between depth and texture edges, and (ii) switching between different views may take longer due to the increased dependency between texture and depth. In this paper, we propose an independent DVC technique using HEVC (a single-view coding standard) to improve rate-distortion (RD) performance and reduce computational time by improving switching speed. For this, we use motion features, the underlying criteria for mode selection in the standard, to reduce the number of motion estimation (ME) and motion compensation (MC) modes in HEVC, selecting a subset of modes that provides almost the same RD performance. Experimental outcomes reveal a 48% reduction in the encoding time of the HEVC encoder with similar RD performance and better interactivity.
Fast intermode selection for HEVC video coding using phase correlation
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur , Chakraborty, Subrata
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: 2014 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2014; Wollongong, Australia; 25th-27th November 2014 p. 1-8
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: The recent High Efficiency Video Coding (HEVC) standard demonstrates higher rate-distortion (RD) performance than its predecessor H.264/AVC through a range of new tools, especially larger and asymmetric variable-size inter-mode motion estimation and compensation. This requires more than four times the computational time of H.264/AVC, so reducing encoding time while maintaining standard video quality has been a major concern for researchers. Our motivation is to reduce computational time by smart selection of the appropriate modes in HEVC. To accomplish this, we use phase correlation to approximate the motion information between the current and reference blocks, compare it with a number of binary pattern templates, and then select a subset of motion estimation modes without exhaustively exploring all possible modes. The experimental results show that the proposed HEVC-PC (HEVC with Phase Correlation) scheme outperforms the standard HEVC scheme in computational time while preserving the same quality of the video sequences. More specifically, around 40% of the encoding time is saved compared to exhaustive mode selection in HEVC. © 2014 IEEE.
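Phase correlation itself is a standard technique; a sketch of the block-displacement estimate it provides is below (NumPy assumed; the template-matching and mode-subset logic of HEVC-PC is not reproduced):

```python
import numpy as np

def phase_correlation(ref, cur):
    """Estimate the dominant integer translation (dy, dx) such that `cur`
    is approximately `ref` cyclically shifted by (dy, dx), via the peak
    of the normalized cross-power spectrum."""
    F_ref = np.fft.fft2(ref)
    F_cur = np.fft.fft2(cur)
    cross = np.conj(F_ref) * F_cur
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase, drop magnitude
    surface = np.real(np.fft.ifft2(cross))      # impulse at the displacement
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    h, w = surface.shape
    if dy > h // 2:
        dy -= h                                 # wrap to signed shifts
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

In the papers above, the shape of this correlation surface (not just its peak) is what gets matched against the binary pattern templates.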
Foreground motion and spatial saliency-based efficient HEVC Video Coding
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference paper
- Relation: 2015 International Conference on Image and Vision Computing New Zealand (IVCNZ)
- Full Text: false
- Reviewed:
- Description: High Efficiency Video Coding (HEVC) cannot provide real-time encoding on devices with limited processing power and battery, as its encoding time complexity increases several-fold compared to its predecessor. Numerous researchers have addressed this limitation by reducing the number of motion estimation (ME) modes, analysing homogeneity, residuals and statistical correlation among different modes. Although these approaches save some encoding time, they do not reach the rate-distortion (RD) performance of the HEVC encoder because they depend solely on the existing Lagrangian cost function (LCF) within the HEVC framework. To overcome this limitation, in this paper we capture foreground motion and salient regions (FMSR), to which the human visual system is sensitive when assessing quality. The FMSR features, captured by visual attention and dynamic background modelling, are adaptively synthesized to determine a subset of candidate modes. This preprocessing phase is independent of the LCF. Since the proposed technique avoids exhaustive exploration of all modes using simple criteria, it reduces encoding time by 27% on average. With efficient selection of FMSR-based block-partitioning modes, it also improves peak signal-to-noise ratio (PSNR) by up to 1.0 dB.
From Tf-Idf to learning-to-rank : An overview
- Authors: Ibrahim, Yousef , Murshed, Manzur
- Date: 2015
- Type: Text , Book chapter
- Relation: Handbook of research on innovations in information retrieval, analysis, and management Chapter 3 p. 62-109
- Full Text: false
- Reviewed:
- Description: Ranking a set of documents by relevance to a given query is a central problem of information retrieval (IR). Traditionally, unsupervised scoring methods such as tf-idf, BM25 and language models have been used, but recently supervised machine learning frameworks have been applied successfully to learn a ranking function, known as the learning-to-rank (LtR) problem. A few surveys on LtR exist in the literature, but they provide little assistance to someone who, before delving into the technical details of different algorithms, wants a broad understanding of LtR systems and their evolution from, and relation to, traditional IR methods. This chapter addresses that gap. It mainly discusses the fundamental concepts of IR, the motivation behind LtR, the evolution of LtR from and its relation to the traditional methods, the relationship between LtR and other supervised machine learning tasks, the general issues pertaining to an LtR algorithm, and the theory of LtR. © 2016 by IGI Global. All rights reserved.
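The traditional scoring baseline named above, tf-idf, fits in a few lines. This is one of several common variants (smoothing and normalization choices differ across systems), shown purely as a reference point for the chapter's discussion:

```python
import math
from collections import Counter

def tfidf_scores(query, docs):
    """Score each document against a query with a basic tf-idf formula:
    sum over query terms t of tf(t, d) * idf(t), with idf(t) = log(N / df(t)),
    where N is the collection size and df the document frequency."""
    N = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))                 # count each doc once per term
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        s = sum(tf[t] * math.log(N / df[t])
                for t in query.lower().split() if df[t])
        scores.append(s)
    return scores
```

An LtR system would instead treat scores like these as input features and learn the ranking function from labelled query-document pairs.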
Joint texture and depth coding using cuboid data compression
- Authors: Paul, Manoranjan , Chakraborty, Subrata , Murshed, Manzur , Podder, Pallab
- Date: 2015
- Type: Text , Conference proceedings
- Relation: 2015 18th International Conference on Computer and Information Technology (ICCIT); Dhaka, Bangladesh; 21st-23rd December 2015 p. 138-143
- Full Text:
- Reviewed:
- Description: The latest multiview video coding (MVC) standards, such as 3D-HEVC and H.264/MVC, normally encode texture and depth videos separately. A significant amount of rate-distortion and computational performance is sacrificed by separate encoding because joint information is left unexploited. Separate encoding also creates a synchronization issue for 3D scene formation in the decoder. Moreover, the hierarchical frame-referencing architecture of MVC introduces random-access frame delay. In this paper we develop an encoder and decoder framework that encodes texture and depth video jointly by forming 3D cuboids and encoding them with high-dimensional entropy coding. The results of our experiments show that the proposed framework outperforms 3D-HEVC in rate-distortion performance and significantly reduces computational time by reducing random-access frame delay.
Lossless image coding using binary tree decomposition of prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur , Shahriyar, Shampa , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference proceedings
- Full Text: false
- Description: State-of-the-art lossless image compression schemes, such as JPEG-LS and CALIC, have been proposed within the context-adaptive predictive coding framework. These schemes involve a prediction step followed by context-adaptive entropy coding of the residuals. Significant spatial correlation remains among the residuals after prediction, and the efficient schemes in the literature rely on context-adaptive entropy coding to exploit it. In this paper, we propose an alternative approach: the scheme still involves a prediction stage, but resorts to a binary tree based hierarchical decomposition technique to exploit the spatial correlation efficiently. On a set of standard test images, the proposed scheme, using the same predictor as JPEG-LS, achieved an overall compression gain of 2.1% over JPEG-LS. © 2015 IEEE.
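The JPEG-LS predictor referred to above is the median edge detector (MED). A sketch of that prediction stage follows; the binary-tree residual coder is the paper's contribution and is not reproduced, and the zero-padding of borders here is a simplification (JPEG-LS handles borders slightly differently):

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector (MED) predictor.
    a = left neighbour, b = above neighbour, c = above-left neighbour."""
    if c >= max(a, b):
        return min(a, b)      # likely edge: predict along it
    if c <= min(a, b):
        return max(a, b)
    return a + b - c          # smooth region: planar prediction

def residuals(img):
    """Prediction residuals of a 2D image under MED, row-major scan.
    Out-of-bounds neighbours are taken as 0 (simplified border rule)."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            a = img[y][x - 1] if x > 0 else 0
            b = img[y - 1][x] if y > 0 else 0
            c = img[y - 1][x - 1] if x > 0 and y > 0 else 0
            row.append(img[y][x] - med_predict(a, b, c))
        out.append(row)
    return out
```

The resulting residual plane, mostly zeros in smooth images, is what the proposed scheme decomposes hierarchically.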
Network-aware virtual machine placement and migration in cloud data centres
- Authors: Ferdaus, Md Hasanul , Murshed, Manzur , Calheiros, Rodrigo , Buyya, Rajkumar
- Date: 2015
- Type: Text , Book chapter
- Relation: Emerging research in cloud distributed computing systems (Advances in systems analysis, software engineering, and high performance computing (ASASEHPC) book series) Chapter 2 p. 42-91
- Full Text: false
- Reviewed:
- Description: With the pragmatic realization of computing as a utility, Cloud Computing has recently emerged as a highly successful alternative IT paradigm. Cloud providers are deploying large-scale data centers across the globe to meet Cloud customers' compute, storage, and network resource demands. The efficiency and scalability of these data centers, as well as the performance of the hosted applications, highly depend on the allocation of data center resources. Very recently, network-aware Virtual Machine (VM) placement and migration has been developing as a very promising technique for optimizing compute-network resource utilization and energy consumption and for minimizing network traffic. This chapter presents the relevant background information and a detailed taxonomy that characterizes and classifies the various components of VM placement and migration techniques, along with an elaborate survey and comparative analysis of the state-of-the-art techniques. Besides highlighting the various aspects and insights of the network-aware VM placement and migration strategies and algorithms proposed by the research community, the survey identifies the benefits and limitations of the existing techniques and discusses future research directions.
Symbol coding of Laplacian distributed prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Digital Signal Processing: A Review Journal Vol. 44, no. 1 (2015), p. 76-87
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: Predictive coding schemes proposed in the literature essentially model the residuals with discrete distributions. However, real-valued residuals can arise in predictive coding, for example from the use of an r-th order linear predictor specified by r real-valued coefficients. In this paper, we propose a symbol-by-symbol coding scheme for the Laplace distribution, which closely models the distribution of real-valued residuals in practice. To efficiently exploit the real-valued predictions at a given precision, the proposed scheme combines the processes of residual computation and coding, in contrast to conventional schemes that separate them. In the adaptive predictive coding framework, where the source statistics must be learnt from the data, the proposed scheme has the advantage of a lower 'model cost' as it involves learning only one parameter. We also analyze the proposed parametric coding scheme to establish the relationship between the optimal value of the coding parameter and the scale parameter of the Laplace distribution. Our experimental results demonstrate the compression efficiency and computational simplicity of the proposed scheme in adaptive coding of residuals against the widely used arithmetic coding, Rice-Golomb coding, and the Merhav-Seroussi-Weinberger scheme adopted in JPEG-LS. © 2015 Elsevier Inc. All rights reserved.
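One of the baselines above, Rice-Golomb coding, together with the usual signed-to-unsigned residual mapping, looks like this in textbook form (this is the comparison baseline, not the paper's Laplacian symbol coder):

```python
def zigzag(x):
    """Map a signed residual to a nonnegative integer:
    0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
    return 2 * x if x >= 0 else -2 * x - 1

def rice_encode(n, k):
    """Golomb-Rice codeword of nonnegative n with Rice parameter k:
    unary quotient (n >> k ones, then a zero) + k-bit binary remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + (format(r, '0%db' % k) if k else '')

def rice_decode(bits, k):
    """Decode a single rice_encode codeword back to n."""
    q = bits.index('0')                       # length of the unary prefix
    r = int(bits[q + 1:q + 1 + k], 2) if k else 0
    return (q << k) | r
```

The paper's analysis concerns how the analogous coding parameter should track the scale parameter of the Laplace distribution; here k would simply be tuned to the residual magnitudes.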
A novel video coding scheme using a scene adaptive non-parametric background model
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference paper
- Relation: 16th IEEE International Workshop on Multimedia Signal Processing, MMSP 2014 p. 1-6
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: Video coding techniques utilising background frames provide better rate-distortion performance than the latest video coding standard by exploiting coding efficiency in uncovered background areas. Parametric approaches such as mixture-of-Gaussians (MoG) background modeling have been widely used; however, they require prior knowledge of the test videos for parameter estimation. Recently introduced non-parametric (NP) background modeling techniques have improved video coding performance through an HEVC-integrated coding scheme. The inherent nature of the NP technique yields superior performance in dynamic background scenarios compared to the MoG-based technique, without a priori knowledge of the video data distribution. Although NP-based coding schemes have shown promising coding performance, they suffer from a number of key challenges: (a) determining the optimal subset of training frames for generating a suitable background to use as a reference frame during coding; (b) incorporating dynamic background changes effectively after the initial background frame is generated; (c) managing frequent scene changes that lead to performance degradation; and (d) optimizing the coding quality ratio between an I-frame and other frames under bit-rate constraints. In this study we develop a new scene adaptive coding scheme using the NP technique, capable of addressing these challenges by incorporating a continuously updating background generation process. Extensive experimental results validate the effectiveness of the new scheme.
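The non-parametric idea can be illustrated with the simplest such model, a per-pixel median over recent frames. This is purely illustrative; the scheme in the paper maintains and continuously updates a far richer model:

```python
from collections import deque

class MedianBackground:
    """Per-pixel non-parametric background model: the running median of
    the last `window` frames. No distributional assumptions, so brief
    foreground intrusions do not contaminate the background estimate."""
    def __init__(self, window=5):
        self.frames = deque(maxlen=window)

    def update(self, frame):
        """Add one frame (2D list of intensities)."""
        self.frames.append(frame)

    def background(self):
        """Median of the buffered frames at every pixel."""
        h, w = len(self.frames[0]), len(self.frames[0][0])
        bg = [[0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                vals = sorted(f[y][x] for f in self.frames)
                bg[y][x] = vals[len(vals) // 2]
        return bg
```

In the coding schemes above, a frame produced this way serves as an extra reference frame so that uncovered background blocks are predicted almost for free.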
An efficient video coding technique using a novel non-parametric background model
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2014; Chengdu; China; 14th-18th July 2014 p. 1-6
- Full Text:
- Reviewed:
- Description: Video coding with a background frame extracted by mixture-of-Gaussians (MoG) background modeling provides better rate-distortion performance than the latest video coding standard by exploiting coding efficiency in uncovered background areas. However, it suffers from high computation time, low coding efficiency for dynamic videos, and a requirement for prior knowledge of video content. In this paper, we present a novel adaptive weighted non-parametric (WNP) background modeling technique and successfully embed it into the HEVC video coding standard. Being non-parametric (NP), the proposed technique naturally exhibits superior performance in dynamic background scenarios compared to the MoG-based technique, without a priori knowledge of the video data distribution. In addition, WNP significantly reduces the noise-related drawbacks of existing NP techniques to provide better-quality video coding with much lower computation time, as demonstrated through extensive comparative studies against NP, MoG and HEVC techniques.
Distributed database management systems : Architectural design choices for the cloud
- Authors: Kamal, Joarder , Murshed, Manzur
- Date: 2014
- Type: Text , Book chapter
- Relation: Cloud Computing : Challenges, Limitations and R&D Solutions (Computer Communications and Networks series) Chapter 2 p. 23-50
- Full Text: false
- Reviewed:
- Description: Cloud computing has changed the way we exploit software and systems. Two decades of practice in architecting solutions and services over the Internet have been overturned within the past few years. End users now rely more on paying for what they use instead of purchasing a full-phase license. System owners are also in a rapid hunt for business profits by deploying their services in the Cloud, maximising global outreach and minimising overall management costs. However, deploying and scaling Cloud applications regionally and globally is highly challenging. In this context, distributed data management systems in the Cloud promise rapid elasticity and horizontal scalability so that Cloud applications can sustain enormous growth in data volume, velocity, and value. Distributed data replication and rapid partitioning are the two fundamental hammers to nail down these challenges: while replication ensures database read scalability and geo-reachability, data partitioning favours database write scalability and system-level load balance. System architects and administrators often face difficulties in managing a multi-tenant distributed database system at Cloud scale as the underlying workload characteristics change frequently. In this chapter, the inherent challenges of such phenomena are discussed in detail alongside their historical background. Finally, potential ways to overcome these architectural barriers are presented in light of recent research and development in this area.
Efficient coding of depth map by exploiting temporal correlation
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 International Conference on Digital Image Computing : Techniques and Applications (DICTA); Wollongong, Australia; 25th-27th November 2014
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Description: With the growing demand for 3D and multi-view video content, efficient depth data coding has become a vital issue in image and video coding. In this paper, we propose a simple depth coding scheme using multiple prediction modes that exploit the temporal correlation of the depth map. Current depth coding techniques mostly depend on intra-coding modes, which cannot take advantage of the temporal redundancy in depth maps or the higher spatial redundancy in inter-predicted depth residuals. Depth maps are characterized by smooth regions with sharp edges that play an important role in the view synthesis process. As depth maps are sensitive to coding errors, the use of transformation, or the approximation of edges by explicit edge modelling, affects view synthesis quality; moreover, lossy compression of the depth map brings additional geometric distortion to the synthesized view. In this paper, we demonstrate that encoding inter-coded depth block residuals with quantization in the pixel domain is more efficient than intra-coding techniques relying on explicit edge preservation. On standard 3D video sequences, the proposed depth coding achieved superior image quality of synthesized views against the new 3D-HEVC standard for depth-map bit-rates of 0.25 bpp or higher.
Efficient HEVC scheme using motion type categorization
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 10th International Conference on emerging Networking EXperiments and Technologies (CoNEXT); Sydney, Australia; 2nd-5th December 2014; published in Proceedings of the 2014 Workshop on Design, Quality and Deployment of Adaptive Video Streaming p. 41-42
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: The High Efficiency Video Coding (HEVC) standard introduces a number of innovative tools that reduce the bit-rate by approximately 50% compared to its predecessor H.264/AVC at the same perceptual video quality, while the computational time has increased several-fold. Reducing the encoding time while preserving the expected video quality has become a real challenge for video transmission and streaming, especially on low-powered devices. Motion estimation (ME) and motion compensation (MC) using variable-size blocks (i.e., inter-modes) require 60-80% of the total computational time. In this paper we propose a new efficient inter-mode selection technique based on phase correlation and incorporate it into the HEVC framework to predict ME and MC modes, performing faster inter-mode selection based on three dissimilar motion types found in different videos. Instead of exploring all modes exhaustively, we select a subset of modes using the motion type; the final mode is then selected with the Lagrangian cost function. The experimental results show that, compared to HEVC, the average computational time can be reduced by 34% while providing similar rate-distortion (RD) performance.
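The final decision step mentioned above rests on the standard Lagrangian cost J = D + λR over the selected subset; in sketch form (mode names and numbers below are illustrative):

```python
def best_mode(candidates, lam):
    """Return the mode with minimum Lagrangian cost J = D + lam * R,
    where each candidate is a (name, distortion, rate_bits) triple and
    lam is the rate-distortion trade-off multiplier."""
    return min(candidates, key=lambda m: m[1] + lam * m[2])[0]
```

The speed-up in these papers comes entirely from shrinking the candidate list before this step, not from changing the cost function itself.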
Energy-aware virtual machine consolidation in IaaS cloud computing
- Authors: Ferdaus, Md Hasanul , Murshed, Manzur
- Date: 2014
- Type: Text , Book chapter
- Relation: Cloud Computing : Challenges, Limitations and R&D Solutions (Computer Communications and Networks series) Chapter 8 p. 179-208
- Full Text: false
- Reviewed:
- Description: With immense success and rapid growth within the past few years, cloud computing has been established as the dominant paradigm of IT industry. To meet the increasing demand of computing and storage resources, infrastructure cloud providers are deploying planet-scale data centers across the world, consisting of hundreds of thousands, even millions of servers. These data centers incur very high investment and operating costs for the compute and network devices as well as for the energy consumption. Moreover, because of the huge energy usage, such data centers leave large carbon footprints and thus have adverse effects on the environment. As a result, efficient computing resource utilization and energy consumption reduction are becoming crucial issues to make cloud computing successful. Intelligent workload placement and relocation is one of the primary means to address these issues. This chapter presents an overview of the infrastructure resource management systems and technologies and detailed description of the proposed solution approaches for efficient cloud resource utilization and minimization of power consumption and resource wastages. Different types of server consolidation mechanisms are presented along with the solution approaches proposed by the researchers of both academia and industry. Various aspects of workload reconfiguration mechanisms and existing works on workload relocation techniques are described.
Inherently edge-preserving depth-map coding without explicit edge detection and approximation
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2014
- Type: Text , Conference proceedings
- Relation: Proceedings - IEEE International Conference on Multimedia and Expo
- Full Text: false
- Description: In emerging 3D video coding, depth has significant importance in view synthesis, scene analysis, and 3D object reconstruction. Depth images are characterized by sharp edges and large smooth regions. Most existing depth coding techniques use intra-coding modes and try to preserve edges explicitly with approximate edge modelling. However, edges can be preserved implicitly as long as transformation is avoided. In this paper, we demonstrate that inherently edge-preserving encoding of inter-coded block residuals, uniformly quantized in the pixel domain using motion data from the associated texture components, is more efficient than explicitly edge-preserving intra-coding techniques. Experimental results show that the proposed technique achieves superior image quality of synthesized views against the new 3D-HEVC standard. Lossless application of the proposed technique achieved on average 66% and 23% bit-rate savings against 3D-HEVC with negligible quantization and perceptually unnoticeable view synthesis distortion, respectively.
On demand-driven movement strategy for moving beacons in sensor localization
- Authors: Iqbal, Anindya , Murshed, Manzur
- Date: 2014
- Type: Text , Journal article
- Relation: Journal of Network and Computer Applications Vol. 44, no. (2014), p. 46-62
- Full Text: false
- Reviewed:
- Description: In wireless sensor networks, estimating sensor locations demands a large number of neighbour location references due to the unavoidable wireless signal attenuation problem, yet the cost of deployment increases with the number of beacon location references. This limitation can be overcome using moving beacons, exploiting control over the number, position, and strength of beacon transmissions. In this scenario, the trade-off between localization cost and accuracy, which are directly linked with the anchor movement and transmission patterns, introduces many challenges that have recently attracted research interest. This paper proposes a noise-tolerant and cost-effective range-free localization technique using moving beacons that localizes randomly deployed sensor nodes within a maximum localization-error bound while minimizing the cost of beacon traversal and transmissions. We found that the mean localization error can be kept within 20–35% of the maximum transmission radius by selecting the movement and beacon transmission parameters according to user demand. The proposed schemes are compared with other works and also shown to be robust against positional errors of the moving beacon.
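Range-free techniques estimate position without measuring distances; the classic centroid estimator over heard beacon positions is the simplest example (shown only to fix the idea; the paper's demand-driven movement strategy is considerably more involved):

```python
def centroid_estimate(beacons):
    """Estimate a sensor's position as the centroid of the positions of
    all beacon transmissions it has heard (classic range-free baseline).
    `beacons` is a list of (x, y) coordinates announced by the beacon."""
    xs = [x for x, _ in beacons]
    ys = [y for _, y in beacons]
    return sum(xs) / len(xs), sum(ys) / len(ys)
```

A moving beacon improves on this by choosing where and how strongly to transmit, so each heard announcement constrains the node's position more tightly than a random anchor would.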