Exploiting spatial smoothness to recover undecoded coefficients for transform domain distributed video coding
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: IEEE International Conference on Image Processing; Melbourne, Australia; 15th-18th September 2013, p. 1782-1786
- Relation: http://purl.org/au-research/grants/arc/DP1095487
- Full Text: false
- Reviewed:
- Description: In a transform domain distributed video coding scheme, the correlation between the current encoding unit, e.g., a block or slice, and the corresponding side-information is modeled using a virtual channel. This correlation model is then used for rate allocation, quantization, and Wyner-Ziv coding. Since the encoder has only an estimate of the correlation rather than exact knowledge of the side-information, the decoder will fail to recover the quantized transformed coefficients with a nonzero probability. In this paper, we propose to integrate a scheme at the decoder to recover the undecoded coefficients using the spatial smoothness property of individual video frames. Simulation results demonstrated that, at different decoding failure probabilities, a transformed coefficient recovery scheme can significantly improve the quality of videos in terms of both PSNR and SSIM.
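The spatial-smoothness recovery idea described in this abstract can be illustrated with a deliberately simple sketch (not the paper's actual scheme): a value whose coefficient failed to decode is estimated as the mean of its successfully decoded 4-neighbours. Function names and the averaging rule are illustrative assumptions.

```python
import numpy as np

def recover_missing(frame, mask):
    """Fill pixels flagged in `mask` with the mean of their available
    4-neighbours; a crude stand-in for smoothness-based recovery."""
    out = frame.astype(float).copy()
    h, w = frame.shape
    for y, x in zip(*np.nonzero(mask)):
        vals = [out[ny, nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]]
        if vals:  # leave the pixel untouched if no decoded neighbour exists
            out[y, x] = sum(vals) / len(vals)
    return out
```

In a smooth region this estimate is close to the true value, which is exactly the property the paper exploits at the decoder.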
A parametric approach to list decoding of Reed-Solomon codes using interpolation
- Authors: Ali, Mortuza , Kuijper, Margreta
- Date: 2011
- Type: Text , Journal article
- Relation: IEEE Transactions on Information Theory Vol. 57, no. 10 (2011), p. 6718-6728
- Full Text: false
- Reviewed:
- Description: In this paper, we present a minimal list decoding algorithm for Reed-Solomon (RS) codes. Minimal list decoding for a code C refers to list decoding with radius L, where L is the minimum of the distances between the received word r and any codeword in C. We consider the problem of determining the value of L as well as determining all the codewords at distance L. Our approach involves a parametrization of interpolating polynomials of a minimal Gröbner basis G. We present two efficient ways to compute G. We also show that so-called re-encoding can be used to further reduce the complexity. We then demonstrate how our parametric approach can be solved by a computationally feasible rational curve fitting solution from a recent paper by Wu. In addition, we present an algorithm to compute the minimum multiplicity as well as the optimal values of the parameters associated with this multiplicity, which results in overall savings in both memory and computation.
Efficient coding of depth map by exploiting temporal correlation
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 International Conference on Digital Image Computing : Techniques and Applications (DICTA); Wollongong, Australia; 25th-27th November 2014
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Description: With the growing demand for 3D and multi-view video content, efficient depth data coding has become a vital issue in the image and video coding area. In this paper, we propose a simple depth coding scheme using multiple prediction modes that exploit the temporal correlation of depth maps. Current depth coding techniques mostly depend on intra-coding modes that cannot take advantage of temporal redundancy in depth maps and the higher spatial redundancy in inter-predicted depth residuals. Depth maps are characterized by smooth regions with sharp edges that play an important role in the view synthesis process. As depth maps are more sensitive to coding errors, the use of transformation or the approximation of edges by explicit edge modelling has an impact on view synthesis quality. Moreover, lossy compression of depth maps introduces additional geometric distortion into the synthesized view. In this paper, we have demonstrated that encoding inter-coded depth block residuals with quantization in the pixel domain is more efficient than intra-coding techniques relying on explicit edge preservation. On standard 3D video sequences, the proposed depth coding has achieved superior image quality of synthesized views against the new 3D-HEVC standard for depth map bit-rates of 0.25 bpp or higher.
Prefix coding of integers with real-valued predictions using cosets
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2007
- Type: Text , Journal article
- Relation: IEEE Communications Letters, vol. 11, no. 10, IEEE Communications Society, p. 814-816
- Full Text: false
- Description: In predictive coding of integers, real-valued residuals are mapped to integers before encoding, leaving room for improvement by reducing the loss due to rounding. In this paper, we propose a new prefix coding scheme where the actual integer values, instead of the residuals, are encoded using cosets with real-domain predictions as the side information. This novel coding scheme outperforms Golomb-based coding by reducing the rounding loss at similar computational and memory complexity.
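The coset idea in this abstract can be sketched as a generic binning illustration (not the paper's exact construction): only the residue of x modulo a coset size m is transmitted, and the decoder picks the coset member nearest its real-valued prediction. Decoding is correct whenever the prediction error is below m/2.

```python
def coset_encode(x, m):
    """Transmit only the coset index of x modulo m."""
    return x % m

def coset_decode(idx, prediction, m):
    """Recover x as the coset member nearest the real-valued prediction;
    correct whenever |x - prediction| < m/2."""
    return idx + m * round((prediction - idx) / m)
```

For example, with m = 8 the encoder sends 3 bits regardless of the magnitude of x; the real-valued prediction at the decoder does the rest.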
Vendor selection using fuzzy C means algorithm and analytic hierarchy process
- Authors: Nine, M.S.Q.Z. , Khan, M.A.K. , Hoque, M.H. , Ali, Mortuza , Shil, N.C. , Sorwar, Golam
- Date: 2009
- Type: Text , Conference paper
- Relation: Fuzzy Systems, 2009. FUZZ-IEEE 2009. IEEE International Conference
- Full Text: false
- Reviewed:
- Description: Vendor selection is a strategic issue in supply chain management for any organization seeking to identify the right supplier. Such selection in most cases is based on the analysis of some specific criteria. Most research so far has concentrated on multi-criteria decision-making analysis. Though many approaches have been proposed, the analytic hierarchy process (AHP) is the best known as it can deal with a very complex criteria structure. In AHP, the selected criteria are ranked and organized in a hierarchical order from generic to specific to formulate the problem. Though this order of ranking is acceptably logical, it incurs a huge computational cost when a large number of alternatives are considered against the selection criteria. Moreover, the AHP may produce an incorrect selection due to computational error. To address these limitations, a novel model, namely vendor selection using the fuzzy c-means algorithm and analytic hierarchy process (VFA), is presented in this paper by integrating the fuzzy c-means (FCM) clustering algorithm with the AHP. The outcome of the proposed VFA algorithm is compared with the basic AHP algorithm; VFA outperforms the basic AHP and reduces the computational complexity of AHP by a factor of 7.
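For background, a minimal fuzzy c-means loop (the generic FCM algorithm, not the paper's VFA integration with AHP) alternates centre and membership updates; `fuzzy_c_means` and its defaults are illustrative assumptions.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Minimal 1-D fuzzy c-means: alternate centre and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                       # memberships sum to 1 per point
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1)  # fuzzily weighted means
        d = np.abs(centers[:, None] - X[None, :]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))
        U = w / w.sum(axis=0)                # standard FCM membership update
    return centers, U
```

On well-separated data the centres converge to the cluster means, which is the grouping step VFA uses to prune the alternatives before applying AHP.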
Algorithm for conversion of Bangla sentence to Universal Networking Language
- Authors: Ali, M. , Ali, Mortuza , Nurannabi, Abu Mohammad , Das, Jugal
- Date: 2010
- Type: Text , Journal article
- Full Text: false
- Reviewed:
Adaptive contention window based wireless medium access mechanism for periodic sensor data collection applications
- Authors: Haque, Ahsanul , Murshed, Manzur , Ali, Mortuza
- Date: 2009
- Type: Text , Conference paper
- Relation: Communications (MICC), 2009 IEEE 9th Malaysia International Conference
- Full Text: false
- Reviewed:
- Description: Contention window based medium access protocols are of practical interest in low data rate wireless communication scenarios. In periodic data collection applications, nodes mostly produce small data packets that are collected by the cluster heads and routed to the base station. In this paper, a new mechanism for adaptively selecting the size of the contention window based on the number of contending nodes is presented. The proposed scheme effectively reduces the number of collisions in periodic collection scenarios with a fixed number of nodes. Theoretical analysis and simulation results demonstrate that, in periodic data collection processes, the new protocol reduces the data collection time significantly compared to IEEE 802.11 and the recently proposed Synchronized Shared Contention Window (SSCW) based scheme.
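The effect of contention-window size on collisions can be checked with a small Monte Carlo sketch (a generic slotted-contention model, not the paper's protocol): each node draws a slot uniformly, and a round succeeds when the earliest slot is chosen by exactly one node.

```python
import random

def success_rate(n_nodes, window, trials=20000, seed=1):
    """Estimate the probability that the earliest chosen slot is
    un-contended, i.e. the round's winner does not collide."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        slots = [rng.randrange(window) for _ in range(n_nodes)]
        if slots.count(min(slots)) == 1:
            ok += 1
    return ok / trials
```

Running this for a fixed node count shows why window size must track the number of contenders: too small a window makes ties at the earliest slot, and hence collisions, almost certain.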
Predictive coding of integers with real-valued predictions
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: DCC 2013 Data Compression Conference; Snowbird, USA; 20th-22nd March 2013; p. 431-440
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: In this paper, we have extended the Rice-Golomb code so that it can operate at fractional precision to efficiently exploit real-valued predictions. Coding at infinitesimal precision allows the residuals to be modeled with the Laplace distribution. Unlike the Rice-Golomb code, which maps equally probable opposite-signed residuals to different integers, the proposed coding scheme is symmetric in the sense that, at infinitesimal precision, it assigns codewords of equal length to equally probable residual intervals. The symmetry of both the Laplace distribution and the coding scheme facilitates the analysis of the proposed scheme to determine the average code length and the optimal value of the associated coding parameter.
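For context, the classical Rice-Golomb baseline that the paper extends can be sketched as follows; note how the standard zigzag mapping sends equally probable residuals +x and -x to different integers, which is the asymmetry the abstract refers to. This is the textbook code, not the paper's fractional-precision extension.

```python
def zigzag(x):
    """Classical signed-to-unsigned mapping used before Rice-Golomb coding;
    it sends +x and -x to *different* integers (2x and -2x-1)."""
    return 2 * x if x >= 0 else -2 * x - 1

def rice_encode(n, k):
    """Unary-coded quotient, k-bit binary remainder (k >= 1)."""
    q, r = n >> k, n & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0{}b'.format(k))

def rice_decode(bits, k):
    """Invert rice_encode: count the unary prefix, read k remainder bits."""
    q = bits.index('0')
    return (q << k) | int(bits[q + 1:q + 1 + k], 2)
```

For a residual of magnitude x, the zigzag step makes one sign systematically one integer larger than the other, so the two equally probable values can receive codewords of different lengths.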
Motion compensation for block-based lossless video coding using lattice-based binning
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2010
- Type: Text , Conference paper
- Full Text: false
- Reviewed:
- Description: A block-based lossless video coding scheme using the notion of binning has been proposed in [1]. To further improve the compression and reduce the complexity, in this paper we investigate the impact of two sub-optimal motion search algorithms on the performance of this lattice-based scheme. One of the algorithms avoids motion vectors, while the other aims to reduce complexity. Our experimental results demonstrate that the loss due to sub-optimal motion search outweighs the gain when motion vectors are avoided. However, experimental results show that there is negligible performance loss when the low-complexity sub-optimal three-step search is used.
Efficient contention resolution in MAC protocol for periodic data collection in WSNs
- Authors: Haque, Ahsanul , Murshed, Manzur , Ali, Mortuza
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 6th International Wireless Communications and Mobile Computing Conference
- Full Text: false
- Reviewed:
- Description: Due to the infrequent medium access in Wireless Sensor Networks (WSNs), their MAC protocols are mostly based on CSMA. In this paper we present an efficient contention resolution scheme for CSMA-based MAC protocols that is suitable for periodic data collection in WSNs. Taking into account that the number of nodes in a single cluster is fixed, this protocol uses successively decreasing contention windows. It is characterized by non-overlapping contention windows, which maintain a constant successful transmission rate. It significantly decreases data collection time by minimizing the time wasted due to collisions. At the same time, by using an adaptive CW, it reduces the time wasted in empty slots. Experimental results demonstrate that, in periodic data collection within a single-hop cluster, this scheme outperforms the recently proposed Synchronous Shared Contention Window (SSCW) based scheme in terms of time wastage and throughput.
Inherently edge-preserving depth-map coding without explicit edge detection and approximation
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2014
- Type: Text , Conference proceedings
- Full Text: false
- Description: In emerging 3D video coding, depth has significant importance in view synthesis, scene analysis, and 3D object reconstruction. Depth images can be characterized by sharp edges and large smooth regions. Most of the existing depth coding techniques use intra-coding modes and try to preserve edges explicitly with approximated edge modelling. However, edges can be implicitly preserved as long as transformation is avoided. In this paper, we have demonstrated that inherently edge-preserving encoding of inter-coded block residuals, uniformly quantized in the pixel domain using motion data from the associated texture components, is more efficient than explicitly edge-preserving intra-coding techniques. Experimental results show that the proposed technique has achieved superior image quality of synthesized views against the new 3D-HEVC standard. Lossless applications of the proposed technique have achieved on average 66% and 23% bit-rate savings against 3D-HEVC with negligible quantization and perceptually unnoticeable view synthesis distortion, respectively.
Lossless depth map coding using binary tree based decomposition and context-based arithmetic coding
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2016
- Type: Text , Conference proceedings , Conference paper
- Relation: 2016 IEEE International Conference on Multimedia and Expo, ICME 2016; Seattle, United States; 11th-15th July 2016; published in Proceedings of the 2016 IEEE International Conference on Multimedia and Expo Vol. 2016-August, p. 1-6
- Full Text: false
- Reviewed:
- Description: Depth maps are becoming increasingly important in the context of emerging video coding and processing applications. Depth images represent the scene surface and are characterized by areas of smoothly varying grey levels separated by sharp edges at the position of object boundaries. To enable high quality view rendering at the receiver side, preservation of these characteristics is important. Lossless coding enables avoiding rendering artifacts in synthesized views due to depth compression artifacts. In this paper, we propose a binary tree based lossless depth coding scheme that arranges the residual frame into integer or binary residual bitmap. High spatial correlation in depth residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as a unit using context based arithmetic coding. On the standard 3D video sequences, the proposed lossless depth coding has achieved compression ratio in the range of 20 to 80. © 2016 IEEE.
Adaptive weighted non-parametric background model for efficient video coding
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2017
- Type: Text , Journal article
- Relation: Neurocomputing Vol. 226, no. (2017), p. 35-45
- Full Text:
- Reviewed:
- Description: Dynamic background frame based video coding using mixture of Gaussian (MoG) based background modelling has achieved better rate-distortion performance than the H.264 standard. However, such schemes suffer from high computation time, low coding efficiency for dynamic videos, and a requirement for prior knowledge of video content. In this paper, we introduce the application of the non-parametric (NP) background modelling approach to the video coding domain. We present a novel background modelling technique, called weighted non-parametric (WNP), which adaptively balances the historical trend and the recent values of the pixel intensities based on the content and characteristics of a particular video. WNP is successfully embedded into the latest HEVC video coding standard for better rate-distortion performance. Moreover, a novel scene adaptive non-parametric (SANP) technique is also developed to handle video sequences with highly dynamic backgrounds. Being non-parametric, the proposed techniques naturally exhibit superior performance in dynamic background modelling without a priori knowledge of the video data distribution.
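A toy non-parametric background model (a per-pixel temporal median, far simpler than the paper's WNP/SANP techniques) illustrates the underlying idea: the background is estimated directly from pixel history, with no Gaussian or other distributional assumption. Names and the threshold are illustrative.

```python
import numpy as np

def background_median(history):
    """Per-pixel temporal median over recent frames: a simple
    non-parametric background estimate (no parametric model)."""
    return np.median(np.stack(history), axis=0)

def foreground_mask(frame, background, thresh=20):
    """Flag pixels that deviate from the background estimate."""
    return np.abs(frame.astype(int) - background) > thresh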
Object segmentation based on split and merge algorithm
- Authors: Faruquzzaman, A. B. M. , Paiker, Nafize , Arafat, Jahidul , Karim, Ziaul , Ali, Mortuza
- Date: 2008
- Type: Text , Conference proceedings
- Relation: 2008 IEEE Region 10 Conference, TENCON 2008; Hyderabad; India; 19th -21st Nov published in IEEE Region 10 Annual International Conference, Proceedings/TENCON p. 1-4
- Full Text: false
- Reviewed:
- Description: Image segmentation is a feverish issue as it is a challenging job and most digital imaging applications require it as a preprocessing step. Among various algorithms, although split and merge (SM) algorithm is highly lucrative because of its simplicity and effectiveness in segmenting homogeneous regions, however, it is unable to segment all types of objects in an image using a general framework due to not most natural objects being homogeneous. Addressing this issue, a new algorithm namely object segmentation based on split and merge algorithm (OSSM) is proposed in this paper considering image feature stability, inter- and intra-object variability, and human visual perception. The qualitative analysis has been conducted and the segmentation results are compared with the basic SM algorithm and a shape-based fuzzy clustering algorithm namely object based image segmentation using fuzzy clustering (OSF). The OSSM algorithm outperforms both the SM and the OSF algorithms and hence increases the application area of segmentation algorithms.
- Description: IEEE Region 10 Annual International Conference, Proceedings/TENCON
An improved pipelined processor architecture eliminating branch and jump penalty
- Authors: Hasan, Raquibal , Rahman, M. S. , Hasan, Masud , Hasan, Mahmudul , Ali, Mortuza
- Date: 2010
- Type: Text , Conference proceedings
- Relation: Computer Engineering and Applications (ICCEA), Bali, ICCEA 2010, 19th March 2010; published in 2010 2nd International Conference on Computer Engineering and Applications, ICCEA 2010 Vol. 1, p. 621-625
- Full Text: false
- Reviewed:
- Description: Control dependencies are one of the major limitations to increase the performance of pipelined processors. This paper deals with eliminating penalties in pipelined processor. We present our discussion in the light of MIPS pipelined processor architecture. Here we present an improved pipelined processor architecture eliminating branch and jump penalty. In the proposed architecture CPI for branch and jump instruction is less than that of MIPS architecture. We also have shown the design of the required cache memory cell for the improved architecture.
- Description: Second International Conference on Computer Engineering and Applications (ICCEA), 2010
Lossless hyperspectral image compression using binary tree based decomposition
- Authors: Shahriyar, Shampa , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (Dicta); Gold Coast, Australia; 30th November-2nd December 2016 p. 428-435
- Full Text:
- Reviewed:
- Description: A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity"based coding approaches using traditional image coders (e.g. JPEG) to the "residual" based approaches using a predictive coder exploiting band-wise correlation for better compression performance. Moreover, as HS images are used in detection or classification they need to be in original form; lossy schemes can trim off uninteresting data along with compression, which can be important to specific analysis purposes. A modified lossless HS coder is required to exploit spatial- spectral redundancy using predictive residual coding. Every spectral band of an HS image can be treated like they are the individual frame of a video to impose inter band prediction. In this paper, we propose a binary tree based lossless predictive HS coding scheme that arranges the residual frame into integer residual bitmap. High spatial correlation in HS residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as a unit using context based arithmetic coding. On the standard HS data set, the proposed lossless predictive coding has achieved compression ratio in the range of 1.92 to 7.94. In this paper, we compare the proposed method with mainstream lossless coders (JPEG-LS and lossless HEVC). For JPEG-LS, HEVCIntra and HEVCMain, proposed technique has reduced bit-rate by 35%, 40% and 6.79% respectively by exploiting spatial correlation in predicted HS residuals.
Symbol coding of Laplacian distributed prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Digital Signal Processing: A Review Journal Vol. 44, no. 1 (2015), p. 76-87
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: Predictive coding schemes, proposed in the literature, essentially model the residuals with discrete distributions. However, real-valued residuals can arise in predictive coding, for example, from the usage of an r order linear predictor specified by r real-valued coefficients. In this paper, we propose a symbol-by-symbol coding scheme for the Laplace distribution, which closely models the distribution of real-valued residuals in practice. To efficiently exploit the real-valued predictions at a given precision, the proposed scheme essentially combines the process of residual computation and coding, in contrast to conventional schemes that separate these two processes. In the context of adaptive predictive coding framework, where the source statistics must be learnt from the data, the proposed scheme has the advantage of lower 'model cost' as it involves learning only one parameter. In this paper, we also analyze the proposed parametric coding scheme to establish the relationship between the optimal value of the coding parameter and the scale parameter of the Laplace distribution. Our experimental results demonstrated the compression efficiency and computational simplicity of the proposed scheme in adaptive coding of residuals against the widely used arithmetic coding, Rice-Golomb coding, and the Merhav-Seroussi-Weinberger scheme adopted in JPEG-LS.
- Description: Predictive coding schemes, proposed in the literature, essentially model the residuals with discrete distributions. However, real-valued residuals can arise in predictive coding, for example, from the usage of an r order linear predictor specified by r real-valued coefficients. In this paper, we propose a symbol-by-symbol coding scheme for the Laplace distribution, which closely models the distribution of real-valued residuals in practice. To efficiently exploit the real-valued predictions at a given precision, the proposed scheme essentially combines the process of residual computation and coding, in contrast to conventional schemes that separate these two processes. In the context of adaptive predictive coding framework, where the source statistics must be learnt from the data, the proposed scheme has the advantage of lower 'model cost' as it involves learning only one parameter. In this paper, we also analyze the proposed parametric coding scheme to establish the relationship between the optimal value of the coding parameter and the scale parameter of the Laplace distribution. Our experimental results demonstrated the compression efficiency and computational simplicity of the proposed scheme in adaptive coding of residuals against the widely used arithmetic coding, Rice-Golomb coding, and the Merhav-Seroussi-Weinberger scheme adopted in JPEG-LS. © 2015 Elsevier Inc. All rights reserved.
Conversion of Bangla sentence for universal networking language
- Authors: Ali, Md N. Y. , Nurannabi, Abu Mohammad , Ali, Mortuza , Das, Jugal , Ahmed, Golum
- Date: 2010
- Type: Text , Conference proceedings
- Relation: 13th International Conference on Computer and Information Technology, ICCIT 2010,Dhaka, Bangladesh, 23-25 Dec, 2010 published in Computer and Information Technology (ICCIT), 2010 13th International Conference p. 108-113
- Full Text: false
- Reviewed:
- Description: Conversion from Bangla language to another native language using Universal Networking Language (UNL) is highly demanding due to increasing the usage of Internet based application. Since Bangla case structure plays a fundamental role in Bangla grammartical structures, this paper presents some rules for Bangla case structures that will be used to convert Bangla sentence to UNL expression. The theoretical analysis shows that the defined rules can be used successful conversion of Bangla sentence. ©2010 IEEE.
- Description: Proceedings of 2010 13th International Conference on Computer and Information Technology, ICCIT 2010, 23-25 Dec, 2010
An efficient video coding technique using a novel non-parametric background model
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2014; Chengdu; China; 14th-18th July 2014 p. 1-6
- Full Text:
- Reviewed:
- Description: Video coding technique with a background frame, extracted from mixture of Gaussian (MoG) based background modeling, provides better rate distortion performance by exploiting coding efficiency in uncovered background areas compared to the latest video coding standard. However, it suffers from high computation time, low coding efficiency for dynamic videos, and prior knowledge requirement of video content. In this paper, we present a novel adaptive weighted non-parametric (WNP) background modeling technique and successfully embed it into HEVC video coding standard. Being non-parametric (NP), the proposed technique naturally exhibits superior performance in dynamic background scenarios compared to MoG-based technique without a priori knowledge of video data distribution. In addition, the WNP technique significantly reduces noise-related drawbacks of existing NP techniques to provide better quality video coding with much lower computation time as demonstrated through extensive comparative studies against NP, MoG and HEVC techniques.
Literature on image segmentation based on split - and - Merge techniques
- Authors: Faruquzzaman, A. B. M. , Paiker, Nafize , Arafat, Jahidul , Ali, Mortuza , Sorwar, Golam
- Date: 2008
- Type: Text , Conference proceedings , Conference paper
- Relation: ICITA 2008, Cairns, Qld., 23-26 June, ICITA, published in Proceedings of 5th International Conference on Information Technology and Application pp. 120-125.
- Full Text: false
- Reviewed:
- Description: Image segmentation is a feverish issue due to drastically increasing the use of computer and the Internet. Various algorithms have been invented on this aspect. Among them, split-and-merge (SM) algorithm is highly lucrative now-a-days due to its simplicity and effectiveness in the sector of image processing. Numerous researchers have performed their research work on this algorithm to triumph over its drawbacks for its sustainable and competent implementation. This paper has consolidated the useful consideration and proposal of various researchers to formulate a strong base of knowledge for the future researcher. It has also tinted few unsettled drawbacks of SM algorithm which will open the casement of brainstorming as well as persuade them for future research on SM algorithm, thereby allow SM algorithm to attain a globally optimal algorithm for image segmentation.
- Description: 5th International Conference on Information Technology and Applications, ICITA 2008