Exploiting spatial smoothness to recover undecoded coefficients for transform domain distributed video coding
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: IEEE International Conference on Image Processing; Melbourne, Australia; 15th-18th September 2013, p. 1782-1786
- Relation: http://purl.org/au-research/grants/arc/DP1095487
- Full Text: false
- Reviewed:
- Description: In a transform domain distributed video coding scheme, the correlation between the current encoding unit, e.g., a block or slice, and the corresponding side-information is modeled using a virtual channel. This correlation model is then used for rate allocation, quantization, and Wyner-Ziv coding. Since the encoder can only have an estimate of the correlation instead of exact knowledge of the side-information, the decoder will fail to recover the quantized transformed coefficients with a nonzero probability. In this paper, we propose to integrate a scheme at the decoder to recover the undecoded coefficients using the spatial smoothness property of individual video frames. Simulation results demonstrated that, at different decoding failure probabilities, a transformed coefficient recovery scheme can significantly improve the quality of videos in terms of both PSNR and SSIM.
Prefix coding of integers with real-valued predictions using cosets
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2007
- Type: Text , Journal article
- Relation: IEEE Communications Letters, vol. 11, no. 10, IEEE Communications Society, p. 814-816
- Full Text: false
- Description: In predictive coding of integers, real-valued residuals are mapped to integers before encoding, leaving room for improvement by reducing the loss due to rounding. In this paper, we propose a new prefix coding scheme where the actual integer values, instead of the residuals, are encoded using cosets with real-domain predictions as the side information. This novel coding scheme outperforms Golomb-based coding by reducing the rounding loss with similar computational and memory complexity.
Predictive coding of integers with real-valued predictions
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference paper
- Relation: DCC 2013 Data Compression Conference; Snowbird, USA; 20th-22nd March 2013; p. 431-440
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: In this paper, we have extended the Rice-Golomb code so that it can operate at fractional precision to efficiently exploit real-valued predictions. Coding at infinitesimal precision allows the residuals to be modeled with the Laplace distribution. Unlike the Rice-Golomb code, which maps equally probable opposite-signed residuals to different integers, the proposed coding scheme is symmetric in the sense that, at infinitesimal precision, it assigns codewords of equal length to equally probable residual intervals. The symmetry of both the Laplace distribution and the coding scheme facilitates the analysis of the proposed coding scheme to determine the average code-length and the optimal value of the associated coding parameter.
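The scheme above extends classical Rice-Golomb coding, which operates at integer precision. As a point of reference only, the sketch below shows that integer-precision baseline, including the sign mapping whose asymmetry the paper's symmetric, fractional-precision variant avoids; the variant itself is not reproduced here, and the parameter k and sample values are illustrative.

```python
# Minimal sketch of classical integer-precision Rice-Golomb coding -- the
# baseline the paper extends; parameter k and the sample values are illustrative.

def zigzag(r: int) -> int:
    """Map a signed residual to a non-negative integer.  Note the asymmetry:
    +r and -r receive different codes, which the paper's symmetric scheme avoids."""
    return 2 * r if r >= 0 else -2 * r - 1

def rice_encode(n: int, k: int) -> str:
    """Rice-Golomb code with divisor m = 2**k: unary-coded quotient, then k remainder bits."""
    q, rem = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(rem, "b").zfill(k) if k > 0 else "")

prediction, sample = 127.4, 130
residual = sample - round(prediction)          # rounding of the prediction causes the loss the paper targets
print(rice_encode(zigzag(residual), k=2))      # residual 3 -> zigzag 6 -> '1' + '0' + '10'
```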
Motion compensation for block-based lossless video coding using lattice-based binning
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2010
- Type: Text , Conference paper
- Full Text: false
- Reviewed:
- Description: A block-based lossless video coding scheme using the notion of binning has been proposed in [1]. To further improve the compression and reduce the complexity, in this paper we investigate the impact of two sub-optimal motion search algorithms on the performance of this lattice-based scheme. While one of the algorithms tries to avoid motion vectors altogether, the other aims to reduce complexity. Our experimental results demonstrate that the loss due to sub-optimal motion search outweighs the gain when motion vectors are avoided. However, experimental results also show that there is negligible performance loss when the low-complexity sub-optimal three-step search is used.
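For context, the three-step search mentioned above is a standard low-complexity, sub-optimal block-matching algorithm. The sketch below is a generic implementation of that search under a sum-of-absolute-differences cost; it is not the paper's lattice-based binning scheme, and the reference frame, block size, and displacement are illustrative.

```python
import numpy as np

def sad(block, ref, y, x):
    """Sum of absolute differences between a block and the reference patch at (y, x)."""
    h, w = block.shape
    if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
        return np.inf
    return int(np.abs(block.astype(int) - ref[y:y + h, x:x + w].astype(int)).sum())

def three_step_search(block, ref, y0, x0, step=4):
    """Classic three-step search: probe the 8 neighbours of the current centre at the
    current step size, recentre on the best match, halve the step, and repeat."""
    best, best_cost = (y0, x0), sad(block, ref, y0, x0)
    while step >= 1:
        cy, cx = best
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                cost = sad(block, ref, cy + dy, cx + dx)
                if cost < best_cost:
                    best, best_cost = (cy + dy, cx + dx), cost
        step //= 2
    return best[0] - y0, best[1] - x0            # motion vector (dy, dx)

ref = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)
block = ref[20:28, 20:28]                        # block displaced by (4, 4) from (16, 16)
print(three_step_search(block, ref, 16, 16))     # finds (4, 4): exact match has zero cost
```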
Symbol coding of Laplacian distributed prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2015
- Type: Text , Journal article
- Relation: Digital Signal Processing: A Review Journal Vol. 44, no. 1 (2015), p. 76-87
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: Predictive coding schemes proposed in the literature essentially model the residuals with discrete distributions. However, real-valued residuals can arise in predictive coding, for example, from the use of an r-th order linear predictor specified by r real-valued coefficients. In this paper, we propose a symbol-by-symbol coding scheme for the Laplace distribution, which closely models the distribution of real-valued residuals in practice. To efficiently exploit real-valued predictions at a given precision, the proposed scheme essentially combines the process of residual computation and coding, in contrast to conventional schemes that separate these two processes. In the context of an adaptive predictive coding framework, where the source statistics must be learnt from the data, the proposed scheme has the advantage of lower 'model cost' as it involves learning only one parameter. In this paper, we also analyze the proposed parametric coding scheme to establish the relationship between the optimal value of the coding parameter and the scale parameter of the Laplace distribution. Our experimental results demonstrated the compression efficiency and computational simplicity of the proposed scheme in adaptive coding of residuals against the widely used arithmetic coding, Rice-Golomb coding, and the Merhav-Seroussi-Weinberger scheme adopted in JPEG-LS.
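Because the scheme learns a single parameter tied to the Laplace scale, an adaptive coder only has to track the mean absolute residual. The sketch below shows that estimation step together with one common heuristic (in the spirit of LOCO-I/JPEG-LS Rice parameter selection) for turning it into a coding parameter; the paper's exact relationship between the coding parameter and the scale parameter is not reproduced, so treat the mapping as an assumption.

```python
def laplace_scale(residuals):
    """Maximum-likelihood scale of zero-mean Laplace residuals: the mean absolute residual."""
    return sum(abs(r) for r in residuals) / len(residuals)

def rice_parameter(residuals):
    """Common heuristic (LOCO-I/JPEG-LS style): smallest k with 2**k >= mean |residual|."""
    k, mean_abs = 0, laplace_scale(residuals)
    while (1 << k) < mean_abs:
        k += 1
    return k

residuals = [0.4, -1.2, 2.7, -0.3, 5.1, -2.2]     # illustrative real-valued residuals
print(laplace_scale(residuals), rice_parameter(residuals))
```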
Lossless image coding using hierarchical decomposition and recursive partitioning
- Authors: Ali, Mortuza , Murshed, Manzur , Shahriyar, Shampa , Paul, Manoranjan
- Date: 2016
- Type: Text , Journal article
- Relation: APSIPA Transactions on Signal and Information Processing Vol. 5, no. (2016), p. 1-11
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: State-of-the-art lossless image compression schemes, such as JPEG-LS and CALIC, have been proposed in the context-adaptive predictive coding framework. These schemes involve a prediction step followed by context-adaptive entropy coding of the residuals. However, the models for context determination proposed in the literature have been designed using ad-hoc techniques. In this paper, we take an alternative approach where we fix a simpler context model and then rely on a systematic technique to exploit spatial correlation for efficient compression. The essential idea is to decompose the image into binary bitmaps such that the spatial correlation that exists among non-binary symbols is captured as correlation among a few bit positions. The proposed scheme then encodes the bitmaps in a particular order based on the simple context model. However, instead of encoding a bitmap as a whole, we partition it into rectangular blocks, induced by a binary tree, and then independently encode the blocks. The motivation for partitioning is to explicitly identify the blocks within which the statistical correlation remains the same. On a set of standard test images, the proposed scheme, using the same predictor as JPEG-LS, achieved an overall bit-rate saving of 1.56% against JPEG-LS.
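The recursive partitioning step is the part of the scheme that isolates homogeneous regions before entropy coding. A minimal sketch of that step is given below: a binary bitmap is split along its longer side until every block is all-zeros or all-ones. The context modelling and arithmetic coding stages are not reproduced, and the toy bitmap is not from the paper.

```python
import numpy as np

def partition(bitmap, y, x, h, w, out):
    """Recursively split a binary block until it is homogeneous (all 0s or all 1s).
    Homogeneous blocks would then be coded as a unit; here we just record them."""
    block = bitmap[y:y + h, x:x + w]
    if block.min() == block.max():
        out.append((y, x, h, w, int(block[0, 0])))
        return
    if h >= w:                                   # split along the longer side
        partition(bitmap, y, x, h // 2, w, out)
        partition(bitmap, y + h // 2, x, h - h // 2, w, out)
    else:
        partition(bitmap, y, x, h, w // 2, out)
        partition(bitmap, y, x + w // 2, h, w - w // 2, out)

bitmap = (np.arange(64).reshape(8, 8) >= 32).astype(np.uint8)   # toy bitmap
leaves = []
partition(bitmap, 0, 0, *bitmap.shape, leaves)
print(len(leaves), "homogeneous blocks")                         # 2 for this toy input
```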
Lossless image coding using binary tree decomposition of prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur , Shahriyar, Shampa , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference proceedings
- Full Text: false
- Description: State-of-the-art lossless image compression schemes, such as JPEG-LS and CALIC, have been proposed in the context-adaptive predictive coding framework. These schemes involve a prediction step followed by context-adaptive entropy coding of the residuals. It can be observed that significant spatial correlation exists among the residuals after prediction. The efficient schemes proposed in the literature rely on context-adaptive entropy coding to exploit this spatial correlation. In this paper, we propose an alternative approach to exploiting this spatial correlation. The proposed scheme also involves a prediction stage. However, we resort to a binary tree based hierarchical decomposition technique to efficiently exploit the spatial correlation. On a set of standard test images, the proposed scheme, using the same predictor as JPEG-LS, achieved an overall compression gain of 2.1% against JPEG-LS.
Undecoded coefficients recovery in distributed video coding by exploiting spatio-temporal correlation: a linear programming approach
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference proceedings
- Relation: Proceedings of IEEE International Conference on Digital Image Computing: Techniques and Applications (DICTA 2013), Hobart, November 26-28th, 2013, p 1-7
- Full Text: false
- Reviewed:
- Description: Distributed video coding (DVC) aims at achieving low-complexity encoding, in contrast to the high-complexity encoding of existing video coding standards. According to the Wyner-Ziv theorem, this can be achieved, under certain conditions, by independent encoding of the frames while resorting to joint decoding. However, the performance of a Wyner-Ziv coding scheme significantly depends on its knowledge of the spatio-temporal correlation of the video. Unfortunately, correlation statistics in a video vary widely along both the spatial and temporal directions. Therefore, we argue that in a feedback-free transform domain DVC scheme the decoder will fail to recover all the transform coefficients with a nonzero probability. Thus, we propose integrating a recovery method with the decoder that aims at recovering the undecoded coefficients by exploiting the spatio-temporal correlation of the video. Furthermore, we extend and modify a recovery scheme, recently proposed in the context of images, for DVC so that it exploits both spatial and temporal correlations in recovering the undecoded coefficients. The essential idea of this scheme is to formulate the recovery problem as a linear optimization problem which can be solved efficiently using linear programming. Our simulation results demonstrated that the proposed scheme can significantly improve the PSNR and visual quality of the erroneous video frames produced by a DVC decoder.
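The linear-programming idea can be illustrated on a one-dimensional toy problem: decoded samples are fixed, undecoded samples are only known to lie in a quantization interval, and smoothness is enforced by minimizing the total absolute difference between neighbours. The sketch below solves that toy problem with scipy.optimize.linprog; the sample values and interval bounds are hypothetical, and the paper's actual spatio-temporal formulation is considerably richer.

```python
import numpy as np
from scipy.optimize import linprog

# Known samples are fixed; undecoded samples are only known to lie in a
# quantization interval [lo, hi].  None marks an undecoded sample.
known = [100, None, None, 112, None, 120]
intervals = {1: (96, 128), 2: (96, 128), 4: (112, 144)}     # hypothetical bins

n = len(known)
m = n - 1                                        # number of neighbour differences
c = np.concatenate([np.zeros(n), np.ones(m)])    # minimise sum of |x[i+1] - x[i]|

A_ub, b_ub = [], []
for i in range(m):
    row = np.zeros(n + m)
    row[i], row[i + 1], row[n + i] = -1, 1, -1   # x[i+1] - x[i] <= t[i]
    A_ub.append(row); b_ub.append(0)
    row = np.zeros(n + m)
    row[i], row[i + 1], row[n + i] = 1, -1, -1   # x[i] - x[i+1] <= t[i]
    A_ub.append(row); b_ub.append(0)

bounds = [(v, v) if v is not None else intervals[i] for i, v in enumerate(known)]
bounds += [(0, None)] * m                        # auxiliary variables t[i] >= 0

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
print(np.round(res.x[:n], 1))                    # recovered sample values
```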
An efficient cooperative lane-changing algorithm for sensor- and communication-enabled automated vehicles
- Authors: Awal, Tanveer , Murshed, Manzur , Ali, Mortuza
- Date: 2015
- Type: Text , Conference proceedings
- Full Text: false
- Description: A key goal in transportation systems is to attain efficient road traffic through minimization of trip time, fuel consumption, and pollutant emission without compromising safety. In dense traffic, lane changes and merging are often key causes of safety hazards, traffic breakdowns, and travel delays. In this paper, we propose an efficient cooperative lane-changing algorithm, CLA, for sensor- and communication-enabled automated vehicles to reduce lane-changing bottlenecks. For discretionary lane-changing, we consider the advantages of the subject vehicle, the follower in the current lane, and k (an integer) lag vehicles in the target lane to maximize speed gains. Our algorithm simultaneously minimizes the impact of a lane change on traffic flow and the overall trip time, fuel consumption, and pollutant emission. For mandatory lane-changing, CLA dissociates the decision-making point from the actual mandatory lane-changing point and computes a suitable lane-changing slot in order to minimize lane-changing (merging) time. Our algorithm outperforms the potential cooperative lane-changing algorithm MOBIL proposed by Kesting et al. [1] in terms of merging time and rate, waiting time, fuel consumption, average velocity, and flow (especially at the point in front of the merging point), at the cost of slightly increased average trip time for the main-road vehicles compared to MOBIL. We also highlight important directions for further research.
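CLA itself (with its k lag vehicles and trip-cost terms) is not reproduced here, but the MOBIL baseline it is compared against has a compact, well-known form: a safety check on the new follower plus a politeness-weighted incentive over the affected vehicles. A minimal sketch of that baseline criterion follows; the parameter values are typical defaults, not values from the paper.

```python
def mobil_lane_change(acc_self_new, acc_self_old,
                      acc_new_follower_new, acc_new_follower_old,
                      acc_old_follower_new, acc_old_follower_old,
                      politeness=0.3, a_threshold=0.1, b_safe=4.0):
    """MOBIL criterion (Kesting et al.): change lanes only if the manoeuvre is safe
    for the new follower and the politeness-weighted acceleration gain of the
    subject and the affected followers exceeds a threshold."""
    if acc_new_follower_new < -b_safe:                       # safety criterion
        return False
    incentive = (acc_self_new - acc_self_old
                 + politeness * ((acc_new_follower_new - acc_new_follower_old)
                                 + (acc_old_follower_new - acc_old_follower_old)))
    return incentive > a_threshold                           # incentive criterion

# Example: the subject gains 0.8 m/s^2 while followers lose a little -> change allowed.
print(mobil_lane_change(1.2, 0.4, -1.0, -0.2, 0.3, 0.1))     # True
```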
Adaptive weighted non-parametric background model for efficient video coding
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2017
- Type: Text , Journal article
- Relation: Neurocomputing Vol. 226, no. (2017), p. 35-45
- Full Text:
- Reviewed:
- Description: Dynamic background frame based video coding using mixture of Gaussian (MoG) based background modelling has achieved better rate-distortion performance than the H.264 standard. However, such schemes suffer from high computation time, low coding efficiency for dynamic videos, and a prior knowledge requirement of the video content. In this paper, we introduce the application of the non-parametric (NP) background modelling approach to the video coding domain. We present a novel background modelling technique, called weighted non-parametric (WNP), which balances the historical trend and the recent values of the pixel intensities adaptively based on the content and characteristics of any particular video. WNP is successfully embedded into the latest HEVC video coding standard for better rate-distortion performance. Moreover, a novel scene adaptive non-parametric (SANP) technique is also developed to handle video sequences with highly dynamic backgrounds. Being non-parametric, the proposed techniques naturally exhibit superior performance in dynamic background modelling without a priori knowledge of the video data distribution.
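At the pixel level, a non-parametric background model scores a new intensity against a kernel density estimate built from that pixel's history. The sketch below illustrates this with a weighted Gaussian kernel estimate; the recency-based weights, bandwidth, and sample values are placeholders rather than the adaptive weighting actually proposed for WNP.

```python
import numpy as np

def background_probability(history, weights, value, bandwidth=8.0):
    """Weighted kernel density estimate of a pixel taking `value`, given its intensity
    history.  The weights are illustrative; WNP's adaptive balance between historical
    trend and recent values is not reproduced here."""
    diffs = (value - np.asarray(history, dtype=float)) / bandwidth
    kernel = np.exp(-0.5 * diffs ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    return float(np.dot(weights, kernel) / weights.sum())

history = [118, 120, 119, 150, 121]                   # one pixel over five frames
weights = np.linspace(0.5, 1.0, len(history))         # favour recent frames (assumption)
print(background_probability(history, weights, 122))  # high  -> likely background
print(background_probability(history, weights, 200))  # ~zero -> likely foreground
```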
An efficient video coding technique using a novel non-parametric background model
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2014; Chengdu; China; 14th-18th July 2014 p. 1-6
- Full Text:
- Reviewed:
- Description: A video coding technique with a background frame, extracted from mixture of Gaussian (MoG) based background modeling, provides better rate-distortion performance than the latest video coding standard by exploiting coding efficiency in uncovered background areas. However, it suffers from high computation time, low coding efficiency for dynamic videos, and a prior knowledge requirement of video content. In this paper, we present a novel adaptive weighted non-parametric (WNP) background modeling technique and successfully embed it into the HEVC video coding standard. Being non-parametric (NP), the proposed technique naturally exhibits superior performance in dynamic background scenarios compared to the MoG-based technique without a priori knowledge of the video data distribution. In addition, the WNP technique significantly reduces the noise-related drawbacks of existing NP techniques to provide better quality video coding with much lower computation time, as demonstrated through extensive comparative studies against NP, MoG, and HEVC techniques.
A novel video coding scheme using a scene adaptive non-parametric background model
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference paper
- Relation: 16th IEEE International Workshop on Multimedia Signal Processing, MMSP 2014 p. 1-6
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: Video coding techniques utilising background frames provide better rate-distortion performance by exploiting coding efficiency in uncovered background areas compared to the latest video coding standard. Parametric approaches, such as mixture of Gaussian (MoG) based background modeling, have been widely used; however, they require prior knowledge about the test videos for parameter estimation. Recently introduced non-parametric (NP) based background modeling techniques successfully improved video coding performance through an HEVC-integrated coding scheme. The inherent nature of the NP technique naturally exhibits superior performance in dynamic background scenarios compared to the MoG based technique without a priori knowledge of video data distribution. Although NP based coding schemes showed promising coding performance, they suffer from a number of key challenges: (a) determination of the optimal subset of training frames for generating a suitable background that can be used as a reference frame during coding, (b) incorporating dynamic changes in the background effectively after the initial background frame is generated, (c) managing frequent scene changes that lead to performance degradation, and (d) optimizing the coding quality ratio between an I-frame and other frames under bit-rate constraints. In this study we develop a new scene adaptive coding scheme using the NP based technique, capable of solving the current challenges by incorporating a new continuously updating background generation process. Extensive experimental results are also provided to validate the effectiveness of the new scheme.
Adaptive contention window based wireless medium access mechanism for periodic sensor data collection applications
- Authors: Haque, Ahsanul , Murshed, Manzur , Ali, Mortuza
- Date: 2009
- Type: Text , Conference paper
- Relation: Communications (MICC), 2009 IEEE 9th Malaysia International Conference
- Full Text: false
- Reviewed:
- Description: Contention window based medium access protocols are of practical interest in low data rate wireless communication scenarios. In periodic data collection applications, nodes mostly produce small data packets that are collected by the cluster heads and routed to the base station. In this paper, a new mechanism for adaptively selecting the size of the contention window based on the number of contending nodes is presented. The proposed scheme effectively reduces the number of collisions in periodic collection scenarios with a fixed number of nodes. Theoretical analysis and simulation results demonstrate that, in periodic data collection processes, the new protocol reduces the data collection time significantly compared to IEEE 802.11 and the recently proposed Synchronized Shared Contention Window (SSCW) based scheme.
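The benefit of tying the contention window to the number of contending nodes can be seen in a toy slotted-contention simulation: too small a window causes collisions, too large a window wastes empty slots. The sketch below simply sets the window proportional to the node count; the paper's actual adaptation rule and analysis are not reproduced.

```python
import random

def simulate_round(num_nodes, cw):
    """One contention round: every node picks a slot uniformly in [0, cw).
    The earliest slot wins only if exactly one node chose it."""
    slots = [random.randrange(cw) for _ in range(num_nodes)]
    earliest = min(slots)
    return slots.count(earliest) == 1            # True = successful transmission

def success_rate(num_nodes, cw, rounds=10000):
    return sum(simulate_round(num_nodes, cw) for _ in range(rounds)) / rounds

# CW proportional to the number of contenders (illustrative rule, not the paper's).
for n in (5, 10, 20):
    print(n, round(success_rate(n, cw=2 * n), 3))
```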
Efficient contention resolution in MAC protocol for periodic data collection in WSNs
- Authors: Haque, Ahsanul , Murshed, Manzur , Ali, Mortuza
- Date: 2010
- Type: Text , Conference paper
- Relation: Paper presented at 6th International Wireless Communications and Mobile Computing Conference
- Full Text: false
- Reviewed:
- Description: Due to the infrequent medium access in Wireless Sensor Networks (WSNs), their MAC protocols are mostly based on CSMA. In this paper, we present an efficient contention resolution scheme for CSMA based MAC protocols which is suitable for periodic data collection in WSNs. Taking into account that the number of nodes in a single cluster is fixed, this protocol uses a successively decreasing contention window. It is characterized by non-overlapping contention windows, which maintain a constant successful transmission rate. It significantly decreases data collection time by minimizing the time wasted due to collisions. At the same time, by using an adaptive CW, it reduces the time wasted in empty slots. Experimental results demonstrate that, in periodic data collection within a single-hop cluster, this scheme outperforms the recently proposed Synchronous Shared Contention Window (SSCW) based scheme in terms of time wastage and throughput.
High quality region-of-interest coding for video conferencing based remote general practitioner training
- Authors: Murshed, Manzur , Siddique, Md Atiur Rahman , Islam, Saikat , Ali, Mortuza , Lu, Guojun , Villanueva, Elmer , Brown, James
- Date: 2013
- Type: Text , Conference paper
- Relation: Proceedings of the International Conference on eHealth, Telemedicine, and Social Medicine (eTELEMED 2013), Wilmington, DE, 1st October 2013 pg 240-245
- Full Text: false
- Reviewed:
Verifiable and privacy preserving electronic voting with untrusted machines
- Authors: Murshed, Manzur , Sabrina, Tishna , Iqbal, Anindya , Ali, Mortuza
- Date: 2013
- Type: Text , Conference proceedings
- Relation: Proceedings of the 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom 2013) Melbourne, Vic, 16-18th July, 2013 p. 798-804
- Full Text: false
- Reviewed:
- Description: Designing a trustworthy voting system that uses electronic voting machines (EVMs) for efficiency and accuracy is a challenging task. It is difficult, if not impossible, to ensure the trustworthiness of EVMs that possess computation, storage, and communication capabilities. Thus an electronic voting system that does not assume trusted EVMs is clearly desirable. In this paper, we have proposed a k-anonymized electronic voting scheme that achieves this goal by assuming a hardware-controlled trusted random number generator external to the EVM. The proposed scheme relies on a k-anonymization technique to protect privacy and resorts to joint de-anonymization of the votes for counting. Since the joint de-anonymization takes into account all the votes, it is difficult to manipulate an individual vote, even by the EVM, without being detected. Besides the anonymization technique, the proposed scheme relies on standard cryptographic hashing and the concept of floating receipt to provide end-to-end verifiability that prevents coercion or vote trading.
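Only the "standard cryptographic hashing" ingredient lends itself to a tiny illustration: a receipt can carry a hash of the ballot combined with a nonce, so it can later be verified without revealing the vote on its own. The sketch below shows that generic commitment step only; the k-anonymization, joint de-anonymization, and floating-receipt mechanics of the actual scheme are not reproduced, and the software nonce here merely stands in for the external hardware random number generator the paper assumes.

```python
import hashlib
import secrets

def commit_vote(vote: str):
    """Generic hash commitment to a ballot; the nonce stands in for the
    hardware-controlled random number generator external to the EVM."""
    nonce = secrets.token_bytes(16)
    digest = hashlib.sha256(nonce + vote.encode()).hexdigest()
    return digest, nonce                         # digest goes on the receipt

def verify(digest: str, nonce: bytes, claimed_vote: str) -> bool:
    return hashlib.sha256(nonce + claimed_vote.encode()).hexdigest() == digest

receipt, nonce = commit_vote("candidate-A")
print(verify(receipt, nonce, "candidate-A"))     # True
print(verify(receipt, nonce, "candidate-B"))     # False
```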
Efficient coding of depth map by exploiting temporal correlation
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 International Conference on Digital Image Computing : Techniques and Applications (DICTA); Wollongong, Australia; 25th-27th November 2014
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Description: With the growing demand for 3D and multi-view video content, efficient depth data coding has become a vital issue in the image and video coding area. In this paper, we propose a simple depth coding scheme using multiple prediction modes that exploit the temporal correlation of the depth map. Current depth coding techniques mostly depend on intra-coding modes, which cannot take advantage of the temporal redundancy in the depth maps and the higher spatial redundancy in inter-predicted depth residuals. Depth maps are characterized by smooth regions with sharp edges that play an important role in the view synthesis process. As depth maps are more sensitive to coding errors, the use of transformation or the approximation of edges by explicit edge modelling has an impact on view synthesis quality. Moreover, lossy compression of the depth map brings additional geometrical distortion to the synthesized view. In this paper, we have demonstrated that encoding inter-coded depth block residuals with quantization in the pixel domain is more efficient than intra-coding techniques relying on explicit edge preservation. On standard 3D video sequences, the proposed depth coding has achieved superior image quality of synthesized views against the new 3D-HEVC standard for depth-map bit-rates of 0.25 bpp or higher.
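The core coding operation described above is simple enough to sketch: predict a depth block from motion-compensated data, quantize the residual uniformly in the pixel domain (no transform), and reconstruct. The example below illustrates this on a hypothetical 4x4 block with a sharp depth edge; the quantization step and pixel values are placeholders, and the entropy coding of the quantized symbols is omitted.

```python
import numpy as np

def code_depth_block(current, predicted, step=4):
    """Pixel-domain uniform quantization of an inter-predicted depth residual.
    Skipping the transform avoids ringing around sharp depth edges; `step` is an
    illustrative quantization step, not a value from the paper."""
    residual = current.astype(int) - predicted.astype(int)
    q = np.round(residual / step).astype(int)              # symbols to entropy-code
    reconstructed = predicted.astype(int) + q * step
    return q, np.clip(reconstructed, 0, 255).astype(np.uint8)

current = np.full((4, 4), 200, np.uint8); current[:, 2:] = 60     # sharp depth edge
predicted = np.full((4, 4), 198, np.uint8); predicted[:, 2:] = 62  # motion-compensated guess
symbols, recon = code_depth_block(current, predicted)
print(np.abs(recon.astype(int) - current.astype(int)).max())       # error bounded by step/2
```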
Inherently edge-preserving depth-map coding without explicit edge detection and approximation
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2014
- Type: Text , Conference proceedings
- Relation: Proceedings - IEEE International Conference on Multimedia and Expo
- Full Text: false
- Description: In emerging 3D video coding, depth has significant importance in view synthesis, scene analysis, and 3D object reconstruction. Depth images can be characterized by sharp edges and large smooth regions. Most of the existing depth coding techniques use intra-coding modes and try to preserve edges explicitly with approximated edge modelling. However, edges can be implicitly preserved as long as transformation is avoided. In this paper, we have demonstrated that inherently edge-preserving encoding of inter-coded block residuals, uniformly quantized in the pixel domain using motion data from the associated texture components, is more efficient than explicitly edge-preserving intra-coding techniques. Experimental results show that the proposed technique has achieved superior image quality of synthesized views against the new 3D-HEVC standard. Lossless applications of the proposed technique have achieved on average 66% and 23% bit-rate savings against 3D-HEVC with negligible quantization and perceptually unnoticeable view synthesis, respectively.
Lossless depth map coding using binary tree based decomposition and context-based arithmetic coding
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2016
- Type: Text , Conference proceedings , Conference paper
- Relation: 2016 IEEE International Conference on Multimedia and Expo, ICME 2016; Seattle, United States; 11th-15th July 2016; published in Proceedings of the 2016 IEEE International Conference on Mulitmedia and Expo Vol. 2016-August, p. 1-6
- Full Text: false
- Reviewed:
- Description: Depth maps are becoming increasingly important in the context of emerging video coding and processing applications. Depth images represent the scene surface and are characterized by areas of smoothly varying grey levels separated by sharp edges at the positions of object boundaries. To enable high-quality view rendering at the receiver side, preservation of these characteristics is important. Lossless coding avoids rendering artifacts in synthesized views caused by depth compression artifacts. In this paper, we propose a binary tree based lossless depth coding scheme that arranges the residual frame into integer or binary residual bitmaps. High spatial correlation in the depth residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as a unit using context-based arithmetic coding. On the standard 3D video sequences, the proposed lossless depth coding has achieved compression ratios in the range of 20 to 80.
Lossless hyperspectral image compression using binary tree based decomposition
- Authors: Shahriyar, Shampa , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (Dicta); Gold Coast, Australia; 30th November-2nd December 2016 p. 428-435
- Full Text:
- Reviewed:
- Description: A hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity" based coding approaches using traditional image coders (e.g. JPEG) to "residual" based approaches using a predictive coder exploiting band-wise correlation for better compression performance. Moreover, as HS images are used in detection or classification, they need to be in their original form; lossy schemes can trim off uninteresting data along with compression, which can be important to specific analysis purposes. A modified lossless HS coder is required to exploit spatial-spectral redundancy using predictive residual coding. Every spectral band of an HS image can be treated as an individual frame of a video to impose inter-band prediction. In this paper, we propose a binary tree based lossless predictive HS coding scheme that arranges the residual frame into an integer residual bitmap. High spatial correlation in the HS residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as a unit using context-based arithmetic coding. On the standard HS data set, the proposed lossless predictive coding has achieved compression ratios in the range of 1.92 to 7.94. In this paper, we also compare the proposed method with mainstream lossless coders (JPEG-LS and lossless HEVC). Against JPEG-LS, HEVC Intra, and HEVC Main, the proposed technique has reduced the bit-rate by 35%, 40%, and 6.79%, respectively, by exploiting spatial correlation in the predicted HS residuals.
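The inter-band prediction idea above can be sketched directly: each spectral band is predicted from the previous one, and only the residuals (which are far smaller for correlated bands) are kept for entropy coding. The example below uses a plain previous-band predictor on a synthetic cube; the paper's residual bitmap decomposition and context-based arithmetic coding are not reproduced.

```python
import numpy as np

def interband_residuals(cube):
    """Treat each spectral band like a video frame: predict band b from band b-1 and
    keep only the residuals (the first band is stored as-is).  A plain previous-band
    predictor is used here purely for illustration."""
    cube = cube.astype(np.int32)
    residuals = np.empty_like(cube)
    residuals[0] = cube[0]
    residuals[1:] = cube[1:] - cube[:-1]         # band-wise (spectral) prediction
    return residuals

rng = np.random.default_rng(0)
base = rng.integers(0, 4096, size=(8, 8))        # synthetic, strongly correlated bands
cube = np.stack([base + 10 * b + rng.integers(-3, 4, size=(8, 8)) for b in range(5)])
res = interband_residuals(cube)
print(int(np.abs(res[1:]).mean()), int(np.abs(cube[1:]).mean()))   # residuals are far smaller
```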