Lossless depth map coding using binary tree based decomposition and context-based arithmetic coding
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2016
- Type: Text , Conference proceedings , Conference paper
- Relation: 2016 IEEE International Conference on Multimedia and Expo, ICME 2016; Seattle, United States; 11th-15th July 2016; published in Proceedings of the 2016 IEEE International Conference on Multimedia and Expo Vol. 2016-August, p. 1-6
- Full Text: false
- Reviewed:
- Description: Depth maps are becoming increasingly important in emerging video coding and processing applications. Depth images represent the scene surface and are characterized by areas of smoothly varying grey levels separated by sharp edges at object boundaries. Preserving these characteristics is important for high-quality view rendering at the receiver side, and lossless coding avoids the rendering artifacts that depth compression would otherwise introduce in synthesized views. In this paper, we propose a binary tree based lossless depth coding scheme that arranges the residual frame into integer or binary residual bitmaps. The high spatial correlation in the depth residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as units using context-based arithmetic coding. On the standard 3D video sequences, the proposed lossless depth coding achieves compression ratios in the range of 20 to 80. © 2016 IEEE.
- Description: Proceedings - IEEE International Conference on Multimedia and Expo
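The binary tree based decomposition described in the abstract can be illustrated with a minimal sketch. This is not the authors' codec: the splitting rule and block representation below are simplifying assumptions, and the context-based arithmetic coder that would code each block as a unit is omitted.

```python
import numpy as np

def decompose(bitmap):
    """Recursively split a binary bitmap along its longer axis until every
    block is homogeneous (all 0s or all 1s); each block would then be coded
    as a single unit by the arithmetic coder."""
    blocks = []

    def split(top, left, h, w):
        region = bitmap[top:top + h, left:left + w]
        if region.min() == region.max():       # homogeneous: emit as one unit
            blocks.append((top, left, h, w, int(region[0, 0])))
        elif h >= w:                           # binary split along the longer axis
            split(top, left, h // 2, w)
            split(top + h // 2, left, h - h // 2, w)
        else:
            split(top, left, h, w // 2)
            split(top, left + w // 2, h, w - w // 2)

    split(0, 0, *bitmap.shape)
    return blocks

# A mostly-zero residual bitmap with one small detail region decomposes into
# a handful of large homogeneous blocks instead of 64 individual pixels.
bm = np.zeros((8, 8), dtype=np.uint8)
bm[2:4, 2:4] = 1
blocks = decompose(bm)
assert sum(h * w for _, _, h, w, _ in blocks) == bm.size   # blocks tile the bitmap
```

The large zero-valued blocks are exactly where such a scheme gains over pixel-by-pixel coding.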
Lossless hyperspectral image compression using binary tree based decomposition
- Authors: Shahriyar, Shampa , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (Dicta); Gold Coast, Australia; 30th November-2nd December 2016 p. 428-435
- Full Text:
- Reviewed:
- Description: A hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store this huge volume of data, we argue that a fundamental shift is required from existing "original pixel intensity"-based coding approaches using traditional image coders (e.g. JPEG) to "residual"-based approaches using a predictive coder that exploits band-wise correlation for better compression performance. Moreover, as HS images are used for detection or classification they need to remain in their original form; lossy schemes can trim off data that appears uninteresting but may be important to specific analysis purposes. A lossless HS coder is therefore required that exploits spatial-spectral redundancy using predictive residual coding. Every spectral band of an HS image can be treated as an individual frame of a video to enable inter-band prediction. In this paper, we propose a binary tree based lossless predictive HS coding scheme that arranges the residual frame into an integer residual bitmap. The high spatial correlation in the HS residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as units using context-based arithmetic coding. On the standard HS data set, the proposed lossless predictive coding achieves compression ratios in the range of 1.92 to 7.94. We also compare the proposed method with mainstream lossless coders (JPEG-LS and lossless HEVC): against JPEG-LS, HEVC Intra and HEVC Main, the proposed technique reduces bit-rate by 35%, 40% and 6.79% respectively by exploiting spatial correlation in predicted HS residuals.
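The inter-band predictive step ("every spectral band treated as an individual frame of a video") can be sketched as follows. The synthetic cube and the simple previous-band predictor are illustrative assumptions, not the paper's exact predictor.

```python
import numpy as np

# Hypothetical HS cube: 4 strongly correlated bands of 8x8 (illustrative data).
rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(8, 8))
cube = np.stack([np.clip(base + b, 0, 255) for b in range(4)])

# Inter-band prediction: band 0 is kept as-is; every later band is predicted
# from the previous band, leaving a small-magnitude residual frame that a
# binary-tree coder can pack into large homogeneous blocks.
residuals = [cube[0]] + [cube[b] - cube[b - 1] for b in range(1, 4)]
assert all(np.abs(r).max() <= 1 for r in residuals[1:])   # residuals are tiny
```

Because the residual frames are near-zero almost everywhere, they are far more compressible than the raw bands.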
A novel depth motion vector coding exploiting spatial and inter-component clustering tendency
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: Visual Communications and Image Processing, VCIP 2015; Singapore; 13th-16th December 2015 p. 1-4
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Reviewed:
- Description: Motion vectors of depth-maps in multiview and free-viewpoint videos exhibit strong spatial as well as inter-component clustering tendency. This paper presents a novel coding technique that first compresses the multidimensional bitmaps of macroblock modes and then encodes only the non-zero components of motion vectors. The bitmaps are partitioned into disjoint cuboids using binary tree based decomposition so that the 0's and 1's are either highly polarized or further sub-partitioning is unlikely to achieve any compression. Each cuboid is entropy-coded as a unit using binary arithmetic coding. This technique exploits the spatial and inter-component correlations efficiently without the restriction of scanning the bitmap in any specific linear order, as run-length coding requires. As encoding of non-zero component values no longer requires denoting the zero value, further compression efficiency is achieved. Experimental results on standard multiview test video sequences have comprehensively demonstrated the superiority of the proposed technique, achieving overall coding gain against the state-of-the-art in the range [22%, 54%] and on average 38%. © 2015 IEEE.
- Description: 2015 Visual Communications and Image Processing, VCIP 2015
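The stopping rule, keep a cuboid whole once its 0's and 1's are polarized enough that further sub-partitioning buys nothing, can be illustrated with a toy 2D cost model. The zeroth-order entropy estimate and the fixed per-split overhead below are assumptions standing in for the actual binary arithmetic coder and tree signalling.

```python
import numpy as np

def entropy_bits(block):
    """Zeroth-order estimate (in bits) of arithmetic-coding a binary block whole."""
    p = block.mean()
    if p in (0.0, 1.0):
        return 0.0
    return block.size * (-p * np.log2(p) - (1 - p) * np.log2(1 - p))

def partition_cost(block, overhead=2.0):
    """Estimated cost of binary-tree partitioning: keep the block whole when
    its 0s/1s are polarized enough that splitting (plus `overhead` bits of
    tree signalling per split) would not pay off."""
    whole = entropy_bits(block)
    if min(block.shape) < 2:
        return whole
    axis = int(np.argmax(block.shape))        # split along the longer axis
    a, b = np.array_split(block, 2, axis=axis)
    return min(whole, overhead + partition_cost(a, overhead) + partition_cost(b, overhead))

# Left half noisy, right half uniformly zero: partitioning isolates the
# polarized half and beats coding the whole frame at one mixed probability.
bm = np.zeros((8, 8), dtype=np.uint8)
bm[:, :4] = np.add.outer(np.arange(8), np.arange(4)) % 2
assert partition_cost(bm) < entropy_bits(bm)
```

The `min(whole, ...)` step is the polarization test: a block whose bits are already heavily skewed toward one value is cheap to code as a unit, so the recursion stops there.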
An efficient cooperative lane-changing algorithm for sensor- and communication-enabled automated vehicles
- Authors: Awal, Tanveer , Murshed, Manzur , Ali, Mortuza
- Date: 2015
- Type: Text , Conference proceedings
- Full Text: false
- Description: A key goal in transportation systems is efficient road traffic through minimization of trip time, fuel consumption and pollutant emission without compromising safety. In dense traffic, lane changes and merging are often key causes of safety hazards, traffic breakdowns and travel delays. In this paper, we propose an efficient cooperative lane-changing algorithm, CLA, for sensor- and communication-enabled automated vehicles to reduce lane-changing bottlenecks. For discretionary lane-changing, we consider the advantages of the subject vehicle, the follower in the current lane and k (an integer) lag vehicles in the target lane to maximize speed gains. The algorithm simultaneously minimizes the impact of a lane change on traffic flow and on the overall trip time, fuel consumption and pollutant emission. For mandatory lane-changing, CLA dissociates the decision-making point from the actual mandatory lane-changing point and computes a suitable lane-changing slot in order to minimize lane-changing (merging) time. CLA outperforms the cooperative lane-changing algorithm MOBIL proposed by Kesting et al. [1] in terms of merging time and rate, waiting time, fuel consumption, average velocity and flow (especially at the point in front of the merging point), at the cost of a slightly increased average trip time for main-road vehicles compared to MOBIL. We also highlight important directions for further research. © 2015 IEEE.
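CLA itself is not specified in the abstract; as a reference point, the MOBIL incentive and safety criteria it is compared against (Kesting et al.) can be sketched as below. The parameter values are illustrative defaults, not values from either paper.

```python
def mobil_decision(a_self_new, a_self_old,
                   a_newfollower_new, a_newfollower_old,
                   a_oldfollower_new, a_oldfollower_old,
                   politeness=0.3, threshold=0.1, b_safe=4.0):
    """MOBIL lane-change test (Kesting et al.): change lanes when the subject's
    acceleration gain outweighs the politeness-weighted disadvantage imposed on
    the old and new followers, and the new follower need not brake harder than
    b_safe (all accelerations in m/s^2)."""
    if a_newfollower_new < -b_safe:          # safety criterion
        return False
    incentive = (a_self_new - a_self_old
                 + politeness * ((a_newfollower_new - a_newfollower_old)
                                 + (a_oldfollower_new - a_oldfollower_old)))
    return incentive > threshold             # incentive criterion

# Subject gains 1.2 m/s^2; followers are only mildly affected: change is worthwhile.
assert mobil_decision(1.2, 0.0, -0.5, 0.0, 0.2, 0.0)
# A marginal 0.05 m/s^2 gain does not clear the changing threshold.
assert not mobil_decision(0.05, 0.0, 0.0, 0.0, 0.0, 0.0)
```

The `politeness` factor is what makes the criterion cooperative: with it set to zero the decision becomes purely egoistic.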
Lossless image coding using binary tree decomposition of prediction residuals
- Authors: Ali, Mortuza , Murshed, Manzur , Shahriyar, Shampa , Paul, Manoranjan
- Date: 2015
- Type: Text , Conference proceedings
- Full Text: false
- Description: State-of-the-art lossless image compression schemes, such as JPEG-LS and CALIC, have been proposed in the context-adaptive predictive coding framework. These schemes involve a prediction step followed by context-adaptive entropy coding of the residuals. Significant spatial correlation remains among the residuals after prediction, and the efficient schemes in the literature rely on context-adaptive entropy coding to exploit it. In this paper, we propose an alternative approach: the scheme still involves a prediction stage, but resorts to a binary tree based hierarchical decomposition technique to exploit the spatial correlation efficiently. On a set of standard test images, the proposed scheme, using the same predictor as JPEG-LS, achieved an overall compression gain of 2.1% against JPEG-LS. © 2015 IEEE.
An efficient video coding technique using a novel non-parametric background model
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2014; Chengdu; China; 14th-18th July 2014 p. 1-6
- Full Text:
- Reviewed:
- Description: Video coding with a background frame extracted by mixture-of-Gaussians (MoG) background modeling provides better rate-distortion performance than the latest video coding standard by exploiting coding efficiency in uncovered background areas. However, it suffers from high computation time, low coding efficiency for dynamic videos, and the need for prior knowledge of video content. In this paper, we present a novel adaptive weighted non-parametric (WNP) background modeling technique and successfully embed it into the HEVC video coding standard. Being non-parametric (NP), the proposed technique naturally performs better in dynamic background scenarios than the MoG-based technique, without a priori knowledge of the video data distribution. In addition, the WNP technique significantly reduces the noise-related drawbacks of existing NP techniques, providing better quality video coding with much lower computation time, as demonstrated through extensive comparative studies against NP, MoG and HEVC techniques.
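The flavour of a weighted non-parametric background test can be sketched as below. The per-pixel sample buffer, recency weights, radius and threshold are illustrative assumptions; the paper's actual WNP weighting is not specified in the abstract.

```python
import numpy as np

def classify_background(samples, weights, pixel, radius=10.0, min_weight=0.5):
    """Sample-based (non-parametric) background test for one pixel location:
    the pixel is background if the total weight of stored samples within
    `radius` of its intensity reaches `min_weight` of the total weight."""
    near = np.abs(samples - pixel) <= radius
    return weights[near].sum() >= min_weight * weights.sum()

# Stored intensity samples for one pixel location, with recency weights
# (a stand-in for adaptive weighting) that downweight a noisy outlier.
samples = np.array([100.0, 102.0, 99.0, 180.0])
weights = np.array([1.0, 1.0, 1.0, 0.2])

assert classify_background(samples, weights, 101)       # near the dominant mode
assert not classify_background(samples, weights, 178)   # near the downweighted outlier
```

No distributional form is assumed, which is the non-parametric property that helps with dynamic backgrounds; the weights let the model discount noisy samples.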
Efficient coding of depth map by exploiting temporal correlation
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 International Conference on Digital Image Computing : Techniques and Applications (DICTA); Wollongong, Australia; 25th-27th November 2014
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text: false
- Description: With the growing demand for 3D and multi-view video content, efficient depth data coding has become a vital issue in image and video coding. In this paper, we propose a simple depth coding scheme with multiple prediction modes that exploits the temporal correlation of depth maps. Current depth coding techniques depend mostly on intra-coding modes, which cannot take advantage of the temporal redundancy in depth maps or the higher spatial redundancy in inter-predicted depth residuals. Depth maps are characterized by smooth regions with sharp edges that play an important role in the view synthesis process. As depth maps are more sensitive to coding errors, transformation or approximation of edges by explicit edge modelling degrades view synthesis quality, and lossy compression of the depth map introduces additional geometric distortion in the synthesized view. We demonstrate that encoding inter-coded depth block residuals with quantization in the pixel domain is more efficient than intra-coding techniques relying on explicit edge preservation. On standard 3D video sequences, the proposed depth coding achieves superior image quality of synthesized views against the new 3D-HEVC standard for depth map bit-rates of 0.25 bpp or higher.
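The central claim, that quantizing inter-predicted residuals directly in the pixel domain bounds the per-pixel error and avoids smearing depth edges through transform basis functions, can be sketched as follows. The residual values and step size are illustrative.

```python
import numpy as np

def quantize_residual(residual, step):
    """Uniform pixel-domain quantization of an inter-predicted depth residual:
    no transform is applied, so sharp depth edges are not spread across basis
    functions, and the error at every pixel is bounded by step/2."""
    return np.round(residual / step).astype(np.int32)

def dequantize_residual(q, step):
    return q * step

residual = np.array([[0, 0, 40], [0, 1, 41]], dtype=np.int32)  # a sharp depth edge
q = quantize_residual(residual, step=4)
rec = dequantize_residual(q, step=4)
# Pixel-wise error never exceeds half the quantization step.
assert np.abs(rec - residual).max() <= 2
```

A transform coder would instead distribute the edge's energy across many coefficients, so coarse quantization rings around the edge; pixel-domain quantization keeps the discontinuity in place.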
Inherently edge-preserving depth-map coding without explicit edge detection and approximation
- Authors: Shahriyar, Shampa , Murshed, Manzur , Ali, Mortuza , Paul, Manoranjan
- Date: 2014
- Type: Text , Conference proceedings
- Full Text: false
- Description: In emerging 3D video coding, depth has significant importance in view synthesis, scene analysis, and 3D object reconstruction. Depth images are characterized by sharp edges and large smooth regions. Most existing depth coding techniques use intra-coding modes and try to preserve edges explicitly with approximate edge modelling. However, edges can be preserved implicitly as long as transformation is avoided. In this paper, we demonstrate that inherently edge-preserving encoding of inter-coded block residuals, uniformly quantized in the pixel domain using motion data from the associated texture components, is more efficient than explicitly edge-preserving intra-coding techniques. Experimental results show that the proposed technique achieves superior image quality of synthesized views against the new 3D-HEVC standard. Lossless applications of the proposed technique have achieved on average 66% and 23% bit-rate savings against 3D-HEVC with negligible quantization and perceptually unnoticeable view-synthesis distortion, respectively.
- Description: Proceedings - IEEE International Conference on Multimedia and Expo
Undecoded coefficients recovery in distributed video coding by exploiting spatio-temporal correlation: a linear programming approach
- Authors: Ali, Mortuza , Murshed, Manzur
- Date: 2013
- Type: Text , Conference proceedings
- Relation: Proceedings of IEEE International Conference on Digital Image Computing: Techniques and Applications (DICTA 2013), Hobart, November 26-28th, 2013, p 1-7
- Full Text: false
- Reviewed:
- Description: Distributed video coding (DVC) aims at low-complexity encoding, in contrast to the high-complexity encoding of the existing video coding standards. According to the Wyner-Ziv theorem this can be achieved, under certain conditions, by encoding the frames independently while decoding them jointly. However, the performance of a Wyner-Ziv coding scheme depends significantly on its knowledge of the spatio-temporal correlation of the video, and correlation statistics vary widely along both the spatial and temporal directions. We therefore argue that in a feedback-free transform-domain DVC scheme the decoder will fail to recover all the transform coefficients with nonzero probability, and we suggest integrating into the decoder a recovery method that reconstructs the undecoded coefficients by exploiting the spatio-temporal correlation of the video. To this end, we extend and modify a recovery scheme, recently proposed in the context of images, so that it exploits both spatial and temporal correlations in recovering the undecoded coefficients. The essential idea is to formulate the recovery problem as a linear optimization problem that can be solved efficiently using linear programming. Our simulation results demonstrate that the proposed scheme can significantly improve the PSNR and visual quality of the erroneous video frames produced by a DVC decoder.
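The "recovery as a linear program" idea can be sketched on a toy 1D example: unknown coefficients are filled in by minimizing total variation (a smoothness prior standing in for spatio-temporal correlation) subject to the successfully decoded values. The signal, the prior and the use of SciPy's `linprog` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

# A decoded 1D coefficient row with two undecoded positions (None):
observed = [0.0, None, 4.0, None, 8.0]
n = len(observed)
m = n - 1                                   # number of neighbour differences

# Variables: x_0..x_{n-1} then t_0..t_{m-1}; minimise sum(t_j) where the
# constraints t_j >= |x_{j+1} - x_j| encode a total-variation smoothness prior.
c = np.concatenate([np.zeros(n), np.ones(m)])
A_ub = []
for j in range(m):
    pos = np.zeros(n + m)                   #  (x_{j+1} - x_j) - t_j <= 0
    pos[j + 1], pos[j], pos[n + j] = 1, -1, -1
    neg = np.zeros(n + m)                   # -(x_{j+1} - x_j) - t_j <= 0
    neg[j + 1], neg[j], neg[n + j] = -1, 1, -1
    A_ub += [pos, neg]
b_ub = np.zeros(2 * m)

# Equality constraints pin the successfully decoded values.
A_eq, b_eq = [], []
for i, v in enumerate(observed):
    if v is not None:
        row = np.zeros(n + m)
        row[i] = 1.0
        A_eq.append(row)
        b_eq.append(v)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * (n + m), method="highs")
x = res.x[:n]
assert res.success and abs(res.fun - 8.0) < 1e-6          # minimal total variation
assert -1e-6 <= x[1] <= 4 + 1e-6 and 4 - 1e-6 <= x[3] <= 8 + 1e-6
```

The absolute values are linearized through the auxiliary `t_j` variables in the standard way, so the whole recovery is a single LP solve.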
Verifiable and privacy preserving electronic voting with untrusted machines
- Authors: Murshed, Manzur , Sabrina, Tishna , Iqbal, Anindya , Ali, Mortuza
- Date: 2013
- Type: Text , Conference proceedings
- Relation: Proceedings of the 12th IEEE International Conference on Trust, Security and Privacy in Computing and Communications (TrustCom 2013) Melbourne, Vic, 16-18th July, 2013 p. 798-804
- Full Text: false
- Reviewed:
- Description: Designing a trustworthy voting system that uses electronic voting machines (EVMs) for efficiency and accuracy is a challenging task. It is difficult, if not impossible, to ensure the trustworthiness of EVMs that possess computation, storage, and communication capabilities, so an electronic voting system that does not assume trusted EVMs is clearly desirable. In this paper, we propose a k-anonymized electronic voting scheme that achieves this goal by assuming a hardware-controlled trusted random number generator external to the EVM. The proposed scheme relies on a k-anonymization technique to protect privacy and resorts to joint de-anonymization of the votes for counting. Since the joint de-anonymization takes all the votes into account, it is difficult to manipulate an individual vote, even for the EVM, without being detected. Beyond the anonymization technique, the scheme relies on standard cryptographic hashing and the concept of a floating receipt to provide end-to-end verifiability that prevents coercion and vote trading.
An improved pipelined processor architecture eliminating branch and jump penalty
- Authors: Hasan, Raquibal , Rahman, M. S. , Hasan, Masud , Hasan, Mahmudul , Ali, Mortuza
- Date: 2010
- Type: Text , Conference proceedings
- Relation: Computer Engineering and Applications (ICCEA), Bali, ICCEA 2010, 19th March 2010; published in 2010 2nd International Conference on Computer Engineering and Applications, ICCEA 2010 Vol. 1, p. 621-625
- Full Text: false
- Reviewed:
- Description: Control dependencies are one of the major limitations on increasing the performance of pipelined processors. This paper deals with eliminating such penalties, presented in the context of the MIPS pipelined processor architecture. We propose an improved pipelined processor architecture that eliminates branch and jump penalties: in the proposed architecture, the CPI for branch and jump instructions is lower than in the MIPS architecture. We also show the design of the cache memory cell required for the improved architecture.
- Description: Second International Conference on Computer Engineering and Applications (ICCEA), 2010
Conversion of Bangla sentence for universal networking language
- Authors: Ali, Md N. Y. , Nurannabi, Abu Mohammad , Ali, Mortuza , Das, Jugal , Ahmed, Golum
- Date: 2010
- Type: Text , Conference proceedings
- Relation: 13th International Conference on Computer and Information Technology, ICCIT 2010; Dhaka, Bangladesh; 23-25 December 2010; published in Computer and Information Technology (ICCIT), 2010 13th International Conference p. 108-113
- Full Text: false
- Reviewed:
- Description: Conversion from Bangla to another native language via the Universal Networking Language (UNL) is in high demand due to the increasing use of Internet-based applications. Since case structure plays a fundamental role in Bangla grammatical structures, this paper presents rules for Bangla case structures that can be used to convert Bangla sentences to UNL expressions. The theoretical analysis shows that the defined rules support successful conversion of Bangla sentences. ©2010 IEEE.
- Description: Proceedings of 2010 13th International Conference on Computer and Information Technology, ICCIT 2010, 23-25 Dec, 2010
Literature on image segmentation based on split-and-merge techniques
- Authors: Faruquzzaman, A. B. M. , Paiker, Nafize , Arafat, Jahidul , Ali, Mortuza , Sorwar, Golam
- Date: 2008
- Type: Text , Conference proceedings , Conference paper
- Relation: ICITA 2008, Cairns, Qld., 23-26 June 2008; published in Proceedings of the 5th International Conference on Information Technology and Applications, pp. 120-125
- Full Text: false
- Reviewed:
- Description: Image segmentation is an active research issue owing to the rapidly increasing use of computers and the Internet, and various algorithms have been proposed for it. Among them, the split-and-merge (SM) algorithm is attractive for its simplicity and effectiveness in image processing, and numerous researchers have worked to overcome its drawbacks and enable sustainable, competent implementations. This paper consolidates the considerations and proposals of these researchers to form a strong base of knowledge for future work. It also highlights a few unresolved drawbacks of the SM algorithm, which should stimulate further research towards a globally optimal algorithm for image segmentation.
- Description: 5th International Conference on Information Technology and Applications, ICITA 2008
Object segmentation based on split and merge algorithm
- Authors: Faruquzzaman, A. B. M. , Paiker, Nafize , Arafat, Jahidul , Karim, Ziaul , Ali, Mortuza
- Date: 2008
- Type: Text , Conference proceedings
- Relation: 2008 IEEE Region 10 Conference, TENCON 2008; Hyderabad, India; 19th-21st November 2008; published in IEEE Region 10 Annual International Conference, Proceedings/TENCON p. 1-4
- Full Text: false
- Reviewed:
- Description: Image segmentation is a challenging problem, and most digital imaging applications require it as a preprocessing step. Among the various algorithms, the split-and-merge (SM) algorithm is attractive for its simplicity and effectiveness in segmenting homogeneous regions; however, it cannot segment all types of objects in an image within a general framework, because most natural objects are not homogeneous. Addressing this issue, a new algorithm, object segmentation based on the split-and-merge algorithm (OSSM), is proposed in this paper, considering image feature stability, inter- and intra-object variability, and human visual perception. A qualitative analysis compares the segmentation results with the basic SM algorithm and a shape-based fuzzy clustering algorithm, object-based image segmentation using fuzzy clustering (OSF). The OSSM algorithm outperforms both the SM and OSF algorithms and hence widens the application area of segmentation algorithms.
- Description: IEEE Region 10 Annual International Conference, Proceedings/TENCON
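A minimal sketch of the classic split-and-merge procedure that the OSSM work builds on: quadtree splitting on an intensity-range homogeneity test, followed by a simplified merge. The homogeneity criteria and the adjacency-free merge are simplifications for illustration, not the OSSM algorithm.

```python
import numpy as np

def split_regions(img, top=0, left=0, size=None, tol=8):
    """Split phase: recursively quadtree-split a square image until each
    block's intensity range is within `tol`; returns homogeneous blocks."""
    if size is None:
        size = img.shape[0]
    block = img[top:top + size, left:left + size]
    if size == 1 or int(block.max()) - int(block.min()) <= tol:
        return [(top, left, size)]
    half = size // 2
    return (split_regions(img, top, left, half, tol)
            + split_regions(img, top, left + half, half, tol)
            + split_regions(img, top + half, left, half, tol)
            + split_regions(img, top + half, left + half, half, tol))

def merge_regions(img, blocks, tol=8):
    """Merge phase (simplified): greedily group blocks whose mean intensities
    are within `tol`, ignoring spatial adjacency for brevity."""
    groups = []
    for t, l, s in blocks:
        mean = img[t:t + s, l:l + s].mean()
        for g in groups:
            if abs(g["mean"] - mean) <= tol:
                g["blocks"].append((t, l, s))
                break
        else:
            groups.append({"mean": mean, "blocks": [(t, l, s)]})
    return groups

# Two flat objects: a bright square on a dark background.
img = np.full((8, 8), 20, dtype=np.uint8)
img[0:4, 0:4] = 200
blocks = split_regions(img)
groups = merge_regions(img, blocks)
assert len(groups) == 2   # the bright object and the background
```

Real SM implementations merge only spatially adjacent regions and recompute group statistics; the OSSM paper's point is that such homogeneity-only criteria fail on non-homogeneous natural objects.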