A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling
- Huda, Shamsul, Yearwood, John, Togneri, Roberto
- Authors: Huda, Shamsul , Yearwood, John , Togneri, Roberto
- Date: 2009
- Type: Text , Journal article
- Relation: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics Vol. 39, no. 1 (2009), p. 182-197
- Full Text:
- Reviewed:
- Description: This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm, the CEL-EM, for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM. The novelty of the CEL-EM is that it is applicable to the estimation of constraint-based models, such as the HMM, that have many constraints and large numbers of parameters and are normally estimated with EM. Two constraint-based versions of the CEL-EM with different fusion strategies are proposed for better estimation of the HMM in ASR. The first uses a traditional constraint-handling mechanism of the EA. The other transforms the constrained optimization problem into an unconstrained problem using Lagrange multipliers. The fusion strategies follow a staged-fusion approach in which EM is plugged into the EA periodically, after the EA has executed for a specific period of time, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) using variable segmentation is also proposed to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM). © 2008 IEEE.
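As an illustration of the staged-fusion idea described in the abstract, the toy Python sketch below alternates a simple elitist evolutionary search with a deterministic local refiner standing in for the EM step. The objective, parameters, and function names are all hypothetical; this is not the authors' CEL-EM, only a minimal sketch of periodically plugging a local optimizer into an EA while preserving the EA's global sampling.

```python
import math
import random

def fitness(x):
    # toy multimodal objective, standing in for the HMM log-likelihood
    return math.sin(5 * x) + math.cos(2 * x) - 0.1 * x * x

def local_refine(x, steps=50, eps=1e-2):
    # deterministic local ascent, standing in for the EM refinement step
    for _ in range(steps):
        for cand in (x - eps, x + eps):
            if fitness(cand) > fitness(x):
                x = cand
    return x

def staged_fusion_search(pop_size=20, generations=30, em_period=5, seed=0):
    rng = random.Random(seed)
    pop = [rng.uniform(-3.0, 3.0) for _ in range(pop_size)]
    for g in range(generations):
        # EA step: elitist selection plus Gaussian mutation
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [x + rng.gauss(0.0, 0.3) for x in elite]
        # staged fusion: plug the local refiner in periodically, keeping
        # the EA's global sampling between refinement stages
        if (g + 1) % em_period == 0:
            pop = [local_refine(x) for x in pop]
    return max(pop, key=fitness)

best = staged_fusion_search()
```

The period `em_period` controls the trade-off the abstract describes: refining every generation collapses the search into local ascent, while refining only occasionally lets the EA keep exploring.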
Estimation of a regression function by maxima of minima of linear functions
- Bagirov, Adil, Clausen, Conny, Kohler, Michael
- Authors: Bagirov, Adil , Clausen, Conny , Kohler, Michael
- Date: 2009
- Type: Text , Journal article
- Relation: IEEE Transactions on Information Theory Vol. 55, no. 2 (2009), p. 833-845
- Full Text:
- Reviewed:
- Description: In this paper, estimation of a regression function from independent and identically distributed random variables is considered. Estimates are defined by minimization of the empirical L2 risk over a class of functions, which are defined as maxima of minima of linear functions. Results concerning the rate of convergence of the estimates are derived. In particular, it is shown that for smooth regression functions satisfying the assumption of single index models, the estimate is able to achieve (up to some logarithmic factor) the corresponding optimal one-dimensional rate of convergence. Hence, under these assumptions, the estimate is able to circumvent the so-called curse of dimensionality. The small sample behavior of the estimates is illustrated by applying them to simulated data. © 2009 IEEE.
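To make the function class above concrete, the hypothetical sketch below evaluates a max-of-min of linear functions and shows how two familiar piecewise-linear shapes fall inside the class; the group layout is an illustration, not the estimator studied in the paper.

```python
def maxmin(x, groups):
    # f(x) = max over groups of (min over the group's (slope, intercept) pairs)
    return max(min(a * x + b for a, b in group) for group in groups)

# |x| = max(x, -x): two groups, each holding a single linear function
abs_groups = [[(1.0, 0.0)], [(-1.0, 0.0)]]

# a "tent" floored at 0: max( min(x + 1, 1 - x), 0 )
tent_groups = [[(1.0, 1.0), (-1.0, 1.0)], [(0.0, 0.0)]]
```

The paper's estimate minimizes the empirical L2 risk over exactly such representations, with the number of groups and lines per group governing model complexity.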
An L-2-Boosting Algorithm for Estimation of a Regression Function
- Bagirov, Adil, Clausen, Conny, Kohler, Michael
- Authors: Bagirov, Adil , Clausen, Conny , Kohler, Michael
- Date: 2010
- Type: Text , Journal article
- Relation: IEEE Transactions on Information Theory Vol. 56, no. 3 (2010), p. 1417-1429
- Full Text:
- Reviewed:
- Description: An L2-boosting algorithm for estimation of a regression function from random design is presented. It consists of repeatedly fitting a function from a fixed nonlinear function space to the residuals of the data by least squares and defining the estimate as a linear combination of the resulting least squares estimates. Sample splitting is used to decide after how many iterations of smoothing the residuals the algorithm terminates. The rate of convergence of the algorithm is analyzed in the case of an unbounded response variable. The method is used to fit a sum of maxima of minima of linear functions to a given data set, and is compared with other nonparametric regression estimates using simulated data.
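The residual-fitting loop described in the abstract can be sketched with a much simpler base learner than the paper's max-min functions. Below, a hypothetical regression stump plays the role of the fixed function space; the shrinkage factor and stopping rule are illustrative choices, not the paper's data-splitting criterion.

```python
def fit_stump(xs, rs):
    # least-squares regression stump on 1-D inputs: pick the split
    # threshold minimising the squared error of two constant pieces
    best = None
    for t in sorted(set(xs))[:-1]:
        left = [r for x, r in zip(xs, rs) if x <= t]
        right = [r for x, r in zip(xs, rs) if x > t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - ml) ** 2 for r in left)
               + sum((r - mr) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return lambda x, t=t, ml=ml, mr=mr: ml if x <= t else mr

def l2_boost(xs, ys, iters=10, shrink=0.5):
    preds = [0.0] * len(xs)
    learners = []
    for _ in range(iters):
        residuals = [y - p for y, p in zip(ys, preds)]
        h = fit_stump(xs, residuals)          # fit the current residuals
        learners.append(h)
        preds = [p + shrink * h(x) for p, x in zip(preds, xs)]
    # final estimate: a linear combination of the fitted base learners
    return lambda x: sum(shrink * h(x) for h in learners)

f = l2_boost([0.0, 1.0, 2.0, 3.0], [0.0, 0.0, 1.0, 1.0], iters=12)
```

Each iteration shrinks the remaining residual geometrically, which is why the number of iterations (chosen by sample splitting in the paper) acts as the smoothing parameter.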
Improving deep forest by confidence screening
- Pang, Ming, Ting, Kaiming, Zhao, Peng, Zhou, Zhi-Hua
- Authors: Pang, Ming , Ting, Kaiming , Zhao, Peng , Zhou, Zhi-Hua
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 2018 Ieee International Conference on Data Mining; Singapore, Singapore; 17th-20th November 2018 p. 1194-1199
- Full Text:
- Reviewed:
- Description: Most studies of deep learning are based on neural network models, in which many layers of parameterized nonlinear differentiable modules are trained by backpropagation. Recently, it has been shown that deep learning can also be realized by non-differentiable modules without backpropagation training, in an approach called deep forest. Its representation learning process is based on a cascade of cascades of decision tree forests, where the high memory requirement and high time cost inhibit the training of large models. In this paper, we propose a simple yet effective approach to improve the efficiency of deep forest. The key idea is to pass instances with high confidence directly to the final stage rather than through all the levels. We also provide a theoretical analysis suggesting a means to vary the model complexity from low to high as the level in the cascade increases, which further reduces the memory requirement and time cost. Our experiments show that the proposed approach achieves highly competitive predictive performance while reducing time cost and memory requirement by up to one order of magnitude.
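The confidence-screening step can be sketched in a few lines: at each cascade level, instances whose predicted class probability clears a threshold are finalized and only the rest continue. The function name, threshold value, and probability layout below are illustrative assumptions, not the paper's implementation.

```python
def confidence_screen(probs, threshold=0.9):
    # split instance indices: confidently predicted ones stop at this
    # cascade level, the rest pass through to the next level
    confident, passed = [], []
    for i, p in enumerate(probs):
        (confident if max(p) >= threshold else passed).append(i)
    return confident, passed

# per-instance class-probability vectors from one cascade level
level_probs = [[0.95, 0.05], [0.60, 0.40], [0.15, 0.85]]
done, remaining = confidence_screen(level_probs)
```

Because later levels only see the `remaining` instances, both the memory footprint and the training time of deeper levels shrink, which is the efficiency gain the abstract reports.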
A new building mask using the gradient of heights for automatic building extraction
- Siddiqui, Fasahat, Awrangjeb, Mohammad, Teng, Shyh, Lu, Guojun
- Authors: Siddiqui, Fasahat , Awrangjeb, Mohammad , Teng, Shyh , Lu, Guojun
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (Dicta); Gold Coast, Australia; 30th November-2nd December 2016 p. 288-294
- Full Text:
- Reviewed:
- Description: A number of building detection methods have been proposed in the literature. However, they are not effective in detecting small buildings (typically, 50 m²) and buildings with transparent roofs, owing to the way area thresholds and ground points are used. This paper proposes a new building mask that overcomes these limitations and enables detection of buildings that are small in size as well as those with transparent roof materials. The proposed building detection method transforms the non-ground height information into an intensity image and then analyses the gradient information in the image. It uses a small area threshold of 1 m² and is thereby able to detect small buildings such as garden sheds. The use of non-ground points allows analysis of the gradient on all types of roof materials, so the method is also able to detect buildings with transparent roofs. Our experimental results show that the proposed method can successfully extract buildings even when their roofs are small and/or transparent, thereby achieving relatively higher average completeness and quality.
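A minimal sketch of the gradient idea, under the assumption that the height information has already been rasterized into a grid: cells with small height gradients to all neighbours (flat roof patches) are kept, while high-gradient cells (roof edges, canopy) are not. The function name and threshold are hypothetical, not the authors' mask construction.

```python
def gradient_mask(height, max_grad=0.5):
    # flag cells whose height gradient to every 4-neighbour stays small:
    # flat roof patches pass, high-gradient edges and canopy do not
    rows, cols = len(height), len(height[0])
    mask = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            grads = [abs(height[r][c] - height[nr][nc])
                     for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                     if 0 <= nr < rows and 0 <= nc < cols]
            mask[r][c] = max(grads) <= max_grad
    return mask

# a flat 5 m roof (left two columns) next to ground-level cells
heights = [[5.0, 5.0, 0.0],
           [5.0, 5.0, 0.0],
           [5.0, 5.0, 0.0]]
m = gradient_mask(heights)
```

Because the gradient is computed on non-ground heights rather than return intensity, the same test applies to transparent roof materials, as the abstract notes.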
A model for the introduction of Ayurvedic and Allopathic Electronic Health Records in Sri Lanka
- Stranieri, Andrew, Sahama, Tony, Butler-Henderson, Kerryn, Perera, Kamal
- Authors: Stranieri, Andrew , Sahama, Tony , Butler-Henderson, Kerryn , Perera, Kamal
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 IEEE International Symposium on Technology and Society; Trivandrum, Kerala, India; 20th-22nd October 2016 p. 56-61
- Full Text:
- Reviewed:
- Description: Fully integrated electronic health records (EHRs) provide healthcare providers and patients access to records across a health care system and promise efficient and effective provision of health care. However, fully integrated records have proven to be very expensive and difficult to establish. Currently, EHRs have been developed largely to accommodate Western medicine events. These barriers impact the introduction of EHRs in Sri Lanka, where health budgets are already stretched and Ayurvedic medicine is routinely practiced alongside Allopathic medicine. This article identifies requirements for EHRs in the Sri Lankan context and advances a model for their introduction that suits that context. The model is justified by drawing on insights and experiences with EHRs in Western nations.
Joint texture and depth coding using cuboid data compression
- Paul, Manoranjan, Chakraborty, Subrata, Murshed, Manzur, Podder, Pallab
- Authors: Paul, Manoranjan , Chakraborty, Subrata , Murshed, Manzur , Podder, Pallab
- Date: 2015
- Type: Text , Conference proceedings
- Relation: 2015 18th International Conference on Computer and Information Technology (ICCIT); Dhaka, Bangladesh; 21st-23rd December 2015 p. 138-143
- Full Text:
- Reviewed:
- Description: The latest multiview video coding (MVC) standards, such as 3D-HEVC and H.264/MVC, normally encode texture and depth videos separately. A significant amount of rate-distortion and computational performance is sacrificed by separate encoding because joint information is not exploited. Separate encoding also creates a synchronization issue for 3D scene formation in the decoder. Moreover, the hierarchical frame referencing architecture in MVC introduces random-access frame delay. In this paper we develop an encoder and decoder framework in which texture and depth video are encoded jointly by forming a 3D cuboid and encoding it with high-dimensional entropy coding. The results from our experiments show that the proposed framework outperforms 3D-HEVC in rate-distortion performance and significantly reduces computational time by reducing random-access frame delay.
Lossless hyperspectral image compression using binary tree based decomposition
- Shahriyar, Shampa, Paul, Manoranjan, Murshed, Manzur, Ali, Mortuza
- Authors: Shahriyar, Shampa , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (Dicta); Gold Coast, Australia; 30th November-2nd December 2016 p. 428-435
- Full Text:
- Reviewed:
- Description: A hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing "original pixel intensity"-based coding approaches using traditional image coders (e.g., JPEG) to "residual"-based approaches using a predictive coder that exploits band-wise correlation for better compression performance. Moreover, as HS images are used in detection or classification, they need to be in original form; lossy schemes can trim off data along with compression that may be important to specific analysis purposes. A lossless HS coder is therefore required that exploits spatial-spectral redundancy using predictive residual coding. Every spectral band of an HS image can be treated as if it were an individual frame of a video, allowing inter-band prediction. In this paper, we propose a binary tree based lossless predictive HS coding scheme that arranges the residual frame into an integer residual bitmap. High spatial correlation in the HS residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as a unit using context-based arithmetic coding. On the standard HS data set, the proposed lossless predictive coding achieves compression ratios in the range of 1.92 to 7.94. We compare the proposed method with mainstream lossless coders (JPEG-LS and lossless HEVC). Against JPEG-LS, HEVC Intra, and HEVC Main, the proposed technique reduces bit-rate by 35%, 40%, and 6.79%, respectively, by exploiting spatial correlation in the predicted HS residuals.
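The inter-band prediction that underpins the residual-based approach can be sketched as follows: keep the first band as-is and store each later band as integer differences from its predecessor. This toy roundtrip (with made-up data and names) only illustrates why prediction is lossless; the paper's binary tree decomposition and arithmetic coding of the residual bitmap are not shown.

```python
def band_residuals(bands):
    # inter-band prediction: keep the first band, predict each later band
    # from the previous one and store only the integer residuals
    res = [list(bands[0])]
    for prev, cur in zip(bands, bands[1:]):
        res.append([c - p for c, p in zip(cur, prev)])
    return res

def reconstruct(res):
    # invert the prediction: add each residual back onto the previous band
    bands = [list(res[0])]
    for r in res[1:]:
        bands.append([p + d for p, d in zip(bands[-1], r)])
    return bands

cube = [[10, 12, 11], [11, 13, 11], [13, 15, 14]]  # 3 bands, 3 pixels each
```

Because adjacent bands are highly correlated, the residuals are small integers clustered near zero, which is what makes the subsequent block-based entropy coding effective.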
ECG reduction for wearable sensor
- Allami, Ragheed, Stranieri, Andrew, Balasubramanian, Venki, Jelinek, Herbert
- Authors: Allami, Ragheed , Stranieri, Andrew , Balasubramanian, Venki , Jelinek, Herbert
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 12th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS); Naples, Italy; 28th November-1st December 2016 p. 520-525
- Full Text:
- Reviewed:
- Description: The transmission, storage, and analysis of electrocardiogram (ECG) data in real time is essential for remote patient monitoring with wearable ECG devices and mobile ECG contexts. However, this remains a challenge to achieve within the processing power and storage capacity of mobile devices. ECG reduction algorithms have an important role to play in reducing the processing requirements for mobile devices; however, many existing ECG reduction and compression algorithms are computationally expensive to execute on mobile devices and have not been designed for real-time computation and incremental data arrival. In this paper, we describe a computationally naive, yet effective, algorithm that achieves high ECG reduction rates while maintaining key diagnostic features including the PR, QRS, ST, QT, and RR intervals. While reduction does not enable ECG waves to be reproduced, the ability to transmit key indicators (diagnostic features) using minimal computational resources is particularly useful in mobile health contexts involving power-constrained sensors and devices. Results indicate that the proposed reduction algorithm outperforms other ECG reduction algorithms at a reduction/compression ratio (CR) of 5:1. If power or processing capacity is low, the algorithm can readily switch to a compression ratio of up to 10:1 while still maintaining an error rate below 10%.
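One of the interval features mentioned above, the RR interval, can be derived cheaply once R peaks are located. The sketch below uses a deliberately crude threshold-and-local-maximum peak detector on synthetic data; the function names, thresholds, and detector are hypothetical stand-ins, not the paper's reduction algorithm.

```python
def detect_r_peaks(signal, threshold):
    # crude R-peak stand-in: local maxima above an amplitude threshold
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > threshold
            and signal[i] >= signal[i - 1]
            and signal[i] > signal[i + 1]]

def rr_intervals(peaks, fs):
    # RR intervals in seconds, from peak sample indices at sampling rate fs
    return [(b - a) / fs for a, b in zip(peaks, peaks[1:])]

ecg = [0.0] * 50
ecg[10] = ecg[30] = 1.0          # two synthetic beats, 20 samples apart
peaks = detect_r_peaks(ecg, 0.5)
```

Transmitting only such interval indicators, rather than the raw waveform, is what lets a power-constrained sensor achieve the high reduction ratios the abstract reports, at the cost of not being able to reproduce the ECG waves.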
An improved building detection in complex sites using the LIDAR height variation and point density
- Siddiqui, Fasahat, Teng, Shyh, Lu, Guojun, Awrangjeb, Mohammad
- Authors: Siddiqui, Fasahat , Teng, Shyh , Lu, Guojun , Awrangjeb, Mohammad
- Date: 2013
- Type: Text , Conference proceedings
- Relation: 2013 28th International Conference on Image and Vision Computing New Zealand, IVCNZ 2013; Wellington; New Zealand; 27th-29th November 2013; published in International Conference Image and Vision Computing New Zealand p. 471-476
- Full Text:
- Reviewed:
- Description: In this paper, the height variation in LIDAR (Light Detection And Ranging) point cloud data and the point density are analyzed to remove false building detections in highly vegetated and hilly sites. In general, the LIDAR points in a tree area have higher height variations than those in a building area. Moreover, the density of points having similar height values is lower in a tree area than in a building area. The proposed method uses this information as an improvement to a current state-of-the-art building detection method. Qualitative and object-based quantitative analyses have been performed to verify the effectiveness of the proposed building detection method compared with a current method. The analysis shows that the proposed method successfully reduces false building detections (i.e., trees in highly complex sites in Australia and Germany), and the average correctness and quality are improved by 6.36% and 6.16%, respectively.
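The two cues in the abstract, height variation and the density of similar-height points, can be combined into a simple per-cell test. The thresholds and function name below are hypothetical illustrations, not the paper's calibrated method.

```python
from statistics import pstdev

def is_building_cell(heights, max_std=0.3, min_density=0.6, tol=0.2):
    # building cells show low height variation and a high density of
    # points near the dominant height; tree cells tend to fail both tests
    if not heights:
        return False
    if pstdev(heights) > max_std:          # cue 1: height variation
        return False
    dominant = sorted(heights)[len(heights) // 2]
    near = sum(1 for h in heights if abs(h - dominant) <= tol)
    return near / len(heights) >= min_density   # cue 2: point density
```

A flat roof yields tightly clustered point heights and passes both tests, while canopy returns scatter across many heights and are rejected, which is how the cues suppress tree false positives.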
An efficient video coding technique using a novel non-parametric background model
- Chakraborty, Subrata, Paul, Manoranjan, Murshed, Manzur, Ali, Mortuza
- Authors: Chakraborty, Subrata , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 2014 IEEE International Conference on Multimedia and Expo Workshops, ICMEW 2014; Chengdu; China; 14th-18th July 2014 p. 1-6
- Full Text:
- Reviewed:
- Description: A video coding technique with a background frame, extracted from mixture-of-Gaussians (MoG) based background modeling, provides better rate-distortion performance than the latest video coding standard by exploiting coding efficiency in uncovered background areas. However, it suffers from high computation time, low coding efficiency for dynamic videos, and a requirement for prior knowledge of the video content. In this paper, we present a novel adaptive weighted non-parametric (WNP) background modeling technique and successfully embed it into the HEVC video coding standard. Being non-parametric (NP), the proposed technique naturally exhibits superior performance in dynamic background scenarios compared to the MoG-based technique, without a priori knowledge of the video data distribution. In addition, the WNP technique significantly reduces the noise-related drawbacks of existing NP techniques, providing better quality video coding with much lower computation time, as demonstrated through extensive comparative studies against NP, MoG, and HEVC techniques.
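The flavour of a weighted non-parametric background test can be sketched per pixel: instead of fitting Gaussians, keep a small history of observed values with weights and classify a new value as background if enough weighted samples lie near it. All names, weights, and thresholds below are hypothetical; this is a generic WNP-style sketch, not the authors' adaptive technique.

```python
def is_background(pixel, samples, weights, radius=10, min_support=0.5):
    # weighted non-parametric test: sum the weights of stored samples
    # within `radius` of the new value and compare against a threshold
    support = sum(w for s, w in zip(samples, weights)
                  if abs(pixel - s) <= radius)
    return support / sum(weights) >= min_support

history = [100, 102, 98, 150]      # per-pixel intensity samples
conf = [1.0, 1.0, 1.0, 0.2]        # stable samples carry more weight
```

No distributional assumption is made about the samples, which is why such models cope with dynamic backgrounds better than a fixed mixture of Gaussians; the weights let noisy or stale samples contribute less.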
Progressive data stream mining and transaction classification for workload-aware incremental database repartitioning
- Kamal, Joarder, Murshed, Manzur, Gaber, Mohamed
- Authors: Kamal, Joarder , Murshed, Manzur , Gaber, Mohamed
- Date: 2014
- Type: Text , Conference proceedings
- Relation: IEEE/ACM International Symposium on Big Data Computing, BDC 2014; London, United Kingdom; 8th-11th December 2014; p. 8-15
- Full Text:
- Reviewed:
- Description: Minimising the impact of distributed transactions (DTs) in a shared-nothing distributed database is extremely challenging for transactional workloads. With a dynamic workload and rapid growth in data volume, the underlying database requires incremental repartitioning to maintain an acceptable level of DTs and data load balance with minimum physical data migrations. In a workload-aware repartitioning scheme, the transactional workload is modelled as a graph or hypergraph, and performing k-way min-cut clustering that guarantees minimum edge cuts can significantly reduce the impact of DTs by mapping the workload clusters into logical database partitions. However, without exploiting the inherent workload characteristics, the overall processing and computing times for large-scale workload networks grow polynomially. In this paper, a workload-aware incremental database repartitioning technique is proposed that effectively exploits proactive transaction classification and workload stream mining techniques. Workload batches are modelled as graphs, hypergraphs, and compressed hypergraphs, then repartitioned to produce a fresh tuple-to-partition data migration plan for every incremental cycle. Experimental studies in a simulated TPC-C environment demonstrate that the proposed model can be effectively adopted to manage rapid data growth and dynamic workloads, progressively reducing the overall processing time required to operate over the workload networks.
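The graph-modelling step described in this record can be sketched with a toy workload: tuples become vertices, co-access counts become edge weights, and a partitioning that cuts few edges yields few distributed transactions. The greedy two-way bisection below is a simplified stand-in for the k-way min-cut clustering the paper uses, and all transaction data are invented.

```python
from collections import defaultdict
from itertools import combinations

def workload_graph(transactions):
    """Vertices are tuples; edge weight = co-access count across transactions."""
    w = defaultdict(int)
    for txn in transactions:
        for a, b in combinations(sorted(set(txn)), 2):
            w[(a, b)] += 1
    return w

def greedy_bisect(nodes, w):
    """Tiny greedy stand-in for k-way min-cut clustering (here k = 2):
    place each tuple on the side holding more of its co-accessed tuples,
    with a size penalty to keep the data load balanced."""
    part, sizes = {}, [0, 0]
    for n in sorted(nodes):
        gain = [0, 0]
        for (a, b), wt in w.items():
            if n in (a, b):
                other = b if a == n else a
                if other in part:
                    gain[part[other]] += wt
        side = 0 if gain[0] - sizes[0] >= gain[1] - sizes[1] else 1
        part[n] = side
        sizes[side] += 1
    return part

def distributed_txn_count(transactions, part):
    """Transactions whose tuples span more than one partition are DTs."""
    return sum(len({part[x] for x in txn}) > 1 for txn in transactions)
```

On a workload where tuples a/b and c/d are usually accessed together, the bisection co-locates each pair, leaving only the rare cross-pair transaction distributed.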
A machine vision based automatic optical inspection system for measuring drilling quality of printed circuit boards
- Wang, Wei, Chen, Shang-Liang, Chen, Liang-Bi, Chang, Wan-Jung
- Authors: Wang, Wei , Chen, Shang-Liang , Chen, Liang-Bi , Chang, Wan-Jung
- Date: 2017
- Type: Text , Journal article
- Relation: IEEE Access Vol. 5, no. (2017), p. 10817-10833
- Full Text:
- Reviewed:
- Description: In this paper, we develop and put into practice an automatic optical inspection (AOI) system based on machine vision to check the holes on a printed circuit board (PCB). The system integrates hardware and software. For the hardware, we combine a PC, a three-axis positioning system, a lighting device, and charge-coupled device cameras. For the software, we employ image registration, image segmentation, drill numbering, drill contrast, and defect displays. Results indicated that an accuracy of 5 μm could be achieved in measuring the errors of the PCB holes, allowing comparisons to be made. This is significant in inspecting missing, excessive, and incorrectly located holes. Previous work has focused on only one or another feature of the holes; our system assesses multiple features: missing holes, incorrectly located holes, and excessive holes. Equally, our results can be displayed as a bar chart and target plot, which has not been achieved before. These displays help users analyze the causes of errors and immediately correct the problems. In addition, this AOI system is valuable for checking a large number of holes and finding the defective ones on a PCB. Meanwhile, we apply a 0.1-mm image resolution, which is better than others used in industry. We set a detection standard based on a 2-mm circle diameter to diagnose the quality of the holes within 10 s.
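The hole-measurement step in this record can be illustrated with a minimal connected-component sketch at the 0.1 mm/pixel resolution the abstract mentions: segment the binary image into components, take each component's equivalent-area diameter, and flag expected holes with no nearby detection as missing. The synthetic image, tolerances, and helper names are assumptions for illustration, not the paper's algorithm.

```python
from collections import deque

MM_PER_PIXEL = 0.1   # image resolution cited in the abstract
PI = 3.141592653589793

def components(img):
    """4-connected components of a binary image; each is a set of (y, x)."""
    h, w = len(img), len(img[0])
    seen, comps = set(), []
    for y in range(h):
        for x in range(w):
            if img[y][x] and (y, x) not in seen:
                queue, comp = deque([(y, x)]), set()
                seen.add((y, x))
                while queue:
                    cy, cx = queue.popleft()
                    comp.add((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] \
                                and (ny, nx) not in seen:
                            seen.add((ny, nx))
                            queue.append((ny, nx))
                comps.append(comp)
    return comps

def equivalent_diameter_mm(comp):
    """Diameter of the circle with the same pixel area as the component."""
    return 2.0 * (len(comp) / PI) ** 0.5 * MM_PER_PIXEL

def missing_holes(expected_mm, comps):
    """Expected hole centres (mm) with no detected centroid within 0.5 mm."""
    cents = [(sum(p[0] for p in c) / len(c) * MM_PER_PIXEL,
              sum(p[1] for p in c) / len(c) * MM_PER_PIXEL) for c in comps]
    return [(ey, ex) for ey, ex in expected_mm
            if not any((cy - ey) ** 2 + (cx - ex) ** 2 <= 0.25
                       for cy, cx in cents)]

# Synthetic 2 mm drill hole: radius 10 px at 0.1 mm/px, centred at (20, 20)
size, cy0, cx0, r = 40, 20, 20, 10
img = [[1 if (x - cx0) ** 2 + (y - cy0) ** 2 <= r * r else 0
        for x in range(size)] for y in range(size)]
holes = components(img)
```

Measuring the single detected component recovers a diameter close to the 2 mm standard, and an expected hole at an empty location is reported missing.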
Power transaction management amongst coupled microgrids in remote areas
- Batool, Munira, Islam, Syed, Shahnia, Farhad
- Authors: Batool, Munira , Islam, Syed , Shahnia, Farhad
- Date: 2017
- Type: Text , Conference proceedings , Conference paper
- Relation: 7th IEEE Innovative Smart Grid Technologies - Asia, ISGT-Asia 2017;Auckland, New Zealand; 4th-7th December 2017 p. 1-6
- Full Text:
- Reviewed:
- Description: Large remote areas normally have isolated and self-sufficient electricity supply systems, often referred to as microgrids. These systems also rely on a mix of dispatchable and non-dispatchable distributed energy resources to reduce the overall cost of electricity production. Emergencies such as shortfalls, overloading, and faults can cause problems in the operation of these remote-area microgrids. This paper presents a power transaction management scheme amongst a few such microgrids when they are coupled provisionally during emergencies. By definition, a power transaction is an instance of buying and selling of electricity between problem and healthy microgrids. The developed technique aims to define the suitable power generation from all dispatchable sources and regulate the power transaction amongst the coupled microgrids. To this end, an optimization problem is formulated that defines the above parameters while minimizing the costs and technical impacts. A mixed-integer linear programming technique is used to solve the formulated problem. The performance of the proposed management strategy is evaluated by numerical analysis in MATLAB.
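The flavour of the optimization described in this record can be sketched as a continuous (LP) relaxation of a two-microgrid transaction problem: dispatch local generation, buy the shortfall over the provisional coupling, and minimise total cost. The costs, demands, and limits below are invented numbers for illustration, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Variables: g1, g2 (dispatchable generation in MG1, MG2) and x (power
# bought by the overloaded MG1 from MG2 over the provisional coupling).
c = [2.0, 3.0, 0.5]           # cost/unit: g1, g2, transfer cost of x

A_eq = [[1, 0, 1],            # MG1 balance: g1 + x = demand1
        [0, 1, -1]]           # MG2 balance: g2 - x = demand2
b_eq = [10.0, 5.0]

bounds = [(0, 6),             # MG1 capacity (hence the overload)
          (0, 12),            # MG2 capacity
          (0, 5)]             # tie-line limit on the transaction

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
g1, g2, x = res.x
```

Because MG1's own generation is cheapest, the solver runs it at capacity (g1 = 6) and buys the remaining 4 units from MG2, which must then generate 9 units; swapping MILP integrality constraints in (e.g. unit on/off states) is what distinguishes the paper's formulation from this relaxation.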
Master control unit based power exchange strategy for interconnected microgrids
- Batool, Munira, Islam, Syed, Shahnia, Farhad
- Authors: Batool, Munira , Islam, Syed , Shahnia, Farhad
- Date: 2017
- Type: Text , Conference proceedings , Conference paper
- Relation: 2017 Australasian Universities Power Engineering Conference, AUPEC 2017; Melbourne, Australia; 19th-22nd November 2017 Vol. 2017, p. 1-6
- Full Text:
- Reviewed:
- Description: Large remote-area networks normally have self-sufficient electricity systems. These systems also rely on non-dispatchable DGs (N-DGs) to reduce the overall cost of electricity production. Uncertainties in the nature of N-DGs, as well as in load demand, can impose a cost burden on islanded microgrids (MGs). This paper proposes a power exchange strategy for an interconnected MGs (IMG) system, as part of a large remote-area network, with optimized control of the dispatchable DGs (D-DGs) that are members of a master control unit (MCU). The MCU analysis applies the equal cost-increment principle to estimate the amount of power exchange that could take place with neighbouring MGs in an overloading situation. Sudden changes in N-DGs and load, defined as interruptions, are also part of the analysis. The optimization problem is formulated on the basis of the MCU adjustment for overloading or underloading situations and the suitability of a support MG (S-MG) in the IMG system for power exchange, with the key features of low cost and minimum technical impacts. A mixed integer linear programming (MILP) technique is applied to solve the formulated problem. The impact of the proposed strategy is assessed by numerical analysis in MATLAB under a stochastic environment.
Improved method to obtain the online impulse frequency response signature of a power transformer by multi scale complex CWT
- Zhao, Zhongyong, Tang, Chao, Yao, Chenguo, Zhou, Qu, Xu, Lingna, Gui, Yingang, Islam, Syed
- Authors: Zhao, Zhongyong , Tang, Chao , Yao, Chenguo , Zhou, Qu , Xu, Lingna , Gui, Yingang , Islam, Syed
- Date: 2018
- Type: Text , Journal article
- Relation: IEEE Access Vol. 6, no. (2018), p. 48934-48945
- Full Text:
- Reviewed:
- Description: Online impulse frequency response analysis (IFRA) has proven to be a promising method for detecting and diagnosing transformer winding mechanical faults while the transformer is in service. However, the commonly used fast Fourier transform (FFT) is not well suited to processing the transient signals in online IFRA, and field test results show that the IFRA signature obtained by FFT is easily distorted by noise. An improved method to obtain the online IFRA signature based on a multi-scale complex continuous wavelet transform is proposed. Electrical model simulation and an online experiment indicate the superiority of the wavelet transform over the FFT. This paper provides guidance on the practical application of the online IFRA method.
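A minimal complex Morlet CWT, applied to a synthetic decaying oscillation standing in for an impulse response, illustrates why a multi-scale scalogram localises the dominant frequency content of a transient better than a single global FFT. The wavelet parameters and test signal below are assumptions for illustration, not the paper's data.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet, L2-normalised per scale."""
    x = t / scale
    return np.pi ** -0.25 * np.exp(1j * w0 * x - 0.5 * x * x) / np.sqrt(scale)

def cwt(signal, scales):
    """Continuous wavelet transform by direct correlation at each scale."""
    n = len(signal)
    t = np.arange(n) - n // 2
    rows = []
    for s in scales:
        w = morlet(t, s)
        rows.append(np.convolve(signal, np.conj(w[::-1]), mode="same"))
    return np.array(rows)

# Decaying oscillation standing in for an impulse response
# (frequency 0.05 cycles/sample, exponential envelope)
n = 512
ts = np.arange(n)
sig = np.exp(-ts / 200.0) * np.sin(2 * np.pi * 0.05 * ts)

scales = np.arange(5, 40)
coef = cwt(sig, scales)
best = scales[np.argmax(np.sum(np.abs(coef) ** 2, axis=1))]
```

For a Morlet wavelet with centre frequency w0 = 6, the scalogram energy should peak near the scale s ≈ w0 / (2π·0.05) ≈ 19, which is how a winding-fault signature can be read off scale-by-scale despite the transient, noisy nature of the online impulse.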
Investigation of microgrid instability caused by time delay
- Aghanoori, Navid, Masoum, Mohammad, Islam, Syed, Nethery, Steven
- Authors: Aghanoori, Navid , Masoum, Mohammad , Islam, Syed , Nethery, Steven
- Date: 2017
- Type: Text , Conference proceedings , Conference paper
- Relation: 10th International Conference on Electrical and Electronics Engineering, ELECO 2017; Bursa, Turkey; 29th-2nd December 2017 Vol. 2018, p. 105-110
- Full Text:
- Reviewed:
- Description: This paper investigates the impact of time delay in the control of a grid-connected microgrid with renewable energy resources. The considered microgrid has a critical load that needs to be powered and protected in the event of a grid voltage disturbance while the microgrid maintains its connection to the grid. Three case studies are performed, considering three different time delays, to indicate the advantages of a fast communication system in the performance of renewable microgrids. Detailed simulation results illustrate that the proposed communication system, using the IEC 61850 substation automation standard, provides better voltage and current quality to the critical local load, with larger phase and gain margins, while keeping the microgrid connected to the main grid.
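The basic mechanism behind delay-induced instability can be shown numerically: a pure delay tau subtracts wc·tau radians of phase at the gain-crossover frequency wc, eroding the phase margin without changing the gain plot. The loop gain and delay below are invented values for illustration, not the paper's microgrid model.

```python
import numpy as np

# Idealised open loop L(s) = K/s standing in for an inverter control loop
K, tau = 10.0, 0.05                              # tau in seconds (assumed)

w = np.linspace(1.0, 100.0, 9901)                # rad/s grid, 0.01 step
mag = K / w                                      # |L(jw)| for L = K/s
wc = w[np.argmin(np.abs(mag - 1.0))]             # gain-crossover frequency

pm_no_delay = 180.0 - 90.0                       # angle of K/(jw) is -90 deg
pm_delay = pm_no_delay - np.degrees(wc * tau)    # delay removes wc*tau rad
```

With these numbers the 50 ms delay erodes roughly 29 degrees of phase margin (90° down to about 61°), illustrating why the slower communication cases in the paper's study degrade stability margins.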
Identification of coherent generators by support vector clustering with an embedding strategy
- Babaei, Mehdi, Muyeen, S., Islam, Syed
- Authors: Babaei, Mehdi , Muyeen, S. , Islam, Syed
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 105420-105431
- Full Text:
- Reviewed:
- Description: Identification of coherent generators (CGs) is necessary for the area-based monitoring and protection of a wide-area power system. Synchrophasors have enabled smarter monitoring and control measures to be devised; hence, measurement-based methodologies can be implemented in online applications to identify the CGs. This paper presents a new framework for coherency identification based on the dynamic coupling of generators. A distance matrix containing the dissimilarity indices between every pair of generators is constructed from their pairwise dynamic coupling after the post-disturbance data are obtained by phasor measurement units (PMUs). The dataset is embedded in Euclidean space to produce a new dataset with a metric distance between the points, and the support vector clustering (SVC) technique is then applied to the embedded dataset to identify the final clusters of generators. Unlike other clustering methods that need a priori knowledge of the number of clusters or the clustering parameters, this information is determined by an automatic search procedure that yields the optimal number of clusters. The algorithm is verified by time-domain simulations of defined scenarios in the 39-bus and 118-bus test systems. Finally, the clustering result for the 39-bus system is validated by cluster validity measures, and a comparative study investigates the efficacy of the proposed algorithm in clustering the generators with an optimal number of clusters, as well as its computational efficiency compared with other clustering methods.
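The embedding step this record describes can be sketched with classical (Torgerson) MDS followed by a simple single-linkage grouping; this is a simplified stand-in for the paper's support vector clustering, and the dissimilarity matrix is invented (two coherent pairs of generators, far apart from each other).

```python
import numpy as np

def classical_mds(D, k=3):
    """Embed a symmetric dissimilarity matrix into R^k (Torgerson MDS)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:k]         # largest eigenvalues first
    L = np.clip(vals[idx], 0.0, None)
    return vecs[:, idx] * np.sqrt(L)

def cluster_by_radius(X, radius):
    """Group points whose embedded distance is below `radius` (single link)."""
    n = len(X)
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(X[i] - X[j]) < radius:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), set()).add(i)
    return sorted(map(frozenset, groups.values()), key=min)

# Hypothetical dissimilarities between 4 generators: {0,1} and {2,3} coherent
D = np.array([[0, 1, 5, 5],
              [1, 0, 5, 5],
              [5, 5, 0, 1],
              [5, 5, 1, 0]], float)
emb = classical_mds(D)
clusters = cluster_by_radius(emb, radius=2.0)
```

The embedding turns the raw dissimilarity indices into metric coordinates on which any distance-based clusterer (here a radius rule; SVC in the paper) recovers the two coherent groups.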
Classifying transformer winding deformation fault types and degrees using FRA based on support vector machine
- Liu, Jiangnan, Zhao, Zhongyong, Tang, Chao, Yao, Chenguo, Li, Chengxiang, Islam, Syed
- Authors: Liu, Jiangnan , Zhao, Zhongyong , Tang, Chao , Yao, Chenguo , Li, Chengxiang , Islam, Syed
- Date: 2019
- Type: Text , Journal article
- Relation: IEEE Access Vol. 7, no. (2019), p. 112494-112504
- Full Text:
- Reviewed:
- Description: As an important part of the power system, the power transformer plays an irreplaceable role in power transmission, and diagnosing transformer failures is essential to maintaining safe and stable operation. Frequency response analysis (FRA) has been widely accepted as an effective tool for diagnosing winding deformation, one of the most common failures of power transformers. However, there is still no standard and reliable code for FRA interpretation. In this paper, a support vector machine (SVM) is combined with FRA to diagnose transformer faults, and advanced optimization algorithms are applied to improve the performance of the models. A series of winding-fault emulation experiments was carried out on an actual model transformer; key features were extracted from the measured FRA data, and a diagnostic model was trained to classify the types and degrees of winding deformation faults with satisfactory accuracy. The diagnostic results indicate that this method has the potential to be an intelligent, standardized, accurate, and powerful tool.
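A linear SVM on two hypothetical FRA-derived features can be sketched with the Pegasos sub-gradient method; this stands in for the paper's SVM-plus-optimization pipeline, and the feature values (e.g. a correlation-coefficient shift and a spectrum deviation per measurement) are invented.

```python
import numpy as np

def pegasos_train(X, y, lam=0.1, iters=200):
    """Linear SVM trained with the deterministic Pegasos sub-gradient
    method, cycling over the samples in order (no randomness)."""
    w = np.zeros(X.shape[1])
    for t in range(1, iters + 1):
        i = (t - 1) % len(X)
        eta = 1.0 / (lam * t)
        if y[i] * (w @ X[i]) < 1:                 # inside the margin
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                     # only regularise
            w = (1 - eta * lam) * w
    return w

# Hypothetical two-feature FRA signatures: +1 = healthy, -1 = deformed
X = np.array([[2.0, 1.5], [1.8, 2.2], [2.5, 2.0], [1.5, 1.8],
              [-2.0, -1.5], [-1.8, -2.2], [-2.5, -2.0], [-1.5, -1.8]])
y = np.array([1, 1, 1, 1, -1, -1, -1, -1])

w = pegasos_train(X, y)
pred = np.sign(X @ w)
```

On this separable toy set the trained hyperplane classifies every winding state correctly; the paper's contribution lies in the real FRA feature extraction and the optimization of kernel SVM hyperparameters, which this sketch does not reproduce.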
Performance evaluation of the dependable properties of a body area wireless sensor network
- Balasubramanian, Venki, Stranieri, Andrew
- Authors: Balasubramanian, Venki , Stranieri, Andrew
- Date: 2014
- Type: Text , Conference paper
- Relation: 2014 International Conference on Reliability, Optimization, & Information Technology (ICROIT 2014); Faridabad, India; 6th-8th February 2014 p. 229-234
- Full Text:
- Reviewed:
- Description: Body Area Wireless Sensor Networks (BAWSNs) are self-organizing networks capable of monitoring the health-intrinsic data of a patient. BAWSNs extended with a health care application can be used to perform medical assessments by remotely monitoring patients. The accuracy of these medical assessments fundamentally depends on the correctness of the data received from the BAWSN. However, data errors may arise at the sensor or during transmission across the wireless sensor network, so it is imperative to measure a patient's health-intrinsic data precisely. The measurable properties formulated in our work precisely quantify the performance of the BAWSN in a remote Healthcare Monitoring Application (HMA). In this paper, we collate performance measurements of these properties on our real-time test-bed and present a comprehensive evaluation of them in a BAWSN.