Collaboration success in the dataverse : Libraries as digital humanities research partners
- Owen, Sue, Verhoeven, Deb, Horn, Anne, Robertson, Sabina
- Authors: Owen, Sue , Verhoeven, Deb , Horn, Anne , Robertson, Sabina
- Date: 2014
- Type: Text , Conference proceedings , Conference paper
- Relation: 35th International Association of Scientific and Technological University Libraries Conference (IATUL 2014); Espoo, Finland; 2nd-5th June 2014 p. 1-9
- Full Text:
- Reviewed:
- Description: At Deakin, the Humanities Networked Infrastructure project (HuNI) has paved new ground for facilitating the effective use and re-use of humanities research data. HuNI is one of the first large-scale eResearch infrastructure projects for the humanities in Australia and the first national, cross-disciplinary Virtual Laboratory (VL) worldwide. HuNI provides new information infrastructure services for both humanities researchers and members of the public. Its development has been funded by the National eResearch Collaboration Tools and Resources project (NeCTAR) and undertaken by a consortium of thirteen institutions led by Deakin University. A Deakin University Library team with skills in data description, curation, retrieval and preservation is exploring, with digital humanities researchers and developers, effective means to support and maintain the HuNI project. HuNI ingests and aggregates data from a total of 31 different Australian cultural datasets which cover a wide range of disciplines in the humanities and creative arts. The HuNI VL also provides a number of online research capabilities for humanities researchers to discover and work with the large-scale aggregation of data. The HuNI VL enables researchers to create, save and publish selections of data; to analyse and manipulate the data; to share findings; and to export the data for reuse in external environments. In a major innovation, HuNI also enables researchers to assert relationships between entities in the form of ‘socially linked’ data. This capability contributes to the building of a ‘vernacular’ network of associations between HuNI records that embodies diverse perspectives on knowledge and opens avenues for research discovery beyond keyword and phrase searches. This paper reports on key milestones in the project, the future role of libraries as digital humanities research partners, and the challenges and sustainability issues that face national digital humanities research projects developed in strategic library settings.
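For readers unfamiliar with ‘socially linked’ data, a minimal Python sketch of how a user-asserted relationship between two entity records might be represented follows; the identifiers, field names and relation vocabulary are hypothetical illustrations, not HuNI's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical record of a user-asserted link between two entity records.
# All names are illustrative; HuNI's real data model is not reproduced here.
@dataclass
class SocialLink:
    source_id: str     # e.g. a person record
    relation: str      # e.g. "collaborated_with"
    target_id: str     # e.g. an organisation record
    asserted_by: str   # the researcher making the assertion
    asserted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

link = SocialLink("person/1234", "collaborated_with", "org/5678", "researcher/owen")
print(link)   # one edge in the 'vernacular' network of associations
```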
Principles and guidelines for Australian higher education Libraries : Capturing value
- Owen, Sue, Peasley, Jennifer, Paton, Barbara
- Authors: Owen, Sue , Peasley, Jennifer , Paton, Barbara
- Date: 2017
- Type: Text , Conference proceedings , Conference paper
- Relation: Second Annual TEQSA Conference; Melbourne, Australia; 29th November-1st December 2017 p. 146-158
- Full Text:
- Reviewed:
- Description: Reflecting on their time at university through an affinity survey, many Monash University alumni reported a strong affinity with their university library. Their Library! What makes that connection so strong? Aligning with institutional priorities and higher education standards, academic librarians have long partnered with faculties and divisions, conferred with research centres and liaised with student groups to augment university outcomes. However, tools for crystallising library value are less advanced. This paper introduces a new framework, Principles and Guidelines for Australian higher education libraries (2016), whose purpose is to describe and assess the contribution of libraries to academic and research endeavour. It articulates library value through major strategic priorities, each with high-level value statements (Principles) and a suite of associated Guidelines. The framework marks a new generation of library value and impact tools. By coupling the framework with associated performance indicators, library directors and stakeholders can be better informed of library value.
Improving deep forest by confidence screening
- Pang, Ming, Ting, Kaiming, Zhao, Peng, Zhou, Zhi-Hua
- Authors: Pang, Ming , Ting, Kaiming , Zhao, Peng , Zhou, Zhi-Hua
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 2018 IEEE International Conference on Data Mining; Singapore, Singapore; 17th-20th November 2018 p. 1194-1199
- Full Text:
- Reviewed:
- Description: Most studies of deep learning are based on neural network models, in which many layers of parameterized nonlinear differentiable modules are trained by backpropagation. Recently, it has been shown that deep learning can also be realized by non-differentiable modules trained without backpropagation, in a model called deep forest. Its representation learning process is based on a cascade of cascades of decision tree forests, where the high memory requirement and time cost inhibit the training of large models. In this paper, we propose a simple yet effective approach to improve the efficiency of deep forest. The key idea is to pass instances with high confidence directly to the final stage rather than through all the levels. We also provide a theoretical analysis suggesting a means to vary the model complexity from low to high as the level increases in the cascade, which further reduces the memory requirement and time cost. Our experiments show that the proposed approach achieves highly competitive predictive performance while reducing time cost and memory requirement by up to one order of magnitude.
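To make the screening idea concrete, here is a minimal Python sketch under our own simplifying assumptions (each cascade level is a plain random forest on the raw features, and a fixed probability threshold decides who exits early) of passing high-confidence instances directly to the final prediction:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def cascade_predict(levels, X, threshold=0.9):
    """Instances whose top class probability >= threshold exit the cascade."""
    n = X.shape[0]
    final = np.full(n, -1)
    active = np.arange(n)                  # instances still in the cascade
    for i, level in enumerate(levels):
        proba = level.predict_proba(X[active])
        conf = proba.max(axis=1)
        done = (conf >= threshold) | (i == len(levels) - 1)
        final[active[done]] = proba[done].argmax(axis=1)
        active = active[~done]             # only low-confidence instances go on
        if active.size == 0:
            break
    return final

# Usage on toy data: one forest per cascade level.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5)); y = (X[:, 0] > 0).astype(int)
levels = [RandomForestClassifier(n_estimators=20, random_state=k).fit(X, y)
          for k in range(3)]
print(cascade_predict(levels, X)[:10])
```

The real deep forest also augments each level's input with the previous level's class vectors; that step is omitted here for brevity.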
Joint texture and depth coding using cuboid data compression
- Paul, Manoranjan, Chakraborty, Subrata, Murshed, Manzur, Podder, Pallab
- Authors: Paul, Manoranjan , Chakraborty, Subrata , Murshed, Manzur , Podder, Pallab
- Date: 2015
- Type: Text , Conference proceedings
- Relation: 2015 18th International Conference on Computer and Information Technology (ICCIT); Dhaka, Bangladesh; 21st-23rd December 2015 p. 138-143
- Full Text:
- Reviewed:
- Description: The latest multiview video coding (MVC) standards, such as 3D-HEVC and H.264/MVC, normally encode texture and depth videos separately. Significant rate-distortion and computational performance is sacrificed because separate encoding does not exploit the joint information between the two. Separate encoding also creates a synchronization issue for 3D scene formation in the decoder. Moreover, the hierarchical frame referencing architecture in MVC introduces random access frame delay. In this paper we develop an encoder and decoder framework that encodes texture and depth video jointly by forming and encoding a 3D cuboid using high-dimensional entropy coding. Our experimental results show that the proposed framework outperforms 3D-HEVC in rate-distortion performance and reduces computational time significantly by reducing random access frame delay.
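As a minimal illustration of the joint representation (our sketch, not the paper's codec), texture and depth frames can be stacked into one cuboid so a single coder sees both components of a block at once:

```python
import numpy as np

def make_cuboid(texture, depth):
    """texture: (H, W) luma frame; depth: (H, W) depth map -> (H, W, 2) cuboid."""
    return np.stack([texture, depth], axis=-1)

H, W = 64, 64
texture = np.random.randint(0, 256, (H, W), dtype=np.uint8)
depth = np.random.randint(0, 256, (H, W), dtype=np.uint8)
cuboid = make_cuboid(texture, depth)
print(cuboid.shape)   # (64, 64, 2): one unit now carries both components,
                      # so a single entropy coder can exploit their correlation
```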
Improved image analysis methodology for detecting changes in evidence positioning at crime scenes
- Petty, Mark, Teng, Shyh, Murshed, Manzur
- Authors: Petty, Mark , Teng, Shyh , Murshed, Manzur
- Date: 2019
- Type: Text , Conference proceedings , Conference paper
- Relation: 2019 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2019
- Full Text:
- Reviewed:
- Description: This paper proposes an improved methodology to assist forensic investigators in detecting positional change of objects due to crime scene contamination, which can occur intentionally or by accident during the investigation and documentation process. The proposed methodology utilises an ASIFT-based feature detection algorithm that compares pre- and post-contamination images of the same scene taken from different viewpoints. The contention is that the ASIFT registration technique is better suited to real-world crime scene photography, being more robust to the affine distortion that occurs when capturing images from different viewpoints. The methodology was tested with both the SIFT and ASIFT registration techniques to show that (1) it can identify missing, planted and displaced objects using both SIFT and ASIFT, and (2) ASIFT is superior to SIFT in terms of error in displacement estimation, especially for larger viewpoint discrepancies between the pre- and post-contamination images. This supports the contention that our proposed methodology in combination with ASIFT is better suited to real-world crime scene photography. © 2019 IEEE.
- Description: E1
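A minimal sketch of the comparison step follows, using OpenCV's plain SIFT as a stand-in (ASIFT adds simulated affine views on top of SIFT and is not part of core OpenCV); matched keypoints whose positions disagree between the two photographs suggest a displaced object:

```python
import cv2
import numpy as np

def match_displacements(img_pre, img_post, ratio=0.75):
    """Return per-keypoint displacement vectors between two grayscale images."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_pre, None)
    kp2, des2 = sift.detectAndCompute(img_post, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return np.array([np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt)
                     for m in good])

# Usage (file names are placeholders):
# pre = cv2.imread("scene_pre.png", cv2.IMREAD_GRAYSCALE)
# post = cv2.imread("scene_post.png", cv2.IMREAD_GRAYSCALE)
# d = match_displacements(pre, post)
# print(np.linalg.norm(d, axis=1))  # large norms flag possible contamination
```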
QMET : A new quality assessment metric for no-reference video coding by using human eye traversal
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 International Conference on Image and Vision Computing New Zealand, IVCNZ 2016; Palmerston North, New Zealand; 21st-22nd November 2016 p. 1-6
- Full Text:
- Reviewed:
- Description: Subjective quality assessment (SQA) is in ever-growing demand because of its deep connection to human cognition. Adding a no-reference scheme could equip SQA techniques to tackle further challenges. The widely used objective metrics, peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), and the subjective estimator, mean opinion score (MOS), require the original image for quality evaluation, which limits their use when no reference is available. In this work, we present a no-reference SQA technique that could be an effective substitute for reference-based approaches to quality evaluation. The High Efficiency Video Coding (HEVC) reference test model (HM15.0) is first used to generate five different qualities of the eight HEVC-recommended classes of sequences. To assess different aspects of coded video quality, a group of ten participants is employed, and their eye-tracker (ET) recorded data demonstrate closer correlation among gaze plots for relatively better-quality video content. We therefore calculate the amount of approximation of smooth eye traversal (ASET) using distance, angle, and pupil-size features from the recorded gaze trajectory data and develop a new quality metric based on eye traversal (QMET). Experimental results show that the quality evaluation carried out by QMET is highly correlated with the HM-recommended coding quality. The performance of QMET is also compared with the PSNR and SSIM metrics to justify their relative effectiveness.
- Description: International Conference on Image and Vision Computing New Zealand
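A minimal Python sketch of extracting the three named gaze features from a recorded trajectory follows; the aggregation into single numbers is our simplification, not the paper's exact ASET formula:

```python
import numpy as np

def gaze_features(xs, ys, pupil):
    """Distance, angle and pupil-size features from one gaze trajectory."""
    dx, dy = np.diff(xs), np.diff(ys)
    dist = np.hypot(dx, dy)                    # saccade lengths
    ang = np.abs(np.diff(np.arctan2(dy, dx)))  # turning between saccades
    ang = np.minimum(ang, 2 * np.pi - ang)     # wrap to [0, pi]
    return dist.mean(), ang.mean(), pupil.std()

# Smooth traversal (short, straight moves, steady pupil) should indicate
# better perceived quality; usage on a synthetic circular trajectory:
t = np.linspace(0, 4 * np.pi, 200)
print(gaze_features(np.cos(t), np.sin(t), np.full(200, 3.5)))
```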
Fast coding strategy for HEVC by motion features and saliency applied on difference between successive image blocks
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2015
- Type: Text , Conference proceedings
- Relation: Pacific-Rim Symposium on Image and Video Technology (PSIVT); Auckland, New Zealand; 23rd-27th November 2015; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 9431, p. 175-186
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: Introducing a number of innovative and powerful coding tools, the High Efficiency Video Coding (HEVC) standard promises double the compression efficiency of its predecessor H.264 with similar perceptual quality. The increased computational time complexity, however, is an important issue for the video coding research community. This paper attempts to reduce that complexity through efficient selection of appropriate block-partitioning modes, based on motion features and saliency applied to the difference between successive image blocks. As this difference exposes the visible motion and salient information explicitly, we develop a cost function combining the motion features with the salient features of the image difference. The combined features are then converted into an area-of-interest (AOI) based binary pattern for the current block. This pattern is compared with a previously defined codebook of binary pattern templates to select a subset of modes. Motion estimation (ME) and motion compensation (MC) are performed only on the selected subset, without exhaustive exploration of all modes available in HEVC. The experimental results reveal a 42% reduction in HEVC encoding time with similar subjective and objective image quality.
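To illustrate the pattern-matching step (with our own toy templates and threshold, not the paper's trained codebook), a combined motion/saliency cost map can be binarised and matched against templates as follows:

```python
import numpy as np

def aoi_pattern(cost_map, thresh=None):
    """Binarise a combined motion/saliency map into an AOI pattern."""
    thresh = cost_map.mean() if thresh is None else thresh
    return (cost_map > thresh).astype(np.uint8)

def best_template(pattern, codebook):
    """Index of the codebook template with the highest pixel agreement."""
    return int(np.argmax([(pattern == tpl).mean() for tpl in codebook]))

codebook = [np.zeros((8, 8), np.uint8),          # static block
            np.triu(np.ones((8, 8), np.uint8)),  # diagonal motion edge
            np.ones((8, 8), np.uint8)]           # whole block moving
cost = np.random.rand(8, 8)
idx = best_template(aoi_pattern(cost), codebook)
print("template (and hence candidate mode subset):", idx)
```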
A novel no-reference subjective quality metric for free viewpoint video using human eye movement
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2018
- Type: Text , Conference proceedings
- Relation: 8th Pacific-Rim Symposium on Image and Video Technology, PSIVT 2017; Wuhan, China; 20th-24th November 2017; published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Vol. 10749 LNCS, p. 237-251
- Full Text:
- Reviewed:
- Description: Free viewpoint video (FVV) allows users to interactively control the viewpoint and generate new views of a dynamic scene from any 3D position for a better 3D visual experience with depth perception. Multiview video coding exploits both texture and depth video information from various angles to encode a number of views to facilitate FVV. Single-view or multiview quality assessment usually relies on objective metrics, such as the peak signal-to-noise ratio (PSNR) or the structural similarity index (SSIM), because of their simplicity and suitability for real-time applications. However, PSNR and SSIM require a reference image for quality evaluation and cannot be employed for FVV, as a newly synthesized view has no reference view to compare against. Conversely, the widely used subjective estimator, mean opinion score (MOS), is often biased by the testing environment, viewers' mood, domain knowledge, and many other factors that may actively influence the actual assessment. To address this limitation, we devise a no-reference subjective quality assessment metric by exploiting the pattern of human eye browsing on FVV. Over FVV content of different qualities, the participants' eye-tracker-recorded spatio-temporal gaze data indicate a more concentrated eye-traversal pattern for relatively better quality. We calculate Length, Angle, Pupil-size, and Gaze-duration features from the recorded gaze trajectory. A content- and resolution-invariant operation is carried out before synthesizing the features with an adaptive weighted function to develop a new quality metric using eye traversal (QMET). Test results reveal that the proposed QMET performs better than SSIM and MOS in assessing different aspects of coded video quality for a wide range of FVV content.
- Description: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
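As a rough sketch of the synthesis step, the four normalised gaze features can be blended with a weighted function into one score; the weights and sign conventions below are our assumptions, not the paper's adaptive function:

```python
def qmet_score(length, angle, pupil, duration, w=(0.3, 0.3, 0.2, 0.2)):
    """Each feature is assumed pre-normalised to [0, 1]."""
    # Shorter saccades, smaller turning angles and a steadier pupil indicate
    # smoother traversal (higher quality); longer gaze duration on the
    # content likewise counts positively.
    terms = (1 - length, 1 - angle, 1 - pupil, duration)
    return float(sum(wi * ti for wi, ti in zip(w, terms)))

print(qmet_score(length=0.2, angle=0.1, pupil=0.05, duration=0.8))
```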
Fast intermode selection for HEVC video coding using phase correlation
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur, Chakraborty, Subrata
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur , Chakraborty, Subrata
- Date: 2015
- Type: Text , Conference proceedings , Conference paper
- Relation: 2014 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2014; Wollongong, Australia; 25th-27th November 2014 p. 1-8
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: The recent High Efficiency Video Coding (HEVC) standard demonstrates higher rate-distortion (RD) performance than its predecessor H.264/AVC using a range of new tools, especially larger and asymmetric inter-mode variable-size motion estimation and compensation. This requires more than four times the computational time of H.264/AVC. Reducing this time while maintaining standard video quality has therefore been a major concern for researchers, and smart selection of the appropriate modes in HEVC is our motivation. To accomplish this, we use phase correlation to approximate the motion information between current and reference blocks, compare it with a number of different binary pattern templates, and then select a subset of motion estimation modes without exhaustively exploring all possible modes. The experimental results show that the proposed HEVC-PC (HEVC with Phase Correlation) scheme outperforms the standard HEVC scheme in computational time while preserving the same quality of the video sequences. More specifically, around 40% of encoding time is saved compared to exhaustive mode selection in HEVC. © 2014 IEEE.
- Description: 2014 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2014
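Phase correlation itself is standard: the normalised cross-power spectrum of two blocks has an inverse FFT that peaks at their relative displacement. A minimal self-contained sketch (block size and test shift are arbitrary):

```python
import numpy as np

def phase_correlation_shift(ref, cur):
    """Dominant (dy, dx) translation of cur relative to ref."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(cur)
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12              # keep phase only
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape                            # wrap to signed shifts
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)

# Usage: shift a random block by (3, -2) and recover the displacement.
ref = np.random.rand(32, 32)
cur = np.roll(ref, shift=(3, -2), axis=(0, 1))
print(phase_correlation_shift(ref, cur))        # -> (3, -2)
```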
A novel quality metric using spatiotemporal correlational data of human eye maneuver
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2017
- Type: Text , Conference proceedings
- Relation: 2017 International Conference on Digital Image Computing : Techniques and Applications, DICTA 2017; Sydney, Australia; 29th November-1st December 2017 Vol. 2017-December, p. 1-8
- Full Text:
- Reviewed:
- Description: The popularly used subjective estimator, mean opinion score (MOS), is often biased by the testing environment, viewers' mood, domain expertise, and many other factors that may actively influence the actual assessment. We therefore devise a no-reference subjective quality assessment metric by exploiting the nature of human eye browsing on videos. The participants' eye-tracker-recorded gaze data indicate a more concentrated eye-traversal pattern for relatively better quality. We calculate Length, Angle, Pupil-size, and Gaze-duration features from the recorded gaze trajectory. A content- and resolution-invariant operation is carried out before synthesizing the features with an adaptive weighted function to develop a new quality metric using eye traversal (QMET). Test results reveal that the quality evaluation carried out by QMET demonstrates a strong correlation with the most widely used metrics: peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and MOS.
- Description: DICTA 2017 - 2017 International Conference on Digital Image Computing: Techniques and Applications
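The reported validation amounts to correlating QMET scores with established metrics across coded quality levels; a minimal sketch with made-up placeholder values (not the paper's measurements):

```python
import numpy as np

def pearson(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

qmet = [0.81, 0.74, 0.66, 0.58, 0.49]   # five coded quality levels (hypothetical)
psnr = [41.2, 38.9, 36.1, 33.4, 30.8]   # corresponding PSNR values (hypothetical)
print(pearson(qmet, psnr))              # close to 1 => strong agreement
```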
Efficient HEVC scheme using motion type categorization
- Podder, Pallab, Paul, Manoranjan, Murshed, Manzur
- Authors: Podder, Pallab , Paul, Manoranjan , Murshed, Manzur
- Date: 2014
- Type: Text , Conference proceedings
- Relation: 10th International Conference on emerging Networking EXperiments and Technologies (CoNEXT); Sydney, Australia; 2nd-5th December 2014; published in Proceedings of the 2014 Workshop on Design, Quality and Deployment of Adaptive Video Streaming p. 41-42
- Relation: http://purl.org/au-research/grants/arc/DP130103670
- Full Text:
- Reviewed:
- Description: The High Efficiency Video Coding (HEVC) standard introduces a number of innovative tools that reduce bit-rate by approximately 50% compared to its predecessor H.264/AVC at the same perceptual video quality, while increasing computational time several-fold. Reducing encoding time while preserving the expected video quality has become a real challenge for video transmission and streaming, especially on low-powered devices. Motion estimation (ME) and motion compensation (MC) using variable-size blocks (i.e., intermodes) require 60-80% of the total computational time. In this paper we propose a new, efficient intermode selection technique based on phase correlation and incorporate it into the HEVC framework to predict ME and MC modes, performing faster intermode selection based on three dissimilar motion types found in different videos. Instead of exploring all modes exhaustively, we select a subset of modes using the motion type, and the final mode is selected based on the Lagrangian cost function. The experimental results show that, compared to HEVC, the average computational time can be reduced by 34% while providing similar rate-distortion (RD) performance.
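The final mode decision named in the abstract is the standard Lagrangian rate-distortion choice: among the candidate subset picked by motion type, keep the mode minimising J = D + λR. A minimal sketch with illustrative numbers:

```python
def best_mode(candidates, lam=0.85):
    """candidates: (mode_name, distortion, rate_bits); minimise D + lam*R."""
    return min(candidates, key=lambda m: m[1] + lam * m[2])

subset = [("2Nx2N", 120.0, 96), ("2NxN", 104.0, 128), ("Nx2N", 101.0, 150)]
print(best_mode(subset))   # the mode with the lowest Lagrangian cost
```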
Research on EKF-based localization method of tracked mobile robot
- Qu, Junsuo, Zhang, Qipeng, Hou, Leichao, Zhang, Ruijun, Ting, Kaiming
- Authors: Qu, Junsuo , Zhang, Qipeng , Hou, Leichao , Zhang, Ruijun , Ting, Kaiming
- Date: 2017
- Type: Text , Conference proceedings
- Relation: 2nd International Conference on Computer Engineering, Information Science & Application Technology (ICCIA 2017); Wuhan, China; 8th-9th July 2017; published in ACSR-Advances in Computer Science Research series Vol. 74, p. 175-180
- Full Text:
- Reviewed:
- Description: To estimate the position and heading angle of a mobile robot precisely, a measurement-variable estimation model is proposed that adapts to any angle. The predicted value from odometry is fused with measurement data from multiple sensors using the Extended Kalman Filter (EKF), reducing the accumulated error incurred by using traditional odometry alone. The proposed model is verified by MATLAB simulation and experimental results.
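A minimal EKF sketch for a tracked robot pose (x, y, θ) follows: predict with a unicycle odometry model, then correct with an external position fix. The noise matrices and the position-only sensor are our illustrative assumptions, not the paper's exact measurement model:

```python
import numpy as np

def ekf_step(x, P, v, w, z, dt, Q, R):
    """One EKF predict/update cycle for state x = (x, y, theta)."""
    th = x[2]                                   # predict with odometry (v, w)
    x_pred = x + np.array([v * dt * np.cos(th), v * dt * np.sin(th), w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(th)],
                  [0, 1,  v * dt * np.cos(th)],
                  [0, 0, 1]])
    P_pred = F @ P @ F.T + Q
    H = np.array([[1, 0, 0], [0, 1, 0]])        # sensor gives (x, y) only
    y = z - H @ x_pred                          # innovation
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred

x, P = np.zeros(3), np.eye(3) * 0.1
Q, R = np.eye(3) * 1e-3, np.eye(2) * 1e-2
x, P = ekf_step(x, P, v=0.5, w=0.1, z=np.array([0.05, 0.0]), dt=0.1, Q=Q, R=R)
print(x)   # fused pose estimate after one cycle
```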
Multi-modal reliability analysis of slope stability
- Reale, Cormac, Gavin, Kenneth, Prendergast, Luke, Xue, Jianfeng
- Authors: Reale, Cormac , Gavin, Kenneth , Prendergast, Luke , Xue, Jianfeng
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 6th Transport Research Arena; Warsaw, Poland; 18th-21st April 2016; published in Transportation Research Procedia Vol. 14, p. 2468-2476
- Full Text:
- Reviewed:
- Description: Probabilistic slope stability analysis typically requires an optimisation technique to locate the most probable slip surface. However, for many slopes, particularly those containing many different soil layers or benches, several distinct critical slip surfaces may exist. Furthermore, in large slopes these critical slip surfaces may be located at significant distances from each other. In such circumstances, finding and rehabilitating the most probable failure surface is of little merit, as rehabilitating that surface does not improve the safety of the slope as a whole. Unfortunately, existing slip surface search techniques were developed to converge on one global minimum, so implementing such methods to evaluate the stability of a slope with multiple failure mechanisms requires the user to define probable slip locations prior to calculation. This demands extensive engineering experience and places undue responsibility on the engineer in question. This paper proposes the use of a locally informed particle swarm optimisation method which is able to converge simultaneously to multiple critical slip surfaces. This optimisation model, when combined with a reliability analysis, is able to define all areas of concern within a slope. A case study of a railway slope is presented which highlights the benefits of the model over single-objective optimisation models. The approach is of particular benefit when evaluating the stability of large existing slopes with complicated stratigraphy, as these slopes are likely to contain multiple viable slip surfaces. © 2016 The Authors.
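A minimal sketch of the "locally informed" idea: in a ring-topology particle swarm each particle follows the best of its immediate neighbours rather than one global best, so separate groups can settle on different local minima. The 1-D multimodal objective below merely stands in for a factor-of-safety evaluation over candidate slip surfaces:

```python
import numpy as np

def lipso(f, n=30, iters=200, lo=-10.0, hi=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n); v = np.zeros(n)
    pbest, pval = x.copy(), f(x)
    for _ in range(iters):
        # Best personal value among {left neighbour, self, right neighbour}.
        ring = np.stack([np.roll(pval, 1), pval, np.roll(pval, -1)])
        pick = np.argmin(ring, axis=0)
        nbest = np.choose(pick, [np.roll(pbest, 1), pbest, np.roll(pbest, -1)])
        r1, r2 = rng.random(n), rng.random(n)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (nbest - x)
        x = np.clip(x + v, lo, hi)
        better = f(x) < pval
        pbest[better], pval[better] = x[better], f(x)[better]
    return np.unique(np.round(pbest, 1))   # distinct minima the swarm found

f = lambda x: np.sin(x) + 0.1 * x ** 2     # several local minima
print(lipso(f))                            # typically more than one survivor
```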
Patient-empowered electronic health records
- Sahama, Tony, Stranieri, Andrew, Butler-Henderson, Kerryn
- Authors: Sahama, Tony , Stranieri, Andrew , Butler-Henderson, Kerryn
- Date: 2019
- Type: Text , Conference proceedings
- Relation: MEDINFO 2019: Health and Wellbeing e-Networks for All Vol. 264, p. 1765
- Full Text:
- Reviewed:
- Description: Electronic Health Records (EHRs) constitute evidence of online health information management, and critical healthcare information technology (HIT) infrastructure facilitates the health information exchange of 'modern' health systems. The growth and implementation of EHRs are progressing in many countries, while the adoption rate lags and lacks momentum amidst privacy and security concerns. This paper uses an interrupted time series (ITS) analysis of OECD data related to EHRs from many countries to make predictions about EHR adoption. The ITS model can be used to explore the impact of various HIT on adoption: assumptions about the impact of information accountability are entered into the model to generate projections should information accountability technologies be developed. In this way, the OECD data and ITS analysis can be used to run simulations for improving EHR adoption.
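A minimal interrupted-time-series sketch follows: segmented regression with a level change and a slope change at the interruption (e.g. the hypothetical introduction of information accountability technology). The yearly adoption figures are synthetic, not OECD data:

```python
import numpy as np
import statsmodels.api as sm

years = np.arange(2005, 2020)
t = years - years[0]
post = (years >= 2013).astype(float)            # hypothetical interruption year
t_post = np.where(post == 1, years - 2013, 0)   # time elapsed since interruption
rng = np.random.default_rng(1)
adoption = 20 + 2 * t + 8 * post + 1.5 * t_post + rng.normal(0, 1, len(t))

X = sm.add_constant(np.column_stack([t, post, t_post]))
fit = sm.OLS(adoption, X).fit()
print(fit.params)   # baseline level, pre-trend, level change, slope change
# Projections with/without the intervention follow from setting the post
# terms to their forecast values or to zero.
```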
An evaluation of emergency plans and procedures in fitness facilities in Australia: Implications for policy and practice
- Sekendiz, Betul, Norton, Kevin, Keyzer, Patrick, Dietrich, Joachim, Coyle, Ian, Jones, Veronica, Finch, Caroline
- Authors: Sekendiz, Betul , Norton, Kevin , Keyzer, Patrick , Dietrich, Joachim , Coyle, Ian , Jones, Veronica , Finch, Caroline
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: In 2007-08, fitness facilities contributed $872.9 million to the Australian economy and provided savings in direct health care costs estimated at up to $107.9 million through their positive impact on physical inactivity and associated diseases (1). In 2011-12, more than 4.3 million Australians participated in sport and physical recreation at indoor sports or fitness facilities (2). However, research across Queensland (3) and in Victoria (4) showed low compliance with emergency plans and safety practices in fitness facilities. The aim of this study was to analyse emergency plans and procedures in fitness facilities in Australia. A nationwide online risk management survey of fitness professionals (n=1178, mean age=39.9) and observational audits at randomly selected regional and metropolitan fitness facilities (n=11) in New South Wales, South Australia, Victoria and Queensland were conducted. The findings indicated that most fitness professionals (68.1%, n=640) rated the emergency evacuation plans and other emergency procedures in their facilities as extremely/very good. Yet more than one-quarter (27.4%) of fitness professionals were only somewhat aware (n=152) or very unaware/not at all aware (n=49) of the emergency evacuation plans and other emergency procedures in their facilities. The observational audits showed that most of the fitness facilities did not clearly display their emergency response plans (73%, n=8), emergency evacuation procedures (55%, n=6) or emergency telephone numbers (91%, n=10). Many fitness facilities (36.4%, n=4) did not have an appropriate first aid kit accessible by all staff. Our study shows a lack of emergency preparedness in many fitness facilities in Australia. Emergency response capability is crucial for fitness facility managers to satisfy their duty of care to manage the risks of medical emergencies and disasters such as fires, explosions and floods. Our study has implications for policy development and the education of fitness facility managers to improve emergency plans and procedures in fitness facilities in Australia.
Lossless hyperspectral image compression using binary tree based decomposition
- Shahriyar, Shampa, Paul, Manoranjan, Murshed, Manzur, Ali, Mortuza
- Authors: Shahriyar, Shampa , Paul, Manoranjan , Murshed, Manzur , Ali, Mortuza
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (Dicta); Gold Coast, Australia; 30th November-2nd December 2016 p. 428-435
- Full Text:
- Reviewed:
- Description: A hyperspectral (HS) image provides observational power beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store this huge volume of data, we argue that a fundamental shift is required from the existing "original pixel intensity"-based coding approaches using traditional image coders (e.g. JPEG) to "residual"-based approaches using a predictive coder that exploits band-wise correlation for better compression performance. Moreover, as HS images are used for detection or classification they need to remain in original form; lossy schemes can trim off apparently uninteresting data during compression that may be important for specific analysis purposes. A lossless HS coder is therefore required that exploits spatial-spectral redundancy using predictive residual coding. Each spectral band of an HS image can be treated as an individual frame of a video, allowing inter-band prediction. In this paper, we propose a binary tree based lossless predictive HS coding scheme that arranges the residual frame into an integer residual bitmap. High spatial correlation in the HS residual frame is exploited by creating large homogeneous blocks of adaptive size, which are then coded as a unit using context-based arithmetic coding. On the standard HS data set, the proposed lossless predictive coding achieves compression ratios in the range of 1.92 to 7.94. Compared with the mainstream lossless coders JPEG-LS, HEVC Intra and HEVC Main, the proposed technique reduces bit-rate by 35%, 40% and 6.79% respectively by exploiting spatial correlation in the predicted HS residuals.
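The premise that band-wise prediction pays off can be illustrated directly (our sketch; the binary-tree block decomposition itself is not reproduced): treat each band like a video frame and compare the entropy of raw intensities with that of inter-band residuals:

```python
import numpy as np

def entropy(a):
    """Empirical Shannon entropy in bits per sample."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
base = rng.integers(0, 4096, (64, 64))          # synthetic 12-bit band
cube = np.stack([base + rng.integers(-8, 9, (64, 64)) for _ in range(5)])

raw_bits = np.mean([entropy(b) for b in cube])
res_bits = np.mean([entropy(cube[i] - cube[i - 1]) for i in range(1, 5)])
print(f"bits/pixel raw: {raw_bits:.2f}, inter-band residual: {res_bits:.2f}")
```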
A biometric based authentication and encryption Framework for Sensor Health Data in Cloud
- Sharma, Surender, Balasubramanian, Venki
- Authors: Sharma, Surender , Balasubramanian, Venki
- Date: 2014
- Type: Text , Conference proceedings
- Full Text:
- Description: Use of a remote healthcare monitoring application (HMA) can not only enable a healthcare seeker to live a normal life while receiving treatment but also prevent critical healthcare situations through early intervention. For this to happen, the HMA has to provide continuous monitoring through sensors attached to the patient's body or in close proximity to the patient. Owing to the elastic nature of the cloud, the implementation of HMAs in the cloud has recently attracted intense research. Although cloud-based implementation provides scalability, patient health data are highly sensitive and require a high level of privacy and security in cloud-based shared storage. In addition, protecting the large volume of sensor data arriving in real time from continuous patient monitoring poses an even bigger challenge. In this work, we propose a self-protective security framework for our cloud-based HMA. Using biometrics, the framework (1) protects the sensor data in the cloud from unauthorized access and (2) self-protects the data in case of breached access. The framework is detailed in the paper using mathematical formulation and algorithms. © 2014 IEEE.
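As a heavily simplified sketch of binding sensor-data encryption to a biometric (our illustration only; the paper's mathematical formulation is not reproduced, and real systems need fuzzy extractors because raw biometric samples vary between readings):

```python
import base64
import hashlib
from cryptography.fernet import Fernet

def key_from_template(template: bytes) -> bytes:
    """Derive a Fernet key from a (stable) biometric template."""
    digest = hashlib.sha256(template).digest()      # 32 bytes
    return base64.urlsafe_b64encode(digest)         # format Fernet expects

template = b"placeholder-fingerprint-minutiae-template"
f = Fernet(key_from_template(template))
token = f.encrypt(b'{"hr": 72, "spo2": 98}')        # a sensor reading
print(f.decrypt(token))   # recoverable only with the same biometric-derived key
```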
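One way to read "self-protect using biometrics" is to bind the storage encryption key to the patient's biometric template, so a breach of the cloud store alone yields nothing decryptable. The sketch below illustrates that binding under stated assumptions: the PBKDF2 derivation, the Fernet cipher (from the third-party `cryptography` package), and all names are our illustrative choices, not the paper's actual framework.

```python
import base64
import hashlib
from cryptography.fernet import Fernet

def key_from_biometric(template: bytes, salt: bytes) -> bytes:
    # Stretch the biometric template into a 32-byte urlsafe-base64 Fernet key.
    raw = hashlib.pbkdf2_hmac("sha256", template, salt, 200_000)
    return base64.urlsafe_b64encode(raw)

template = b"minutiae-feature-vector-of-patient"  # placeholder template
salt = b"per-patient-salt"                        # stored alongside the data
cipher = Fernet(key_from_biometric(template, salt))

token = cipher.encrypt(b'{"hr": 72, "spo2": 98}')  # sensor reading at rest
print(cipher.decrypt(token))   # recoverable only with the biometric template
```

A caveat on the sketch: real biometric readings are noisy, so a deployed system would derive the key through a fuzzy extractor or secure sketch rather than hashing the raw template directly.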
Performance improvement of vertical handoff algorithms for QoS support over heterogeneous wireless networks
- Sharna, Shusmita, Murshed, Manzur
- Authors: Sharna, Shusmita , Murshed, Manzur
- Date: 2011
- Type: Text , Conference proceedings
- Relation: Proceedings of the Thirty-Fourth Australasian Computer Science Conference (ACSC 2011); Perth, Australia; 17th-20th January 2011 p. 17-24
- Full Text:
- Reviewed:
- Description: During the vertical handoff procedure, the handoff decision is the most important step affecting the normal working of communication. An incorrect handoff decision or selection of a non-optimal network can result in undesirable effects such as higher costs, poor service experience, degraded quality of service and even termination of the current communication. The objective of this paper is to determine the conditions under which vertical handoff should be performed in heterogeneous wireless networks. We present a comprehensive analysis of different vertical handoff decision algorithms. To evaluate the tradeoffs between their performance and efficiency, we propose two improved vertical handoff decision algorithms based on the Markov Decision Process, referred to as MDP_SAW and MDP_TOPSIS. The proposed mechanism assists the terminal in selecting the top candidate network, offering better available bandwidth so that user satisfaction is effectively maximized. In addition, our proposed method avoids unbeneficial handoffs in wireless overlay networks.
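The SAW component of MDP_SAW is Simple Additive Weighting: each candidate network is scored as a weighted sum of normalised criteria, and the handoff target is the top scorer. A minimal sketch of that scoring step follows; the criteria, weights, and network figures are illustrative assumptions, not values from the paper.

```python
networks = {
    # name: (bandwidth Mbps [benefit], delay ms [cost], price $/MB [cost])
    "WLAN": (54.0, 40.0, 0.01),
    "UMTS": (2.0, 25.0, 0.10),
    "WiMAX": (30.0, 60.0, 0.05),
}
weights = (0.5, 0.3, 0.2)       # relative importance of each criterion
benefit = (True, False, False)  # True: larger is better

def saw_score(values, all_values):
    score = 0.0
    for i, v in enumerate(values):
        column = [vals[i] for vals in all_values]
        hi, lo = max(column), min(column)
        # Normalise benefit criteria as v/max, cost criteria as min/v.
        norm = v / hi if benefit[i] else lo / v
        score += weights[i] * norm
    return score

scores = {name: saw_score(vals, list(networks.values()))
          for name, vals in networks.items()}
print(max(scores, key=scores.get), scores)  # WLAN wins with these weights
```

In the paper's MDP formulation, a score of this kind would serve as the per-decision reward, so the network choice reflects expected future handoffs rather than only the current snapshot.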
A new building mask using the gradient of heights for automatic building extraction
- Siddiqui, Fasahat, Awrangjeb, Mohammad, Teng, Shyh, Lu, Guojun
- Authors: Siddiqui, Fasahat , Awrangjeb, Mohammad , Teng, Shyh , Lu, Guojun
- Date: 2016
- Type: Text , Conference proceedings
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (Dicta); Gold Coast, Australia; 30th November-2nd December 2016 p. 288-294
- Full Text:
- Reviewed:
- Description: A number of building detection methods have been proposed in the literature. However, they are not effective in detecting small buildings (typically 50 m²) and buildings with transparent roofs, due to the way area thresholds and ground points are used. This paper proposes a new building mask that overcomes these limitations and enables detection of buildings that are small in size as well as those with transparent roof materials. The proposed building detection method transforms the non-ground height information into an intensity image and then analyses the gradient information in the image. It uses a small area threshold of 1 m² and is thereby able to detect small buildings such as garden sheds. The use of non-ground points allows analysis of the gradient on all types of roof materials, and thus the method is also able to detect buildings with transparent roofs. Our experimental results show that the proposed method can successfully extract buildings even when their roofs are small and/or transparent, thereby achieving relatively higher average completeness and quality.
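The pipeline the abstract describes has two simple stages: rescale non-ground heights into an intensity image, then threshold its gradient magnitude so locally planar roof patches survive as building candidates. Here is a minimal sketch of those stages; the grid resolution, thresholds, and toy scene are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def building_mask(height_grid: np.ndarray, grad_thresh: float) -> np.ndarray:
    """height_grid: 2-D raster of non-ground heights (metres per cell)."""
    h = height_grid.astype(np.float64)
    # Rescale heights into a 0-255 intensity image.
    intensity = 255.0 * (h - h.min()) / max(np.ptp(h), 1e-9)
    gy, gx = np.gradient(intensity)
    grad = np.hypot(gx, gy)
    # Keep low-gradient (locally planar) cells as building candidates;
    # vegetation produces high, irregular gradients and is rejected.
    return grad < grad_thresh

# Toy raster: a flat 5 m-high shed surrounded by zero non-ground height.
grid = np.zeros((20, 20))
grid[5:12, 5:12] = 5.0
print(building_mask(grid, grad_thresh=10.0).sum(), "candidate cells")
```

The paper then applies its 1 m² area threshold to the connected components of such a mask, which is what lets garden-shed-sized buildings survive; that filtering step is omitted from the sketch.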
An improved building detection in complex sites using the LIDAR height variation and point density
- Siddiqui, Fasahat, Teng, Shyh, Lu, Guojun, Awrangjeb, Mohammad
- Authors: Siddiqui, Fasahat , Teng, Shyh , Lu, Guojun , Awrangjeb, Mohammad
- Date: 2013
- Type: Text , Conference proceedings
- Relation: 2013 28th International Conference on Image and Vision Computing New Zealand, IVCNZ 2013; Wellington; New Zealand; 27th-29th November 2013; published in International Conference Image and Vision Computing New Zealand p. 471-476
- Full Text:
- Reviewed:
- Description: In this paper, the height variation in LIDAR (Light Detection And Ranging) point cloud data and the point density are analysed to remove false building detections in highly vegetated and hilly sites. In general, the LIDAR points in a tree area have higher height variations than those in a building area. Moreover, the density of points having similar height values is lower in a tree area than in a building area. The proposed method uses this information to improve a current state-of-the-art building detection method. Qualitative and object-based quantitative analyses have been performed to verify the effectiveness of the proposed building detection method compared with the current method. The analysis shows that the proposed method successfully reduces false building detections (i.e. trees in highly complex sites of Australia and Germany), improving the average correctness and quality by 6.36% and 6.16% respectively.
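The two cues in this abstract reduce to simple per-region statistics: tree points show high height variance and few points near a common height, roof points the opposite. A minimal sketch of such a filter follows; both thresholds, the 0.2 m similarity band, and the data layout are illustrative assumptions, not the paper's method.

```python
import numpy as np

def looks_like_building(z: np.ndarray,
                        var_thresh: float = 0.25,
                        density_thresh: float = 0.6) -> bool:
    """z: heights (m) of the LIDAR points inside one candidate region."""
    height_var = z.var()
    # Fraction of points within 0.2 m of the region's dominant height:
    # high for a planar roof, low for scattered tree canopy returns.
    similar = np.mean(np.abs(z - np.median(z)) < 0.2)
    return height_var < var_thresh and similar > density_thresh

roof = np.random.default_rng(1).normal(6.0, 0.05, 400)   # planar roof points
tree = np.random.default_rng(2).uniform(2.0, 9.0, 400)   # scattered canopy
print(looks_like_building(roof), looks_like_building(tree))  # True False
```

In the paper this style of test is applied only to the candidate regions produced by an existing detector, pruning the tree false positives rather than detecting buildings from scratch.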