Automatic building extraction from LIDAR data covering complex urban scenes
- Authors: Awrangjeb, Mohammad , Lu, Guojun , Fraser, Clive
- Date: 2014
- Type: Text , Conference proceedings
- Relation: ISPRS Technical Commission III Symposium; Zurich, Switzerland; 5th-7th September 2014; published in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Vol. XL-3, p. 25-32
- Relation: http://purl.org/au-research/grants/arc/DE120101778
- Full Text:
- Reviewed:
- Description: This paper presents a new method for segmentation of LIDAR point cloud data for automatic building extraction. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points by applying the following two approaches: If a plane fitted at a point and its neighbourhood is perpendicular to a fictitious horizontal plane, then this point is designated as a wall point. When LIDAR points are projected on a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the same grid cell. If three or more points fall into the same cell, then the intermediate points are removed as wall points. The remaining non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based on the maximum height of the points and then each cluster is extended by applying height and neighbourhood constraints. Planar roof segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR-generated height, or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection line, locality and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied to remove tree planes, which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on long line segments.
Experimental results on ISPRS benchmark data sets show that the proposed method offers higher building detection and roof plane extraction rates than many existing methods, especially in complex urban scenes.
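The grid-projection test for wall points described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `cell_size` value and the tuple point representation are assumptions:

```python
from collections import defaultdict

def remove_wall_points(points, cell_size=0.25):
    """Drop intermediate points when three or more points share a grid cell.

    points: iterable of (x, y, z) tuples; returns the retained points.
    Points stacked vertically along a wall project into the same cell,
    so only the lowest and highest points of such a cell are kept.
    """
    cells = defaultdict(list)
    for p in points:
        key = (int(p[0] // cell_size), int(p[1] // cell_size))
        cells[key].append(p)
    kept = []
    for pts in cells.values():
        if len(pts) >= 3:
            pts = sorted(pts, key=lambda p: p[2])
            kept.extend([pts[0], pts[-1]])  # keep lowest and highest only
        else:
            kept.extend(pts)
    return kept
```
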
Detection and management of eating disorders by general practitioners in regional Australia
- Authors: Boyd, Candice , Aisbett, Damon , Howard, Andrew , Filiades, Toula
- Date: 2007
- Type: Text , Journal article
- Relation: Australian e-Journal for the Advancement of Mental Health Vol. 6, no. 2 (2007), p.
- Full Text:
- Reviewed:
- Description: The aim of this study was to explore the prevalence of eating disorders in primary care in the Ballarat region and to highlight the role of GPs in the detection and management of eating disorders in regional Australia. We used anonymous data previously collated by the Ballarat and District Division of General Practice on the prevalence of eating disorders and patterns of referral of eating disorder patients among GPs in their Division. Over half of GPs surveyed indicated that they treat patients with eating disorders within their practice rather than referring patients to other services. In referring on, GPs were more likely to refer to mental health professionals and dietitians. A notable finding was that these regional GPs were more likely to refer to metropolitan specialist services than local hospitals if their patients required an admission. GPs in regional Australia do significant work to detect and manage patients with clinical eating disorders in the absence of locally-based, specialist services. In this context, we recommend the establishment of linkage partnerships between GPs and mental health practitioners to facilitate early intervention for rural and regional eating disorder patients. Further research into the current treatment practices of regional GPs is also needed to ascertain their specific training needs with respect to this patient population.
- Description: C1
- Description: 2003005810
A general-purpose HLA collision detection framework
- Authors: Burns, Lance
- Date: 2006
- Type: Text , Conference paper
- Relation: Paper presented at SimTect 2006 Conference Proceedings, Simulation: Challenges & Opportunities for a Complex and Networked World, Melbourne : 29th May, 2006
- Full Text: false
- Reviewed:
- Description: Collision detection is fundamental to many kinds of simulation. Any simulation that needs to model interactions between solid objects needs some form of collision detection. However, despite this need, a general-purpose collision detection framework has not been developed for the High Level Architecture (HLA). This research paper proposes a framework that addresses this need. The framework differs from previous solutions by conforming to the principles of low coupling and high cohesion, cornerstones of the HLA ideology, which promote reuse of simulation components. To this end, the framework does not bind itself to the existing Object Model of the simulation it supports. The HLA Data Distribution Management (DDM) services are used to increase the network and processing efficiency of the solution. By incorporating advanced spatial partitioning and collision detection algorithms, the solution provides an accurate, fast collision detection service to HLA federates.
- Description: E1
- Description: 2003001877
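The spatial partitioning mentioned in the abstract is commonly realised as a uniform-grid broad phase: entities are hashed into grid cells so that only entities sharing a cell are tested pairwise. The sketch below is a generic illustration of that idea, not the paper's framework; the cell size and the 2D axis-aligned bounding boxes are assumptions:

```python
from itertools import combinations

def aabb_overlap(a, b):
    """a, b: (min_x, min_y, max_x, max_y) axis-aligned boxes."""
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def broad_phase_pairs(boxes, cell=10.0):
    """Map each box into every grid cell it spans, then run the exact
    overlap test only on boxes that share a cell. Returns index pairs."""
    grid = {}
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        for gx in range(int(x0 // cell), int(x1 // cell) + 1):
            for gy in range(int(y0 // cell), int(y1 // cell) + 1):
                grid.setdefault((gx, gy), []).append(i)
    hits = set()
    for members in grid.values():
        for i, j in combinations(members, 2):
            if aabb_overlap(boxes[i], boxes[j]):
                hits.add((min(i, j), max(i, j)))
    return hits
```
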
Fusion of LiDAR data and multispectral imagery for effective building detection based on graph and connected component analysis
- Authors: Gilani, Alinaqi , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2015
- Type: Text , Conference proceedings
- Full Text:
- Description: Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate the building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data is divided into two groups: ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees), while the white region describes the ground (earth). The second phase begins with Connected Component Analysis (CCA), where the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of a unit distance is defined between a black pixel and a neighbouring black pixel, if any; no edge exists from a black pixel to a neighbouring white pixel. This produces a disconnected components graph, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from multispectral imagery, around the graph components, where possible. In the fourth phase, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and occluded parts of isolated buildings. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards building edges. 
The proposed technique is evaluated using two Australian data sets: Aitkenvale and Hervey Bay, for object-based and pixel-based completeness, correctness, and quality. The proposed technique detects buildings larger than 50 m2 and 10 m2 in the Aitkenvale site with 100% and 91% accuracy, respectively, while in the Hervey Bay site it performs better with 100% accuracy for buildings larger than 10 m2 in area.
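The Connected Component Analysis step in the second phase amounts to labelling contiguous blocks of elevated pixels in the primary building mask. A minimal flood-fill sketch, assuming a 0/1 grid with 1 marking elevated (black) pixels and 4-connectivity:

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected components of 1-pixels in a 2D 0/1 grid.

    Returns a list of components, each a list of (row, col) pixels;
    each component corresponds to a prospective building or tree block.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    comps = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    cr, cc = queue.popleft()
                    comp.append((cr, cc))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = cr + dr, cc + dc
                        if 0 <= nr < rows and 0 <= nc < cols \
                                and mask[nr][nc] == 1 and not seen[nr][nc]:
                            seen[nr][nc] = True
                            queue.append((nr, nc))
                comps.append(comp)
    return comps
```
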
An automatic building extraction and regularisation technique using LiDAR point cloud data and orthoimage
- Authors: Gilani, Sayed Ali Naqi , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2016
- Type: Text , Journal article
- Relation: Remote Sensing Vol. 8, no. 3 (2016), p. 1-27
- Full Text:
- Reviewed:
- Description: The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on object size, height, area, and orientation are generally imposed, which adversely affects detection performance. Buildings that are small, under shadow, or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point cloud and orthoimagery. The building delineation process is carried out by identifying the candidate building regions and segmenting them into grids. Vegetation elimination, building detection and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting the image lines in the building regularisation process. Detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. When compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets, while on the ISPRS benchmark it does better than or equal to the counterparts. © 2016 by the authors.
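The per-area completeness, correctness, and quality figures quoted in the abstract follow the standard ISPRS evaluation definitions: TP/(TP+FN), TP/(TP+FP), and TP/(TP+FP+FN). A sketch, assuming detected and reference building footprints are given as pixel sets:

```python
def per_area_quality(detected, reference):
    """Per-area completeness, correctness and quality.

    detected, reference: sets of (row, col) building pixels.
    TP = pixels in both, FP = detected only, FN = reference only.
    """
    tp = len(detected & reference)
    fp = len(detected - reference)
    fn = len(reference - detected)
    completeness = tp / (tp + fn) if tp + fn else 0.0
    correctness = tp / (tp + fp) if tp + fp else 0.0
    quality = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return completeness, correctness, quality
```
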
Robust building roof segmentation using airborne point cloud data
- Authors: Gilani, Syed , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2016
- Type: Text , Conference proceedings , Conference paper
- Relation: 23rd IEEE International Conference on Image Processing, ICIP 2016; Phoenix, United States; 25th-28th September 2016; published in Proceedings - International Conference on Image Processing, ICIP Vol. 2016-August, p. 859-863
- Full Text: false
- Reviewed:
- Description: Approximation of the geometric features is an essential step in point cloud segmentation and surface reconstruction. Often, planar surfaces are estimated using principal component analysis (PCA), which is sensitive to noise and smooths sharp features. Hence, segmentation results in unreliable reconstructed surfaces. This article presents a point cloud segmentation method for building detection and roof plane extraction. It uses PCA for saliency feature estimation, including surface curvature and point normal. However, the point normals around anisotropic surfaces are approximated using a consistent isotropic sub-neighbourhood by Low-Rank Subspace with prior Knowledge (LRSCPK). The developed segmentation technique is tested using two real-world samples and two benchmark datasets. Per-object and per-area completeness and correctness results indicate the robustness of the approach and the quality of the reconstructed surfaces and extracted buildings. © 2016 IEEE.
- Description: Proceedings - International Conference on Image Processing, ICIP
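The PCA-based saliency estimation described in the abstract derives the point normal and surface curvature from the eigen-decomposition of a neighbourhood's covariance matrix: the eigenvector of the smallest eigenvalue approximates the normal, and the eigenvalue ratio measures curvature. This sketch shows that baseline step only, not the paper's LRSCPK refinement:

```python
import numpy as np

def pca_saliency(neighbourhood):
    """Estimate the surface normal and curvature of a point neighbourhood
    via PCA of its 3x3 covariance matrix.

    neighbourhood: (N, 3) array-like of points; returns (normal, curvature).
    """
    pts = np.asarray(neighbourhood, dtype=float)
    cov = np.cov(pts.T)
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = evecs[:, 0]                 # direction of least variance
    curvature = evals[0] / evals.sum()   # 0 for a perfectly planar patch
    return normal, curvature
```
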
A smart healthcare framework for detection and monitoring of COVID-19 using IoT and cloud computing
- Authors: Nasser, Nidal , Emad-ul-Haq, Qazi , Imran, Muhammad , Ali, Asmaa , Razzak, Imran , Al-Helali, Abdulaziz
- Date: 2023
- Type: Text , Journal article
- Relation: Neural Computing and Applications Vol. 35, no. 19 (2023), p. 13775-13789
- Full Text:
- Reviewed:
- Description: Coronavirus (COVID-19) is a very contagious infection that has drawn the world’s attention. Modeling such diseases can be extremely valuable in predicting their effects. Although classic statistical modeling may provide adequate models, it may also fail to understand the data’s intricacy. An automatic COVID-19 detection system based on computed tomography (CT) scan or X-ray images is effective, but a robust system design is challenging. In this study, we propose an intelligent healthcare system that integrates IoT-cloud technologies. This architecture uses smart connectivity sensors and deep learning (DL) for intelligent decision-making from the perspective of the smart city. The intelligent system tracks the status of patients in real time and delivers reliable, timely, and high-quality healthcare facilities at a low cost. COVID-19 detection experiments are performed using DL to test the viability of the proposed system. We use a sensor for recording, transferring, and tracking healthcare data. CT scan images from patients are sent to the cloud by IoT sensors, where the cognitive module is stored. The system decides the patient status by examining the images of the CT scan. The DL cognitive module makes the real-time decision on the possible course of action. When information is conveyed to a cognitive module, we use a state-of-the-art classification algorithm based on DL, i.e., ResNet50, to detect and classify whether the patients are normal or infected by COVID-19. We validate the proposed system’s robustness and effectiveness using two benchmark publicly available datasets (Covid-Chestxray dataset and Chex-Pert dataset). At first, a dataset of 6000 images is prepared from the above two datasets. The proposed system was trained on the collection of images from 80% of the datasets and tested with 20% of the data. Cross-validation is performed using a tenfold cross-validation technique for performance evaluation. 
The results indicate that the proposed system gives an accuracy of 98.6%, a sensitivity of 97.3%, a specificity of 98.2%, and an F1-score of 97.87%. Results clearly show that the accuracy, specificity, sensitivity, and F1-score of our proposed method are high. The comparison shows that the proposed system performs better than the existing state-of-the-art systems. The proposed system will be helpful in medical diagnosis research and healthcare systems. It will also support medical experts for COVID-19 screening and provide a valuable second opinion. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
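The reported accuracy, sensitivity, specificity, and F1-score are standard functions of the confusion-matrix counts produced during cross-validation. A sketch of how such figures are derived (the counts in the test below are illustrative, not from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall on the positive (COVID) class
    specificity = tn / (tn + fp)      # recall on the negative (normal) class
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1
```
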