Segmentation of airborne point cloud data for automatic building roof extraction
- Gilani, Syed, Awrangjeb, Mohammad, Lu, Guojun
- Authors: Gilani, Syed , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2018
- Type: Text , Journal article
- Relation: GIScience & Remote Sensing Vol. 55, no. 1 (2018), p. 63-89
- Full Text:
- Reviewed:
- Description: Roof plane segmentation is a complex task since point cloud data carry no connection information and do not provide any semantic characteristics of the underlying scanned surfaces. Point cloud density, complex roof profiles, and occlusion add further layers of complexity that are often encountered in practice. In this article, we present a new technique that provides a better interpolation of roof regions where multiple surfaces intersect, creating non-manifold points. As a result, these geometric features are preserved to achieve automated identification and segmentation of the roof planes from unstructured laser data. The proposed technique has been tested using the International Society for Photogrammetry and Remote Sensing benchmark and three Australian datasets, which differ in terrain, point density, building sizes, and vegetation. The qualitative and quantitative results show the robustness of the methodology and indicate that the proposed technique can eliminate vegetation and extract buildings, as well as their non-occluded parts, from complex scenes at a high success rate for building detection (between 83.9% and 100% per-object completeness) and roof plane extraction (between 73.9% and 96% per-object completeness). The proposed method works more robustly than some existing methods in the presence of occlusion and low point sampling, as indicated by a correctness above 95% for all the datasets.
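The per-object completeness and correctness figures quoted above are standard detection metrics over matched, missed, and false objects; a minimal sketch of how they are computed (the counts and helper names are illustrative, not the authors' code):

```python
def completeness(tp, fn):
    # Completeness (detection rate): fraction of reference objects found.
    return tp / (tp + fn) if (tp + fn) else 0.0

def correctness(tp, fp):
    # Correctness (precision): fraction of detected objects that are real.
    return tp / (tp + fp) if (tp + fp) else 0.0

# Hypothetical counts: 47 buildings matched, 3 missed, 2 false detections.
print(round(completeness(47, 3), 3))  # 0.94
print(round(correctness(47, 2), 3))   # 0.959
```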
An automatic building extraction and regularisation technique using LiDAR point cloud data and orthoimage
- Gilani, Sayed Ali Naqi, Awrangjeb, Mohammad, Lu, Guojun
- Authors: Gilani, Sayed Ali Naqi , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2016
- Type: Text , Journal article
- Relation: Remote Sensing Vol. 8, no. 3 (2016), p. 1-27
- Full Text:
- Reviewed:
- Description: The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on object size, height, area, and orientation are generally imposed, which adversely affects detection performance. Buildings that are small, under shadow, or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from point cloud and orthoimagery. The building delineation process is carried out by identifying the candidate building regions and segmenting them into grids. Vegetation elimination, building detection, and extraction of their partially occluded parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting the image lines in the building regularisation process. Detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets which differ in point density (1 to 29 points/m²), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with a correctness above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has higher per-object accuracy. Compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets and performs as well as or better than its counterparts on the ISPRS benchmark. © 2016 by the authors.
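The grid segmentation step mentioned above, dividing candidate building regions into grids, can be sketched as binning points into square cells; the function name and 1 m cell size are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

def rasterise(points, cell=1.0):
    """Bin 2D (x, y) points into square grid cells of side `cell` metres."""
    grid = defaultdict(list)
    for x, y in points:
        grid[(int(x // cell), int(y // cell))].append((x, y))
    return grid

# Two points share cell (0, 0); a third falls in cell (2, 2).
pts = [(0.2, 0.3), (0.8, 0.1), (2.5, 2.5)]
g = rasterise(pts, cell=1.0)
print(sorted(g))  # [(0, 0), (2, 2)]
```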
Robust building roof segmentation using airborne point cloud data
- Gilani, Syed, Awrangjeb, Mohammad, Lu, Guojun
- Authors: Gilani, Syed , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2016
- Type: Text , Conference proceedings , Conference paper
- Relation: 23rd IEEE International Conference on Image Processing, ICIP 2016; Phoenix, United States; 25th-28th September 2016; published in Proceedings - International Conference on Image Processing, ICIP Vol. 2016-August, p. 859-863
- Full Text: false
- Reviewed:
- Description: Approximation of geometric features is an essential step in point cloud segmentation and surface reconstruction. Often, planar surfaces are estimated using principal component analysis (PCA), which is sensitive to noise and smooths sharp features. Hence, the segmentation produces unreliable reconstructed surfaces. This article presents a point cloud segmentation method for building detection and roof plane extraction. It uses PCA for saliency feature estimation, including surface curvature and point normal. However, the point normals around anisotropic surfaces are approximated using a consistent isotropic sub-neighbourhood obtained by Low-Rank Subspace with prior Knowledge (LRSCPK). The developed segmentation technique is tested using two real-world samples and two benchmark datasets. Per-object and per-area completeness and correctness results indicate the robustness of the approach and the quality of the reconstructed surfaces and extracted buildings. © 2016 IEEE.
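The PCA-based saliency features mentioned above (point normal and surface curvature) are conventionally derived from the eigendecomposition of the local covariance matrix; the sketch below shows that standard formulation (not the LRSCPK refinement itself), assuming NumPy is available:

```python
import numpy as np

def pca_saliency(neigh):
    """Estimate point normal and surface curvature from a k-neighbourhood
    via PCA: the eigenvector of the 3x3 covariance matrix with the smallest
    eigenvalue approximates the normal, and curvature is
    lambda_min / (lambda_0 + lambda_1 + lambda_2)."""
    P = np.asarray(neigh, dtype=float)
    C = np.cov((P - P.mean(axis=0)).T)   # 3x3 covariance of the patch
    vals, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order
    normal = vecs[:, 0]                  # eigenvector of smallest eigenvalue
    curvature = vals[0] / vals.sum()
    return normal, curvature

# Noise-free planar patch z = 0: curvature ~ 0, normal ~ (0, 0, +/-1).
patch = [(x, y, 0.0) for x in range(3) for y in range(3)]
n, c = pca_saliency(patch)
print(abs(round(c, 6)), round(abs(n[2]), 3))  # ~0 and ~1 for a plane
```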
Fusion of LiDAR data and multispectral imagery for effective building detection based on graph and connected component analysis
- Gilani, Alinaqi, Awrangjeb, Mohammad, Lu, Guojun
- Authors: Gilani, Alinaqi , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2015
- Type: Text , Conference proceedings
- Full Text:
- Description: Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm, which combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data is divided into two groups: ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees), while the white region represents the ground (earth). The second phase begins with Connected Component Analysis (CCA), in which the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and each neighbouring black pixel, if any; no edge exists between a black pixel and a neighbouring white pixel. This produces a disconnected components graph, where each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from multispectral imagery, around the graph components, where possible. In the fourth step, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and isolated buildings' occluded parts. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards the building edges.
The proposed technique is evaluated using two Australian data sets: Aitkenvale and Hervey Bay, for object-based and pixel-based completeness, correctness, and quality. The proposed technique detects buildings larger than 50 m2 and 10 m2 in the Aitkenvale site with 100% and 91% accuracy, respectively, while in the Hervey Bay site it performs better with 100% accuracy for buildings larger than 10 m2 in area.
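The node/edge construction described above, where each black pixel is a node with unit-distance edges to neighbouring black pixels, amounts to labelling the 4-connected components of the binary mask. A minimal BFS sketch of that idea (pure Python, not the authors' implementation):

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected components of black (1) pixels in a binary mask,
    mirroring the graph view: each black pixel is a node with a unit edge
    to each neighbouring black pixel."""
    rows, cols = len(mask), len(mask[0])
    label = [[0] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not label[r][c]:
                next_id += 1                     # start a new component
                q = deque([(r, c)])
                label[r][c] = next_id
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not label[ny][nx]):
                            label[ny][nx] = next_id
                            q.append((ny, nx))
    return next_id, label

mask = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
n, lab = connected_components(mask)
print(n)  # 2 -- one component per prospective building/vegetation blob
```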
Automatic building extraction from LIDAR data covering complex urban scenes
- Awrangjeb, Mohammad, Lu, Guojun, Fraser, Clive
- Authors: Awrangjeb, Mohammad , Lu, Guojun , Fraser, Clive
- Date: 2014
- Type: Text , Conference proceedings
- Relation: ISPRS Technical Commission III Symposium; Zurich, Switzerland; 5th-7th September 2014; published in The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences Vol. XL-3, p. 25-32
- Relation: http://purl.org/au-research/grants/arc/DE120101778
- Full Text:
- Reviewed:
- Description: This paper presents a new method for segmentation of LIDAR point cloud data for automatic building extraction. Using the ground height from a DEM (Digital Elevation Model), the non-ground points (mainly buildings and trees) are separated from the ground points. Points on walls are removed from the set of non-ground points by applying the following two approaches: If a plane fitted at a point and its neighbourhood is perpendicular to a fictitious horizontal plane, then this point is designated as a wall point. When LIDAR points are projected on a dense grid, points within a narrow area close to an imaginary vertical line on the wall should fall into the same grid cell. If three or more points fall into the same cell, then the intermediate points are removed as wall points. The remaining non-ground points are then divided into clusters based on height and local neighbourhood. One or more clusters are initialised based on the maximum height of the points and then each cluster is extended by applying height and neighbourhood constraints. Planar roof segments are extracted from each cluster of points following a region-growing technique. Planes are initialised using coplanar points as seed points and then grown using plane compatibility tests. If the estimated height of a point is similar to its LIDAR generated height, or if its normal distance to a plane is within a predefined limit, then the point is added to the plane. Once all the planar segments are extracted, the common points between the neighbouring planes are assigned to the appropriate planes based on the plane intersection line, locality and the angle between the normal at a common point and the corresponding plane. A rule-based procedure is applied to remove tree planes which are small in size and randomly oriented. The neighbouring planes are then merged to obtain individual building boundaries, which are regularised based on long line segments.
Experimental results on ISPRS benchmark data sets show that the proposed method offers higher building detection and roof plane extraction rates than many existing methods, especially in complex urban scenes.
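The plane compatibility test described above, adding a point to a plane when its normal distance is within a predefined limit, can be sketched as follows; the tolerance value and function names are illustrative assumptions, and a full implementation would also restrict growth to spatial neighbours and refit the plane as points are added:

```python
def plane_distance(p, plane):
    """Unsigned normal distance from point p to plane (a, b, c, d)
    with unit normal (a, b, c): |a*x + b*y + c*z + d|."""
    a, b, c, d = plane
    x, y, z = p
    return abs(a * x + b * y + c * z + d)

def grow_plane(seed_idx, points, plane, tol=0.15):
    """Greedy region growing: starting from seed indices, add any point
    whose normal distance to the plane is within `tol` (metres)."""
    region = set(seed_idx)
    for i, p in enumerate(points):
        if i not in region and plane_distance(p, plane) <= tol:
            region.add(i)
    return region

# Three near-coplanar points grow onto the z = 0 plane; the outlier does not.
pts = [(0, 0, 0.0), (1, 0, 0.05), (0, 1, 0.1), (5, 5, 2.0)]
print(sorted(grow_plane({0}, pts, (0, 0, 1, 0.0))))  # [0, 1, 2]
```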
LiDAR segmentation using suitable seed points for 3D building extraction
- Abdullah, S.M., Awrangjeb, Mohammad, Lu, Guojun
- Authors: Abdullah, S.M. , Awrangjeb, Mohammad , Lu, Guojun
- Date: 2014
- Type: Text , Conference proceedings
- Full Text: false
- Description: Effective building detection and roof reconstruction are in high demand in the remote sensing research community. In this paper, we present a new automatic LiDAR point cloud segmentation method using suitable seed points for building detection and roof plane extraction. Firstly, the LiDAR point cloud is separated into "ground" and "non-ground" points based on the analysis of a DEM with a height threshold. Each non-ground point is marked as coplanar or non-coplanar based on a coplanarity analysis. Commencing from the maximum LiDAR point height towards the minimum, all the LiDAR points on each height level are extracted and separated into several groups based on 2D distance. From each group, lines are extracted and the coplanar point nearest to the midpoint of each line is considered as a seed point. This seed point and its neighbouring points are utilised to generate the plane equation. The plane is grown in a region growing fashion until no new points can be added. A robust rule-based tree removal method, comprising four different rules, is applied subsequently to remove planar segments on trees. Finally, the boundary of each object is extracted from the segmented LiDAR point cloud. The method is evaluated with six different data sets comprising hilly and densely vegetated areas. The experimental results indicate that the proposed method offers high building detection and roof plane extraction rates compared to a recently proposed method.
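The first step above, separating ground from non-ground points against a DEM, can be sketched like this; `dem_height` and the 1 m threshold are hypothetical stand-ins for the paper's DEM lookup and tuned threshold:

```python
def split_ground(points, dem_height, h_thresh=1.0):
    """Separate LiDAR points into ground and non-ground using the local
    DEM ground height: points more than `h_thresh` metres above ground
    are non-ground (buildings, trees)."""
    ground, non_ground = [], []
    for x, y, z in points:
        if z - dem_height(x, y) > h_thresh:
            non_ground.append((x, y, z))
        else:
            ground.append((x, y, z))
    return ground, non_ground

def flat_dem(x, y):
    # Hypothetical flat terrain at 10 m elevation.
    return 10.0

pts = [(0, 0, 10.2), (1, 1, 14.0), (2, 2, 10.9)]
g, ng = split_ground(pts, flat_dem)
print(len(g), len(ng))  # 2 1 -- only the 14 m point is non-ground
```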