A fast corner detector based on the chord-to-point distance accumulation technique
- Authors: Awrangjeb, Mohammad, Lu, Guojun, Fraser, Clive, Ravanbakhsh, Mehdi
- Date: 2009
- Type: Text, Conference paper
- Relation: 2009 Digital Image Computing Techniques and Applications (DICTA 2009) p. 519-525
- Full Text: false
- Reviewed:
Fusion of LiDAR data and multispectral imagery for effective building detection based on graph and connected component analysis
- Authors: Gilani, Alinaqi, Awrangjeb, Mohammad, Lu, Guojun
- Date: 2015
- Type: Text, Conference proceedings
- Full Text:
- Description: Building detection in complex scenes is a non-trivial exercise due to building shape variability, irregular terrain, shadows, and occlusion by highly dense vegetation. In this research, we present a graph-based algorithm which combines multispectral imagery and airborne LiDAR information to completely delineate building boundaries in urban and densely vegetated areas. In the first phase, LiDAR data are divided into two groups, ground and non-ground data, using ground height from a bare-earth DEM. A mask, known as the primary building mask, is generated from the non-ground LiDAR points, where the black region represents the elevated area (buildings and trees) and the white region describes the ground (earth). The second phase begins with Connected Component Analysis (CCA), in which the number of objects present in the test scene is identified, followed by initial boundary detection and labelling. Additionally, a graph is generated from the connected components, where each black pixel corresponds to a node. An edge of unit distance is defined between a black pixel and each neighbouring black pixel; no edge exists between a black pixel and a neighbouring white pixel. This produces a disconnected-components graph, in which each component represents a prospective building or dense vegetation (a contiguous block of black pixels from the primary mask). In the third phase, a clustering process clusters the segmented lines, extracted from the multispectral imagery, around the graph components where possible. In the fourth step, NDVI, image entropy, and LiDAR data are utilised to discriminate between vegetation, buildings, and isolated buildings' occluded parts. Finally, the initially extracted building boundary is extended pixel-wise using NDVI, entropy, and LiDAR data to completely delineate the building and to maximise the boundary reach towards the building edges.
The proposed technique is evaluated using two Australian data sets, Aitkenvale and Hervey Bay, for object-based and pixel-based completeness, correctness, and quality. It detects buildings larger than 50 m2 and 10 m2 in the Aitkenvale site with 100% and 91% accuracy, respectively, while in the Hervey Bay site it performs better, with 100% accuracy for buildings larger than 10 m2 in area.
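The second-phase graph construction described above (black pixels as nodes, unit-distance edges between neighbouring black pixels, no edges to white pixels) amounts to labelling the 4-connected components of the primary building mask. A minimal sketch of that step, not the authors' implementation:

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected components of black (True) pixels in a binary mask.

    Returns a dict mapping a component id to its list of (row, col) pixels.
    Each component is one candidate building or vegetation blob.
    """
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components, next_id = {}, 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # BFS over unit-distance edges between neighbouring black pixels
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components[next_id] = pixels
                next_id += 1
    return components
```

Each returned component then corresponds to one node set of the disconnected-components graph the abstract describes.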
An automatic building extraction and regularisation technique using LiDAR point cloud data and orthoimage
- Authors: Gilani, Sayed Ali Naqi, Awrangjeb, Mohammad, Lu, Guojun
- Date: 2016
- Type: Text, Journal article
- Relation: Remote Sensing Vol. 8, no. 3 (2016), p. 1-27
- Full Text:
- Reviewed:
- Description: The development of robust and accurate methods for automatic building detection and regularisation using multisource data continues to be a challenge due to point cloud sparsity, high spectral variability, differences among urban objects, surrounding complexity, and data misalignment. To address these challenges, constraints on an object's size, height, area, and orientation are generally imposed, which adversely affects detection performance. Buildings that are small, under shadow, or partly occluded are often discarded during the elimination of superfluous objects. To overcome these limitations, a methodology is developed to extract and regularise buildings using features from the point cloud and orthoimagery. The building delineation process is carried out by identifying the candidate building regions and segmenting them into grids. Vegetation elimination, building detection, and extraction of partially occluded building parts are achieved by synthesising the point cloud and image data. Finally, the detected buildings are regularised by exploiting image lines in the regularisation process. The detection and regularisation processes have been evaluated using the ISPRS benchmark and four Australian data sets, which differ in point density (1 to 29 points/m2), building sizes, shadows, terrain, and vegetation. Results indicate 83% to 93% per-area completeness with a correctness of above 95%, demonstrating the robustness of the approach. The absence of over- and many-to-many segmentation errors in the ISPRS data set indicates that the technique has high per-object accuracy. When compared with six existing similar methods, the proposed detection and regularisation approach performs significantly better on the more complex (Australian) data sets and performs as well as or better than its counterparts on the ISPRS benchmark. © 2016 by the authors.
Robust building roof segmentation using airborne point cloud data
- Authors: Gilani, Syed, Awrangjeb, Mohammad, Lu, Guojun
- Date: 2016
- Type: Text, Conference proceedings, Conference paper
- Relation: 23rd IEEE International Conference on Image Processing, ICIP 2016; Phoenix, United States; 25th-28th September 2016; published in Proceedings - International Conference on Image Processing, ICIP Vol. 2016-August, p. 859-863
- Full Text: false
- Reviewed:
- Description: Approximation of geometric features is an essential step in point cloud segmentation and surface reconstruction. Often, planar surfaces are estimated using principal component analysis (PCA), which is sensitive to noise and smooths sharp features; hence, segmentation results in unreliable reconstructed surfaces. This article presents a point cloud segmentation method for building detection and roof plane extraction. It uses PCA for saliency feature estimation, including surface curvature and point normals. However, the point normals around anisotropic surfaces are approximated using a consistent isotropic sub-neighbourhood obtained by Low-Rank Subspace with prior Knowledge (LRSCPK). The developed segmentation technique is tested using two real-world samples and two benchmark datasets. Per-object and per-area completeness and correctness results indicate the robustness of the approach and the quality of the reconstructed surfaces and extracted buildings. © 2016 IEEE.
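The PCA saliency step named in the abstract (surface curvature and point normal per point) is conventionally computed from the eigen-decomposition of a neighbourhood covariance matrix. A sketch of that standard estimate only, not of the LRSCPK refinement the paper adds:

```python
import numpy as np

def pca_saliency(neighbourhood):
    """Estimate a surface normal and a curvature proxy for a point from its
    k-nearest-neighbour patch, given as a (k, 3) array, via PCA.

    The normal is the eigenvector of the neighbourhood covariance with the
    smallest eigenvalue; the curvature proxy lambda_0 / (lambda_0 + lambda_1
    + lambda_2) is near zero on planar patches.
    """
    pts = np.asarray(neighbourhood, dtype=float)
    centred = pts - pts.mean(axis=0)
    cov = centred.T @ centred / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]                  # direction of least variance
    curvature = eigvals[0] / eigvals.sum()
    return normal, curvature
```

On a flat roof patch this returns a normal along the plane's perpendicular and a curvature near zero, which is exactly the sensitivity to noise at sharp features that motivates the paper's refinement.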
Segmentation of airborne point cloud data for automatic building roof extraction
- Authors: Gilani, Syed, Awrangjeb, Mohammad, Lu, Guojun
- Date: 2018
- Type: Text, Journal article
- Relation: GIScience & Remote Sensing Vol. 55, no. 1 (2018), p. 63-89
- Full Text:
- Reviewed:
- Description: Roof plane segmentation is a complex task, since point cloud data carry no connectivity information and do not provide any semantic characteristics of the underlying scanned surfaces. Point cloud density, complex roof profiles, and occlusion add another layer of complexity that is often encountered in practice. In this article, we present a new technique that provides a better interpolation of roof regions where multiple surfaces intersect, creating non-manifold points. As a result, these geometric features are preserved to achieve automated identification and segmentation of the roof planes from unstructured laser data. The proposed technique has been tested using the International Society for Photogrammetry and Remote Sensing benchmark and three Australian datasets, which differ in terrain, point density, building sizes, and vegetation. The qualitative and quantitative results show the robustness of the methodology and indicate that the proposed technique can eliminate vegetation and extract buildings, as well as their non-occluding parts, from complex scenes at a high success rate for building detection (between 83.9% and 100% per-object completeness) and roof plane extraction (between 73.9% and 96% per-object completeness). The proposed method works more robustly than some existing methods in the presence of occlusion and low point sampling, as indicated by a correctness of above 95% for all the datasets.
Voxel-based extraction of individual pylons and wires from lidar point cloud data
- Authors: Munir, Nosheen, Awrangjeb, Mohammad, Stantic, Bela, Lu, Guojun, Islam, Syed
- Date: 2019
- Type: Text, Journal article
- Relation: ISPRS annals of the photogrammetry, remote sensing and spatial information sciences Vol. IV-4/W8 (2019), p. 91-98
- Full Text:
- Reviewed:
- Description: Extraction of individual pylons and wires is important for modelling 3D objects in a power line corridor (PLC) map. However, existing methods mostly classify points into distinct classes like pylons and wires, but hardly into individual pylons or wires. The proposed method extracts standalone pylons, vegetation, and wires from LiDAR data; the extraction of individual objects is needed for detailed PLC mapping. The proposed approach starts with the separation of ground and non-ground points. The non-ground points are then classified into vertical (e.g., pylons and vegetation) and non-vertical (e.g., wires) object points using the vertical profile feature (VPF) through a binary support vector machine (SVM) classifier. Individual pylons and vegetation are then separated using their shape and area properties. The locations of the pylons are further used to extract the span points between two successive pylons. Finally, the span points are voxelised, and the alignment properties of wires in the voxel grid are used to extract individual wire points. The results are evaluated on a dataset that has multiple spans with bundled wires in each span. The evaluation results show that the proposed method and features are very effective for the extraction of individual wires, pylons, and vegetation, with 99% correctness and 98% completeness.
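The voxelisation of span points mentioned in the final step can be sketched as a sparse grid keyed by integer voxel coordinates, over which aligned runs of occupied voxels would then be traced; the `voxel_size` default here is a hypothetical choice, not a value from the paper:

```python
import numpy as np
from collections import defaultdict

def voxelise(points, voxel_size=0.5):
    """Group 3-D points into a sparse voxel grid.

    Returns a dict mapping an integer voxel key (ix, iy, iz) to the indices
    of the points that fall inside that voxel.
    """
    grid = defaultdict(list)
    for i, p in enumerate(np.asarray(points, dtype=float)):
        key = tuple((p // voxel_size).astype(int))  # floor to voxel indices
        grid[key].append(i)
    return dict(grid)
```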
Detection of Malleefowl Mounds from Point Cloud Data
- Authors: Parvin, Nahida, Awrangjeb, Mohammad, Irvin, Marc, Florentine, Singarayer, Murshed, Manzur, Lu, Guojun
- Date: 2021
- Type: Text, Conference paper
- Relation: 2021 International Conference on Digital Image Computing: Techniques and Applications, DICTA 2021, Gold Coast, 29 November to 1 December 2021
- Full Text: false
- Reviewed:
- Description: Airborne light detection and ranging (LiDAR) data have become a cost- and time-efficient means of estimating the size of timid fauna populations through the identification of artefacts that evidence their occurrence in large, hostile geographic areas. This unobtrusive detection method helps conservation managers to assess the stability of a population and to design appropriate conservation programs. Here we propose a mound (nest) detection method for Australia's iconic native bird, the Malleefowl, from point cloud data, which can act as a surrogate for population data. Existing detection methods rely largely on manual observation and are therefore not efficient for covering large and remote areas. The proposed mound detection method identifies mound features based on the height and intensity values provided by the point cloud data. Candidate mound points are initially selected by applying a height threshold that utilises the classified ground points and their corresponding digital elevation model (DEM). Then, another threshold, based on an intensity range derived from ground-truth mound area analysis, is applied to the extracted initial mound points to find the final candidate mound points. These extracted points are then used to generate a binary mask in which the potential mound points are sparse. To connect those points, a morphological filter is applied to the binary mask. To separate the mounds from the remaining non-mound objects, a morphological cleaning operation and a connected component analysis are carried out on the mask, and the non-mound objects are removed using the area property of mounds derived from empirical analysis of ground-truth observations. Finally, the effectiveness of the proposed technique is calculated against the ground truth.
Although mound shapes and structures are highly variable in nature, our height- and intensity-based mound point extraction method detected 55% of the ground-truthed mounds. © 2021 IEEE.
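The two-threshold candidate selection described above (height above the DEM, then an intensity window) might look like the following; the threshold values are placeholders, since the paper derives its actual values from ground-truth mound analysis:

```python
import numpy as np

def candidate_mound_points(height_above_dem, intensity,
                           height_thresh=0.3, intensity_range=(20.0, 60.0)):
    """Flag candidate Malleefowl-mound points by height above the DEM and a
    LiDAR intensity window. Both inputs are 1-D arrays over the same points;
    returns a boolean mask of candidate points. Threshold defaults are
    illustrative only.
    """
    h = np.asarray(height_above_dem, dtype=float)
    i = np.asarray(intensity, dtype=float)
    lo, hi = intensity_range
    # Keep points that are both elevated and inside the intensity range
    return (h > height_thresh) & (i >= lo) & (i <= hi)
```

The resulting mask would then feed the morphological filtering and connected component analysis described in the abstract.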
A new building mask using the gradient of heights for automatic building extraction
- Authors: Siddiqui, Fasahat, Awrangjeb, Mohammad, Teng, Shyh, Lu, Guojun
- Date: 2016
- Type: Text, Conference proceedings
- Relation: 2016 International Conference on Digital Image Computing: Techniques and Applications (Dicta); Gold Coast, Australia; 30th November-2nd December 2016 p. 288-294
- Full Text:
- Reviewed:
- Description: A number of building detection methods have been proposed in the literature. However, they are not effective in detecting small buildings (typically, 50 m2) and buildings with transparent roofs, due to the way area thresholds and ground points are used. This paper proposes a new building mask to overcome these limitations, enabling the detection of buildings that have transparent roof materials as well as those that are small in size. The proposed building detection method transforms the non-ground height information into an intensity image and then analyses the gradient information in the image. It uses a small area threshold of 1 m2 and is thereby able to detect small buildings such as garden sheds. The use of non-ground points allows analysis of the gradient on all types of roof materials; thus, the method is also able to detect buildings with transparent roofs. Our experimental results show that the proposed method can successfully extract buildings even when their roofs are small and/or transparent, thereby achieving relatively higher average completeness and quality.
An improved building detection in complex sites using the LIDAR height variation and point density
- Authors: Siddiqui, Fasahat, Teng, Shyh, Lu, Guojun, Awrangjeb, Mohammad
- Date: 2013
- Type: Text, Conference proceedings
- Relation: 2013 28th International Conference on Image and Vision Computing New Zealand, IVCNZ 2013; Wellington; New Zealand; 27th-29th November 2013; published in International Conference Image and Vision Computing New Zealand p. 471-476
- Full Text:
- Reviewed:
- Description: In this paper, the height variation in LiDAR (Light Detection And Ranging) point cloud data and the point density are analysed to remove false building detections in highly vegetated and hilly sites. In general, LiDAR points in a tree area have higher height variations than those in a building area. Moreover, the density of points having similar height values is lower in a tree area than in a building area. The proposed method uses such information as an improvement to a current state-of-the-art building detection method. Qualitative and object-based quantitative analyses have been performed to verify the effectiveness of the proposed building detection method compared with a current method. The analysis shows that the proposed building detection method successfully reduces false building detections (i.e., trees in highly complex sites in Australia and Germany), and the average correctness and quality are improved by 6.36% and 6.16%, respectively.
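The two cues the paper relies on, height variation and the density of similar-height points, can be sketched per candidate cell as follows; the 0.5 m similarity tolerance is an illustrative choice, not a value from the paper:

```python
import numpy as np

def tree_likelihood_features(heights):
    """Compute the two tree-rejection cues for the LiDAR points falling in
    one candidate cell: the spread of point heights, and the fraction of
    points near the dominant height. Trees show a high spread and a low
    similar-height density; flat roofs the opposite.
    """
    h = np.asarray(heights, dtype=float)
    height_variation = h.std()
    mode_height = np.median(h)  # robust stand-in for the dominant height
    similar_density = np.mean(np.abs(h - mode_height) < 0.5)
    return height_variation, similar_density
```

A cell with high `height_variation` and low `similar_density` would be rejected as vegetation rather than kept as a building.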
A robust gradient based method for building extraction from LiDAR and photogrammetric imagery
- Authors: Siddiqui, Fasahat, Teng, Shyh, Awrangjeb, Mohammad, Lu, Guojun
- Date: 2016
- Type: Text, Journal article
- Relation: Sensors (Switzerland) Vol. 16, no. 7 (2016), p. 1-24
- Full Text:
- Reviewed:
- Description: Existing automatic building extraction methods are not effective in extracting buildings which are small in size or have transparent roofs. The application of a large area threshold prohibits the detection of small buildings, and the use of ground points in generating the building mask prevents the detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation. However, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have random height changes. With such an analysis, buildings of a greater range of sizes, with a transparent or opaque roof, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all its parameters are set automatically from the data. The other post-processing stages, including variance, point density, and shadow elimination, are also applied to verify the extracted buildings, using comparatively few empirically set parameters. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness, and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs.
When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing ones on various evaluation metrics. © 2016 by the authors; licensee MDPI, Basel, Switzerland.
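The central gradient cue, a constant height change along a roof slope versus random changes in tree canopy, can be scored as the spread of the gradient magnitude over a height patch. A sketch of that cue only, not the full GBE pipeline:

```python
import numpy as np

def gradient_regularity(height_image):
    """Score how building-like a height patch is from its gradient field.

    A planar roof has a near-constant gradient along its slope, so the
    standard deviation of the gradient magnitude is small; tree canopies
    give random height changes and a large deviation.
    """
    z = np.asarray(height_image, dtype=float)
    gy, gx = np.gradient(z)           # per-pixel height change in y and x
    magnitude = np.hypot(gx, gy)      # gradient magnitude
    return magnitude.std()            # ~0 for a plane, large for canopy
```

Thresholding this score per candidate region separates sloped-but-regular roof planes from irregular vegetation without interpolating point heights.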
Automatic Extraction of Buildings in an Urban Region
- Authors: Siddiqui, Fasahat, Teng, Shyh, Lu, Guojun, Awrangjeb, Mohammad
- Date: 2014
- Type: Text, Conference proceedings
- Relation: 29th International Conference on Image and Vision Computing New Zealand, IVCNZ 2014; Hamilton; New Zealand; 19th-21st November 2014; published in ACM International Conference Proceeding Series p. 178-183
- Full Text:
- Reviewed:
- Description: Several automatic building extraction methods have been introduced in the literature, but none of them is capable of completely extracting the portions of a building that lie below a pre-defined minimum building height threshold. This paper proposes a systematic method which analyses the height differences between extracted adjacent planes above and below the height threshold, as well as the planes' connectivity, thereby extracting all portions belonging to buildings more completely. In general, the height difference between the edges of adjacent planes above and below the height threshold that belong to the same building is more uniform. In addition, extracted planes below the height threshold that belong to a building and their adjacent ground planes also have a clear height difference. The proposed method incorporates such information to achieve better performance in building extraction. We have compared our proposed method with a current state-of-the-art building extraction method, qualitatively and quantitatively. Our experimental results show that our proposed method successfully recovers portions of a building below the height threshold, thereby achieving relatively higher average completeness (an improvement of 1.14%) and quality (an improvement of 0.93%).
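The uniform-height-difference test described above, used to decide whether a below-threshold plane belongs to the same building as an adjacent above-threshold plane, can be sketched as follows; the tolerance is illustrative, not a value from the paper:

```python
import numpy as np

def same_building(edge_heights_upper, edge_heights_lower, tolerance=0.25):
    """Decide whether a plane below the minimum-height threshold belongs to
    the same building as the adjacent plane above it, by testing whether the
    height difference along their shared edge is roughly uniform.

    Both inputs are 1-D arrays of heights sampled along the shared edge.
    """
    diff = (np.asarray(edge_heights_upper, dtype=float)
            - np.asarray(edge_heights_lower, dtype=float))
    # A uniform step (small spread of the differences) indicates one building
    return bool(diff.std() < tolerance)
```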