ABSTRACT: Urban density is an important factor in several fields, e.g. urban design, planning and land management. Modern remote sensors deliver ample information for the estimation of specific urban land cover classes (2D indicators) and of the height of the classified urban objects (3D indicators) within an Area of Interest (AOI). In this research, two of these indicators, Building Coverage Ratio (BCR) and Floor Area Ratio (FAR), are numerically and automatically derived from high-resolution airborne RGB orthophotos and LiDAR data. In the pre-processing step, the low-resolution elevation data are fused with the high-resolution optical data through a mean-shift based discontinuity preserving smoothing algorithm. The outcome is an improved normalized digital surface model (nDSM): upsampled elevation data with considerable improvement regarding region filling and "straightness" of elevation discontinuities. In a subsequent step, a Multilayer Feedforward Neural Network (MFNN) is used to classify all pixels of the AOI into building or non-building categories. The total surface of each block and of the buildings within it is then computed from the number of their pixels and the surface covered by a unit pixel. Comparisons of the automatically derived BCR and FAR indicators with manually derived ones show the applicability and effectiveness of the proposed methodology.
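As a concrete illustration of this pixel-counting step, the minimal sketch below computes BCR and FAR from a building mask, a block mask and the nDSM. It is not the authors' implementation: the 20 cm pixel size, the assumed 3 m average storey height used to turn nDSM heights into floor counts, and the helper name bcr_far are all assumptions introduced here.

    import numpy as np

    PIXEL_AREA = 0.2 * 0.2    # m^2 per pixel, assuming a 20 cm orthophoto grid
    FLOOR_HEIGHT = 3.0        # assumed average storey height in metres

    def bcr_far(building_mask, block_mask, ndsm):
        """building_mask, block_mask: boolean (H, W) arrays from the classifier
        and the block outline; ndsm: (H, W) heights in metres above terrain.
        Hypothetical helper illustrating the pixel-counting idea."""
        block_area = block_mask.sum() * PIXEL_AREA
        footprint = building_mask & block_mask
        footprint_area = footprint.sum() * PIXEL_AREA
        # floors per building pixel, estimated from the normalized height
        floors = np.maximum(np.round(ndsm[footprint] / FLOOR_HEIGHT), 1)
        gross_floor_area = floors.sum() * PIXEL_AREA
        return footprint_area / block_area, gross_floor_area / block_area

For example, a block of 10,000 pixels (400 m^2 at this grid) containing 4,000 two-storey building pixels would yield BCR = 0.4 and FAR = 0.8.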
Nowadays there is an increasing demand for detailed 3D modeling of buildings using elevation data such as those acquired from LiDAR airborne scanners. The various techniques that have been developed for this purpose typically perform segmentation into homogeneous regions followed by boundary extraction, and are based on some combination of LiDAR data, digital maps, satellite images and aerial orthophotographs. In the present work, the dataset includes an aerial RGB orthophoto, a DSM and a DTM with spatial resolutions of 20 cm, 1 m and 2 m respectively. A normalized DSM (nDSM) is generated and fused with the optical data in order to increase its resolution to 20 cm. The proposed methodology is a two-step approach. First, nearest neighbor interpolation is applied to the low-resolution nDSM to obtain a low-quality, ragged elevation image. Next, a mean shift-based discontinuity preserving smoothing is performed on the fused data. The outcome is, on the one hand, a more homogeneous RGB image with smoothed terrace coloring that still preserves the optical edges and, on the other hand, upsampled elevation data with considerable improvement regarding region filling and "straightness" of elevation discontinuities. Besides the visually apparent increase in the accuracy of building boundaries, the effectiveness of the proposed method is demonstrated by using the processed dataset as input to five supervised classification methods. The performance of each method is evaluated using a subset of the test area as ground truth. Comparisons with classification results obtained from the original data demonstrate that preprocessing the input dataset with the mean shift algorithm significantly improves the performance of all tested classifiers for building block extraction.
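A compact way to picture this two-step preprocessing is the sketch below: nearest-neighbour upsampling of the 1 m nDSM to the 20 cm RGB grid, followed by joint-domain mean-shift filtering of the stacked (R, G, B, height) vectors. This is a simplified illustration written under stated assumptions, not the paper's exact filter; the function name and the bandwidths hs and hr are illustrative values.

    import numpy as np

    def fuse_and_smooth(rgb, ndsm_low, scale=5, hs=8, hr=16, n_iter=5):
        """rgb: (H, W, 3) float image at 20 cm; ndsm_low: (H//scale, W//scale)
        nDSM at 1 m. hs: spatial bandwidth in pixels, hr: range bandwidth.
        Returns the smoothed RGB image and the upsampled, smoothed elevation."""
        # Step 1: nearest-neighbour interpolation (each 1 m cell -> scale x scale block)
        height = np.kron(ndsm_low, np.ones((scale, scale)))
        H, W = height.shape
        feat = np.dstack([rgb, height]).astype(float)   # joint (R, G, B, h) space
        out = np.empty_like(feat)
        # Step 2: per-pixel mean-shift mode seeking; averaging only over
        # neighbours that are close in the range domain is what preserves
        # colour and elevation discontinuities
        for y in range(H):
            for x in range(W):
                cy, cx, cf = y, x, feat[y, x]
                for _ in range(n_iter):
                    y0, y1 = max(cy - hs, 0), min(cy + hs + 1, H)
                    x0, x1 = max(cx - hs, 0), min(cx + hs + 1, W)
                    win = feat[y0:y1, x0:x1]
                    near = np.linalg.norm(win - cf, axis=-1) < hr
                    if not near.any():
                        break
                    cf = win[near].mean(axis=0)
                    ys, xs = np.nonzero(near)
                    cy, cx = y0 + int(ys.mean()), x0 + int(xs.mean())
                out[y, x] = cf
        return out[..., :3], out[..., 3]

Filtering the stacked four-channel vectors jointly, rather than RGB and elevation separately, is what lets the sharp optical edges guide the straightening of the much coarser elevation discontinuities.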
This paper examines the utility of high-resolution airborne RGB orthophotos and LiDAR data for mapping residential land uses within the spatial limits of a suburb of Athens, Greece. Modern remote sensors deliver ample information over the AOI (area of interest) for the estimation of 2D indicators or, with the inclusion of elevation data, of 3D indicators for the classification of urban land. In this research, two of these indicators, BCR (building coverage ratio) and FAR (floor area ratio), are automatically evaluated. In the pre-processing step, the low-resolution elevation data are fused with the high-resolution optical data through a mean-shift based discontinuity preserving smoothing algorithm. The outcome is an nDSM (normalized digital surface model) comprising upsampled elevation data with considerable improvement regarding region filling and "straightness" of elevation discontinuities. Following this step, an MFNN (multilayer feedforward neural network) is used to classify all pixels of the AOI into building or non-building categories. The information derived from the BCR and FAR building indicators, adapted to the landscape characteristics of the test area, is used to propose two new indices and an automatic post-classification based on the density of buildings.
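The nDSM generation and the per-pixel MFNN classification can be sketched as follows. This is a minimal illustration under assumptions not stated in the abstract: the elevation grids are taken as already resampled to the RGB resolution, the two hidden-layer sizes are arbitrary, scikit-learn's MLPClassifier stands in for the paper's MFNN, and the helper name classify_buildings and the training indices are hypothetical.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def classify_buildings(rgb, dsm, dtm, train_idx, train_labels):
        """Pixel-wise building / non-building classification with an MFNN.
        rgb: (H, W, 3); dsm, dtm: (H, W) elevation grids resampled to the RGB
        resolution; train_idx / train_labels: flat indices and 0/1 labels of
        a manually digitized training subset."""
        ndsm = np.clip(dsm - dtm, 0.0, None)            # heights above terrain
        X = np.column_stack([rgb.reshape(-1, 3), ndsm.reshape(-1, 1)])
        mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300)
        mlp.fit(X[train_idx], train_labels)
        return mlp.predict(X).reshape(ndsm.shape).astype(bool)

Feeding the network the fused (R, G, B, height) vector per pixel reflects the role of the nDSM here: the height channel is what separates buildings from spectrally similar ground-level surfaces such as paved lots.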