Abstract. High-resolution stereo and multi-view imagery are used to derive digital surface models (DSMs) over large areas for numerous applications in topography, cartography, geomorphology, and 3D surface modelling. Dense image matching is a key component of 3D reconstruction and mapping, although the reconstruction process encounters difficulties for water surfaces, for areas that appear untextured or repetitively patterned in the images, and for very small objects. This study investigates the capabilities and limitations of space-borne very high resolution imagery, specifically Pléiades (0.70 m) and WorldView-3 (0.31 m) imagery, with respect to the automatic point cloud reconstruction of small isolated objects. For this purpose, single buildings, vehicles, and trees were analyzed. The main focus is to quantify their detectability in the photogrammetrically derived DSMs by estimating their heights as a function of object type and size. The estimated height was investigated with respect to the following parameters: building length and width, vehicle length and width, and tree crown diameter. Manually measured object heights from the oriented images served as a reference. We demonstrate that the DSM-based estimated height of a single object strongly depends on its size, and we quantify this effect: the relative heights increase gradually from very small objects, which are not elevated above their surroundings at all, to large objects. For small vehicles, buildings, and trees (lengths <7 pixels, crown diameters <4 pixels), the Pléiades-derived DSM captured less than 20% of the actual object height, or missed the object entirely. For large vehicles, buildings, and trees (lengths >14 pixels, crown diameters >7 pixels), the estimated heights exceeded 60% of the real values. In the WorldView-3 derived DSM, the estimated height of small vehicles, buildings, and trees (lengths <16 pixels, crown diameters <8 pixels) was less than 50% of their actual height, whereas larger objects (lengths >33 pixels, crown diameters >16 pixels) were reconstructed at more than 90% of their height.
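To put the reported pixel thresholds in concrete terms, the sketch below (our illustration, not code from the study) converts an object's metric length into image pixels via the sensor's ground sample distance (GSD) and returns the detectability band reported above for that sensor; how sizes between the reported limits behave is an assumption on our part.

    # Illustrative only: map object size to the detectability bands reported above.
    # The "transition zone" label for in-between sizes is our assumption.
    def expected_height_fraction(length_m: float, sensor: str) -> str:
        gsd = {"Pleiades": 0.70, "WorldView-3": 0.31}[sensor]  # GSD in metres
        length_px = length_m / gsd  # object extent expressed in image pixels
        if sensor == "Pleiades":
            if length_px < 7:
                return "<20% of true height, or not reconstructed at all"
            if length_px > 14:
                return ">60% of true height"
        else:  # WorldView-3
            if length_px < 16:
                return "<50% of true height"
            if length_px > 33:
                return ">90% of true height"
        return "transition zone (partial reconstruction)"

    # A 12 m building spans ~17 Pleiades pixels but ~39 WorldView-3 pixels:
    print(expected_height_fraction(12.0, "Pleiades"))     # >60% of true height
    print(expected_height_fraction(12.0, "WorldView-3"))  # >90% of true height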
Abstract. Image matching of aerial or satellite images and Airborne Laser Scanning (ALS) are the two main techniques for acquiring geospatial information (3D point clouds) used for mapping and 3D modelling of large surface areas. While ALS point cloud classification is a widely investigated topic, there are fewer studies related to image-derived point clouds, and fewer still for point clouds derived from stereo satellite imagery. Therefore, the main focus of this contribution is a comparative analysis and evaluation of a supervised machine learning classification method that exploits the full 3D content of point clouds generated by dense image matching of tri-stereo Very High Resolution (VHR) satellite imagery. The images were collected with two different sensors (Pléiades and WorldView-3) at different timestamps for a study area covering 24 km², located in Waldviertel, Lower Austria. In particular, we evaluate the performance and precision of the classifier by analysing the variation of the results obtained in multiple scenarios using different training and test data sets. The temporal difference of the two Pléiades acquisitions (7 days) allowed us to assess the repeatability of the adopted machine learning classification. Additionally, we investigate how the different acquisition geometries (ground sample distance, viewing and convergence angles) influence the performance of classifying the satellite image-derived point clouds into five object classes: ground, trees, roads, buildings, and vehicles. Our experimental results indicate that, overall, the classifier performs very similarly in all scenarios, with F1-scores between 0.63 and 0.65 and overall accuracies above 93%. As a measure of repeatability, stable classes such as buildings and roads show a variation below 3% in F1-score between the two Pléiades acquisitions, demonstrating the stability of the model.
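The abstract does not name the classifier or the per-point features, so the following is a minimal sketch of the general workflow under assumed choices: a random forest over per-point feature vectors, evaluated with the metrics quoted above. One plausible reading of the reported numbers is that the averaged F1-score is pulled down by rare classes such as vehicles, while overall accuracy is dominated by the abundant ground class.

    # Hedged sketch of the general workflow; the random forest and the feature
    # design are assumptions, not the paper's documented setup.
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score, f1_score

    CLASSES = ["ground", "trees", "roads", "buildings", "vehicles"]

    def train_and_evaluate(X_train, y_train, X_test, y_test):
        """X_* are (n_points, n_features) arrays; y_* integer labels into CLASSES."""
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        return {
            # Macro F1 weights rare classes (e.g. vehicles) equally with ground,
            # which is why it can sit near 0.63-0.65 while accuracy exceeds 93%.
            "f1_macro": f1_score(y_test, y_pred, average="macro"),
            "overall_accuracy": accuracy_score(y_test, y_pred),
        }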
Abstract. Graffiti is a short-lived form of heritage, balancing between tangible and intangible, offensive and pleasant. Graffiti makes people laugh, wonder, get angry, and think. These conflicting traits are all present along Vienna's Donaukanal (Eng. Danube Canal), a recreational hotspot located in the city's heart and famous for its endless display of graffiti. The graffiti-focused heritage science project INDIGO aims to build the basis to systematically document, monitor, and analyse circa 13 km of Donaukanal graffiti over the next decade. The first part of this paper details INDIGO's goals and overarching methodological framework, simultaneously placing the project in the broader landscape of graffiti research. The second part concentrates on INDIGO's graffiti documentation activities. Given the project's aim to create a spatially, spectrally, and temporally accurate record of all possible mark-makings attached in (il)legal ways to the public urban surfaces of the Donaukanal, it seems appropriate to provide insights into the photographic and image-based modelling activities that form the foundation of INDIGO's graffiti recording strategy. The text ends with some envisioned strategies to streamline image acquisition and process the anticipated hundreds of thousands of images.
Abstract. Admired and despised, created and destroyed, legal and illegal: contemporary graffiti are polarising, and not everybody agrees to label them cultural heritage. However, for the steadily increasing number of heritage professionals and academics who value these short-lived creations, their digital documentation can be considered part of our legacy to future generations. Digital photographs are an appropriate means to document the geometric and spectral properties of a graffito, and this also holds when documenting an entire graffiti-scape consisting of thousands of individual creations. However, proper photo-based digital documentation of such an entire scene comes with logistical and technical challenges, certainly if the documentation is to serve as the basis for further analysis of the heritage assets. One main technical challenge relates to the photographs themselves. Conventional photographs suffer from multiple image distortions and usually lack a uniform scale, which hinders the derivation of dimensions and proportions. In addition, a single graffito photograph often does not reflect the meaning and setting intended by the graffitist, as the creation is frequently shown as an isolated entity without its surrounding environment. In other words, single photographs lack the spatio-temporal context that is often of major importance in cultural heritage studies. Here, we present AUTOGRAF, an automated and freely available orthorectification tool that converts conventional graffiti photos into high-resolution, distortion-free, and georeferenced graffiti orthophotomaps: a metric yet visual product. AUTOGRAF was developed in the framework of INDIGO, a graffiti-centred research project. Not only do these georeferenced photos support proper analysis, but they also set the basis for placing the graffiti in their native, albeit virtual, 3D environment. An experiment showed that 95 out of 100 tested graffiti photo sets were successfully orthorectified, highlighting the proposed methodology's potential to improve and automate one part of contemporary graffiti's digital preservation.
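AUTOGRAF's internals are not detailed in this abstract, so the sketch below only illustrates the core geometric idea behind orthorectifying a photo of a roughly planar graffiti surface: a planar homography resamples the image to a uniform metric scale. The use of OpenCV, the function name, and the four assumed control points are our illustration, not AUTOGRAF's actual implementation, which builds on image-based modelling as described above.

    # Conceptual illustration only (not AUTOGRAF): orthorectify a photo of a
    # planar wall from four image points with known metric wall coordinates.
    import cv2
    import numpy as np

    def orthorectify_plane(photo, image_pts_px, wall_pts_m, gsd_m=0.001):
        """Return a metric, uniform-scale image with gsd_m metres per pixel."""
        dst_px = np.float32(wall_pts_m) / gsd_m        # metres -> output pixels
        H = cv2.getPerspectiveTransform(np.float32(image_pts_px), dst_px)
        width = int(dst_px[:, 0].max()) + 1
        height = int(dst_px[:, 1].max()) + 1
        return cv2.warpPerspective(photo, H, (width, height))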