Large-scale spatial databases contain information on the various objects in the public domain and are of great importance for many stakeholders. These data are used not only to inventory the assets of the public domain but also for project planning, construction design, and building prediction models for disaster management or transportation. Mobile mapping systems are increasingly replacing traditional surveying techniques for the acquisition of these datasets. However, while some objects can be extracted (semi-)automatically, manhole covers are still mapped primarily by hand. In this work, we present a fully automatic manhole cover detection method that extracts and accurately determines the position of manhole covers from mobile mapping point cloud data. Our method rasterizes the point cloud data into ground images with three channels: intensity value, minimum height and height variance. These images are processed by a transfer-learned fully convolutional neural network to generate a spatial classification map, which is then fed to a simplified class activation mapping (CAM) location algorithm to predict the center position of each manhole cover. The work assesses the influence of different backbone architectures (AlexNet, VGG-16, Inception-v3 and ResNet-101) and of the geometric information channels in the ground image, since commonly only the intensity channel is used. Our experiments show that the most consistent architecture is VGG-16, achieving a recall, precision and F2-score of 0.973, 0.973 and 0.973, respectively, in terms of detection performance. In terms of location performance, our approach achieves a horizontal 95% confidence interval of 16.5 cm using the VGG-16 architecture.
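The rasterization step described in the abstract can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code: the function name, cell size, and the choice of mean intensity as the per-cell intensity statistic are assumptions; only the three channel definitions (intensity, minimum height, height variance) come from the text.

```python
import numpy as np

def rasterize_ground_image(points, intensity, cell_size=0.05):
    """Bin 3D points (N, 3) into a 2D grid and fill three channels per cell:
    mean intensity, minimum height, and height variance."""
    xy = points[:, :2]
    z = points[:, 2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / cell_size).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    flat = idx[:, 1] * w + idx[:, 0]          # linear cell index per point

    counts = np.bincount(flat, minlength=h * w)
    occupied = counts > 0
    safe = np.maximum(counts, 1)              # avoid division by zero in empty cells

    sum_i = np.bincount(flat, weights=intensity, minlength=h * w)
    sum_z = np.bincount(flat, weights=z, minlength=h * w)
    sum_z2 = np.bincount(flat, weights=z * z, minlength=h * w)
    min_z = np.full(h * w, np.inf)
    np.minimum.at(min_z, flat, z)             # unbuffered per-cell minimum

    mean_i = np.where(occupied, sum_i / safe, 0.0)
    mean_z = sum_z / safe
    var_z = np.where(occupied, sum_z2 / safe - mean_z**2, 0.0)
    min_z = np.where(occupied, min_z, 0.0)

    img = np.zeros((h, w, 3), dtype=np.float32)
    img[..., 0] = mean_i.reshape(h, w)
    img[..., 1] = min_z.reshape(h, w)
    img[..., 2] = np.maximum(var_z, 0.0).reshape(h, w)  # clamp numerical noise
    return img
```

The resulting image can then be fed to any image-classification backbone; the per-cell statistics are computed with `np.bincount` so the whole grid is filled without Python-level loops.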
Abstract. Mobile mapping systems are increasingly being used to acquire 3D information of the environment. Although these systems capture data far more efficiently than traditional methods, the high cost of accurate high-end mobile mapping systems is a major drawback. In contrast, the much cheaper low-end systems are more frequently used for less accurate projects where visualization is more important. In general, the achievable accuracy level is the driving factor that differentiates low-end from high-end systems. To determine this value, the sensor quality, calibration and GNSS reception quality should be reliably evaluated. In this paper, we present a theoretical accuracy model of a mobile mapping system that takes variable GNSS accuracy into account. The predicted accuracy level of low-end and high-end mobile mapping systems is evaluated in a comprehensive accuracy analysis. The absolute accuracy of each system is determined on three datasets in which GNSS reception quality varies between optimal, good and poor. Additionally, the relative accuracy of both systems is checked by comparing control distances. The presented approach allows for a more general and robust accuracy prediction of mobile mapping systems under different circumstances.
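The kind of accuracy model described above combines several independent error sources into one predicted point uncertainty. A minimal first-order sketch, assuming uncorrelated GNSS, orientation and ranging errors combined in quadrature (the function and parameter names are hypothetical, not the paper's model):

```python
import math

def predicted_point_std(sigma_gnss, sigma_heading_rad, sigma_range, range_m):
    """First-order error budget for a mobile-mapping point:
    GNSS position error, heading error projected over the lidar range,
    and ranging error, assumed uncorrelated and combined in quadrature."""
    lever = sigma_heading_rad * range_m  # angular error expressed in metres at the target
    return math.sqrt(sigma_gnss**2 + lever**2 + sigma_range**2)
```

Varying `sigma_gnss` in such a model is what allows the prediction to track optimal, good and poor GNSS reception: at long ranges the heading term dominates, while under poor reception the GNSS term does.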
<p><strong>Abstract.</strong> Progress monitoring of construction sites is becoming increasingly popular in the construction industry. Especially with the integration of 4D BIM, the progression and quality of the construction process can be better quantified. A key aspect is the detection of the changes between consecutive epochs of measurements on the site. However, the development of automated procedures is challenging due to noise, occlusions and the associativity between different objects. Additionally, objects are built in stages, so varying states have to be detected according to the Percentage of Completion.</p><p>In this work, a framework is presented to derive the work progress of construction sites based on point cloud data. More specifically, a methodology is developed to compute the Percentage of Completion of in-situ cast concrete walls. In the literature study, existing methods are evaluated for their ability to track progress even in highly cluttered environments. In the practical study, we perform an empirical analysis on a set of periodic point clouds to establish the obstacles and feasibility of the methodology. This work leads to a better understanding of the progress-monitoring paradigm, which is still the subject of ongoing research, and will serve as the basis for the further development of a set of automated procedures.</p>
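One simple way to operationalize a Percentage of Completion for a planar element is to rasterize the as-built points lying on the planned wall face and count covered cells. The sketch below is an assumption for illustration only, not the paper's methodology; the grid resolution and the coverage criterion (at least one point per cell) are hypothetical choices.

```python
import numpy as np

def percentage_of_completion(wall_points_2d, wall_width, wall_height, cell_size=0.1):
    """Estimate PoC of a planar wall: rasterize as-built points (already
    projected into the planned wall face's 2D coordinates, in metres) into
    a grid and report the fraction of planned cells that received a point."""
    nx = int(np.ceil(wall_width / cell_size))
    ny = int(np.ceil(wall_height / cell_size))
    grid = np.zeros((ny, nx), dtype=bool)
    idx = np.floor(np.asarray(wall_points_2d) / cell_size).astype(int)
    # discard points falling outside the planned face
    keep = (idx[:, 0] >= 0) & (idx[:, 0] < nx) & (idx[:, 1] >= 0) & (idx[:, 1] < ny)
    idx = idx[keep]
    grid[idx[:, 1], idx[:, 0]] = True
    return grid.mean()  # covered cells / planned cells, in [0, 1]
```

Such a coverage ratio is sensitive to occlusions (scaffolding, formwork), which is precisely the clutter problem the abstract identifies; a practical system would need to distinguish "not built" from "not observed".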
The reconstruction of Building Information Modeling objects for as-built modeling is currently the subject of ongoing research. A popular method is to extract structure information from point cloud data to create a set of parametric objects. This requires the interpretation of the point cloud data, which is currently a manual and labor-intensive procedure. Automated processes have to cope with excessive occlusions and clutter in the datasets. To create an as-built BIM, it is vital to reconstruct the building's structure, i.e. the wall geometry, prior to the reconstruction of other objects. In this work, a novel method is presented to automatically reconstruct as-built BIM for generic buildings: an unsupervised method that procedurally models the geometry of the walls based on point cloud data. A bottom-up process is defined in which consecutively higher-level information is extracted from the point cloud data using pre-trained machine learning models. Prior to the reconstruction, the data is segmented, classified and clustered to retrieve all available observations of the walls. The resulting geometry is processed by the reconstruction algorithm. First, the necessary information is extracted from the observations for the creation of parametric solid objects. Subsequently, the final walls are created by updating their topology. The method is tested on a variety of scenes and shows promising results in reliably and accurately creating as-built models. The accuracy of the generated geometry is similar to the precision of expert modelers. A key advantage is that the algorithm creates native Revit and Rhino objects, which makes the geometry directly applicable to a wide range of applications. This contribution has been peer-reviewed. https://doi.org/10.5194/isprs-archives-XLII-2-W17-53-2019
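Turning a clustered set of wall observations into a parametric solid starts with estimating the wall's reference plane. A minimal sketch of that sub-step, assuming a least-squares plane fit via SVD (the function name is hypothetical and this is not the paper's full reconstruction algorithm):

```python
import numpy as np

def fit_wall_plane(points):
    """Least-squares plane through a clustered set of wall points (N, 3):
    returns (centroid, unit normal). The normal is the right singular vector
    of the centred points associated with the smallest singular value."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)
```

From two such roughly parallel fitted planes (the two faces of a wall), thickness and axis follow directly, which is the kind of information a parametric wall object in Revit or Rhino requires.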
<p><strong>Abstract.</strong> The reassembling of fractured fragments is a paramount task in the fields of digital heritage documentation and reconstruction of archaeological artifacts and monuments. This process is typically carried out by manually puzzling matching clues such as decoration, shape, contour, etc. This labor poses a challenge for restorers as fractured fragments are fragile, deteriorated and in some cases bulky. In order to aid experts in this meticulous and time-consuming process, a puzzling engine is developed with the aim of providing the user with tools to facilitate the reassembling of 3D digital fractured fragments. The assisting tools that compose the puzzling engine include 3D manipulation, reference plane alignment, segmentation, and registration. Furthermore, a Virtual Reality (VR) environment is presented as an alternative matching tool. This allows the user to have an intuitive understanding of the fragments in terms of scale, texture, materials, etc., thus facilitating and speeding up the reassembling process. To show the potential of the proposed tool, the engine is tested by archaeologists not only to puzzle classical stone fragments but also to match deteriorated ancient Egyptian rock tomb blocks.</p>
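The registration tool mentioned among the assisting tools reduces, at its core, to rigidly aligning one fragment's fracture surface onto another's. A minimal sketch of that core, assuming known point correspondences and the standard Kabsch solution (this is an illustrative component, not the engine's actual registration pipeline):

```python
import numpy as np

def rigid_align(source, target):
    """Kabsch alignment: best-fit rotation R and translation t such that
    R @ p + t maps corresponding source points onto target points
    in the least-squares sense."""
    source, target = np.asarray(source, float), np.asarray(target, float)
    cs, ct = source.mean(axis=0), target.mean(axis=0)
    H = (source - cs).T @ (target - ct)        # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t
```

In practice correspondences between fracture surfaces are unknown, so this closed-form step would sit inside an iterative scheme such as ICP; in the VR environment it can equally serve to snap a manually pre-positioned fragment into place.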