Three-dimensional (3D) digital technology is essential to the maintenance and monitoring of cultural heritage sites. In the field of bridge engineering, 3D models generated from point clouds of existing bridges are drawing increasing attention. The widespread use of unmanned aerial vehicles (UAVs) now provides a practical means of generating 3D point clouds and models, drastically reducing the manual effort and cost involved. In this study, we present a semi-automated framework for generating structural surface models of heritage bridges. Specifically, we tackle this challenge with a novel top-down method for segmenting main bridge components, combined with rule-based classification, to produce labeled 3D models from UAV photogrammetric point clouds. The point clouds of the heritage bridge are generated from the captured UAV images through a structure-from-motion workflow. A segmentation method based on a supervoxel structure and global graph optimization is developed, which effectively separates bridge components according to their geometric features. A classification tree informed by bridge geometry is then used to recognize the different structural elements among the obtained segments. Finally, surface modeling is conducted to generate surface models of the recognized elements. Experiments on two bridges in China demonstrate the potential of the presented reconstruction method, combining UAV photogrammetry and point cloud processing, for the 3D digital documentation of heritage bridges. Using the given markers, the reconstruction error of the point clouds can be as small as 0.4%. Moreover, the precision and recall of the segmentation results on the test data both exceed 0.8, and a recognition accuracy better than 0.8 is achieved.
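As an illustration of the rule-based recognition step, the following is a minimal sketch of how pre-segmented point-cloud clusters could be labeled from simple geometric descriptors with a small decision tree. The features, thresholds, and class names are invented for illustration and are not the classifier or values used in the paper.

```python
# Hypothetical sketch: rule-based labeling of pre-segmented bridge point-cloud
# clusters by coarse geometric features (extents and elevation), in the spirit
# of the classification-tree step described above. All thresholds are invented
# for illustration only.
import numpy as np

def segment_features(points: np.ndarray) -> dict:
    """Compute coarse geometric descriptors for one segment (N x 3 array)."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    extent = maxs - mins                      # bounding-box size (dx, dy, dz)
    return {
        "length": float(max(extent[0], extent[1])),
        "width": float(min(extent[0], extent[1])),
        "height": float(extent[2]),
        "mean_z": float(points[:, 2].mean()), # average elevation of the segment
    }

def classify_segment(feat: dict, deck_z: float) -> str:
    """Toy decision tree assigning a structural label to a segment."""
    if feat["mean_z"] > deck_z and feat["height"] < 1.0:
        return "deck"            # thin, horizontal, near deck elevation
    if feat["height"] > 3.0 and feat["width"] < 2.0:
        return "pier"            # tall and slender
    if feat["length"] > 5.0 and feat["height"] < 2.0:
        return "arch_or_girder"
    return "other"

# Usage with random stand-in data for two segments
rng = np.random.default_rng(0)
segments = [rng.uniform([0, 0, 8], [20, 6, 8.5], (500, 3)),   # plate-like, high
            rng.uniform([0, 0, 0], [1.5, 1.5, 7.0], (500, 3))] # tall, slender
labels = [classify_segment(segment_features(s), deck_z=7.5) for s in segments]
print(labels)  # e.g. ['deck', 'pier']
```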
Vehicle detection and classification have become important tasks for traffic monitoring, transportation management, and pavement evaluation. Various sensors are currently available to detect and classify vehicles on the road. However, on the one hand, most sensors rely on direct-contact measurement, so their installation interrupts traffic; on the other hand, complex road scenes introduce considerable noise that must be handled during signal processing. In this paper, a data-driven methodology for the detection and classification of vehicles using strain data is proposed. The strain sensors are arranged under the bridge deck without interrupting traffic. A cascade pre-processing method is then applied for vehicle detection to eliminate in-situ noise. Next, a neural network model is trained to identify closely following vehicles and separate them by non-maximum suppression. Finally, a deep convolutional neural network is designed and trained to identify vehicle types based on their axle groups. The methodology was applied to a long-span bridge, where three strain sensors were installed beneath the deck for a week. The algorithms achieved high robustness and accuracy. The proposed methodology is an adaptive and promising approach to vehicle detection and classification under complex noise; it can supplement current transportation systems and provide reliable data for management and decision-making.
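To make the separation step concrete, the snippet below sketches one-dimensional non-maximum suppression over candidate vehicle events extracted from a strain time series, assuming each candidate is a (start, end, score) tuple. The example intervals and the 0.5 overlap threshold are assumptions for illustration, not values reported in the paper.

```python
# Minimal sketch of 1-D non-maximum suppression over candidate vehicle events
# detected in a strain time series. Candidates are (start, end, score) tuples;
# the data and the 0.5 IoU threshold are illustrative assumptions.
def iou_1d(a, b):
    """Intersection-over-union of two time intervals (start, end)."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def nms_1d(candidates, iou_thresh=0.5):
    """Keep the highest-scoring candidates, dropping heavily overlapping ones."""
    kept = []
    for cand in sorted(candidates, key=lambda c: c[2], reverse=True):
        if all(iou_1d(cand, k) < iou_thresh for k in kept):
            kept.append(cand)
    return kept

# Two overlapping detections of the same vehicle plus a separate one
events = [(10.0, 12.0, 0.9), (10.5, 12.4, 0.7), (15.0, 16.8, 0.8)]
print(nms_1d(events))  # [(10.0, 12.0, 0.9), (15.0, 16.8, 0.8)]
```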
The orthotropic steel-box girder (OSG) is widely used in the construction of large-scale bridges. Owing to cumulative damage caused by heavy vehicles and initial welding flaws, bridges with OSGs frequently suffer from fatigue cracks, which are commonly distributed around the U-ribs. Hence, the management of fatigue cracks is mandatory in practical engineering. Although several techniques have been adopted for crack detection, the workflow is often labor-intensive, time-consuming, and of low temporal resolution. Considering the optical visibility of a crack and the shape constraint imposed by the over-welding hole around the U-rib, a machine vision-based monitoring methodology for fatigue cracks in U-rib-to-deck weld seams is proposed in this paper. Specifically, an Internet of Things (IoT)-based image acquisition device is first developed to obtain precise part-view images of a fatigue crack. Next, a novel image rectification and stitching method based on a coded calibration board is described for generating a measurable panoramic image of the fatigue crack. Furthermore, a deep learning-based integrated crack detection and segmentation algorithm is developed to detect and segment the crack areas. A feature extraction procedure based on image processing is then used to obtain the morphological features of a crack, including its area, length, and width. Finally, a field experiment was carried out on a real steel suspension bridge. Comparison of manual measurements with the vision-based monitoring results indicates that the proposed methodology is promising for monitoring fatigue cracks in U-rib-to-deck weld seams, with root-mean-square errors in length and width measurement of 3.0195 mm and 0.003 mm, respectively. This work is not only of practical value for the management and maintenance of OSG bridges in engineering, but is also relevant to research on fatigue crack propagation.
Index Terms: orthotropic steel deck, machine vision, IoT-based image acquisition system, fatigue crack measuring, image stitching, deep learning.
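As a rough illustration of the morphological feature-extraction step, the sketch below estimates a crack's area, length, and mean width from a binary segmentation mask using a skeleton-based approximation. The mask, the metric scale, and the width estimate are illustrative assumptions rather than the paper's exact procedure.

```python
# Hedged sketch: coarse crack morphology (area, length, width) from a binary
# segmentation mask. The scale factor would come from the calibration board;
# the values here are assumptions for illustration only.
import numpy as np
from skimage.morphology import skeletonize

def crack_morphology(mask: np.ndarray, mm_per_px: float) -> dict:
    """mask: boolean crack mask; mm_per_px: metric scale of the rectified image."""
    area_px = int(mask.sum())
    skeleton = skeletonize(mask)            # 1-px-wide centerline of the crack
    length_px = int(skeleton.sum())         # crude length: count of centerline pixels
    mean_width_px = area_px / max(length_px, 1)
    return {
        "area_mm2": area_px * mm_per_px ** 2,
        "length_mm": length_px * mm_per_px,
        "mean_width_mm": mean_width_px * mm_per_px,
    }

# Toy example: a thin horizontal "crack" 200 px long and 3 px wide
mask = np.zeros((50, 300), dtype=bool)
mask[24:27, 50:250] = True
print(crack_morphology(mask, mm_per_px=0.05))
```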