As innovative technologies emerge, extensive research has been undertaken to develop new structural health monitoring procedures. Current methods, which rely on on-site visual inspections, have proven to be costly, time-consuming, labor-intensive, and highly subjective when assessing the safety and integrity of civil infrastructure. Mobile and stationary LiDAR (Light Detection and Ranging) devices have significant potential for damage detection, as their scans provide detailed geometric information about the structures being evaluated. This paper reviews recent developments in LiDAR-based structural health monitoring, in particular for detecting cracks, deformation, defects, or changes to structures over time. To this end, mobile laser scanning (MLS) and terrestrial laser scanning (TLS) studies specific to structural health monitoring are reviewed for a wide range of civil infrastructure systems, including bridges, roads and pavements, tunnels and arch structures, post-disaster reconnaissance, historical and heritage structures, roofs, and retaining walls. Finally, the existing limitations and future research directions of LiDAR technology for structural health monitoring are discussed in detail.
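To make the change-detection use case concrete, the following is a minimal sketch, assuming the Open3D library, of comparing two point-cloud scans of the same structure taken at different times; the file names, voxel size, and distance threshold are illustrative assumptions, not values from the review.

```python
# Minimal sketch: change detection between two TLS scans of the same structure
# using cloud-to-cloud nearest-neighbour distances (Open3D assumed installed).
import numpy as np
import open3d as o3d

def detect_change(baseline_path, followup_path, voxel=0.01, threshold=0.005):
    """Return follow-up points that lie more than `threshold` metres from the baseline scan."""
    baseline = o3d.io.read_point_cloud(baseline_path).voxel_down_sample(voxel)
    followup = o3d.io.read_point_cloud(followup_path).voxel_down_sample(voxel)

    # Distance from every follow-up point to its nearest baseline neighbour.
    distances = np.asarray(followup.compute_point_cloud_distance(baseline))

    # Points exceeding the threshold are flagged as potential deformation or damage.
    return followup.select_by_index(np.where(distances > threshold)[0])

# changed = detect_change("scan_2020.ply", "scan_2024.ply")  # hypothetical file names
```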
A large amount of the world's existing infrastructure is reaching the end of its service life, requiring intervention in the form of structural rehabilitation or replacement. A critical aspect of such asset management is the condition assessment of these structures to evaluate their existing health and dictate the scheduling and extent of required rehabilitation. It has been demonstrated that human-based manual inspections face logistical constraints and are expensive, time-intensive, and subjective, depending on the knowledge of the inspector. Recently, autonomous vision-based techniques have been proposed as an alternative, more accurate method for the inspection of deteriorating structures. Convolutional neural networks (CNNs) have demonstrated state-of-the-art accuracy in damage classification for concrete structures and are often implemented to process images taken from vision-based sensors such as cameras, smartphones, and drones. However, these architectures require a large database of annotated images to train the network to an acceptable level of accuracy, and such databases are not readily available for real-life structures. Moreover, CNNs are limited to the extent of their training; they are often trained only for binary damage classification of a single material model. This paper addresses these challenges of CNNs through the application of a generative adversarial network (GAN) for multiclass damage detection of concrete structures. The proposed GAN is trained using the SDNET2018 dataset to detect cracking, spalling, pitting, and construction joints in concrete surfaces. Moreover, transfer learning is implemented to transfer the learned features of the GAN to a CNN architecture to allow for accurate image classification. It is concluded that, for a 0%-30% reduction in the amount of labeled data used, the proposed GAN method has accuracy comparable to traditional CNNs.
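The transfer-learning step described in the abstract can be illustrated with a minimal PyTorch sketch: the convolutional layers of a trained GAN discriminator are reused as a feature extractor, and a new classification head is attached for the multiclass damage labels. The layer sizes, 64x64 grayscale input, and four-class output are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """DCGAN-style discriminator for 64x64 grayscale concrete-surface patches."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
        )
        self.real_fake = nn.Sequential(nn.Conv2d(256, 1, 8), nn.Sigmoid())

    def forward(self, x):
        return self.real_fake(self.features(x)).view(-1)

class DamageClassifier(nn.Module):
    """Classifier that inherits the discriminator's learned convolutional features."""
    def __init__(self, trained_disc, n_classes=4, freeze=True):
        super().__init__()
        self.features = trained_disc.features            # transferred weights
        if freeze:                                       # optionally fine-tune instead
            for p in self.features.parameters():
                p.requires_grad = False
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(256 * 8 * 8, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

# disc = Discriminator()         # trained adversarially on SDNET2018-style patches
# clf = DamageClassifier(disc)   # e.g. cracking / spalling / pitting / joint
# logits = clf(torch.randn(8, 1, 64, 64))
```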
Acoustic Emission (AE) has emerged as a popular damage detection and localization tool due to its high performance in identifying minor damage or cracks. Because of their high sampling rates, AE sensors produce massive amounts of data during long-term monitoring of large-scale civil structures. Analyzing such big data and the associated AE parameters (e.g., rise time, amplitude, and counts) becomes time-consuming using traditional feature extraction methods. This paper proposes a 2D convolutional neural network (2D CNN)-based Artificial Intelligence (AI) algorithm combined with time–frequency decomposition techniques to extract the damage information from the measured AE data without relying on standalone AE parameters. Empirical Mode Decomposition (EMD) is employed to extract the intrinsic mode functions (IMFs) from noisy raw AE measurements, where the IMFs serve as the key AE components of the data. Continuous Wavelet Transform (CWT) is then used to obtain the spectrograms of these AE components, which serve as the “artificial images” for an AI network. These spectrograms are fed into the 2D CNN algorithm to detect damage and identify its potential location. The proposed approach is validated using a suite of numerical and experimental studies.
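The EMD-to-CWT-to-CNN pipeline can be sketched end to end in a few lines, assuming the PyEMD package (pip: EMD-signal), PyWavelets, and PyTorch. The number of IMFs retained, the scale range, and the network depth are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn
from PyEMD import EMD

def ae_signal_to_spectrogram(signal, n_imfs=3, scales=np.arange(1, 65)):
    """Decompose a raw AE waveform into IMFs and stack their CWT scalograms."""
    imfs = EMD().emd(signal)[:n_imfs]                  # intrinsic mode functions
    channels = []
    for imf in imfs:
        coeffs, _ = pywt.cwt(imf, scales, "morl")      # (len(scales), len(signal))
        channels.append(np.abs(coeffs))
    return np.stack(channels)                          # "artificial image" (C, H, W)

class AEDamageCNN(nn.Module):
    """Small 2D CNN mapping stacked scalograms to a damage-location class."""
    def __init__(self, in_channels=3, n_locations=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_locations),
        )

    def forward(self, x):
        return self.net(x)

# x = ae_signal_to_spectrogram(np.random.randn(2048))   # one simulated AE burst
# logits = AEDamageCNN()(torch.tensor(x, dtype=torch.float32).unsqueeze(0))
```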
The deterioration of infrastructure health has become more predominant on a global scale during the 21st century. Aging infrastructure, as well as structures damaged by natural disasters, has prompted the research community to improve state-of-the-art methodologies for conducting Structural Health Monitoring (SHM). The necessity for efficient SHM arises from the hazards that damaged infrastructure imposes, often resulting in structural collapse, economic loss, and human fatalities. Furthermore, day-to-day operations in affected areas are limited until an inspection is performed to assess the level of damage experienced by the structure and the required rehabilitation is determined. However, human-based inspections are often labor-intensive, inefficient, subjective, and restricted to accessible site locations, all of which limit the ability to collect large amounts of data from inspection sites. Although Deep-Learning (DL) methods have been heavily explored in the past decade to rectify the limitations of traditional methods and automate structural inspection, data scarcity remains prevalent within the field of SHM. The absence of sufficiently large, balanced, and generalized databases for training DL-based models often results in inaccurate and biased damage predictions. Recently, Generative Adversarial Networks (GANs) have received attention from the SHM community as a data augmentation tool by which a training dataset can be expanded to improve damage classification. However, no existing studies within the SHM field investigate the performance of DL-based multiclass damage identification using synthetic data generated from GANs. Therefore, this paper investigates the performance of a convolutional neural network (CNN) architecture using synthetic images generated from a GAN for multiclass damage detection of concrete surfaces. Through this study, it was determined that the average classification performance of the proposed CNN on hybrid datasets decreased by 10.6% and 7.4% on the validation and testing datasets, respectively, when compared to the same model trained entirely on real samples. Moreover, each model's performance decreased on average by 1.6% when comparing a single model trained with real samples and the same model trained with both real and synthetic samples for a given training configuration. The correlation between classification accuracy and the amount and diversity of synthetic data used for data augmentation is quantified, and the effect of using limited data to train existing GAN architectures is investigated. It was observed that the diversity of the samples decreases, and their correlation increases, as the number of synthetic samples grows.
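The hybrid-dataset setup studied here (real images augmented with GAN-generated ones) can be outlined with a short PyTorch/torchvision sketch. The directory names, image size, and real/synthetic split are illustrative assumptions only.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, transforms

tf = transforms.Compose([transforms.Resize((64, 64)),
                         transforms.Grayscale(),
                         transforms.ToTensor()])

# Class sub-folders (e.g. cracking/, spalling/, pitting/, joint/) must match
# between the two roots so the labels stay consistent across datasets.
real = datasets.ImageFolder("data/real", transform=tf)            # hypothetical path
synthetic = datasets.ImageFolder("data/gan_generated", transform=tf)  # hypothetical path

hybrid = ConcatDataset([real, synthetic])               # augmented training set
loader = DataLoader(hybrid, batch_size=64, shuffle=True)

# Training the same CNN once on `real` alone and once on `hybrid` allows the
# change in validation/testing accuracy reported above to be quantified.
```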