2022
DOI: 10.1186/s43065-022-00051-8
Acoustic emission-based damage localization using wavelet-assisted deep learning

Abstract: Acoustic Emission (AE) has emerged as a popular damage detection and localization tool due to its high performance in identifying minor damage or cracks. Because of their high sampling rates, AE sensors generate massive volumes of data during long-term monitoring of large-scale civil structures. Analyzing such big data and the associated AE parameters (e.g., rise time, amplitude, counts) is time-consuming with traditional feature extraction methods. This paper proposes a 2D convolutional neural network (2D CNN)-based Ar…
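The abstract pairs wavelet-transformed AE signals with a 2D CNN. As a rough illustration of that pipeline (not the paper's actual architecture), the sketch below converts a synthetic AE burst into a CWT scalogram with PyWavelets and feeds it to a toy PyTorch CNN; the sampling rate, wavelet choice ('morl'), scale range, network layers, and number of damage zones (n_zones) are all assumptions for illustration.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

# Hypothetical AE burst sampled at an assumed 1 MHz
fs = 1_000_000
t = np.arange(2048) / fs
wave = np.exp(-6000 * t) * np.sin(2 * np.pi * 120e3 * t)

# CWT turns the 1D waveform into a 2D time-frequency image (scalogram)
scales = np.arange(1, 65)
coeffs, _ = pywt.cwt(wave, scales, 'morl', sampling_period=1 / fs)
x = torch.tensor(np.abs(coeffs), dtype=torch.float32)[None, None]  # (1, 1, 64, 2048)

# Toy 2D CNN head; n_zones = number of candidate damage locations (assumed)
n_zones = 4
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, n_zones),
)
logits = model(x)  # scores over candidate damage zones
```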

Cited by 18 publications (7 citation statements) · References 38 publications
“…The input was a 2D image obtained by the wavelet transformation. Conversely, [86] also employed a 2D CNN for damage localization; however, the authors first exploited Empirical Mode Decomposition (EMD) to extract intrinsic mode functions (IMFs) from noisy raw measurements. These IMFs were then used to generate Continuous Wavelet Transform (CWT) representations.…”
Section: Damage Localization (mentioning)
confidence: 99%
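A minimal sketch of the two-step preprocessing described above, assuming the PyEMD (EMD-signal) and PyWavelets packages. The synthetic burst, sampling rate, wavelet, and scale range are illustrative assumptions, and which IMF to retain is a modeling choice the quoted statement does not specify.

```python
import numpy as np
import pywt
from PyEMD import EMD  # pip install EMD-signal

# Hypothetical noisy AE measurement: decaying burst plus white noise
fs = 1_000_000                      # assumed 1 MHz sampling rate
t = np.arange(2048) / fs
raw = (np.exp(-6000 * t) * np.sin(2 * np.pi * 120e3 * t)
       + 0.1 * np.random.randn(t.size))

# Step 1: EMD decomposes the noisy raw signal into intrinsic mode functions
imfs = EMD().emd(raw)               # shape: (n_imfs, n_samples)

# Step 2: CWT of a chosen IMF yields the 2D time-frequency image
scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(imfs[0], scales, 'morl', sampling_period=1 / fs)
scalogram = np.abs(coeffs)          # 2D array, usable as a 2D CNN input
```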
“…Given an RGB image as depicted in Figure 2a, the number of channels is reduced by converting the RGB image to a greyscale image. As RGB images consist of a 3D matrix of size (M, N, c), where c indexes the channels associated with the red, green, and blue components of the image, the image can be converted to grayscale [44] through Equation (6):

$$P_{mn} = 0.2989\,P^{R}_{mn} + 0.5870\,P^{G}_{mn} + 0.1140\,P^{B}_{mn} \tag{6}$$

where m and n are the integer indices of the pixel along the length and width of the image, $P^{R}_{mn}$, $P^{G}_{mn}$, and $P^{B}_{mn}$ are the pixel intensities of the red, green, and blue channels at pixel location (m, n), and $P_{mn}$ is the pixel intensity of the grayscale image at that location. Once the RGB image has been converted to grayscale, the centroid of the image can be determined, as shown in Figure 2b, based on the pixel intensity of the image as represented by the color bar.…”
Section: Cluster and Density Analysis of Datasets (mentioning)
confidence: 99%
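A short NumPy sketch of the conversion in Equation (6) and the subsequent centroid step. The grayscale weights come directly from the quoted equation; the intensity-weighted centroid formula is an assumption, since the statement only says the centroid is determined "based on the pixel intensity".

```python
import numpy as np

def rgb_to_grayscale(rgb):
    """Apply Equation (6): luminance-weighted sum of the R, G, B channels.

    rgb: float array of shape (M, N, 3) with channels ordered R, G, B.
    """
    return 0.2989 * rgb[..., 0] + 0.5870 * rgb[..., 1] + 0.1140 * rgb[..., 2]

def intensity_centroid(gray):
    """Intensity-weighted centroid (m, n) of a grayscale image.

    Assumption: each pixel's coordinate is weighted by its intensity;
    the quoted statement does not spell out the exact weighting.
    """
    m_idx, n_idx = np.indices(gray.shape)
    total = gray.sum()
    return (m_idx * gray).sum() / total, (n_idx * gray).sum() / total

# Usage on a random stand-in image
img = np.random.rand(64, 64, 3)
m_bar, n_bar = intensity_centroid(rgb_to_grayscale(img))
```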
“…Sikdar et al [25] proposed a convolutional neural network (CNN)-based algorithm for identification of the region of damage in a composite panel. Barbosh et al [26] utilised a combination of the CWT and a deep neural network to detect the location of damage in wooden beams, wooden plates and concrete beams. The application of ML in the monitoring of rails is rather sparse.…”
Section: Introduction (mentioning)
confidence: 99%