2021
DOI: 10.1007/s00521-021-06279-x

Damage detection using in-domain and cross-domain transfer learning

Abstract: We investigate the capabilities of transfer learning in the area of structural health monitoring. In particular, we are interested in damage detection for concrete structures. Typical image datasets for such problems are relatively small, calling for the transfer of learned representation from a related large-scale dataset. Past efforts of damage detection using images have mainly considered cross-domain transfer learning approaches using pre-trained ImageNet models that are subsequently fine-tuned for the tar…
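The cross-domain setup the abstract describes — a backbone pretrained on a large source dataset, frozen, with only a new classification head trained on the small target dataset — can be sketched with a toy example. This is not the paper's actual pipeline: the frozen random projection `W_frozen`, the synthetic data, and the logistic-regression head are all illustrative stand-ins for an ImageNet backbone and real damage-image data.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: a stand-in for an ImageNet backbone that
# was trained on a large source dataset and is now frozen (toy weights).
W_frozen = rng.normal(size=(8, 4))

def features(x):
    # Frozen representation; only the head below is trained on target data.
    return np.tanh(x @ W_frozen.T)

# Small labelled target dataset (e.g. "damaged" vs. "undamaged" patches).
X = rng.normal(size=(64, 4))
y = (X.sum(axis=1) > 0).astype(float)
F = features(X)                      # backbone outputs, never updated

# New classification head: logistic regression trained from scratch.
w, b = np.zeros(8), 0.0

def nll(w, b):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

loss_start = nll(w, b)
for _ in range(200):                 # plain gradient descent, head only
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    w -= 0.5 * (F.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()
loss_end = nll(w, b)
print(round(loss_start, 3), "->", round(loss_end, 3))
```

Because the backbone stays frozen, only the 9 head parameters are fit, which is why this style of transfer works even when the target dataset is small.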


Cited by 33 publications (9 citation statements)
References 59 publications
“…Additionally, in Bukhsh et al. (2021), in‐domain and cross‐domain TL is analyzed for damage classification. The analysis shows that both types of TL enhance the performance of networks, especially on small data sets.…”
Section: Related Work
confidence: 99%
“…Second, the source data set and the target data set must have the same or similar distribution [32]. Therefore, it takes a lot of computing resources to simulate the sample library. In addition, the number of parameters that need to be trained for the stacked LSTM network is very large, and the training process consumes a lot of time and computing resources.…”
Section: Transfer Learning Based on the Pretraining Model
confidence: 99%
“…First, the source data set must be labeled sample data. Second, the source data set and the target data set must have the same or similar distribution. Therefore, it takes a lot of computing resources to simulate the sample library.…”
Section: Transfer Learning Based on the Pretraining Model
confidence: 99%
“…Xia et al [10] proposed a graph alignment mechanism for mapping source- and target-domain information for Unsupervised Domain Adaptation (UDA) with insufficient or unlabelled data. Further, Bukhsh et al [11] used a mix of in-domain and cross-domain learning on small image datasets to successfully detect structural damage in bridges. Similarly, Lin et al [12] used cross-domain learning to design a damage-sensitive and domain-invariant feature extractor for structural damage detection.…”
Section: Introduction
confidence: 99%