2023
DOI: 10.1109/tim.2022.3220302
A Novel Deep Offline-to-Online Transfer Learning Framework for Pipeline Leakage Detection With Small Samples

Abstract: In this paper, a two-stage deep offline-to-online transfer learning framework (DOTLF) is proposed for long-distance pipeline leakage detection (PLD). At the offline training stage, a feature transfer-based long short-term memory network with regularization information (TL-LSTM-Ri) is developed, where a maximum mean discrepancy regularization term is employed to extract domain-invariant features and an adjacent-bias-corrected regularization term is introduced to extract early fault features from pipeline samples u…
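The maximum mean discrepancy (MMD) regularizer mentioned in the abstract penalizes the distance between source-domain and target-domain feature distributions in a kernel space. The sketch below is a generic biased RBF-kernel MMD estimator for illustration only; it is not the authors' TL-LSTM-Ri implementation, and the function names and bandwidth `sigma` are assumptions:

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and y.
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * sigma**2))

def mmd2(source, target, sigma=1.0):
    # Biased estimate of the squared maximum mean discrepancy between
    # two feature batches; zero when the batches coincide, larger when
    # the two distributions differ.
    k_ss = rbf_kernel(source, source, sigma)
    k_tt = rbf_kernel(target, target, sigma)
    k_st = rbf_kernel(source, target, sigma)
    return k_ss.mean() + k_tt.mean() - 2 * k_st.mean()

rng = np.random.default_rng(0)
same_dist = mmd2(rng.normal(0, 1, (64, 8)), rng.normal(0, 1, (64, 8)))
shifted = mmd2(rng.normal(0, 1, (64, 8)), rng.normal(3, 1, (64, 8)))
print(same_dist < shifted)  # a distribution shift yields a larger MMD
```

In a transfer-learning setup such as the one the abstract describes, a term like `mmd2(features_source, features_target)` would typically be added to the training loss so the network learns features whose distribution matches across domains.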

Cited by 11 publications (2 citation statements)
References 63 publications
“…Mei et al. [10] utilized compressed sensing to represent the data coefficients and fed the processed data into a deep residual neural network (ResNet) as a two-dimensional matrix, which greatly reduced training time. Wang et al. [11] introduced transfer learning into long-distance pipeline leakage detection and achieved online detection on a real monitored pipeline. Although these models achieved good results, their computational cost grew substantially as the number of network layers increased.…”
Section: Introduction (mentioning; confidence: 99%)
“…Empirical evidence suggests that such a well-designed architecture can partially alleviate the issue of overfitting. Moreover, a substantial dataset is typically required for effective training and accurate generalization [13]–[16].…”
Section: Introduction (mentioning; confidence: 99%)