2019
DOI: 10.1177/1475921719879071
Event classification for natural gas pipeline safety monitoring based on long short-term memory network and Adam algorithm

Abstract: Hydrate plugging and pipeline leaks can impair the normal operation of a natural gas pipeline and may lead to serious accidents. Since natural gas pipeline safety monitoring based on active acoustic excitation can detect and locate not only these two abnormal events but also normal components such as valves and pipeline elbows, recognition and classification of these events are of great importance to provide maintenance guidance for pipeline operators and avoid false alarms. In this article, long short-term memo…
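The abstract describes classifying pipeline events with a long short-term memory network fed by acoustic responses. As a rough illustration of the recurrence involved (a minimal sketch, not the paper's implementation — the stacked weight layout, gate ordering, and dimensions here are assumptions), a single LSTM time step can be written as:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step for input x, hidden state h, cell state c.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    Assumed gate order in the stacked weights: input, forget, output, candidate."""
    H = h.shape[0]
    a = W @ x + U @ h + b
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    i = sigmoid(a[0:H])        # input gate: how much new information enters
    f = sigmoid(a[H:2*H])      # forget gate: how much old cell state is kept
    o = sigmoid(a[2*H:3*H])    # output gate: how much cell state is exposed
    g = np.tanh(a[3*H:4*H])    # candidate cell update
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

In a classifier, the final hidden state after the last acoustic sample would typically feed a dense softmax layer over the event classes.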

Cited by 9 publications (8 citation statements) · References 16 publications
“…Therefore, the training process of the neural network is accelerated and the difficulty of event classification is reduced. The specific process of Z-score standardization is as follows. 12 …”
Section: Experiments and Results
confidence: 99%
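The Z-score standardization the citing papers refer to is the usual transform to zero mean and unit variance; a minimal sketch (not the paper's code) is:

```python
def z_score(samples):
    # Z-score standardization: (x - mean) / std, so the
    # standardized data has zero mean and unit variance.
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((x - mean) ** 2 for x in samples) / n) ** 0.5
    return [(x - mean) / std for x in samples]
```

Bringing every input channel to the same scale is what speeds up training: no single feature dominates the gradient magnitudes.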
“…At the end of each iteration, the 720 samples are shuffled: 540 are randomly re-selected as the training dataset and the remaining 180 serve as the validation dataset for the next iteration. Since it accelerates the loss reduction and avoids falling into a local optimum, the adaptive moment estimation (Adam) algorithm 12 is applied to update the parameter weights w and biases b.…”
Section: Experiments and Results
confidence: 99%
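The per-iteration reshuffling described above can be sketched as follows (function name and signature are illustrative; the 540/180 split matches the quoted setup):

```python
import random

def reshuffle_split(samples, n_train=540, rng=None):
    # After each iteration, shuffle the full pool and re-draw
    # the training/validation partition for the next iteration.
    rng = rng or random.Random()
    pool = list(samples)
    rng.shuffle(pool)
    return pool[:n_train], pool[n_train:]
```

Re-drawing the partition each iteration means every sample eventually contributes to training, at the cost of the validation loss no longer being computed on a fixed held-out set.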
“…The parameter update rule is defined as: where σ is the learning rate. As an extension of the stochastic gradient descent method [32], the Adam algorithm converges quickly in the optimization process without falling into a local optimum [33].…”
Section: Methods
confidence: 99%
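The update rule itself is elided in the snippet above, but the standard Adam step is well known; a minimal scalar sketch, using σ for the learning rate as in the quote (the default hyperparameters below are the usual ones, not values from the paper):

```python
import math

def adam_step(w, grad, m, v, t, sigma=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update for a scalar parameter w at step t (t >= 1).
    m = beta1 * m + (1 - beta1) * grad        # 1st-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # 2nd-moment (variance) estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - sigma * m_hat / (math.sqrt(v_hat) + eps)
    return w, m, v
```

Because each step is scaled by the running second-moment estimate, the effective step size adapts per parameter, which is what gives Adam its fast, stable descent compared with plain stochastic gradient descent.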
“…A gated neural network, a kind of recurrent neural network (RNN), can extract the dependencies hidden in sequential data by controlling how previous information flows through the network. 22 Compared with the long short-term memory network, which captures long-term information, 23 the gated recurrent unit (GRU) has a simpler recurrent architecture but performs better on smaller amounts of training data. To improve the prediction accuracy, the deep GRU architecture is modified with an extra wavelet sequence layer, given that the bearing data are heavily non-stationary and the time-frequency features are hard to capture if we want to take advantage of the known physics.…”
Section: Introduction
confidence: 99%
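The "simpler recurrent architecture" of the GRU can be seen directly in its step equations: three weighted gate computations instead of the LSTM's four, and no separate cell state. A minimal sketch (parameter names are illustrative, not from the cited work):

```python
import numpy as np

def gru_step(x, h, Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh):
    """One GRU time step: update gate z, reset gate r, candidate state."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h + bz)             # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)             # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h) + bh)  # candidate hidden state
    return (1 - z) * h + z * h_cand               # blend old and new state
```

With fewer parameters per unit than an LSTM, a GRU of the same width has less to fit, which is one plausible reason for its better behavior on small training sets.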