2020
DOI: 10.1016/j.isatra.2020.07.011
Bidirectional deep recurrent neural networks for process fault classification

Cited by 77 publications (22 citation statements)
References 38 publications
“…Table 1 represents the Tennessee Eastman Process fault cases. Since the TE process dataset contains collected time-series sensor data, the data is prepared as time-series sequences as discussed in [1] before the training.…”
Section: Results (mentioning, confidence: 99%)
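The time-series preparation mentioned above can be sketched as a sliding-window slicing step over the sensor array; the window length, stride, and helper name below are illustrative assumptions, not the cited work's exact settings.

```python
import numpy as np

def make_sequences(data, labels, window=50, stride=1):
    """Slice a (time, features) sensor array into overlapping
    fixed-length windows; each window inherits the label of its
    final time step. Window and stride values are illustrative."""
    X, y = [], []
    for start in range(0, len(data) - window + 1, stride):
        X.append(data[start:start + window])
        y.append(labels[start + window - 1])
    return np.stack(X), np.array(y)

# Toy example: 200 time steps, 52 sensor channels (the TE process
# records 52 measured and manipulated variables).
sensors = np.random.rand(200, 52)
faults = np.zeros(200, dtype=int)
X, y = make_sequences(sensors, faults, window=50, stride=10)
print(X.shape)  # (16, 50, 52)
```

Each window becomes one training sample of shape (window, channels), which is the input format recurrent layers expect.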
“…To define the anomaly detection setting, we follow previous works [1] by dividing the fault classes into subgroups based on how challenging the faults are to detect. Accordingly, we divide the 21 faults into three subgroups: easy, medium, and hard-to-detect faults.…”
Section: Results (mentioning, confidence: 99%)
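As a hypothetical illustration of such a split (the actual assignment follows the cited work and may differ), the 21 TE faults could be partitioned like this:

```python
# Hypothetical grouping of the 21 TE faults by detection difficulty;
# the actual subgroup assignment is defined in the cited work.
subgroups = {
    "easy":   [1, 2, 6, 7],                                   # e.g. large step faults
    "medium": [4, 5, 8, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20],
    "hard":   [3, 9, 15, 21],                                 # 3, 9, 15 are widely
}                                                             # reported as hard

# Sanity check: the three subgroups cover all 21 faults exactly once.
all_faults = sorted(f for group in subgroups.values() for f in group)
assert all_faults == list(range(1, 22))
```

Reporting accuracy per subgroup makes it easier to see whether a method's gains come from the easy faults or from the genuinely difficult ones.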
“…Faults 1, 2, 4, 5, 6, and 7 are all step-type faults, which [36]-[37] use to demonstrate the good performance of their fault diagnosis methods. Similarly, this paper employs SOM (self-organizing map; results from [37]), CCA-SOM (canonical correlation analysis with self-organizing map; results from [37]), and B-LSTM (bidirectional long short-term memory; results from [36]) to demonstrate the effectiveness of the SGCN-based fault diagnosis, and the diagnostic results for the 7 classes are presented in Table 4. Diagnosis accuracy exceeds 99.2% for most of these 7 classes, the exceptions being the normal class and fault 5.…”
Section: Case-3 (mentioning, confidence: 99%)
“…In our model, the bidirectional network starts with a spatial dropout layer, which performs dropout on entire feature maps rather than on individual elements. The output of this layer is fed to the bidirectional RNN (BiRNN) layer [32], based on the GRU, which connects two hidden layers of opposite directions (forward and backward) to the same output. The output of the BiRNN layer is then fed to the global average pooling layer and the global maximum pooling layer simultaneously, and the outputs of the two pooling layers are combined to form the input to the next stage, as shown in Figure 2.…”
Section: Bidirectional Recurrent Neural Network (mentioning, confidence: 99%)
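A minimal Keras sketch of the block described above, assuming a (window, channels) time-series input; the layer sizes, dropout rate, and class count are illustrative assumptions rather than the cited model's exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Spatial dropout -> bidirectional GRU -> parallel global average
# and global max pooling -> concatenation, as described in the text.
inputs = layers.Input(shape=(50, 52))                 # (window, channels)
x = layers.SpatialDropout1D(0.2)(inputs)              # drops whole feature maps
x = layers.Bidirectional(layers.GRU(64, return_sequences=True))(x)
avg = layers.GlobalAveragePooling1D()(x)              # pooled over time
mx = layers.GlobalMaxPooling1D()(x)
merged = layers.Concatenate()([avg, mx])              # combined pooled features
outputs = layers.Dense(22, activation="softmax")(merged)  # 21 faults + normal
model = tf.keras.Model(inputs, outputs)
print(model.output_shape)  # (None, 22)
```

Combining average and max pooling lets the classifier see both the typical activation level of each feature across the window and its strongest single response.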