2022
DOI: 10.1007/s00521-021-06711-2

Generalized transformer in fault diagnosis of Tennessee Eastman process

Cited by 15 publications (5 citation statements)
References 27 publications

“…Transformer is a neural network based on a pure attention mechanism that captures the global dependence between input and output, and it has shown good identification performance in chemical process fault diagnosis [38]. Following the Transformer network architecture and hyperparameter settings of [39], the enhanced dynamic data are taken as the input, and the size of each input subsequence is 20. Similarly, all experiments are repeated 10 times under the same conditions, and the average accuracy is taken as the experimental result.…”
Section: Experimental Verification (mentioning)
confidence: 99%
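The evaluation protocol quoted above (subsequences of length 20 fed to a Transformer, with 10 repeated runs averaged) can be sketched in a few lines. The following is a minimal PyTorch sketch, assuming 52 process variables and 21 fault classes for the TE process; the layer sizes and the hypothetical train_eval_once helper are illustrative, not the cited papers' exact setup.

```python
import torch
import torch.nn as nn

class TransformerFaultClassifier(nn.Module):
    """Classifies fault types from sliding-window subsequences of length 20."""
    def __init__(self, n_vars=52, d_model=64, n_heads=4, n_layers=2, n_classes=21):
        super().__init__()
        self.embed = nn.Linear(n_vars, d_model)  # project process variables to model width
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)  # positional encoding omitted for brevity
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                # x: (batch, 20, n_vars)
        h = self.encoder(self.embed(x))  # self-attention across the 20-step subsequence
        return self.head(h.mean(dim=1))  # average-pool over time, then classify

def average_accuracy(train_eval_once, n_runs=10):
    """Repeat the experiment under the same conditions; report mean accuracy."""
    return sum(train_eval_once(seed) for seed in range(n_runs)) / n_runs
```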
“…The kernel function of the SVM is RBF. Transformer is a neural network based on a pure attention mechanism that captures the global dependence between input and output, and it has shown good identification performance in chemical process fault diagnosis. Following the referenced Transformer network architecture and hyperparameters, the enhanced dynamic data are taken as the input, and the size of each input subsequence is 20.…”
Section: Experimental Verification (mentioning)
confidence: 99%
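For the SVM baseline with an RBF kernel mentioned above, a minimal scikit-learn sketch might look as follows; C and gamma are illustrative defaults rather than the cited paper's tuned values, and the windowed inputs are flattened because an SVM expects fixed-length feature vectors.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: (n_samples, 20 * n_vars) flattened subsequences; y: fault labels
svm_rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
# svm_rbf.fit(X_train, y_train)
# accuracy = svm_rbf.score(X_test, y_test)
```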
“…For instance, Wei et al. [23] proposed an improved Transformer for fault diagnosis in chemical processes, demonstrating its performance through evaluations on the Tennessee Eastman (TE) process. Zhang et al. [24] introduced a novel method called the generalized transformer, employing the attention mechanism to explore the relationships and non-linearities between inputs for fault diagnosis tasks, and their experiments validated its effectiveness. These advancements highlight the transformative impact of deep learning technologies in diagnosing and managing faults in complex systems, showcasing their potential for broader applications in various industrial processes.…”
Section: Introduction (mentioning)
confidence: 99%
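The attention mechanism referred to in [24] can be illustrated by standard scaled dot-product attention, in which every input position attends to every other; this is what lets relationships across all variables and time steps be modeled. The sketch below shows the textbook formulation, assumed here for illustration, not the generalized transformer's specific variant.

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch, seq_len, d_k)
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k ** 0.5  # pairwise similarities between positions
    weights = F.softmax(scores, dim=-1)            # attention distribution over all positions
    return weights @ v                             # weighted sum of values (global dependence)
```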
“…Bidirectional structures have been introduced into the LSTM, capturing both forward and backward dependencies [42]. This improvement effectively alleviates problems such as gradient vanishing and explosion caused by long time dependencies, and it has been widely used in many fields [43]. However, it still has some shortcomings in the depth and complexity of feature extraction.…”
Section: Introduction (mentioning)
confidence: 99%
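A bidirectional LSTM of the kind described in [42] runs a forward and a backward pass over the sequence and concatenates their outputs, so each time step sees both past and future context. The following is a minimal PyTorch sketch; the sizes are illustrative assumptions, not the cited work's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_vars=52, hidden=64, n_classes=21):
        super().__init__()
        # bidirectional=True runs forward and backward LSTMs and concatenates their outputs
        self.lstm = nn.LSTM(n_vars, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x: forward + backward hidden states

    def forward(self, x):                # x: (batch, seq_len, n_vars)
        out, _ = self.lstm(x)            # out: (batch, seq_len, 2 * hidden)
        return self.head(out[:, -1, :])  # classify from the final time step
```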