2023
DOI: 10.14569/ijacsa.2023.0140114

Bidirectional Recurrent Neural Network based on Multi-Kernel Learning Support Vector Machine for Transformer Fault Diagnosis

Abstract: Traditional neural networks have several weaknesses, such as a failure to mine the temporal relationships in transformer data, poor generalization in classification, and low classification accuracy on heterogeneous data. To address these issues, this paper proposes a bidirectional recurrent neural network model based on a multi-kernel learning support vector machine. A bidirectional recurrent neural network performs feature extraction, fusing features from preceding and subsequent time steps and outputting salient features. The multi-ker…
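The pipeline described in the abstract (bidirectional recurrent feature extraction followed by a multi-kernel SVM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hidden size, the tanh RNN cells, and the linear-plus-RBF kernel combination (with weights and `gamma` chosen arbitrarily) are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_pass(seq, W, U):
    """Single-direction tanh RNN over a sequence; returns the final hidden state."""
    h = np.zeros(U.shape[0])
    for x_t in seq:
        h = np.tanh(W @ x_t + U @ h)
    return h

def birnn_features(seq, Wf, Uf, Wb, Ub):
    """Bidirectional feature extraction: concatenate the forward-pass and
    backward-pass (reversed-sequence) final states, fusing information from
    both before and after each time step."""
    return np.concatenate([rnn_pass(seq, Wf, Uf),
                           rnn_pass(seq[::-1], Wb, Ub)])

def multi_kernel(F, weights=(0.5, 0.5), gamma=1.0):
    """Multi-kernel learning in its simplest form: a convex combination of a
    linear kernel and an RBF kernel on the extracted features F. A trained
    MKL-SVM would learn the weights; here they are fixed for illustration."""
    lin = F @ F.T
    sq = np.sum(F ** 2, axis=1)
    rbf = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * lin))
    return weights[0] * lin + weights[1] * rbf

# Toy data: 5 sequences of length 8 with 3 input channels, hidden size 4.
d_in, d_h = 3, 4
Wf, Uf = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
Wb, Ub = rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h))
seqs = rng.normal(size=(5, 8, d_in))

F = np.stack([birnn_features(s, Wf, Uf, Wb, Ub) for s in seqs])
K = multi_kernel(F)
print(F.shape, K.shape)  # (5, 8) (5, 5)
```

The fused kernel matrix `K` would then be passed to a precomputed-kernel SVM (e.g. scikit-learn's `SVC(kernel="precomputed")`) for fault classification.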

Cited by 1 publication (1 citation statement)
References 9 publications
“…feature representation, struggle to capture long-span temporal structures within a single temporal window (Ji et al., 2012; Tran et al., 2015; Zhu et al., 2022), rendering them less suitable for spatiotemporal sequence prediction problems where time-series relationships are crucial. Conversely, convolutional LSTMs (ConvLSTMs), despite their capacity to process spatial characteristics as three-dimensional tensors, often fail to retain detailed information in the raw data, thereby compromising their predictive accuracy (Miao et al., 2022; Zhou et al., 2022; Zhang et al., 2022; Zhao et al., 2023). To address these challenges, we propose a novel spatiotemporal LSTM (ST-LSTM) that introduces a groundbreaking memory state transfer method, enabling interaction and updating of memory states across different network layers (Goodfellow et al., 2016).…”
Section: Introduction
Confidence: 99%