2019
DOI: 10.1088/1742-6596/1325/1/012089

OGRU: An Optimized Gated Recurrent Unit Neural Network

Abstract: Due to structural limitations, traditional neural network models are prone to problems such as gradient explosion and over-fitting, while the deep GRU neural network model has low update efficiency and poor information-processing capability among multiple hidden layers. Based on this, this paper proposes an optimized gated recurrent unit (OGRU) neural network. The OGRU neural network model proposed in this paper improves information-processing capability and learning efficiency by optimizing the unit structu…


Cited by 27 publications (26 citation statements)
References 3 publications
“…The work in [16] used preprocessed raw PPG signal windows with the first and second derivatives as the input of their modified ResNet-GRU-based network to predict SBP and DBP. However, this ResNet-GRU-based model is computationally expensive, as the learning efficiency of the gated recurrent unit (GRU) is low and it converges slowly [17]. Similarly, another research work [18] used a convolutional neural network (CNN) model to obtain SBP and DBP outputs, giving a similar preprocessed raw PPG signal window with the first and second derivatives as the input to the network.…”
Section: Introduction
confidence: 99%
“…In [25], the first and second derivatives of the PPG signal are used as the input of a modified ResNet-GRU-based network to estimate DBP and SBP. However, the ResNet-GRU-based model is computationally expensive, as the learning efficiency of the gated recurrent unit (GRU) is low and it converges slowly [27]. In order to overcome this shortcoming, Harfiya [11] replaced the ResNet-GRU-based network with an LSTM-based network.…”
Section: Introduction
confidence: 99%
“…In this study, GRU and XGB were used to establish the rainfall forecasting model. The GRU model, a variant of the LSTM neural network, is composed of the input, forget, and output layers [ 63 ]. The input gate regulates the amount of information that enters the memory cell, the forget gate directs the memory cell and remains in the present memory cell through recurring connection, and the output gate determines the amount of data used to calculate the output activation of the memory cell and information flow to the rest of the neural network [ 60 ].…”
Section: Prediction System and Models
confidence: 99%
“…Numerous RNNs, such as the long short-term memory (LSTM) network [62], have been developed. The LSTM network can address the gradient-vanishing problem by introducing a gate control unit, and LSTM has been widely used in the field of time-series data prediction [63, 64]. Cho et al [65] proposed the gated recurrent unit (GRU), an LSTM-based model which learns how to use its gates to protect its memory, optimizing the network structure and enabling long-term predictions [66].…”
Section: Introduction
confidence: 99%
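The gating mechanism these citation statements refer to can be sketched as a single forward step of a standard GRU cell (Cho et al.'s update and reset gates). This is a minimal NumPy illustration, not the OGRU variant from the paper; the weight names and dimensions are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One forward step of a standard GRU cell.

    x: input vector (d,); h_prev: previous hidden state (n,)
    params: dict with input weights W_* (n, d), recurrent weights U_* (n, n),
            and biases b_* (n,) for the z, r, and h transformations.
    """
    # Update gate: how much of the new candidate state to take in
    z = sigmoid(params["W_z"] @ x + params["U_z"] @ h_prev + params["b_z"])
    # Reset gate: how much of the previous state feeds the candidate
    r = sigmoid(params["W_r"] @ x + params["U_r"] @ h_prev + params["b_r"])
    # Candidate hidden state, computed on the reset-scaled previous state
    h_tilde = np.tanh(params["W_h"] @ x + params["U_h"] @ (r * h_prev) + params["b_h"])
    # Interpolate between old state and candidate
    return (1.0 - z) * h_prev + z * h_tilde

# Tiny example: 3-dim input, 2-dim hidden state, random weights
rng = np.random.default_rng(0)
d, n = 3, 2
params = {k: rng.standard_normal((n, d) if k.startswith("W") else (n, n))
          for k in ["W_z", "U_z", "W_r", "U_r", "W_h", "U_h"]}
params.update({b: np.zeros(n) for b in ["b_z", "b_r", "b_h"]})
h = gru_cell(rng.standard_normal(d), np.zeros(n), params)
print(h.shape)  # (2,)
```

Because every gate output passes through a sigmoid and the candidate through a tanh, each hidden-state component stays bounded, which is the mechanism LSTM/GRU models use to mitigate vanishing gradients as the quotes above describe.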