2021
DOI: 10.1016/j.neunet.2021.08.032
Recurrent neural network from adder’s perspective: Carry-lookahead RNN

Cited by 32 publications (7 citation statements) | References 17 publications
“…In this way, the genetic algorithm can help to adjust the connection weights and bias terms between the layers in the BP neural network, which enables the neural network to achieve a better fit on the training dataset and a better generalisation ability [12]. This combined optimisation approach tends to be more effective than using BP neural networks alone when dealing with complex problems [13].…”
Section: Genetic Algorithm Improved BP Neural Network Model (mentioning)
Confidence: 99%
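As a rough illustration of the idea in the excerpt above (not the cited authors' implementation), the sketch below uses a toy genetic algorithm to tune the parameters of a single linear neuron in place of gradient descent; the population size, mutation scale, selection scheme, and the toy dataset are all illustrative assumptions:

```python
import random

# Toy data the neuron should fit: y = 2x + 1.
DATA = [(x, 2 * x + 1) for x in range(-5, 6)]

def mse(ind):
    """Mean squared error of a single linear neuron y = w*x + b."""
    w, b = ind
    return sum((w * x + b - y) ** 2 for x, y in DATA) / len(DATA)

def evolve(pop_size=30, generations=100, mut_sigma=0.3, seed=0):
    rng = random.Random(seed)
    # Each individual is a candidate (weight, bias) pair.
    pop = [[rng.uniform(-3, 3), rng.uniform(-3, 3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)                 # selection: rank by fitness (low error)
        parents = pop[: pop_size // 2]    # elitism: keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]          # crossover
            child = [g + rng.gauss(0, mut_sigma) for g in child]   # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=mse)
```

Selection, crossover, and mutation here play the role the excerpt describes: adjusting connection weights and bias terms without backpropagation's gradients, which is why such hybrids can escape poor local fits that plain BP training gets stuck in.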
“…Lin and Cao (2020) [9] introduce a touch interactive system designed around an intelligent vase for psychotherapy tailored to Alzheimer's disease patients. Published in Designs, their work explores innovative approaches to integrating technology and therapeutic practices for enhancing patient engagement and wellbeing in Alzheimer's care.…”
Section: Liu et al. (2024) (mentioning)
Confidence: 99%
“…The adder works similarly to RNNs [36]. Drawing on the features of the adder, the Carry-lookahead RNN (CL-RNN) is proposed in [37]; it alleviates the sequential dependency that limits parallel computing over time series, thus mitigating training complexity. However, the performance of CL-RNN on time series tasks is inferior to that of LSTM and GRU.…”
Section: Modified RNNs (mentioning)
Confidence: 99%
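For readers unfamiliar with the adder analogy in the excerpt above, the classic carry-lookahead trick can be sketched as follows. This is a textbook illustration of the digital-circuit technique the CL-RNN borrows its name from, not the network architecture itself:

```python
def carry_lookahead_add(a_bits, b_bits):
    """Add two equal-length bit vectors (LSB first) using carry-lookahead logic."""
    # Per-position signals: a position *generates* a carry (g) or
    # *propagates* an incoming one (p).
    g = [a & b for a, b in zip(a_bits, b_bits)]
    p = [a ^ b for a, b in zip(a_bits, b_bits)]
    n = len(g)
    carries = [0] * (n + 1)
    for i in range(n):
        # Closed form: c_{i+1} = OR over j<=i of (g_j AND p_{j+1} AND ... AND p_i).
        # Each carry depends only on g and p, never on the previous carry,
        # so all carries can be evaluated in parallel. This breaks the
        # ripple-carry recurrence c_{i+1} = g_i OR (p_i AND c_i), which is
        # the sequential, RNN-like dependency.
        c = 0
        for j in range(i + 1):
            term = g[j]
            for k in range(j + 1, i + 1):
                term &= p[k]
            c |= term
        carries[i + 1] = c
    sum_bits = [p[i] ^ carries[i] for i in range(n)]
    return sum_bits, carries[n]

# Example: 6 + 3 = 9, i.e. [0,1,1] + [1,1,0] -> bits [1,0,0] with carry-out 1.
```

The ripple-carry recurrence mirrors an RNN's hidden-state update, while the expanded closed form mirrors the parallelization the excerpt credits to CL-RNN.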