2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01490
Time Adaptive Recurrent Neural Network

Cited by 16 publications (25 citation statements). References 15 publications.
“…This superior performance is further illustrated in Table 7, where the test perplexity for different models on the PTB word-level task is presented. We observe that not only does LEM significantly outperform (by around 40%) LSTM, but it also provides again the best performance among all single layer recurrent models, including the recently proposed TARNN [20]. Moreover, the single-layer results for LEM are better than reported results for multi-layer LSTM models, such as in Gal and Ghahramani [14] (2-layer LSTM, 1500 units each: 75.…”
Section: Model (mentioning)
confidence: 66%
“…Moreover, the single-layer results for LEM are better than reported results for multi-layer LSTM models, such as in Gal and Ghahramani [14] (2-layer LSTM, 1500 units each: 75.

Model          Test perplexity   Hidden size   #Params
[20]           115.9             256           131k
LSTM [20]      116.9             256           524k
SkipLSTM [20]  114.2             256           524k
TARNN [20]     94.6              256           524k
LEM            72.8              256           524k…”
Section: Model (mentioning)
confidence: 99%
“…Efficient Training for SNNs. Several RNN training methods pursue online learning and constant memory occupation agnostic to the time horizon, such as real time recurrent learning [60] and forward propagation through time [31]. Inspired by them, some SNN training methods [2, 3, 70-72] apply similar ideas to achieve memory-efficient and online learning.…”
Section: Related Work (mentioning)
confidence: 99%
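For intuition, the following is a minimal sketch of the general idea referenced in the quote above, an online training loop that updates parameters at every time step with memory cost that stays constant in the sequence length, in contrast to BPTT, which stores the whole unrolled graph. It is an illustration only, not the algorithms of [60] or [31] or the cited SNN methods; the layer sizes and random data are placeholder assumptions.

import torch
import torch.nn as nn

cell = nn.RNNCell(input_size=8, hidden_size=32)
readout = nn.Linear(32, 1)
opt = torch.optim.SGD(list(cell.parameters()) + list(readout.parameters()), lr=1e-2)

def online_step(x_t, y_t, h):
    # One update per time step: memory use is constant in the horizon,
    # unlike BPTT, which stores activations for the whole unrolled sequence.
    h = cell(x_t, h)
    loss = nn.functional.mse_loss(readout(h), y_t)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return h.detach(), loss.item()   # detach so the graph never grows across steps

h = torch.zeros(1, 32)
for t in range(100):                 # placeholder stream of (x_t, y_t) pairs
    x_t, y_t = torch.randn(1, 8), torch.randn(1, 1)
    h, loss = online_step(x_t, y_t, h)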
“…This way, they retain a major portion of the dynamics and forecast the future behavior of the system. Both the incremental Recurrent Neural Network (IRNN) (Kag et al., 2019) and the time adaptive RNN (Kag & Saligrama, 2021) use additional recurrent iterations on each input to enable the model to cope with different input time scales, where the latter provides a time-varying function that adapts the model's behavior to the time scale of the provided input.…”
Section: Related Work (mentioning)
confidence: 99%
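To make the quoted description concrete, here is a hedged sketch of a cell that runs a few extra inner recurrent iterations per input and uses a learned, input-dependent gate as a time-varying step size. This is an illustration of the idea, not the actual TARNN of Kag & Saligrama (2021); the class name TimeAdaptiveCell, the inner_steps count, and all sizes are assumptions made for the example.

import torch
import torch.nn as nn

class TimeAdaptiveCell(nn.Module):
    # Illustrative cell: several inner recurrent iterations per input, with a
    # learned gate alpha acting as a time-varying step size that blends the
    # previous state with a candidate update.
    def __init__(self, input_size, hidden_size, inner_steps=3):
        super().__init__()
        self.inner_steps = inner_steps
        self.candidate = nn.Linear(input_size + hidden_size, hidden_size)
        self.step_gate = nn.Linear(input_size + hidden_size, hidden_size)

    def forward(self, x_t, h):
        for _ in range(self.inner_steps):              # extra iterations on each input
            z = torch.cat([x_t, h], dim=-1)
            h_cand = torch.tanh(self.candidate(z))     # candidate next state
            alpha = torch.sigmoid(self.step_gate(z))   # adaptive step size in (0, 1)
            h = (1 - alpha) * h + alpha * h_cand       # blended, Euler-style update
        return h

cell = TimeAdaptiveCell(input_size=8, hidden_size=32)
h = torch.zeros(1, 32)
for x_t in torch.randn(20, 1, 8):                      # placeholder input sequence
    h = cell(x_t, h)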