2021
DOI: 10.48550/arxiv.2106.12089
Preprint

Structured in Space, Randomized in Time: Leveraging Dropout in RNNs for Efficient Training

Anup Sarma,
Sonali Singh,
Huaipan Jiang
et al.

Abstract: Recurrent Neural Networks (RNNs), more specifically their Long Short-Term Memory (LSTM) variants, have been widely used as a deep learning tool for tackling sequence-based learning tasks in text and speech. Training of such LSTM applications is computationally intensive due to the recurrent nature of hidden state computation that repeats for each time step. While sparsity in Deep Neural Nets has been widely seen as an opportunity for reducing computation time in both training and inference phases, the usage of… [abstract truncated]
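The abstract above is truncated, but the title points at the general idea: dropout masks that are structured in space (whole blocks of hidden units dropped together) and randomized in time (a fresh mask sampled at every time step), so that blocks of hidden-state computation can be skipped during training. The NumPy sketch below only illustrates that general idea on a standard LSTM cell; it is not the authors' actual training scheme, and all dimensions and hyperparameters (hidden, block, drop_p, etc.) are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed (illustrative) dimensions; not taken from the paper.
hidden, inp, T, block = 64, 32, 10, 8   # hidden size, input size, time steps, block width
drop_p = 0.5                             # fraction of hidden-unit blocks dropped per step

# Standard LSTM gate weights packed as [i, f, g, o].
W = rng.standard_normal((4 * hidden, inp + hidden)) * 0.1
b = np.zeros(4 * hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def block_mask(n_units, block, p, rng):
    """Structured-in-space mask: drop whole contiguous blocks of hidden units."""
    n_blocks = n_units // block
    keep = (rng.random(n_blocks) >= p).astype(float)
    return np.repeat(keep, block)

h = np.zeros(hidden)
c = np.zeros(hidden)
x_seq = rng.standard_normal((T, inp))

for t in range(T):
    # Randomized in time: a new block mask is sampled at every time step, so a
    # different structured subset of hidden-state work could be skipped each step.
    m = block_mask(hidden, block, drop_p, rng)
    z = W @ np.concatenate([x_seq[t], h]) + b
    i, f, g, o = np.split(z, 4)
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    # Apply the structured dropout mask with inverted-dropout scaling.
    h = h * m / max(1.0 - drop_p, 1e-8)
```

In a real implementation, the zeroed blocks would correspond to rows/columns of the weight matrices that can be skipped entirely, which is where the training-time savings come from; this sketch only shows the masking pattern, not the skipped computation.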

Cited by 0 publications
References 23 publications