2022
DOI: 10.1109/tcsi.2021.3099716
Vau Da Muntanialas: Energy-Efficient Multi-Die Scalable Acceleration of RNN Inference

Abstract: Recurrent neural networks such as Long Short-Term Memories (LSTMs) learn temporal dependencies by keeping an internal state, making them ideal for time-series problems such as speech recognition. However, the output-to-input feedback creates distinctive memory bandwidth and scalability challenges in designing accelerators for RNNs. We present MUNTANIALA, an RNN accelerator architecture for LSTM inference with a silicon-measured energy-efficiency of 3.25 TOP/s/W and performance of 30.53 GOP/s in UMC 65nm technol…
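The feedback the abstract refers to can be seen in a standard LSTM cell: the output h of one time step is an input to the next, so the weight matrices must be re-read at every step and the steps cannot be parallelized across time. The sketch below is a minimal textbook LSTM step in NumPy, not MUNTANIALA's actual dataflow; all names and dimensions are illustrative.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step. The recurrent input h_prev is the previous
    step's output -- this output-to-input feedback is the memory-bandwidth
    and scalability challenge the paper targets. Gate order in the
    stacked matrices: input, forget, cell candidate, output.
    (Hypothetical textbook formulation, not the paper's architecture.)"""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b            # all four gates in one matmul
    i = 1 / (1 + np.exp(-z[0:n]))         # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    g = np.tanh(z[2*n:3*n])               # candidate cell state
    o = 1 / (1 + np.exp(-z[3*n:4*n]))     # output gate
    c = f * c_prev + i * g                # internal state keeps memory
    h = o * np.tanh(c)                    # output, fed back next step
    return h, c

# Tiny sequence: the loop-carried dependency on h serializes time steps.
rng = np.random.default_rng(0)
n_in, n_h = 4, 3
W = rng.standard_normal((4 * n_h, n_in)) * 0.1
U = rng.standard_normal((4 * n_h, n_h)) * 0.1
b = np.zeros(4 * n_h)
h = np.zeros(n_h)
c = np.zeros(n_h)
for t in range(5):
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
print(h.shape)
```

Because h at step t depends on h at step t-1, an accelerator cannot batch across time; it can only batch across independent sequences or partition the matrices across dies, which is the scalability angle of the paper.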

Cited by 6 publications · References 60 publications