2021 IEEE/ACM International Conference on Computer-Aided Design (ICCAD)
DOI: 10.1109/iccad51958.2021.9643531
RNSiM: Efficient Deep Neural Network Accelerator Using Residue Number Systems

Cited by 15 publications (4 citation statements)
References 19 publications
“…Commonly used nonlinear activation functions include softmax, logistic, hyperbolic, and ReLU [22]. In the context of neural network development with the RNS system, advantages have been observed in performing addition, subtraction, and multiplication operations [15].…”
Section: Discussion
confidence: 99%
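
As a rough illustration of the carry-free addition and multiplication advantage noted in the statement above (a minimal sketch only, not the accelerator design of the cited paper; the moduli set {3, 5, 7} and the helper names are assumptions):

# Illustrative RNS arithmetic with an assumed example moduli set {3, 5, 7}.
from math import prod

MODULI = (3, 5, 7)              # dynamic range M = 3 * 5 * 7 = 105
M = prod(MODULI)

def to_rns(x):
    """Forward conversion: integer -> tuple of residues."""
    return tuple(x % m for m in MODULI)

def rns_add(a, b):
    """Addition is independent per channel; no carries cross residues."""
    return tuple((ai + bi) % m for ai, bi, m in zip(a, b, MODULI))

def rns_mul(a, b):
    """Multiplication is likewise channel-wise, on narrow operands."""
    return tuple((ai * bi) % m for ai, bi, m in zip(a, b, MODULI))

def from_rns(r):
    """Reverse conversion via the Chinese Remainder Theorem."""
    x = 0
    for ri, m in zip(r, MODULI):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

assert from_rns(rns_mul(to_rns(9), to_rns(8))) == 72   # 72 < M, so the result is exact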
“…Additionally, the research presents a methodology for performing the exponential operation in RNS. Both of these objectives hold fundamental importance in advancing the practical application of unconventional numerical systems, such as RNS, in pattern recognition systems and digital signal processing [11,15,16,24,26].…”
Section: Discussion
confidence: 99%
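
The cited methodology for the exponential function is not reproduced here; as a simpler, related illustration only, raising an operand to an integer power also stays channel-wise in RNS. The sketch below assumes the same example moduli {3, 5, 7} and is exact only while the true result remains within the dynamic range:

MODULI = (3, 5, 7)   # assumed example moduli; dynamic range M = 105

def rns_pow(residues, k):
    """Integer powers stay channel-wise: each residue is exponentiated
    modulo its own modulus. Exact only while x**k < M."""
    return tuple(pow(r, k, m) for r, m in zip(residues, MODULI))

# 4 has residues (1, 4, 4); cubing channel-wise yields the residues of 64.
assert rns_pow((4 % 3, 4 % 5, 4 % 7), 3) == (64 % 3, 64 % 5, 64 % 7)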
“…A complete multiplier-free RNS accelerator utilizing this approach is also developed [14]. RNS has been also utilized in the design of in-memory computing (IMC) systems [15], [16]. This work addresses the shortcomings of existing state-of-the-art RNS DNN architectures (Sec.…”
Section: B. RNS in DNN Accelerators
confidence: 99%
“…The authors observed that the quantized model produced better accuracy results than the full-precision model. Quantization on CNNs applied to portable devices is discussed in [22][23][24][25], where the authors evaluated inference results on medical images using a quantized CNN and exhibited that the inference time is reduced by 97%. The objective of quantizing a neural network is to decrease the bit-width of the network parameters: weights and/or activation.…”
Section: Hardware-Friendly Neural Network
confidence: 99%
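
As a generic sketch of the bit-width reduction described in the statement above (a hedged example of symmetric per-tensor uniform quantization, not the scheme of any cited work; the 8-bit default and the use of NumPy are assumptions):

import numpy as np

def quantize_symmetric(w, num_bits=8):
    """Symmetric uniform quantization of a float tensor to signed integers.
    Returns the integer tensor and the scale used to dequantize (w ~ q * scale)."""
    qmax = 2 ** (num_bits - 1) - 1                       # 127 for 8-bit
    scale = max(float(np.max(np.abs(w))), 1e-8) / qmax   # one scale per tensor
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int32)
    return q, scale

weights = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_symmetric(weights)
error = np.max(np.abs(weights - q * scale))              # worst-case quantization error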