2018 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date.2018.8342202

Energy-efficient neural networks using approximate computation reuse

Cited by 43 publications (16 citation statements)
References 17 publications
Citing publications: 2019–2024
“…Furthermore, custom hardware accelerators are often paired with proposed computation-reuse schemes. DNNs are known to be error-tolerant, and therefore resilient to approximation of inputs and weights at the binary-representation level [22], [23]. Computation reuse leverages this property, with supporting architectures and algorithms that further increase the reuse potential.…”
Section: A. Hardware-Based CNN Accelerators
confidence: 99%
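
As a concrete illustration of that reuse property, here is a minimal Python sketch (not taken from the cited works): operands are quantized before a table lookup, so near-identical inputs map to the same entry and reuse a previously computed product instead of triggering a new multiply. The `ReuseMultiplier` class, its table layout, and the bit width are hypothetical choices for illustration only.

```python
# Minimal sketch of approximate computation reuse via operand quantization.
# Assumption: truncating the low-order fraction bits of the operands lets
# "close enough" inputs share a table entry, so a cached product is reused.

def truncate(x: float, bits: int = 8) -> float:
    """Keep roughly `bits` fractional bits of x (a crude quantizer)."""
    scale = 1 << bits
    return round(x * scale) / scale

class ReuseMultiplier:
    def __init__(self, bits: int = 8):
        self.bits = bits
        self.table = {}          # (quantized x, quantized w) -> cached product
        self.hits = 0
        self.misses = 0

    def mul(self, x: float, w: float) -> float:
        key = (truncate(x, self.bits), truncate(w, self.bits))
        if key in self.table:
            self.hits += 1       # reuse: skip the "expensive" multiply
            return self.table[key]
        self.misses += 1
        result = x * w           # exact compute on a miss
        self.table[key] = result
        return result

m = ReuseMultiplier(bits=6)
print(m.mul(0.500001, 0.25), m.mul(0.500002, 0.25))  # second call is a hit
print("hits:", m.hits, "misses:", m.misses)
```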
“…Face recognition is a CNN application commonly used on mobile devices that can take advantage of computation reuse, owing to the high similarity profile of face images. In [23], researchers use similar-feature skipping and tile-based hierarchical clustering, cutting the face-recognition CNN's computation by 30% with only 1% accuracy loss. The proposed ultra-low-power face recognition processors use 1-bit binary weights, which shrink the weight memory footprint and simplify the MAC processing, yielding very high energy efficiency.…”
Section: A. Hardware-Based CNN Accelerators
confidence: 99%
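
To make the 1-bit-weight simplification concrete: with weights constrained to {-1, +1}, the multiply in each MAC degenerates into an add or subtract of the activation. The sketch below is an assumed software analogue of that idea (sign binarization, NumPy vectors), not the actual datapath of the processors described above.

```python
import numpy as np

def binarize(w: np.ndarray) -> np.ndarray:
    """Map real-valued weights to {-1, +1} by sign (illustrative scheme)."""
    return np.where(w >= 0, 1.0, -1.0)

def binary_dot(x: np.ndarray, w_bin: np.ndarray) -> float:
    # sum(x[i] * w_bin[i]) with w_bin in {-1, +1} needs no multiplies:
    # it is the sum of x where w is +1 minus the sum of x where w is -1.
    pos = x[w_bin > 0].sum()
    neg = x[w_bin < 0].sum()
    return float(pos - neg)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w = rng.standard_normal(8)
print(binary_dot(x, binarize(w)))   # multiplier-free result
print(float(x @ binarize(w)))       # reference dot product: identical
```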
“…RMAC computes the mantissa product using addition instead of multiplying the mantissas [17]. In [18], Jiao et al. propose a multiplier that exploits computation-reuse opportunities and enhances them by performing approximate pattern matching.…”
Section: Related Work
confidence: 99%
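
The mantissa-addition trick is easiest to see through Mitchell's classic logarithmic approximation, log2(1+f) ≈ f, under which the mantissa product (1+fa)(1+fb) collapses to 1 + fa + fb. The sketch below implements that textbook approximation for positive inputs; it is in the same spirit as RMAC [17] but is not claimed to match that design's exact fix-up logic.

```python
import math

def approx_mul(a: float, b: float) -> float:
    """Mitchell-style multiply for positive floats: add mantissa fractions."""
    ma, ea = math.frexp(a)        # a = ma * 2**ea, with ma in [0.5, 1)
    mb, eb = math.frexp(b)
    ma, ea = ma * 2.0, ea - 1     # renormalize to [1, 2): a = (1 + fa) * 2**ea
    mb, eb = mb * 2.0, eb - 1
    s = (ma - 1.0) + (mb - 1.0)   # fa + fb: addition replaces the multiply
    if s < 1.0:
        return (1.0 + s) * 2.0 ** (ea + eb)
    return s * 2.0 ** (ea + eb + 1)   # Mitchell's carry-out case (fa + fb >= 1)

print(approx_mul(3.0, 5.0), 3.0 * 5.0)   # ~14.0 vs 15.0 (within Mitchell's ~11% bound)
print(approx_mul(1.9, 1.9), 1.9 * 1.9)   # ~3.6  vs 3.61
```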
“…Lookup-based approximation avoids computation entirely by reusing previously computed values. Ternary content-addressable memories (TCAMs) can be utilized for approximate computation reuse in GPGPU applications [15,16,23,24]. Associative memory is placed adjacent to the FPUs and stores the input and output values of previously computed operations, which are searched against incoming inputs.…”
confidence: 99%
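
A software analogue of that search: treat each stored operand's bit pattern as a TCAM key and accept any entry within a small Hamming distance, returning the cached result on a near match. The class, the distance threshold, and the single-operand keying below are illustrative assumptions, not the hardware designs of [15,16,23,24].

```python
import math
import struct

def bits32(x: float) -> int:
    """IEEE-754 single-precision bit pattern of x as an unsigned int."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

class AssocMemory:
    """Toy stand-in for a TCAM row search with an approximate-match rule."""
    def __init__(self, max_dist: int = 3):
        self.max_dist = max_dist
        self.entries = []                       # (operand bit pattern, cached result)

    def insert(self, x: float, result: float):
        self.entries.append((bits32(x), result))

    def search(self, x: float):
        kx = bits32(x)
        for key, value in self.entries:
            if hamming(kx, key) <= self.max_dist:   # "close enough" -> reuse
                return value
        return None                             # miss: fall back to the FPU

mem = AssocMemory()
x = 1.2345
mem.insert(x, math.sqrt(x))                                 # cache one FPU result
x_near = struct.unpack("<f", struct.pack("<I", bits32(x) ^ 1))[0]
print(mem.search(x_near))                                   # 1-bit-off input still hits
print(mem.search(2.5))                                      # distant input misses -> None
```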
“…The authors in [15] propose a configurable memory that applies voltage overscaling (VOS) to non-volatile associative memory to relax computation and trade output accuracy for energy savings. The work in [23] designed novel associative-memory-based Bloom filters. Machine learning algorithms and neural networks have proven resilient to some level of reduced precision.…”
confidence: 99%
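
For readers unfamiliar with the data structure behind [23], here is a minimal classic Bloom filter: k hash positions are set on insert, and membership tests may yield false positives but never false negatives. The sizing and the SHA-256-based hashing are illustrative choices, unrelated to the associative-memory realization in [23].

```python
import hashlib

class BloomFilter:
    """Classic Bloom filter over a single packed integer bit array."""
    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m = m_bits
        self.k = k_hashes
        self.bits = 0                        # bit array packed into one int

    def _positions(self, item: str):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item: str):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item: str) -> bool:
        # May return false positives, never false negatives.
        return all(self.bits >> p & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add("pattern-A")
print("pattern-A" in bf)   # True
print("pattern-B" in bf)   # almost surely False
```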