2017
DOI: 10.1109/lca.2016.2521654
Power-Efficient Accelerator Design for Neural Networks Using Computation Reuse

Cited by 28 publications (15 citation statements)
References 12 publications
“…PACMAN uses a simulated annealing algorithm to search for the best partitioning plan. A variety of previous studies have examined neural network accelerators, evaluating parameters such as reduced power consumption [23,24], increased throughput [25,26], and the use of memory bandwidth for information processing [27].…”
Section: Related Work (mentioning, confidence: 99%)
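The statement above mentions that PACMAN searches for a partitioning plan with simulated annealing. As a rough illustration only (PACMAN's actual cost model, move set, and cooling schedule are not described here; the cost function and layer list below are hypothetical), a generic simulated-annealing search over layer-to-partition assignments might look like this:

```python
import math
import random

def simulated_annealing_partition(num_layers, num_parts, cost_fn,
                                  t_start=1.0, t_end=1e-3, alpha=0.95,
                                  iters_per_temp=100):
    """Generic simulated-annealing search over layer-to-partition assignments.

    cost_fn(assignment) -> float is a hypothetical cost model (e.g. the
    estimated latency or energy of a partitioning plan); lower is better.
    """
    # Start from a random assignment of layers to partitions.
    current = [random.randrange(num_parts) for _ in range(num_layers)]
    cur_cost = cost_fn(current)
    best, best_cost = list(current), cur_cost
    t = t_start
    while t > t_end:
        for _ in range(iters_per_temp):
            # Neighbour move: reassign one randomly chosen layer.
            cand = list(current)
            cand[random.randrange(num_layers)] = random.randrange(num_parts)
            cand_cost = cost_fn(cand)
            # Always accept improvements; accept worse plans with a
            # temperature-dependent probability to escape local minima.
            if cand_cost < cur_cost or random.random() < math.exp((cur_cost - cand_cost) / t):
                current, cur_cost = cand, cand_cost
                if cur_cost < best_cost:
                    best, best_cost = list(current), cur_cost
        t *= alpha  # geometric cooling schedule
    return best, best_cost

# Hypothetical usage: balance eight layers across two partitions by "work" units.
layer_work = [4, 7, 2, 9, 5, 3, 6, 8]
def imbalance(assignment):
    part0 = sum(w for w, p in zip(layer_work, assignment) if p == 0)
    part1 = sum(w for w, p in zip(layer_work, assignment) if p == 1)
    return abs(part0 - part1)
plan, cost = simulated_annealing_partition(len(layer_work), 2, imbalance)
```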
“…For permanent faults, [3] proposes a fault-aware mapping technique to mitigate permanent faults in MAC units. For power efficiency, [4] proposes a computation-reuse-aware neural network technique that reuses weights by constructing a computational reuse table. [5] uses an approximate computing technique to retrain the network and identify the resilient neurons.…”
Section: Introduction (mentioning, confidence: 99%)
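The statement above summarizes the cited design's core idea: reusing previously computed results by building a reuse table keyed on weights. The sketch below is only a software analogue of that idea, assuming a simple memoization of (weight, input) products; the actual table organization, quantization, and lookup hardware of the accelerator are not reproduced here:

```python
import numpy as np

class ComputationReuseTable:
    """Software analogue of weight-based computation reuse: products of
    (weight, input) pairs are cached and looked up instead of recomputed
    when the same pair reappears. Low-precision (quantized) operands make
    such repeats frequent, which is what makes the reuse profitable."""

    def __init__(self):
        self.table = {}   # (weight, input) -> cached product
        self.hits = 0
        self.misses = 0

    def multiply(self, w, x):
        key = (w, x)
        if key in self.table:
            self.hits += 1        # reuse a previously computed product
            return self.table[key]
        self.misses += 1
        prod = w * x              # the "expensive" multiplication
        self.table[key] = prod
        return prod

def dot_with_reuse(weights, inputs, reuse):
    """Dot product in which every multiplication goes through the reuse table."""
    return sum(reuse.multiply(int(w), int(x)) for w, x in zip(weights, inputs))

# Hypothetical usage with 4-bit quantized weights and activations,
# where repeated (weight, input) pairs are common.
rng = np.random.default_rng(0)
weights = rng.integers(-8, 8, size=256)
inputs = rng.integers(0, 16, size=256)
reuse = ComputationReuseTable()
y = dot_with_reuse(weights, inputs, reuse)
print(f"output={y}, reused {reuse.hits} of {reuse.hits + reuse.misses} multiplications")
```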
“…These articles focus specifically on fault-tolerance technology. Some of them, such as [4,5], address the relationship between accuracy and power efficiency together but lack information on computation capability. Besides the listed articles, many other works targeting fault tolerance have been published in recent years.…”
Section: Introduction (mentioning, confidence: 99%)
“…For permanent faults, Reference [3] proposes a fault-aware mapping technique to mitigate permanent faults in MAC units. For power-efficient technology, Reference [4] proposes a computation-reuse-aware neural network technique that reuses weights by constructing a computational reuse table. Reference [5] uses an approximate computing technique to retrain the network and identify the resilient neurons.…”
mentioning (confidence: 99%)
“…These articles focus specifically on fault-tolerance technology. Some of them address the relationship between accuracy and power efficiency together but lack information on computation capability [4,5]. Besides the listed articles, many other works targeting fault tolerance have been published in recent years, which indicates that fault-tolerant edge AI is a growing trend.…”
mentioning (confidence: 99%)