2020
DOI: 10.1145/3408062
Modular Neural Networks for Low-Power Image Classification on Embedded Devices

Abstract: Embedded devices are generally small, battery-powered computers with limited hardware resources. It is difficult to run deep neural networks (DNNs) on these devices, because DNNs perform millions of operations and consume significant amounts of energy. Prior research has shown that a considerable number of a DNN's memory accesses and computations are redundant when performing tasks like image classification. To reduce this redundancy and thereby reduce the energy consumption of DNNs, we introduce the Modular Ne…

Cited by 19 publications (7 citation statements)
References 68 publications (82 reference statements)
“…Because of the importance of having adaptive workloads for different application scenarios and hardware requirements, a significant amount of research focuses on building reconfigurable DNN architectures. In these techniques, the DNNs use different layers and connections based on the input [19,20]. GaterNet [19] is a technique that uses a gating layer to deactivate certain DNN connections to reduce computation costs.…”
Section: Adaptive DNN Workloads
confidence: 99%
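The gating idea quoted above can be sketched in a few lines. This is a hedged illustration, not GaterNet's actual architecture: the gating function, weight names, and thresholding rule here are illustrative stand-ins. The point is that masked-off output channels are never computed at all, which is where the savings come from.

```python
import numpy as np

rng = np.random.default_rng(0)

def gate(x, w_gate, threshold=0.0):
    """Binary on/off mask from the input, one entry per output channel."""
    return (x @ w_gate > threshold).astype(x.dtype)

def gated_layer(x, w, mask):
    """Compute only the output channels the gate left active."""
    out = np.zeros(w.shape[1], dtype=x.dtype)
    active = np.nonzero(mask)[0]
    out[active] = x @ w[:, active]   # inactive channels cost nothing
    return out

x = rng.standard_normal(8)
w_main = rng.standard_normal((8, 4))
w_gate = rng.standard_normal((8, 4))   # hypothetical gating weights

mask = gate(x, w_gate)
y = gated_layer(x, w_main, mask)
```

Because the inactive columns of `w_main` are sliced out before the matrix product, their multiply-accumulate work is skipped entirely rather than merely zeroed afterward.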
“…Hierarchical DNNs use multiple DNNs in a tree structure [13,10]. An input is characterized by the path it follows from root to leaf (Fig.…”
Section: Background and Related Work: A Hierarchical Deep Neural Network
confidence: 99%
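The root-to-leaf routing described in that statement can be sketched with a toy tree. Everything here (node names, the dict-based inputs, the routing lambdas) is illustrative; in the cited work each internal node would be a small DNN classifier rather than a hand-written rule.

```python
class Node:
    """A node in a hierarchical classifier: internal nodes route, leaves label."""
    def __init__(self, name, classify=None, children=None):
        self.name = name
        self.classify = classify        # maps an input to a child index
        self.children = children or []  # empty list => leaf

def infer_path(node, x):
    """Return the root-to-leaf path that characterizes input x."""
    path = [node.name]
    while node.children:
        node = node.children[node.classify(x)]
        path.append(node.name)
    return path

# Toy hierarchy: root splits animal/vehicle; animal splits cat/dog.
animal = Node("animal", classify=lambda x: 0 if x["furry"] else 1,
              children=[Node("cat"), Node("dog")])
root = Node("root", classify=lambda x: 0 if x["alive"] else 1,
            children=[animal, Node("vehicle")])

print(infer_path(root, {"alive": True, "furry": True}))  # ['root', 'animal', 'cat']
```

Only the small classifiers along one path run per input, which is why the hierarchical structure reduces computation relative to one large flat DNN.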
“…Larger DNNs are more accurate but use more resources. To obtain an acceptable tradeoff between accuracy and efficiency, we apply an architecture search technique [13]. It uses the change in accuracy density to evaluate whether a DNN (D_{i+1}) with i+1 layers should be selected over a DNN (D_i) with i layers.…”
Section: B. Constructing the Hierarchy
confidence: 99%
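A minimal sketch of that selection rule, assuming "accuracy density" means validation accuracy divided by layer count (the cited paper's exact definition may differ): the deeper candidate D_{i+1} is preferred only when the extra layer improves the ratio.

```python
def accuracy_density(accuracy, n_layers):
    """Assumed definition: accuracy per layer of the candidate DNN."""
    return accuracy / n_layers

def prefer_deeper(acc_i, i, acc_next):
    """True if the (i+1)-layer DNN should be selected over the i-layer one."""
    return accuracy_density(acc_next, i + 1) > accuracy_density(acc_i, i)

# A 1-layer DNN at 35% vs. a 2-layer DNN at 80%: the extra layer pays off.
print(prefer_deeper(0.35, 1, 0.80))
```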
“…Our Prior Work: Our prior work [22] proposes a technique to use hierarchical DNNs for low-power image classification. To achieve high accuracy with low memory, computation, and energy requirements, the method uses the output of a DNN's softmax layer to identify and group visually similar categories.…”
Section: Related Work
confidence: 99%
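One plausible reading of "uses the softmax output to group visually similar categories" is that classes which the DNN repeatedly ranks together in its top-k softmax for the same image get merged. The union-find criterion below is an illustrative stand-in for the paper's actual grouping rule, not a reproduction of it.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def group_similar(logits_per_sample, n_classes, top_k=2):
    """Union-find over classes that co-occur in a sample's top-k softmax."""
    parent = list(range(n_classes))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path compression
            a = parent[a]
        return a

    for z in logits_per_sample:
        top = np.argsort(softmax(np.asarray(z)))[-top_k:]  # most confident classes
        for c in top[1:]:
            parent[find(int(c))] = find(int(top[0]))

    return [find(c) for c in range(n_classes)]

# Two toy samples: the DNN confuses classes 0/1 on one, classes 2/3 on the other.
groups = group_similar([[5.0, 4.0, 0.0, 0.0], [0.0, 0.0, 5.0, 4.0]], n_classes=4)
```

Each resulting group can then be handled by its own small specialist DNN, which is what keeps the per-inference memory and computation low.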
“…If the value of ΔID(D_1, D_2) > T, then this process continues for increasing values of i until ΔID(D_i, D_{i+1}) ≤ T. This paper selects the value of T = 0.001 by experiments [22]. A small T Fig.…”
Section: Hierarchical Object Counting
confidence: 99%
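The stopping rule in that quote amounts to a short loop: keep increasing i while the change in accuracy density exceeds T = 0.001. The precomputed `densities` list here is illustrative; in the cited papers each value would come from training and evaluating the candidate DNN D_{i+1}.

```python
T = 0.001  # threshold chosen by experiment in the quoted work

def choose_depth(densities, threshold=T):
    """densities[i] = accuracy density of the DNN with i+1 layers.
    Grow i while ΔID(D_i, D_{i+1}) = densities[i+1] - densities[i] > T."""
    i = 0
    while i + 1 < len(densities) and (densities[i + 1] - densities[i]) > threshold:
        i += 1
    return i + 1   # layer count of the selected DNN

# Gains shrink below T after the third candidate, so depth 3 is selected.
print(choose_depth([0.50, 0.60, 0.605, 0.6051]))
```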