Published: 2022
DOI: 10.1109/tmc.2021.3062575

MDLdroidLite: A Release-and-Inhibit Control Approach to Resource-Efficient Deep Neural Networks on Mobile Devices

Cited by 9 publications (10 citation statements)
References 31 publications
“…In addition, many works focus on reducing the overall system resources required for DNN training [6, 11, 18, 21, 31-33, 36, 43, 51, 70, 73, 78, 87, 92, 96, 102]. For example, researchers control the layerwise growth of the model structure to enable efficient DNN training on mobile phones [104]. Other methods optimize sparse activations and redundant weights to avoid unnecessary storage of activations and weight updates during DNN training [5, 28, 58].…”
Section: Efficient Deep Learning Systems
Mentioning, confidence: 99%
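The layer-wise growth approach mentioned in the statement above (reference [104] in the citing paper is presumably MDLdroidLite itself) starts training from a small network and adds capacity as training progresses, rather than training a full-size model from the start. Below is a minimal NumPy sketch of function-preserving width growth for one hidden layer: the new units' outgoing weights are zero-initialized, so the grown network initially computes the same function and the added capacity is trained from that point on. This is a generic toy illustration under those assumptions, not the release-and-inhibit controller described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(d_in, d_hidden, d_out):
    """One-hidden-layer MLP parameters, kept deliberately small."""
    return {
        "W1": rng.normal(0, 0.1, (d_in, d_hidden)),
        "b1": np.zeros(d_hidden),
        "W2": rng.normal(0, 0.1, (d_hidden, d_out)),
        "b2": np.zeros(d_out),
    }

def forward(params, x):
    h = np.maximum(x @ params["W1"] + params["b1"], 0.0)  # ReLU hidden layer
    return h @ params["W2"] + params["b2"]

def grow_hidden(params, extra_units):
    """Widen the hidden layer by `extra_units` neurons.

    New incoming weights are small and random; new outgoing weights are
    zero, so the grown network computes exactly the same outputs as
    before at the moment of growth.
    """
    d_in = params["W1"].shape[0]
    params["W1"] = np.hstack(
        [params["W1"], rng.normal(0, 0.1, (d_in, extra_units))])
    params["b1"] = np.concatenate([params["b1"], np.zeros(extra_units)])
    params["W2"] = np.vstack(
        [params["W2"], np.zeros((extra_units, params["W2"].shape[1]))])
    return params

# Toy usage: start tiny, then grow when (hypothetically) the loss plateaus.
params = init_mlp(d_in=8, d_hidden=4, d_out=2)
x = rng.normal(size=(5, 8))
before = forward(params, x)
params = grow_hidden(params, extra_units=4)
after = forward(params, x)
assert np.allclose(before, after)  # function preserved at growth time
```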
“…For example, a NN pre-trained on the ImageNet dataset for pedestrian detection suffers a 20% inference-accuracy drop on the PASCAL dataset [36, 79]. On-device training with online data is hence essential for NNs to retain generality or be personalized [38, 62, 91].…”
Section: Offline Online
Mentioning, confidence: 99%
“…However, since the pruned NN portions can never be selected again even if they may be useful [52], the NN's representation power is weakened over time and becomes insufficient for difficult learning tasks. Training can also start from a small NN that gradually grows [48, 91] (Figure 1, bottom-left), but the newly added NN layers are trained from scratch and require >30% extra training epochs [91].…”
Section: Offline Online
Mentioning, confidence: 99%
“…Recent works optimize the resource usage of on-device training via pruning and quantization [34, 45], tuning partial weights [9, 47], memory profiling and optimization [15, 61, 64], and growing the NN on the fly [67]. All these works optimize training under resource constraints but do not focus on lifelong learning.…”
Section: Related Work
Mentioning, confidence: 99%
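For context on the pruning and quantization directions listed in the statement above, the following is a rough NumPy sketch of two common building blocks: global magnitude pruning (zeroing the smallest-magnitude weights) and uniform 8-bit affine quantization of a weight tensor. The function names and thresholds are illustrative assumptions, not the specific methods of the cited works.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest |value|."""
    k = int(np.floor(sparsity * w.size))
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def quantize_uint8(w):
    """Uniform affine quantization to 8 bits: returns ints plus scale/zero-point."""
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a constant tensor
    zero_point = np.round(-w_min / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return (q.astype(np.float32) - zero_point) * scale

# Toy usage: prune 90% of a random weight matrix, then quantize and restore it.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
w_sparse = magnitude_prune(w, sparsity=0.9)
q, scale, zp = quantize_uint8(w_sparse)
w_restored = dequantize(q, scale, zp)
print("zero fraction:", np.mean(w_sparse == 0),
      "max abs quantization error:", np.abs(w_sparse - w_restored).max())
```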