2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS) 2022
DOI: 10.1109/aicas54282.2022.9869960
A 0.95 mJ/frame DNN Training Processor for Robust Object Detection with Real-World Environmental Adaptation

Cited by 5 publications (9 citation statements) | References 10 publications
“…Adaptation after an unexpected situation, such as a camera malfunction or an abrupt domain change, is also important to prevent fatal operational errors. [80] shows that online DNN tuning performed right after an unpredictable accident is one way to recover the original performance. As both examples show, on-device adaptation seems promising, but it must be accompanied by an energy-efficient, low-latency DNN training processor; long training latency disturbs DNN inference and can cause further problems through slow response.…”
Section: Adaptation: Short-Term DNN Training
confidence: 93%
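The snippet above describes recovery through a brief burst of on-device fine-tuning. Below is a minimal sketch of such an adaptation loop, assuming a PyTorch model; frame_buffer, loss_fn, lr, and adapt_steps are illustrative names, not details from the cited work:

```python
# A minimal sketch of on-device adaptation after an abrupt domain change,
# assuming a PyTorch model and a small buffer of (frames, targets) pairs
# captured right after the event. All names here are illustrative.
import torch

def adapt_online(model, frame_buffer, loss_fn, lr=1e-4, adapt_steps=10):
    """Run a few gradient steps on recently captured frames so the
    detector recovers its original performance with low latency."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(adapt_steps):              # keep the step count small:
        for frames, targets in frame_buffer:  # long training would stall inference
            optimizer.zero_grad()
            loss = loss_fn(model(frames), targets)
            loss.backward()
            optimizer.step()
    model.eval()  # resume normal inference with the adapted weights
    return model
```

The small, fixed step budget reflects the latency concern in the snippet: adaptation must finish quickly enough not to disturb ongoing inference.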
“…DNN training wastes a great deal of energy when there is no sparsity to exploit. [29, 80] focused on this drawback and exploited bit-slice (4-bit) level sparsity. Moreover, [29] skipped partial-accumulation slices that would be truncated anyway before being used for the next layer.…”
Section: In- and Out-Slice Skipping
confidence: 99%
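The slice-skipping idea the snippet describes can be illustrated with a small software model. This is not the hardware of [29, 80], only a NumPy sketch assuming unsigned 8-bit activations split into two 4-bit slices; sliced_mac and the sample data are hypothetical:

```python
# A sketch of bit-slice (4-bit) level zero skipping: each 8-bit activation
# is split into a high and a low 4-bit slice, and any slice equal to zero
# is skipped instead of being multiplied, saving the work a dense MAC spends on it.
import numpy as np

def sliced_mac(acts, weights):
    """Multiply-accumulate that skips zero-valued 4-bit slices."""
    acc = 0
    for a, w in zip(acts.astype(np.uint32), weights.astype(np.int64)):
        hi, lo = (a >> 4) & 0xF, a & 0xF  # two 4-bit slices of the activation
        if hi:                            # slice is zero -> no MAC issued
            acc += (hi << 4) * w
        if lo:
            acc += lo * w
    return acc

acts = np.array([0x00, 0x0F, 0xF0, 0x13], dtype=np.uint8)
weights = np.array([3, -2, 5, 7], dtype=np.int8)
# Skipping zero slices changes the work done, not the result:
assert sliced_mac(acts, weights) == int(acts.astype(np.int64) @ weights.astype(np.int64))
```

In the sketch, an activation like 0x0F costs only one slice-MAC instead of two; a hardware implementation would gate the corresponding multiplier lanes rather than branch in software.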