2022 International Conference on Smart Systems and Power Management (IC2SPM)
DOI: 10.1109/ic2spm56638.2022.9988992
Implementation and Optimization of Neural Networks for Tiny Hardware Devices

Cited by 3 publications (2 citation statements). References 29 publications.
“…From the proposed taxonomy in Figure 5, the area of model optimization, based on referenced research contributions, is the one that has received the most extensive exploration. Indeed, within TinyML, we come across several cutting-edge works that explore techniques such as pruning [63], quantization [75], and knowledge distillation [88]. On the contrary, the scarcity of research focusing on HPO [93] can be attributed to its complexity, the lack of awareness about the importance of HPO and its potential to enhance the performance of ML models significantly, and the resource requirements, which can be a limiting factor for researchers with restricted access to high-performance computing infrastructure.…”
Section: Discussion
confidence: 99%
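For readers unfamiliar with the optimization techniques named in this statement, the sketch below shows post-training full-integer quantization with the TensorFlow Lite converter, a common TinyML deployment path. The toy model and random calibration data are hypothetical placeholders and do not come from the cited works.

```python
import numpy as np
import tensorflow as tf

# Hypothetical small Keras model standing in for the network to be deployed.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4),
])

# Representative dataset: a few calibration samples so the converter can
# estimate activation ranges for full-integer quantization.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force int8 weights and activations, as many MCU runtimes require.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model_int8.tflite", "wb") as f:
    f.write(tflite_model)
```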
“…In general, knowledge distillation is used to achieve a good trade-off between a small model size and an acceptable accuracy [88]. For this reason, it is widely adopted in several fields where existing models are well-performing but unable to be deployed ‘as they are’ in resource-constrained hardware.…”
Section: Knowledge Distillation
confidence: 99%
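The size-versus-accuracy trade-off described in this statement is commonly realized by training a compact student against a frozen teacher's temperature-softened outputs (Hinton-style distillation). The sketch below illustrates the idea in PyTorch; the teacher, student, loss weighting, and toy data are hypothetical placeholders, not the cited paper's method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical teacher (large) and student (small) classifiers.
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))
teacher.eval()  # the teacher stays frozen during distillation

T, alpha = 4.0, 0.5            # softmax temperature, loss mixing weight
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(x, y):
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # Hard-label loss against the ground truth.
    hard = F.cross_entropy(student_logits, y)
    # Soft-label loss: KL divergence between temperature-softened
    # distributions, scaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    loss = alpha * hard + (1 - alpha) * soft
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch to show the call shape.
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
print(distillation_step(x, y))
```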