2022
DOI: 10.1016/j.future.2022.02.005
Quantune: Post-training quantization of convolutional neural networks using extreme gradient boosting for fast deployment

Cited by 12 publications (1 citation statement)
References 17 publications
“…The deployment of an AI-based IDS framework in these devices imposes computational challenges such as restricted memory, fewer ALUs, high CPU time, etc. In such conditions, direct implementation of memory-consuming AI algorithms is unreliable without optimization (Lee et al 2022; Garifulla et al 2021). A solution to this problem is optimizing AI algorithms for IoT devices.…”
Section: Introduction (citation type: mentioning)
Confidence: 99%
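One common form of the optimization the citing work alludes to, and the subject of the cited paper, is post-training quantization. As a minimal sketch only (not the Quantune method itself, which additionally uses extreme gradient boosting to search quantization configurations), the snippet below shows generic symmetric per-tensor int8 quantization of a weight array; all names here are illustrative assumptions, not the paper's API.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8.

    Computes a scale from the max absolute weight, rounds to the
    int8 grid, and returns the quantized values plus the scale
    needed to dequantize. Illustrative sketch only.
    """
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 values back to float32 approximations."""
    return q.astype(np.float32) * scale

# Example: quantize random "weights" and bound the reconstruction error.
rng = np.random.default_rng(0)
w = rng.standard_normal(64).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = float(np.max(np.abs(w - w_hat)))
```

Storing `q` (1 byte per weight) plus a single `scale` in place of float32 weights cuts model memory roughly 4x, which is the kind of saving that makes deployment on the memory-restricted IoT devices described above feasible.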