2021
DOI: 10.48550/arxiv.2106.03640
Preprint
Making EfficientNet More Efficient: Exploring Batch-Independent Normalization, Group Convolutions and Reduced Resolution Training

Dominic Masters,
Antoine Labatie,
Zach Eaton-Rosen
et al.

Abstract: Much recent research has been dedicated to improving the efficiency of training and inference for image classification. This effort has commonly focused on explicitly improving theoretical efficiency, often measured as ImageNet validation accuracy per FLOP. These theoretical savings have, however, proven challenging to achieve in practice, particularly on high-performance training accelerators. In this work, we focus on improving the practical efficiency of the state-of-the-art EfficientNet models on a new clas…

Cited by 1 publication (1 citation statement)
References 42 publications (53 reference statements)
“…Although the number of parameters of the EfficientNet is 6.87 times greater than the SqueezeNet, the inference time for both models is similar. This is expected since the models of the EfficientNet family are designed to focus on increasing accuracy performance while minimizing the overall number of operations required [15], [59]. Considering file size and accuracy as decision-making metrics for the choice of deep keyword spotting models, among the selected ones, SqueezeNet can be considered the best compromise model.…”
Section: Results
Confidence: 99%