2017 International Conference on Smart Technologies for Smart Nation (SmartTechCon)
DOI: 10.1109/smarttechcon.2017.8358502
Benchmark analysis of popular ImageNet classification deep CNN architectures

Cited by 21 publications (6 citation statements)
References 0 publications
“…As expected, our study verified the feasibility and effectiveness of using DCNNs to automatically identify habitat elements, and the best accuracy rate reached 97.76%. Although we used only ten habitat elements as recognition targets in our research, DCNNs have also achieved good results in recognizing the 1000 categories of the ImageNet recognition task [65], so we have reason to believe that, given images of more categories, our method could also identify more habitat elements.…”
Section: Discussion (mentioning)
confidence: 99%
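As an aside on the 1000-category ImageNet recognition this statement refers to, a minimal Python sketch of such a classifier is shown below. It is purely illustrative, not the cited study's pipeline; the choice of ResNet-50, the torchvision weights, and the image filename are assumptions.

import torch
from torchvision import models
from torchvision.io import read_image

# Load an ImageNet-pretrained DCNN (ResNet-50 chosen as an example) that
# outputs scores over the 1000 ImageNet categories.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # matching resize / crop / normalization

img = read_image("example.jpg")    # hypothetical input image (RGB)
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))  # shape: (1, 1000)

# Report the five most probable of the 1000 ImageNet classes.
probs = logits.softmax(dim=1)[0]
top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p:.3f}")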
“…As shown in the figure, our base model is pre-trained on the ImageNet data, which consists of 1000 different classes with over 14 million images [43]. Given this, fine-tuning is imperative, as the current weights and structure of EfficientNetB0 cannot immediately work for our selected task [44].…”
Section: Transfer Learning and Fine-tuning (mentioning)
confidence: 99%
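As a rough illustration of the transfer-learning step described above, the sketch below freezes an ImageNet-pretrained EfficientNet-B0 backbone and replaces its 1000-way head. It assumes a PyTorch/torchvision setup rather than the citing paper's actual framework, and NUM_CLASSES and the optimizer settings are placeholders.

import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of task-specific classes

# Start from the ImageNet-pretrained weights (1000 classes).
weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
model = models.efficientnet_b0(weights=weights)

# Freeze the convolutional backbone so only the new head is trained at first.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the 1000-way ImageNet classifier with a head sized for the new task.
in_features = model.classifier[1].in_features  # 1280 for EfficientNet-B0
model.classifier[1] = nn.Linear(in_features, NUM_CLASSES)

# Only parameters that still require gradients are passed to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3, momentum=0.9
)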
“…(15): lr = base_lr × γ^floor(iter/stepsize), where lr represents the current learning rate, γ the learning-rate update weight, iter the current iteration number, stepsize the set learning-rate update step size, and floor(·) the round-down operation. In this paper, the momentum value and weight decay value are set to 0.9 and 0.0005, respectively [31].…”
Section: B. Model Evaluation Metrics (mentioning)
confidence: 99%
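For reference, the quoted step-decay rule can be written out directly. In the sketch below, base_lr, gamma, and stepsize are illustrative assumptions, while momentum = 0.9 and weight decay = 0.0005 follow the statement.

import math
import torch

base_lr, gamma, stepsize = 0.01, 0.1, 10_000  # assumed hyperparameters

def step_decay_lr(iteration: int) -> float:
    """lr = base_lr * gamma ** floor(iteration / stepsize)."""
    return base_lr * gamma ** math.floor(iteration / stepsize)

# Example values: the rate drops by a factor of gamma every stepsize iterations.
for it in (0, 9_999, 10_000, 25_000):
    print(it, step_decay_lr(it))  # 0.01, 0.01, 1e-3, 1e-4 (up to float rounding)

model = torch.nn.Linear(8, 2)  # stand-in model for illustration
optimizer = torch.optim.SGD(
    model.parameters(), lr=base_lr, momentum=0.9, weight_decay=0.0005
)
# Inside a training loop the schedule would be applied once per iteration:
#     for group in optimizer.param_groups:
#         group["lr"] = step_decay_lr(current_iteration)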