2020
DOI: 10.1007/978-3-030-63830-6_35

LCNet: A Light-Weight Network for Object Counting

Cited by 4 publications (9 citation statements)
References 21 publications

“…In this study, the proposed method begins with a preprocessing step. Next, a modeling process was carried out for each of the models used, namely EfficientNet [11], EfficientNetV2 [12], LCNet [13], MobileNetV3 [14], TinyNet [15], and FBNetV3 [16]. These six models were chosen because previous research by Yi et al. [7] improved the EfficientNet model by adding Residual Attention, an attention mechanism, to its architecture.…”
Section: Methodology, Figure 1: Workflow of the Proposed Diabetic Retinopathy… (mentioning)
confidence: 99%
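
The six backbone families named in the citation above are commonly distributed with ImageNet-pretrained weights. The sketch below is not from the cited paper; it shows one plausible way to instantiate them via the timm library. The timm model identifiers and the five-class output (for diabetic retinopathy severity grading) are assumptions, and timm's variants may differ from the exact architectures the authors used.

# Illustrative sketch only: instantiate the six backbone families with
# ImageNet-pretrained weights using timm. The model identifiers below are
# assumptions and may correspond to different variants within each family.
import timm

BACKBONES = {
    "EfficientNet":   "efficientnet_b0",
    "EfficientNetV2": "tf_efficientnetv2_s",
    "LCNet":          "lcnet_100",
    "MobileNetV3":    "mobilenetv3_large_100",
    "TinyNet":        "tinynet_a",
    "FBNetV3":        "fbnetv3_b",
}

def build_model(family: str, num_classes: int = 5):
    """Return one ImageNet-pretrained backbone with a fresh classifier head
    (five classes assumed for diabetic retinopathy severity grading)."""
    return timm.create_model(BACKBONES[family], pretrained=True, num_classes=num_classes)

models = {name: build_model(name) for name in BACKBONES}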
“…Given that the APTOS2019 [7] dataset provides insufficient and unbalanced data, it is difficult to obtain a satisfactory result with deep learning alone. To solve this problem, we adopted transfer learning in our model, experimenting with six different architectures: EfficientNet [11], EfficientNetV2 [12], LCNet [13], MobileNetV3 [14], TinyNet [15], and FBNetV3 [16]. These architectures were already pre-trained on ImageNet and give satisfactory classification performance.…”
Section: Modelling (mentioning)
confidence: 99%
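
As a rough illustration of the transfer-learning setup this citation describes, the sketch below loads one ImageNet-pretrained backbone, freezes its feature extractor, and fine-tunes only the classifier head. The choice of MobileNetV3, the five-class output, and the head-only training stage are assumptions for illustration, not details confirmed by the cited paper.

# Hedged transfer-learning sketch (assumptions: MobileNetV3 backbone, five
# APTOS2019 severity classes, head-only fine-tuning as a first stage).
import timm
import torch
from torch import nn

model = timm.create_model("mobilenetv3_large_100", pretrained=True, num_classes=5)

# Freeze the ImageNet-pretrained feature extractor; train only the new head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("classifier")

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

def train_one_epoch(loader):
    """One pass over a DataLoader yielding (image batch, integer label) pairs."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()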