2020
DOI: 10.1109/access.2020.3030249

Exploiting Category Similarity-Based Distributed Labeling for Fine-Grained Visual Classification

Abstract: Fine-grained visual classification (FGVC), which aims to distinguish subtle differences among subcategories, is an important computer vision task. However, one issue that limits model performance is the diversity within subcategories. To this end, we propose a simple yet effective approach named category similarity-based distributed labeling (CSDL) to tackle this problem. Specifically, we first obtain the feature centers for various subcategories and utilize them to initialize the label distributi…
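
The truncated abstract outlines the core mechanism: compute a feature center for every subcategory and use the similarity between centers to initialize each class's label distribution. The following is a minimal sketch of one plausible reading, assuming cosine similarity between centers and a temperature-scaled softmax; the function names and the temperature parameter are illustrative, not taken from the paper.

import numpy as np

def class_centers(features, labels, num_classes):
    """Mean feature vector of each subcategory, shape (num_classes, dim)."""
    centers = np.zeros((num_classes, features.shape[1]))
    for c in range(num_classes):
        centers[c] = features[labels == c].mean(axis=0)
    return centers

def init_label_distributions(centers, temperature=0.1):
    """One label distribution per class, built from the cosine similarity
    between its center and every class center (illustrative formulation)."""
    normed = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    sim = normed @ normed.T                      # (C, C) pairwise cosine similarity
    logits = sim / temperature
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)  # each row sums to 1

Each row of the returned matrix can then serve as the initial soft target for the corresponding subcategory, with the diagonal (self-similarity) dominating.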

Cited by 2 publications (5 citation statements)
References 57 publications
“…
Method          AIR   CUB   CAR   NAB
B-CNN [60]      86.9  84.0  90.6  -
MA-CNN [6]      89.9  86.5  92.8  -
M2DRL [44]      -     87.2  93.3  -
NTS-Net [43]    91.4  87.5  93.9  -
Cross-X [9]     92.6  87.7  94.6  86.2
MGE-CNN [4]     -     88.5  93.9  86.7
ELP [61]        92.7  88.8  94.2  -
DCL [48]        93.0  87.8  94.5  86.0
WSDAN [2]       93.0  89.4  94.5  87.9
SFFF [5]        93.1  85.4  94.4  -
API-Net [62]    93.4  88.6  94.9  86.2
PMG [46]        93.4  89.6  95.1  -
CDSL-DCL [13]   93.5  88.6  94.9  -
CAL [59]        94.2  90.6  95.5  91.0
SIA-Net [35]    94.…
… CNNs on the CUB and NAB datasets, but CNNs achieve better performances on the AIR and CAR datasets.…”
Section: Methods
confidence: 99%
“…SCL aggregates representations of natural samples around the center point and increases the distance from manipulated samples to the center point, making it greater than from natural samples by a margin. CSDL [13] combines class centers and one-hot labels to generate soft labels. To measure the importance of samples in the same cluster, AdaMG [50] calculates the distances between these samples and their corresponding class centers.…”
Section: Class Center
confidence: 99%
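
The statement above describes two center-based mechanisms: CSDL mixes a class-center-derived distribution with the one-hot label to form soft labels, and AdaMG weights samples in a cluster by their distance to the class center. A minimal sketch under those descriptions; the mixing weight alpha and the exponential distance weighting are assumptions, not details from either cited paper.

import numpy as np

def soft_labels(one_hot, center_dist, alpha=0.7):
    """Convex combination of the one-hot target and a class-center-based
    distribution, in the spirit of the CSDL description (alpha is hypothetical)."""
    return alpha * one_hot + (1.0 - alpha) * center_dist

def center_distance_weights(features, labels, centers):
    """Per-sample importance from the Euclidean distance to the sample's own
    class center; an illustrative stand-in for AdaMG-style weighting."""
    d = np.linalg.norm(features - centers[labels], axis=1)
    w = np.exp(-d)                 # closer samples receive larger weights
    return w / w.sum()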