2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00842

Learning a Mixture of Granularity-Specific Experts for Fine-Grained Categorization

Abstract: We aim to divide the problem space of fine-grained recognition into some specific regions. To achieve this, we develop a unified framework based on a mixture of experts. Due to limited data available for the fine-grained recognition problem, it is not feasible to learn diverse experts by using a data division strategy. To tackle the problem, we promote diversity among experts by combining an expert gradually-enhanced learning strategy and a Kullback-Leibler divergence based constraint. The strategy learns new ex…
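To make the abstract's diversity constraint concrete, here is a minimal PyTorch sketch of a KL-divergence penalty between a newly added expert and a previously trained one. The function name, the sign convention, and the loss weight are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def kl_diversity_penalty(new_logits: torch.Tensor,
                         prev_logits: torch.Tensor) -> torch.Tensor:
    """Diversity term between two experts' class posteriors.

    F.kl_div(input, target) computes KL(target || input) when `input`
    holds log-probabilities and `target` holds probabilities. Negating
    it turns gradient descent into divergence maximization, pushing the
    new expert's predictions away from the frozen previous expert's.
    """
    new_log_p = F.log_softmax(new_logits, dim=1)
    prev_p = F.softmax(prev_logits.detach(), dim=1)  # earlier expert is frozen
    return -F.kl_div(new_log_p, prev_p, reduction="batchmean")

# Hypothetical training step: task loss plus a weighted diversity term.
# loss = F.cross_entropy(new_logits, labels) \
#        + 0.1 * kl_diversity_penalty(new_logits, prev_logits)
```

Maximizing the divergence (by minimizing its negation) is one common way to keep experts from collapsing onto the same predictions; the gradually-enhanced learning strategy, only hinted at in the truncated abstract, is not reproduced here.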

Cited by 170 publications (95 citation statements) · References 46 publications
“…Additionally, all of the results were obtained fairly, without external information such as annotations or bounding boxes. As shown in Table 1, the proposed framework outperforms MGE-CNN [43], which combines multiple experts' outputs with a gating network, by 0.73% on the same ResNet-50 backbone. The result of the proposed method has been confirmed to be 1.13% higher than the second-highest result, iSQRT-COV [36].…”
Section: Comparison With State-of-the-Art Methods
confidence: 93%
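The quote above credits MGE-CNN with combining many experts' outputs through a gating network. As background, a generic mixture-of-experts gate can be sketched as follows; the class name, tensor shapes, and the choice of gating on a shared feature vector are assumptions for illustration, not MGE-CNN's actual implementation.

```python
import torch
import torch.nn as nn

class GatedMixture(nn.Module):
    """Illustrative gating over K experts' class logits (not MGE-CNN's code)."""

    def __init__(self, feat_dim: int, num_experts: int):
        super().__init__()
        self.gate = nn.Linear(feat_dim, num_experts)  # one score per expert

    def forward(self, expert_logits: torch.Tensor,
                gate_features: torch.Tensor) -> torch.Tensor:
        # expert_logits: (batch, K, classes); gate_features: (batch, feat_dim)
        weights = torch.softmax(self.gate(gate_features), dim=1)  # (batch, K)
        # Weighted sum of the experts' predictions -> (batch, classes).
        return (weights.unsqueeze(-1) * expert_logits).sum(dim=1)
```

A softmax gate of this kind lets the network weight each granularity-specific expert per input; how MGE-CNN parameterizes its gate is not specified in the quoted text.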
“…Content may change prior to final publication. The quoted table lists method [ref], backbone, and accuracy (%):
[33], ResNet-50, 84.7
MAMC [27], ResNet-101, 86.5
HBPASM [38], ResNet-34, 86.8
DFL-CNN(1-scale) [35], ResNet-50, 87.4
DBTNet-50 [39], ResNet-50, 87.5
Cross-X [40], ResNet-50, 87.7
DCL [21], ResNet-50, 87.8
TASN [41], ResNet-50, 87.9
iSQRT-COV [36], ResNet-50, 88.1
S3N [42], ResNet-50, 88.5
MGE-CNN [43], ResNet-50, 88.5
MGE-CNN [43], ResNet-…
A second group of results follows:
iSQRT-COV [36], ResNet-50, 92.8
MAMC [27], ResNet-101, 93.0
DFL-CNN(1-scale) [35], ResNet-50, 93.1
MGE-CNN [43], ResNet-101, 93.6
TASN [41], ResNet-50, 93.8
HBPASM [38], ResNet-34, 93.8
MaxEnt [44], ResNet-50, 93.85
MGE-CNN [43], ResNet-50, 93.9
DBTNet-50 [39], ResNet-50, 94.1
DCL [21], ResNet-50, 94.5
Cross-X [40], ResNet-50, …”
Section: The Positive Set
confidence: 99%
“…Table 2 gives the performance comparison of the proposed method with several baseline models [1, 2, 14, 20, 53, 54, 73]. We report the performance of the proposed method when combined with these baseline models.…”
Section: Methods
confidence: 99%
“…Girshick et al. [66] used the feature hierarchy, while Xie et al. [67] leveraged hyperclass correlation. The combination of semantic representation and multiview information was also proven effective for classification [68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81].…”
Section: Related Work
confidence: 99%
“…√ indicates YES and a blank means NO. The quoted table lists method [ref], extra annotation (√), and accuracy (%):
… 82.8
Mask-CNN [44] √ 87.3
STN [19] 84.1
Bilinear-CNN [28] 84.1
Low-rank Bilinear [21] 84.2
RA-CNN [8] 85.3
HBP [49] 87.1
DFL-CNN [42] 87.4
NTS-Net [48] 87.5
DCL [6] 87.8
TASN [56] 87.9
WS-BAN [15] 88.8
Ge et al. [10] 90.4
Triplet-A [7] √ (manual labour) 80.7
Multi-grained [34] 89.5
SDR [1] 90.5
ResNet-50 [11] 92.4
RIIR [46] 94.0
NAC-CNN [38] 95.3
MGE-CNN [51] 95.9
PBC [16] 96.1
CVL [12] 96.2
TEB (ours) 96.6…”
Section: Comparison With State-of-the-Art Methods
confidence: 99%