2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2018.00436

Learning a Discriminative Filter Bank Within a CNN for Fine-Grained Recognition

Abstract: Compared to earlier multistage frameworks using CNN features, recent end-to-end deep approaches for fine-grained recognition essentially enhance the mid-level learning capability of CNNs. Previous approaches achieve this by introducing an auxiliary network to infuse localization information into the main classification network, or a sophisticated feature encoding method to capture higher-order feature statistics. We show that mid-level representation learning can be enhanced within the CNN framework, by learnin…

Cited by 392 publications (238 citation statements)
References 49 publications
“…From the results presented in Tables I and II, we can observe that a sub-component level segmentation strategy, supported by the secondary fine-grain CNN classification of the DFL model [22], offers significantly superior anomaly detection performance (A: 97.91, TP: 98.20, FP: 3.50 - Table I) compared to an object level segmentation strategy overall (Table II). Furthermore, fine-grain CNN classification similarly offers the highest overall accuracy and lowest false positive rate (A: 89.77, FP: 3.88 - Table II) for object level segmentation.…”
Section: Discussion
confidence: 93%
“…Second stage binary classification via CNN performed less well overall with the sub-component segmentation strategy (lower accuracy (A) caused by a significantly higher false positive rate (FP) - Table II). The fine-grain classification model (DFL [22]) offers the lowest false positive rate and maximal accuracy for both segmentation strategies (Table I). We can deduce that increased levels of isolation via segmentation to the sub-component level improve the performance of the discriminative feature space learnt by the fine-grain technique [20]-[22], whilst more classical object classification CNN architectures perform only marginally better on objects than on sub-components (Table II).…”
Section: Discussion
confidence: 99%
“…Recently, some frameworks employ a more general architecture that can localize discriminative parts within an image without any extra supervision from part annotations, which makes fine-grained image classification more feasible in real-world scenarios. Wang et al [40] claimed that improving the mid-level convolutional feature representation can bring significant advantages for part-based fine-grained classification. This is accomplished by introducing a bank of discriminative filters into the classical convolutional neural network (CNN) architecture, and it can be trained in an end-to-end fashion.…”
Section: A. Fine-Grained Image Classification
confidence: 99%
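The mechanism described in the statement above — a bank of discriminative filters applied as 1×1 convolutions over a CNN feature map, with each filter's peak response acting as a part-detection score — can be sketched minimally in NumPy. This is an illustrative reconstruction, not the authors' implementation: the shapes, random values, and variable names (`filter_bank`, `part_scores`) are assumptions for demonstration, and training, non-linearities, and the paper's specific losses are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Conv feature map from a CNN backbone: C channels over an H x W spatial grid
# (random values here; in practice this comes from an intermediate conv layer).
C, H, W = 8, 7, 7
feature_map = rng.standard_normal((C, H, W))

# A bank of M discriminative filters, each a 1x1 convolution over the C input
# channels, i.e. one learned linear part detector per filter.
M = 4
filter_bank = rng.standard_normal((M, C))

# 1x1 convolution: project the C-vector at every spatial location through
# every filter, producing an M x H x W response map.
responses = np.einsum('mc,chw->mhw', filter_bank, feature_map)

# Global max pooling over space: each filter reports its strongest response,
# i.e. how confidently its discriminative part was detected anywhere in the image.
part_scores = responses.reshape(M, -1).max(axis=1)
print(part_scores.shape)  # (4,)
```

Because each filter sees only a single spatial location, a high `part_scores` entry indicates that the corresponding local patch matched that filter's learned pattern, which is what lets the filters behave as unsupervised part detectors.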
“…We can observe that feature channels, after going through the MC-Loss, become class-aligned and each focuses on different discriminative regions that roughly correspond to object parts. Capturing such subtle differences is the key to solving fine-grained image classification [40], [46], [47].…”
Section: Introduction
confidence: 99%