2019
DOI: 10.1609/aaai.v33i01.3301273

Learning from Web Data Using Adversarial Discriminative Neural Networks for Fine-Grained Classification

Abstract: Fine-grained classification focuses on recognizing the subordinate categories within a single field, which requires a large number of labeled images, yet labeling such images is expensive. Utilizing web data has become an attractive option for meeting the training-data demands of convolutional neural networks (CNNs), especially when well-labeled data is scarce. However, directly training on such easily obtained images often leads to unsatisfactory performance due to factors such as noisy labels. This has be…

Cited by 26 publications (14 citation statements) | References 32 publications
“…In the Weakly Supervised Data Augmentation Network (WS-DAN) [24], high-quality features are kept and useless features are dropped. Another direction for augmenting the training set is a Web-supervised network [29][30][31] that learns directly from real-world Web images, which greatly increases the size of the training set. A challenge with this approach is eliminating irrelevant, noisy images that are harmful to training.…”
Section: Methods Using Data Augmentation
confidence: 99%
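The noise-elimination challenge mentioned in the statement above is commonly handled by scoring crawled images with a model trained on the clean labeled set and discarding low-confidence or mislabeled ones. The sketch below is a minimal, generic example of that idea, not the pipeline of [29][30][31]; the model, the data loader interface, and the threshold are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def filter_web_images(model, web_loader, confidence_threshold=0.8):
    """Return indices of crawled Web images that are likely labeled correctly.

    `web_loader` is assumed to yield (images, queried_labels, indices), where
    the queried label is the class used to crawl the image and may be noisy.
    """
    model.eval()
    kept = []
    for images, queried_labels, indices in web_loader:
        probs = F.softmax(model(images), dim=1)
        confidence, prediction = probs.max(dim=1)
        # Keep an image only if a model trained on clean data agrees with the
        # crawl label and is sufficiently confident; otherwise treat it as noise.
        mask = (prediction == queried_labels) & (confidence >= confidence_threshold)
        kept.extend(indices[mask].tolist())
    return kept
```

The retained indices can then be merged with the clean training set, trading a smaller but cleaner pool of web images against the raw crawl.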
“…Sparse data has become the bottleneck of FGVC in both academic research and industrial applications. Recently, researchers' interest has shifted to saving expert effort during training, e.g., visual recognition with small samples [25,26,47], web-supervised learning [38,45], and leveraging lay persons' annotations [10]. In this paper, we introduce a new lens on FGVC and propose a semi-supervised framework aimed specifically at FGVC tasks with out-of-distribution data.…”
Section: Fine-grained Visual Classification
confidence: 99%
“…It has been widely applied to mitigate the distribution discrepancy between different domains (tasks). The essence of adversarial domain adaptation [9], [11], [14], [25]-[27] is to train a domain discriminator alongside the feature extractor, with the two networks optimized in an adversarial manner. The domain discriminator strives to distinguish the source-domain representations from the target-domain ones, while the feature extractor, acting as the generator in a typical GAN setting, tries to learn domain-invariant features by fooling the discriminator.…”
Section: B Adversarial Domain Adaptation
confidence: 99%
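As a concrete illustration of the adversarial scheme described in this statement, below is a minimal PyTorch sketch that uses a gradient reversal layer (a DANN-style trick); the cited works [9], [11], [14], [25]-[27] may instead alternate discriminator and extractor updates, and the module sizes and names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips (and scales) gradients on backward,
    so minimizing the domain loss w.r.t. the feature extractor actually
    maximizes it, i.e., the extractor learns to fool the discriminator."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

# G_f: maps (flattened) images to features; sizes are placeholders.
feature_extractor = nn.Sequential(
    nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU()
)
# G_d: predicts whether a feature comes from the source or the target domain.
domain_discriminator = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2)
)

def domain_loss(source_x, target_x, lambd=1.0):
    """Adversarial domain loss: the discriminator tries to tell the domains
    apart, while the reversed gradient pushes the extractor toward
    domain-invariant features."""
    feats = feature_extractor(torch.cat([source_x, target_x]))
    reversed_feats = GradientReversal.apply(feats, lambd)
    logits = domain_discriminator(reversed_feats)
    domain_labels = torch.cat([
        torch.zeros(source_x.size(0), dtype=torch.long),  # 0 = source
        torch.ones(target_x.size(0), dtype=torch.long),   # 1 = target
    ])
    return nn.functional.cross_entropy(logits, domain_labels)
```

Backpropagating this loss (together with the usual classification loss on labeled source data) updates both networks in a single pass: the discriminator descends on the domain loss while the reversed gradient makes the extractor ascend on it.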
“…That is, aligning the distributions $P(f_s) = \{G_f(x;\theta_f) \mid x \sim P(X_s)\}$ and $P(f_t) = \{G_f(x;\theta_f) \mid x \sim P(X_t)\}$. We follow the idea of adversarial domain adaptation, as in [11], [14], [25]-[27]. Specifically, we train a domain discriminator $G_d$ (parameterized by $\theta_d$) along with the feature extractor $G_f$ in a min-max manner.…”
Section: B Domain-level Alignment
confidence: 99%
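To make the min-max game in the preceding statement concrete, one common way to write the domain-level alignment objective is the GAN-style value function below, where $G_d(\cdot)$ is taken to output the probability that a feature comes from the source domain; this is a standard formulation used by adversarial domain adaptation methods, not necessarily the exact loss of the cited paper.

```latex
\min_{\theta_f} \; \max_{\theta_d} \;
  \mathbb{E}_{x \sim P(X_s)} \big[ \log G_d\big(G_f(x;\theta_f);\theta_d\big) \big]
+ \mathbb{E}_{x \sim P(X_t)} \big[ \log \big( 1 - G_d\big(G_f(x;\theta_f);\theta_d\big) \big) \big]
```

The discriminator $G_d$ maximizes this value by telling the two domains apart, while the feature extractor $G_f$ minimizes it, pushing $P(f_s)$ and $P(f_t)$ toward alignment; in practice the saddle point is typically approached by alternating updates or with a gradient reversal layer, as in the sketch following the previous statement.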