2017
DOI: 10.1109/tcsvt.2016.2589879
Cost-Effective Active Learning for Deep Image Classification

Abstract: Recent successes in learning-based image classification heavily rely on a large number of annotated training samples, which may require considerable human effort. In this paper, we propose a novel active learning framework, which is capable of building a competitive classifier with an optimal feature representation from a limited amount of labeled training instances in an incremental learning manner. Our approach advances existing active learning methods in two aspects. First, we incorpor…

Cited by 594 publications (397 citation statements)
References 35 publications
“…Self-training has been applied to tasks in natural language processing including word-sense disambiguation [2], noun identification [23] and parsing [3], in addition to tasks in computer vision such as object detection [24,25] and image classification [26]. In automatic speech recognition, self-training-style approaches have seen some success in hybrid, alignment-based speech systems.…”
Section: Related Work
confidence: 99%
“…In [67] and [68], uncertainty-based active learning criteria for deep models are proposed. The authors offer several metrics to estimate model uncertainty, including least confidence, margin or entropy sampling.…”
Section: Active Learning For Deep Architectures
confidence: 99%
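The three uncertainty metrics named in the statement above (least confidence, margin, and entropy sampling) can be sketched as follows. This is a minimal illustration of the standard definitions, not code from the cited papers; the function names and the toy probability vectors are assumptions for demonstration. Each score is computed from a model's softmax output for one sample, with a higher score indicating a more uncertain sample and thus a better candidate to send for labeling.

```python
import math

def least_confidence(probs):
    # 1 minus the probability of the most likely class
    return 1.0 - max(probs)

def margin(probs):
    # Negated gap between the top two class probabilities
    # (a small gap means the model is torn between two classes)
    top2 = sorted(probs, reverse=True)[:2]
    return -(top2[0] - top2[1])

def entropy(probs):
    # Shannon entropy of the predictive distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

# Illustrative softmax outputs (not from the paper): one confident
# prediction and one ambiguous one over three classes.
confident = [0.90, 0.05, 0.05]
uncertain = [0.40, 0.35, 0.25]

# All three criteria rank the ambiguous sample as more uncertain.
assert least_confidence(uncertain) > least_confidence(confident)
assert margin(uncertain) > margin(confident)
assert entropy(uncertain) > entropy(confident)
```

In an active learning loop, one of these scores is computed for every unlabeled sample and the top-scoring samples are selected for annotation; the three criteria differ mainly in how much of the predictive distribution they consider (top-1, top-2, or all classes).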
“…In addition, most existing incremental approaches suffer from noisy samples or outliers during model updating. In this work, we propose a novel active self-paced learning framework (ASPL) to handle the above difficulties, which combines the strengths of two recently rising techniques: active learning (AL) [12], [13] and self-paced learning (SPL) [14], [15], [16]. In particular, our framework operates in a "Cost-less-Earn-more" manner: pursuing as high a performance as possible while reducing costs.…”
Section: Introduction
confidence: 99%