2018
DOI: 10.1109/tpami.2017.2652459
Active Self-Paced Learning for Cost-Effective and Progressive Face Identification

Abstract: This paper aims to develop a novel cost-effective framework for face identification, which progressively maintains a batch of classifiers with the increasing face images of different individuals. By naturally combining two recently rising techniques: active learning (AL) and self-paced learning (SPL), our framework is capable of automatically annotating new instances and incorporating them into training under weak expert recertification. We first initialize the classifier using a few annotated samples…
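The abstract describes a loop in which high-confidence predictions are absorbed automatically (the SPL side) while the remaining uncertain instances go to a human annotator (the AL side). A minimal sketch of one such selection round is shown below; the function names and the two confidence thresholds are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of one selection round combining SPL-style
# pseudo-labeling with AL-style expert queries. Thresholds `high`
# and `low` are assumed values for illustration.

def one_round(predict_proba, unlabeled, oracle, high=0.9, low=0.6):
    """Split an unlabeled pool into pseudo-labeled and expert-labeled sets."""
    pseudo, queried = [], []
    for x in unlabeled:
        probs = predict_proba(x)  # mapping: class label -> probability
        label, conf = max(probs.items(), key=lambda kv: kv[1])
        if conf >= high:
            pseudo.append((x, label))       # SPL: trust the easy samples
        elif conf < low:
            queried.append((x, oracle(x)))  # AL: ask the expert
        # samples between the thresholds are deferred to a later round
    return pseudo, queried
```

After each round, both sets would be merged into the training data and the classifier retrained, so the pool of confidently labeled instances grows progressively at low annotation cost.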

Cited by 151 publications (90 citation statements)
References 32 publications
“…Do these methods, developed with small models and data, scale well to large deep networks [23,17] and large datasets? Fortunately, the uncertainty approach [28,55] for classification tasks still performs well despite its simplicity. However, a task-specific design is necessary for other tasks, since it utilizes network outputs.…”
Section: Related Research
confidence: 99%
“…Kumar et al [50] propose to determine the order of training samples by how easy they are. Many other researchers [51], [52], [53], [54], [55] provide more theoretical analyses of this progressive paradigm. Some studies also apply a similar idea of the progressive paradigm [56], [57], [58].…”
Section: Progressive Paradigm
confidence: 99%
“…Self-learning: In the literature, a few works [18], [19], [29], [33], [37]- [39] have attempted to leverage samples with high prediction confidence in the context of self-training. Chen et al [37] proposed slowly adding both target features and instances in which the current model is most confident to the training set for domain adaptation.…”
Section: Related Work
confidence: 99%
“…By sequentially optimizing the model while gradually controlling the learning pace via the SPL regularizer, labeled samples can be incrementally added into the training process in a self-paced manner. Inspired by these techniques, several approaches [18], [19] have been developed to improve AL for image classification by introducing a so-called pseudo-labeling strategy, which automatically selects unlabeled samples with high prediction confidence and iteratively assigns pseudo-labels to them in a self-paced manner.…”
Section: Introduction
confidence: 99%
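The "SPL regularizer" mentioned above is, in its common hard form, a binary weight that admits only samples whose loss falls below a pace threshold, which is then relaxed over rounds so harder samples enter later. A minimal sketch of that mechanism follows; the growth factor and function names are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of hard self-paced weighting: sample i is admitted
# (weight 1) only if its loss is below the current pace threshold `lam`.
# The multiplicative schedule is an assumed illustrative choice.

def spl_weights(losses, lam):
    """Hard SPL regularizer: v_i = 1 if loss_i < lam else 0."""
    return [1 if loss < lam else 0 for loss in losses]

def pace_schedule(lam, growth=1.3):
    """Relax the pace so harder samples are admitted in later rounds."""
    return lam * growth
```

Alternating between reweighting the samples with `spl_weights` and retraining on the admitted subset yields the easy-to-hard curriculum that the self-paced literature describes.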