2009
DOI: 10.1016/j.patcog.2009.03.027
Robust supervised classification with mixture models: Learning from data with uncertain labels

Abstract: In the supervised classification framework, human supervision is required for labeling a set of learning data which are then used for building the classifier. However, in many applications, human supervision is either imprecise, d…

Published in Pattern Recognition, Elsevier, 2009, 42 (11), pp. 2649–2658.

Cited by 119 publications (72 citation statements)
References 23 publications
“…One is to filter the noisy label examples to either remove or correct them [109]. Multiple techniques have been employed to detect noisy labels, such as large margin classifiers [110], nearest neighborhood verification [111], committee voting [112], cross validation [113], and clustering algorithms [114,115]. Another type learns directly from the weakly labeled dataset by considering the mechanism of label noise.…”
Section: Noisy Label Learning
confidence: 99%
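The filtering strategies listed in the quote above can be illustrated with a minimal sketch. This is a hypothetical example of nearest-neighbour verification (one of the cited techniques), not the method of any paper referenced here: a point whose k nearest neighbours mostly disagree with its own label is flagged as potentially mislabelled.

```python
from collections import Counter

def flag_noisy_labels(points, labels, k=3):
    """Return indices of points whose label disagrees with the
    majority label of their k nearest neighbours (squared Euclidean)."""
    flagged = []
    for i, p in enumerate(points):
        # Squared distances to every other point, smallest first.
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(p, q)), j)
            for j, q in enumerate(points) if j != i
        )
        neighbours = [labels[j] for _, j in dists[:k]]
        majority, count = Counter(neighbours).most_common(1)[0]
        if majority != labels[i] and count > k // 2:
            flagged.append(i)
    return flagged

# Two well-separated clusters; index 3 carries a flipped label.
pts = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1),
       (5.0, 5.0), (5.1, 5.0), (5.0, 5.1), (5.1, 5.1)]
lbls = [0, 0, 0, 1, 1, 1, 1, 1]
print(flag_noisy_labels(pts, lbls, k=3))  # → [3]
```

After flagging, the suspect examples can either be removed or have their labels corrected before training, as the quote describes.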
“…However, it is difficult to judge labeling errors in subjective domains, because the absolute ground truth is unknown. In [9], the authors propose to use an unsupervised mixture model into which the supervised information is introduced, so as to compare the supervised information given by the learning data with an unsupervised modelling. In this model, the probability that the jth cluster belongs to the ith class is introduced to measure the consistency between classes and clusters.…”
Section: Related Work
confidence: 99%
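The consistency idea in the quote above can be sketched as follows. This is a hedged simplification, not the cited paper's actual model: given hard cluster assignments from any unsupervised method and the (possibly noisy) supplied labels, P(class i | cluster j) is estimated as the fraction of cluster j's points carrying label i. The function name and the hard-assignment simplification are assumptions; the paper works with a probabilistic mixture model.

```python
from collections import Counter, defaultdict

def cluster_class_probs(clusters, labels):
    """Map cluster id -> {class label: estimated P(class | cluster)}."""
    counts = defaultdict(Counter)
    for c, y in zip(clusters, labels):
        counts[c][y] += 1
    return {
        c: {y: n / sum(ctr.values()) for y, n in ctr.items()}
        for c, ctr in counts.items()
    }

clusters = [0, 0, 0, 0, 1, 1, 1, 1]
labels   = [0, 0, 0, 1, 1, 1, 1, 1]
probs = cluster_class_probs(clusters, labels)
# Cluster 0 is 75% class 0; the 25% disagreement hints at label noise.
print(probs[0])  # → {0: 0.75, 1: 0.25}
```

A cluster whose label distribution is far from pure signals inconsistency between the supervised labels and the unsupervised structure, which is the quantity the quoted approach uses to detect uncertain labels.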
“…However, when class label noise is present, it becomes unclear why cross-validation would be a good approach, since all candidate models will then be validated against noisy class labels. The issue has also been briefly discussed in [24,6]. In [24], the authors resort to using a ‘trusted validation set’ to select optimal kernel parameters.…”
Section: Introduction
confidence: 99%