2009
DOI: 10.1007/978-3-642-04274-4_91

Learning SVMs from Sloppily Labeled Data

Cited by 45 publications (30 citation statements)
References 3 publications
“…For instance, in [47] the hinge loss is replaced by a related loss function that takes into account the amount of noise in the data. With this loss function the optimization problem becomes non-convex.…”
Section: Robust Learning Algorithms
confidence: 99%
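The noise-aware replacement of the hinge loss described in the excerpt above can be sketched generically. The following is a minimal illustration, not the paper's exact formulation: an unbiased correction of the hinge loss under class-conditional label noise, where `rho_pos` and `rho_neg` are assumed known flip rates. The subtracted mirror-image term is what makes the resulting optimization problem non-convex, as the excerpt notes.

```python
def hinge(t):
    """Standard hinge loss on the margin t = y * f(x)."""
    return max(0.0, 1.0 - t)

def noise_corrected_hinge(t, rho_pos, rho_neg):
    """Corrected hinge loss when labels are flipped with (assumed known)
    rates rho_pos and rho_neg; the -hinge(-t) term breaks convexity."""
    assert rho_pos + rho_neg < 1.0
    return ((1.0 - rho_neg) * hinge(t)
            - rho_pos * hinge(-t)) / (1.0 - rho_pos - rho_neg)

# With zero noise the correction reduces to the plain hinge loss:
print(noise_corrected_hinge(0.5, 0.0, 0.0))  # 0.5

# With 20% symmetric noise, convexity fails: the value at the midpoint
# t = -1 lies strictly above the chord between t = -2 and t = 0.
mid = noise_corrected_hinge(-1.0, 0.2, 0.2)
chord = 0.5 * (noise_corrected_hinge(-2.0, 0.2, 0.2)
               + noise_corrected_hinge(0.0, 0.2, 0.2))
print(mid > chord)  # True
```

The convexity check at the end is a direct numerical witness of the non-convexity mentioned in the citation statement.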
“…For instance, the SVMs described in [47,37] are tested in very specific cases: asymmetric noise [47] or data in which class probabilities are available [37]. An extensive empirical comparison of the different robust learning methods would be of great interest in the field.…”
Section: Robust Learning Algorithms
confidence: 99%
“…Finally, the comparison with StPMKL will give us insights into the effects of our simple modelling assumptions about the label flipping process in comparison with the model-free approach in StPMKL. We should mention that there exist label-noise robust versions of SVM [41,2] that could be a subject of further comparisons. However, these works again assume knowledge of the label noise probability without providing an algorithm for automatically inferring the noise rate from data, and [41] is limited to linear kernels, which means no procedure is proposed for the highly non-trivial problem of setting kernel parameters under label noise.…”
Section: Comparisons With State-of-the-art Classifiers and Other Labe
confidence: 99%
“…We should mention that there exist label-noise robust versions of SVM [41,2] that could be a subject of further comparisons. However, these works again assume knowledge of the label noise probability without providing an algorithm for automatically inferring the noise rate from data, and [41] is limited to linear kernels, which means no procedure is proposed for the highly non-trivial problem of setting kernel parameters under label noise. Although for these reasons a direct comparison with these methods would not be particularly meaningful, we believe that our approach for estimating these crucial parameters could be adapted to extend those methods as well in future developments.…”
Section: Comparisons With State-of-the-art Classifiers and Other Labe
confidence: 99%
“…Deep learning algorithms can be extended to handle label noise with additional network layers for noise modeling [45]. Robust kernels can be learned from the data to improve the effectiveness of kernel-based methods under label noise [7], and robust SVM methods have also been considered [44,6].…”
Section: Related Work
confidence: 99%
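The "additional network layers for noise modeling" mentioned in the excerpt above typically take the form of a noise adaptation layer: the base network's clean-label class probabilities are passed through a (learned) label-transition matrix to produce the distribution of the observed noisy labels. The sketch below is a framework-free illustration under that assumption; the 2-class transition matrix `T`, with `T[i][j] = p(noisy label j | clean label i)`, is hypothetical.

```python
def noise_adaptation_layer(clean_probs, T):
    """Map clean-label probabilities through a row-stochastic
    transition matrix T: p(noisy = j) = sum_i p(clean = i) * T[i][j]."""
    k = len(T)
    return [sum(clean_probs[i] * T[i][j] for i in range(k))
            for j in range(k)]

# Hypothetical transition matrix: 10% of class-0 and 20% of class-1
# labels are flipped.
T = [[0.9, 0.1],
     [0.2, 0.8]]
clean = [0.7, 0.3]  # softmax output of the base network
noisy = noise_adaptation_layer(clean, T)
print(noisy)  # distribution of observed noisy labels, approx. [0.69, 0.31]
```

During training, the loss is computed on the noisy-label output so that `T` absorbs the corruption; at test time the layer is dropped and the clean-label probabilities are used directly.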