2015
DOI: 10.48550/arxiv.1506.01520
Preprint

An Average Classification Algorithm

Cited by 1 publication (2 citation statements)
References: 0 publications

“…That is, training on the contaminated data without modifying the loss still minimizes the clean BER. This result has been previously observed for the 0/1 loss [25] and general symmetric losses [42,6]. The above argument gives a simple derivation from Prop.…”
Section: Symmetric Losses (supporting)
Confidence: 84%
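
The quoted statement compresses the key step. A minimal LaTeX sketch of the underlying derivation, assuming the standard mutual contamination parametrization (the symbols α, β, and K below are supplied conventions, not taken from the quoted text):

```latex
% Clean balanced risk for a loss \ell and decision function f:
%   R_\ell(f) = \tfrac12\,\mathbb{E}_P[\ell(f(X),+1)] + \tfrac12\,\mathbb{E}_N[\ell(f(X),-1)]
% Mutual contamination model (MCM): the observed class-conditionals are mixtures
%   \tilde P = (1-\alpha)P + \alpha N, \qquad \tilde N = (1-\beta)N + \beta P,
% with \alpha + \beta < 1. If \ell is symmetric, i.e. \ell(t,+1)+\ell(t,-1)=K, then
\begin{align*}
\tilde R_\ell(f)
 &= \tfrac12\big[(1-\alpha)\,\mathbb{E}_P\,\ell(f,+1) + \alpha\,\mathbb{E}_N\,\ell(f,+1)\big]
  + \tfrac12\big[(1-\beta)\,\mathbb{E}_N\,\ell(f,-1) + \beta\,\mathbb{E}_P\,\ell(f,-1)\big] \\
 &= (1-\alpha-\beta)\,R_\ell(f) + \tfrac{(\alpha+\beta)K}{2},
\end{align*}
% using \ell(f,+1) = K - \ell(f,-1) on the two contaminating terms. Since
% 1-\alpha-\beta > 0, the contaminated risk is a positive affine transform of the
% clean balanced risk, so minimizing \ell on contaminated data also minimizes the
% clean BER, as the quote states.
```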
“…1 enables a simple proof of a known result concerning symmetric losses, i.e., losses for which ℓ(t, 1) + ℓ(t, −1) is constant, such as the sigmoid loss. In particular, symmetric losses are immune to label noise under MCMs, meaning the original loss ℓ can be minimized on data drawn from the MCM and still optimize the clean BER [25,42,6].…”
Section: Mutual Contamination Models and Unbiased Losses (mentioning)
Confidence: 99%
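
As an illustration of the symmetry condition named in the quote, a small Python sketch; the specific form ℓ(t, y) = 1/(1 + exp(y·t)) is one common convention for the sigmoid loss and is assumed here, not taken from the cited papers:

```python
import numpy as np

def sigmoid_loss(t, y):
    """One common form of the sigmoid loss: l(t, y) = 1 / (1 + exp(y * t)).

    Assumed convention for illustration: the loss decreases in the margin
    y * t, as a classification surrogate loss should.
    """
    return 1.0 / (1.0 + np.exp(y * t))

# Symmetry check: l(t, +1) + l(t, -1) should equal a constant (here K = 1)
# for every score t, which is exactly the condition quoted above.
t = np.linspace(-10.0, 10.0, 1001)
sums = sigmoid_loss(t, +1) + sigmoid_loss(t, -1)
assert np.allclose(sums, 1.0), "sigmoid loss is not symmetric under this form"
print("l(t,+1) + l(t,-1) =", sums[0])  # -> 1.0 for all t
```

With K = 1, the affine-transform argument sketched earlier applies directly, which is why the sigmoid loss is cited as an example of a loss immune to label noise under MCMs.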