Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence 2020
DOI: 10.24963/ijcai.2020/210

Structured Probabilistic End-to-End Learning from Crowds

Abstract: End-to-end learning from crowds has recently been introduced as an EM-free approach to training deep neural networks directly from noisy crowdsourced annotations. It models the relationship between true labels and annotations with a specific type of neural layer, termed the crowd layer, which can be trained using pure backpropagation. Parameters of the crowd layer, however, can hardly be interpreted as annotator reliability, as compared with the more principled probabilistic approach. The lack of pr…
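
The crowd layer described in the abstract can be pictured as one trainable transition matrix per annotator, applied on top of the base classifier's softmax output, so that annotator noise is absorbed into parameters learned by ordinary backpropagation. Below is a minimal PyTorch sketch, not the authors' code; the class name, tensor shapes, and identity initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CrowdLayer(nn.Module):
    """Minimal sketch of a crowd layer (after Rodrigues & Pereira, 2018).

    Maps the base classifier's class probabilities to one noisy-label
    distribution per annotator via a trainable per-annotator matrix.
    Names and initialization choices here are assumptions, not the
    published implementation.
    """

    def __init__(self, n_classes: int, n_annotators: int):
        super().__init__()
        # One (n_classes x n_classes) transition matrix per annotator,
        # initialized to the identity (annotators start out "reliable").
        self.weights = nn.Parameter(
            torch.eye(n_classes).repeat(n_annotators, 1, 1)
        )

    def forward(self, class_probs: torch.Tensor) -> torch.Tensor:
        # class_probs: (batch, n_classes) -> (batch, n_annotators, n_classes)
        return torch.einsum("bc,acd->bad", class_probs, self.weights)
```

In this sketch the matrices start at the identity, but nothing constrains them afterwards; that is exactly the abstract's point that the learned parameters can hardly be read as annotator reliability, which motivates the paper's structured probabilistic treatment.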

Cited by 21 publications (20 citation statements) | References 7 publications

“…[flattened table fragment listing truth-inference methods: SpectralDS [31], EBCC [32], [33], BayesDGC [34]; MLNB [35], P-DS [36], ND-DS [36], MCMLD [37], MCMLD-OC [38]; RY_N [12]; discriminative: MV [39], KOS [40], [41], [42], PLAT [43], IEThresh [44], PV [45], [46], CATD [47], PM [48], [49], [50], MNLDP [51], GTIC [52], [53], LLA [54], CrowdLayer [55], SpeeLFC [56], MLCC [57]; Mean, Median; CATD_N [47], PM_N [48]] The machine learning and data mining community first realized the opportunity that crowdsourcing brought to supervised learning, i.e., obtaining class labels for training sets. To improve the quality of labels, both Sheng et al. [7] and Snow et al. [8] proposed a repeated-labeling scheme in 2008, which lets multiple crowd workers label the same objects; the true labels of the objects are then inferred from these multiple noisy labels.…”
Section: Data Fusion for Crowdsourcing (mentioning)
confidence: 99%
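
The repeated-labeling scheme quoted above reduces, in its simplest form, to majority voting over each object's noisy labels; many of the methods in the table fragment are more sophisticated aggregators that weight workers by estimated reliability. A minimal sketch with an assumed data layout:

```python
from collections import Counter

def majority_vote(annotations: dict[str, list[str]]) -> dict[str, str]:
    """Infer one label per object from repeated noisy labels by majority vote.

    `annotations` maps an object id to the labels assigned by different
    crowd workers (a hypothetical layout; ties are broken arbitrarily).
    """
    return {
        obj: Counter(labels).most_common(1)[0][0]
        for obj, labels in annotations.items()
    }

# Example: three workers label two images.
print(majority_vote({"img1": ["cat", "cat", "dog"], "img2": ["dog", "dog", "dog"]}))
# -> {'img1': 'cat', 'img2': 'dog'}
```
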
“…More sophisticatedly, Rodrigues and Pereira [55] proposed CrowdLayer, a model that trains deep neural networks to realize end-to-end learning from crowds (including label aggregation). Chen et al. [56] proposed SpeeLFC, which extends CrowdLayer with interpretable parameters and strengthens the correlation between workers and classes. GCN-Clean [77] uses graph convolutional networks (GCNs) to learn the relations between classes.…”
Section: B. Improving Aggregation with Learning Models (mentioning)
confidence: 99%
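
A CrowdLayer-style one-stage model of the kind discussed in this excerpt is trained end to end: the classifier and the per-annotator noise model are updated jointly against the raw annotations, masking out annotators who did not label an item. A sketch under assumed tensor shapes, reusing the hypothetical CrowdLayer module from the first code block; this is not the published implementation of CrowdLayer, SpeeLFC, or GCN-Clean.

```python
import torch
import torch.nn.functional as F

def train_step(classifier, crowd_layer, optimizer, x, ann, mask):
    """One joint update of classifier + crowd layer (sketch).

    x:    (batch, ...) inputs
    ann:  (batch, n_annotators) integer labels (arbitrary where unlabeled)
    mask: (batch, n_annotators) 1.0 where the annotator labeled the item
    """
    optimizer.zero_grad()
    probs = F.softmax(classifier(x), dim=-1)                  # (batch, n_classes)
    # Per-annotator predicted label distributions, one of several variants.
    ann_logprobs = F.log_softmax(crowd_layer(probs), dim=-1)  # (batch, n_annot, n_classes)
    nll = F.nll_loss(
        ann_logprobs.flatten(0, 1), ann.flatten(), reduction="none"
    )
    # Average the loss only over annotations that actually exist.
    loss = (nll * mask.flatten()).sum() / mask.sum().clamp_min(1.0)
    loss.backward()
    optimizer.step()
    return loss.item()
```
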
“…Rodrigues et al. (2018) propose an end-to-end method named Crowd Layer, which directly applies backpropagation to train deep neural networks from crowdsourced labeled data. Considering the lack of interpretability of Crowd Layer, Chen et al. (2020b) propose a structured end-to-end model that endows Crowd Layer with probabilistic interpretability. Chu et al. (2021) divide the confusion matrix into two components: a frequently-shared confusion matrix and an individually-specific confusion matrix.…”
Section: Related Work (mentioning)
confidence: 99%
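
The two-component decomposition attributed to Chu et al. (2021) can be sketched as a convex combination of one globally shared confusion matrix and per-annotator ones. The names below and the single scalar mixing weight per annotator are simplifying assumptions; the published model is richer (e.g., it can condition the mixing on the instance).

```python
import torch
import torch.nn as nn

class DecomposedConfusion(nn.Module):
    """Sketch: each annotator's confusion matrix is a convex combination of
    a frequently-shared matrix and an individually-specific one (after
    Chu et al. 2021). A scalar weight per annotator is a simplification."""

    def __init__(self, n_classes: int, n_annotators: int):
        super().__init__()
        self.shared = nn.Parameter(torch.eye(n_classes))
        self.individual = nn.Parameter(
            torch.eye(n_classes).repeat(n_annotators, 1, 1)
        )
        self.mix_logit = nn.Parameter(torch.zeros(n_annotators))

    def forward(self) -> torch.Tensor:
        # Returns (n_annotators, n_classes, n_classes) confusion matrices.
        w = torch.sigmoid(self.mix_logit)[:, None, None]
        return w * self.shared + (1.0 - w) * self.individual
```
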
“…Alternatively, more recently proposed one-stage methods (Luo et al. 2018; Yang et al. 2018a), such as CrowdLayer (CL) (Rodrigues and Pereira 2018) and AggNet (Albarqouni and Baur 2016), simultaneously infer the true labels while learning the parameters of the deep neural network and the confusion matrices of annotators. We note that most models in the LFC family are based on the assumption that all examples are benign (Cao et al. 2019; Chen et al. 2020b) and focus on producing accurate classifiers from estimates of the ground-truth labels inferred from the noisy labels of crowd workers. Unfortunately, recent studies (Goodfellow, Shlens, and Szegedy 2015; Dong et al. 2020) have found that even in the ideal case when ground-truth labels are known, the trained classifier can perform rather poorly in the presence of adversarial examples: small changes to images can cause computer vision models to make mistakes such as identifying a school bus as an ostrich (Carlini and Wagner 2017).…”
Section: Introduction (mentioning)
confidence: 99%
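
The adversarial-example failure mode described in this last excerpt is classically demonstrated with the fast gradient sign method of Goodfellow, Shlens, and Szegedy (2015): perturb the input by epsilon times the sign of the loss gradient. A generic sketch, independent of any of the cited crowd models:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon: float = 0.03):
    """Fast gradient sign method (Goodfellow, Shlens & Szegedy, 2015).

    Returns x perturbed by epsilon * sign(grad_x loss): a small change
    that often flips the prediction of an otherwise accurate classifier.
    Assumes `model` returns logits and pixels are scaled to [0, 1].
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach().clamp(0.0, 1.0)
```

For a well-trained image classifier, even a perturbation of epsilon = 0.03 on [0, 1]-scaled pixels is frequently enough to change the predicted class, which is the phenomenon the excerpt refers to.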