2021
DOI: 10.3390/app11125409

Regularized Chained Deep Neural Network Classifier for Multiple Annotators

Abstract: The increasing popularity of crowdsourcing platforms, e.g., Amazon Mechanical Turk, changes how datasets for supervised learning are built. In these cases, instead of having datasets labeled by one source (which is supposed to be an expert who provides the absolute gold standard), databases holding multiple annotators are provided. However, most state-of-the-art methods devoted to learning from multiple experts assume that the labeler’s behavior is homogeneous across the input feature space. Besides, independe…

Cited by 6 publications (6 citation statements)
References 30 publications
“…Yet, while such approaches code the annotators’ parameters as fixed points, we model them as functions to consider dependencies between the input features and the labelers’ behavior. GCECDL is also similar to the works in [ 14 , 43 ]. Both approaches model the annotators’ performance as a function of the input instances and consider the interdependencies among the labelers.…”
Section: Literature Review
confidence: 76%
“…This article introduced a Generalized Cross-Entropy-based Chained Deep Learning model, termed GCECDL, to deal with multiple-annotator scenarios. Our method follows the ideas of [ 43 , 46 ], where each parameter is modeled in a multi-labeler likelihood by using the outputs of a deep neural network. Nonetheless, unlike [ 43 ]—where a CCE-based loss was used—we also introduced a noise-robust loss function based on GCE [ 42 ] as a tradeoff between MAE and CCE.…”
Section: Discussion
confidence: 99%
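The GCE loss mentioned in the statement above ([42], Zhang & Sabuncu) interpolates between categorical cross-entropy and MAE through a single exponent q. A minimal sketch of that loss, not the cited authors' implementation:

```python
import math

def gce_loss(probs, labels, q=0.7):
    """Generalized cross-entropy: L_q(p, y) = (1 - p_y^q) / q,
    averaged over the batch.
    As q -> 0 it recovers categorical cross-entropy (-log p_y);
    at q = 1 it becomes the MAE-like loss (1 - p_y)."""
    total = 0.0
    for p_row, y in zip(probs, labels):
        total += (1.0 - p_row[y] ** q) / q
    return total / len(labels)

# Illustrative batch: one sample with predicted class probabilities.
probs = [[0.9, 0.1]]
labels = [0]
print(gce_loss(probs, labels, q=1.0))   # MAE-like: 1 - 0.9
print(gce_loss(probs, labels, q=1e-6))  # approaches -log(0.9)
```

Intermediate q values trade CCE's fast convergence against MAE's robustness to label noise, which is the tradeoff the citing work exploits for noisy multi-annotator labels.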
“…In addition, a non-parametric Friedman test was computed for statistical significance. The null hypothesis was that all algorithms perform equally [54,55]. For concrete testing, we fixed the significance threshold as p-value < 0.05.…”
Section: Semantic Segmentation Results
confidence: 99%
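The Friedman test described in the statement above ranks the competing algorithms within each dataset and asks whether their mean ranks differ. A minimal pure-Python sketch of the test statistic (the data below are illustrative, not results from the paper):

```python
def friedman_statistic(scores):
    """Friedman chi-square statistic.
    scores: N rows (one per dataset), each holding k algorithm scores.
    Ranks algorithms within each row (rank 1 = best, ties averaged),
    then computes chi2_F = 12/(N*k*(k+1)) * sum(R_j^2) - 3*N*(k+1),
    where R_j is the rank sum of algorithm j."""
    N, k = len(scores), len(scores[0])
    rank_sums = [0.0] * k
    for row in scores:
        order = sorted(range(k), key=lambda j: -row[j])  # best first
        i = 0
        while i < k:
            j = i
            # extend over a group of tied scores
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average rank within the tie group
            for t in range(i, j + 1):
                rank_sums[order[t]] += avg_rank
            i = j + 1
    return 12.0 / (N * k * (k + 1)) * sum(R * R for R in rank_sums) \
        - 3.0 * N * (k + 1)

# Illustrative: algorithm A beats B beats C on both of two datasets.
print(friedman_statistic([[0.9, 0.8, 0.7], [0.85, 0.8, 0.6]]))
```

The statistic is compared against a chi-square distribution with k-1 degrees of freedom (critical value about 5.99 for k=3 at the p < 0.05 threshold the citing work uses); `scipy.stats.friedmanchisquare` returns the statistic and p-value directly.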