2021 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm51629.2021.00065
Truth Discovery in Sequence Labels from Crowds

Cited by 9 publications (5 citation statements). References 35 publications.
“…The more robust performance demonstrated by our Neural-Hidden-CRF relative to the SOTA neuralized HMM-based CHMM [18] largely showcases the effectiveness of our model in leveraging the global optimization perspective offered by the undirected graphical model. More specifically, on the average F1 metric, Neural-Hidden-CRF outperforms the recently proposed AggSLC [35] by 4.81 points on CoNLL-03 (MTurk), and exceeds the SOTA method CHMM [18] by 2.80/2.23 points on the three WS datasets. It is also worth noting that the comparison methods [17, 26, 35, 46] on the CoNLL-03 (MTurk) dataset apply either the same backbone as ours (i.e., GloVe 100-dimensional word embeddings with BiLSTM-CRF in Nguyen et al. [26]) or more advanced backbones than ours (i.e., BERT-BiLSTM-CRF in Zhang et al. [46], Efficient ELMo with BiLSTM-CRF in Lan et al. [17], BERT in AggSLC [35]).…”
Section: Results and Analysis
confidence: 86%
“…All WSSL methods can be divided into probabilistic graphical model approaches, deep learning model approaches, and neuralized graphical model approaches. (1) In the probabilistic graphical model approach (and in addition to the HMM-based models [20, 21, 26, 36, 39]), Rodrigues et al. [32] in early 2014 used a partially directed graph containing a CRF to solve truth inference from crowdsourced labels. (2) In the deep learning model approach (and in addition to the "source-specific perturbation" methods [17, 26, 46]), other methods [17, 33-35] are based either on an end-to-end deep neural architecture [33], on a customized optimization objective with coordinate-ascent optimization [34, 35], or on an iterative solving framework similar to the expectation-maximization algorithm [4]. However, none of these methods has the advantages of the recently proposed neuralized HMM-based graphical models [18, 19] and our Neural-Hidden-CRF: principled modeling of the variables of interest and harnessing of the context information provided by advanced deep learning models.…”
Section: Related Work
confidence: 99%
“…Given this concern, it is possible to estimate method performance in the context of label errors (Raykar et al. 2009; Y. Yan et al. 2014) or to correct erroneous labels so that traditional assessment metrics are more accurate (Sabetpour et al. 2021; G. Zheng et al. 2021).…”
Section: Discussion
confidence: 99%
“…Thus, observed recall is a contaminated measure of recall, and methods compared via observed recall (or F1, or PR curves) may not reveal their actual ranking. Given this concern, it is possible to estimate method performance in the context of label errors (Raykar et al. 2009; Y. Yan et al. 2014) or to correct erroneous labels so that traditional assessment metrics are more accurate (Sabetpour et al. 2021; G. Zheng et al. 2021). Alternatively, performance evaluation can be carried out with simulated data.…”
Section: Discussion
confidence: 99%
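The contamination argument in the excerpt above can be illustrated with a small hypothetical example. The data and "methods" below are invented for illustration only, not drawn from the cited papers: when the reference labels themselves contain errors, recall computed against them can reverse the true ranking of two methods.

```python
# Hypothetical sketch: observed recall against noisy reference labels
# can flip the true ranking of two methods.

def recall(preds, labels):
    """Recall of binary predictions against a given label set."""
    tp = sum(p and l for p, l in zip(preds, labels))
    return tp / sum(labels)

# 10 items; the first 6 are truly positive, the last 4 truly negative.
y_true = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]

# Noisy reference labels: two true positives (indices 4, 5) were
# mislabeled negative, and one true negative (index 6) mislabeled positive.
y_obs = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]

# Method A recovers all six true positives (true recall = 1.0).
pred_a = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
# Method B recovers only four true positives, but also flags index 6,
# which the noisy reference wrongly marks positive.
pred_b = [1, 1, 1, 1, 0, 0, 1, 0, 0, 0]

# Against the true labels, A beats B (1.0 vs. ~0.67); against the
# observed labels, the ranking flips (0.8 vs. 1.0).
print(recall(pred_a, y_true), recall(pred_b, y_true))
print(recall(pred_a, y_obs), recall(pred_b, y_obs))
```

This is exactly the failure mode the excerpt warns about, and it motivates either modeling the label errors directly or evaluating on simulated data with known ground truth.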