Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022
DOI: 10.18653/v1/2022.acl-long.59
De-Bias for Generative Extraction in Unified NER Task

Cited by 20 publications (17 citation statements). References 0 publications.
“…First, it can be seen that our model achieves the best performance among all compared methods on almost all datasets. E.g., on GENIA, our model outperforms the sequence labeling [22] and span-based [13] methods by 5.07% and 4.67% in F1 score, and improves by 1.57%, 0.54%, and 0.69% over the seq2seq models [21,9,10]. The hypothesis is that our approach effectively models the entity relation, improving entity boundary detection for better entity generation.…”
Section: Results
confidence: 84%
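
For context on the metric behind these comparisons: NER systems are conventionally scored with entity-level exact-match F1, where a predicted entity counts as correct only if both its span boundaries and its type match the gold annotation. The sketch below (in Python; the function name and the (start, end, type) tuple layout are illustrative assumptions, not the cited paper's API) shows how such scores are computed:

    # Minimal sketch of entity-level exact-match F1 for NER evaluation.
    # Entities are (start, end, type) tuples; this layout and the helper
    # name are illustrative assumptions, not taken from the paper.
    def entity_f1(gold_entities, pred_entities):
        """Micro precision/recall/F1 over sets of entity tuples."""
        gold, pred = set(gold_entities), set(pred_entities)
        tp = len(gold & pred)  # exact span-and-type matches
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    # Example: one exact match, one boundary error, one missed entity.
    gold = [(0, 2, "protein"), (5, 7, "DNA"), (9, 10, "RNA")]
    pred = [(0, 2, "protein"), (5, 8, "DNA")]
    p, r, f = entity_f1(gold, pred)
    print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.50 R=0.33 F1=0.40

Under this metric a one-token boundary error counts as both a false positive and a false negative, which is why the statement above ties improved boundary detection directly to F1 gains.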
“…We compare our model with three seq2seq models [21,9,10] and several other baselines designed for individual NER subtasks, including sequence labeling [22], span-based methods [13], and a hypergraph model [8], among others. The performance comparisons on the three types of datasets are reported in Tables 1, 2, and 3, respectively.…”
Section: Results
confidence: 99%