Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1587

A Semi-Markov Structured Support Vector Machine Model for High-Precision Named Entity Recognition

Abstract: Named entity recognition (NER) is the backbone of many NLP solutions. The F1 score, the harmonic mean of precision and recall, is often used to select and evaluate the best models. However, when precision needs to be prioritized over recall, a state-of-the-art model might not be the best choice. There is little in the literature that directly addresses training-time modifications to achieve higher-precision information extraction. In this paper, we propose a neural semi-Markov structured support vector machine model …
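For readers unfamiliar with the metrics the abstract refers to, the following is a minimal sketch (not from the paper; all counts are hypothetical) of how precision, recall, and their harmonic mean F1 are computed, and how a precision-weighted F-beta (beta < 1) compares when precision matters more than recall. The paper itself instead proposes training-time modifications to a semi-Markov structured SVM; this sketch only illustrates the evaluation metrics being discussed.

# Minimal sketch (not from the paper): precision, recall, F1, and F-beta
# computed from hypothetical true-positive / false-positive / false-negative counts.
def prf(tp: int, fp: int, fn: int, beta: float = 1.0):
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    # F-beta generalizes F1; beta < 1 weights precision more heavily than recall.
    b2 = beta ** 2
    denom = b2 * precision + recall
    fbeta = (1 + b2) * precision * recall / denom if denom else 0.0
    return precision, recall, fbeta

# Hypothetical counts for two candidate NER models.
for name, (tp, fp, fn) in {"model_a": (80, 10, 40), "model_b": (70, 5, 50)}.items():
    p, r, f1 = prf(tp, fp, fn)                # standard F1 (harmonic mean)
    _, _, f05 = prf(tp, fp, fn, beta=0.5)     # precision-weighted F0.5
    print(f"{name}: P={p:.3f} R={r:.3f} F1={f1:.3f} F0.5={f05:.3f}")

A model chosen purely on F1 can trade away precision; comparing the F0.5 column (or precision directly) makes the precision-oriented choice explicit.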

Cited by 17 publications (12 citation statements). References 11 publications.
“…"weighted" accounts for class imbalance by computing the average of binary metrics in which each class's score was weighted by its presence in the true data sample. We calculated precision (P), recall (R), f1-score (F1) for each class [36], gave the accuracy of the model, and calculated the overall macro-precision (macro-P), macro-recall (macro-R), macro-f1 (macro-F1), weighted-precision (weighted-P), weighted-recall (weighted-R), weighted-f1 (weighted-F1) according to the "weighted" and "macro" criteria. The calculation results are given in Table 7.…”
Section: B. Model Validation Results (mentioning)
confidence: 99%
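The citing statement above contrasts macro and weighted averaging of per-class metrics. As an illustration only (a sketch using scikit-learn, not code from the cited work; the labels and predictions below are hypothetical), this is how per-class, macro, and support-weighted precision/recall/F1 are typically obtained:

# Minimal sketch (not from the cited work): per-class, macro, and weighted
# precision/recall/F1 with scikit-learn; labels and predictions are hypothetical.
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

y_true = ["PER", "ORG", "ORG", "LOC", "PER", "ORG"]   # hypothetical gold labels
y_pred = ["PER", "ORG", "LOC", "LOC", "ORG", "ORG"]   # hypothetical predictions

# Per-class P/R/F1 (average=None returns one score per class).
p, r, f1, support = precision_recall_fscore_support(
    y_true, y_pred, average=None, zero_division=0
)

# "macro": unweighted mean over classes; "weighted": mean weighted by each class's
# support, which accounts for class imbalance as the quoted statement describes.
macro = precision_recall_fscore_support(y_true, y_pred, average="macro", zero_division=0)
weighted = precision_recall_fscore_support(y_true, y_pred, average="weighted", zero_division=0)

print("accuracy:", accuracy_score(y_true, y_pred))
print("per-class P/R/F1:", p, r, f1)
print("macro P/R/F1:", macro[:3])
print("weighted P/R/F1:", weighted[:3])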
“…"weighted" accounts for class imbalance by computing the average of binary metrics in which each class's score was weighted by its presence in the true data sample. We calculated precision (P), recall (R), f1-score (F1) for each class [36], gave the accuracy of the model, and calculated the overall macro-precision (macro-P), macro-recall (macro-R), macro-f1 (macro-F1), weighted-precision (weighted-P), weighted-recall (weighted-R), weighted-f1 (weighted-F1) according to the "weighted" and "macro" criteria. The calculation results are given in Table 7.…”
Section: B Model Validation Resultsmentioning
confidence: 99%
“…We can use this task to identify named entities (Kasai et al., 2019; Arora et al., 2019; Jain et al., 2019) and for understanding other cultures (Katan and Taibi, 2004).…”
Section: Wer Ist Bill Gates? (mentioning)
confidence: 99%
“…In addition to research on improving the performance of the NER model, other experimental setups have been proposed for this task. These include domain adaptation, where a model trained on data from a source domain is used to tag data from a different target domain (Guo et al., 2009; Greenberg et al., 2018; Wang et al., 2020); temporal drift, where a model is tested on data from future time intervals (Derczynski et al., 2016; Rijhwani and Preotiuc-Pietro, 2020); cross-lingual modelling, where models trained in one language are adapted to other languages (Tsai et al., 2016; Ni et al., 2017; Xie et al., 2018); identifying nested entities (Lu and Roth, 2015); or high-precision NER models (Arora et al., 2019).…”
Section: Related Work (mentioning)
confidence: 99%