Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2018
DOI: 10.18653/v1/n18-1071
Collective Entity Disambiguation with Structured Gradient Tree Boosting

Abstract: We present a gradient-tree-boosting-based structured learning model for jointly disambiguating named entities in a document. Gradient tree boosting is a widely used machine learning algorithm that underlies many top-performing natural language processing systems. Surprisingly, most works limit the use of gradient tree boosting to regular classification or regression problems, despite the structured nature of language. To the best of our knowledge, our work is the first one that employs the structured…
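The abstract is truncated by the page, but the core idea it names, pairing a gradient-boosted-tree scorer with structured, document-level decoding over entity candidates, can be illustrated with a toy sketch. The following is a minimal, hypothetical sketch, not the authors' SGTB implementation: it uses scikit-learn's GradientBoostingRegressor as a stand-in point-wise scorer, invented feature and candidate structures, and a simple left-to-right beam search in place of the paper's decoding procedure.

# Hypothetical sketch: boosted-tree scoring plus beam search over entity
# assignments for one document. All names and features here are made up.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def features(prev_entities, candidate):
    """Toy feature vector: a local mention-candidate prior plus a global
    coherence score against the entities already chosen in this document."""
    local = float(candidate["prior"])
    coherence = (np.mean([candidate["related"].get(e, 0.0)
                          for e in prev_entities])
                 if prev_entities else 0.0)
    return np.array([local, coherence])

def beam_search(mentions, scorer, beam_size=4):
    """Decode one document, keeping the top-k partial entity assignments."""
    beam = [([], 0.0)]  # (entities chosen so far, cumulative score)
    for mention in mentions:
        expanded = []
        for chosen, score in beam:
            for cand in mention["candidates"]:
                x = features(chosen, cand).reshape(1, -1)
                expanded.append((chosen + [cand["id"]],
                                 score + float(scorer.predict(x)[0])))
        expanded.sort(key=lambda t: -t[1])
        beam = expanded[:beam_size]
    return beam[0][0]

# Demo: fit the stand-in scorer on random data, then decode a toy document.
rng = np.random.default_rng(0)
scorer = GradientBoostingRegressor(n_estimators=20).fit(
    rng.random((50, 2)), rng.random(50))
doc = [{"candidates": [{"id": "Q1", "prior": 0.8, "related": {}},
                       {"id": "Q2", "prior": 0.2, "related": {}}]},
       {"candidates": [{"id": "Q3", "prior": 0.6, "related": {"Q1": 1.0}}]}]
print(beam_search(doc, scorer))  # e.g. ['Q1', 'Q3']

In the actual paper, the tree ensemble is trained with structured supervision over decoding states rather than fit once on i.i.d. examples; this sketch only shows the inference-time shape of the problem.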

Cited by 31 publications (40 citation statements)
References 30 publications
“…Accuracy:
Chisholm and Hachey (2015): 88.7
Guo and Barbosa (2018): 89.0
Globerson et al. (2016): 91.0
Yamada et al. (2016): 91.5
Ganea and Hofmann (2017): 92.22 ± 0.14
Yang et al. (2018): 93.0
Le and Titov (2018): 93.07 ± 0.27
Our: 94.0 ± 0.28
Our (+pseudo entities): …”
Section: Methods
confidence: 99%
“…In the context of ED, recent neural methods (He et al., 2013; Sun et al., 2015; Yamada et al., 2016; Ganea and Hofmann, 2017; Le and Titov, 2018; Yang et al., 2018; Radhakrishnan et al., 2018) have established state-of-the-art results, outperforming models based on engineered features. Context-aware word, span, and entity embeddings, together with neural similarity functions, are essential in these frameworks.…”
Section: Related Work
confidence: 99%
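The excerpt's mention of embeddings and neural similarity functions can be made concrete with a small, hypothetical illustration (not any cited paper's actual model): ranking candidate entities by cosine similarity between a mention-context embedding and entity embeddings, using toy vectors.

# Hypothetical illustration: rank candidate entities by cosine similarity
# between a context embedding and entity embeddings (toy 3-d vectors).
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

context_emb = np.array([0.2, 0.7, 0.1])           # mention-context embedding
entity_embs = {"Q1": np.array([0.1, 0.8, 0.0]),   # candidate entity vectors
               "Q2": np.array([0.9, 0.1, 0.3])}
best = max(entity_embs, key=lambda e: cosine(context_emb, entity_embs[e]))
print(best)  # -> Q1, the candidate closest to the context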
“…The best model for one dataset may perform poorly on others. An example is the SGTB-BiBSG model [99], which performed well on the WNED-CWEB dataset but not on the others. Only a small number of models performed best on more than one dataset.…”
Section: Disambiguation-only NEL Methods
confidence: 99%