Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016)
DOI: 10.18653/v1/n16-1114

Learning Global Features for Coreference Resolution

Abstract: There is compelling evidence that coreference prediction would benefit from modeling global information about entity clusters. Yet, state-of-the-art performance can be achieved with systems treating each mention prediction independently, which we attribute to the inherent difficulty of crafting informative cluster-level features. We instead propose to use recurrent neural networks (RNNs) to learn latent, global representations of entity clusters directly from their mentions. We show that such representations ar…
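The core idea in the abstract, replacing hand-crafted cluster-level features with an RNN state computed over a cluster's mentions, can be illustrated with a short sketch. The code below is not the paper's implementation; the `ClusterRNN` module, all dimensions, and the compatibility scorer are hypothetical choices for exposition, assuming each mention is already embedded as a fixed-size vector.

```python
# Minimal sketch (assumed PyTorch): an RNN whose hidden state serves as a
# learned, global representation of an entity cluster, updated as mentions
# are attached. Everything here is illustrative, not the authors' model.
import torch
import torch.nn as nn

class ClusterRNN(nn.Module):
    def __init__(self, mention_dim: int = 128, cluster_dim: int = 64):
        super().__init__()
        # GRU cell folds one mention embedding at a time into a running
        # cluster state: the latent "global feature" of the cluster.
        self.rnn = nn.GRUCell(mention_dim, cluster_dim)
        # Hypothetical scorer: compatibility of a candidate mention with
        # the cluster's current global state.
        self.score = nn.Linear(mention_dim + cluster_dim, 1)

    def init_state(self, batch_size: int = 1) -> torch.Tensor:
        # Empty cluster: start from a zero state.
        return torch.zeros(batch_size, self.rnn.hidden_size)

    def add_mention(self, mention: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # Update the cluster representation with a newly attached mention.
        return self.rnn(mention, state)

    def compatibility(self, mention: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        # Score a candidate mention against the cluster so far.
        return self.score(torch.cat([mention, state], dim=-1))

# Usage: grow one cluster from a stream of (dummy) mention embeddings,
# scoring each candidate against the cluster state before attaching it.
model = ClusterRNN()
state = model.init_state()
for mention in torch.randn(3, 1, 128):         # three mention embeddings
    print(model.compatibility(mention, state).item())
    state = model.add_mention(mention, state)   # mention joins the cluster
```

In a full system this per-cluster state would feed a mention-ranking decision alongside local features; the sketch only shows how a recurrent update can stand in for hand-engineered cluster-level features.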

Cited by 149 publications (183 citation statements)
References 24 publications (46 reference statements)
“…We would like to exploit the pragmatic and socio-linguistic context to better handle code-switching. … model of Wiseman et al. (2015b), which on its own outperforms the original one and achieves similar performance to its global version (Wiseman et al., 2016). Importantly, when we use the introduced differentiable relaxations in training, we observe a substantial gain in performance over our probabilistic baseline.…”
Section: Results (mentioning)
confidence: 69%
“…Most importantly, it yields the second-best results reported in the literature, outperforming the system of Wiseman et al. (2016) by 0.26 percentage points.…”
Section: Ensembles of Post-CoNLL Systems (mentioning)
confidence: 83%