Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, 2017
DOI: 10.18653/v1/e17-2023
A Practical Perspective on Latent Structured Prediction for Coreference Resolution

Abstract: Latent structured prediction theory proposes powerful methods such as the Latent Structural SVM (LSSVM), which can potentially be very appealing for coreference resolution (CR). In contrast, only limited work is available, mainly targeting the latent structured perceptron (LSP). In this paper, we carried out a practical study comparing, for the first time, online learning with LSSVM. We analyze the intricacies that may have made initial attempts to use LSSVM fail, i.e., a huge training time and much lower accuracy pro…
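For context, the abstract contrasts two learners for the same latent-structure problem. A standard formulation of the latent structural SVM objective (not reproduced from the paper; the notation below is generic) is:

    \min_{\mathbf{w}} \;\; \frac{1}{2}\|\mathbf{w}\|^2 \;+\; C \sum_{i=1}^{n} \Big[ \max_{\hat{y},\hat{h}} \big( \Delta(y_i,\hat{y}) + \mathbf{w}^{\top}\Phi(x_i,\hat{y},\hat{h}) \big) \;-\; \max_{h} \mathbf{w}^{\top}\Phi(x_i,y_i,h) \Big]

Here h ranges over latent structures (for CR, e.g., document-level coreference trees that induce the gold clustering), \Phi is a joint feature map, and \Delta is a structured loss. The latent structured perceptron optimizes a related criterion with simple online updates instead of the regularized max-margin objective, which is essentially the trade-off the abstract describes.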

Cited by 7 publications (4 citation statements, 2017–2025). References 13 publications.
“…Machine learning methods requiring hand-crafted features have been used in the CR literature for a long time. Generally, learning-based CR models are grouped into three main categories: mention-pair (Ng, 2005; Ji et al., 2005; Nicolae and Nicolae, 2006; Yang et al., 2006; Denis and Baldridge, 2007a; Haponchyk and Moschitti, 2017), entity-mention (McCallum and Wellner, 2005; Denis and Baldridge, 2007a; Culotta et al., 2007), and ranking mechanisms (Denis and Baldridge, 2007b; Ng, 2009, 2011). Deep neural networks have been frequently used in recent studies: mention-pair (Martschat and Strube, 2015), entity-mention (Clark and Manning, 2015), mention-ranking (Fernandes et al., 2012; Durrett and Klein, 2013; Björkelund and Kuhn, 2014; Wiseman et al., 2015, 2016).…”
Section: Related Work (mentioning)
confidence: 99%
“…It is typically used in non-structured CR methods and has the nice property of using the same features as LSP (we do not use global features in our study). We carried out feature selection by ranking features according to the absolute values of the classifier's model weights and then selecting the highest-ranked ones (Haponchyk and Moschitti, 2017).…”
Section: Learning In More Challenging Conditions (mentioning)
confidence: 99%
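As an illustration of the feature-selection step described in the excerpt above, here is a minimal sketch. It assumes a scikit-learn linear classifier (the excerpt does not name a toolkit), and the function select_top_features and the parameter k are hypothetical:

    import numpy as np
    from sklearn.svm import LinearSVC

    def select_top_features(X, y, k=500):
        """Rank features by the absolute value of a linear classifier's weights
        and keep the k highest-ranked ones (the strategy described in the excerpt)."""
        clf = LinearSVC()                          # any linear model exposing coef_ would do
        clf.fit(X, y)
        importance = np.abs(clf.coef_).ravel()     # |w_j| per feature (binary case)
        top = np.argsort(importance)[::-1][:k]     # indices of the k largest |w_j|
        return top, X[:, top]                      # selected indices and reduced matrix

    # Usage on hypothetical data:
    # top_idx, X_reduced = select_top_features(X_train, y_train, k=500)

Ranking by absolute weight magnitude is a simple filter-style selection; it keeps the features a linear model already relies on most, which is why it pairs naturally with LSP-style linear scoring.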
“…In our last experiment, we depart from the CoNLL outputs to run the collaborative partitioning algorithm on top of state-of-the-art coreference resolvers. In particular, we combine three very different high-performing systems: berkeleycoref (Durrett et al., 2013), ims-hotcoref (Björkelund and Kuhn, 2014) and lspe (Haponchyk and Moschitti, 2017b; Haponchyk and Moschitti, 2017a). The first relies on entity-level modeling, whereas the latter two use different structural learning approaches to coreference.…”
Section: Ensembles Of Post-CoNLL Systems (mentioning)
confidence: 99%