Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL '04), 2004
DOI: 10.3115/1218955.1218973

A mention-synchronous coreference resolution algorithm based on the Bell tree

Abstract: This paper proposes a new approach for coreference resolution which uses the Bell tree to represent the search space and casts the coreference resolution problem as finding the best path from the root of the Bell tree to the leaf nodes. A Maximum Entropy model is used to rank these paths. The coreference performance on the 2002 and 2003 Automatic Content Extraction (ACE) data will be reported. We also train a coreference system using the MUC6 data and competitive results are obtained.
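
To make the mention-synchronous idea in the abstract concrete: partial partitions of the mentions seen so far are extended one mention at a time, with each extension either linking the new mention to an existing entity or starting a new entity, and the best full partition is read off the highest-scoring leaf. The following is a minimal sketch, not the authors' implementation; `link_score` and `start_score` are hypothetical placeholders standing in for the paper's trained Maximum Entropy model, and the simple beam pruning here is only one way of keeping the Bell tree (whose leaf count grows as the Bell number) tractable.

```python
# Minimal sketch of mention-synchronous search over the Bell tree.
# link_score / start_score are hypothetical stand-ins for a trained
# Maximum Entropy linking model.

def link_score(entity, mention):
    """Placeholder probability that `mention` corefers with `entity`.
    A real model would use lexical, distance, and syntactic features;
    a crude case-insensitive string match is used purely for illustration."""
    _, text = mention
    return 0.8 if any(text.lower() == t.lower() for _, t in entity) else 0.2

def start_score(partition, mention):
    """Placeholder probability that `mention` starts a new entity."""
    return 0.5

def bell_tree_search(mentions, beam_size=5):
    """Expand the Bell tree one mention at a time.

    A hypothesis is a (score, partition) pair; a partition is a tuple of
    frozensets (entities) covering the mentions processed so far. Each step
    extends every hypothesis by linking the next mention to one of its
    entities or starting a new entity, then keeps the `beam_size` best
    hypotheses."""
    beam = [(1.0, ())]  # root of the Bell tree: empty partition, score 1
    for m in mentions:
        expanded = []
        for score, partition in beam:
            # Option 1: link m to each existing entity.
            for i, entity in enumerate(partition):
                new_partition = partition[:i] + (entity | {m},) + partition[i + 1:]
                expanded.append((score * link_score(entity, m), new_partition))
            # Option 2: m starts a new entity.
            expanded.append((score * start_score(partition, m),
                             partition + (frozenset({m}),)))
        beam = sorted(expanded, key=lambda h: h[0], reverse=True)[:beam_size]
    return max(beam, key=lambda h: h[0])  # best leaf = best full partition

if __name__ == "__main__":
    # Mentions are (index, text) pairs so repeated strings stay distinct.
    mentions = list(enumerate(["Clinton", "she", "Clinton"]))
    best_score, best_partition = bell_tree_search(mentions)
    print(best_score, [sorted(e) for e in best_partition])
```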

Cited by 128 publications (126 citation statements)
References 12 publications

“…They should not be allowed to corefer with other NPs. Consequently, the use we make of nominal-predicate and appositive features is the opposite to that made by systems trained on the MUC or ACE corpora [4,13]. Besides, the fact that AnCora contains gold standard annotation from the morphological to the semantic levels makes it possible to include additional features that rely on such rich information.…”
Section: Pairwise Comparison Features (mentioning)
confidence: 99%
“…The features that have been shown to obtain better results in previous works [4,5,13] capture the most basic information on which coreference depends, but form a reduced feature set that does not account for all kinds of coreference relations.…”
Section: Pairwise Comparison Features (mentioning)
confidence: 99%
“…Several works have explored using non-local entity-level features in mention-entity models that assign a single mention to a (partially completed) cluster (Luo et al, 2004;Yang et al, 2008;Rahman and Ng, 2011). Our system, however, builds clusters incrementally through merge operations, and so can operate in an easy-first fashion.…”
Section: Related Work (mentioning)
confidence: 99%
“…The annotation guidelines, corpora, and other resources in support of the ACE program are available through LDC. The ACE data has been used in a number of experiments on coreference resolution, both data-driven and knowledge-based (see e.g., (Luo et al, 2004), (Chen and Hacioglu, 2006), (Ng, 2007), and (Haghighi and Klein, 2009)). …”
Section: Corpora Annotated With Coreference (mentioning)
confidence: 99%