2017
DOI: 10.1007/978-3-319-65340-2_69

Towards a Mention-Pair Model for Coreference Resolution in Portuguese

Cited by 2 publications (6 citation statements)
References 10 publications
“…We follow the mention-pair model (described in Section II), as it is used in several recent state-of-the-art systems [15], and even more so in Portuguese [20], [21].…”
Section: Methods
Mentioning confidence: 99%
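As a rough illustration of the mention-pair model these statements refer to, the sketch below pairs candidate mentions, extracts a handful of hand-engineered features, and trains a linear classifier. The mention fields (head, gender, number, sent_id, id), the gold-chain mapping, and the feature set are illustrative assumptions, not the feature set or classifier of any of the cited systems.

```python
# Minimal mention-pair sketch (illustrative only; not the cited authors' setup).
# Each candidate pair of mentions gets a small feature vector and a binary
# label: 1 if both mentions belong to the same gold coreference chain.
from itertools import combinations
from sklearn.linear_model import LogisticRegression

def pair_features(m1, m2):
    # Hypothetical hand-engineered features for a mention pair.
    return [
        int(m1["head"].lower() == m2["head"].lower()),  # head-word match
        int(m1["gender"] == m2["gender"]),              # gender agreement
        int(m1["number"] == m2["number"]),              # number agreement
        abs(m1["sent_id"] - m2["sent_id"]),             # sentence distance
    ]

def build_instances(mentions, gold_chains):
    # gold_chains maps mention id -> chain id (None if singleton).
    X, y = [], []
    for m1, m2 in combinations(mentions, 2):
        same_chain = (gold_chains.get(m1["id"]) is not None
                      and gold_chains.get(m1["id"]) == gold_chains.get(m2["id"]))
        X.append(pair_features(m1, m2))
        y.append(int(same_chain))
    return X, y

# Usage (assuming `mentions` and `gold_chains` are available):
# X, y = build_instances(mentions, gold_chains)
# clf = LogisticRegression(max_iter=1000).fit(X, y)
```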
“…Regarding coreference resolution in the Portuguese language, the state of the art has lagged behind more-resourced languages, but direct comparison is complex as evaluation is obviously performed on different corpora. To the best of our knowledge, the coreference resolution systems reporting the best results on an unrestricted Portuguese dataset are the works of Fonseca et al. [20] and of Rocha and Lopes Cardoso [21]. Both use hand-engineered features and linear models for mention-pair classification, with promising results.…”
Section: Related Work
Mentioning confidence: 99%
“…Fonseca et al. [69] explored random undersampling in coreference resolution, with encouraging results. Rocha and Lopes Cardoso [119] proposed heuristic-based strategies to undersample the originally unbalanced dataset of mention-pair learning instances. These training-set creation strategies exploit well-known properties of coreference resolution to generate a more balanced distribution of labels while providing suitable learning instances for the mention-pair models.…”
Section: Imbalanced Datasets
Mentioning confidence: 99%
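The random undersampling described in this excerpt can be sketched as below: all positive (coreferent) pairs are kept and the dominant negative class is randomly subsampled to a chosen ratio. The `ratio` parameter, function name, and seed are assumptions for illustration, not the exact procedure of Fonseca et al. or Rocha and Lopes Cardoso.

```python
# Minimal random-undersampling sketch for mention-pair instances
# (illustrative assumption, not the cited papers' exact procedure).
import random

def undersample(X, y, ratio=1.0, seed=13):
    # Keep every positive pair; keep roughly ratio * (#positives) negatives.
    pos = [i for i, label in enumerate(y) if label == 1]
    neg = [i for i, label in enumerate(y) if label == 0]
    rng = random.Random(seed)
    kept_neg = rng.sample(neg, min(len(neg), int(ratio * len(pos))))
    idx = pos + kept_neg
    rng.shuffle(idx)
    return [X[i] for i in idx], [y[i] for i in idx]
```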
“…Unfortunately, these techniques have their weaknesses, as oversampling tends to lead to overfitting, and undersampling may deprive the model of useful training instances [28,119].…”
Section: Imbalanced Datasets
Mentioning confidence: 99%