Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.13
Transformer-GCRF: Recovering Chinese Dropped Pronouns with General Conditional Random Fields

Abstract: Pronouns are often dropped in Chinese conversations, and recovering the dropped pronouns is important for NLP applications such as Machine Translation. Existing approaches usually formulate this as a sequence labeling task of predicting whether there is a dropped pronoun before each token and its type. Each utterance is considered to be a sequence and labeled independently. Although these approaches have shown promise, labeling each utterance independently ignores the dependencies between pronouns in neighboring utterances…
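To make the sequence-labeling formulation concrete, here is a minimal Python sketch under the assumptions stated in its comments: one label per token, marking either no dropped pronoun ("None") or the type of pronoun dropped immediately before that token. The example utterance, the toy lexicon, and the toy_tagger function are illustrative placeholders, not the paper's model.

```python
# Minimal sketch of the sequence-labeling view of dropped pronoun recovery (DPR):
# each token in a pro-drop utterance receives one label, either "None" or the
# type of pronoun dropped immediately before that token. The utterance, the
# label choices, and the toy tagger below are illustrative only.

tokens = ["昨天", "去", "图书馆", "了"]   # "(I) went to the library yesterday"
gold = ["None", "我", "None", "None"]     # "我" (I) is dropped before "去"

def toy_tagger(tokens):
    """Stand-in for a real sequence labeler: tag a pronoun before verbs in a toy lexicon."""
    verbs_needing_subject = {"去", "吃", "看"}   # hypothetical lexicon, for illustration
    return ["我" if tok in verbs_needing_subject else "None" for tok in tokens]

predicted = toy_tagger(tokens)
print(list(zip(tokens, predicted)))   # [('昨天', 'None'), ('去', '我'), ...]
assert predicted == gold
```

A real system would replace toy_tagger with a trained sequence labeler; the paper's contribution is to label all utterances of a conversational snippet jointly rather than independently.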

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
2
2
1

Citation Types

0
6
0

Year Published

2021
2021
2022
2022

Publication Types

Select...
3
2

Relationship

2
3

Authors

Journals

Cited by 5 publications (6 citation statements)
References 23 publications
“…We first introduce the problem formulation of these two tasks. Following the practices in (Yang et al., 2015, 2019, 2020), we formulate DPR as a sequence labeling problem. DPR aims to recover the dropped pronouns in an utterance by assigning one of 17 labels to each token that indicates the type of pronoun that is dropped before the token (Yang et al., 2015).…”
Section: Problem Formulation (mentioning)
confidence: 99%
“…Same as existing efforts (Yang et al., 2015, 2019), we use Precision (P), Recall (R), and F-score (F) as metrics when evaluating the performance of dropped pronoun models. Baselines: We compared DiscProReco against existing baselines, including: (1) MEPR (Yang et al., 2015), which leverages a Maximum Entropy classifier to predict the type of dropped pronoun before each token; (2) NRM, which employs two MLPs to predict the position and type of a dropped pronoun separately; (3) Bi-GRU, which utilizes a bidirectional GRU to encode each token in a pro-drop sentence and then makes predictions; (4) NDPR (Yang et al., 2019), which models the referents of dropped pronouns from a large context with a structured attention mechanism; (5) Transformer-GCRF (Yang et al., 2020), which jointly recovers the dropped pronouns in a conversational snippet with general conditional random fields; (6) XLM-RoBERTa-NDPR, which utilizes the pre-trained multilingual masked language model (Conneau et al., 2020) to encode the pro-drop utterance and its context, and then employs the attention mechanism in NDPR to model the referent semantics.…”
Section: Dropped Pronoun Recovery (mentioning)
confidence: 99%
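For reference, the sketch below shows one plausible way the precision, recall, and F-score quoted above could be computed over per-token labels. The matching criterion used here (same pronoun type at the same position) and the dpr_prf helper are assumptions, not the cited works' official evaluation script.

```python
# Sketch of Precision/Recall/F-score for dropped pronoun recovery over per-token
# labels. A prediction counts as correct when a non-"None" gold label is recovered
# with exactly the same pronoun type at the same position (assumed criterion).

def dpr_prf(gold_labels, pred_labels):
    true_pos = sum(1 for g, p in zip(gold_labels, pred_labels)
                   if g != "None" and g == p)
    n_pred = sum(1 for p in pred_labels if p != "None")
    n_gold = sum(1 for g in gold_labels if g != "None")
    precision = true_pos / n_pred if n_pred else 0.0
    recall = true_pos / n_gold if n_gold else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score

# Toy example: two of three predicted pronouns are correct; both gold pronouns are found.
gold = ["None", "我", "None", "你", "None"]
pred = ["None", "我", "他", "你", "None"]
print(dpr_prf(gold, pred))   # (0.666..., 1.0, 0.8)
```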
“…and Yang et al. (2019) attempt to recover the dropped pronouns by modeling the referents with deep neural networks. More recently, Yang et al. (2020) attempt to jointly predict all dropped pronouns in a conversation snippet by modeling dependencies between pronouns with general conditional random fields. A major shortcoming of these DPR methods is that they overlook the discourse relation (e.g., reply, question) between conversational utterances when exploiting the context of the dropped pronoun.…”
Section: Introduction (mentioning)
confidence: 99%
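For readers unfamiliar with the joint modeling mentioned in the snippet above, a generic conditional random field factorization over all dropped-pronoun labels of a conversation snippet is sketched below. The unary and pairwise scores s and t and the edge set E (linking positions within and across neighboring utterances) are schematic placeholders, not the exact parameterization of Transformer-GCRF.

```latex
% Generic CRF over all dropped-pronoun labels y = (y_1, ..., y_n) of a snippet x.
% s: per-position score, t: pairwise score, E: edges within and across utterances.
% These are schematic placeholders rather than the paper's exact model.
P(y \mid x) \;=\; \frac{1}{Z(x)} \exp\!\Big( \sum_{i=1}^{n} s(y_i, x)
      \;+\; \sum_{(i,j) \in E} t(y_i, y_j, x) \Big),
\qquad
Z(x) \;=\; \sum_{y'} \exp\!\Big( \sum_{i=1}^{n} s(y'_i, x)
      \;+\; \sum_{(i,j) \in E} t(y'_i, y'_j, x) \Big).
```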