Proceedings of the 20th Annual SIGdial Meeting on Discourse and Dialogue 2019
DOI: 10.18653/v1/w19-5927
Zero-shot transfer for implicit discourse relation classification

Abstract: Automatically classifying the relation between sentences in a discourse is a challenging task, in particular when there is no overt expression of the relation. The task is made even more challenging by the fact that annotated training data exists only for a small number of languages, such as English and Chinese. We present a new system using zero-shot transfer learning for implicit discourse relation classification, where the only resource used for the target language is unannotated parallel text. This system is eva…

Cited by 10 publications (8 citation statements); References 15 publications
“…Although the gold-labeled data are few and the synthetic data may exhibit some linguistic dissimilarity, they can be used iteratively for model training [8,22,32,56,59,71,102,169]. For example, Fisher et al. [22] and Zhou et al. [169] used the bootstrapping method in an iterative training model.…”
Section: Joint Data Expansion and Model Training
confidence: 99%
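The bootstrapping method mentioned in this snippet can be sketched as a generic self-training loop: train on the gold data, pseudo-label the unlabeled pool, absorb only confident predictions, and repeat. The nearest-centroid classifier, confidence threshold, and 2-D toy points below are illustrative assumptions, not the cited systems' actual models:

```python
import math

def centroid_fit(X, y):
    # Average the points of each class to get one centroid per label.
    groups = {}
    for xi, yi in zip(X, y):
        groups.setdefault(yi, []).append(xi)
    return {c: tuple(sum(vals) / len(vals) for vals in zip(*pts))
            for c, pts in groups.items()}

def predict(centroids, x):
    # Nearest-centroid label plus a softmax-style confidence score.
    dists = {c: math.dist(x, mu) for c, mu in centroids.items()}
    label = min(dists, key=dists.get)
    exps = {c: math.exp(-d) for c, d in dists.items()}
    return label, exps[label] / sum(exps.values())

def self_train(X_gold, y_gold, X_unlabeled, rounds=3, threshold=0.8):
    # Iteratively add confident pseudo-labels to the gold training set.
    X, y = list(X_gold), list(y_gold)
    pool = list(X_unlabeled)
    for _ in range(rounds):
        model = centroid_fit(X, y)
        remaining = []
        for x in pool:
            label, conf = predict(model, x)
            if conf >= threshold:
                X.append(x)
                y.append(label)
            else:
                remaining.append(x)
        pool = remaining
    return centroid_fit(X, y)
```

With two well-separated gold points, the loop absorbs confident neighbors into the training set while leaving ambiguous midpoints unlabeled, which is the essential safeguard of bootstrapping on synthetic data.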
“…The main idea is to first recognize English implicit discourse relations, then project the predicted sense labels in English onto the corresponding Chinese samples. Kurfalı and Östling [56] represented arguments with multilingual sentence embeddings from a pre-trained LASER model [1] and fed them into a feed-forward network [116]. She et al. [120] employed a distributed representation of hierarchical semantic components from different languages as classification triggers.…”
Section: Joint Data Expansion and Model Training
confidence: 99%
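The pipeline this snippet describes — encode both arguments into a shared multilingual embedding space, concatenate, and classify with a feed-forward network — can be sketched as below. The toy embed function is only a stand-in for a real cross-lingual encoder such as LASER (which actually produces 1024-dimensional vectors); all dimensions and weights here are illustrative assumptions:

```python
import random
import zlib

SENSES = ["Comparison", "Contingency", "Expansion", "Temporal"]
EMB_DIM = 8    # LASER itself outputs 1024-dim embeddings; kept tiny here
HID_DIM = 16

def embed(sentence):
    # Stand-in for a multilingual sentence encoder such as LASER: because
    # the real encoder maps all languages into one space, a classifier
    # trained on English embeddings transfers zero-shot to other languages.
    rng = random.Random(zlib.crc32(sentence.encode("utf-8")))
    return [rng.uniform(-1.0, 1.0) for _ in range(EMB_DIM)]

def linear(xs, weights, biases):
    return [sum(x * w for x, w in zip(xs, row)) + b
            for row, b in zip(weights, biases)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def init_layer(n_out, n_in, rng):
    return ([[rng.uniform(-0.5, 0.5) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

rng = random.Random(42)
W1, B1 = init_layer(HID_DIM, 2 * EMB_DIM, rng)  # input: two concatenated args
W2, B2 = init_layer(len(SENSES), HID_DIM, rng)

def classify(arg1, arg2):
    # Concatenate the two argument embeddings, run the feed-forward net,
    # and return the top-level sense with the highest logit.
    x = embed(arg1) + embed(arg2)
    logits = linear(relu(linear(x, W1, B1)), W2, B2)
    return SENSES[max(range(len(logits)), key=logits.__getitem__)]
```

Since the weights are untrained, the prediction is arbitrary; the point is that the shapes match the described architecture and the classifier itself is language-agnostic.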
“…Looking at implicit relations in the PDTB-3, Prasad et al. (2017) consider the difficulty of extending implicit relations to relations that cross paragraph boundaries. Kurfalı and Östling (2019) examine whether implicit relation annotation in the PDTB-3 can be used as a basis for learning to classify implicit relations in languages that lack discourse annotation. Kim et al. (2020) explore whether the PDTB-3 can be used to learn fine-grained (Level-2) sense classification in general, while Liang et al. (2020) look at whether separating inter-sentential implicits from intra-sentential implicits can improve their sense classification.…”
Section: Related Work
confidence: 99%
“…Due to the small size of the test sets, we confine ourselves to the top-level senses (Contingency, Comparison, Expansion, Temporal), which is also the most common setting for this task. Despite the limited size of TED-MDB, zero-shot transfer is possible and yields meaningful results, as shown by Kurfalı and Östling (2019). In total, seven languages are evaluated in this task: English, German, Lithuanian, Portuguese, Polish, Russian and Chinese.…”
Section: Implicit Discourse Relation Classification (PDTB)
confidence: 99%
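Confining evaluation to the four top-level senses amounts to truncating each hierarchical, dot-separated PDTB sense tag at its first component before scoring. A minimal sketch of that collapse and the resulting accuracy computation (the example tags below are illustrative):

```python
TOP_LEVEL = ("Comparison", "Contingency", "Expansion", "Temporal")

def to_top_level(sense):
    # PDTB sense tags are hierarchical and dot-separated,
    # e.g. "Contingency.Cause.Reason" -> "Contingency".
    top = sense.split(".")[0]
    if top not in TOP_LEVEL:
        raise ValueError(f"not a PDTB top-level sense: {sense!r}")
    return top

def top_level_accuracy(gold, predicted):
    # Accuracy after collapsing both gold and predicted tags to top level,
    # so finer-grained disagreements within one top-level sense still count.
    pairs = list(zip(gold, predicted))
    hits = sum(to_top_level(g) == to_top_level(p) for g, p in pairs)
    return hits / len(pairs)
```

For example, `top_level_accuracy(["Contingency.Cause.Reason", "Expansion.Conjunction"], ["Contingency.Cause.Result", "Temporal.Asynchronous"])` gives 0.5: the first pair agrees at the top level despite differing below it, while the second does not.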