Proceedings of the CoNLL-16 Shared Task 2016
DOI: 10.18653/v1/k16-2014
Discourse Relation Sense Classification Using Cross-argument Semantic Similarity Based on Word Embeddings

Abstract: This paper describes our system for the CoNLL 2016 Shared Task's supplementary task on Discourse Relation Sense Classification. Our official submission employs a Logistic Regression classifier with several cross-argument similarity features based on word embeddings and achieves overall F-scores of 64.13 on the Dev set, 63.31 on the Test set, and 54.69 on the Blind set, ranking first in the Overall ranking for the task. We compare the feature-based Logistic Regression classifier to different Convolutional…
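The abstract's core idea, a cross-argument similarity feature, can be sketched as follows: each discourse argument is reduced to the centroid of its word embeddings, and the cosine between the two centroids becomes one classifier feature. This is a minimal illustration with made-up 3-dimensional toy vectors, not the authors' exact feature set.

```python
import numpy as np

def avg_embedding(tokens, emb, dim=3):
    """Average the embeddings of the tokens found in the vocabulary."""
    vecs = [emb[t] for t in tokens if t in emb]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cross_arg_cosine(arg1_tokens, arg2_tokens, emb, dim=3):
    """Cosine similarity between the centroid vectors of the two arguments."""
    v1 = avg_embedding(arg1_tokens, emb, dim)
    v2 = avg_embedding(arg2_tokens, emb, dim)
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(v1 @ v2 / denom) if denom else 0.0

# Toy embeddings for illustration; real systems use e.g. 300-dim pretrained vectors.
emb = {"rain": np.array([1.0, 0.0, 0.2]),
       "wet":  np.array([0.9, 0.1, 0.3]),
       "sun":  np.array([-0.5, 1.0, 0.0])}
sim = cross_arg_cosine(["rain"], ["wet"], emb)   # close to 1.0 for related arguments
```

In the full system such similarity scores would sit alongside other features as input to a Logistic Regression classifier.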

Cited by 24 publications (22 citation statements)
References 23 publications
“…In this work we adopt the 15 fine-grained discourse relation sense types from the annotation scheme of the Penn Discourse TreeBank (PDTB) (Prasad et al., 2008). For producing discourse relation annotations we use the discourse relation sense disambiguation system from Mihaylov and Frank (2016), which is trained on the data provided by the CoNLL Shared Task on Shallow Discourse Parsing (Xue et al., 2016). In this annotation scheme, discourse relations are divided into two main types: Explicit and Non-Explicit.…”
Section: Events and Their Participants
confidence: 99%
“…We use the 300-dimensional word vectors from the previous experiment and tune the number of hidden layers and hidden units on the development set. We consider the following models: a Bidirectional LSTM (Akanksha and Eisenstein, 2016), two flavors of convolutional networks (Qin et al., 2016; Wang and Lan, 2016), two variations of simple argument pooling (Mihaylov and Frank, 2016; Schenk et al., 2016), and the best system using surface features alone (Wang and Lan, 2015). The comparison results and brief system descriptions are shown in Table 4.…”
Section: English Discourse Relations
confidence: 99%
“…We obtain a significant improvement over the base system (*) (based on McNemar's Test) and outperform SemLM, which only utilizes frame information in the semantic sequences. We also rival the top system (Mihaylov and Frank, 2016) in the CoNLL16 Shared Task (connective sense classification subtask). Note that the FES-LM used here is trained on the NYT corpus.…”
Section: Application on News
confidence: 99%
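McNemar's Test, used above to establish significance, compares two classifiers on the same items using only the discordant pairs: b items that only system A got right and c items that only system B got right. A minimal continuity-corrected chi-square version, with made-up counts:

```python
def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar statistic for discordant counts b and c."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical counts: system A alone correct on 40 items, B alone on 20.
stat = mcnemar_chi2(b=40, c=20)   # ≈ 6.02, above the 3.84 cutoff for p < .05 (1 df)
```

Because both systems are evaluated on the same test items, this paired test is more sensitive than comparing raw accuracies.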
“…With the added FES-LM features, we obtain a significant improvement (based on McNemar's Test) over the base system (*) and outperform SemLM, which only models frame information. We also rival the top system (Mihaylov and Frank, 2016) in the CoNLL16 Shared Task (connective sense classification subtask).…”
Section: Application on Stories
confidence: 99%