Proceedings of the Nineteenth Conference on Computational Natural Language Learning - Shared Task 2015
DOI: 10.18653/v1/k15-2006

A Minimalist Approach to Shallow Discourse Parsing and Implicit Relation Recognition

Abstract: We describe a minimalist approach to shallow discourse parsing in the context of the CoNLL 2015 Shared Task. Our parser integrates a rule-based component for argument identification with data-driven models for the classification of explicit and implicit relations. We place special emphasis on the evaluation of implicit sense labeling: we present different feature sets and show (i) that word embeddings are competitive with traditional word-level features, and (ii) that they can be used to considerably reduce th…
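To illustrate the embedding-based features the abstract refers to, the following Python sketch averages pre-trained word vectors over each argument and concatenates the two centroids as input to a linear classifier. The helper names, the 300-dimensional vectors, and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only: averaged word embeddings as implicit-sense features.
import numpy as np
from sklearn.svm import LinearSVC  # any linear classifier would do

def embed_argument(tokens, embeddings, dim=300):
    # Average pre-trained vectors over an argument's tokens; skip unknown words.
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def relation_features(arg1_tokens, arg2_tokens, embeddings, dim=300):
    # One dense feature vector per relation: [centroid(Arg1); centroid(Arg2)].
    return np.concatenate([embed_argument(arg1_tokens, embeddings, dim),
                           embed_argument(arg2_tokens, embeddings, dim)])

# Hypothetical usage, given token lists and a lookup table of word vectors:
# X = np.vstack([relation_features(a1, a2, word_vectors) for a1, a2 in arg_pairs])
# clf = LinearSVC().fit(X, sense_labels)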

Cited by 9 publications (9 citation statements) | References 20 publications

“…For discourse connective and argument extraction, token-level features extracted from a fixed window centered on the target word token are generally used, as are features extracted from syntactic parses. Distributional representations such as Brown clusters have generally been used to determine the senses (Chiarcos and Schenk, 2015; Devi et al., 2015; Kong et al., 2015; Song et al., 2015; Stepanov et al., 2015; Wang and Lan, 2015; Yoshida et al., 2015), although one team also used them in the sequence labeling task for argument extraction (Nguyen et al., 2015). Additional resources used by some systems for sense determination include word embeddings (Chiarcos and Schenk, 2015), VerbNet classes (Devi et al., 2015; Kong et al., 2015), and the MPQA polarity lexicon (Devi et al., 2015; Kong et al., 2015; Wang and Lan, 2015).…”
Section: Approaches
confidence: 99%
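As an illustration of the token-level window features and Brown-cluster features mentioned in the statement above, here is a minimal Python sketch; the window size, the feature-name format, and the brown_clusters lookup table are assumptions made for the example, not taken from any particular cited system.

# Illustrative sketch only: token-window features with optional Brown-cluster back-off.
def window_features(tokens, i, brown_clusters=None, window=2):
    # Binary features for the surface forms (and cluster IDs) in a fixed
    # window centered on the candidate token at position i.
    feats = {}
    for offset in range(-window, window + 1):
        j = i + offset
        tok = tokens[j].lower() if 0 <= j < len(tokens) else "<PAD>"
        feats["w[%d]=%s" % (offset, tok)] = 1.0
        if brown_clusters is not None:
            feats["bc[%d]=%s" % (offset, brown_clusters.get(tok, "NONE"))] = 1.0
    return feats

# Such feature dicts would typically be one-hot encoded (e.g. with a
# DictVectorizer) and fed to a linear token or sequence classifier.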
“…The so-obtained explicit argument pairs are sense-labeled by a (linear-kernel) SVM classifier with the connector word as the only feature, following the minimalist setting in Chiarcos and Schenk (2015).…”
Section: Chinese Full Task Pipeline (CFTP)
confidence: 99%
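The setting quoted above, a linear-kernel SVM with the connective as the sole feature, can be sketched in a few lines of Python; the scikit-learn pipeline and the toy training pairs below are illustrative and not the cited system's code.

# Illustrative sketch only: explicit sense labeling from the connective alone.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

explicit_clf = make_pipeline(
    DictVectorizer(),   # one-hot encodes the connective string
    LinearSVC(),        # linear-kernel SVM
)

# Toy training data: one lowercased connective per explicit relation.
train_X = [{"conn": "but"}, {"conn": "because"}, {"conn": "although"}]
train_y = ["Comparison.Contrast", "Contingency.Cause.Reason", "Comparison.Concession"]
explicit_clf.fit(train_X, train_y)
print(explicit_clf.predict([{"conn": "because"}]))  # expected: ['Contingency.Cause.Reason']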
“…A challenging task is to detect the sense of Non-Explicit discourse relations, as they usually don't have a connective that can help to determine their sense. In last year's task, Non-Explicit relations were tackled with features based on Brown clusters (Chiarcos and Schenk, 2015), VerbNet classes, and the MPQA polarity lexicon. Earlier work (Rutherford and Xue, 2014) employed Brown clusters and coreference patterns to identify senses of implicit discourse relations in naturally occurring text.…”
Section: Related Work
confidence: 99%
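One common way to use Brown clusters for non-explicit relations, roughly in the spirit of the Rutherford and Xue (2014) work cited above, is to pair cluster IDs across the two arguments; the sketch below is a simplified assumption of such features, not a reproduction of any cited system.

# Illustrative sketch only: Brown-cluster pair features for a non-explicit relation.
def brown_pair_features(arg1_tokens, arg2_tokens, brown_clusters):
    # Cluster IDs occurring in each argument (brown_clusters maps word -> cluster ID).
    c1 = {brown_clusters[t] for t in arg1_tokens if t in brown_clusters}
    c2 = {brown_clusters[t] for t in arg2_tokens if t in brown_clusters}
    # Cartesian product of cluster IDs as binary features.
    return {"bc_pair=%s|%s" % (a, b): 1.0 for a in c1 for b in c2}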
“…In CoNLL-2015, various approaches were explored to tackle the sense classification problem, which is a straightforward multi-category classification task (Okita et al., 2015; Wang and Lan, 2015; Chiarcos and Schenk, 2015; Song et al., 2015; Stepanov et al., 2015; Yoshida et al., 2015; Sun et al., 2015; Nguyen et al., 2015; Laali et al., 2015). Typical data-driven machine learning methods, such as Maximum Entropy and Support Vector Machines, were adopted.…”
Section: Introduction
confidence: 99%
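Since the quote above frames sense labeling as a plain multi-category classification problem, a Maximum Entropy baseline reduces to multinomial logistic regression over whatever feature dictionaries a system extracts; the pipeline below is a generic illustration, with no claim about any specific cited system.

# Illustrative sketch only: multi-class sense classification with a MaxEnt model.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sense_clf = make_pipeline(
    DictVectorizer(),                    # sparse one-hot feature encoding
    LogisticRegression(max_iter=1000),   # Maximum Entropy classifier
)

# Hypothetical usage with feature dicts combining any of the cues above:
# sense_clf.fit(train_feature_dicts, train_sense_labels)
# predictions = sense_clf.predict(test_feature_dicts)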