Proceedings of the Nineteenth Conference on Computational Natural Language Learning - Shared Task 2015
DOI: 10.18653/v1/k15-2001

The CoNLL-2015 Shared Task on Shallow Discourse Parsing

Abstract: The CoNLL-2015 Shared Task is on Shallow Discourse Parsing, a task focusing on identifying individual discourse relations that are present in a natural language text. A discourse relation can be expressed explicitly or implicitly, and takes two arguments realized as sentences, clauses, or in some rare cases, phrases. Sixteen teams from three continents participated in this task. For the first time in the history of the CoNLL shared tasks, participating teams, instead of running their systems on the test set an…

Cited by 109 publications (83 citation statements)
References 30 publications
“…To compare against the previous state of the art, we include results for the top-performing systems from the 2015 and 2016 competitions (as reported by Xue et al., 2015, and Xue et al., 2016, respectively). Where applicable, best results (when comparing F1) are highlighted for each sub-task and -metric.…”
Section: Results (mentioning)
confidence: 99%
“…Word vectors: English word vectors are taken from the 300-dimensional Skip-gram word vectors trained on Google News data, provided by the shared task organizers (Mikolov et al., 2013; Xue et al., 2015). We trained our own 250-dimensional Chinese word vectors on the Gigaword corpus, the same corpus used to train the 300-dimensional Chinese word vectors provided by the shared task organizers (Graff and Chen, 2005).…”
Section: Methods (mentioning)
confidence: 99%
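For readers unfamiliar with the setup described in the excerpt above, the following is a minimal sketch, assuming gensim 4.x, of how Skip-gram word vectors of this kind could be trained or loaded. The toy corpus and the file path are hypothetical placeholders, not the cited systems' actual resources or pipeline.

```python
# Sketch: training small Skip-gram vectors with gensim (assumed gensim 4.x API).
from gensim.models import Word2Vec, KeyedVectors

# Toy stand-in for a real corpus such as Gigaword (hypothetical data).
corpus = [
    ["a", "discourse", "relation", "takes", "two", "arguments"],
    ["relations", "can", "be", "explicit", "or", "implicit"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=250,  # dimensionality, matching the excerpt's 250-dim Chinese vectors
    sg=1,             # sg=1 selects the Skip-gram architecture (sg=0 would be CBOW)
    window=5,
    min_count=1,
    workers=4,
)
print(model.wv["discourse"].shape)  # -> (250,)

# Loading pretrained 300-dim Google News vectors, as in the excerpt's English
# setup, would look like this (the file path is a placeholder):
# kv = KeyedVectors.load_word2vec_format(
#     "GoogleNews-vectors-negative300.bin", binary=True)
```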
“…For example, Pitler et al. (2009) report improvements in implicit relation sense classification with a sequence model. And more recent systems, including the best systems (Wang and Lan, 2015; Oepen et al., 2016) at the recent CoNLL shared tasks on PDTB-style shallow discourse parsing (Xue et al., 2015, 2016), while not using a sequence model, still incorporate features about neighboring relations. Such systems have many applications, including summarization (Louis et al., 2010), information extraction (Huang and Riloff, 2012), question answering (Blair-Goldensohn, 2007), opinion analysis (Somasundaran et al., 2008), and argumentation (Zhang et al., 2016).…”
Section: Introduction (mentioning)
confidence: 99%