Proceedings of the CoNLL-16 Shared Task (2016)
DOI: 10.18653/v1/k16-2001

CoNLL 2016 Shared Task on Multilingual Shallow Discourse Parsing

Abstract: The CoNLL-2016 Shared Task is the second edition of the CoNLL-2015 Shared Task, now on Multilingual Shallow Discourse Parsing. Similar to the 2015 task, the goal of the shared task …

Cited by 95 publications (90 citation statements). References 33 publications.

Citation statements (ordered by relevance):
“…We also experimented with pre-trained dependency-based word embeddings (Levy and Goldberg, 2014), but this yielded slightly worse results on the Dev set. (Wang and Lan, 2015) and CoNLL 2016 Shared Task best systems in Explicit (Jain, 2016) and Non-Explicit (Rutherford and Xue, 2016). F-Score is presented.…”
Section: Further Experiments on Non-Explicit Relations
Citation type: mentioning
Confidence: 99%
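The comparison above hinges on swapping one set of pre-trained vectors for another while keeping the rest of the feature pipeline fixed. As a rough illustration only (not the cited systems' actual code), the sketch below loads embeddings from a plain-text "word v1 v2 ..." file, the format in which the Levy and Goldberg dependency-based vectors are typically distributed, and averages them over an argument's tokens; the file names, dimensionality, and tokenisation are assumptions.

# Hedged sketch: average pre-trained word vectors over a discourse argument.
# File names, dimensionality, and tokenisation are illustrative assumptions,
# not taken from the cited systems.
import numpy as np

def load_embeddings(path):
    """Read a plain-text embedding file: one 'word v1 v2 ...' entry per line."""
    vectors = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip().split(" ")
            if len(parts) < 2:
                continue
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def argument_vector(tokens, vectors, dim=300):
    """Average the vectors of in-vocabulary tokens; zeros if none are covered."""
    hits = [vectors[t.lower()] for t in tokens if t.lower() in vectors]
    return np.mean(hits, axis=0) if hits else np.zeros(dim, dtype=np.float32)

# Example: compare dependency-based vs. window-based vectors as drop-in features.
# deps = load_embeddings("deps.words")   # hypothetical path
# bow  = load_embeddings("bow5.words")   # hypothetical path
# feat = np.concatenate([argument_vector(arg1_tokens, deps),
#                        argument_vector(arg2_tokens, deps)])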
“…For example, Pitler et al. (2009) report improvements in implicit relation sense classification with a sequence model. And more recent systems, including the best systems (Wang and Lan, 2015; Oepen et al., 2016) at the recent CoNLL shared tasks on PDTB-style shallow discourse parsing (Xue et al., 2015, 2016), while not using a sequence model, still incorporate features about neighboring relations. Such systems have many applications, including summarization (Louis et al., 2010), information extraction (Huang and Riloff, 2012), question answering (Blair-Goldensohn, 2007), opinion analysis (Somasundaran et al., 2008), and argumentation (Zhang et al., 2016).…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
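The "features about neighboring relations" idea mentioned above can be made concrete with a small sketch: when classifying the sense of relation i in a document, the sense predicted for relation i-1 is added as a categorical feature and decoding proceeds left to right. This is only an illustration of the general idea, not the feature set of any cited system; the feature names and the classify() hook are made up.

# Hedged illustration of a "neighboring relation" feature: greedy left-to-right
# sense classification where each relation sees the previous prediction.
# classify() stands in for whatever trained model a real system would use.
from typing import Callable, Dict, List

def decode_document(relations: List[Dict],
                    classify: Callable[[Dict[str, str]], str]) -> List[str]:
    predictions: List[str] = []
    for i, rel in enumerate(relations):
        feats = {
            "conn": rel.get("connective", "NONE"),      # explicit connective, if any
            "arg1_head": rel.get("arg1_head", "NONE"),  # placeholder lexical features
            "arg2_head": rel.get("arg2_head", "NONE"),
            # The feature of interest: the sense predicted for the previous relation.
            "prev_sense": predictions[i - 1] if i > 0 else "START",
        }
        predictions.append(classify(feats))
    return predictions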
“…Recent years have seen more and more work on this topic, including two CoNLL shared tasks (Xue et al., 2015, 2016). The community most often uses the Penn Discourse Treebank (PDTB) (Prasad et al., 2008) as a resource, and has adopted the usual split into training and test data as used for other tasks such as parsing.…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
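For readers unfamiliar with the split convention mentioned above: PDTB annotations are attached to WSJ documents, which are grouped into numbered sections, so the train/dev/test partition is defined over sections rather than over individual relations. The sketch below uses one commonly cited variant (sections 02-20 for training, 00-01 for development, 21-22 for testing); the exact boundaries vary between papers, and the CoNLL shared tasks supplied their own dev and test sets, so the section numbers and the DocID naming are assumptions for illustration.

# Hedged sketch of a section-based PDTB split. The section ranges follow one
# commonly used convention (train 02-20, dev 00-01, test 21-22); other papers
# and the CoNLL shared tasks partition the data differently.
from typing import Dict, List

TRAIN_SECTIONS = set(range(2, 21))   # WSJ sections 02-20
DEV_SECTIONS = {0, 1}                # WSJ sections 00-01
TEST_SECTIONS = {21, 22}             # WSJ sections 21-22

def split_relations(relations: List[Dict]) -> Dict[str, List[Dict]]:
    """Partition PDTB relations by the WSJ section of their source document.

    Each relation is assumed to carry a 'DocID' such as 'wsj_0203', from which
    the two-digit section number can be read off.
    """
    splits: Dict[str, List[Dict]] = {"train": [], "dev": [], "test": []}
    for rel in relations:
        section = int(rel["DocID"][4:6])   # e.g. 'wsj_0203' -> section 2
        if section in TRAIN_SECTIONS:
            splits["train"].append(rel)
        elif section in DEV_SECTIONS:
            splits["dev"].append(rel)
        elif section in TEST_SECTIONS:
            splits["test"].append(rel)
    return splits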