2017
DOI: 10.18653/v1/k17-3
Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies

Abstract: This volume contains papers describing systems submitted to the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, together with an overview paper summarizing the task, its features, the evaluation methodology for the main and additional metrics, and some interesting observations about the submitted systems and the task as a whole. This Shared Task (http://universaldependencies.org/conll17/) can be seen as an extension of the CoNLL 2007 Shared Task on parsing, but there are ma…

Cited by 6 publications (4 citation statements)
References 24 publications (66 reference statements)
“…Data To make our analysis maximally comparable across languages, we start from the Parallel Universal Dependencies (PUD) collection (Zeman et al., 2017), which contains translations for a set of 1000 English sentences. PUD only contains test corpora.…”
Section: Methods
confidence: 99%
“…Most of the work on word order variation using Universal Dependencies (UD: de Marneffe et al., 2021) is based on curated dependency treebanks, with only a few works using dependency corpora derived from raw texts. Although the accuracy rate of NLP systems trained on UD models is reportedly very high (Hajič and Zeman, 2017; Zeman and Hajič, 2018; Straka et al., 2019; Qi et al., 2020), a certain level of noise, i.e., erroneous annotations, is in fact present when working with automatically annotated texts (Levshina et al., to appear; Talamo and Verkerk, to appear); furthermore, different layers of UD annotation, such as Universal Parts of Speech (UPOS) and UD Relations, are not always used consistently across languages, often resulting in the cross-linguistic comparison of different categories.…”
Section: Introduction
confidence: 99%
“…Traditional readability formulas (e.g. Flesch-Kincaid Grade Level (Kincaid et al., 1975), Gunning Fog Index (Gunning, 1952)) typically use shallow source text features such as average word and sentence length and word frequency to assess the reading difficulty level of a given text. Recently, more complex lexical, syntactic, semantic and discourse text features have been used (see for instance Schwarm and Ostendorf (2005); Francois and Miltsakaki (2012); De Clercq et al. (2014); De Hoste (2016), and Collins-Thompson (2014) for an overview).…”
Section: Introduction
confidence: 99%
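The shallow features mentioned in the last citation statement can be made concrete with a small sketch. The following is a minimal illustration of the Flesch-Kincaid Grade Level formula (0.39 × words-per-sentence + 11.8 × syllables-per-word − 15.59); the vowel-group syllable counter is a crude approximation of my own, not what production readability tools (which typically rely on pronunciation dictionaries) actually use:

```python
import re


def count_syllables(word: str) -> int:
    """Approximate syllable count via vowel groups (crude heuristic)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level (Kincaid et al., 1975):
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Note how the formula depends only on surface counts (sentence length, word length in syllables), which is exactly the limitation that motivates the richer lexical, syntactic, and discourse features cited above.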