Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies (2017)
DOI: 10.18653/v1/k17-3016
A non-projective greedy dependency parser with bidirectional LSTMs

Abstract: The LyS-FASTPARSE team presents BIST-COVINGTON, a neural implementation of the Covington (2001) algorithm for non-projective dependency parsing. The bidirectional LSTM approach of Kiperwasser and Goldberg (2016) is used to train a greedy parser with a dynamic oracle that mitigates error propagation. The model participated in the CoNLL 2017 UD Shared Task. Despite using no ensemble methods and relying on the baseline segmentation and PoS tagging, the parser obtained good results on both macro-average LAS and U…
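As a rough illustration of the Covington-style traversal named in the abstract, the sketch below shows its characteristic left-to-right, all-preceding-pairs loop, which is what permits non-projective (crossing) arcs. This is a hypothetical non-neural sketch with a placeholder `link_scorer` callback, not the actual BIST-COVINGTON implementation described in the paper:

```python
def covington_parse(words, link_scorer):
    """Return a list of (head, dependent) arcs over word indices.

    For each word j, every preceding word i is considered as a potential
    head or dependent; because i ranges over ALL earlier words (not just
    a stack top), crossing arcs can be created.
    """
    head = {}   # dependent index -> head index (single-head constraint)
    arcs = []
    for j in range(1, len(words)):
        for i in range(j - 1, -1, -1):
            # link_scorer is a hypothetical decision function returning
            # "right-arc" (i -> j), "left-arc" (j -> i), or "no-arc"
            action = link_scorer(words, arcs, i, j)
            if action == "right-arc" and j not in head:
                head[j] = i
                arcs.append((i, j))
            elif action == "left-arc" and i not in head:
                head[i] = j
                arcs.append((j, i))
    return arcs
```

For example, a toy scorer that attaches every word to a root at index 0 (`lambda w, a, i, j: "right-arc" if i == 0 else "no-arc"`) yields the arcs `(0, 1), (0, 2), …` over a three-word input.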

Cited by 6 publications (2 citation statements)
References: 21 publications
“…There may indeed be several sources of inconsistencies in the gold annotations: in addition to the divergences in the theoretical linguistic principles that governed the design of the original annotation guidelines, inconsistencies may also result from automatic (pre-)processing, human post-editing, or human annotation. Indeed, several studies have recently pointed out that treebanks for the same language are not consistently annotated (Vilares and Gómez-Rodríguez, 2017; Aufrant et al., 2017). In a closely related context, Wisniewski et al. (2014) have also shown that, in spite of common annotation guidelines, one of the main bottlenecks in cross-lingual transfer between UD corpora is the difference in annotation conventions across treebanks and languages.…”
Section: Experimental Setting (mentioning; confidence: 99%)
“…The choice of using a recurrent neural network is based on its focus on dealing with sequential data, such as text, as well as its wide use in several NLP tasks, such as machine translation (Johnson et al., ), dependency parsing (Vilares & Gómez‐Rodríguez, ), question answering (Iyyer, Boyd‐Graber, Claudino, Socher, & Daumé III, ), or language modeling (Sundermeyer, Schlüter, & Ney, ). Recurrent neural networks differ from traditional feedforward networks in that they allow feedback loops in their architectures, thus being able to use the output information corresponding to input t when processing input t + 1.…”
Section: System Description (mentioning; confidence: 99%)
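The feedback loop described in that citation statement can be sketched minimally. The snippet below assumes a plain scalar Elman-style recurrence for illustration (not the bidirectional LSTM actually used by the parser, and with arbitrary toy weights): the hidden state produced for input t is fed back in when processing input t + 1.

```python
import math

def rnn_step(x_t, h_prev, w_x=1.0, w_h=0.5, b=0.0):
    # The previous hidden state h_prev (the output for input t) feeds
    # back into the computation for input t + 1 -- the "feedback loop"
    # that distinguishes an RNN from a feedforward network.
    return math.tanh(w_x * x_t + w_h * h_prev + b)

h = 0.0        # initial hidden state
states = []
for x in [0.5, -1.0, 0.25]:   # toy input sequence
    h = rnn_step(x, h)        # h carries information across time steps
    states.append(h)
```

A feedforward network, by contrast, would compute each output from `x` alone, so reordering the inputs would not change any individual output; here each state depends on the whole prefix of the sequence.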