Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2016
DOI: 10.18653/v1/n16-1052
Shift-Reduce CCG Parsing using Neural Network Models

Abstract: We present a neural-network-based shift-reduce CCG parser, the first neural-network-based parser for CCG. We also study the impact of neural-network-based tagging models, and greedy versus beam-search parsing, using a structured neural network model. Our greedy parser obtains a labeled F-score of 83.27%, the best reported result for greedy CCG parsing in the literature (an improvement of 2.5% over a perceptron-based greedy parser), and is more than three times faster. With a beam, our structured neural networ…
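To make the abstract's setting concrete, here is a minimal, hypothetical sketch of the greedy shift-reduce loop it refers to: categories are plain strings, the only combinator shown is forward application (X/Y + Y → X), and the reduce-vs-shift decision is made greedily rather than by the paper's neural scoring model. All names here are illustrative assumptions, not the authors' code.

```python
# Toy greedy shift-reduce parser over CCG-style category strings.
# Hypothetical sketch: the real parser scores SHIFT/REDUCE actions with
# a neural network; here we simply reduce whenever forward application fires.

def forward_apply(left, right):
    """Forward application: if `left` is X/Y and `right` is Y, return X."""
    if "/" in left:
        x, y = left.rsplit("/", 1)
        if y == right:
            return x
    return None

def shift_reduce(supertags):
    """Greedy loop: REDUCE the top two stack items when possible, else SHIFT."""
    stack, queue = [], list(supertags)
    while queue or len(stack) > 1:
        if len(stack) >= 2:
            combined = forward_apply(stack[-2], stack[-1])
            if combined is not None:
                stack[-2:] = [combined]   # REDUCE: replace top two with result
                continue
        if not queue:
            break                          # no action applies: stop
        stack.append(queue.pop(0))         # SHIFT: move next supertag to stack
    return stack

# "the cat": determiner NP/N consumes noun N, yielding NP
print(shift_reduce(["NP/N", "N"]))  # → ['NP']
```

A beam-search variant, as studied in the paper, would keep the k best (stack, queue) states per step under the model's action scores instead of committing to a single greedy action.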

Cited by 11 publications (11 citation statements). References 19 publications.
“…Transition-based techniques are a natural starting point for UCCA parsing, given the conceptual similarity of UCCA's distinctions, centered around predicate-argument structures, to distinctions expressed by dependency schemes, and the achievements of transition-based methods in dependency parsing (Dyer et al., 2015; Andor et al., 2016; Kiperwasser and Goldberg, 2016). We are further motivated by the strength of transition-based methods in related tasks, including dependency graph parsing (Sagae and Tsujii, 2008; Ribeyre et al., 2014; Tokgöz and Eryigit, 2015), constituency parsing (Sagae and Lavie, 2005; Zhang and Clark, 2009; Zhu et al., 2013; Maier, 2015; Maier and Lichte, 2016), AMR parsing (Wang et al., 2015a,b, 2016; Misra and Artzi, 2016; Goodman et al., 2016; Zhou et al., 2016; Damonte et al., 2017) and CCG parsing (Zhang and Clark, 2011; Ambati et al., 2015, 2016).…”
Section: Introduction
confidence: 99%
“…Most of the work in recent years on deep parsing using neural network architectures has been done for the CCG formalism and for the English language. For example, Xu et al. (2015) use a recurrent neural network to improve supertagging and parsing accuracy for CCG, while Ambati et al. (2016) describe a neural-network architecture that performs CCG parsing. For HPSG, the work by Zhou and Zhao (2019) is very relevant, as they derive an HPSG grammar from the Penn Treebanks in English and Chinese and use a self-attention-based mechanism followed by a CKY decoder to parse them, obtaining very good results.…”
Section: Related Work
confidence: 99%