2017
DOI: 10.1609/aaai.v31i1.10992
A Dynamic Window Neural Network for CCG Supertagging

Abstract: Combinatory Categorial Grammar (CCG) supertagging is the task of assigning a lexical category to each word in a sentence. Almost all previous methods use fixed context window sizes to encode input tokens. However, different tags usually rely on different context window sizes. This motivates us to build a supertagger with a dynamic window approach, which can be treated as an attention mechanism on the local contexts. We find that applying dropout on the dynamic filters is superior to the regular dropout…
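The idea of a dynamic window as attention over local context can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, the fixed maximum window, and the dot-product scoring are all illustrative assumptions (the paper learns dynamic filters, and applies dropout to them during training):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dynamic_window_context(embeddings, position, max_window=2):
    """Attention-weighted average of the neighbours of `position`
    within +/- max_window. The attention weights let the effective
    window size vary per token, standing in for learned dynamic
    filters; real models would use learned scoring, not a raw dot
    product, and would apply dropout to the filter weights."""
    center = embeddings[position]
    neighbours, scores = [], []
    for offset in range(-max_window, max_window + 1):
        j = position + offset
        if 0 <= j < len(embeddings):
            vec = embeddings[j]
            # Relevance score of this neighbour to the centre word.
            scores.append(sum(a * b for a, b in zip(center, vec)))
            neighbours.append(vec)
    weights = softmax(scores)
    dim = len(center)
    # Weighted sum over the surviving neighbours, per dimension.
    return [sum(w * v[d] for w, v in zip(weights, neighbours))
            for d in range(dim)]
```

Each position thus attends more strongly to the neighbours it scores highly, so tags that need wide context and tags that need narrow context are served by the same maximum window.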

Cited by 4 publications (2 citation statements)
References 23 publications
“…Therefore, LSTM networks, a special variant of RNN capable of learning long-term dependencies, were proposed to overcome these RNN limitations. In particular, bi-directional LSTM network models have been created with the ability to store two-way information, and the majority of literature in the area [25,40,41,2,23] uses this model with different training procedures and achieves high accuracy.…”
Section: Machine Learning and Supertagging
Confidence: 99%
“…tₙ) as output. Input features can be words, or they can be extracted from words, such as suffixes, capitalization properties, or character selections [25,40,41,2,23]. We will use morphosyntactic annotations such as lemma, suffix, POS tags, and dependency relations [20] to build feature sets.…”
Section: Introduction
Confidence: 99%