Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
DOI: 10.18653/v1/p17-1159

Universal Dependencies Parsing for Colloquial Singaporean English

Abstract: Singlish can be interesting to the ACL community both linguistically, as a major creole based on English, and computationally, for information extraction and sentiment analysis of regional social media. We investigate dependency parsing of Singlish by constructing a dependency treebank under the Universal Dependencies scheme, and then training a neural network model by integrating English syntactic knowledge into a state-of-the-art parser trained on the Singlish treebank. Results show that English knowledge can l…

Cited by 19 publications (15 citation statements); references 26 publications.

“…Such a hierarchical architecture is suitable for NLP tasks. Chen et al. [44] apply the stacking technique to jointly train two part-of-speech (POS) taggers on treebanks with different annotation standards, so that the two tasks can provide each other with beneficial insights into sentence structure. They propose two stacking schemes: shallow stacking directly converts the source task's predicted label into an embedding vector and feeds it to the target task, while deep stacking integrates the source task's hidden feature vector into the input of the target task model.…”
Section: Deep Transfer Learning in NLP
Confidence: 99%
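
To make the contrast between the two schemes concrete, here is a minimal PyTorch sketch (our illustration, not code from [44]; the class names, dimensions, and BiLSTM encoders are assumptions): shallow stacking embeds the source tagger's discrete predicted labels, while deep stacking consumes its hidden feature vectors, through which gradients can flow back into the source model.

```python
import torch
import torch.nn as nn


class SourceTagger(nn.Module):
    """Hypothetical source-annotation POS tagger: BiLSTM encoder + classifier."""

    def __init__(self, vocab_size, emb_dim=50, hidden_dim=64, n_labels=17):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * hidden_dim, n_labels)

    def forward(self, tokens):
        hidden, _ = self.encoder(self.embed(tokens))  # (B, T, 2*hidden_dim)
        return hidden, self.classify(hidden)          # features, label scores


class ShallowStackedTagger(nn.Module):
    """Shallow stacking: embed the source tagger's *predicted label* and
    concatenate it with the word embedding at the target tagger's input."""

    def __init__(self, vocab_size, n_src_labels=17, emb_dim=50,
                 label_emb_dim=20, hidden_dim=64, n_tgt_labels=17):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.label_embed = nn.Embedding(n_src_labels, label_emb_dim)
        self.encoder = nn.LSTM(emb_dim + label_emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * hidden_dim, n_tgt_labels)

    def forward(self, tokens, src_label_scores):
        src_labels = src_label_scores.argmax(-1)  # discrete, non-differentiable
        x = torch.cat([self.embed(tokens), self.label_embed(src_labels)], dim=-1)
        hidden, _ = self.encoder(x)
        return self.classify(hidden)


class DeepStackedTagger(nn.Module):
    """Deep stacking: feed the source tagger's *hidden feature vectors* into
    the target tagger's input, so gradients reach the source model."""

    def __init__(self, vocab_size, src_feat_dim=128, emb_dim=50,
                 hidden_dim=64, n_tgt_labels=17):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim + src_feat_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classify = nn.Linear(2 * hidden_dim, n_tgt_labels)

    def forward(self, tokens, src_features):
        x = torch.cat([self.embed(tokens), src_features], dim=-1)
        hidden, _ = self.encoder(x)
        return self.classify(hidden)


# Usage: run the source tagger once, then either stacked target tagger.
source = SourceTagger(vocab_size=1000)
tokens = torch.randint(0, 1000, (2, 8))     # a batch of 2 sentences, 8 tokens
src_feats, src_scores = source(tokens)
shallow = ShallowStackedTagger(1000)(tokens, src_scores)
deep = DeepStackedTagger(1000, src_feat_dim=128)(tokens, src_feats)
print(shallow.shape, deep.shape)            # torch.Size([2, 8, 17]) each
```

The design trade-off follows from the quote: the argmax in shallow stacking blocks gradients, so the source tagger stays fixed from the target task's perspective, whereas deep stacking passes continuous features and therefore supports the joint training described above.
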
“…Similarly, the sentence-level attention utilizes the output feature vector of the sentence entailment model when computing the sentence-level importance score. This strategy is widely adopted in work on transfer learning from relatively straightforward tasks to sophisticated ones, where one model's output is used as input to another model [18,44,45].…”
Section: Transfer Learning to THAN
Confidence: 99%

“…Our method can be extended with neural stacking (Wang et al., 2017) by back-propagating into the parser parameters during model training, which we leave for future work.…”
Section: Syntactic Features
Confidence: 99%

“…Existing work on cross-lingual transfer can be classified into two categories. The first aims to train a dependency parsing model on source treebanks (McDonald et al., 2011; Guo et al., 2016a,b), or on their adapted versions (Zhao et al., 2009; Tiedemann et al., 2014; Wang et al., 2017), in the target language. The second category, [1] namely annotation projection, aims to produce a large set of training instances with automatic dependencies by parsing parallel sentences (Hwa et al., 2005; Rasooli and Collins, 2015).…”
[1] https://github.com/zhangmeishan/CodeMixedTreebank
Section: Related Work
Confidence: 99%