Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/d16-1070

Neural Network for Heterogeneous Annotations

Abstract: Multiple treebanks annotated under heterogeneous standards give rise to the research question of how best to utilize multiple resources for improving statistical models. Prior research has focused on discrete models, leveraging stacking and multi-view learning to address the problem. In this paper, we empirically investigate heterogeneous annotations using neural network models, building neural network counterparts to discrete stacking and multi-view learning, respectively, finding that neural models have their uni…
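
The multi-view side of this comparison can be pictured as a single shared encoder serving two annotation standards. Below is a minimal sketch; PyTorch, and all class, parameter, and size names, are illustrative assumptions rather than the paper's actual architecture. A shared bi-LSTM feeds one output layer per treebank, so both heterogeneous resources train the same underlying representation.

```python
import torch.nn as nn

class MultiViewTagger(nn.Module):
    """One shared bi-LSTM encoder with a separate scoring layer per
    annotation standard; both treebanks update the shared parameters."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_tags_a, n_tags_b):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.head_a = nn.Linear(2 * hidden_dim, n_tags_a)  # standard A
        self.head_b = nn.Linear(2 * hidden_dim, n_tags_b)  # standard B

    def forward(self, words, view):               # words: (batch, seq)
        h, _ = self.encoder(self.emb(words))
        return self.head_a(h) if view == "a" else self.head_b(h)
```

During training, batches from each treebank would be routed through their own head while gradients flow into the shared encoder, which is how the two heterogeneous resources help each other.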

Cited by 30 publications (50 citation statements). References 31 publications (50 reference statements).
“…In order to obtain automatically predicted POS tags as features for a base English dependency parser, we train a POS tagger for UD-Eng using the baseline model of Chen et al. (2016), depicted in Figure 3. Bi-LSTM networks with a CRF layer (bi-LSTM-CRF) have shown state-of-the-art performance by globally optimizing the tag sequence (Chen et al., 2016). Based on this English POS tagging model, we train a POS tagger for Singlish using the feature-level neural stacking model of Chen et al. (2016).…”
Section: Part-of-Speech Tagging
confidence: 99%
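
For concreteness, here is a minimal sketch of feature-level neural stacking in the spirit of the statement above; PyTorch is assumed, and the class and parameter names are hypothetical rather than taken from Chen et al. (2016). The base tagger's predicted tag distribution is concatenated with the target tagger's word embeddings before the target-side bi-LSTM.

```python
import torch
import torch.nn as nn

class BaseTagger(nn.Module):
    """Bi-LSTM tagger trained on the source treebank (the CRF layer on
    top is omitted here for brevity)."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, n_src_tags):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_src_tags)

    def forward(self, words):                     # words: (batch, seq)
        h, _ = self.lstm(self.emb(words))
        return self.out(h)                        # (batch, seq, n_src_tags)

class StackedTagger(nn.Module):
    """Target-side tagger: word embeddings are concatenated with the base
    tagger's tag distribution (feature-level stacking)."""
    def __init__(self, base, vocab_size, emb_dim, hidden_dim,
                 n_src_tags, n_tgt_tags):
        super().__init__()
        self.base = base
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim + n_src_tags, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, n_tgt_tags)

    def forward(self, words):
        # The base tagger is frozen in this sketch; fine-tuning it
        # jointly with the target model is an equally plausible choice.
        with torch.no_grad():
            src_dist = self.base(words).softmax(-1)
        x = torch.cat([self.emb(words), src_dist], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)                        # (batch, seq, n_tgt_tags)
```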
“…Bi-LSTM networks with a CRF layer (bi-LSTM-CRF) have shown state-of-the-art performance by globally optimizing the tag sequence (Chen et al., 2016). Based on this English POS tagging model, we train a POS tagger for Singlish using the feature-level neural stacking model of Chen et al. (2016). Both the English and Singlish models consist of an input layer, a feature layer, and an output layer.…”
Section: Part-of-Speech Tagging
confidence: 99%
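
"Globally optimizing the tag sequence" refers to the CRF layer's sequence-level objective. A minimal sketch of the two quantities involved follows; it uses plain PyTorch tensor operations, and the function names are illustrative, not from any cited paper. The forward algorithm gives the log-partition over all tag sequences, and the gold score sums emission and transition scores along one path; training minimizes their difference.

```python
import torch

def crf_log_partition(emissions, transitions):
    # emissions: (seq_len, n_tags) scores from the bi-LSTM
    # transitions: (n_tags, n_tags); transitions[i, j] = score of tag i -> tag j
    alpha = emissions[0]
    for t in range(1, emissions.size(0)):
        # alpha_new[j] = logsumexp_i(alpha[i] + transitions[i, j]) + emissions[t, j]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) + emissions[t]
    return torch.logsumexp(alpha, dim=0)          # log Z over all tag sequences

def crf_gold_score(emissions, transitions, tags):
    # tags: gold tag ids; sum emission scores plus transition scores on the path
    score = emissions[0, tags[0]]
    for t in range(1, len(tags)):
        score = score + transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]
    return score

# Per-sentence training loss (negative log-likelihood of the gold sequence):
# loss = crf_log_partition(E, T) - crf_gold_score(E, T, gold_tags)
```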
“…Zoph and Knight (2016) and Johnson et al. (2016) have jointly trained translation models from and to different languages; this is achieved simply by jointly training the encoder, or both the encoder and decoder. Jiang, Huang, and Liu (2009), Sun and Wan (2012), Qiu, Zhao, and Huang (2013), Li et al. (2015), and Chen, Zhang, and Liu (2016) adopted stack-based models to take advantage of annotated data from multiple sources, and showed that tasks can indeed help improve each other.…”
Section: Related Work
confidence: 99%