Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2014
DOI: 10.3115/v1/p14-1111

Low-Resource Semantic Role Labeling

Abstract: We explore the extent to which high-resource manual annotations such as treebanks are necessary for the task of semantic role labeling (SRL). We examine how performance changes without syntactic supervision, comparing both joint and pipelined methods to induce latent syntax. This work highlights a new application of unsupervised grammar induction and demonstrates several approaches to SRL in the absence of supervised syntax. Our best models obtain competitive results in the high-resource setting and state-of-the…


Authors

Journals

Cited by 14 publications (13 citation statements)
References 21 publications
“…present a model (which we call RL-SPINN) that is identical to SPINN at test time, but uses the REINFORCE algorithm (Williams, 1992) at training time to compute gradients for the transition classification function, which produces discrete decisions and does not otherwise receive gradients through backpropagation. Surprisingly, and in contrast to Gormley et al. (2014), they find that a small 100D instance of this RL-SPINN model performs somewhat better on several text classification tasks than an otherwise-identical model which is explicitly trained to parse.…”
Section: Introduction (mentioning)
confidence: 69%
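The REINFORCE estimator mentioned in the statement above can be illustrated with a minimal sketch. The fragment below is not the cited RL-SPINN implementation; it is a hypothetical PyTorch example (the classifier, dimensions, and reward are illustrative assumptions) showing how a module that emits discrete transition decisions can still receive parameter gradients through the score-function surrogate loss rather than through backpropagation of the decisions themselves.

# Hypothetical sketch (not the cited model): a linear "transition classifier"
# whose discrete decisions are trained with the REINFORCE surrogate loss.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

state_dim, n_actions = 8, 2                       # e.g. SHIFT / REDUCE decisions
policy = torch.nn.Linear(state_dim, n_actions)    # stand-in transition classifier

def reinforce_step(states, reward, baseline=0.0):
    """Sample discrete actions and apply the score-function (REINFORCE) loss."""
    logits = policy(states)                                   # (T, n_actions)
    probs = F.softmax(logits, dim=-1)
    actions = torch.multinomial(probs, 1).squeeze(-1)         # sampled decisions, (T,)
    log_p = F.log_softmax(logits, dim=-1).gather(1, actions.unsqueeze(-1)).squeeze(-1)
    loss = -((reward - baseline) * log_p).sum()               # gradient reaches the classifier here
    loss.backward()
    return actions, loss.item()

states = torch.randn(5, state_dim)   # five toy parser states in one derivation
actions, loss = reinforce_step(states, reward=1.0)
print(actions.tolist(), round(loss, 3))

The reward would normally come from downstream task performance (e.g. classification accuracy of the resulting tree), and the baseline reduces the variance of the estimator; both are left as placeholders in this sketch.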
“…An important observation is that the semantic-compatible trees we get as by-products are task-specific, and may not be in line with any expert-designed grammar (Marcus et al., 1993). However, once we directly examine our model on the grammar induction task (Klein and Manning, 2004; Gormley et al., 2014), we find surprisingly promising results. More discussions can be found in Appendix B.…”
Section: Recovery From Dependency Trees (mentioning)
confidence: 86%
“…Creating SRL datasets requires expert annotation, which is expensive. While there are some efforts on semi-automatic annotation targeting low-resource languages (e.g., Akbik et al., 2016), achieving high neural network performance with small or unlabeled datasets remains a challenge (e.g., Lapata, 2009, 2012; Titov and Klementiev, 2012; Gormley et al., 2014; Abend et al., 2009).…”
Section: Scenario 1: Low Training Data (mentioning)
confidence: 99%