Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1541

A Syntax-aware Multi-task Learning Framework for Chinese Semantic Role Labeling

Abstract: Semantic role labeling (SRL) aims to identify the predicate-argument structure of a sentence. Inspired by the strong correlation between syntax and semantics, previous works have paid much attention to improving SRL performance by exploiting syntactic knowledge, achieving significant results. Pipeline methods based on automatic syntactic trees and multi-task learning (MTL) approaches using standard syntactic trees are two common research orientations. In this paper, we adopt a simple unified span-based model for both …
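The span-based formulation mentioned in the abstract can be sketched as an enumerate-and-score scheme. This is a minimal illustration, not the paper's actual architecture: `enumerate_spans`, `label_arguments`, and `toy_scorer` are hypothetical stand-ins for the neural span scorer.

```python
# Minimal sketch of span-based SRL decoding, assuming a generic
# enumerate-and-score scheme; the paper's actual model, features,
# and pruning strategy may differ.

def enumerate_spans(n_tokens, max_width=3):
    """All candidate argument spans (i, j), inclusive, up to max_width tokens."""
    return [(i, j) for i in range(n_tokens)
            for j in range(i, min(i + max_width, n_tokens))]

def label_arguments(tokens, predicate_idx, score_fn, max_width=3):
    """Keep every candidate span to which the scorer assigns a role."""
    labeled = []
    for i, j in enumerate_spans(len(tokens), max_width):
        role, score = score_fn(tokens, predicate_idx, i, j)
        if role is not None:
            labeled.append(((i, j), role, score))
    return labeled

def toy_scorer(tokens, p, i, j):
    """Hypothetical stand-in for a neural span scorer: labels the
    single token after the predicate as A1."""
    if (i, j) == (p + 1, p + 1):
        return "A1", 1.0
    return None, 0.0

tokens = ["她", "喜欢", "音乐"]  # "She likes music"
args = label_arguments(tokens, predicate_idx=1, score_fn=toy_scorer)
# args → [((2, 2), "A1", 1.0)]
```

In a real system the scorer would be a neural network over span and predicate representations, with pruning to keep the O(n · max_width) candidate set tractable.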

Cited by 16 publications (19 citation statements)
References 34 publications
“…Taking the example in Figure 1, the Target span is often incompletely recognized without syntactic dependency relations, missing either "facing Chavez" or "challenge". For the similar SRL task, many previous works have proposed to incorporate syntax into neural models (He et al., 2018; Xia et al., 2019a). In contrast, few studies in recent years explore this line of research for ORL.…”
Section: Holder Expression Target (mentioning; confidence: 99%)
“…extract neural features from a well-trained SRL model as SRL-aware word representations, and then feed them into the input layer of ORL, aiming to alleviate the error propagation problem. Many previous works have shown that syntactic information is of great value for SRL and other NLP tasks (He et al., 2018; Zhang et al., 2019c; Strubell et al., 2018; Xia et al., 2019a; Miwa and Bansal, 2016; …). Xia et al. (2019b) use the relative position between predicate words and other words in a dependency tree to represent syntactic information, while Roth and Lapata (2016) employ LSTM to obtain the embedding of a dependency path.…”
Section: Related Work (mentioning; confidence: 99%)
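One way to realize the relative-position encoding described in the statement above is as a tree distance between the predicate and every other word. This sketch assumes a head-array representation of the dependency tree (`heads[i]` is the index of token i's head, -1 for the root) and is only one plausible reading of the cited encodings, not their exact formulation.

```python
# Hedged sketch: encode each word's position relative to the predicate
# as its distance in the dependency tree (path length through the
# lowest common ancestor). The cited papers' exact encodings may differ.

def path_to_root(heads, i):
    """Token indices from i up to the root (heads[i] is i's head, -1 = root)."""
    path = [i]
    while heads[path[-1]] != -1:
        path.append(heads[path[-1]])
    return path

def tree_distances(heads, predicate):
    """Tree distance from the predicate to every token in the sentence."""
    pred_path = path_to_root(heads, predicate)
    depth = {tok: d for d, tok in enumerate(pred_path)}
    dists = []
    for i in range(len(heads)):
        for d, tok in enumerate(path_to_root(heads, i)):
            if tok in depth:          # lowest common ancestor found
                dists.append(d + depth[tok])
                break
    return dists

# Toy tree for "她 喜欢 音乐" ("She likes music"): the verb at index 1
# is the root and heads both arguments.
heads = [1, -1, 1]
# tree_distances(heads, 1) → [1, 0, 1]
```

These distances (or their embeddings) could then be concatenated to word representations at the input layer, which matches the general pattern described in the quoted passage.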
“…This is an instance of multi-task learning (MTL; Caruana, 1993, 1997). MTL has been successfully applied to SRL (Collobert and Weston, 2008; Collobert et al., 2011; Shi et al., 2016) in many state-of-the-art systems (Strubell et al., 2018; Swayamdipta et al., 2018; Cai and Lapata, 2019; Xia et al., 2019a). A potential future extension is to learn multiple syntactic (Søgaard and Goldberg, 2016) and semantic representations (Peng et al., 2017; Hershcovich et al., 2018) beyond dependency trees and PropBank-style SRL at the same time.…”
Section: Related Work (mentioning; confidence: 99%)