Proceedings of the Third Workshop on Representation Learning for NLP 2018
DOI: 10.18653/v1/w18-3027

A Sequence-to-Sequence Model for Semantic Role Labeling

Abstract: We explore a novel approach for Semantic Role Labeling (SRL) by casting it as a sequence-to-sequence process. We employ an attention-based model enriched with a copying mechanism to ensure faithful regeneration of the input sequence, while enabling interleaved generation of argument role labels. Here, we apply this model in a monolingual setting, performing PropBank SRL on English language data. The constrained sequence generation set-up enforced with the copying mechanism allows us to analyze the performance …
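The abstract describes target sequences in which the input words are regenerated verbatim with argument role labels interleaved around them. A minimal sketch of what such a target might look like, assuming a hypothetical "(# … role)" bracketing scheme and the toy spans below (the paper's exact notation may differ):

```python
# Build an interleaved SRL target: copy every input token in order and wrap
# argument spans with an opening marker and a closing role label.
# The bracketing scheme "(# ... ROLE)" is an illustrative assumption.

def interleave(tokens, spans):
    """tokens: list[str]; spans: list of (start, end_exclusive, role) tuples."""
    opens = {s for s, _, _ in spans}
    closes = {e: role for _, e, role in spans}
    out = []
    for i, tok in enumerate(tokens):
        if i in opens:
            out.append("(#")          # open an argument span
        out.append(tok)               # copy the source token verbatim
        if i + 1 in closes:
            out.append(closes[i + 1] + ")")  # close span with its role label
    return " ".join(out)

print(interleave(["John", "loves", "Mary"],
                 [(0, 1, "A0"), (2, 3, "A1")]))
# -> (# John A0) loves (# Mary A1)
```

Because every source token appears in order in the target, the copying mechanism can be trained to reproduce the input faithfully while the decoder is free to insert label tokens between copies.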


Cited by 19 publications (25 citation statements)
References 24 publications
“…These two additions already show improvements compared to the reported results in Daza and Frank (2018).…”
mentioning
confidence: 50%
“…We define the SRL task as a sequence transduction problem: given an input sequence of tokens X = x_1, ..., x_i, the system is tasked to generate a sequence Y = y_1, ..., y_j consisting of words interleaved with SRL annotations. Defining the task in this fashion allows X and Y to be of different lengths, and target sequences may therefore also … (Daza and Frank, 2018). We generalize this architecture to multilingual and cross-lingual SRL.…”
Section: One Model To Treat Them All
mentioning
confidence: 99%
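The transduction framing above pairs naturally with a copying constraint: at each decoding step the model may either copy the next not-yet-copied source token or emit a label token, so the source is always reproduced in order even though Y is longer than X. A minimal sketch of this constraint (assumed mechanics for illustration, not the authors' code, with a toy label vocabulary):

```python
# Constrained decoding sketch: the only non-label token allowed at each step
# is the next source token that has not been copied yet. LABELS is a toy
# vocabulary assumed for illustration.

LABELS = {"(#", "A0)", "A1)"}

def allowed_next(source, generated):
    """Return the set of tokens the decoder may emit at the next step."""
    copied = [t for t in generated if t not in LABELS]  # source tokens so far
    if len(copied) < len(source):
        # May copy exactly the next source token, or emit any label token.
        return {source[len(copied)]} | LABELS
    # Source fully regenerated: only labels or end-of-sequence remain.
    return LABELS | {"</s>"}

src = ["John", "loves", "Mary"]
print(allowed_next(src, []))                     # contains "John", never "Mary"
print(allowed_next(src, ["(#", "John", "A0)"]))  # contains "loves"
```

Masking the decoder's output distribution to this allowed set at every step is one way to guarantee the faithful regeneration that the abstract attributes to the copying mechanism.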
“…We found no attempt to adapt trend filtering to an online setup where data becomes available as time passes. On the other hand, sequence-to-sequence methods (see [34]) have been successfully applied in a Natural Language Processing (NLP) context: for translation [2], text generation using Generative Adversarial Networks (GANs) [26], speech recognition [13], and semantic labelling [7] … Yet these methods are supervised methods where the labels are given and unambiguous. Also, classification of time series is well established; see [20] for a recent survey.…”
Section: Background and Related Work
mentioning
confidence: 99%