Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1099

Semantic Role Labeling with Iterative Structure Refinement

Abstract: Modern state-of-the-art Semantic Role Labeling (SRL) methods rely on expressive sentence encoders (e.g., multi-layer LSTMs) but tend to model only local (if any) interactions between individual argument labeling decisions. This contrasts with earlier work and also with the intuition that the labels of individual arguments are strongly interdependent. We model interactions between argument labeling decisions through iterative refinement. Starting with an output produced by a factorized model, we iteratively refine…
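
The abstract sketches the core loop: a factorized model produces an initial labeling, and a refinement network repeatedly re-scores each argument conditioned on the current soft labels of the other arguments. Below is a minimal PyTorch sketch of that loop; the class name, the mean-pooled label context, and num_iterations are illustrative assumptions, not the paper's actual restricted architecture.

```python
import torch
import torch.nn as nn

class IterativeRefiner(nn.Module):
    """Hypothetical sketch of iterative structure refinement for SRL."""

    def __init__(self, hidden_dim: int, num_roles: int, num_iterations: int = 3):
        super().__init__()
        self.num_iterations = num_iterations
        # Base factorized scorer: labels each candidate argument independently.
        self.base_scorer = nn.Linear(hidden_dim, num_roles)
        # Refinement step: re-scores each argument conditioned on a summary
        # of the current label distributions of all arguments (non-local info).
        self.refiner = nn.Linear(hidden_dim + num_roles, num_roles)

    def forward(self, arg_states: torch.Tensor) -> torch.Tensor:
        # arg_states: (num_args, hidden_dim) encoder states for the
        # candidate arguments of one predicate.
        logits = self.base_scorer(arg_states)  # initial factorized pass
        for _ in range(self.num_iterations):
            probs = logits.softmax(dim=-1)  # current soft labeling
            # Mean-pool the soft labels as a simple (assumed) way to expose
            # every argument to the others' current decisions.
            context = probs.mean(dim=0, keepdim=True).expand_as(probs)
            logits = self.refiner(torch.cat([arg_states, context], dim=-1))
        return logits
```

Each pass can revise labels that conflict under the current global assignment (e.g., two arguments both labeled A0), which a purely factorized scorer cannot do.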

Cited by 19 publications (7 citation statements). References 30 publications.
“…The relation identification component is, however, weaker than, e.g., that of Cai and Lam (2020). This may not be surprising, as we, following Lyu and Titov (2018), score edges independently, whereas Cai and Lam (2020) perform iterative refinement, which is known to boost performance on relations (Lyu et al., 2019). Also, we use BiLSTM encoders, which, while cheaper to train and easier to tune, are likely weaker than the Transformer encoders used by Astudillo et al. and Lee et al. While these modifications, along with extra pre-training techniques and data augmentation, may further boost the performance of our model, we believe that our model is strong enough for our purposes, i.e.…”
Section: Results (mentioning)
Confidence: 94%
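
The independence assumption in the statement above is easy to see in code. The following is a minimal sketch of factorized edge scoring, assuming a biaffine-style scorer; the class and shapes are hypothetical and not taken from Lyu and Titov (2018) or Cai and Lam (2020). Each edge is scored in isolation, which is exactly the assumption that iterative refinement relaxes.

```python
import torch
import torch.nn as nn

class IndependentEdgeScorer(nn.Module):
    """Hypothetical factorized scorer: every edge is scored on its own."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # Biaffine interaction between head and dependent representations.
        self.bilinear = nn.Bilinear(hidden_dim, hidden_dim, 1)

    def forward(self, heads: torch.Tensor, deps: torch.Tensor) -> torch.Tensor:
        # heads, deps: (num_words, hidden_dim) encoder states.
        n, d = heads.shape
        h = heads.unsqueeze(1).expand(n, n, d).reshape(-1, d)
        m = deps.unsqueeze(0).expand(n, n, d).reshape(-1, d)
        # Each edge (i, j) is scored without seeing any other edge's
        # decision; no score depends on the rest of the structure.
        return self.bilinear(h, m).view(n, n)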
“…Intuitively speaking, the refinement mechanism provides the models with additional chances to revise previous decisions. In existing work, this method has been successfully applied to various tasks, e.g., text classification, sequential labeling (Cui and Zhang, 2019; Lyu et al., 2019), machine translation, and question answering (Nema et al., 2019). Our work is not the first attempt at introducing a refinement mechanism to sequential labeling tasks.…”
Section: Discussion (mentioning)
Confidence: 99%
“…We can also find recent syntax-agnostic approaches that do not perform full SRL and follow a predicate-centered strategy for decoding and word representation (based on the gold predicates provided by the CoNLL-2009 corpora): (Chen, Lyu and Titov, 2019) and (Lyu, Cohen and Titov, 2019), which additionally implement different iterative refinement procedures, and (Conia and Navigli, 2020), which employs an additional dedicated encoder to contextualize each gold predicate in the sentence.…”
Section: Related Work (mentioning)
Confidence: 99%