Findings of the Association for Computational Linguistics: EMNLP 2020
DOI: 10.18653/v1/2020.findings-emnlp.398

Parsing All: Syntax and Semantics, Dependencies and Spans

Abstract: Both syntactic and semantic structures are key linguistic contextual clues, and parsing the latter has been well shown to benefit from parsing the former. However, few works have attempted to let semantic parsing help syntactic parsing. As linguistic representation formalisms, both syntax and semantics may be represented in either span (constituent/phrase) or dependency form, and joint learning over both has also seldom been explored. In this paper, we propose a novel joint model of syntactic and semantic …
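The abstract is truncated above, but the setup it describes (joint learning of syntactic and semantic parsing in both span and dependency form) typically amounts to a shared encoder with several task-specific heads trained under a summed loss. The sketch below is only an illustration under that assumption; the head names, dimensions, and unweighted loss sum are hypothetical, not the authors' architecture.

```python
import torch
import torch.nn as nn

class JointParser(nn.Module):
    """Illustrative joint model: one shared encoder feeding four parsing heads.

    The head names are placeholders for the four formalisms discussed in the
    paper (span/dependency syntax, span/dependency semantics); this is an
    assumption-laden sketch, not the authors' architecture.
    """

    def __init__(self, encoder: nn.Module, hidden: int, n_labels: dict):
        super().__init__()
        self.encoder = encoder  # any module mapping inputs -> (batch, seq, hidden)
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in n_labels.items()}
        )

    def forward(self, inputs, targets):
        h = self.encoder(inputs)  # (batch, seq, hidden)
        losses = {}
        for task, head in self.heads.items():
            logits = head(h)  # (batch, seq, n_labels[task])
            losses[task] = nn.functional.cross_entropy(
                logits.flatten(0, 1), targets[task].flatten()
            )
        # joint objective: unweighted sum of the per-task losses
        return sum(losses.values()), losses
```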

Cited by 30 publications (26 citation statements)
References 49 publications
“…Thus we employ semi-supervised learning to alleviate such data unbalance on multi-task learning by using a pre-trained linguistics model to label BooksCorpus and English Wikipedia data. The pre-trained model jointly learns POS tags and the four types of structures on semantics and syntax, in which the latter is from the XLNet version of (Zhou et al, 2020), giving state-of-the-art or comparable performance for the concerned four parsing tasks. During training, we set 10% probability to use gold syntactic parsing and SRL data: Penn Treebank (PTB) (Marcus et al, 1993), span style SRL (CoNLL-2005) (Carreras and Màrquez, 2005) and dependency style SRL (CoNLL-2009) (Hajič et al, 2009).…”
Section: Tasks and Datasets (mentioning)
confidence: 99%
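As a rough sketch of the gold/silver mixing strategy quoted above (a per-example Bernoulli draw with 10% probability of gold-annotated data); the function and dataset variables are hypothetical placeholders, not the cited implementation:

```python
import random

GOLD_PROB = 0.10  # quoted probability of drawing gold-annotated data

def sample_training_example(gold_examples, silver_examples, p_gold=GOLD_PROB):
    """With probability p_gold return a gold example (e.g. PTB, CoNLL-2005,
    CoNLL-2009); otherwise return silver data automatically labeled by the
    pre-trained joint parser (e.g. BooksCorpus / English Wikipedia)."""
    if random.random() < p_gold:
        return random.choice(gold_examples)
    return random.choice(silver_examples)
```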
“…Firstly, we rebuild word representations from the WordPiece tokens for linguistics tasks. Then we follow (Zhou et al, 2020) to construct the task-specific layers, including scoring layer and decoder layer. The former scores three types of linguistic objectives: dependency head, syntactic constituent and semantic role.…”
Section: Task-specific Layers (mentioning)
confidence: 99%
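Below is a minimal sketch of what such task-specific scoring layers can look like on top of word representations rebuilt from WordPiece tokens: a bilinear scorer for dependency heads and span scorers for constituents and semantic roles. The module layout and dimensions are assumptions for illustration, not the cited implementation.

```python
import torch
import torch.nn as nn

class ScoringLayers(nn.Module):
    """Illustrative scoring layers for three objectives:
    dependency head, syntactic constituent, and semantic role."""

    def __init__(self, hidden: int, n_const_labels: int, n_role_labels: int):
        super().__init__()
        # bilinear (biaffine-style) scorer for dependency head selection
        self.head_bilinear = nn.Bilinear(hidden, hidden, 1)
        # span scorers over concatenated span-endpoint representations
        self.const_scorer = nn.Linear(2 * hidden, n_const_labels)
        self.role_scorer = nn.Linear(2 * hidden, n_role_labels)

    def forward(self, words: torch.Tensor):
        # words: (batch, seq, hidden), rebuilt from WordPiece tokens
        b, n, h = words.shape
        dep = words.unsqueeze(2).expand(b, n, n, h)   # dependent positions
        head = words.unsqueeze(1).expand(b, n, n, h)  # candidate head positions
        head_scores = self.head_bilinear(
            dep.reshape(-1, h), head.reshape(-1, h)
        ).view(b, n, n)
        spans = torch.cat([dep, head], dim=-1)        # endpoint concatenation
        const_scores = self.const_scorer(spans)       # (b, n, n, const labels)
        role_scores = self.role_scorer(spans)         # (b, n, n, role labels)
        return head_scores, const_scores, role_scores
```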