Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
DOI: 10.18653/v1/2021.findings-acl.100

Awakening Latent Grounding from Pretrained Language Models for Semantic Parsing

Abstract: In recent years, pretrained language models (PLMs) have achieved success on several downstream tasks, demonstrating their power in modeling language. To better understand and leverage what PLMs have learned, several techniques have emerged to probe the syntactic structures entailed by PLMs. However, few efforts have been made to explore the grounding capabilities of PLMs, which are also essential. In this paper, we highlight the ability of PLMs to discover which token should be grounded to which concept, if combined with our proposed…

Citations: Cited by 15 publications (13 citation statements)
References: 41 publications
“…Following (Lei et al., 2020), we also report the result of SLSQL + BERT (Oracle), where the learnable schema linking module is replaced with human annotations at inference time. It represents the maximum potential benefit of schema linking for the text-to-SQL task (Liu et al., 2021). As mentioned, introducing SDCUP dramatically improves performance, by up to almost 14.4% with the base model.…”
Section: Experiments Results and Analyses
confidence: 94%
“…Following previous works [25, 30], we report the micro-averaged precision, recall and F1-score for both columns (Col_P, Col_R, Col_F) and tables (Tab_P, Tab_R, Tab_F). The metrics focus on whether the correct schema items are identified.…”
Section: Schema Linking Analysis
confidence: 99%
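For concreteness, the micro-averaged schema-linking metrics mentioned in the statement above can be computed as in the following minimal sketch. The function name and the example gold/predicted column sets are illustrative assumptions, not the evaluation code of the cited papers.

```python
# Minimal sketch of micro-averaged schema-linking metrics.
# Assumes each example provides the set of gold schema items (columns or
# tables) and the set predicted by the model; all names are illustrative.
from typing import Iterable, Set, Tuple


def micro_prf(examples: Iterable[Tuple[Set[str], Set[str]]]) -> Tuple[float, float, float]:
    """Compute micro-averaged precision, recall, and F1 over (gold, predicted) sets."""
    tp = fp = fn = 0
    for gold, pred in examples:
        tp += len(gold & pred)   # correctly identified schema items
        fp += len(pred - gold)   # predicted items not in the gold linking
        fn += len(gold - pred)   # gold items the model missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1


# Example: column-level evaluation on two questions (hypothetical data).
col_examples = [
    ({"singer.name", "singer.age"}, {"singer.name"}),
    ({"concert.year"}, {"concert.year", "stadium.name"}),
]
col_p, col_r, col_f = micro_prf(col_examples)
print(f"Col_P={col_p:.3f} Col_R={col_r:.3f} Col_F={col_f:.3f}")
```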
“…(4) SLSQL_L [25] is trained with full schema linking supervision by a learnable schema linking module. (5) ETA [30] trains the schema linking module using pseudo alignments generated from PLMs as supervision.…”
Section: Schema Linking Analysis
confidence: 99%
“…To capture such alignments, several attention-based models were proposed (Shi et al., 2020; Lei et al., 2020; Liu et al., 2021), which employ the attention weights among tokens to indicate the alignments. Specifically, they use an attention module to perform schema linking at the encoding stage (Lei et al., 2020; Liu et al., 2021), and may use another attention to align each output token to its corresponding input tokens at the decoding stage (Shi et al., 2020). However, we argue that the attention mechanism is not an appropriate way to capture and leverage lexico-logical alignments.…”
Section: Introduction
confidence: 99%
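As a rough illustration of the attention-based schema linking at the encoding stage described in this statement: attention weights between question-token encodings and schema-item encodings act as soft alignments, from which a hard linking decision can be read off. The shapes, random encodings, and threshold below are assumptions made for the sketch, not the actual architecture of the cited models.

```python
# Minimal sketch: attention weights between question tokens and schema items
# serve as soft alignments for schema linking (illustrative only).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

d = 64                      # hidden size of the encoder
num_question_tokens = 7     # e.g. "show the names of singers older than 30"
num_schema_items = 5        # e.g. singer.name, singer.age, concert.year, ...

# Contextual encodings, e.g. taken from a PLM encoder (random stand-ins here).
question_enc = torch.randn(num_question_tokens, d)
schema_enc = torch.randn(num_schema_items, d)

# Scaled dot-product attention from question tokens to schema items.
scores = question_enc @ schema_enc.T / d ** 0.5   # (tokens, schema_items)
alignment = F.softmax(scores, dim=-1)             # soft alignment per token

# A hard linking can be read off the attention weights, e.g. by taking the
# most-attended schema item for each token above an (assumed) threshold.
best_item = alignment.argmax(dim=-1)
linked = alignment.max(dim=-1).values > 0.5
for t in range(num_question_tokens):
    if linked[t]:
        print(f"token {t} -> schema item {best_item[t].item()}")
```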