Findings of the Association for Computational Linguistics: ACL 2022
DOI: 10.18653/v1/2022.findings-acl.99
S2SQL: Injecting Syntax to Question-Schema Interaction Graph Encoder for Text-to-SQL Parsers

Abstract: The task of converting a natural language question into an executable SQL query, known as text-to-SQL, is an important branch of semantic parsing. The state-of-the-art graph-based encoder has been successfully used in this task but does not model the question syntax well. In this paper, we propose S²SQL, injecting Syntax to question-Schema graph encoder for Text-to-SQL parsers, which effectively leverages the syntactic dependency information of questions in text-to-SQL to improve performance. We also emp…
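
S²SQL's contribution, in short, is to add dependency-syntax edges between question tokens on top of the usual question-schema interaction graph. The sketch below is not the authors' code: it assumes spaCy for dependency parsing, and the build_graph helper, relation labels, and substring-based schema linking are illustrative simplifications.

```python
# Minimal sketch of a question-schema interaction graph with injected
# syntactic dependency edges (illustrative, not the S2SQL implementation).
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python -m spacy download en_core_web_sm

def build_graph(question, tables, columns):
    doc = nlp(question)
    nodes = [t.text for t in doc] + tables + columns
    edges = {}  # (src index, dst index) -> relation label

    # 1) Syntactic dependency edges between question tokens: the extra
    #    signal S2SQL injects on top of the usual interaction graph.
    for tok in doc:
        if tok.head.i != tok.i:  # skip the root's self-loop
            edges[(tok.i, tok.head.i)] = f"dep:{tok.dep_}"

    # 2) Question-schema linking edges (simplified to substring match;
    #    real parsers use n-gram and partial matching).
    n_q = len(doc)
    for qi, tok in enumerate(doc):
        for si, name in enumerate(tables + columns, start=n_q):
            if tok.lemma_.lower() in name.lower():
                edges[(qi, si)] = "link:match"

    # 3) Schema-internal edges (column-of-table, foreign keys) would be
    #    added here from the database metadata; omitted in this sketch.
    return nodes, edges

nodes, edges = build_graph(
    "Show the names of singers older than 30",
    tables=["singer"],
    columns=["name", "age"],
)
for (s, d), rel in sorted(edges.items()):
    print(f"{nodes[s]} -> {nodes[d]}  [{rel}]")
```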

Cited by 29 publications (23 citation statements) · References 27 publications

“…Results are reported in Table 3. We can observe that in all three datasets, RESDSQL-3B + NatSQL surprisingly outperforms all strong competitors by a large margin, which suggests that our decoupling idea can also improve …”
Baseline numbers quoted alongside the excerpt (EM = exact-match accuracy, EX = execution accuracy; column headers inferred; the model name before the first citation is elided):

Model                                         Dev EM  Dev EX  Test EM  Test EX
… (Gan et al. 2021b)                          73.7    75.0    68.7     73.3
SMBOP + GRAPPA (Rubin and Berant 2021)        74.7    75.0    69.5     71.1
DT-Fixup SQL-SP + RoBERTa (Xu et al. 2021)    75.0    -       70.9     -
LGESQL + ELECTRA (Cao et al. 2021)            75.1    -       72.0     -
S2SQL + ELECTRA (Hui et al. 2022)             76.4    -       72.1     -

Section: Results on Robustness Settings
confidence: 96%
“…The input is one or more heterogeneous graphs (Wang et al. 2020a; Hui et al. 2022; Cao et al. 2021; Cai et al. 2021), where a node represents a question token, a table, or a column, and an edge represents the relation between two nodes. Then, relation-aware transformer networks (Shaw, Uszkoreit, and Vaswani 2018) or relational graph neural networks, such as RGCN (Schlichtkrull et al. 2018) and RGAT (Wang et al. 2020b), are applied to encode each node.…”
Section: Graph Encoder
confidence: 99%
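
As background for the quoted description, here is a minimal single-head sketch of the relation-aware self-attention of Shaw, Uszkoreit, and Vaswani (2018) that these encoders build on; the RelationAwareAttention module, dimensions, and relation vocabulary are assumptions for illustration, not any paper's actual implementation.

```python
# Single-head relation-aware self-attention: each relation id between a
# pair of nodes contributes learned key/value biases to the attention.
import math
import torch
import torch.nn as nn

class RelationAwareAttention(nn.Module):
    def __init__(self, d_model, n_relations):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # One learned embedding per relation type, added to keys and values.
        self.rel_k = nn.Embedding(n_relations, d_model)
        self.rel_v = nn.Embedding(n_relations, d_model)
        self.d = d_model

    def forward(self, x, rel):
        # x: (n_nodes, d_model); rel: (n_nodes, n_nodes) integer relation ids
        q, k, v = self.q(x), self.k(x), self.v(x)
        rk, rv = self.rel_k(rel), self.rel_v(rel)  # both (n, n, d)
        # score[i, j] = q_i . (k_j + r_ij^K) / sqrt(d)
        scores = (q.unsqueeze(1) * (k.unsqueeze(0) + rk)).sum(-1) / math.sqrt(self.d)
        attn = scores.softmax(-1)
        # out_i = sum_j attn[i, j] * (v_j + r_ij^V)
        return (attn.unsqueeze(-1) * (v.unsqueeze(0) + rv)).sum(1)

enc = RelationAwareAttention(d_model=64, n_relations=10)
x = torch.randn(5, 64)              # 5 nodes (question tokens + schema items)
rel = torch.randint(0, 10, (5, 5))  # a relation id for every node pair
print(enc(x, rel).shape)            # torch.Size([5, 64])
```

In RAT-SQL-style encoders the relation id for a node pair comes from the graph itself (dependency label, schema-linking match, foreign key, and so on), so edge structure enters every attention layer rather than only the input features.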
“…Moreover, we train all the models on WikiBio and WikiPerson from scratch, and the training cost is rather expensive: 2.5 days using 4 NVIDIA V100 32G GPUs. Lastly, this paper does not compare against pre-trained language models (PLMs) (Devlin et al., 2019; Raffel et al., 2020; Hui et al., 2021, 2022), though our approach may also benefit from pre-trained table encoders such as TAPAS (Müller et al., 2021). The main reasons we do not consider PLMs are that they would make the comparison unfair, introduce more variables, and may make our work lose focus.…”
Section: Limitations
confidence: 99%
“…Research on cross-domain text-to-SQL benchmarks has led to numerous advances. Recent works (Zhao et al., 2021; Rubin and Berant, 2021; Hui et al., 2021) have achieved over 70% accuracy on the Spider benchmark (Yu et al., 2018) and over 90% accuracy on the WikiSQL benchmark (Zhong et al., 2017), which seems to suggest that existing models have already solved most problems in this field. However, follow-up studies (Deng et al., 2021; Gan et al., 2021; Suhr et al., 2020; Shaw et al., 2021; Oren et al., 2020; Keysers et al., 2020) show that generalization performance is much worse in more challenging scenarios.…”
Section: Introduction
confidence: 99%