Findings of the Association for Computational Linguistics: ACL 2022 (2022)
DOI: 10.18653/v1/2022.findings-acl.236

HIE-SQL: History Information Enhanced Network for Context-Dependent Text-to-SQL Semantic Parsing

Abstract: Recently, context-dependent text-to-SQL semantic parsing, which translates natural language into SQL within an interaction process, has attracted a lot of attention. Previous works leverage context-dependence information either from interaction history utterances or from previously predicted SQL queries, but fail to take advantage of both because of the mismatch between natural language and logic-form SQL. In this work, we propose a History Information Enhanced text-to-SQL model (HIE-SQL) to exploit context-dependence in…

Cited by 14 publications (10 citation statements). References: 0 publications.

Citation statements (ordered by relevance):
“…
Model                               QM    IM
EditSQL (Zhang et al., 2019)        39.9  12.3
GAZP (Zhong et al., 2020)           42.0  12.3
IGSQL (Cai and Wan, 2020)           44.1  15.8
RichContext                         41.0  14.0
IST-SQL                             44.4  14.7
R²SQL (Hui et al., 2021)            45.7  19.5
DELTA ♥                             51.7  21.5
SCORE ♦ (Yu et al., 2021b)          52.1  22.0
PICARD † (Scholak et al., 2021)     56.9  24.2
HIE-SQL ♦ (Zheng et al., 2022)      56.4  28.7
…”
Section: Results (mentioning)
confidence: 99%
“…CQR-SQL also surpasses PICARD (Scholak et al., 2021), which is based on the very large pre-trained T5-3B model, by +1.5% QM and +5.2% IM on CoSQL. Compared with the recent HIE-SQL (Zheng et al., 2022), which leverages rich representations of previous SQL queries from a pre-trained SQL language model, our CQR-SQL also achieves significant improvements.…”
Section: Results (mentioning)
confidence: 99%
“…In this scenario, most SOTA parsers (Guo et al. 2019; Wang et al. 2020a; Cao et al. 2021) apply top-down grammar-based decoding, which is consistent with our work. The last is the conversational setting, such as SParC (Yu et al. 2019b) and CoSQL (Yu et al. 2019a), which forces the parser to consider context information when generating SQL in a multi-turn dialogue (Zhang et al. 2019; Zheng et al. 2022).
SSL in Semantic Parsing: Multiple classical methods have been applied to address the lack of annotations for semantic parsing, such as SVM (Kate and Mooney 2007), self-training (Goldwasser et al. 2011), dual learning (Chen et al. 2021b), auto-encoders (Yin et al. 2018), and mean-teacher (Wang et al. 2020b).…”
Section: Related Work (mentioning)
confidence: 99%
“…In prior research on multi-turn text-to-SQL, the primary emphasis has been on harnessing contextual information [8]-[10]. In a multi-turn text-to-SQL task, models are confronted with the challenge of handling relational modeling and contextual modeling concurrently.…”
Section: Introduction (mentioning)
confidence: 99%
“…Several prior studies [9], [11]-[13] have employed neural network encoders that concatenate the current question, the question context, and the schema. Concurrently, a number of approaches have directly incorporated previously generated SQL queries [8], [14]-[16] to aid the model in parsing the current question. However, these methods have tended to overlook the wealth of structural information embedded within the dataset.…”
Section: Introduction (mentioning)
confidence: 99%