2020
DOI: 10.1609/aaai.v34i05.6426

SPARQA: Skeleton-Based Semantic Parsing for Complex Questions over Knowledge Bases

Abstract: Semantic parsing transforms a natural language question into a formal query over a knowledge base. Many existing methods rely on syntactic parsing like dependencies. However, the accuracy of producing such expressive formalisms is not satisfying on long complex questions. In this paper, we propose a novel skeleton grammar to represent the high-level structure of a complex question. This dedicated coarse-grained formalism with a BERT-based parsing algorithm helps to improve the accuracy of the downstream fine-g…
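
The core idea in the abstract is to first extract a coarse skeleton of a long question and only then run fine-grained parsing on the simpler pieces. The sketch below is not the authors' implementation: the Skeleton/Branch classes and the hand-built decomposition are assumptions made here, purely to show what such a coarse-grained representation might look like.

```python
# Illustrative only: a minimal way to represent a "skeleton" of a complex
# question (short trunk + attached modifier branches). The class names and
# the hand-built decomposition below are assumptions of this sketch, not
# SPARQA's actual grammar or parser output.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Branch:
    text: str       # modifier span split off from the original question
    attach_to: str  # head word in the trunk that the span modifies

@dataclass
class Skeleton:
    trunk: str                  # the simplified core question
    branches: List[Branch] = field(default_factory=list)

# Hand-built toy decomposition (not produced by the paper's BERT-based parser):
question = "Which films directed by a person born in Paris won an Oscar?"
skeleton = Skeleton(
    trunk="Which films won an Oscar?",
    branches=[Branch(text="directed by a person born in Paris", attach_to="films")],
)

print("trunk: ", skeleton.trunk)
for b in skeleton.branches:
    print("branch:", b.text, "-> attaches to:", b.attach_to)
```

Each of the resulting short spans is easier to map to knowledge-base relations than the original long question, which is where the abstract locates the accuracy gain for the downstream fine-grained parser.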

Cited by 68 publications (40 citation statements)
References 25 publications
“…Each qmm-ml:FeatureProcessingLayer or qmm-ml:MLModellingLayer has a structure of qmm-ml:Input, qmm-ml:Algorithm, and qmm-ml:Output. A piece of the ML pipeline ontology looks like this:
(8)  Individual: qmm-ml-i:p1
(9)  Types: qmm-ml:Pipeline
(10) Facts: qmm-ml:hasNextLayer qmm-ml-i:l1
(11) Individual: qmm-ml-i:l1
(12) Types: qmm-ml:FeatureProcessingLayer
(13) Facts: qmm-ml:hasNextLayer qmm-ml-i:l2, :hasInputOutputCombination io1, io2
(14) Individual: qmm-ml-i:io1
(15) Types: qmm-ml:InputOutputCombination
(16) Facts: qmm-ml:hasInput :FG-SingleFeature, qmm-ml:hasAlgorithm qmm-ml:Maintain, qmm-ml:hasOutput :FG-SingleFeatureMaintained
(17) Individual: qmm-ml-i:io2
(18) Types: qmm-ml:InputOutputCombination
(19) Facts: qmm-ml:hasInput :FG-ProcessCurve, qmm-ml:hasAlgorithm qmm-ml:GetStats, qmm-ml:hasOutput qmm-ml:FG-TSStats…”
Section: Pipeline (mentioning)
confidence: 99%
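
The quoted statement describes a layered ML-pipeline ontology: a Pipeline chained to layers via hasNextLayer, each layer holding Input/Algorithm/Output combinations. As a rough illustration of how such individuals could be assembled programmatically, here is a minimal owlready2 sketch; the IRI, the re-declared classes and properties, and the choice to model Maintain and the feature groups as individuals are all assumptions of this sketch, not the authors' qmm ontologies.

```python
# Minimal owlready2 sketch of the quoted pipeline individuals (axioms (8)-(16)).
# The IRI, the re-declared classes/properties, and modelling Maintain and the
# feature groups as individuals are assumptions of this sketch, not the
# authors' qmm-ml ontology file.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/qmm-ml.owl")  # hypothetical IRI

with onto:
    class Pipeline(Thing): pass
    class FeatureProcessingLayer(Thing): pass
    class InputOutputCombination(Thing): pass
    class FeatureGroup(Thing): pass
    class Algorithm(Thing): pass

    class hasNextLayer(ObjectProperty): pass
    class hasInputOutputCombination(ObjectProperty): pass
    class hasInput(ObjectProperty): pass
    class hasAlgorithm(ObjectProperty): pass
    class hasOutput(ObjectProperty): pass

    p1 = Pipeline("p1")                      # cf. axioms (8)-(10)
    l1 = FeatureProcessingLayer("l1")        # cf. axioms (11)-(13)
    io1 = InputOutputCombination("io1")      # cf. axioms (14)-(16)
    p1.hasNextLayer = [l1]
    l1.hasInputOutputCombination = [io1]
    io1.hasInput = [FeatureGroup("FG-SingleFeature")]
    io1.hasAlgorithm = [Algorithm("Maintain")]
    io1.hasOutput = [FeatureGroup("FG-SingleFeatureMaintained")]

onto.save(file="qmm-ml-example.owl")  # serialize the toy ontology to RDF/XML
```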
“…We now illustrate this reasoning with an example:
(18) Class: qmm-rsw:ElectrodeCap
(19) SubClassOf: qmm-core:SystemComponent
(20) SubClassOf: qmm-rsw:hasElectrodeCapStatus only qmm-rsw:WearCount
(21) Class: qmm-rsw:WearCount
(22) SubClassOf: qmm-core:ToolWearingStatus
(23) Class: qmm-core:ToolWearingStatus
(24) SubClassOf: qmm-core:Status
(25) SubClassOf: qmm-core:hasMLFeatureGroup only qmm-ml:FG-Wear
Axiom 20 defines that (the elements of the class) qmm-rsw:ElectrodeCap can only have the status parameter qmm-rsw:WearCount. The latter has a superclass qmm-core:ToolWearingStatus from the Core ontology (Axiom 22), which is assigned the only associated machine learning feature group qmm-ml:FG-Wear (Axiom 25).…”
Section: Semantically-enhanced Machine Learning (mentioning)
confidence: 99%
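
The reasoning chain in this statement (ElectrodeCap restricted to WearCount, which as a ToolWearingStatus is restricted to the feature group FG-Wear) relies on universal ("only") restrictions. A minimal owlready2 sketch of those two restrictions follows; class and property names mirror the quote but are re-declared from scratch, with hyphens replaced by underscores, so this is an illustration under assumptions rather than the authors' Core/RSW ontologies.

```python
# Minimal owlready2 sketch of the two "only" restrictions in the quote
# (axioms (20) and (25)). Names mirror the quote but are re-declared here;
# this is not the authors' Core/RSW ontologies.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/qmm-core.owl")  # hypothetical IRI

with onto:
    class SystemComponent(Thing): pass
    class Status(Thing): pass
    class ToolWearingStatus(Status): pass        # cf. axiom (24)
    class MLFeatureGroup(Thing): pass
    class FG_Wear(MLFeatureGroup): pass          # stands in for qmm-ml:FG-Wear

    class hasElectrodeCapStatus(ObjectProperty): pass
    class hasMLFeatureGroup(ObjectProperty): pass

    class WearCount(ToolWearingStatus): pass     # cf. axiom (22)
    class ElectrodeCap(SystemComponent): pass    # cf. axiom (19)

    # Axiom (20): an ElectrodeCap's status values may only be WearCount
    ElectrodeCap.is_a.append(hasElectrodeCapStatus.only(WearCount))
    # Axiom (25): a ToolWearingStatus may only link to the FG_Wear feature group
    ToolWearingStatus.is_a.append(hasMLFeatureGroup.only(FG_Wear))

# Chaining the two restrictions from ElectrodeCap leads to FG_Wear, i.e. the
# feature group a reasoner or query would associate with electrode-cap wear.
```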
“…The task has attracted much attention because logically organized entities and relations can explicitly promote the answer reasoning process. Recently, researchers have focused more on the complex KBQA task, which involves multi-hop reasoning, constrained reasoning, and numerical reasoning (Bao et al., 2016; Lan and Jiang, 2020; Sun et al., 2020; Kapanipathi et al., 2021). Existing methods for KBQA tasks can be generally categorized into semantic parsing-based (SP-based) methods (Lan and Jiang, 2020; Sun et al., 2020) and embedding-based methods (Sun et al., 2018; Saxena et al., 2020; …)…”
Section: Introduction (mentioning)
confidence: 99%
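
The "multi-hop" and "constrained" reasoning this statement refers to amounts to formal queries that chain several KB relations. As an illustration only (the endpoint, the Wikidata property/entity IDs, and the example question are choices of this sketch, not taken from the cited works, which largely target Freebase-style knowledge bases), a two-hop query can be issued like this:

```python
# A two-hop query as an illustration of "multi-hop" KBQA. Endpoint and IDs
# are Wikidata's, chosen here for illustration only.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://query.wikidata.org/sparql",
                       agent="kbqa-illustration/0.1")
sparql.setQuery("""
SELECT ?filmLabel WHERE {
  ?film wdt:P31 wd:Q11424 ;      # ?film is an instance of "film"
        wdt:P57 ?director .       # hop 1: film -> director
  ?director wdt:P19 wd:Q90 .      # hop 2: director -> born in Paris
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
LIMIT 5
""")
sparql.setReturnFormat(JSON)

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["filmLabel"]["value"])
```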
“…[17] proposed a modified staged query graph generation method by allowing longer relation paths. Sun et al. [18] proposed a novel skeleton grammar that uses a BERT-based parsing algorithm to improve the downstream fine-grained semantic parsing. To avoid generating noisy candidate queries, Chen et al. [19] proposed abstract query graphs (AQG) to describe query structures.…”
Section: Introduction (mentioning)
confidence: 99%
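
The query graphs and abstract query graphs (AQG) mentioned here are small graphs whose nodes are entities or variables and whose edges stand for KB relations, with the relation labels left open in the abstract case. The sketch below is a generic illustration of that data structure under this section's running example; it is not the construction procedure of [17] or [19].

```python
# Generic illustration of a query graph / abstract query graph (AQG): nodes
# are variables or grounded entities, edges are KB relations; in an AQG the
# relation labels are left open. Not the method of [17] or [19].
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    name: str       # variable ("?film") or grounded entity ("Paris")
    grounded: bool  # True if bound to a concrete KB entity

@dataclass
class Edge:
    head: str
    tail: str
    relation: Optional[str] = None  # None = abstract (structure only)

@dataclass
class QueryGraph:
    nodes: List[Node] = field(default_factory=list)
    edges: List[Edge] = field(default_factory=list)

# Abstract query graph for "films directed by a person born in Paris":
# the structure is fixed first; relations are grounded to KB predicates later.
aqg = QueryGraph(
    nodes=[Node("?film", False), Node("?director", False), Node("Paris", True)],
    edges=[Edge("?film", "?director"), Edge("?director", "Paris")],
)
print(len(aqg.nodes), "nodes,", len(aqg.edges), "edges")
```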