Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing 2022
DOI: 10.18653/v1/2022.emnlp-main.555
TIARA: Multi-grained Retrieval for Robust Question Answering over Large Knowledge Base

Cited by 15 publications (7 citation statements) | References 0 publications
“…The pipeline of such systems can be broken down into two main steps: in the first step, a retriever processes an input question to gather information relevant to the formation of the corresponding logical form; in the second step, a reader, which is often a fine-tuned LM, takes in the given question and the retrieved information and outputs the desired logical form. Different systems are designed for different kinds of information to be retrieved and fed into the reader, such as entities and relations detected from input questions, candidate logical forms, candidate query paths, or linearized facts. To ensure that the generated queries are compliant with the ontology of the KG, many of these systems impose decoding constraints on the reader or perform an additional step of revision to realign output logical forms to the KG’s ontology. …”
Section: Related Work
confidence: 99%
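The two-step retriever-reader pipeline described in the statement above can be sketched as follows. This is a minimal illustration under stated assumptions: the function names and the toy string-overlap retriever are hypothetical, and the reader is a placeholder where real systems use a fine-tuned LM; it is not the implementation of TIARA or any cited system.

```python
from typing import Dict, List


def retrieve_context(question: str, kb: Dict[str, List[str]]) -> List[str]:
    """Retriever step: gather KB items (stand-ins for entities, relations,
    candidate logical forms, or linearized facts) relevant to the question.
    Here relevance is simple token overlap; real retrievers are learned."""
    tokens = set(question.lower().split())
    return [item for item in kb.get("items", [])
            if tokens & set(item.lower().split())]


def read(question: str, context: List[str]) -> str:
    """Reader step: real systems feed question + retrieved context to a
    fine-tuned LM that generates the logical form; this placeholder just
    assembles the inputs into a query string."""
    return f"QUERY({question} | {'; '.join(context)})"


def answer(question: str, kb: Dict[str, List[str]]) -> str:
    """Full pipeline: retrieve, then read."""
    return read(question, retrieve_context(question, kb))
```

Decoding constraints or a post-hoc revision step, as mentioned in the statement, would sit inside the reader to keep the generated logical form consistent with the KG's ontology.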
“…(2021; Yu et al., 2022; Gu and Su, 2022; Shu et al., 2022) have achieved remarkable performance on the Zero-shot split, giving the impression that KBQA generalization might be a solved problem. However, a cross-dataset evaluation of the models trained on GrailQA reveals that they do not transfer well even for the simpler one- or two-hop questions.…”
Section: Introduction
confidence: 99%
“…For entity linking, we adopt the same setting as our baseline models. On GrailQA, we use the entity linking results from TIARA (Shu et al. 2022). On WebQSP, we adopt the entity linking results from ELQ (Li et al. 2020).…”
Section: E Implementation Details
confidence: 99%