Proceedings of the 3rd Workshop on Machine Reading for Question Answering 2021
DOI: 10.18653/v1/2021.mrqa-1.8
Multi-modal Retrieval of Tables and Texts Using Tri-encoder Models

Abstract: Open-domain extractive question answering works well on textual data by first retrieving candidate texts and then extracting the answer from those candidates. However, some questions cannot be answered by text alone but require information stored in tables. In this paper, we present an approach for retrieving both texts and tables relevant to a question by jointly encoding texts, tables and questions into a single vector space. To this end, we create a new multi-modal dataset based on text and table datasets f…
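The abstract's core idea, encoding questions, texts, and tables into one shared vector space and retrieving the most similar candidate regardless of modality, can be sketched with toy encoders. The bag-of-words embeddings and all names below (`embed`, `linearize_table`, `retrieve`) are illustrative stand-ins for the paper's BERT-based tri-encoder, not its actual implementation.

```python
from collections import Counter
import math

def tokens(s):
    return s.lower().split()

def linearize_table(table):
    # Flatten header and rows into one token sequence, a common way
    # to make a table consumable by a text-style encoder.
    header, rows = table
    toks = list(header)
    for row in rows:
        toks.extend(row)
    return [t.lower() for t in toks]

def embed(toks, vocab):
    # Toy bag-of-words vector in a shared space; a stand-in for the
    # dense embedding a trained neural encoder would produce.
    counts = Counter(t for t in toks if t in vocab)
    vec = [counts[w] for w in vocab]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def retrieve(question, texts, tables):
    # Encode the question and every text/table candidate into the same
    # vector space, then rank all candidates by inner-product similarity.
    vocab = sorted({t for x in texts for t in tokens(x)}
                   | {t for tb in tables for t in linearize_table(tb)})
    q = embed(tokens(question), vocab)
    candidates = ([("text", i, embed(tokens(x), vocab))
                   for i, x in enumerate(texts)]
                  + [("table", i, embed(linearize_table(tb), vocab))
                     for i, tb in enumerate(tables)])
    best = max(candidates, key=lambda c: sum(a * b for a, b in zip(q, c[2])))
    return best[0], best[1]  # (modality, index) of the top candidate
```

Because texts and tables live in the same space, a single nearest-neighbor search answers both "which passage" and "which table" at once; a question about a cell value can surface a table even when no passage matches.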

Cited by 8 publications (4 citation statements)
References 20 publications
“…ODTQA is also closely related to open-domain text-based QA (Kwiatkowski et al., 2019; Khashabi et al., 2021) and information retrieval (Luan et al., 2021a; Humeau et al., 2020; Tang et al., 2021). Compared to text retrieval, tabular characteristics need to be considered in ODTQA (Herzig et al., 2021; Chen et al., 2023; Kostić et al., 2021).…”
Section: Related Work
confidence: 99%
“…The uncased BERT-base model is used as the encoder on two datasets. Following Tri-encoder (Kostić et al., 2021) and UTP (Chen et al., 2023), no downprojection is used for the final representations on the NQ-TABLES dataset. However, for a comprehensive evaluation, we also report the performance of the proposed model with projection in Table 7.…”
Section: B Experimental Setup
confidence: 99%
“…(2) Bi-Encoder (Kostić et al., 2021) is a dense retriever which uses a BERT-based encoder for questions, and a shared BERT-based encoder to separately encode tables and text as representations for retrieval. (3) Tri-Encoder (Kostić et al., 2021) is a dense retriever that uses three individual BERT-based encoders to separately encode questions, tables, and text as representations.…”
Section: A1 Settings
confidence: 99%
“…Pan et al. (2021) later follows this work and improves table retrieval with a 2-step retriever. Kostić et al. (2021) discusses the use of dense vector embeddings to enhance the performance of bi- and tri-encoders in retrieving both tables and text.…”
Section: Introduction
confidence: 99%