2019 IEEE 20th International Conference on Information Reuse and Integration for Data Science (IRI) 2019
DOI: 10.1109/iri.2019.00058
An Overview of Utilizing Knowledge Bases in Neural Networks for Question Answering

Cited by 4 publications (2 citation statements)
References 52 publications
“…During training, we infuse the model with knowledge from an external source, distinct from datasets of paired V&L data. The challenge is that standard V&L data is not annotated or paired with such additional knowledge. Even though techniques have been proposed to exploit additional data in NLP, including text-based question answering (Kafle et al., 2019; Lv et al., 2019; Lin et al., 2019; Rajani et al., 2019), little work has been done on the extension to V&L. Works in NLP with benchmarks of knowledge-demanding questions (Yang et al., 2015; Talmor et al., 2018; Mihaylov et al., 2018; Sap et al., 2019) have shown that knowledge bases (KBs) contain information that can benefit large-scale pretrained models. Motivated by this line of evidence, we aim to evaluate similar mechanisms for V&L tasks.…”
Section: Introduction
confidence: 99%
“…can be answered by the fact "(Estella Warren, people.person.nationality, Canada)," where the tail entity "Canada" is returned as the answer. As demonstrated by Kafle et al. (2020), KBQA has great practical application potential.…”
Section: Introduction
confidence: 99%
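The fact-lookup pattern described in the excerpt above can be sketched minimally: a knowledge base is a set of (head entity, relation, tail entity) triples, a question is mapped to a (head, relation) query, and the matching triple's tail entity is returned as the answer. The toy KB contents and the `answer` helper below are illustrative assumptions, not part of the cited work.

```python
# Minimal sketch of triple-based KBQA lookup. A real system would link the
# question to a head entity and relation (e.g. via entity linking and
# relation classification); here the query is given directly.

KB = {
    ("Estella Warren", "people.person.nationality"): "Canada",
    ("Estella Warren", "people.person.profession"): "Actor",
}

def answer(head: str, relation: str):
    """Return the tail entity for a (head, relation) query, or None."""
    return KB.get((head, relation))

print(answer("Estella Warren", "people.person.nationality"))  # Canada
```

For the question "What is Estella Warren's nationality?", the query resolves against the triple in the excerpt and yields "Canada" as the answer.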