Proceedings of the First Workshop on Natural Language Interfaces 2020
DOI: 10.18653/v1/2020.nli-1.1

Answering Complex Questions by Combining Information from Curated and Extracted Knowledge Bases

Abstract: Knowledge-based question answering (KB-QA) has long focused on simple questions that can be answered from a single knowledge source, a manually curated or an automatically extracted KB. In this work, we look at answering complex questions which often require combining information from multiple sources. We present a novel KB-QA system, MULTIQUE, which can map a complex question to a complex query pattern using a sequence of simple queries each targeted at a specific KB. It finds simple queries using a neural-ne…

Cited by 19 publications (13 citation statements)
References 26 publications
“…One of the main problems with RNNs is the drop in performance on longer and more complex sequences. To address this, recent works use the attention mechanism to emphasize the most relevant parts of an NLQ and preserve the context of the sentences [Bhutani et al 2019, Ding et al 2019, Tong et al 2019, Bhutani et al 2020]. Although RNNs are widely used, Memory Networks [Miller et al 2016, Hao et al 2019, Saha et al 2018, Hua et al 2020b, Hua et al 2020a] and Convolutional Neural Networks [Hu et al 2018, Bao et al 2016] can also be used at this stage.…”
Section: Question Representation and Candidate Generation
unclassified
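The attention mechanism mentioned in the quote above, which weights the most relevant parts of a natural-language question (NLQ), can be illustrated with a minimal sketch. This is a generic dot-product attention over token embeddings, not the specific architecture of any cited system; the toy vectors and the `attention` function name are illustrative assumptions.

```python
import numpy as np

def attention(query, token_vecs):
    """Minimal dot-product attention: score each question token
    against a query vector, normalize with a softmax, and return
    the weighted context vector plus the attention weights."""
    scores = token_vecs @ query                  # one relevance score per token
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    context = weights @ token_vecs               # weighted sum of token vectors
    return context, weights

# Toy example: four question tokens with 3-dimensional embeddings.
tokens = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.9, 0.1, 0.0],
                   [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.0, 0.0])
context, weights = attention(query, tokens)
```

Tokens whose embeddings align with the query receive higher weights, so the context vector emphasizes the most relevant parts of the question while still mixing in the rest of the sentence.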
“…In this case, advanced queries are needed to collect the answer from the KBs, such as exploring indirect relations between entities, multiple relations, and qualitative and quantitative constraints, among others [Bao et al 2016]. Currently, QA systems achieve better results when answering simple questions, and because of this, QA systems for complex questions are receiving attention [Ding et al 2019, Bhutani et al 2020].…”
Section: Introduction
unclassified
“…The sub-query graphs generated sequentially by MULTIQUE for an example question "What college did the author of 'The Hobbit' attend?" [5]. To tackle multi-hop relation questions, instead of generating the whole query graph at once, MULTIQUE [5] breaks the original question into simple partial queries and builds sub-query graphs for the partial queries one by one.…”
Section: Neural-enhanced Symbolic Reasoning
mentioning
confidence: 99%
“…[5]. To tackle multi-hop relation questions, instead of generating the whole query graph at once, MULTIQUE [5] breaks the original question into simple partial queries and builds sub-query graphs for the partial queries one by one. The search space is shrunk since, each time the whole query graph is extended by a new sub-query graph, the model only needs to consider the intermediate answers returned by the previously best-matched sub-query graph.…”
Section: Neural-enhanced Symbolic Reasoning
mentioning
confidence: 99%
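The decomposition strategy described above, answering a multi-hop question through a chain of simple sub-queries where each step starts from the previous step's answers, can be sketched as follows. This is a schematic illustration, not MULTIQUE's actual code: the tiny in-memory KB, the relation names, and the two-step decomposition of the example question are all assumptions made for the sketch.

```python
# Illustrative toy KB: (entity, relation) -> list of answer entities.
KB = {
    ("The Hobbit", "author"): ["J. R. R. Tolkien"],
    ("J. R. R. Tolkien", "attended"): ["Exeter College, Oxford"],
}

def run_subquery(entities, relation, kb):
    """Execute one simple sub-query: follow `relation` from each
    candidate entity and collect the intermediate answers."""
    answers = []
    for e in entities:
        answers.extend(kb.get((e, relation), []))
    return answers

def answer(seed, relations, kb):
    """Chain sub-queries: the answers of step i become the candidate
    entities of step i+1, so each step only considers the previous
    step's answers rather than the whole KB."""
    entities = [seed]
    for rel in relations:
        entities = run_subquery(entities, rel, kb)
    return entities

# "What college did the author of 'The Hobbit' attend?"
# decomposed (by assumption) into: author -> attended.
print(answer("The Hobbit", ["author", "attended"], KB))
# -> ['Exeter College, Oxford']
```

The search-space reduction noted in the quote falls out of the chaining: at each hop the candidate set is only the handful of intermediate answers from the previous sub-query, not every entity in the KB.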
“…Papers' Selection: The research papers reviewed in this survey are high-quality papers selected from the top NLP and AI conferences, including but not limited to ACL, SIGIR, NeurIPS, NAACL, EMNLP, ICLR, AAAI, IJCAI, CIKM, SIGKDD, and WSDM. Beyond papers published at the aforementioned conferences, we have also considered strong papers in the e-Print archive, as they manifest the latest research outputs.…”
Section: Introduction
mentioning
confidence: 99%