DOI: 10.1145/3506805

Reinforcement Learning from Reformulations in Conversational Question Answering over Knowledge Graphs

Cited by 8 publications (13 citation statements)
References 0 publications
“…The dynamic sub-graph approach was extended by Kaiser et al (2021) with their model, CONQUER. It uses reinforcement learning to select graph traversal actions.…”
Section: Conversational QA on Knowledge Graphs (mentioning)
confidence: 99%
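The quoted statement only names the technique. For readers unfamiliar with the idea, the minimal sketch below shows one way "reinforcement learning to select graph traversal actions" could be realized: an epsilon-greedy value estimate over (entity, predicate) actions with a binary answer-correctness reward. The class, parameters, and reward design are illustrative assumptions, not CONQUER's actual architecture.

```python
import random
from collections import defaultdict

# Hypothetical sketch: a bandit-style policy that learns which outgoing edge
# (graph traversal action) to follow from a context entity. Names and the
# reward signal are assumptions for illustration, not the CONQUER implementation.
class TraversalPolicy:
    def __init__(self, epsilon=0.1, lr=0.5):
        self.q = defaultdict(float)   # estimated value of (entity, predicate) actions
        self.epsilon = epsilon        # exploration rate
        self.lr = lr                  # learning rate

    def select_action(self, entity, candidate_predicates):
        # epsilon-greedy choice among the entity's outgoing predicates
        if random.random() < self.epsilon:
            return random.choice(candidate_predicates)
        return max(candidate_predicates, key=lambda p: self.q[(entity, p)])

    def update(self, entity, predicate, reward):
        # reward could be 1.0 if the traversed edge reached the gold answer
        # (or the user did not reformulate), else 0.0
        key = (entity, predicate)
        self.q[key] += self.lr * (reward - self.q[key])
```

In this toy formulation, each answering step is a single action choice; a full approach would condition the policy on the question and conversation history rather than on a lookup table.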
“…One could, for example, provide users with passages or documents, and ask them to create a sequence of questions from there [6,37]. Alternatively, one could also provide annotators with some conversation from a benchmark so far, and request their continuation in some fashion [20]. Large-scale synthetic benchmarks would try to automate this as far as possible using rules and templates [41].…”
Section: The ConvMix Benchmark (mentioning)
confidence: 99%
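To make the last sentence of the quote concrete, here is a minimal, hypothetical sketch of rule- and template-based synthetic conversation generation from knowledge-graph facts. The templates, predicates, and example facts are invented for illustration and are not taken from any of the cited benchmarks.

```python
# Hypothetical sketch of template-based synthetic benchmark construction:
# fill question templates from KG facts to produce a conversational turn sequence.
TEMPLATES = {
    "director": ("Who directed {film}?", "Who directed it?"),
    "release_year": ("When was {film} released?", "When was it released?"),
}

def generate_conversation(film, facts):
    """facts: dict mapping predicate name -> object value for one film."""
    conversation = []
    for i, (predicate, value) in enumerate(facts.items()):
        full, follow_up = TEMPLATES[predicate]
        # the first turn names the topic explicitly; later turns use the
        # coreferent follow-up form to mimic conversational ellipsis
        question = full.format(film=film) if i == 0 else follow_up
        conversation.append((question, value))
    return conversation

print(generate_conversation(
    "Inception",
    {"director": "Christopher Nolan", "release_year": "2010"}))
```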
“…Prepending history turns. Adding turns from the history to the beginning of the current question is still considered a simple yet tough-to-beat baseline in almost all ConvQA tasks [8,20,33,51], and so we investigate the same here as well. Specifically, we consider four variants: i) add only the initial turn ⟨q_0, a_0⟩, as it often establishes the topic of the conversation (Prepend init); ii) add only the previous turn ⟨q_{i-1}, a_{i-1}⟩, as it sets immediate context for the current information need (Prepend prev); iii) add both initial and previous turns (Prepend init+prev); and iv) add all turns {⟨q_t, a_t⟩}_{t=0}^{i-1} (Prepend all).…”
Section: Baselines (mentioning)
confidence: 99%
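The four prepending variants described in the quote map directly to a small piece of input preprocessing. The sketch below assumes the history is a list of (question, answer) string pairs and that turns are simply concatenated in front of the current question; the function and variant names are hypothetical and not the cited paper's code.

```python
# Hypothetical sketch of the four history-prepending baselines described above.
# `history` is a list of (question, answer) turns; `current` is the question at turn i.
def prepend_history(history, current, variant="all"):
    if not history:
        context = []
    elif variant == "init":            # only the initial turn <q_0, a_0>
        context = [history[0]]
    elif variant == "prev":            # only the previous turn <q_{i-1}, a_{i-1}>
        context = [history[-1]]
    elif variant == "init+prev":       # initial and previous turns
        context = [history[0]] + ([history[-1]] if len(history) > 1 else [])
    else:                              # "all": every turn so far
        context = list(history)
    parts = [f"{q} {a}" for q, a in context] + [current]
    return " ".join(parts)

# Example usage
turns = [("Who directed Inception?", "Christopher Nolan"),
         ("When was it released?", "2010")]
print(prepend_history(turns, "Who composed the score?", variant="init+prev"))
```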