Proceedings of the Second DialDoc Workshop on Document-Grounded Dialogue and Conversational Question Answering 2022
DOI: 10.18653/v1/2022.dialdoc-1.7
Conversational Search with Mixed-Initiative - Asking Good Clarification Questions backed-up by Passage Retrieval

Abstract: We deal with the scenario of conversational search, where user queries are under-specified or ambiguous. This calls for a mixed-initiative setup: the user asks (queries) and the system answers, and the system asks (clarification questions) and the user responds, in order to clarify her information needs. We focus on the task of selecting the next clarification question given the conversation context. Our method leverages passage retrieval from background content to fine-tune two deep-learning models for ranking cand…
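The abstract describes ranking candidate clarification questions using passages retrieved from background content. A minimal toy sketch of that pipeline is shown below; all names are hypothetical, and simple token-overlap scoring stands in for the paper's passage retriever and fine-tuned deep ranking models.

```python
# Toy sketch of retrieval-backed clarification-question ranking.
# Token overlap is a stand-in for the paper's learned retriever/rankers.

def tokens(text):
    """Lowercase bag-of-words tokenization."""
    return set(text.lower().split())

def retrieve_passages(query, corpus, k=2):
    """Return the k background passages most similar to the query."""
    scored = sorted(corpus,
                    key=lambda p: len(tokens(p) & tokens(query)),
                    reverse=True)
    return scored[:k]

def rank_clarifications(context, candidates, corpus):
    """Rank candidate clarification questions against the conversation
    context enriched with retrieved background passages."""
    evidence = " ".join([context] + retrieve_passages(context, corpus))
    ev = tokens(evidence)
    return sorted(candidates,
                  key=lambda q: len(tokens(q) & ev),
                  reverse=True)

corpus = [
    "jaguar the animal lives in the rainforest",
    "jaguar cars are manufactured in england",
]
candidates = [
    "do you mean the animal or the car brand",
    "what is your favorite color",
]
best = rank_clarifications("tell me about jaguar", candidates, corpus)[0]
print(best)  # the on-topic clarification ranks first
```

In the paper's actual setup the overlap scores would be replaced by two fine-tuned neural rankers; this sketch only illustrates how retrieved passages contextualize the candidate-question scoring.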

Cited by 1 publication (2 citation statements)
References 4 publications
“…However, this concept is closely related to question generation [5] and clarification question generation [3,6]. The concept of clarification questions was formally introduced in [7], and since then, research into generating these questions has spanned a wide range of scenarios, including open-domain systems (AmbigQA) [8], knowledge bases (CLAQUA) [6], closed-book systems (CLAM) [9], information-seeking (ISEEQ) [10], task-oriented dialog systems (CLARIT) [3], and conversational search [11]. Rahmani et al [12] surveyed the various methodologies, datasets, and different evaluation strategies used for clarification questions.…”
Section: Related Work
Mentioning confidence: 99%
“…The distinct characteristics of RAGs [13] make them essential for effective information retrieval, and they are widely used for question-answering tasks [1] and for generating clarification questions [10,11].…”
Section: Related Work
Mentioning confidence: 99%