2022
DOI: 10.48550/arXiv.2212.07769
Preprint

CLAM: Selective Clarification for Ambiguous Questions with Generative Language Models

Cited by 3 publications (3 citation statements)
References 0 publications

“…However, this concept is closely related to question generation [5] and clarification question generation [3,6]. The concept of clarification questions was formally introduced in [7], and since then, research into generating these questions has spanned a wide range of scenarios, including open-domain systems (AmbigQA) [8], knowledge bases (CLAQUA) [6], closed-book systems (CLAM) [9], information-seeking (ISEEQ) [10], task-oriented dialog systems (CLARIT) [3], and conversational search [11]. Rahmani et al. [12] surveyed the methodologies, datasets, and evaluation strategies used for clarification questions.…”
Section: Related Work
Mentioning confidence: 99%

“…Amplayo et al. (2023) suggest optimal prompts specifically engineered for the task. Kuhn et al. (2022) prompt LLMs to clarify ambiguous questions selectively. However, these studies do not utilize external information to ensure the factual correctness of the disambiguations, thereby potentially increasing the risk of hallucinations from LLMs.…”
Section: Related Work
Mentioning confidence: 99%
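
The selective-clarification behavior this excerpt attributes to Kuhn et al. (2022) follows a simple two-step pattern: first decide whether the input question is ambiguous, and only then spend a dialogue turn on a clarifying question before answering. A minimal sketch in Python, assuming a generic `llm(prompt) -> str` completion function (a hypothetical placeholder, not the paper's actual prompts or pipeline):

```python
# Hedged sketch of selective clarification via prompting. `llm` is a
# hypothetical text-completion callable (any backend works); the prompt
# wording and two-step structure are illustrative, not the paper's method.
from typing import Callable

def answer_selectively(llm: Callable[[str], str], question: str) -> str:
    # Step 1: ask the model whether the question is ambiguous.
    verdict = llm(
        f"Question: {question}\nIs this question ambiguous? Answer Yes or No:"
    ).strip().lower()

    if verdict.startswith("yes"):
        # Step 2: only for ambiguous questions, generate a clarifying
        # question and collect the user's reply (a second dialogue turn,
        # simulated here with input()).
        clarifying_q = llm(
            f"Question: {question}\n"
            "Write one short clarifying question that resolves the ambiguity:"
        ).strip()
        user_reply = input(f"{clarifying_q}\n> ")
        # Step 3: answer conditioned on the clarified context.
        final_prompt = (
            f"Question: {question}\n"
            f"Clarifying question: {clarifying_q}\n"
            f"User reply: {user_reply}\nAnswer:"
        )
    else:
        # Unambiguous questions are answered directly, with no extra turn.
        final_prompt = f"Question: {question}\nAnswer:"

    return llm(final_prompt).strip()
```

Keeping the ambiguity check separate from question generation is what makes the clarification selective: unambiguous inputs skip the extra turn entirely.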
“…Kuhn et al. (2023) and Lin et al. (2023) use similar sampling-based methods over free-form question answering, using slightly different formulations of confidence scores, but they do not investigate ambiguous questions. Kuhn et al. (2022) examine synthetically-created ambiguous questions, but focus on multi-turn interactions.…”
Section: Calibration and Selective QA
Mentioning confidence: 99%
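
The sampling-based confidence scores this excerpt refers to share a common shape: draw several answers from the model at nonzero temperature and measure how strongly they agree. A minimal Python sketch of one common formulation (majority agreement), assuming a hypothetical stochastic completion function `llm_sample`; the cited papers use related but not identical scores, e.g. entropy over clusters of semantically equivalent answers:

```python
# Hedged sketch of a sampling-based confidence score for free-form QA.
# `llm_sample` is a hypothetical stochastic completion function; the
# agreement score below is one common formulation, not any paper's exact one.
from collections import Counter
from typing import Callable, List, Optional

def sample_answers(llm_sample: Callable[[str], str],
                   question: str, k: int = 10) -> List[str]:
    """Draw k answers at nonzero temperature and normalize them."""
    return [llm_sample(f"Q: {question}\nA:").strip().lower() for _ in range(k)]

def agreement_confidence(answers: List[str]) -> float:
    """Fraction of samples matching the modal answer. Real systems group
    semantically equivalent answers (e.g. via an NLI model) rather than
    relying on the exact string match used here."""
    _, top_count = Counter(answers).most_common(1)[0]
    return top_count / len(answers)

def selective_answer(llm_sample: Callable[[str], str], question: str,
                     threshold: float = 0.6) -> Optional[str]:
    """Answer only when sampled agreement clears the threshold."""
    answers = sample_answers(llm_sample, question)
    if agreement_confidence(answers) >= threshold:
        return Counter(answers).most_common(1)[0][0]
    return None  # abstain, defer, or ask a clarifying question instead
```

Thresholding this score gives the selective-QA regime the excerpt describes: the model answers only when its sampled answers agree, and abstains otherwise.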