2022
DOI: 10.48550/arxiv.2210.17525
Preprint
Query Refinement Prompts for Closed-Book Long-Form Question Answering

Abstract: Large language models (LLMs) have been shown to perform well both in answering questions and in producing long-form texts in few-shot closed-book settings. While the former can be validated using well-known evaluation metrics, the latter is difficult to evaluate. We resolve the difficulty of evaluating long-form output by doing both tasks at once: question answering that requires long-form answers. Such questions tend to be multifaceted, i.e., they may have ambiguities and/or require information from mu…

Cited by 2 publications (2 citation statements)
References 27 publications
“…The respective ML task is called long-form question answering [15], which was originally designed to involve document retrieval before answering the question. However, LLMs, and to some degree also MLMs, should be capable of performing it as closed-book QA [16]. This poses the problem that evaluation of results is difficult due to the ambiguity of questions (ibid) and other challenges.…”
Section: Introduction
confidence: 99%
“…Desirable answers to misleading questions require discerning and resolving the misleading elements. A previous work by Amplayo et al. (2022) focuses on ambiguous questions by introducing query refinement prompts that encourage LMs to consider multiple facets of the question. Another work (Kim et al., 2021) tackles questions containing false presuppositions (FP) by extracting and verifying presuppositions.…”
Section: Introduction
confidence: 99%
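The query refinement idea quoted above can be sketched as a prompt template that first asks the model to enumerate the facets of an ambiguous question before composing a long-form answer. The template wording and function name below are illustrative assumptions for exposition, not the paper's exact prompts:

```python
def build_query_refinement_prompt(question: str) -> str:
    """Sketch of a query refinement prompt (hypothetical template):
    ask the LM to surface a question's facets, answer each, then
    merge them into one long-form answer."""
    return (
        f"Question: {question}\n"
        "Step 1: List the distinct facets or interpretations of this question.\n"
        "Step 2: Answer each facet in turn.\n"
        "Step 3: Combine the facet answers into a single long-form answer.\n"
    )

# The resulting string would be sent to an LLM in a closed-book,
# few-shot setting; no retrieval step is involved.
prompt = build_query_refinement_prompt("Who invented the telephone?")
print(prompt)
```

The point of the refinement step is that multifaceted questions (the kind long-form QA targets) are decomposed explicitly, rather than leaving the model to pick one interpretation silently.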