2022
DOI: 10.48550/arxiv.2205.11822
Preprint

Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations

Cited by 12 publications (12 citation statements). References 0 publications.
“…• Ozturkler et al (2022) aggregate language model probabilities using mathematical combinators like sum and product. • Jung et al (2022) recursively generate a tree of explanations for a statement, then determine the truth of the statement by treating the inference as a satisfiability problem over these explanations and their logical relations. • Various works, including Fu et al (2021), Dua et al (2022), and Guo et al (2022), explore decomposition of questions into subquestions, often under the name multi-hop question-answering.…”
Section: Task Decompositions
confidence: 99%
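To make the satisfiability framing in the excerpt above concrete, the sketch below treats each node of the explanation tree as a boolean variable, the model's per-node beliefs as weighted unit clauses, and the logical relations between nodes as weighted implications, then brute-forces the assignment with the highest satisfied weight to decide the root statement. This is a minimal illustration, not the authors' implementation; all node names, weights, and relations are invented, and the paper solves a weighted MAX-SAT instance over a full maieutic tree rather than enumerating assignments.

```python
from itertools import product

# Each explanation-tree node is a boolean variable; the model's belief in each
# node is a weighted unit clause; relations between nodes are weighted
# implications ("if the premise has this value, the conclusion has that value").
# The assignment maximizing satisfied weight decides the root statement Q.
# All names, weights, and relations below are invented for illustration.

nodes = ["Q", "E1", "E2"]               # Q = root statement, E1/E2 = explanations
belief = {"Q": (True, 0.6),             # node -> (believed value, confidence weight)
          "E1": (True, 0.9),
          "E2": (False, 0.8)}
relations = [("E1", True, "Q", True, 1.0),    # E1 true implies Q true
             ("E2", True, "Q", False, 1.0)]   # E2 true implies Q false

def score(assignment):
    total = 0.0
    for node, (value, weight) in belief.items():
        if assignment[node] == value:
            total += weight
    for prem, pval, concl, cval, weight in relations:
        if assignment[prem] != pval or assignment[concl] == cval:  # implication holds
            total += weight
    return total

best = max((dict(zip(nodes, values))
            for values in product([True, False], repeat=len(nodes))),
           key=score)
print("Inferred truth value of Q:", best["Q"])
```

With these toy weights, the strongly believed explanations (E1 true, E2 false) and their relations outweigh any conflicting assignment, so the root statement Q is inferred to be true.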
“…AI models are uniquely developed and trained on distinct collections of training data, which can influence the responses retrieved by a user. There are many different approaches to communicating with a text-based AI interface, such as breaking down complex questions or problems into constituent components, refining prompts based on the model's feedback, providing cues such as a general structure or keywords for the model to include in its response, and maieutic prompting, which requires the model to explain its responses to ensure logical consistency (Jung et al, 2022).…”
Section: Prompt Engineering
confidence: 99%
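As an illustration of the "explain your responses" pattern this excerpt describes, the following sketch recursively prompts a model for reasons why a statement could be true and why it could be false, collecting the answers into a small tree. The generate function is a placeholder for whatever completion API is in use (it is not a real library call), and the prompt wording and depth limit are assumptions for illustration only, not the paper's exact prompts.

```python
# `generate` is a placeholder for a text-completion call (not a real library
# API); swap in your own client. Prompt wording and the depth limit are
# illustrative assumptions.

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in an LLM client here")

def explain_tree(statement: str, depth: int = 2) -> dict:
    """Recursively ask the model why a statement could be true and why it
    could be false, collecting the explanations into a small tree that a
    downstream consistency check (e.g. the MAX-SAT sketch above) can consume."""
    if depth == 0:
        return {"statement": statement, "children": []}
    children = []
    for polarity in ("true", "false"):
        explanation = generate(
            f"Q: {statement}\nExplain why the answer could be {polarity}:\nA:"
        )
        children.append(explain_tree(explanation.strip(), depth - 1))
    return {"statement": statement, "children": children}
```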
“…GPT-3). Aside from methods that make reasoning explicit in a linear chain, another line of work produces recursive reasoning structures, through either backward chaining (Jung et al, 2022) or forward chaining (Bostrom et al, 2022). Our work contributes to this line of research, yet we depart from prior work by presenting the first approach that learns to generate relevant knowledge without requiring human-labeled gold knowledge.…”
Section: Related Work
confidence: 99%
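The backward versus forward chaining distinction drawn in this excerpt can be summarized with a toy rule base: backward chaining starts from the goal and recursively tries to prove its premises, while forward chaining starts from known facts and applies rules until nothing new can be derived. The facts and rules below are invented, and neither function reflects any cited paper's implementation.

```python
# Toy contrast of the two reasoning directions discussed above.
# Facts and rules are invented for illustration only.

RULES = {"Q": [["A", "B"]], "B": [["C"]]}  # head -> list of alternative premise sets
FACTS = {"A", "C"}

def backward(goal: str) -> bool:
    """Backward chaining: start from the goal and recursively prove its premises."""
    if goal in FACTS:
        return True
    return any(all(backward(p) for p in body) for body in RULES.get(goal, []))

def forward() -> set:
    """Forward chaining: start from known facts and apply rules until fixpoint."""
    known = set(FACTS)
    changed = True
    while changed:
        changed = False
        for head, bodies in RULES.items():
            if head not in known and any(all(p in known for p in body) for body in bodies):
                known.add(head)
                changed = True
    return known

print(backward("Q"))        # True: Q proved top-down via A and B <- C
print("Q" in forward())     # True: Q derived bottom-up from A and C
```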