Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
DOI: 10.18653/v1/2022.emnlp-main.277
Automatic Generation of Socratic Subquestions for Teaching Math Word Problems

Abstract: Socratic questioning is an educational method that allows students to discover answers to complex problems by asking them a series of thoughtful questions. Generation of didactically sound questions is challenging, requiring understanding of the reasoning process involved in the problem. We hypothesize that such a questioning strategy can not only enhance human performance, but also assist math word problem (MWP) solvers. In this work, we explore the ability of large language models (LMs) in generating s…

Cited by 10 publications (6 citation statements) · References 41 publications
“…Decomposing Multi-Step Reasoning Tasks: Solving multi-step reasoning tasks like MWPs has been a popular area of research for the last couple of years (Hosseini et al., 2014; Roy et al., 2015; Amini et al., 2019; Zhang et al., 2020; Shridhar et al., 2022; Opedal et al., 2023). However, the majority of the modern approaches for these problems are shifting towards using large language models, often relying on approaches involving prompting or in-context learning (Cobbe et al., 2021; Kojima et al., 2022; Wei et al., 2022b; Chowdhery et al., 2022; Lewkowycz et al., 2022; Srivastava et al., 2022).…”
Section: Related Work
confidence: 99%
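The prompting and in-context learning strategies cited in this statement typically show an LM a few worked exemplars that break a problem into subquestions before answering. Below is a minimal sketch of such a prompt builder; the exemplar text, function name, and prompt format are illustrative assumptions, not the exact setup of any cited paper.

```python
# Minimal sketch of prompting-based decomposition of a math word problem
# (MWP) into Socratic subquestions. The exemplar and prompt layout are
# illustrative assumptions, not a cited paper's exact configuration.

FEW_SHOT_EXEMPLAR = """\
Problem: Sam has 3 apples. He buys 2 more bags with 4 apples each.
How many apples does Sam have now?
Q1: How many apples are in the 2 bags? (2 * 4 = 8)
Q2: How many apples does Sam have in total? (3 + 8 = 11)
Answer: 11
"""

def build_decomposition_prompt(problem: str) -> str:
    """Assemble a few-shot prompt asking an LM to emit subquestions
    (each paired with its intermediate equation) before the answer."""
    return (
        "Decompose each problem into Socratic subquestions, "
        "then give the final answer.\n\n"
        f"{FEW_SHOT_EXEMPLAR}\n"
        f"Problem: {problem}\n"
        "Q1:"
    )

prompt = build_decomposition_prompt(
    "A baker makes 5 trays of 12 muffins and sells 20. How many are left?"
)
print(prompt)  # In practice this prompt would be sent to an LLM.
```

Pairing each subquestion with its intermediate equation in the exemplar nudges the model toward emitting checkable reasoning steps rather than a bare final answer.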
“…We found that the fine-tuned GPT-2 predicted an incorrect number of subquestions for the majority of problems (see Table 4, first row). Thus, following previous work on subquestion generation (Shridhar et al., 2022), we introduced a guidance mechanism that conditions the generation of subquestions for a problem P on the equations describing the intermediate solutions of P. This strategy improved the quality of the generated questions for all three metrics considered (Table 4, second row).…”
Section: Ablation Studies
confidence: 99%
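As a rough illustration of the guidance mechanism this statement describes, the sketch below conditions a GPT-2 generator on the problem text plus its intermediate solution equations, so one subquestion can be produced per equation. The separator strings, the untuned gpt2 checkpoint, and the I/O format are assumptions for illustration; in practice a fine-tuned checkpoint and the cited work's own conditioning format would be used.

```python
# Sketch of equation-guided subquestion generation: the generator input
# includes the intermediate solution equations of problem P, signaling
# how many subquestions to produce. Separator tokens ([EQ], [QUESTIONS])
# are hypothetical; the base "gpt2" checkpoint is a stand-in for a
# fine-tuned model, so its raw output here is only a placeholder.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

problem = ("Sam has 3 apples. He buys 2 bags with 4 apples each. "
           "How many apples does he have now?")
equations = ["2 * 4 = 8", "3 + 8 = 11"]  # intermediate solutions of P

# One equation per expected subquestion: this is the guidance signal.
guided_input = f"{problem} [EQ] {' ; '.join(equations)} [QUESTIONS]"

ids = tokenizer(guided_input, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=60,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0][ids.shape[1]:], skip_special_tokens=True))
```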
“…With the emergence of the SQuAD dataset (Rajpurkar et al., 2016), context-dependent QG gained momentum (Du et al., 2017; Yuan et al., 2017; Subramanian et al., 2018; Puri et al., 2020). This extended to complex tasks like generating unanswerable questions (Choi et al., 2018; Zhu et al., 2019; Reddy et al., 2019) and multi-hop reasoning (Pan et al., 2020, 2021; Shridhar et al., 2022). Our work, focusing on generating code tracing questions in the CS education domain, addresses unique challenges around code, natural language, and pedagogical comprehension that are inadequately covered by previous methods due to a lack of specialized datasets.…”
Section: Related Work
confidence: 99%
“…More recently, Patel et al. (2022) propose an alternative approach to enhance the performance of LLMs by decomposing challenging questions into simpler sub-questions across various tasks. Notably, the efficacy of question decomposition has been demonstrated across a range of tasks and domains, including solving mathematical problems (Shridhar et al., 2022), medical question answering (Roberts et al., 2014), and factual correction (Huang et al., 2023).…”
Section: Related Work
confidence: 99%