2014
DOI: 10.1080/10508406.2014.883977
On the Benefits of Seeking (and Avoiding) Help in Online Problem-Solving Environments

Abstract: Seeking the right level of help at the right time can support learning. However, in the context of online problem-solving environments, it is still not entirely clear which help-seeking strategies are desired. We use fine-grained data from 38 high school students who worked with the Geometry Cognitive Tutor for 2 months to better understand the associations between specific help-seeking patterns and learning. We evaluate how students' help-seeking behaviors on each step in a tutored problem are associated with…

Cited by 93 publications (64 citation statements); references 41 publications.
“…Some technically attractive efforts such as free-form explanation feedback (Popescu et al 2003), collaborative peer tutoring (Walker et al 2009), hand-written equation entry (Anthony et al 2005), and teachable agents (Matsuda et al 2012), have shown promise, but have not always produced clear demonstrations of improving student learning outcomes above and beyond those achieved by the existing tutor. Some approaches to supporting student metacognition and motivation, such as help-seeking (Roll et al 2014), sense-making on errors (Mathan and Koedinger 2003), or reducing gaming the system (Baker et al 2006) have been demonstrated to improve outcomes, but have not been incorporated, perhaps in part because of the cost of technical development. In some cases where better outcomes in rigorous experimental studies have been clearly achieved, such as learning benefits for adding menu-based self-explanation (e.g., Aleven and Koedinger 2002), adding worked examples (e.g., Salden et al 2010) or adding more personalized problem scenarios (Walkington 2013), there has been some limited influence on the product.…”
Section: Q) What Was Particularly Challenging in Undertaking This Study? (mentioning; confidence: 99%)
“…These systems can evaluate student knowledge growth over time, aiming to determine the best sequence of problems or tutor moves that will maximize learning from the tutor (Chi, VanLehn, & Litman, 2010). Recently, this work has further begun to consider affect (e.g., boredom and engaged concentration), gaming the system, and off-task behaviours (Fancsali, 2014; Baker, Corbett, & Koedinger, 2004; Roll, Baker, Aleven, & Koedinger, 2014). The primary purpose of the measurements is to determine ways to help students learn more from the system.…”
Section: Choice-based Assessments (mentioning; confidence: 99%)
“…Second, methods need to be fashioned that identify structures within data. Correspondingly, methods need to be tested for the extent to which, and the conditions under which, they contribute to valid inferences about the constructs that the structures of data represent (Roll, Baker, Aleven, & Koedinger, 2014). Third, standing on these pillars of instrumentation (data collection) and methodologies that operate on data, learners and learning scientists alike are positioned to explore the effects of interventions.…”
Section: Introduction (mentioning; confidence: 99%)