Proceedings of the Second (2015) ACM Conference on Learning @ Scale 2015
DOI: 10.1145/2724660.2724667
Improving Student Modeling Through Partial Credit and Problem Difficulty

Abstract: Student modeling within intelligent tutoring systems is a task largely driven by binary models that predict student knowledge or next problem correctness (i.e., Knowledge Tracing (KT)). However, using a binary construct for student assessment often causes researchers to overlook the feedback innate to these platforms. The present study considers a novel method of tabling an algorithmically determined partial credit score and problem difficulty bin for each student's current problem to predict both binary and p…


Cited by 15 publications (13 citation statements)
References 7 publications
“…The gold standard for student modeling, Knowledge Tracing (KT), has maintained its reign for almost a quarter-century despite relying on a rudimentary sequence of correct and incorrect responses to estimate the probability of student knowledge [2]. Attempts to enrich this approach have included supplemental estimates of prior knowledge to individualize predictions to each student [9], supplemental estimates of item difficulty to individualize to each problem [10], and the implementation of flexible correctness via consideration of hint usage and attempt count [12,13,7]. Despite these excursions, popular learning systems, including the Cognitive Tutor series, still largely rely on traditional KT to inform mastery learning [4].…”
Section: Introduction
Mentioning confidence: 99%
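The citation statement above describes classic Knowledge Tracing: a binary sequence of correct/incorrect responses used to update an estimate of the probability that a student knows a skill. A minimal sketch of that Bayesian update (Corbett & Anderson-style BKT) follows; the parameter values are illustrative defaults, not values from the cited papers.

```python
def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One BKT step: update P(known) from a binary response, then
    apply the learning transition. All parameters are illustrative."""
    if correct:
        # Evidence: correct answer (knew it and didn't slip, or guessed)
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess
        )
    else:
        # Evidence: incorrect answer (slipped, or truly didn't know)
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess)
        )
    # Transition: the student may learn the skill between opportunities
    return posterior + (1 - posterior) * p_learn

# A run of mostly-correct answers drives the estimate upward
p = 0.3
for outcome in [True, True, False, True]:
    p = bkt_update(p, outcome)
```

This binary evidence step is exactly what the partial-credit approaches cited here set out to relax.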
“…It differs from Wang and Heffernan [20] in that it does not rely on the EM algorithm, and differs from Ostrow et al [14] in that it does not use predefined methods to determine the posterior distributions. For example, receiving a score of 66% for the question shown in Figure 2 means that 66% of the student's response is correct, while 34% of the student's response is incorrect.…”
Section: Adapting BKT to the Classroom
Mentioning confidence: 94%
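The quote above treats a 66% score as a response that is 66% correct and 34% incorrect. One simple way to fold such a continuous score into the BKT evidence step is to linearly interpolate between the "correct" and "incorrect" likelihoods; the sketch below illustrates that blend and is not the exact EM-based or tabled method of the cited papers.

```python
def bkt_partial_update(p_know, score, p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """BKT step with a partial-credit score in [0, 1] instead of a
    binary outcome. Parameters and the blending rule are illustrative."""
    # Likelihood of the observation under "knows" / "doesn't know",
    # weighted by how much of the response was correct
    lik_known = score * (1 - p_slip) + (1 - score) * p_slip
    lik_unknown = score * p_guess + (1 - score) * (1 - p_guess)
    posterior = (p_know * lik_known) / (
        p_know * lik_known + (1 - p_know) * lik_unknown
    )
    # Learning transition, as in standard BKT
    return posterior + (1 - posterior) * p_learn

# A 66%-correct response raises the estimate, but by less than a
# fully correct response would
p_after_partial = bkt_partial_update(0.3, 0.66)
p_after_full = bkt_partial_update(0.3, 1.0)
```

With `score` fixed at 0 or 1, this reduces to the binary BKT update, which is why interpolation is a natural first generalization.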
“…The first domain was the K9 dataset described in Section 3, while the second domain was the ASSISTment data set used by Ostrow et al [14]. Table 1 shows the distribution of the questions over four years of use of the K9 system, including the number of different questions, the number of students and the number of responses.…”
Section: Empirical Methodology
Mentioning confidence: 99%