The stereotype that more able children solve tasks more quickly than their less capable peers exists both in and outside education. The F > C phenomenon and the distance–difficulty hypothesis offer alternative explanations for the time needed to complete a task: the former via response correctness, the latter via the relative difference between task difficulty and examinee ability. To test these alternative explanations, we extracted IRT-based ability estimates and task difficulties from a sample of 514 children (53% girls, Mage = 10.3 years) who answered 29 Piagetian balance beam tasks. We used answer correctness and task difficulty as predictors in multilevel regression models while controlling for children's ability levels. Our results challenge the 'faster equals smarter' stereotype. We show that ability levels predict the time needed to solve a task when the task is solved incorrectly, though only with moderately and highly difficult items. Moreover, children with higher ability levels take longer to answer items incorrectly, and tasks matched to children's ability levels take more time than very easy or very difficult tasks. We conclude that the relationship between ability, task difficulty, and answer correctness is complex, and we caution education professionals against basing their professional judgment on students' quickness.
We examined the effect of various types of feedback in a game-based fluid reasoning test, Triton and the Hungry Ocean, on elementary school students (ages 8–12; total N = 321). Four feedback types were compared: no feedback (A), simple feedback (correct/wrong; B), elaborated feedback (correct solution shown; C), and learner-controlled feedback (the student chooses between feedback types; D). We did not observe an effect of any feedback type on performance (i.e., there were no between-group differences). However, within group D, students overall tended to choose elaborated feedback more often as task difficulty increased (r = .92), and those in group D who generally tended to choose elaborated feedback also tended to perform better, even after controlling for intellect.
The factor structure, concurrent validity, and test–retest reliability of the Czech translation of the Gifted Rating Scales-School Form [GRS-S; Pfeiffer, S. I., & Jarosewich, T. (2003). GRS (Gifted Rating Scales) manual. Pearson] were evaluated. Ten alternative models were tested; four exhibited acceptable fit and interpretability. The factor structure was comparable for both parent (n = 277) and teacher raters (n = 137). High correlations between the factors suggest that raters might be subject to a halo effect. Ratings made by teachers showed a closer relationship with criteria (WJ IE II COG, CFT 20-R, and TIM3–5) than ratings made by parents. Test–retest reliability of teacher ratings (median interval of 93 days) was quite high for all GRS-S subscales (r = .84–.87).
The aim of this study was to assess the factor structure of the Czech adaptation of the Emotional Intelligence Scale by U.S. researchers Valler and Pfeiffer (2015) and its equivalence across different groups of raters. Altogether, 87 teachers, 251 mothers, and 117 fathers participated in the data collection, rating the socio-emotional competencies of 315 children from 51 schools. A two-factor model, consisting of prosocial behavior and of emotional conscientiousness and self-regulation, proved the most plausible, showing an acceptable fit for both mothers' and fathers' ratings; the fit was unsatisfactory for teachers, however. The results and their possible implications are discussed.