Until children can produce letters quickly and accurately, handwriting is assumed to disrupt and limit the quality of their text. This investigation is the largest study to date (2,596 girls, 2,354 boys) assessing the association between handwriting fluency and writing quality. We tested whether handwriting fluency made a statistically unique contribution to predicting primary grade students’ writing quality on a functional writing task, after variance due to attitude towards writing, students’ language background (L1, L2, bilingual), gender, grade, and nesting due to class and school was first controlled. Handwriting fluency accounted for a statistically significant 7.4% of the variance in the writing quality of primary grade students. In addition, attitude towards writing, language background, grade, and gender each uniquely predicted writing quality. Finally, handwriting fluency increased from one grade to the next, girls had faster handwriting than boys, and gender differences increased across grades. An identical pattern of results was observed for writing quality. Directions for future research and writing practices are discussed.
In applications of cognitive diagnostic models (CDMs), practitioners usually face the difficulty of choosing appropriate CDMs and building accurate Q-matrices. However, the behavior of the model-fit indices that are supposed to inform model and Q-matrix choices is not well understood. This study examines the performance of several promising model-fit indices in selecting the model and Q-matrix under different sample size conditions. The relative performance of the Akaike information criterion and the Bayesian information criterion in model and Q-matrix selection appears to depend on the complexity of the data-generating models, the Q-matrices, and the sample sizes. Among the absolute fit indices, MX2 is least sensitive to sample size under correct model and Q-matrix specifications and has the greatest power. Sample size is found to be the factor with the greatest influence on model-fit index values. Consequences of selecting an inaccurate model and Q-matrix for the classification accuracy of attribute mastery are also evaluated.
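The sample-size dependence the abstract describes follows directly from how the two criteria are defined: AIC penalizes each parameter by a constant 2, while BIC's penalty of ln(n) per parameter grows with sample size. A minimal sketch (the log-likelihoods, parameter counts, and sample size below are hypothetical, chosen only to illustrate the formulas):

```python
import math

def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike information criterion: 2k - 2*ln(L)."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion: k*ln(n) - 2*ln(L)."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

# Hypothetical fits: a constrained CDM with few parameters vs. a more
# complex (e.g., saturated) model that fits slightly better.
simple = {"ll": -5120.0, "k": 30}
complex_ = {"ll": -5100.0, "k": 90}
n = 500  # a small-sample condition

for name, fit in [("simple", simple), ("complex", complex_)]:
    print(f"{name}: AIC={aic(fit['ll'], fit['k']):.1f}, "
          f"BIC={bic(fit['ll'], fit['k'], n):.1f}")
```

Because BIC's per-parameter penalty exceeds AIC's whenever n > e^2 (about 8 observations), BIC increasingly favors the simpler model as samples grow, which is one mechanism behind the sample-size-dependent selection behavior the study reports.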
Although the promise of universal social-emotional learning (SEL) programs enhancing student academic outcomes has captured public attention, there has been limited research regarding such programs’ impact on students’ state test scores. We used multilevel modeling of follow-up data from a multiyear, multisite cluster-randomized efficacy trial to investigate the impact of a brief universal SEL program on students’ subsequent state test performance. Although somewhat smaller in magnitude than those reported in previous SEL meta-analyses (e.g., Durlak et al., 2011), observed effect sizes generally were positive and consistent with other studies employing similar designs (i.e., randomized trial, state test outcome, baseline academic covariate). These findings may assuage concerns about the program negatively impacting state test scores due to lost instructional time; however, they also temper expectations about large academic gains resulting from its implementation.
This study compares the parametric multiple-choice model and the nonparametric kernel smoothing approach to estimating option characteristic functions (OCCs) using an empirical criterion: the stability of curve estimates across occasions, which reflects random error. The potential utility of graphical OCCs in item analysis was illustrated with selected items. The effect of increasing the smoothing parameter on the nonparametric model and the effect of small samples on both approaches were investigated. Differences between estimated curve values were evaluated for between-model within-occasion, within-model between-occasion, and between-model between-occasion comparisons. The between-model differences were minor in relation to the within-model stabilities, and the incremental difference attributable to model was smaller than that attributable to occasion. Either model leads to the same choices in item analysis.
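The nonparametric approach the abstract refers to estimates, at each ability level, a weighted proportion of examinees choosing an option, with a kernel bandwidth playing the role of the smoothing parameter. A minimal Nadaraya-Watson-style sketch (the function name, synthetic data, and bandwidth value are illustrative, not from the study):

```python
import math

def gaussian_kernel(u: float) -> float:
    """Unnormalized Gaussian kernel weight."""
    return math.exp(-0.5 * u * u)

def kernel_smooth_occ(thetas, chose_option, grid, h):
    """Estimate P(choose option | theta) at each grid point.

    thetas       : examinee ability estimates
    chose_option : 0/1 indicators for choosing the option
    grid         : ability values at which to evaluate the curve
    h            : bandwidth; larger h yields a smoother, flatter curve
    """
    curve = []
    for t0 in grid:
        weights = [gaussian_kernel((t - t0) / h) for t in thetas]
        num = sum(w * y for w, y in zip(weights, chose_option))
        den = sum(weights)
        curve.append(num / den if den > 0 else float("nan"))
    return curve

# Synthetic illustration: higher-ability examinees choose the keyed option.
thetas = [-2.0, -1.0, 0.0, 1.0, 2.0]
chose = [0, 0, 0, 1, 1]
print(kernel_smooth_occ(thetas, chose, [-2.0, 0.0, 2.0], h=1.0))
```

Increasing `h` averages over a wider ability window, trading variance (occasion-to-occasion instability) for bias, which is the effect of the smoothing parameter the study examined.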