The present investigation comprises two studies. In Study 1, participants gave numerical information about demographic attributes (real-scores) and subsequently rated themselves on these attributes on a five-point Likert-type scale (5LTS). Items used different phrasings, inducing (1) a general, (2) a personal, and (3) an outsiders’ perspective. Regressing these ratings on the real-scores showed that information on the centers and intervals of the real-scores was not readily reflected by the response scales, and the different phrasings led to different representations of the real-score intervals and centers. The outsiders’ perspective resulted in the most adequate representation of the real-score intervals. Study 2 used neutral item wording with a 5LTS and a four-point Likert-type scale (4LTS) to investigate the possible confound of positive wording; this increased the adequacy of the representations only slightly. Together, the findings indicate that, even on average, the investigated rating scales and items reflect the actual attributes only to a limited extent and that the self-ratings depend on the item phrasing instead of simply representing a coarse measure of the real-scores. All data and analysis scripts are available at https://osf.io/4pcdb/.
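The regression step can be illustrated with a minimal R sketch (not the OSF analysis script; the data frame, column names, and values are invented for the example): the slope and predicted values show how the five response categories map onto the center and interval of a real-scored attribute such as age.

```r
# Minimal sketch, not the authors' script: regress five-point self-ratings on a
# real-scored attribute and read center/interval information off the fit.
# 'real_score' and 'rating' are hypothetical column names; data are simulated.
set.seed(1)
dat <- data.frame(real_score = rnorm(200, mean = 35, sd = 10))
dat$rating <- pmin(pmax(round(3 + 0.08 * (dat$real_score - 35) + rnorm(200, 0, 0.8)), 1), 5)

fit <- lm(rating ~ real_score, data = dat)
summary(fit)  # slope: how many rating units one real-score unit is worth
predict(fit, newdata = data.frame(real_score = c(25, 35, 45)))  # implied scale positions
```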
Classical statistical methods are limited in the analysis of high-dimensional datasets. Machine learning (ML) provides a powerful framework for prediction by exploiting the complex relationships often encountered in modern data with large numbers of variables and cases and potentially non-linear effects. ML has turned into one of the most influential analytical approaches of this millennium and has recently become popular in the behavioral and social sciences. The impact of ML methods on research and practical applications in the educational sciences is still limited, but it grows continuously as larger and more complex datasets become available through massive open online courses (MOOCs) and large-scale investigations. The educational sciences are at a crucial pivot point because of the anticipated impact ML methods hold for the field. Here, we review the opportunities and challenges of ML for the educational sciences, show how looking at related disciplines can help the field learn from their experiences, and argue for a philosophical shift in model evaluation. We demonstrate how the overall quality of data analysis in educational research can benefit from these methods and show how ML can play a decisive role in the validation of empirical models. In this review, we (1) provide an overview of the types of data suitable for ML, (2) give practical advice for the application of ML methods, and (3) show how ML-based tools and applications can be used to enhance the quality of education. Additionally, we provide practical R code with exemplary analyses, available at https://osf.io/ntre9/?view_only=d29ae7cf59d34e8293f4c6bbde3e4ab2.
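The prediction-focused workflow described here can be sketched in a few lines of R (a simplified illustration on simulated data, not the exemplary analyses from the OSF repository): fit a flexible learner on a training split and judge it on held-out cases.

```r
# Minimal sketch, assuming the 'randomForest' package; data are simulated and
# all variable names are illustrative, not taken from the paper's OSF material.
library(randomForest)

set.seed(42)
n <- 500
X <- as.data.frame(matrix(rnorm(n * 20), n, 20))   # many predictors
y <- X$V1^2 - 0.5 * X$V2 + rnorm(n, 0, 0.5)        # outcome with a non-linear effect
train <- sample(n, 350)

rf <- randomForest(x = X[train, ], y = y[train], ntree = 500)
pred <- predict(rf, X[-train, ])
cor(pred, y[-train])^2   # out-of-sample R^2 as the model-evaluation criterion
```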
In this study, we compared different models of reading comprehension on a large database of more than 6,500 students. We examined the Simple View of Reading, the Progress in International Reading Literacy Study (PIRLS) four-process model, and the influence of text difficulty by applying cross-validated psychometric modeling in the frameworks of classical test theory and item response theory to the newly developed reading comprehension test BYLET. Results demonstrate the best fit for a four-process model and a negligible influence of text difficulty as measured by word and sentence length. The psychometric models were robust toward new samples, and the test showed good reliability and validity. We conclude that theories of reading comprehension processes also apply to the measurement of reading comprehension as a trainable skill. The study is preregistered. Analysis code is available at https://osf.io/ywrks/?view_only=ce6ea36a465e4e959a17be90234ea0c7. Materials can be sent to interested researchers on request.
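The cross-validated model-comparison logic can be illustrated with a short R sketch using lavaan and its built-in Holzinger-Swineford data as a stand-in (the BYLET items and the preregistered analysis are not reproduced here): competing measurement models are fitted on one sample and the preferred model is re-checked on a new one.

```r
# Minimal sketch with lavaan's example data, not the BYLET analysis: compare a
# one-factor model against a multi-factor model and validate it on a fresh sample.
library(lavaan)

dat   <- HolzingerSwineford1939
train <- dat[dat$school == "Pasteur", ]
test  <- dat[dat$school == "Grant-White", ]

m1 <- 'g =~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + x9'   # single overall factor
m3 <- 'visual  =~ x1 + x2 + x3
       textual =~ x4 + x5 + x6
       speed   =~ x7 + x8 + x9'                            # several component processes

fit1 <- cfa(m1, data = train)
fit3 <- cfa(m3, data = train)
anova(fit1, fit3)                                          # likelihood-ratio comparison
fitMeasures(cfa(m3, data = test), c("cfi", "rmsea"))       # robustness toward a new sample
```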
The literature recommends a large number of scientifically evaluated reading training programs that have been shown to be effective at the process level. However, field studies indicate that teachers rarely adopt them and instead use reading animation methods and self-invented methods. Yet, scientific evidence for these teacher-constructed methods is still missing. We therefore asked 87 teachers about their reading lessons and assessed the reading ability development of their 1,469 students with a standardized reading test. The results show that teachers hardly use any evidence-based methods and mainly rely on reading animation. Further, the methods reported by the teachers did not show a measurable impact on the development of the students’ reading competence. On the contrary, teacher-constructed methods seemingly led to growing heterogeneity of competence.
This study presents a novel method for investigating test fairness that combines psychometrics and machine learning. Test unfairness manifests itself in systematic and demographically imbalanced influences of confounding constructs on residual variances in psychometric modeling. Our method disentangles the underlying complex relationships between response patterns and demographic attributes. Specifically, it measures the importance of individual test items and latent ability scores in predicting demographic characteristics as indicators of imbalanced influences. We conducted a simulation study to examine the functionality of our method under various conditions and found that it reliably detects unfair items. To apply the method, we used random forests to predict migration backgrounds from ability scores and single items of an elementary school reading comprehension test. A single item was identified as unfair. Subsequent content analysis yielded reasonable post-hoc explanations for this finding, which is discussed in terms of consequential validity. Analysis code is available at https://osf.io/p5sz9/?view_only=14d87c2b9a1f45c58c91f1f28df9f650.
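The core idea can be sketched in R (a simulation on assumed data, not the script at the OSF link): a random forest predicts the demographic attribute from the ability score and single item responses, and an item with high variable importance beyond ability flags a potentially unfair item.

```r
# Minimal sketch, not the authors' analysis: simulated binary items, one of
# which depends on group membership beyond ability; random-forest variable
# importance flags this imbalance.
library(randomForest)

set.seed(7)
n <- 1000
grp     <- rbinom(n, 1, 0.3)                    # e.g., migration background (simulated)
ability <- rnorm(n)
items   <- sapply(1:10, function(j) rbinom(n, 1, plogis(ability)))
items[, 4] <- rbinom(n, 1, plogis(ability - 0.8 * grp))   # the one "unfair" item
colnames(items) <- paste0("item", 1:10)
dat <- data.frame(group = factor(grp), ability, items)

rf <- randomForest(group ~ ., data = dat, ntree = 500, importance = TRUE)
importance(rf)   # a single item with outsized importance indicates imbalance
```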