2018
DOI: 10.3389/fpsyg.2018.00317
TIMSS 2011 Student and Teacher Predictors for Mathematics Achievement Explored and Identified via Elastic Net

Abstract: A substantial body of research has been conducted on variables relating to students' mathematics achievement with TIMSS. However, most studies have employed conventional statistical methods and have focused on a select few indicators instead of utilizing the hundreds of variables TIMSS provides. This study aimed to find a prediction model for students' mathematics achievement using as many TIMSS student and teacher variables as possible. Elastic net, the machine learning technique selected for this study, takes ad…
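The abstract is truncated, so the paper's exact preprocessing and tuning are not reproduced here. Below is a minimal, hedged sketch of the general approach it describes (elastic net regression over many candidate predictors), assuming a scikit-learn workflow; the synthetic data, variable names, and tuning grid are illustrative assumptions, not the study's actual setup.

```python
# Illustrative sketch only: elastic net regression with cross-validated tuning.
# The predictor matrix X stands in for hundreds of TIMSS student/teacher variables,
# and y for mathematics achievement scores (both synthetic here).
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 200))                              # stand-in predictors
y = X[:, :5] @ rng.normal(size=5) + rng.normal(size=500)     # synthetic outcome

# ElasticNetCV tunes alpha (overall penalty strength) and l1_ratio (LASSO vs. ridge mix)
# by cross-validation; coefficients shrunk exactly to zero act as de-selected predictors.
model = make_pipeline(
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5, random_state=0),
)
model.fit(X, y)

enet = model.named_steps["elasticnetcv"]
selected = np.flatnonzero(enet.coef_)                        # indices of retained predictors
print(f"alpha={enet.alpha_:.3f}, l1_ratio={enet.l1_ratio_}, kept {selected.size} predictors")
```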

Cited by 26 publications (26 citation statements)
References 30 publications
“…Note that the LASSO may produce inconsistent coefficient estimates under certain scenarios, and it has shortcomings when predictors are very highly correlated or when the number of predictors is large relative to the number of observations (e.g., Fan and Li, 2001; Zou and Hastie, 2005; Zou, 2006). In psychometrics, Yoo (2018) uses the elastic net with logistic regression to show how to select variables from a large number of predictors in the analysis of educational large-scale assessment data. Therefore, although the proposed LASSO-based method succeeds under the simulation conditions of this paper (i.e., a comparatively small ability dimension and weak correlations between abilities), future studies could explore how popular shrinkage methods perform with the proposed pattern recognition procedure under other MCAT scenarios, such as large ability dimensions, very high correlations between abilities, or strong multicollinearity among abilities.…”
Section: Discussion
confidence: 99%
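The contrast this statement draws between the LASSO and the elastic net under correlated predictors can be shown with a small, hedged sketch. Nothing below is taken from the cited studies; the data are synthetic, and penalty strengths (C, l1_ratio) are arbitrary assumptions chosen only to make the selection behavior visible.

```python
# Sketch: with highly correlated predictors, a pure LASSO penalty (l1_ratio=1)
# tends to keep only one member of a correlated group, whereas the elastic net
# (0 < l1_ratio < 1) tends to retain the group together. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
z = rng.normal(size=(400, 1))
X_corr = z + 0.05 * rng.normal(size=(400, 3))     # three nearly identical predictors
X_noise = rng.normal(size=(400, 20))              # irrelevant predictors
X = np.hstack([X_corr, X_noise])
y = (z[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)

for l1_ratio, label in [(1.0, "LASSO"), (0.5, "elastic net")]:
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=l1_ratio, C=0.1, max_iter=5000)
    clf.fit(X, y)
    kept = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-8)
    print(f"{label}: nonzero coefficients at indices {kept.tolist()}")
```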
“…These include machine learning methods, which enable effective analysis of enormous amounts of data and complex data structures with almost no distributional assumptions. So far, machine learning approaches have only rarely been used in empirical educational research (e.g., Kotsiantis, 2012), but they represent a promising alternative for analyzing national and international large-scale studies such as PISA (Programme for International Student Assessment) or TIMSS (Trends in International Mathematics and Science Study; e.g., Depren et al., 2017; Yoo, 2018), for secondary analyses (e.g., Pargent and Albert-von der Gönna, 2018) or, as will be outlined in the following, for the investigation of transdisciplinary analyses.…”
Section: Machine Learning Methods
confidence: 99%
“…Compared with ridge regression, it can better exclude the influence of irrelevant information and of the artificial penalty parameter on the prediction model. However, the elastic net, which is more widely used, combines the advantages of ridge regression and LASSO regression [45,46]. As for classification algorithms, a number of big-data studies have shown that these three algorithms perform well in building classification prediction models [27,31,47,48]. For the regression models, we used participants' scores on the Chinese version of the Recovery Experience on Weekend Scale directly as labels; for the classification models, considering the sample size of this study and to reduce data loss, we took the mean score as the critical value and labeled participants as high or low on weekend recovery experience.…”
Section: Establishment of Prediction Models
confidence: 99%
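The labeling scheme described in the statement above (continuous scale scores as regression targets, a mean split for binary classification labels) can be sketched briefly. The scores, feature matrix, and penalty settings below are synthetic assumptions for illustration, not the cited study's data or models.

```python
# Sketch: continuous scores used directly as regression labels, and a mean split
# producing high/low classification labels, each fit with an elastic net penalty.
import numpy as np
from sklearn.linear_model import ElasticNet, LogisticRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 50))                                            # stand-in features
scores = X[:, :3] @ np.array([1.0, 0.5, -0.8]) + rng.normal(size=300)     # synthetic scale scores

# Regression: scores used directly as labels.
reg = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, scores)

# Classification: mean score as the critical value -> high (1) / low (0) labels.
labels = (scores >= scores.mean()).astype(int)
clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000).fit(X, labels)

print("regression R^2:", round(reg.score(X, scores), 3))
print("classification accuracy:", round(clf.score(X, labels), 3))
```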