Model Adequacy Checking for Applying Harmonic Regression to Assessment Quality Control (2021)
DOI: 10.1002/ets2.12327

Abstract: In recent years, harmonic regression models have been applied to implement quality control for educational assessment data consisting of multiple administrations and displaying seasonality. As with other types of regression models, it is imperative that model adequacy checking and model fit be appropriately conducted. However, there has been no literature on how to perform a comprehensive model adequacy evaluation when applying harmonic regression models to sequential data with seasonality in the educational a…

Cited by 4 publications (5 citation statements) · References 39 publications
“…Then, learning curves, residual plots, and predicted R² for each metric model were calculated to clarify whether the model using the proposed method is overfitting. The predicted R² serves as a measure to evaluate the predictive performance of the regression model [23]. Its importance lies in identifying potential overfitting problems within the model.…”
Section: Evaluation and Results (mentioning)
confidence: 99%
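The predicted R² referenced in these excerpts is typically computed from leave-one-out prediction errors. A minimal sketch follows, assuming an ordinary least-squares model and the standard 1 − PRESS/SS_tot definition; the function name `predicted_r2` and the toy data are illustrative, not taken from the cited papers.

```python
import numpy as np

def predicted_r2(X, y):
    """Predicted R^2 via explicit leave-one-out cross-validation.

    Each observation is held out in turn, the OLS model is refit on the
    remaining n-1 rows, and the held-out point is predicted; the squared
    prediction errors sum to PRESS, and predicted R^2 = 1 - PRESS / SS_tot.
    """
    X = np.column_stack([np.ones(len(y)), np.asarray(X)])  # add intercept column
    y = np.asarray(y, dtype=float)
    n = len(y)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i                        # drop observation i
        beta, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        press += (y[i] - X[i] @ beta) ** 2              # leave-one-out prediction error
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss_tot

# Toy illustration with noisy linear data
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2))
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.3, size=60)
print(round(predicted_r2(X, y), 3))
```

A predicted R² that falls well below the ordinary R² on the same data is the overfitting symptom these excerpts describe.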
“…, where ŷ(i) and ȳ are, respectively, the inferred value obtained when data point i is excluded and the mean measured value of the router metrics y [23]. In general, overfitting happens when the model is too complex relative to the amount and noisiness of the training datasets.…”
Section: Evaluation and Results (mentioning)
confidence: 99%
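The equation this excerpt refers to was cut off in the citation snippet; the standard predicted R² it appears to describe, written with the same ŷ(i) and ȳ notation, is reconstructed below as an assumption, since the original formula is not shown.

```latex
R^{2}_{\mathrm{pred}}
  = 1 - \frac{\mathrm{PRESS}}{\mathrm{SS}_{\mathrm{tot}}}
  = 1 - \frac{\sum_{i=1}^{n} \bigl( y_i - \hat{y}_{(i)} \bigr)^{2}}
             {\sum_{i=1}^{n} \bigl( y_i - \bar{y} \bigr)^{2}}
```

Here ŷ(i) is the prediction for observation i from a model fit with that observation left out, and ȳ is the mean of the observed values.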
“…The differentiation of the statistic at that time was its ability to measure fit based on samples that were not used to form a model [62,63]. The statistic is a cross-validation attempt by a leave-one-out method that subtracts ŷ(i) and leaves the i-th observation out, reducing the sample size to n − 1 [64]. Repeating the subtraction and omission of every data point will lead to the sum of squares of discrepancies [65,66].…”
Section: Predicted Residual Error Sum of Squares (PRESS) (mentioning)
confidence: 99%
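For linear least squares, the leave-one-out refitting described in this excerpt does not have to be carried out explicitly: the deleted residual equals e_i / (1 − h_ii), where h_ii is the i-th leverage, so PRESS follows from a single full-data fit. A small numpy sketch under that assumption is given below; the helper name `press_statistic` is illustrative.

```python
import numpy as np

def press_statistic(X, y):
    """PRESS for a linear least-squares fit via the leverage shortcut.

    For OLS the deleted (leave-one-out) residual is e_i / (1 - h_ii),
    where h_ii is the i-th diagonal entry of the hat matrix
    H = X (X'X)^{-1} X', so PRESS needs only one full-data fit.
    """
    X = np.column_stack([np.ones(len(y)), np.asarray(X)])    # add intercept column
    y = np.asarray(y, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    h_diag = np.einsum("ij,ji->i", X, np.linalg.pinv(X))     # diagonal of the hat matrix
    deleted_residuals = residuals / (1.0 - h_diag)
    return float(np.sum(deleted_residuals ** 2))
```

Plugging this PRESS into 1 − PRESS/SS_tot reproduces the predicted R² computed by the explicit leave-one-out loop sketched earlier.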
“…The differentiation of the statistic at that time is its ability to measure fit based on samples that are not used to form a model [55,56]. The statistic is a cross-validation attempt by a leave-one-out method that subtracts ŷ(i) and leaves the i-th observation out, reducing the sample size to n − 1 [57]. Repeating the subtraction and omission for every single data point will lead to the sum of squares of discrepancies [58,59].…”
Section: Predicted Residual Error Sum of Squares (PRESS) (mentioning)
confidence: 99%