Pegegog, 2022
DOI: 10.47750/pegegog.12.02.21
Comparison of Classical Test Theory vs. Multi-Facet Rasch Theory in writing assessment

Abstract: Testing English writing skills can be multi-dimensional; thus, the study aimed to compare students' writing scores calculated according to Classical Test Theory (CTT) and the Multi-Facet Rasch Model (MFRM). The research was carried out in 2019 with 100 university students studying in a foreign language preparatory class and four experienced instructors who participated in the study as raters. Data were collected using a writing rubric consisting of four components (content, organization, grammar …
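The CTT side of the comparison can be illustrated with a minimal sketch: under Classical Test Theory, the observed writing score is typically a rater's rubric total, averaged across raters. The function name and the data below are hypothetical, not taken from the study:

```python
# Hypothetical ratings: one list per rater, one value per rubric component
# (content, organization, grammar, ...).
ratings = {
    "rater1": [4, 3, 4, 5],
    "rater2": [3, 3, 4, 4],
}

def ctt_score(ratings):
    """CTT observed score: each rater's rubric total, averaged across raters."""
    totals = [sum(components) for components in ratings.values()]
    return sum(totals) / len(totals)
```

Unlike MFRM, this composite makes no adjustment for rater severity or component difficulty; that contrast is the core of the comparison the paper reports.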

Cited by 6 publications (7 citation statements)
References 22 publications
“…Furthermore, the mean measure of the z-standardized measures (ZSTD outfit) for both instruments were 0.10 for students and 0.30 for items, which align with the recommended values of −2 to +2, as stated by Andrich [60]. The Chi-square values and degrees of freedom (χ 2 /df < 3) for the instrument indicated that the analyzed data followed a normal distribution in the Rasch model [61]. Table 2 demonstrates that both item and person reliabilities were found to be satisfactory.…”
Section: Reliability and Validity of the Instrument (supporting)
confidence: 79%
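The fit criteria quoted above — z-standardized outfit within −2 to +2 and χ²/df below 3 — amount to a simple numeric check. A minimal sketch; the function name and argument layout are illustrative, not from the cited paper:

```python
def fits_rasch_criteria(chi_square, df, zstd_outfit):
    """Check the two quoted Rasch fit criteria:
    chi-square / degrees of freedom below 3, and
    z-standardized outfit within the recommended -2..+2 band."""
    return (chi_square / df) < 3 and -2 <= zstd_outfit <= 2
```

For example, the reported outfit values of 0.10 (students) and 0.30 (items) both fall comfortably inside the −2..+2 band.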
“…However, CTT does not account for item difficulty or variability in individual differences in ability levels (Ayanwale et al 2022) and MI testing (Siregar and Panjaitan 2022). IRT is a modern approach to psychometric measurement that models the relationship between a person's ability level and their responses to test items (Polat et al 2022). IRT assumes that items have varying degrees of difficulty and discrimination, allowing the estimation of individuals' abilities based on their responses (Liu et al 2022).…”
Section: Theoretical Perspectives to Assessments (mentioning)
confidence: 99%
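The IRT relationship described above — response probability modeled as a function of person ability and item difficulty — can be sketched with the dichotomous Rasch model. The helper below is a hypothetical illustration, not code from any cited study:

```python
import math

def rasch_probability(theta, b):
    """Probability of a correct response under the dichotomous Rasch model:
    theta is person ability, b is item difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the success probability is exactly 0.5;
# a more able person has a higher chance of success on the same item.
p_low = rasch_probability(theta=-1.0, b=0.0)
p_high = rasch_probability(theta=1.0, b=0.0)
```

This is the one-parameter base case; the Multi-Facet Rasch Model compared in the paper adds further facets (e.g. rater severity) to the same logit expression.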
“…It is crucial to acknowledge that these instruments need to undergo psychometric evaluation to ensure their suitability for different participants and varying timeframes (Liu et al 2020). Additionally, emphasis should be placed on incorporating assessment theories during the development of psychological scales (Polat et al 2022).…”
mentioning
confidence: 99%