A much-debated question in the L2 assessment field is whether computer familiarity should be considered a potential source of construct-irrelevant variance in computer-based writing (CBW) tests. This study aims to make a partial validity argument for an online source-based writing test (OSWT) designed for English placement testing (EPT), focusing on the explanation inference. Score interpretations on the OSWT are proposed, and supporting evidence is sought in terms of test-takers’ self-confidence in and preferences for CBW tests (two interrelated aspects of computer familiarity) and L2 writing ability. Ninety-seven ESL students studying at a US university, demonstrating two levels of L2 writing ability (higher and lower), completed the OSWT and an online questionnaire about their attitudes towards CBW tests. A series of statistical and thematic analyses revealed that, regardless of L2 writing ability, most test-takers expressed self-confidence in and preferences for CBW tests for reasons related to previous CBW experience (e.g., familiarity with CBW, useful tools/functions available on computers). The higher-level test-takers obtained significantly higher scores on the OSWT than their lower-level counterparts. Test-takers’ preferences were a significant predictor of OSWT scores only in the higher-level group. The findings largely support the validity of the proposed score interpretations on the OSWT. Implications are discussed in terms of test fairness and the construct of CBW tests.
This study describes the development process and examines the construct validity of an English placement test of oral communication (EPT OC) developed at a Midwestern university in the United States. The test includes a one‐on‐one oral interview and a paired discussion task, and test performance is judged on an analytic rating scale. A confirmatory factor analysis conducted on the ratings of 338 students who took the initial fully operational EPT OC revealed that the test structure was represented by a correlated four‐factor model with interactional competence, fluency, pronunciation/comprehensibility, and grammar/vocabulary as sub‐constructs, in line with its targeted theoretical framework. Both tasks were effective in measuring the targeted sub‐constructs, but the sub‐constructs were not sufficiently distinct from each other to fully justify a four‐factor model. The findings provide some support for the proposed interpretations of the EPT OC test scores but indicate the need for modifications to the assessment, such as more thorough rater training and/or revised rating scales to better distinguish the targeted sub‐constructs.