Taking cultural knowledge tests as a case study, this research conducts a series of empirical investigations into the moderating effect of difficulty-based item ordering on the relationship between test anxiety and test performance. Groups classified by level of test anxiety take tests with two types of item ordering: items ordered by item difficulty calibrated from the item bank, and items ordered by each individual examinee's perceived difficulty. Group mean scores are then compared to determine whether the differences are significant. Two findings emerge: the higher a test taker's level of test anxiety, the more significant the moderating effect, and vice versa; and ordering items by an examinee's perceived difficulty may exert a more significant moderating effect than ordering them by item-bank-calibrated difficulty.
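The abstract does not report the exact statistical procedure beyond group mean comparisons; purely as an illustration, one common way to test such a moderating effect is an OLS regression with an anxiety-by-condition interaction term. The sketch below assumes hypothetical column names (score, anxiety, condition) and a hypothetical data file, not the study's actual data:

```python
# Illustrative sketch (not the study's reported method): test whether
# item-ordering condition moderates the anxiety-performance relationship
# via an interaction term in an OLS regression.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per examinee, with columns
# score (test result), anxiety (test-anxiety measure), and
# condition ("calibrated" vs. "perceived" item ordering).
df = pd.read_csv("test_results.csv")

model = smf.ols("score ~ anxiety * C(condition)", data=df).fit()
print(model.summary())

# A significant anxiety:condition coefficient would indicate that the
# effect of test anxiety on performance differs across item-ordering
# conditions, i.e. a moderating effect of item order.
```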
The validity of a computer-based language test may be affected by three factors: computer familiarity, audio-visual cognitive competence, and other discrepancies in construct. Validating the equivalence between the paper-and-pencil and computer-based versions of a language test is therefore a key step in designing the latter. Taking a test on Essentials of English-Speaking Countries as a case study, this paper elucidates a three-step model for validating the equivalence of the two test types: investigating computer familiarity, assessing the impact of audio-visual cognitive competence, and examining other discrepancies in construct. The proposed model offers methodological insights toward establishing a validation model of equivalence between paper-and-pencil and computer-based language tests.
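The abstract does not specify the statistics behind each validation step; as a hedged illustration only, a first-pass score-equivalence check between the two test modes is often run as a paired comparison plus a cross-mode correlation. File and column names below are hypothetical:

```python
# Illustrative sketch (assumed procedure, not taken from the paper):
# compare paper-and-pencil (PPT) and computer-based (CBT) scores for
# the same examinees.
import pandas as pd
from scipy import stats

# Hypothetical file with columns: examinee_id, ppt_score, cbt_score
df = pd.read_csv("scores.csv")

t, p = stats.ttest_rel(df["ppt_score"], df["cbt_score"])   # mean difference
r, p_r = stats.pearsonr(df["ppt_score"], df["cbt_score"])  # linear association

print(f"Paired t-test: t = {t:.2f}, p = {p:.3f}")
print(f"Pearson r between modes: r = {r:.2f}, p = {p_r:.3f}")
# Comparable means plus a high cross-mode correlation are necessary
# (though not sufficient) evidence of score equivalence.
```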
While translation competence assessment plays an increasingly facilitating role in translation teaching and learning, it has yet to offer fine-grained diagnostic feedback grounded in reliable translation competence standards. This study therefore investigates the feasibility of providing diagnostic information about students' translation competence by integrating China's Standards of English (CSE) with cognitive diagnostic assessment (CDA) approaches. Within the descriptive parameter framework of the CSE translation scales, an attribute pool was established, from which seven attributes were identified on the basis of students' and experts' think-aloud protocols. A checklist comprising 20 descriptors was developed from the CSE translation scales, with which five experts rated 458 students' translation responses; a Q-matrix was then established by seven experts. By comparing the diagnostic performance of four widely used cognitive diagnostic models (CDMs), the linear logistic model (LLM) was selected as the optimal model for generating fine-grained information about students' translation strengths and weaknesses. Relationships among translation competence attributes were also identified, and diagnostic results were shown to differ between high- and low-proficiency groups. The findings offer insights for translation teaching, learning, and assessment.
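Neither the study's Q-matrix nor its model-selection criterion is reproduced in this abstract. The sketch below only illustrates, with invented numbers, the general shape of a Q-matrix (descriptors by attributes) and how competing CDMs are often compared on information criteria such as AIC and BIC:

```python
# Illustrative sketch: a partial Q-matrix and CDM comparison by
# information criteria. All fit values are invented; the study's actual
# Q-matrix, fit statistics, and selection criterion are not given here.
import numpy as np

# Q-matrix: rows = checklist descriptors (20 in the study), columns = the
# seven attributes; Q[i, k] = 1 if descriptor i measures attribute k.
# Only the first three rows are sketched, with made-up entries.
Q = np.array([
    [1, 0, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 1, 0],
    [0, 0, 1, 0, 1, 0, 0],
])

def aic(log_lik: float, n_params: int) -> float:
    return 2 * n_params - 2 * log_lik

def bic(log_lik: float, n_params: int, n_obs: int) -> float:
    return n_params * np.log(n_obs) - 2 * log_lik

# Invented fit results for four candidate CDMs on n = 458 examinees.
fits = {"DINA": (-5120.4, 60), "DINO": (-5105.9, 60),
        "LLM": (-5042.7, 95), "G-DINA": (-5030.1, 140)}
for name, (ll, k) in fits.items():
    print(f"{name:7s} AIC={aic(ll, k):8.1f} BIC={bic(ll, k, 458):8.1f}")
# The model with the smallest AIC/BIC (balancing fit against complexity)
# would typically be retained; the study reports the LLM as optimal.
```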