English teachers' assessment literacy has long been considered an important factor in their performance. However, no instrument has yet been developed to assess this construct among Iranian EFL teachers. To fill this gap, in the first phase of the present study a theoretical framework for the four main components of teacher assessment literacy, namely validity, reliability, interpretability of the results, and efficiency, was developed through an extensive review of the related literature and interviews with PhD candidates in TEFL. In the second phase, a questionnaire was developed and piloted with 150 participants recruited through convenience sampling. More specifically, the 30 items of the newly developed “ELTs’ Assessment Literacy” questionnaire were subjected to factor analysis, which confirmed the presence of all four components, each comprising a different number of items. These phases resulted in a questionnaire with four components and 25 items on a five-point Likert scale: (1) “Validity,” with six items; (2) “Reliability,” with ten items; (3) “Interpretability of the Results,” with eight items; and (4) “Efficiency,” with five items. The findings of this study may shed light on this subject and help researchers and teaching practitioners assess EFL teachers’ assessment literacy and make principled decisions as far as assessment is concerned.
The idea that sources other than test-takers’ knowledge can lead to different results on high-stakes tests motivated the present investigation into the probable sources of a test’s unreliability. For this purpose, the researchers conducted a thorough literature review aimed at identifying issues that could count as sources of unreliability of a high-stakes test, i.e., the MA University Entrance Exam of English (UEEE) in Iran. First, 17 MA UEEE test-takers took part in semi-structured interviews to elicit their views on such sources. The thematic coding of the information from the literature and interviews yielded a 57-item Likert-scale questionnaire, which was reviewed by three assessment experts, revised accordingly, piloted with 57 MA UEEE test-takers, and revised again, with 55 items remaining. The revised questionnaire was administered to 312 MA UEEE test-takers in Iran, and its reliability and construct validity were checked through Cronbach’s alpha (.89) and exploratory factor analysis, respectively. After these checks, 46 items remained, loading on four factors, which were named the effect of test-takers (16 items), structure of the test and external concerns (13 items), administration conditions of the test (13 items), and role of proctors (4 items). The results of this study may familiarize test developers, test administrators, teachers, and test-takers with issues they should be aware of when developing or preparing for a high-stakes test like the MA UEEE.