Teacher efficacy is a significant topic in mainstream education, and various instruments have been developed to measure this construct. The available instruments, however, are general in both subject matter and context. To compensate for this generality, the present study aims to develop a new teacher efficacy instrument whose items are specific to ELT classes. To this end, a tentative theoretical model of ELT teacher efficacy was developed based on a thorough analysis of the existing literature and the researchers’ conceptualization of typical ELT classes. This phase was followed by a series of classroom observations and interviews with students, teachers, and experts. The model was then cross-checked against the results of the observations and interviews and evolved into a scenario-based, Likert-scale ELT teacher efficacy instrument (ELTEI). The newly developed instrument was validated by administering it to 206 English language teachers, which led to some modifications in the model. The study can hopefully lead to a more meaningful interpretation of teacher efficacy in terms of context, language skills, students’ age and proficiency, and teachers’ perceptions in L2 settings.
Portfolio assessment (PA) as an assessment-for-learning (AfL) alternative has been under-represented in second/foreign language acquisition (SLA) research. This study examined the potential impact of electronic PA (e-PA) on English-as-a-Foreign-Language (EFL) learners’ modes of engagement in descriptive and narrative genres of writing on Moodle™. To do so, 56 university students were non-randomly selected and assigned to two intermediate-level EFL cohorts. In a pretest-mediation-posttest design, the descriptive and narrative writing tasks completed by the two groups were subjected to teacher feedback, student reflection logs, and subsequent revision every week. Results of a repeated-measures ANOVA indicated significant progress in lower-level skills (sentence structure, word choice/grammar, mechanics) and moderate progress in higher-level skills (organization, development) in both groups’ genre-based writing. Results of a one-way ANCOVA showed notable pretest-to-posttest achievement in both groups, with no statistically significant intergroup differences. The content of students’ reflection logs was inductively analyzed for their behavioral, emotional, and cognitive modes of engagement in e-PA. Qualitative data analysis indicated similar writing time intervals and recurrence of revisions as the behavioral mode in both groups. Participants also reported novelty, low anxiety, and enjoyment as their emotional experiences. In terms of their cognitive experience, the majority agreed on the applicability of teacher feedback and perceived their writing as improving in e-PA. Yet they were critical of regular mismatches between the scope of teacher assessment and that of self-assessment, as well as of teacher linguistic bias towards certain writing features. The pedagogical implications of the study support the facilitating role of e-PA in genre-based academic writing and e-learning contexts.
Research on the language assessment knowledge (LAK) of teachers has focused on two major topics: identifying teachers’ LAK needs and developing appropriate LAK tests. Although prior findings have contributed significantly to our understanding of the parameters of LAK, the studies were mostly quantitative and did not provide much information about EFL teachers’ perceptions and applications of their LAK in direct, face-to-face situations. Therefore, this qualitative study was designed to shed light on some key issues related to teachers’ LAK using semi-structured interviews. These issues included EFL teachers’ perception of their LAK and their use of LAK in their teaching. The participants were 11 teachers with a high level of LAK and 10 teachers with a low level of LAK, as determined by their performance on a LAK test. The interviews were recorded, transcribed, and content-analyzed. The findings did not reveal significant differences between the responses provided by the two groups of teachers. Further, to investigate the extent of teachers’ application of LAK in classroom contexts, some of the tests written by the participating teachers were collected and content-analyzed. The results showed that teachers with high LAK wrote longer tests with more varied sections and tasks. Finally, no meaningful relationship was found between the teachers’ level of LAK and their students’ performance on classroom achievement tests. The findings imply that the language assessment field needs more research on the multiple dimensions of LAK.
Background. Recently, there has been growing interest in the personal attributes of raters that determine the quality of the cognitive processes involved in their rating of writing. Purpose. Accordingly, this research explored how the rating experience of L2 raters might affect their rating of integrated and independent writing tasks. Methods. To pursue this aim, 13 experienced and 14 novice Iranian raters were selected through criterion sampling. After attending a training course on rating writing tasks, both groups produced introspective verbal protocols while rating integrated and independent writing tasks produced by an Iranian EFL learner. The verbal protocols were recorded and transcribed, and their content was analyzed by the researchers. Results. The six major themes extracted from the content analysis were content, formal requirements, general linguistic range, language use, mechanics of writing, and organization. The results indicated that the type of writing task (integrated vs. independent) is a determining factor in the number of references experienced and novice raters made to the TOEFL-iBT rating rubric. Further, the raters’ rating experience determined the proportions of references they made. Yet, the proportional differences observed between experienced and novice raters were statistically significant only for language use, mechanics of writing, organization, and the total. Conclusion. The variations in L2 raters’ performance on integrated and independent writing tasks underscore the need for professional training in using and interpreting the components of various writing rating scales for both experienced and novice raters.