A problem facing investigations of implicit and explicit learning is the lack of valid measures of second language implicit and explicit knowledge. This paper attempts to establish operational definitions of these two constructs and reports a psychometric study of a battery of tests designed to provide relatively independent measures of them. These tests were (a) an oral imitation test involving grammatical and ungrammatical sentences, (b) an oral narration test, (c) a timed grammaticality judgment test (GJT), (d) an untimed GJT with the same content, and (e) a metalinguistic knowledge test. Tests (a), (b), and (c) were designed as measures of implicit knowledge, and tests (d) and (e) were designed as measures of explicit knowledge. All of the tests examined 17 English grammatical structures. A principal component factor analysis produced two clear factors. This analysis showed that the scores from tests (a), (b), and (c) loaded on Factor 1, whereas the scores from ungrammatical sentences in test (d) and total scores from test (e) loaded on Factor 2. These two factors are interpreted as corresponding to implicit and explicit knowledge, respectively. A number of secondary analyses to support this interpretation of the construct validity of the tests are also reported.

This research was funded by a Marsden Fund grant awarded by the Royal Society of New Zealand to Rod Ellis and Cathie Elder. Other researchers who contributed to the research are Shawn Loewen, Rosemary Erlam, Satomi Mizutani, and Shuhei Hidaka. The author wishes to thank Nick Ellis, Jim Lantolf, and two anonymous SSLA reviewers, whose constructive comments helped me to present the theoretical background of the study more convincingly, to remove errors from the results, and to refine my interpretations of them.
As a basis for a systematic approach to investigating the effects of written corrective feedback, this article presents a typology of the different types available to teachers and researchers. The typology distinguishes two sets of options relating to (1) strategies for providing feedback (for example, direct, indirect, or metalinguistic feedback) and (2) the students' response to the feedback (for example, revision required versus attention to the correction only). Each option is illustrated and relevant research examined.
This paper begins by offering a definition of ‘task’ and by emphasizing that there is no single ‘task-based teaching’ approach. It then evaluates a number of criticisms of task-based teaching, drawing on recent critiques by Widdowson, Seedhouse, Sheen, and Swan. It is argued that many of these criticisms stem from a fundamental misunderstanding of what a ‘task’ is and of the theoretical rationales that inform task-based teaching. These criticisms also reflect a failure to acknowledge that multiple versions of task-based teaching exist. In particular, it is argued that task-based teaching need not be seen as an alternative to more traditional, form-focused approaches but can be used alongside them. The paper concludes with an examination of a number of genuine problems with implementing task-based teaching, as reflected in evaluation studies.