Gestures are often considered to be demonstrative of the embodied nature of the mind (Hostetter and Alibali, 2008). In this article, we review current theories and research on the intra-cognitive role of gestures, asking how gestures can support the internal cognitive processes of the gesturer. We suggest that extant theories are in a sense disembodied, because they focus solely on embodiment in terms of the sensorimotor neural precursors of gestures. As a result, current theories of the intra-cognitive role of gestures lack the explanatory scope to address how gestures-as-bodily-acts fulfill a cognitive function. Building on recent theoretical appeals that focus on the possibly embedded/extended cognitive role of gestures (Clark, 2013), we suggest that gestures are external physical tools of the cognitive system that replace and support otherwise solely internal cognitive processes. That is, gestures provide the cognitive system with a stable external physical and visual presence that offers a means to think with. We show that there is considerable overlap between the way the human cognitive system has been found to use its environment and the way gestures are used during cognitive processes. Lastly, we provide several suggestions for how to investigate the embedded/extended perspective on the cognitive function of gestures.
Research on multimedia learning has shown that learning is hampered when a multimedia message includes extraneous information that is not relevant to the task, because processing the extraneous information uses up scarce attention and working-memory resources. However, eye-tracking research suggests that task experience might be a boundary condition for this negative effect of extraneous information on learning, because people seem to learn to ignore task-irrelevant information over time. We therefore hypothesised that extraneous information might no longer hamper learning when it is present across a series of tasks, giving learners the chance to adapt their study strategy. This hypothesis was tested in three experiments. In Experiments 1a and 1b, participants learned the definitions of new words (from an artificial language) that denoted actions, with matching pictures (same action), mismatching pictures (another action), or without pictures. Mismatching pictures hampered learning compared with matching pictures. Experiment 2 showed that task experience may indeed be a boundary condition for this negative effect on learning: the initial negative effect was no longer present once learners had gained experience with the task. This suggests that learners adapted their study strategy, ignoring the mismatching pictures. That hypothesis was tested in Experiment 3 using eye tracking. Results showed that attention to the pictures waned with task experience, and that this decrease was stronger for mismatching than for matching pictures. Our findings demonstrate the importance of investigating multimedia effects over time and in relation to study strategies.
According to the body-specificity hypothesis, hearing action words evokes body-specific mental simulations of the actions; handedness should therefore affect these mental simulations. Given that pictures of actions also evoke mental simulations and often accompany words to be learned, would pictures that mismatch the mental simulation of a word negatively affect learning? We investigated the effects of pictures with a left-handed, right-handed, or bimanual perspective on left- and right-handers' learning of object-manipulation words in an artificial language. Right-handers recalled fewer definitions of words learned with a left-handed-perspective picture than with a right-handed-perspective picture. For left-handers, there was no effect of perspective. These findings suggest that mismatches between pictures and the mental simulations evoked by hearing action words can negatively affect right-handers' learning. Left-handers, who frequently encounter the right-handed perspective, may compensate for the lack of motor experience with visual experience and therefore not be influenced by picture perspective.
Research on embodied cognition has shown that action and language are closely intertwined. The present study seeks to exploit this relationship by systematically investigating whether motor activation improves eight- to nine-year-old children's learning of vocabulary in their first language. In a within-subjects paradigm, 49 children learned novel object-manipulation, locomotion, and abstract verbs via a verbal definition alone and in combination with gesture observation, imitation, or generation (i.e., enactment). Results showed that learning of locomotion verbs improved significantly through gesture observation compared with verbal definitions only. For learning object-manipulation verbs, children with good language skills seemed to benefit from imitation and enactment, whereas these conditions appeared to hinder children with poor language skills. Learning of abstract verbs was not differentially affected by instructional condition. This study suggests that the effectiveness of observing and generating gestures for vocabulary learning may differ depending on verb type and language proficiency.