To test whether the language we speak influences our behavior even when we are not speaking, we asked speakers of four languages differing in their predominant word orders (English, Turkish, Spanish, and Chinese) to perform two nonverbal tasks: a communicative task (describing an event by using gesture without speech) and a noncommunicative task (reconstructing an event with pictures). We found that the word orders speakers used in their everyday speech did not influence their nonverbal behavior. Surprisingly, speakers of all four languages used the same order on both nonverbal tasks. This order, actor-patient-act, is analogous to the subject-object-verb pattern found in many languages of the world and, importantly, in newly developing gestural languages. The findings provide evidence for a natural order that we impose on events when describing and reconstructing them nonverbally and exploit when constructing language anew.

gesture | language genesis | sign language | word order

Consider a woman twisting a knob. When we watch this event, we see the elements of the event (woman, twisting, knob) simultaneously. But when we talk about the event, the elements are mentioned one at a time and, in most languages, in a consistent order. For example, English, Chinese, and Spanish speakers typically use the order woman-twist-knob [actor (Ar)-act (A)-patient (P)] to describe the event; Turkish speakers use woman-knob-twist (ArPA). The way we represent events in our language might be such a powerful tool that we naturally extend it to other representational formats. We might, for example, impose our language's ordering pattern on an event when called on to represent the event in a nonverbal format (e.g., gestures or pictures). Alternatively, the way we represent events in our language may not be easily mapped onto other formats, leaving other orderings free to emerge.

Word order is one of the earliest properties of language learned by children (1) and displays systematic variation across the languages of the world (2, 3), including sign languages (4). Moreover, for many languages, word order does not vary freely, and speakers must use marked forms if they want to avoid the canonical word order (5). If the ordering rules of language are easily mapped onto other, nonverbal representations, then the order in which speakers routinely produce words for particular elements in an event might be expected to influence the order in which those elements are represented nonverbally. Consequently, speakers of different languages would use different orderings when asked to represent events in a nonverbal format (the ordering rules of their respective languages). If, however, the ordering rules of language are not easily mapped onto nonverbal representations of events, speakers of different languages would be free to use orders that differ from the canonical orders found in their respective languages; in this event, the orderings they use might, or might not, converge on a single order. To explore this question, speakers of four langu...
In order to produce a coherent narrative, speakers must identify the characters in the tale so that listeners can figure out who is doing what to whom. This paper explores whether speakers use gesture, as well as speech, for this purpose. English speakers were shown vignettes of two stories and asked to retell the stories to an experimenter. Their speech and gestures were transcribed and coded for referent identification. A gesture was considered to identify a referent if it was produced in the same location as the previous gesture for that referent. We found that speakers frequently used gesture location to identify referents. Interestingly, however, they used gesture most often to identify referents that were also uniquely specified in speech. Lexical specificity in referential expressions in speech thus appears to go hand-in-hand with specification in referential expressions in gesture.
Using a cross-modal semantic priming paradigm, both experiments of the present study investigated the link between the mental representations of iconic gestures and words. Two groups of participants performed a primed lexical decision task in which they had to discriminate between visually presented words and nonwords (e.g., flirp). Word targets (e.g., bird) were preceded by video clips depicting either semantically related (e.g., a pair of hands flapping) or semantically unrelated (e.g., drawing a square with both hands) gestures. The duration of the gestures was on average 3,500 ms in Experiment 1 but only 1,000 ms in Experiment 2. Significant priming effects were observed in both experiments, with faster response latencies for related gesture-word pairs than for unrelated pairs. These results are consistent with the idea of interactions between the gestural and lexical representational systems, such that mere exposure to iconic gestures facilitates the recognition of semantically related words.