Recent work in embodied cognition has demonstrated that language comprehension involves the motor system (e.g., Glenberg & Kaschak, 2002). Such findings are often attributed to mechanisms involving simulations of linguistically described events (Barsalou, 1999; Fischer & Zwaan, 2008). We propose that research paradigms in which simulation is the central focus need to be augmented with paradigms that probe the organization of the motor system during language comprehension. The use of well-studied motor tasks may be appropriate to this endeavour. To this end, we present a study in which participants perform a bimanual rhythmic task (Kugler & Turvey, 1987) while judging the plausibility of sentences. We show that the dynamics of the bimanual task differ when participants judge sentences describing performable actions as opposed to sentences describing events that are not performable. We discuss the general implications of our results for accounts of embodied cognition.
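The abstract does not say how the dynamics of the bimanual task were quantified. One standard measure for rhythmic interlimb coordination is continuous relative phase between the two limbs' movement signals; the sketch below, with illustrative signal names and parameters that are assumptions rather than the authors' method, shows one way such a measure could be computed.

```python
# A hedged sketch (assumed measure, not the authors' reported analysis):
# continuous relative phase between two rhythmic movement signals via the
# Hilbert transform. Signals, frequency, and sampling rate are illustrative.
import numpy as np
from scipy.signal import hilbert

def relative_phase(left, right):
    """Continuous relative phase (radians) between two zero-centered movement signals."""
    phase_left = np.angle(hilbert(left - np.mean(left)))
    phase_right = np.angle(hilbert(right - np.mean(right)))
    return np.unwrap(phase_left - phase_right)

# Example: two noisy 1 Hz oscillations sampled at 100 Hz, roughly in-phase.
t = np.arange(0.0, 10.0, 0.01)
left = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.randn(t.size)
right = np.sin(2 * np.pi * 1.0 * t + 0.2) + 0.05 * np.random.randn(t.size)
phi = relative_phase(left, right)
print(np.mean(phi), np.std(phi))  # mean relative phase and its variability
```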
Several studies have shown that the phonetic and phonological categories of both languages interact in bilingual speakers (e.g., the Speech Learning Model; Flege, 1995). Interestingly, these categories change continuously over time, drifting toward the characteristics of the ambient language (Sancier & Fowler, 1997). In this study, we examined how categories change during short-term bilingual interaction. Specifically, we were interested in whether cross-linguistic influences were moderated by the extent of concurrent use of the two languages as well as by the linguistic abilities of the target audience. To examine these questions, we recorded Spanish language instructors before, during, and after a classroom interaction to track changes in their productions. These recordings were made in Spanish courses of varying levels to determine whether the influence of Spanish phonology on English productions increased in higher-level courses. The speech samples were phonetically transcribed, and acoustic analyses were performed to detect changes in voice onset time (Lisker & Abramson, 1964), vowel space, consonant manner class, and stress patterns. This study has implications for theories of bilingual speech production as well as for second language instruction and education.
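The abstract does not name the analysis tools. As a rough illustration only, vowel-space measurements of the kind described above could be obtained with Praat through the parselmouth Python bindings, as in the sketch below; the file name, interval times, and vowel token are hypothetical placeholders, not the authors' materials.

```python
# A minimal sketch (assumed workflow, not the authors' pipeline): extract F1/F2 at a
# vowel's temporal midpoint with Praat via parselmouth for vowel-space comparisons.
# The wav file and interval times below are hypothetical.
import parselmouth
from parselmouth.praat import call

def vowel_midpoint_formants(wav_path, start, end):
    """Return (F1, F2) in Hz at the temporal midpoint of a labeled vowel interval."""
    sound = parselmouth.Sound(wav_path)
    formant = sound.to_formant_burg(time_step=0.01, max_number_of_formants=5)
    midpoint = (start + end) / 2.0
    f1 = call(formant, "Get value at time", 1, midpoint, "Hertz", "Linear")
    f2 = call(formant, "Get value at time", 2, midpoint, "Hertz", "Linear")
    return f1, f2

# e.g., a vowel token between 0.35 s and 0.52 s in a (hypothetical) pre-class recording
print(vowel_midpoint_formants("instructor01_pre.wav", 0.35, 0.52))
```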
Typical listeners can adjust to the speech of individuals with dysarthria through the process of perceptual learning. Past research has demonstrated that listeners improve in their recognition of both segments and connected speech produced by people with dysarthria. The mechanisms underlying learning of dysarthric speech remain uncertain, though it has been suggested that exposure allows listeners to tune into the segmental characteristics of the speech. In the current study, we tested this hypothesis by training listeners to identify vowels spoken by an individual with mild dysarthria. We employed a pre-test/post-test design in which we first tested listeners on (1) vowel recognition and (2) transcription accuracy of connected speech, trained them, and then tested them again. We found that listeners who were trained on dysarthric vowels demonstrated greater improvements in vowel identification than those in the control condition. Likewise, the listeners who underwent training showed greater improvements in transcription accuracy than those in the control condition. However, the training advantage did not generalize to unfamiliar phrases. Overall, it appears that listeners are able to tune into the segmental characteristics of dysarthric speech after a short training session, which improves their recognition of longer connected speech.
The Mandarin low-dipping tone (T3) undergoes an alternation, tone sandhi, when followed by another T3: the resulting F0 is superficially the same as that of the rising tone (T2). We investigated how Mandarin speakers adapt their speech production to overcome potential ambiguities induced by T3 sandhi during an interactive task. Ten pairs of Chinese participants completed an interactive phrase-matching task. Participants were shown displays with Chinese phrases (surname + title, e.g., 1a-b below); one participant read an indicated phrase, which the other selected from their own display. There were two conditions. In the no-sandhi condition (1a-b), the title did not induce sandhi, so the surnames should carry distinct F0 patterns. In the sandhi condition (2a-b), the title induced T3 sandhi, so the surnames should be homophonous.

1a 卢侦探 (lu2 tʃ̺ən1tan4 “Detective Lu”)
1b 鲁侦探 (lu3 tʃ̺ən1tan4 “Detective Lu”)
2a 卢主任 (lu2 tʃ̺u3ɹ̺ən4 “Director Lu”)
2b 鲁主任 (lu3 tʃ̺u3ɹ̺ən4 “Director Lu”)

Task performance showed clear evidence of sandhi-induced ambiguity. Examination of accuracy and tone acoustics suggested that pairs deployed different strategies to overcome the sandhi-induced ambiguity, including exaggerating the F0 rise or T3 duration, or adding a pause to avoid applying the sandhi process.
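As a rough sketch of how the surname F0 comparisons could be carried out (the abstract does not specify the tools or measures), the example below extracts an F0 contour over a syllable interval with Praat via parselmouth; the file names and interval times are hypothetical, and the final comparison is only one plausible summary of an F0 rise.

```python
# A hedged sketch (assumed tooling, not the authors' pipeline): extract the F0 contour
# over a surname syllable with Praat via parselmouth, e.g., to compare a T2 surname
# with a sandhi-derived T3 surname. File names and times are hypothetical.
import numpy as np
import parselmouth

def f0_contour(wav_path, start, end):
    """Return (times, F0 in Hz) for voiced frames between start and end seconds."""
    sound = parselmouth.Sound(wav_path)
    pitch = sound.to_pitch(time_step=0.005)
    times = pitch.xs()
    f0 = pitch.selected_array['frequency']  # 0.0 where Praat judged the frame unvoiced
    mask = (times >= start) & (times <= end) & (f0 > 0)
    return times[mask], f0[mask]

# e.g., compare the net F0 rise over the two surnames in the sandhi condition
_, f0_2a = f0_contour("pair01_lu2_zhuren.wav", 0.10, 0.40)
_, f0_2b = f0_contour("pair01_lu3_zhuren.wav", 0.10, 0.40)
print(f0_2a[-1] - f0_2a[0], f0_2b[-1] - f0_2b[0])
```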