Evidence is presented for a perceptual shift affecting consonant clusters that are phonotactically illegal, albeit pronounceable, in French: they are perceived as phonetically close legal clusters. Specifically, word-initial /dl/ and /tl/ are heard as /gl/ and /kl/, respectively. In two phonemic gating experiments, participants generally judged short gates (which did not yet contain information about the second consonant /l/) as containing dental stops. However, as information for the /l/ became available in larger gates, a perceptual shift developed in which the initial stops were increasingly judged to be velars. A final phoneme-monitoring test suggested that this kind of shift takes place on-line during speech processing, with some extra temporal processing cost. These results provide evidence for the automatic integration of low-level phonetic information into a more abstract code determined by the native phonological system.

The view that speech perception is determined by the native-language sound system is well motivated and widely shared. Ontogenetically, there is a shift from universal to language-specific perceptual capacities: although young infants seem to be initially equipped with "universal" capacities for processing speech sounds, language-specific capacities emerge in the second half of the first year of life, by 9 months or before.
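As a rough illustration of the gating procedure described above (listeners judge progressively longer initial fragments of a stimulus), the following Python sketch slices a waveform into gates of increasing duration. The sampling rate, gate step, and the synthetic array are hypothetical placeholders, not the authors' stimulus-preparation code.

```python
import numpy as np

SAMPLE_RATE = 16_000   # Hz (hypothetical)
GATE_STEP_MS = 20      # each gate is 20 ms longer than the previous one (hypothetical)

def make_gates(waveform, sample_rate=SAMPLE_RATE, step_ms=GATE_STEP_MS):
    """Return successively longer initial fragments ("gates") of the waveform."""
    step = int(sample_rate * step_ms / 1000)
    return [waveform[:end] for end in range(step, len(waveform) + 1, step)]

# Hypothetical 300 ms stimulus (e.g., a /dl/-initial item); real stimuli would be recordings.
stimulus = np.random.randn(int(0.3 * SAMPLE_RATE))
gates = make_gates(stimulus)
print(f"{len(gates)} gates, durations (ms):",
      [round(1000 * len(g) / SAMPLE_RATE) for g in gates])
```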
In two experiments, French speakers detected cv or cvc sequences at the beginning of disyllabic pseudowords varying in syllable structure and pivotal consonant. Overall, both studies failed to replicate the crossover interaction previously observed in French by Mehler, Dommergues, Frauenfelder, and Segui (1981). In both experiments, latencies were shorter to cv than to cvc targets, and this effect of target length was generally smaller for cvc.cv than for cv.cv carriers. However, a clear crossover interaction was observed for liquid pivotal consonants under target-blocking conditions, and especially for slow participants. A third experiment collected phoneme-gating data on the same pseudowords to obtain estimates of the duration of the initial phonemes. Regression analyses showed that phoneme duration accounted for a large proportion of the variance in cvc target detection, suggesting that participants were reacting rather directly to phonemic throughput. These findings argue against the hypothesis of an early syllabic classification mechanism in the perception of speech.

How acoustic-phonetic information is mapped onto lexical representations constitutes a central issue in the study of speech perception and spoken word recognition. Various kinds of linguistic units, ranging from phonetic features to syllables, have been proposed to mediate the mapping process. Among these units, researchers have long considered the syllable an obvious choice. Indeed, since the syllable constitutes the domain of most coarticulation phenomena, it appears to provide a natural way of dealing with the problem of variability in the signal. One influential source of evidence favouring the hypothesis that syllable units are instrumental in speech processing comes from studies using the sequence detection task (see Frauenfelder & Kearns, 1996). In the original study (Mehler, Dommergues, Frauenfelder, & Segui, 1981), French subjects detected Consonant-Vowel (cv) or Consonant-Vowel-Consonant (cvc) targets in spoken target-bearing carrier words whose initial syllable was either cv or cvc. For instance, pa and pal were detected in words like pa.lace and pal.mier. Detection latencies were shorter when the target exactly matched the first syllable of the carrier word, with responses to pa faster in pa.lace than pal.m...
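To make the regression analysis mentioned above concrete (initial-phoneme duration as a predictor of cvc detection latency), here is a minimal Python sketch that fits a least-squares line and reports the proportion of variance explained. The variable names and the toy numbers are hypothetical illustrations, not data or code from the study.

```python
import numpy as np

# Hypothetical measurements: estimated duration (ms) of the initial phonemes of
# each pseudoword (from the gating data) and mean cvc detection latency (ms).
phoneme_duration_ms = np.array([95.0, 110.0, 120.0, 135.0, 150.0, 160.0])
cvc_latency_ms = np.array([410.0, 425.0, 440.0, 455.0, 480.0, 490.0])

# Ordinary least-squares fit: latency = slope * duration + intercept.
slope, intercept = np.polyfit(phoneme_duration_ms, cvc_latency_ms, deg=1)

# Proportion of variance accounted for (R^2).
predicted = slope * phoneme_duration_ms + intercept
ss_res = np.sum((cvc_latency_ms - predicted) ** 2)
ss_tot = np.sum((cvc_latency_ms - cvc_latency_ms.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"slope = {slope:.2f} ms/ms, intercept = {intercept:.1f} ms, R^2 = {r_squared:.3f}")
```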
Perceptual evaluation is still the most common method in clinical practice for diagnosing and following the progression of speech disorders. Although a number of studies have addressed the acoustic analysis of impaired speech productions, additional descriptive analysis is required to manage inter-speaker variability, both among speakers with the same condition and across different conditions. In this context, this article investigates automatic speech processing approaches dedicated to the detection and localization of abnormal acoustic phenomena in speech signals produced by people with speech disorders. This automatic process aims to enhance the manual investigation of human experts while reducing the extent of their intervention by calling their attention to specific parts of the speech considered atypical from an acoustic point of view. Two approaches are proposed in this article. The first models only normal speech, whereas the second models both normal and dysarthric speech. Both approaches are evaluated following two strategies: one consists of a strict phone-by-phone comparison between a human annotation of abnormal phones and the automatic output, while the other allows a "one-phone delay" in the comparison. The experimental evaluation of both approaches for the task of detecting acoustic anomalies was conducted on two corpora composed of French dysarthric speakers and control speakers. Both approaches obtain very encouraging results, and their potential for clinical use with different types of dysarthria and neurological diseases is quite promising.
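To make the two evaluation strategies concrete, the sketch below scores detected abnormal phones against a human annotation, first with strict position-by-position matching and then with the lenient comparison, interpreted here as a tolerance of one phone in either direction. The data representation and function names are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical representation: for each phone position in an utterance,
# True marks a phone judged/detected as abnormal, False otherwise.
human_annotation = [False, True, False, False, True, False, True, False]
system_output = [False, False, True, False, True, False, False, True]

def strict_matches(reference, hypothesis):
    """Strict comparison: a detection counts only at the exact same phone index."""
    return sum(1 for ref, hyp in zip(reference, hypothesis) if ref and hyp)

def one_phone_delay_matches(reference, hypothesis):
    """Lenient comparison: a detection also counts if it falls on the
    immediately preceding or following phone (one-phone tolerance)."""
    hits = 0
    for i, ref in enumerate(reference):
        if not ref:
            continue
        window = hypothesis[max(0, i - 1): i + 2]
        if any(window):
            hits += 1
    return hits

n_abnormal = sum(human_annotation)
print("strict recall:         ", strict_matches(human_annotation, system_output) / n_abnormal)
print("one-phone-delay recall:", one_phone_delay_matches(human_annotation, system_output) / n_abnormal)
```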