This study investigated whether congenital amusia, a neuro-developmental disorder of musical perception, also has implications for speech intonation processing. In total, 16 British amusics and 16 matched controls completed five intonation perception tasks and two pitch threshold tasks. Compared with controls, amusics showed impaired performance on discrimination, identification and imitation of statements and questions that were characterized primarily by pitch direction differences in the final word. This intonation-processing deficit in amusia was largely associated with a psychophysical pitch direction discrimination deficit. These findings suggest that amusia impacts upon one's language abilities in subtle ways, and support previous evidence that pitch processing in language and music involves shared mechanisms.
The totally deafened adult, unable to make use of a hearing aid, has no alternative to lipreading for everyday communication. Lipreading, however, is no substitute for hearing speech: many lipreaders have great difficulty even in ideal conditions, and even the best find the task demanding and tiring. Prosthetic attempts to substitute for lost hearing have centred on three distinct types of intervention: visual, tactile, and electrocochlear. As none of these is likely to yield a good understanding of speech independent of lipreading in the near future, we have attempted to isolate relatively simple patterns of stimulation that, although not intelligible in themselves, will aid lipreading. From this point of view, the fundamental frequency or 'pitch' of the voice is the most important pattern element, because it provides both segmental and suprasegmental information and is practically invisible: it thus complements the visual information already available on the face. As we show here, with the voice pitch presented acoustically, normal listeners can lipread a speaker reading continuous text at up to two and a half times the rate possible on the basis of lipreading alone. The pitch signal by itself, of course, is completely unintelligible. Although our work is primarily concerned with methods of electrical stimulation of the cochlea, it has implications for other sensory substitution techniques, the design of special-purpose hearing aids and current theories of speech perception.
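The usefulness of voice pitch as an aid rests on estimating the fundamental frequency (F0) from the speech waveform. The abstract does not describe the authors' pitch extractor; purely as an illustrative sketch, a minimal autocorrelation-based F0 estimator for one analysis frame might look like this (the function name, search range, and voicing threshold are assumptions, not the paper's method):

```python
import numpy as np

def estimate_f0(frame, sample_rate, f0_min=60.0, f0_max=400.0):
    """Estimate fundamental frequency (Hz) of one frame by autocorrelation.

    Returns 0.0 for frames judged unvoiced (weak periodicity).
    """
    frame = frame - frame.mean()
    # Autocorrelation, keeping non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0  # silent frame
    ac = ac / ac[0]  # normalise so that lag 0 equals 1
    # Restrict the peak search to lags corresponding to plausible F0 values.
    lag_min = int(sample_rate / f0_max)
    lag_max = int(sample_rate / f0_min)
    search = ac[lag_min:lag_max]
    if search.size == 0:
        return 0.0
    peak = int(np.argmax(search)) + lag_min
    # Treat weak periodicity as unvoiced (threshold chosen for illustration).
    if ac[peak] < 0.3:
        return 0.0
    return sample_rate / peak
```

For a 150 Hz sinusoid sampled at 8 kHz, the estimate lands within a few hertz of 150; a silent frame is reported as unvoiced (0.0).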
In order to investigate the nature of some processes in speech acquisition, synthetic speech-like stimuli were played to groups of English and French children between two and fourteen years of age. The acoustic parameters varied were voice onset time (VOT) and first-formant transition. Three stages were observed in the development of children’s labeling behavior, termed scattered labeling, progressive labeling, and categorical labeling. Individual response patterns were examined. The first stage (scattered labeling) includes mostly children of two to three years of age for the English, and up to about four for the French. Children label most confidently those stimuli closest in physical terms to those of their natural speech environment; all stimuli with intermediate VOT values are labeled quasi-randomly. Progressive labeling is found mostly amongst children aged three and four for the English, and up to about seven for the French: children’s response curves go progressively, almost linearly, from one type of label (voiced) to the other (voiceless), so that response follows the stimulus continuum. Categorical labeling becomes the dominant pattern only at the age of five to six for the English, one or two years later for the French. This development was found to be highly significant (p < 0.003 for both English and French, using Kendall’s tau measure of correlation). English children learn to make use of the F1 transition feature around five years, whereas French children never use it as a voicing cue. French children thus have fewer features at their disposal than English children: this may account for the later age at which French children, as a group, reach the various labeling stages, and for the labeling curves being less sharply categorical for French than for English children.
These findings indicate that categorical labeling for speech sounds is not innate but learned through a relatively slow process which is to a certain extent language specific. The implications of the results are discussed in the light of previous work in the field.
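The three stages described above differ in the steepness of the labeling (identification) curve: scattered responding is essentially flat, progressive responding ramps almost linearly, and categorical responding is a steep sigmoid at the category boundary. As a hedged sketch only (the authors' own analysis used Kendall's tau, not this fit), the steepness can be quantified by fitting a logistic psychometric function to the proportion of "voiceless" labels by simple grid search:

```python
import numpy as np

def fit_psychometric(vot_ms, p_voiceless):
    """Fit p(voiceless) = 1 / (1 + exp(-k * (x - x0))) by grid search.

    Returns (x0, k): the category boundary (ms VOT) and slope; a larger
    k means a sharper, more categorical labeling curve.
    """
    best = (None, None, np.inf)
    for x0 in np.linspace(vot_ms.min(), vot_ms.max(), 101):
        for k in np.linspace(0.01, 2.0, 200):
            pred = 1.0 / (1.0 + np.exp(-k * (vot_ms - x0)))
            err = float(np.sum((pred - p_voiceless) ** 2))
            if err < best[2]:
                best = (x0, k, err)
    return best[0], best[1]
```

Applied to a step-like ("categorical") responder and a linear ("progressive") responder over the same VOT continuum, the fit recovers a much larger slope for the former, with the boundary at the 50% crossover.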
Our progress towards the development of a particular form of cochlear implant for the totally deaf is described. Single-channel stimulation at the round window or promontory is used; this involves a minimum of surgical intervention and infective risk, preserves the possibility of remission and allows the application of later developments. The signal used for stimulation is designed to be matched both to the deaf lipreader's needs and to his new, restricted, auditory ability. This is done by concentrating on the acoustic pattern components of speech which carry intonation and voiced-voiceless information. Surgical, electrophysiological, psychoacoustic and speech-perceptual aspects of our work with twelve patients are described. The tests involve responses relating, for example, to: thresholds for sinusoids; frequency difference limens; periodic-aperiodic discrimination; stress placement; and consonant labelling using combined visual and electrical inputs. Relatively extensive measurements were made with six patients. Significant individual differences were found, and the sets of responses provide an essential basis for an appraisal of the potential usefulness of our work to the individual patient. Possible reasons for the individual differences are discussed. A brief indication is given of the techniques which we have developed for the future speech training and speech-production evaluation of patients with electro-cochlear voice monitoring. The final section of our paper mentions our histological investigation of the effects of this type of stimulation in the guinea pig.
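Thresholds and frequency difference limens such as those listed above are commonly measured with adaptive procedures, although the abstract does not say which procedure was used. Purely as an illustration, a standard 2-down/1-up staircase (which converges on the 70.7%-correct level) can be sketched as follows; the `respond` callback and step sizes are assumptions:

```python
def staircase_2down1up(respond, start, step, n_reversals=8):
    """Adaptive 2-down/1-up staircase; tracks the ~70.7%-correct level.

    `respond(level)` returns True when the listener answers correctly
    at the given stimulus level. The threshold estimate is the mean of
    the last six reversal levels.
    """
    level = start
    correct_streak = 0
    reversals = []
    direction = 0  # -1 while descending, +1 while ascending, 0 at start
    while len(reversals) < n_reversals:
        if respond(level):
            correct_streak += 1
            if correct_streak == 2:  # two correct in a row: step down
                correct_streak = 0
                if direction == +1:
                    reversals.append(level)  # up-to-down turning point
                direction = -1
                level = max(step, level - step)
        else:  # one incorrect: step up
            correct_streak = 0
            if direction == -1:
                reversals.append(level)  # down-to-up turning point
            direction = +1
            level += step
    return sum(reversals[-6:]) / 6.0
```

With a deterministic simulated listener who is correct whenever the level is at or above 5.0, starting at 20 with step 2, the staircase oscillates between 4 and 6 and the estimate averages to the true threshold of 5.0.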