The development of a set of everyday, nonverbal, digitized sounds for use in auditory confrontation naming applications is described. Normative data are reported for 120 sounds of varying lengths representing a wide variety of acoustic events such as sounds produced by animals, people, musical instruments, tools, signals, and liquids. In Study 1, criteria for scoring naming accuracy were developed and rating data were gathered on degree of confidence in sound identification and the perceived familiarity, complexity, and pleasantness of the sounds. In Study 2, the previously developed criteria for scoring naming accuracy were applied to the naming responses of a new sample of subjects, and oral naming times were measured. In Study 3, data were gathered on how subjects categorized the sounds: In the first categorization task - free classification - subjects generated category descriptions for the sounds; in the second task - constrained classification - a different sample of subjects selected the most appropriate category label for each sound from a list of 27 labels generated in the first task. Tables are provided in which the 120 stimuli are sorted by familiarity, complexity, pleasantness, duration, naming accuracy, speed of identification, and category placement. The .WAV sound files are freely available to researchers and clinicians via a sound archive on the World Wide Web; the URL is http://www.cofc.edu/~marcellm/confront.htm.
Sound events are sequences of closely grouped and temporally related environmental sounds that tell a story or establish a sense of place. The goal of our project was to create a set of sound events depicting various scenarios (such as a car accident, cooking breakfast, and walking outdoors) and to gather normative data about how people understand them. Samples of college students listened to 22 sound events over headphones in three self-paced, computer-based studies. In the Identification Task, 43 participants used text boxes to type descriptions of what was happening in the sound events. In the Rating Task, 39 participants used Likert scales to rate the sound events on the attributes of familiarity, complexity, and pleasantness. In the Memory Task, 42 participants answered two multiple-choice questions immediately after listening to each sound event. Detailed tables are provided for the following: (1) descriptions of the sound events and their components; (2) accuracy and response time measurements for each of the 22 sound events across the three studies; and (3) rank-orderings of the sound events by ease of identification, recognition of details, and rated familiarity, complexity, and pleasantness. Digital files of the stimuli, which may be of interest to auditory cognition researchers and clinical neuropsychologists, may be downloaded from either www.psychonomic.org/archive or www.cofc.edu/~marcellm/sound event studies/sndevent.htm.
The purpose of the present study was to investigate sensory integration in infants. Two groups of infants were repeatedly presented with a standard visual or auditory temporal sequence in a habituation period. In the test period, each group was divided into four subgroups in which the presentation modality and/or the temporal sequence remained the same or were different. Subjects who were presented a different temporal sequence in the test period produced larger responses than did subjects who were presented the same standard temporal sequence. This differential magnitude of responding occurred regardless of the sensory modality in which the temporal sequences were presented. The results of the present study support the conclusion that infants as young as 7 months of age are capable of perceiving equivalences and differences in temporal sequential information within and across sensory modalities.
Nonretarded (NR) individuals typically show better short-term memory for brief sequences of auditory than of visual information (the modality effect). The present study attempted to determine whether the failure of Down's syndrome (DS) individuals to show the modality effect is due to the verbal-expressive demands of oral responding in memory tasks. DS, NR, and MR (non-DS mentally retarded) subjects listened to or looked at increasingly long sequences of digits and attempted to recall them either orally or manually (through placement of items). Analyses suggested the following: (1) manual responding failed to enhance auditory recall in DS or in any other subjects; and (2) difficulty in recalling auditory stimuli was greatest for DS subjects. An additional assessment of DS, MR, and NR subjects on a standardized auditory short-term memory test requiring a nonverbal pointing response replicated the above findings.