The present research addresses whether music training mediates the recall of spoken and sung lyrics and whether presentation rate, rather than the inclusion of melody, is the essential variable. In Experiment 1, 78 undergraduates, half with music training and half without, heard spoken or sung lyrics. Recall for sung lyrics was superior to that for spoken lyrics in both groups. In Experiments 2 and 3, presentation rate was manipulated so that the durations of the spoken and the sung materials were equal. With presentation rate equated, there was no advantage for sung over spoken lyrics. In all the experiments, participants with music training outperformed those without training in all conditions. The results suggest that music training leads to enhanced memory for verbal material. Previous findings that melody aids text recall may be attributed to presentation rate.
Interest in studying the effects of extensive music training on nonmusical perceptual and cognitive abilities has grown in recent years. Here, we present evidence that formal music instruction is associated with superior verbal and visual memory. Participants included 15 highly trained pianists and 21 individuals with little or no formal music training. The groups were comparable in terms of age, gender distribution, and socioeconomic status. Musicians showed superior immediate and delayed recall of word lists and greater use of a semantic clustering strategy during initial list-learning than nonmusicians. They also exhibited superior learning, delayed recall, and delayed recognition for visual designs. Group differences in delayed free recall of both words and designs persisted after controlling statistically for estimated Full Scale IQ. These results suggest that extensive music training is associated with a generalized enhancement of auditory and visual memory functions.
Years of formal music training and proficiency at a nonmusical skill, verbal recall, are surprisingly associated (e.g., A. R. Kilgour, L. S. Jakobson, & L. L. Cuddy, 2000). The present study proposes an indirect mechanism to account for this association. It is proposed that music training strengthens auditory temporal-order processing, and that temporal-order processing then mediates the relationship between years of music training and prose recall.
When processing music stimuli, most people show a global bias (e.g., Evers et al., 1999). Extensive music training, however, develops both global (Halpern & Bower, 1982) and local (Burton et al., 1989) music processing abilities. Our goal was to determine whether the enhancement of musicians' local processing abilities is domain-specific or extends to processing nonmusical, visual stimuli. Musicians outperformed nonmusicians on the Group Embedded Figures Test (Experiment 1) and on Block Design (Experiment 2). Additionally, musicians' ability to accurately copy drawings of physically impossible objects was superior to that of nonmusicians (Experiment 2). These effects could not be accounted for by group differences in several demographic indicators (age, education, gender, or SES), or (in Experiment 2) in verbal intelligence. The results provide converging evidence that extensive music training is specifically associated with superior visual processing of local details, beyond any benefits it may have on verbal intelligence.
We investigated participants' ability to identify and represent faces by hand. In Experiment 1, participants proved surprisingly capable of identifying unfamiliar live human faces using only their sense of touch. To evaluate the contribution of geometric and material information more directly, we biased participants toward encoding faces more in terms of geometric than material properties by varying the exploration condition. When participants explored the faces both visually and tactually, identification accuracy did not improve relative to touch alone. When participants explored masks of the faces, thereby eliminating material cues, matching accuracy declined substantially relative to tactual identification of live faces. In Experiment 2, we explored intersensory transfer of face information between vision and touch. The findings are discussed in terms of their relevance to haptic object processing and to the face-processing literature in general.