The Goldsmiths Musical Sophistication Index (Gold-MSI) was recently proposed as a self-report measure of musical skills and behaviors in the general population. Although it is becoming a widely used tool, relatively little is known about its correlates, and adaptations into different languages will be crucial for cross-cultural comparisons and for use beyond the original validation context. In this study, we adapted the Gold-MSI for use with Portuguese-speaking individuals and evaluated it with a Portuguese sample (N = 408; age range = 17–66 years; 306 women). We demonstrate that the Portuguese version of the Gold-MSI has appropriate psychometric properties, including good internal consistency and very good test–retest reliability. This was observed for the five subscales and for the general musical sophistication index (α values ⩾ 0.82, r values ⩾ 0.84). Using confirmatory factor analysis, the expected underlying factor structure was also confirmed. In addition, we identified associations between individual differences on the Gold-MSI and socio-demographic factors (age, sex, education, socio-economic status), personality traits, and music preferences. The Portuguese Gold-MSI is freely available, and it offers a reliable and valid tool that can contribute to the refined assessment of musical sophistication in a range of research contexts.
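The internal-consistency values reported above are Cronbach's α, computed from the item-by-item scores of each subscale. As a minimal sketch (the data here are hypothetical, not the study's), α can be computed from a respondents × items score matrix as follows:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k / (k - 1) * (1 - sum(item variances) / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Two perfectly correlated items yield alpha = 1
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]])
print(round(cronbach_alpha(scores), 2))  # → 1.0
```

Values of α ⩾ 0.82, as found for all Gold-MSI subscales here, indicate that the items within each subscale covary strongly relative to their individual variances.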
We sought to determine whether an objective test of musical ability could be successfully administered online. A sample of 754 participants was tested with an online version of the Musical Ear Test (MET), which had Melody and Rhythm subtests. Both subtests had 52 trials, each of which required participants to determine whether standard and comparison auditory sequences were identical. The testing session also included the Goldsmiths Musical Sophistication Index (Gold-MSI), a test of general cognitive ability, and self-report questionnaires that measured basic demographics (age, education, gender), mind-wandering, and personality. Approximately 20% of the participants were excluded for incomplete responding or failing to finish the testing session. For the final sample (N = 608), findings were similar to those from in-person testing in many respects: (1) the internal reliability of the MET was maintained, (2) construct validity was confirmed by strong associations with Gold-MSI scores, (3) correlations with other measures (e.g., openness to experience, cognitive ability, mind-wandering) were as predicted, (4) mean levels of performance were similar for individuals with no music training, and (5) musical sophistication was a better predictor of performance on the Melody than on the Rhythm subtest. In sum, online administration of the MET proved to be a reliable and valid way to measure musical ability. Supplementary information: the online version contains supplementary material available at 10.3758/s13428-021-01641-2.
Music training is widely assumed to enhance several nonmusical abilities, including speech perception, executive abilities, reading, and emotion recognition. This assumption is based primarily on cross-sectional comparisons between musicians and nonmusicians. It remains unclear, however, whether training itself is necessary to explain the musician advantages, or whether factors such as innate predispositions and informal musical experience could produce similar effects. Here, we sought to clarify this issue by examining the association between music and vocal emotion recognition. The sample (N = 169) comprised musically trained and untrained listeners who varied widely in their music perception abilities, as assessed through self-report and performance-based measures. The emotion recognition tasks required listeners to categorize emotions in nonverbal vocalizations (e.g., laughter, crying) and in speech prosody. Music training was associated positively with emotion recognition across tasks, but the effect was small. We also found a positive association between music perception abilities and emotion recognition in the entire sample, even with music training held constant. In fact, untrained participants with good musical abilities were as good as highly trained musicians at recognizing vocal emotions. Moreover, the association of music training with emotion recognition was fully mediated by auditory and music perception skills. Thus, in the absence of formal music training, individuals who were 'naturally' musical showed musician-like performance at recognizing vocal emotions. These findings highlight an important role for predispositions and informal musical experience in associations between music and nonmusical domains.
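The claim that the training–recognition association was "fully mediated" by perception skills follows the standard product-of-coefficients logic: the total effect c of training on recognition decomposes into an indirect path a·b through the mediator plus a direct path c′, and full mediation means c′ ≈ 0. A minimal sketch with simulated data (variable names are illustrative, not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Simulated full-mediation structure: training affects perception,
# and perception (not training directly) affects emotion recognition.
training = rng.normal(size=n)
perception = 0.6 * training + rng.normal(size=n)   # mediator
emotion = 0.5 * perception + rng.normal(size=n)    # outcome

def ols(y, predictors):
    """OLS coefficients [intercept, b1, b2, ...] via least squares."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

c = ols(emotion, [training])[1]                    # total effect
a = ols(perception, [training])[1]                 # training -> mediator
b, c_prime = ols(emotion, [perception, training])[1:3]  # mediator model

# Exact OLS identity: total = indirect + direct, i.e. c = a*b + c_prime.
# Under full mediation, c_prime is near zero and a*b carries the effect.
```

In the study, holding the mediator (auditory and music perception skills) constant eliminated the direct association between training and emotion recognition, which is the pattern the c′ ≈ 0 case illustrates.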
Voices are a primary source of emotional information in everyday interactions. Being able to process non-verbal vocal emotional cues, namely those embedded in speech prosody, impacts on our behavior and communication. Extant research has delineated the role of temporal and inferior frontal brain regions in vocal emotional processing. A growing number of studies also suggest the involvement of the motor system, but little is known about such potential involvement. Using resting-state fMRI, we asked whether the patterns of motor system intrinsic connectivity play a role in emotional prosody recognition in children. Fifty-five 8-year-old children completed an emotional prosody recognition task and a resting-state scan. Better performance in emotion recognition was predicted by stronger connectivity between the inferior frontal gyrus (IFG) and motor regions including primary motor, lateral premotor, and supplementary motor sites. This effect was mostly driven by the IFG pars triangularis and cannot be explained by differences in domain-general cognitive abilities. These findings indicate that individual differences in the engagement of sensorimotor systems, and in their coupling with inferior frontal regions, underpin variation in children's emotional speech perception skills. They suggest that sensorimotor and higher-order evaluative processes interact to aid emotion recognition, and have implications for models of vocal emotional communication.