In the present study we examined how aspects of a person's life, such as the amount of stress experienced, level of optimism, and amount of musical training received, were related to their motives for listening to music (for emotional regulation and/or cognitive stimulation) and their preferences among types of music. Participants (N = 154) completed surveys measuring stress, optimism, uses of music, and music preferences. Results indicated that high stress ratings predicted the use of music for emotional regulation. Optimistic individuals also tended to use music emotionally, suggesting that stress and optimism, though strongly negatively correlated, influence uses of music independently. People with more musical training followed a different pattern: although they had higher stress ratings and lower optimism ratings overall, they tended to listen to music for cognitive reasons more than for emotional regulation. These findings further our understanding of the variables underlying individual differences in uses of music and music preferences.
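The independence claim above implies a regression in which stress and optimism predict emotional use of music simultaneously. A minimal sketch of that kind of analysis using statsmodels, with hypothetical column and file names (the abstract does not specify the authors' exact procedure):

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey data; column names are illustrative, not from the study.
df = pd.read_csv("music_survey.csv")

# Regress emotional use of music on stress and optimism together, so each
# coefficient reflects that predictor's contribution holding the other fixed.
X = sm.add_constant(df[["stress", "optimism"]])
model = sm.OLS(df["emotional_use"], X).fit()

# Both coefficients being positive and significant would match the reported
# pattern of independent effects despite the negative stress-optimism correlation.
print(model.summary())
```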
The current study examined the relationship between individual differences in uses of music (i.e., motives for listening to music), music preferences (for different genres), and positive affect (PA) and negative affect (NA), thus linking two areas of past research into a more comprehensive model. A sample of 193 South African adolescents (ages 12–17) completed measures of the above constructs, and data were analyzed via correlations and structural equation modeling (SEM). Significant correlations between affect and uses of music were followed up with SEM, which supported a model in which PA influenced background and cognitive uses of music, NA influenced emotional use of music, and greater uses of music led to stronger preferences for music styles. Directions for future research on uses of music and music preferences are discussed.
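The supported path model can be written down directly in lavaan-style syntax. A minimal sketch using semopy, a Python SEM library; the variable names and data file are assumptions, since the abstract only describes the paths, not the measurement details:

```python
import pandas as pd
from semopy import Model

# Structural paths as described in the abstract; names are illustrative.
desc = """
background_use ~ PA
cognitive_use  ~ PA
emotional_use  ~ NA
preference     ~ background_use + cognitive_use + emotional_use
"""

df = pd.read_csv("adolescent_music.csv")  # assumed data file
model = Model(desc)
model.fit(df)

# Path estimates, standard errors, and p-values for each regression above.
print(model.inspect())
```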
An unresolved issue in speech perception concerns whether top-down linguistic information influences perceptual responses. We addressed this issue using the event-related-potential technique in two experiments that measured cross-modal sequential-semantic priming effects on the auditory N1, an index of acoustic-cue encoding. Participants heard auditory targets (e.g., “potatoes”) following associated visual primes (e.g., “MASHED”), neutral visual primes (e.g., “FACE”), or a visual mask (e.g., “XXXX”). Auditory targets began with voiced (/b/, /d/, /g/) or voiceless (/p/, /t/, /k/) stop consonants, an acoustic difference known to yield differences in N1 amplitude. In Experiment 1 (N = 21), semantic context modulated responses to upcoming targets, with smaller N1 amplitudes for semantic associates. In Experiment 2 (N = 29), semantic context changed how listeners encoded sounds: Ambiguous voice-onset times were encoded similarly to the voicing end point elicited by semantic associates. These results are consistent with an interactive model of spoken-word recognition that includes top-down effects on early perception.
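For readers unfamiliar with ERP analysis, a condition-wise N1 amplitude comparison of the kind described could be sketched with MNE-Python. The file name, event codes, channel handling, and time window below are assumptions for illustration, not the authors' pipeline:

```python
import mne

# Assumed preprocessed EEG recording with triggers marking prime condition.
raw = mne.io.read_raw_fif("subject01_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"associated": 1, "neutral": 2, "mask": 3}  # assumed trigger codes

# Epoch around auditory target onset; the N1 peaks roughly 80-120 ms post-onset.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5,
                    baseline=(None, 0), preload=True)

# Average within each prime condition and extract mean amplitude in the N1 window.
for cond in event_id:
    evoked = epochs[cond].average()
    n1 = evoked.copy().crop(0.08, 0.12).data.mean()
    # A smaller (less negative) value for "associated" would match the
    # reduced N1 reported for semantic associates in Experiment 1.
    print(cond, n1)
```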