2017
DOI: 10.3389/fpsyg.2017.01179

Music of the 7Ts: Predicting and Decoding Multivoxel fMRI Responses with Acoustic, Schematic, and Categorical Music Features

Abstract: Underlying the experience of listening to music are parallel streams of auditory, categorical, and schematic qualia, whose representations and cortical organization remain largely unresolved. We collected high-field (7T) fMRI data in a music listening task, and analyzed the data using multivariate decoding and stimulus-encoding models. Twenty subjects participated in the experiment, which measured BOLD responses evoked by naturalistic listening to twenty-five music clips from five genres. Our first analysis ap…
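To make the abstract's "stimulus-encoding model" concrete, here is a minimal sketch, not the authors' pipeline: ridge regression maps per-clip music features to per-voxel BOLD responses, and held-out prediction correlation scores the model. All array sizes, the synthetic data, and the train/test split are hypothetical placeholders.

import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_clips, n_features, n_voxels = 25, 40, 500     # hypothetical sizes
X = rng.standard_normal((n_clips, n_features))  # stimulus features per clip
W = rng.standard_normal((n_features, n_voxels)) # synthetic "true" weights
Y = X @ W + rng.standard_normal((n_clips, n_voxels))  # simulated BOLD responses

# Fit a multi-output ridge model on 20 clips, hold out 5 for evaluation.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=5, random_state=0)
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)

# Score each voxel by the correlation between predicted and observed responses.
Y_hat = model.predict(X_te)
r = [np.corrcoef(Y_hat[:, v], Y_te[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out voxel correlation: {np.median(r):.2f}")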

Cited by 26 publications (27 citation statements) · References 29 publications
“…Ghaemmaghami and Sebe (2017) used magnetoencephalogram and electroencephalogram datasets to classify musical stimuli as either pop or rock using SVM (Ghaemmaghami & Sebe, 2016). Further, Casey (2017) and Sengupta et al. (2018) used fMRI data with five distinct music genres, followed by activity‐based multi‐class classification using SVM.…”
Section: Discussion
confidence: 99%
“…However, there remains considerable uncertainty as to how such genre categories are perceived from complex auditory stimuli and how the human brain subserves this categorization. Neuroimaging studies have decoded music genres from brain activity using support vector machines (SVM) (Casey, 2017; Ghaemmaghami & Sebe, 2016; Sengupta et al., 2018); however, these studies did not clarify how cortical representations of music genres contribute to genre classification.…”
Section: Introduction
confidence: 99%
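The multi-class SVM decoding these statements describe can be sketched as follows, assuming trial-wise multivoxel patterns X and genre labels y; the data here are synthetic and the trial counts, voxel count, and cross-validation scheme are assumptions, not the cited studies' exact setup.

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(1)
genres = ["ambient", "country", "metal", "rocknroll", "symphonic"]
y = np.repeat(genres, 25)               # hypothetical: 25 trials per genre
X = rng.standard_normal((len(y), 800))  # hypothetical multivoxel patterns

# Standardize voxels, then fit a linear SVM; stratified 5-fold CV estimates
# decoding accuracy against a 1-in-5 chance level.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 1/{len(genres)})")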
“…Data were taken from a published dataset 2 , which was repeatedly analyzed previously 4,5 and is publicly available from the studyforrest.org project: 20 participants passively listened to five natural, stereo, high-quality music stimuli (6 s duration; 44.1 kHz sampling rate) from each of five different musical genres: 1) Ambient, 2) Roots Country, 3) Heavy Metal, 4) 50s Rock’n’Roll, and 5) Symphonic, while fMRI data were recorded in a 7 Tesla Siemens scanner (1.4 mm isotropic voxel size, TR = 2 s, matrix size 160 × 160, 36 axial slices, 10% interslice gap). fMRI data were scanner-side corrected for spatial distortions 6 .…”
Section: Stimulus and fMRI Data
confidence: 99%
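The design parameters quoted above imply a small amount of arithmetic worth making explicit; a short sketch, using only values stated in the citation statement (the variable names are my own):

N_GENRES, CLIPS_PER_GENRE = 5, 5
CLIP_DUR_S, TR_S = 6.0, 2.0

n_stimuli = N_GENRES * CLIPS_PER_GENRE  # 25 distinct music clips in total
vols_per_clip = CLIP_DUR_S / TR_S       # each 6 s clip spans 3 fMRI volumes
print(n_stimuli, vols_per_clip)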
“…Researchers have long debated whether the human brain has neural mechanisms dedicated to music, and if so, what computations those mechanisms perform (Patel, 2012; Peretz et al., 2015). These questions have important implications for understanding the organization of auditory cortex (Leaver and Rauschecker, 2010; Norman-Haignere et al., 2015), the neural basis of sensory deficits such as amusia (Peterson and Pennington, 2015; Norman-Haignere et al., 2016; Peretz, 2016), the consequences of auditory expertise (Herholz and Zatorre, 2012), and the computational underpinnings of auditory behavior (Casey, 2017; Kell et al., 2018). Neuroimaging studies have suggested that representations of music diverge from those of other sound categories in non-primary human auditory cortex: some non-primary voxels show partial selectivity for music compared with other categories (Leaver and Rauschecker, 2010; Fedorenko et al., 2012; Angulo-Perkins et al., 2014), and recent studies from our lab, which modeled voxels as weighted sums of multiple response profiles, inferred a component of the fMRI response with clear selectivity for music (Norman-Haignere et al., 2015; Boebinger et al., 2020) that was distinct from nearby speech-selective responses.…”
confidence: 99%