What is the neural basis of the human capacity for music? Neuroimaging has suggested some segregation between responses to music and other sounds, like speech. But it remains unclear whether finer-grained neural organization exists within the domain of music. Here, using intracranial recordings from the surface of the human brain, we demonstrate a selective response to music with vocals, distinct from responses to speech and to music more generally. Song selectivity was evident using both data-driven component modeling and single-electrode analyses, and could not be explained by standard acoustic features. These results suggest that music is represented by multiple neural populations selective for different aspects of music, at least one of which is specialized for the analysis of song.

Music is a quintessentially human capacity that is present in some form in nearly every society (Savage et al., 2015; Lomax, 2017; Mehr et al., 2018), and that differs substantially from its closest analogues in non-human animals (Patel, 2019). Researchers have long debated whether the human brain has neural mechanisms dedicated to music, and if so, what computations those mechanisms perform (Patel, 2012; Peretz et al., 2015). These questions have important implications for understanding the organization of auditory cortex (Leaver and Rauschecker, 2010; Norman-Haignere et al., 2015), the neural basis of sensory deficits such as amusia (Peterson and Pennington, 2015; Norman-Haignere et al., 2016; Peretz, 2016), the consequences of auditory expertise (Herholz and Zatorre, 2012), and the computational underpinnings of auditory behavior (Casey, 2017; Kell et al., 2018).
Neuroimaging studies have suggested that representations of music diverge from those of other sound categories in non-primary human auditory cortex: some non-primary voxels show partial selectivity for music compared with other categories (Leaver and Rauschecker, 2010; Fedorenko et al., 2012; Angulo-Perkins et al., 2014), and a recent study from our lab, which modeled voxels as weighted sums of multiple response profiles, inferred a component of the fMRI response with clear selectivity for music (Norman-Haignere et al., 2015). However, there are few reports of finer-grained organization within the domain of music (Casey, 2017), potentially due to the coarse resolution of fMRI. As a consequence, we know little about the neural code for music.

Here, we tested for finer-grained selectivity for music using intracranial recordings from the human brain (electrocorticography, or ECoG) (Fig 1A). We measured ECoG responses to a diverse set of 165 natural sounds, and submitted these responses to a novel decomposition method that is well-suited to the statistical structure of ECoG, revealing the dominant response components of auditory cortex. This data-driven method revealed multiple music- and speech-selective response components. Our key finding is that one of these components re...
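To make the component-modeling idea concrete (each recording site's response to the sound set expressed as a weighted sum of a small number of shared, nonnegative response profiles), here is a minimal sketch using plain NumPy and multiplicative-update non-negative matrix factorization. This is a generic illustration under invented dimensions, not the decomposition method actually used in the study, which was tailored to the statistical structure of ECoG; all variable names and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: responses of 50 electrodes to 165 sounds, generated as
# weighted sums of 3 nonnegative component response profiles (hypothetical sizes).
n_electrodes, n_sounds, n_components = 50, 165, 3
true_profiles = rng.random((n_components, n_sounds))     # component response profiles
true_weights = rng.random((n_electrodes, n_components))  # per-electrode component weights
D = true_weights @ true_profiles                         # electrode-by-sound response matrix

# Multiplicative-update NMF: factor D ~= W @ H with W, H >= 0.
W = rng.random((n_electrodes, n_components)) + 1e-3
H = rng.random((n_components, n_sounds)) + 1e-3
for _ in range(500):
    H *= (W.T @ D) / (W.T @ W @ H + 1e-9)  # update component profiles
    W *= (D @ H.T) / (W @ H @ H.T + 1e-9)  # update electrode weights

reconstruction_error = np.linalg.norm(D - W @ H) / np.linalg.norm(D)
print(f"relative reconstruction error: {reconstruction_error:.4f}")
```

The rows of `H` play the role of component response profiles across the sound set, and the rows of `W` give each electrode's loading on those components; selectivity is then read off from how a recovered profile differs across sound categories.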