Music is core to human experience across cultures. Multiple brain regions are engaged in music perception, but the neural dynamics of music processing remain poorly defined. We applied predictive modeling tools to intracranial EEG data recorded from 29 patients who passively listened to a song (Another Brick in the Wall, Part 1, Pink Floyd), focusing on high-frequency activity (HFA; 70-150 Hz) as a marker of local neural population activity. Encoding models characterized the spectrotemporal receptive field (STRF) of each electrode, and decoding models investigated the population-level representation of the song. Based on the STRFs, we confirmed a central role of the bilateral superior temporal gyri (STG) in music perception, with additional involvement of bilateral sensorimotor cortices (SMC) and inferior frontal gyri (IFG). We also observed a right-hemispheric preference for music perception. Using independent component analysis (ICA) and analysis of temporal modulations, we identified cortical regions tuned to specific musical elements, including vocals, lead guitar notes, and rhythm guitar patterns. An ablation analysis selectively removed music-responsive electrodes from decoding models to assess the contribution of anatomical and functional regions to the representation of the song's acoustics. Ablating electrodes from either the left or the right hemisphere reduced decoding accuracy, and the results confirmed a preference of the right STG for music perception. Finally, we report, to our knowledge, the first attempt at reconstructing the song from neural activity using both linear and nonlinear decoding models, and we discuss methodological factors that affect decoding accuracy.
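
As a hedged illustration of the encoding approach summarized above, the sketch below fits a lagged ridge regression predicting one electrode's HFA from the song's auditory spectrogram, with the fitted weights reshaped into a frequency-by-lag STRF. The data are synthetic placeholders, and the array sizes, lag window, and regularization strength are assumed values for illustration only; the study's actual preprocessing and model fitting may differ.

```python
# Minimal sketch of a lagged linear (ridge) encoding model for STRF estimation.
# All data and parameter values below are synthetic/assumed, not the study's pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_samples, n_freqs, n_lags = 5000, 32, 75       # assumed: 100 Hz frames, 750 ms of lags
spectrogram = rng.standard_normal((n_samples, n_freqs))  # auditory spectrogram of the song
hfa = rng.standard_normal(n_samples)                      # one electrode's high-frequency activity

# Build the lagged design matrix: each row holds the preceding n_lags spectrogram frames.
X = np.zeros((n_samples, n_freqs * n_lags))
for lag in range(n_lags):
    X[lag:, lag * n_freqs:(lag + 1) * n_freqs] = spectrogram[:n_samples - lag]

# Chronological split (no shuffling) to respect the temporal structure of the stimulus.
X_train, X_test, y_train, y_test = train_test_split(X, hfa, test_size=0.2, shuffle=False)

model = Ridge(alpha=1.0).fit(X_train, y_train)   # L2-regularized linear fit
strf = model.coef_.reshape(n_lags, n_freqs)      # weights form the lag x frequency STRF
print("held-out r:", np.corrcoef(model.predict(X_test), y_test)[0, 1])
```

Decoding models follow the same lagged linear framework with the roles reversed, predicting each spectrogram frequency bin from the lagged HFA of many electrodes at once; the nonlinear decoders mentioned above replace the ridge regression with a more flexible regressor.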