To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Adopting a dynamical systems framework, we investigate frequency-specific and network-level selectivity and combine this with a statistical approach that clearly distinguishes between shared, preferred, and category-selective neural responses. Using intracranial EEG recordings from 18 epilepsy patients listening to natural, continuous speech and music, we show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data further reveal an absence of regional selectivity. Instead, neural selectivity is restricted to distributed, frequency-specific coherent oscillations, typical of spectral fingerprints. Our work addresses a longstanding debate and calls for a revised epistemological stance on how cognitive and brain functions are mapped.
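
The shared/preferred/selective distinction can be cast as a simple decision rule over per-channel response estimates. Below is a minimal Python sketch of one such classification, assuming trial-level response values for speech, music, and a baseline condition; the t-tests, the alpha threshold, and the `classify_response` helper are illustrative assumptions, not the statistics used in the study.

```python
import numpy as np
from scipy import stats

def classify_response(speech_vals, music_vals, baseline_vals, alpha=0.05):
    """Toy classification of a channel as shared, preferred, selective,
    or non-responsive, loosely following the shared/preferred/selective
    taxonomy described in the abstract. All tests and thresholds here
    are illustrative assumptions."""
    # Is the response significant for each domain relative to baseline?
    sig_speech = stats.ttest_ind(speech_vals, baseline_vals).pvalue < alpha
    sig_music = stats.ttest_ind(music_vals, baseline_vals).pvalue < alpha
    # Do the two domains differ from each other?
    differs = stats.ttest_ind(speech_vals, music_vals).pvalue < alpha

    if sig_speech and sig_music:
        # Responsive to both domains: "preferred" if one dominates,
        # otherwise "shared".
        return "preferred" if differs else "shared"
    if sig_speech or sig_music:
        # Responsive to exactly one domain: "selective" only if the two
        # domains also differ significantly from each other.
        return "selective" if differs else "unclassified"
    return "non-responsive"

# Example with synthetic trial-level response values
rng = np.random.default_rng(0)
print(classify_response(rng.normal(1.0, 1, 50),   # speech trials
                        rng.normal(0.9, 1, 50),   # music trials
                        rng.normal(0.0, 1, 50)))  # baseline trials
```

Requiring a significant between-domain difference before labeling a channel "selective" is the key design choice here: it prevents a response that merely fails to reach significance in one domain from being over-interpreted as category selectivity.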