Human sound localization results primarily from the processing of binaural differences in sound level and arrival time for locations in the horizontal plane (azimuth) and of spectral shape cues generated by the head and pinnae for positions in the vertical plane (elevation). The latter mechanism incorporates two processing stages: a spectral-to-spatial mapping stage and a binaural weighting stage that determines the contribution of each ear to perceived elevation as a function of sound azimuth. We demonstrated recently that binaural pinna molds virtually abolish the ability to localize sound-source elevation, but, after several weeks, subjects regained normal localization performance. It is not clear which processing stage underlies this remarkable plasticity, because the auditory system could have learned the new spectral cues separately for each ear (spatial-mapping adaptation) or for one ear only, while extending its contribution into the contralateral hemifield (binaural-weighting adaptation). To dissociate these possibilities, we applied a long-term monaural spectral perturbation in 13 subjects. Our results show that, in eight experiments, listeners learned to localize accurately with new spectral cues that differed substantially from those provided by their own ears. Interestingly, five subjects, whose spectral cues were not sufficiently perturbed, never attained stable localization performance. Our findings indicate that the analysis of spectral cues may involve a correlation process between the sensory input and a stored spectral representation of the subject's ears and that learning acts predominantly at a spectral-to-spatial mapping level rather than at the level of binaural weighting.
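To make the two proposed processing stages concrete, the sketch below illustrates one plausible reading of the scheme the abstract describes: a spectral-to-spatial mapping implemented as a correlation between the sensory input spectrum and stored per-ear spectral templates, followed by an azimuth-dependent binaural weighting of the two monaural elevation estimates. This is a minimal toy model, not the authors' implementation; the random template spectra, the sigmoid form of the weighting function, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

RNG = np.random.default_rng(0)
N_BANDS = 32                        # hypothetical number of frequency bands
ELEVATIONS = np.arange(-60, 61, 5)  # candidate elevations (deg), assumed grid

# Hypothetical stored spectral representation: one template spectrum
# per (ear, elevation). Random placeholders stand in for measured
# pinna-related transfer functions.
templates = {
    ear: RNG.standard_normal((len(ELEVATIONS), N_BANDS))
    for ear in ("left", "right")
}

def decode_elevation(sensory_spectrum, ear):
    """Spectral-to-spatial mapping: choose the elevation whose stored
    template correlates best with the incoming sensory spectrum."""
    r = np.array([np.corrcoef(sensory_spectrum, row)[0, 1]
                  for row in templates[ear]])
    return ELEVATIONS[np.argmax(r)]

def binaural_weight(azimuth_deg, slope=0.1):
    """Assumed sigmoid weight for the left ear as a function of azimuth:
    near 1 for far-left sources (negative azimuth), 0.5 straight ahead,
    near 0 for far-right sources."""
    return 1.0 / (1.0 + np.exp(slope * azimuth_deg))

def perceived_elevation(spec_left, spec_right, azimuth_deg):
    """Binaural weighting stage: combine the two monaural elevation
    estimates according to the azimuth-dependent weight."""
    w = binaural_weight(azimuth_deg)
    eps_l = decode_elevation(spec_left, "left")
    eps_r = decode_elevation(spec_right, "right")
    return w * eps_l + (1.0 - w) * eps_r

# Example: a source at azimuth -40 deg (left hemifield) whose left-ear
# spectrum matches the stored template for +20 deg elevation, so the
# left ear dominates the perceived elevation.
spec_l = templates["left"][ELEVATIONS.tolist().index(20)]
spec_r = RNG.standard_normal(N_BANDS)
print(perceived_elevation(spec_l, spec_r, azimuth_deg=-40.0))
```

Under this toy scheme, the two adaptation hypotheses correspond to different loci of change: spatial-mapping adaptation would update the `templates` entries for the perturbed ear, whereas binaural-weighting adaptation would instead shift the `binaural_weight` function so the unperturbed ear's estimate extends into the contralateral hemifield.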