During continuous speech perception, patterns of endogenous neural activity become time-locked to acoustic stimulus features, such as the speech amplitude envelope. This speech-brain coupling can be decoded using non-invasive brain imaging techniques, including electroencephalography (EEG). Such methods may offer clinical utility as an objective measure of stimulus encoding by the brain, for example in cochlear implant (CI) listening. Yet the CI-transmitted speech signal is severely spectrally degraded, and whether it remains amenable to neural decoding is unknown. Furthermore, the interplay between acoustic and linguistic factors may lead to top-down modulation of perception, challenging potential audiological applications. We assessed neural decoding of the speech envelope under spectral degradation with EEG in acoustically hearing listeners (n = 38; 18-35 years old) using vocoded speech. Additionally, we dissociated sensory from higher-order processing by employing intelligible (English) and unintelligible (Dutch) stimuli. Subject-specific and group decoders were trained to reconstruct the speech envelope from held-out EEG, with decoder significance determined via random permutation testing. Whereas speech envelope reconstruction did not vary with acoustic clarity, intelligible speech was associated with better decoding accuracy overall. Results were similar across subject-specific and group analyses, with less consistent effects of spectral degradation in group decoding. Permutation tests revealed possible differences in decoder statistical significance across experimental conditions. Overall, although robust neural decoding was observed at the group level, within-participant variability would most likely preclude clinical use of such a measure to differentiate levels of spectral degradation and intelligibility on an individual basis.