Understanding how the brain represents speech sounds is necessary to delineate the mapping between the acoustic signal and words stored in long-term memory. Spoken word recognition models and phonological theory propose that abstract features are linguistic units central to speech processing. The goal of the current study was to determine whether the brain represents abstract phonological features. English phonology functionally codes stops and fricatives as voiced or voiceless. Stops and fricatives, however, encode voicing with distinct phonetic cues: fricatives use a spectral cue, whereas stops use a temporal cue. Participants listened to syllables in a many-to-one oddball design while their EEG was recorded. A critical design element was the presence of inter-category variation within the standards. In one block, both voiceless stops and fricatives served as the standards; in the other block, both voiced stops and fricatives served as the standards. A many-to-one relationship therefore holds only if the standards are grouped together by their shared voicing feature. Oscillatory activity was also measured. Results show an MMN effect in the voiceless-standards block (an asymmetric MMN) and increased beta-band oscillatory power prior to stimulus onset for the voiceless standards. These findings suggest that (i) the brain constructed an auditory memory trace of the standards based on the shared [voiceless] feature, which is only functionally defined, (ii) voiced consonants are underspecified, and (iii) features can serve as a basis for predictive processing. Taken together, these results indicate that the brain can functionally code distinct phonetic cues together and that abstract features can be used to parse the continuous acoustic signal.