Children with hearing loss (HL) remain at risk for poorer language abilities than children with normal hearing (NH) despite targeted interventions, and the reasons for these differences remain unclear. In NH children, research suggests that speech discrimination is related to language outcomes, yet little is known about speech discrimination in children with HL under 2 years of age. We used a vowel contrast, /a-i/, and a consonant-vowel contrast, /ba-da/, to examine speech discrimination in 47 NH infants and 40 infants with HL. At a mean age of 3 months, EEG recorded from 11 scalp electrodes was used to compute the time-frequency mismatched response (TF-MMRSE) to the contrasts; at a mean age of 9 months, behavioral discrimination was assessed using a head-turn task. A machine learning (ML) classifier was trained to predict behavioral discrimination from an arbitrary TF-MMRSE given as input, achieving accuracies of 73% for exact classification and 92% for classification within a distance of one class. Linear fits revealed a robust relationship between the TF-MMRSE and behavioral discrimination regardless of hearing status or speech contrast. TF-MMRSE responses in the delta (1-3.5 Hz), theta (3.5-8 Hz), and alpha (8-12 Hz) bands explained the most variance in behavioral task performance. Our findings demonstrate the feasibility of using the TF-MMRSE to predict later behavioral speech discrimination.
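To make the prediction setup concrete, the sketch below illustrates the general idea of scoring a classifier on band-limited time-frequency features by both exact and within-one-class accuracy. It is a minimal, hedged illustration and not the authors' pipeline: the synthetic epochs, the Welch band-power features, the four-level outcome, and the RandomForestClassifier are all assumptions introduced here for demonstration only.

```python
# Minimal sketch (not the authors' implementation): band power in the delta,
# theta, and alpha bands named in the abstract feeds a classifier of an
# ordinal behavioral-discrimination outcome, scored exactly and within one class.
# Sampling rate, epoch length, data, and model are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
fs = 250                              # assumed EEG sampling rate (Hz)
n_infants, n_samples = 87, fs * 2     # 47 NH + 40 HL infants; 2-s epochs

# Synthetic "TF-MMRSE-like" epochs with low-frequency power loosely tied to a
# hypothetical 4-level behavioral discrimination class.
behavior_class = rng.integers(0, 4, size=n_infants)
epochs = rng.standard_normal((n_infants, n_samples))
t = np.arange(n_samples) / fs
for i, c in enumerate(behavior_class):
    epochs[i] += 0.5 * c * np.sin(2 * np.pi * 2.5 * t)   # delta-band component

# Features: mean Welch power in the delta, theta, and alpha bands.
bands = {"delta": (1, 3.5), "theta": (3.5, 8), "alpha": (8, 12)}
freqs, psd = welch(epochs, fs=fs, nperseg=fs)
features = np.column_stack([
    psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in bands.values()
])

# Cross-validated predictions, scored exactly and within a distance of one class.
pred = cross_val_predict(RandomForestClassifier(random_state=0),
                         features, behavior_class, cv=5)
exact = np.mean(pred == behavior_class)
within_one = np.mean(np.abs(pred - behavior_class) <= 1)
print(f"exact accuracy: {exact:.2f}, within-one-class accuracy: {within_one:.2f}")
```

The within-one-class score simply relaxes exact classification by one ordinal step, mirroring the abstract's distinction between 73% exact and 92% within-one-class accuracy.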