Artificial intelligence (AI) based synthesized speech has become almost human-like, ubiquitous in everyday life (e.g., smartphones, grocery self-checkouts), and relatively easy to synthesize. This opens opportunities to use AI speech in research and clinical areas, such as hearing sciences, audiology, and speech pathology, where recordings of speech materials by voice actors can be time- and cost-intensive. However, much research thus far has focused on technological developments towards more human-like voices evaluated by younger adults. How older adults perceive AI speech remains unclear. Using Google’s WaveNet text-to-speech synthesizer, the current study explores whether AI speech can be used to investigate common speech-in-noise perception phenomena in younger and older adults. Speech intelligibility was recorded for human and synthesized speech masked by a modulated or an unmodulated multi-talker babble noise. For both human and AI speech, intelligibility was better for the modulated than the unmodulated masker (masking release), and this masking-release benefit was reduced in older adults. Masking-release effects were comparable between human and AI speech, suggesting that modern AI speech could be useful for hearing and speech research. The data further suggest that, compared to younger adults, older adults recognize the presentation of AI speech less frequently, rate AI speech as more natural, and are less able to discriminate between human and AI speech. Research on speech perception in older adults may thus especially benefit from modern AI-based synthesized speech because, to them, AI speech sounds much as if spoken by a human.