Understanding speech in noisy environments, especially in communication situations with two or more competing speakers, is a challenging task. Despite their ongoing improvement, assistive listening devices and speech processing approaches still do not perform well enough in noisy multi-talker environments, as they fail to restore the intelligibility of a speaker of interest among competing sound sources. We developed a real-time-capable deep learning algorithm that can extract the voice of a target speaker, as indicated by a short enrollment utterance, from a mixture of multiple concurrent speakers in background noise. Objective evaluation with computational metrics demonstrated that the algorithm successfully extracts the target speaker from noisy multi-talker mixtures. This was achieved with a single algorithm that generalized to unseen speakers, to different numbers of speakers and relative speaker levels, to other speech corpora, and to an unseen language. A double-blind sentence recognition test on mixtures of one, two, and three speakers in restaurant noise was conducted with normal-hearing listeners and indicated significant intelligibility improvements with the speaker-informed model. In conclusion, we demonstrate that deep learning-based target speaker extraction can enhance speech perception in noisy multi-talker environments where uninformed speech enhancement methods fail.
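
To make the speaker-informed extraction idea concrete, the sketch below shows one common way such systems are structured: an enrollment utterance is mapped to a fixed-size speaker embedding, which then conditions a mask-estimation network applied to the mixture. This is a minimal illustration under assumed design choices (PyTorch, GRU encoders, magnitude-spectrogram masking, and all layer names and dimensions are hypothetical), not the architecture described in this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpeakerEncoder(nn.Module):
    """Maps a short enrollment utterance to a fixed-size speaker embedding."""
    def __init__(self, n_mels=40, emb_dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, emb_dim, batch_first=True)

    def forward(self, enroll_mels):              # (batch, frames, n_mels)
        _, h = self.rnn(enroll_mels)              # h: (1, batch, emb_dim)
        return F.normalize(h.squeeze(0), dim=-1)  # unit-norm speaker embedding


class TargetSpeakerExtractor(nn.Module):
    """Estimates a time-frequency mask for the target speaker, conditioned on
    the speaker embedding concatenated to every mixture frame."""
    def __init__(self, n_freq=257, emb_dim=128, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_freq + emb_dim, hidden, batch_first=True)
        self.mask = nn.Sequential(nn.Linear(hidden, n_freq), nn.Sigmoid())

    def forward(self, mix_spec, spk_emb):         # mix_spec: (batch, frames, n_freq)
        emb = spk_emb.unsqueeze(1).expand(-1, mix_spec.size(1), -1)
        h, _ = self.rnn(torch.cat([mix_spec, emb], dim=-1))
        return self.mask(h) * mix_spec            # masked magnitude spectrogram


# Toy usage with random tensors standing in for real features
enc, ext = SpeakerEncoder(), TargetSpeakerExtractor()
enroll = torch.randn(1, 200, 40)                  # enrollment log-mel features
mixture = torch.randn(1, 500, 257).abs()          # mixture magnitude spectrogram
target_est = ext(mixture, enc(enroll))            # estimated target spectrogram
print(target_est.shape)                           # torch.Size([1, 500, 257])
```

The key design point this sketch illustrates is that the same extraction network serves any target speaker: only the enrollment embedding changes, which is what allows a single model to generalize to speakers unseen during training.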