“…These have been shown to improve the intelligibility of speech in stationary noise for CI users (Loizou et al., 2005; Dawson et al., 2011; Mauger et al., 2012) and for NH listeners using CI simulations (Bolner et al., 2016; Lai et al., 2018). Data-driven algorithms using machine-learning (ML) techniques, such as deep neural networks (DNNs) or Gaussian mixture models (GMMs), have been successful for speech in non-stationary, multi-talker babble, achieving significant SI improvements for NH (Kim et al., 2009; Bentsen et al., 2018), hearing-impaired (HI; Healy et al., 2013; Healy et al., 2015; Healy et al., 2019; Chen et al., 2016; Monaghan et al., 2017; Bramsløw et al., 2018), and CI listeners (Hu and Loizou, 2010; Goehring et al., 2017; Lai et al., 2018). Improvements of more recent approaches over earlier ones have been driven mainly by two factors: the use of more powerful DNN-based regression systems instead of classification systems, and the use of a ratio mask instead of a binary mask as the training target (Madhu et al., 2013; Bentsen et al., 2018).…”
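The distinction between the two training targets mentioned above can be made concrete with a minimal sketch. The code below computes an ideal binary mask (IBM) and an ideal ratio mask (IRM) from toy power spectrograms; the random arrays, spectrogram shape, and 0 dB SNR criterion are illustrative assumptions, not values from the cited studies.

```python
import numpy as np

# Hypothetical toy power spectrograms (freq bins x time frames).
# In practice these come from STFTs of the clean speech and the noise.
rng = np.random.default_rng(0)
speech_pow = rng.random((257, 100)) + 1e-8  # |S|^2, clean-speech power
noise_pow = rng.random((257, 100)) + 1e-8   # |N|^2, noise power

# Ideal ratio mask (IRM): a soft gain in [0, 1] for each
# time-frequency unit, proportional to the local speech share.
irm = speech_pow / (speech_pow + noise_pow)

# Ideal binary mask (IBM): keep a unit only if its local SNR exceeds
# a criterion (0 dB here, an assumed threshold).
snr_db = 10.0 * np.log10(speech_pow / noise_pow)
ibm = (snr_db > 0.0).astype(float)
```

A regression system trained on the IRM predicts continuous gains, whereas a classification system trained on the IBM makes a hard keep/discard decision per unit; the soft gains avoid the musical-noise artifacts that hard binary decisions can introduce.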