“…First, although the SIGMORPHON 2020 datasets are balanced by paradigm cell, real datasets are Zipfian, with sparse coverage of cells (Blevins et al., 2017; Lignos and Yang, 2018). For languages with large paradigms, the model thus requires the capacity to fill cells for which no exemplar can be retrieved, perhaps using a variant of adaptive source selection (Erdmann et al., 2020; Kann et al., 2017a). Second, the similar-exemplar model performs better in one-shot transfer experiments, but is hampered in the su- … Finally, since the memory-based architecture is cognitively inspired, it might be adapted as a cognitive model of language learning in contact situations.…”