“…Thus, like children, iCub integrates visual, auditory, tactile and proprioceptive information to generate behaviour, for example auditory and visual information in a word learning task (although the modalities that contribute to a given simulation are decided a priori by the modeller). Thus far, iCub has captured a range of developmental phenomena, for example motor development (Tikhanoff, Cangelosi, & Metta, 2011), visuomotor development (Shaw, Law, & Lee, 2014), intrinsically motivated exploration (Maestre, Cully, Gonzales, & Doncieux, 2015), affordance-based verb learning (Marocco, Cangelosi, Fischer, & Belpaeme, 2010), and spatially grounded noun learning (Morse et al., 2015; for a review see Cangelosi & Schlesinger, 2015).…”