“…The current work was motivated by the broad scientific goal of discovering models that quantitatively explain the neuronal mechanisms underlying primate invariant object recognition behavior. To this end, previous work had shown that specific artificial neural network models (ANNs), drawn from a large family of deep convolutional neural networks (DCNNs) and optimized to achieve high levels of object categorization performance on large-scale image sets, capture a large fraction of the variance in primate visual recognition behaviors (Rajalingham et al., 2015; Jozwik et al., 2016; Kheradpisheh et al., 2016; Kubilius et al., 2016; Peterson et al., 2016; Wallis et al., 2017), and that the internal hidden neurons of those same models also predict a large fraction of the image-driven response variance of brain activity at multiple stages of the primate ventral visual stream (Yamins et al., 2013; Cadieu et al., 2014; Khaligh-Razavi and Kriegeskorte, 2014; Yamins et al., 2014; Güçlü and van Gerven, 2015; Cichy et al., 2016; Hong et al., 2016; Seibert et al., 2016; Cadena et al., 2017; Wen et al., 2017). For clarity, we here refer to this sub-family of models as DCNN-IC (to denote ImageNet-Categorization training), so as to distinguish them from all possible models in the DCNN family and, more broadly, from the super-family of all ANNs.…”