“…We demonstrate in this study that recurrent neural network models are closer to humans than feedforward ones, irrespective of the grammar's level in the Chomsky hierarchy. This result endorses recurrent models as the more plausible cognitive architecture underlying language, pointing to an essential role of recursion in grammar learning (Rohrmeier, Dienes, Guo, & Fu, 2014). However, it may be far-fetched to generalize this conclusion to natural language acquisition (Corballis, 2014; Jackendoff, 2011; Fitch & Friederici, 2012; Vergauwen, 2014): the Chomsky hierarchy, despite being an excellent reference, does not encompass all natural grammars (Jäger & Rogers, 2012), and does not fully reflect human cognitive complexity (Öttl et al., 2015).…”