“…Unlike convolutional neural networks, whose architectural design principles are roughly inspired by biological vision [Lindsay, 2021], the design of current neural network language models is largely uninformed by psycholinguistics and neuroscience. And yet, there is an ongoing effort to adopt and adapt neural network language models to serve as computational hypotheses of how humans process language, making use of a variety of different architectures, training corpora, and training tasks [e.g., Wehbe et al., 2014, Toneva and Wehbe, 2019, Heilbron et al., 2020, Jain et al., 2020, Lyu et al., 2021, Schrimpf et al., 2021, Wilcox et al., 2021, Goldstein et al., 2022, Caucheteux and King, 2022]. We found that recurrent neural networks make markedly human-inconsistent predictions when pitted against transformer-based neural networks.…”