“…They may also not update their prediction on every trial, unlike the optimal solution (Gallistel et al., 2014; Khaw et al., 2017). Finally, there is substantial interindividual variability, which does not exist in the optimal solution (Khaw et al., 2021; Nassar et al., 2010, 2012; Prat-Carrabin et al., 2021). In the future, these suboptimalities could be explored using our networks by making them suboptimal in three ways (among others): by stopping training before quasi-optimal performance is reached (Caucheteux & King, 2021; Orhan & Ma, 2017); by constraining the size of the network or its weights, with hard constraints or with regularization penalties (Mastrogiuseppe & Ostojic, 2017; Sussillo et al., 2015); or by altering the network in some way, such as pruning some of the units or connections (Blalock et al., 2020; Chechik et al., 1999; LeCun et al., 1990; Srivastava et al., 2014) or injecting random noise into the activity (Findling et al., 2021; Findling & Wyart, 2020; Legenstein & Maass, 2014).…”
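To make the three proposed manipulations concrete, the following minimal sketch shows how each could be applied to a small recurrent network. It is an illustration under assumed choices, not the paper's implementation: the class name `NoisyRNN` and all sizes, noise levels, penalty strengths, and epoch counts are hypothetical, and the data tensors are placeholders for whatever prediction task the networks are trained on.

```python
# Sketch (not the authors' code) of three ways to make a trained RNN
# suboptimal: (1) early stopping, (2) capacity/weight constraints,
# (3) pruning connections or injecting noise into the activity.
import torch
import torch.nn as nn

class NoisyRNN(nn.Module):
    def __init__(self, n_in=1, n_hidden=40, n_out=1, noise_sd=0.0):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)
        self.noise_sd = noise_sd  # (3b) random noise added to hidden activity

    def forward(self, x):
        h, _ = self.rnn(x)
        if self.noise_sd > 0:
            h = h + self.noise_sd * torch.randn_like(h)
        return self.readout(h)

# (2) capacity constraint: a deliberately small hidden layer, plus an
#     L2 regularization penalty on the weights via weight_decay.
model = NoisyRNN(n_hidden=10, noise_sd=0.1)
opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
loss_fn = nn.MSELoss()

x = torch.randn(64, 50, 1)  # placeholder observations (batch, time, 1)
y = torch.randn(64, 50, 1)  # placeholder targets, e.g. optimal predictions

# (1) early stopping: train for fewer epochs than quasi-optimality requires.
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# (3a) pruning: permanently zero out a random subset of recurrent weights.
with torch.no_grad():
    w = model.rnn.weight_hh_l0
    mask = (torch.rand_like(w) > 0.2).float()  # drop ~20% of connections
    w.mul_(mask)
```

One appeal of framing the manipulations this way is that each corresponds to a single, independently tunable knob (epoch count, hidden size or `weight_decay`, pruning fraction, `noise_sd`), so the resulting behavioral deviations could in principle be compared against human suboptimalities one factor at a time.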