“…We investigate whether and how NNs, as a proxy for cognitive space wherein learning occurs, reveal children's manifestation of the Agent‐First strategy in comprehension. We develop four NN models—Word2Vec (Mikolov et al., 2013), Long Short‐Term Memory (LSTM; Hochreiter & Schmidhuber, 1997), Bidirectional Encoder Representations from Transformers (BERT; Devlin et al., 2018), and Generative Pre‐trained Transformer 2 (GPT‐2; Radford et al., 2019)—and measure their classification performance on the same stimuli used in Shin (2021). Given the special status of this strategy in child language development as a window to the interface between linguistic knowledge and domain‐general factors, scrutinising the extent to which deep‐learning algorithms capture children's language behaviour with respect to this comprehension bias is expected to reveal the explainability of artificial intelligence for child language, and more fundamentally, for (the developing nature of) a child processor.…”