Over the last decade, deep neural networks (DNNs) have transformed the state of the art in artificial intelligence. In domains like language production and reasoning, long considered uniquely human abilities, models like GPT-4 have proven capable of shockingly human-like performance. However, in contrast to classical symbolic models, neural networks can be inscrutable even to their designers, making it unclear what significance, if any, they have for theories of human cognition. Two extreme reactions are common. Neural network enthusiasts argue that, because the inner workings of DNNs do not seem to resemble any of the traditional constructs of psychological or linguistic theory, their success renders these theories obsolete and motivates a radical paradigm shift. Neural network skeptics instead take this inability to interpret DNNs in psychological terms to mean that their success is irrelevant to psychological science. In this paper, we review recent work that suggests that, in fact, the internal mechanisms of DNNs often can be interpreted in the functional terms characteristic of psychological explanations. We argue that this undermines the shared assumption of both extremes and opens the door for DNNs to inform theories of cognition and its development.