In this response to commentators, I agree with those who suggested that the distinction between exemplar- and abstraction-based accounts is something of a false dichotomy. I therefore move to an abstractions-made-of-exemplars account under which (a) we store all the exemplars that we hear (subject to attention, decay, interference, etc.) but (b) in the service of language use, re-represent these exemplars at multiple levels of abstraction, as simulated by computational neural-network models such as BERT, ELMo and GPT-3. Whilst I maintain that traditional linguistic abstractions (e.g. a DETERMINER category; SUBJECT VERB OBJECT word order) are no more than human-readable approximations of the types of abstraction formed by both human and artificial multi-layer networks, I express the hope that the abstractions-made-of-exemplars position can point the way towards a truce in the language acquisition wars: We were all right all along, just focusing on different levels of abstraction.
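By way of illustration, the following minimal sketch (assuming the Hugging Face transformers library and PyTorch, with bert-base-uncased as a purely illustrative model choice) extracts a representation of one and the same utterance, a single stored exemplar, from every layer of a pretrained BERT model, showing in miniature what re-representation at multiple levels of abstraction looks like in such networks.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative model choice; any multi-layer network would make the
# same conceptual point.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased",
                                  output_hidden_states=True)
model.eval()

# A single stored "exemplar": one concrete utterance.
inputs = tokenizer("The dog chased the cat", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple holding the input embeddings plus one
# tensor per transformer layer, each of shape (batch, tokens, hidden).
# Layer 0 stays closest to the raw exemplar; higher layers are
# progressively more abstract re-representations of the same input.
for layer, states in enumerate(outputs.hidden_states):
    print(f"layer {layer:2d}: shape {tuple(states.shape)}")
```

For bert-base-uncased this yields 13 representations of the same exemplar (the input embeddings plus 12 transformer layers). It is graded, distributed representations of this kind, rather than any single symbolic category such as DETERMINER, that the abstractions-made-of-exemplars account has in mind when it speaks of abstraction.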