How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and relations. Explaining how we acquire these representations is of central importance to an account of human cognition. We propose a theory of how a system can learn invariant responses to instances of similarity and relative magnitude, and how structured relational representations can be learned from initially unstructured inputs. We instantiate that theory in the DORA (Discovery of Relations by Analogy) computational framework. The result is a system that learns structured representations of relations from unstructured flat feature-vector representations of objects with absolute properties. The resulting representations meet the requirements of human structured relational representations, and the model captures several specific phenomena from the literature on cognitive development. In doing so, we address a major limitation of current accounts of cognition and provide an existence proof for how structured representations might be learned from experience.

KEYWORDS: relation learning, predicate learning, neural networks, similarity, relative magnitude, invariance, learning structured representations
bioRxiv preprint first posted online Oct. 18, 2017; doi: http://dx.doi.org/10.1101/198804. The copyright holder for this preprint (which was not peer-reviewed) is the author/funder. It is made available under a CC-BY-NC-ND 4.0 International license.

To reason relationally is to reason about objects based on the relations in which those objects participate, rather than based on the literal features of those objects (see, e.g., Holyoak, 2012; Holyoak & Thagard, 1995). For example, when we make an analogy between the nucleus of an atom and the sun, we do so based on a common relation (e.g., that both nuclei and suns are larger than their orbiting bodies, electrons and planets respectively), despite the fact that nuclei and suns are otherwise not particularly similar. Humans routinely draw inferences based on relations, from the mundane ("my kid won't eat a portion that big") to the sublime ("the cardinal number of the reals between 0 and 1 is larger than the cardinal number of the positive integers"), and relational reasoning has been shown to contribute importantly to abilities such as analogy (e.g., Holyoak & Thagard, 1995), categorisation (e.g., Medin, Goldstone, & Gentner, 1993), concept learning (e.g., Doumas & Hummel, 2004, 2013), and visual cognition (e.g., Biederman, 1987; Hummel, 2013). In fact, the capacity to represent and reason about relations has been posited as the key difference between human and non-human animal cognition (Penn, Holyoak, & Povinelli, 2008).

Perhaps the most plausible explanation of how humans are able to reason relationally is that we can represent relations as abstract structures that take arguments, i.e., as predicates (see, e.g., Holyoak, 2012; Holyoak ...