2010
DOI: 10.1016/j.tics.2010.06.002
Letting structure emerge: connectionist and dynamical systems approaches to cognition

Abstract: Connectionist and dynamical systems approaches explain human thought, language and behavior in terms of the emergent consequences of a large number of simple non-cognitive processes. We view the entities that serve as the basis for structured probabilistic approaches as sometimes useful but often misleading abstractions that have no real basis in the actual processes that give rise to linguistic and cognitive abilities or the development of these abilities. While structured probabilistic approaches can be usef…

Cited by 386 publications (261 citation statements)
References 61 publications
“…In one way, the answer is yes: because neural networks are universal approximators, it is always possible to construct one that approximates the input-output functionality of a specific Bayesian model. In practice, however, the answer is usually no: the two methods have very different strengths and weaknesses, and therefore their value as modelling tools varies depending on the questions being asked (see Griffiths, Chater, Kemp, Perfors, and Tenenbaum (2010) and McClelland et al (2010) for a more thorough discussion of these issues). One difference is that connectionist models make certain commitments about representation that make it difficult to capture explicit symbolic knowledge, of the sort that is commonly incorporated into cognitive theories.…”
Section: Where Does It All Come From?
confidence: 99%
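The universal-approximation point in the quote above can be made concrete with a minimal sketch. All details here are illustrative assumptions, not from the cited papers: a tiny NumPy network is trained to mimic the input-output mapping of one simple Bayesian model, the posterior mean of a coin's bias under a Beta(1, 1) prior.

```python
# Hedged sketch: a small neural network approximating the input-output
# function of a toy Bayesian model (Beta-Binomial posterior mean).
# The model, architecture, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def bayes_posterior_mean(heads, flips):
    # Conjugate Beta(1, 1) update: posterior mean = (heads + 1) / (flips + 2)
    return (heads + 1) / (flips + 2)

# Training data: (heads, flips) pairs labeled with the exact Bayesian output.
flips = rng.integers(1, 51, size=2000)
heads = rng.integers(0, flips + 1)
X = np.stack([heads / 50.0, flips / 50.0], axis=1)  # scaled inputs
y = bayes_posterior_mean(heads, flips)

# One hidden tanh layer, trained by full-batch gradient descent on squared error.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.5

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    g2 = (pred - y)[:, None] / len(y)       # gradient of mean squared error
    dW2 = h.T @ g2; db2 = g2.sum(0)
    g1 = (g2 @ W2.T) * (1 - h**2)           # backprop through tanh
    dW1 = X.T @ g1; db1 = g1.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

def net(heads, flips):
    # The trained network's approximation of the Bayesian mapping.
    x = np.array([[heads / 50.0, flips / 50.0]])
    return float(np.tanh(x @ W1 + b1) @ W2 + b2)

print(bayes_posterior_mean(7, 10))  # 0.666...
print(net(7, 10))                   # approximately the same, up to training error
```

As the quote notes, the fit says nothing about shared mechanism: the network encodes the mapping implicitly in its weights, while the Bayesian model states it as an explicit formula over structured hypotheses.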
“…Already in their first months of life, infants rapidly learn to recognize complex objects and events in their visual input (1)(2)(3). Probabilistic learning models, as well as connectionist and dynamical models, have been developed in recent years as powerful tools for extracting the unobserved causes of sensory signals (4)(5)(6). Some of these models can efficiently discover significant statistical regularities in the observed signals, which may be subtle and of high order, and use them to construct world models and guide behavior (7)(8)(9)(10).…”
confidence: 99%
“…[2]), the working hypothesis is that, at a less fine-grained, more abstract level of description, the functions computed by intuitive processes are often well approximated by symbolic descriptions. The intuitive mental processes that actually compute these functions do not, however, admit a symbolic description: these processes require subsymbolic, connectionist descriptions.…”
Section: Effective Procedures Vs Intuitive Cognitive Processes
confidence: 99%
“…Then V_F and V_R are combined by the tensor product to form the actual representational (vector) space for trees, V_S ≡ V_F ⊗ V_R. Finally, the filler-role decomposition and the vectorial realizations are combined: the vector realization of s, ψ_S(s) ∈ V_S, is the sum of the vector realizations of the filler-role bindings of s, each of which uses the tensor product to bind together the encodings of the filler and the role:…”
confidence: 99%
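The tensor-product scheme quoted above can be sketched in a few lines. The vector dimensions, the specific fillers and roles, and the unbinding step are illustrative assumptions; the core operations (outer-product binding, superposition by summation) follow the quoted construction of ψ_S(s) in V_S = V_F ⊗ V_R.

```python
# Hedged sketch of tensor-product filler-role binding: a structure is encoded
# as the sum of outer products of filler vectors with role vectors.
# Dimensions, symbols, and role names are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
d_f, d_r = 4, 3  # dimensions of filler space V_F and role space V_R

# Random vector encodings for fillers (symbols) and roles (tree positions).
fillers = {s: rng.normal(size=d_f) for s in ["A", "B"]}
roles = {r: rng.normal(size=d_r) for r in ["left", "right"]}

def bind(f, r):
    # The tensor (outer) product binds one filler to one role: an element of V_S.
    return np.outer(fillers[f], roles[r])

# psi_S(s) for the structure "A in the left position, B in the right position":
# the sum of its filler-role bindings, a single d_f x d_r array.
psi = bind("A", "left") + bind("B", "right")

# Unbinding: when the role vectors are linearly independent, projecting onto
# the dual basis recovers each filler exactly from the superposition.
R = np.stack([roles["left"], roles["right"]])  # 2 x d_r matrix of role vectors
duals = np.linalg.pinv(R)                      # d_r x 2; columns are dual roles
recovered_left = psi @ duals[:, 0]             # recovers fillers["A"]
print(np.allclose(recovered_left, fillers["A"]))  # True
```

Note that ψ_S(s) is a fixed-width vector regardless of how many bindings it superposes, which is what lets a connectionist substrate carry (an approximation of) symbolic tree structure.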