2022
DOI: 10.48550/arxiv.2205.01128
Preprint
Neurocompositional computing: From the Central Paradox of Cognition to a new generation of AI systems

Abstract: What explains the dramatic progress from 20th-century to 21st-century AI, and how can the remaining limitations of current AI be overcome? The widely accepted narrative attributes this progress to massive increases in the quantity of computational and data resources available to support statistical learning in deep artificial neural networks. We show that an additional crucial factor is the development of a new type of computation. Neurocompositional computing (Smolensky et al., 2022) adopts two principles tha…

Cited by 3 publications (2 citation statements)
References 34 publications
“…Third, and relatedly, prospects for modeling continuous-time, continuous-valued neural systems with emergent logical-symbolic operations remain unclear, bringing us back to the central "paradox" [29] that historically divided symbolists and connectionists [7][8][9]: how can a physical system learn and implement symbolic rules? The argument has more recently taken the form of whether or not [10][11][12] the impressive progress represented by large transformer models already reflects an unrecognized emergence of limited symbolic (e.g., compositional) capacities [3,6,99,100]. Alternatively, theory-driven and simulation-based approaches [101] have shown that specially devised nonsymbolic neural nets can facilitate the developmental emergence of finite automata or Turing machine-like computing capacities [102][103][104].…”
Section: Do Brains Compute With Continuous Dynamics?mentioning
confidence: 99%
“…It first arose with the 'connectionist' networks of the 1980s (McClelland et al., 1986; Smolensky, 1988). Even as contemporary models have progressed beyond these predecessors (Smolensky et al., 2022), they continue to work according to many of the same principles. At a fundamental level, all neural networks are made up of layers of interconnected units ('nodes' or 'neurons'), each of which calculates a weighted combination of its inputs and applies a (usually nonlinear) function to the result, which it then passes to further nodes.…”
mentioning
confidence: 99%
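The layer computation described in the last citation statement — each unit forming a weighted combination of its inputs and passing it through a nonlinearity — can be sketched in a few lines. This is a minimal illustration, not the cited authors' implementation; the weights, biases, and `tanh` activation are arbitrary choices for the example.

```python
import math

def dense_layer(inputs, weights, biases):
    """One layer of a neural network: each unit computes a weighted
    combination of the inputs plus a bias, then applies a nonlinear
    activation (tanh here, chosen purely for illustration)."""
    outputs = []
    for unit_weights, bias in zip(weights, biases):
        pre_activation = sum(w * x for w, x in zip(unit_weights, inputs)) + bias
        outputs.append(math.tanh(pre_activation))
    return outputs

# Two inputs feeding a layer of three units (illustrative values).
y = dense_layer(
    inputs=[1.0, -0.5],
    weights=[[0.2, 0.4], [0.7, -0.1], [0.0, 1.0]],
    biases=[0.1, 0.0, -0.2],
)
```

Stacking such layers, with the outputs of one serving as the inputs of the next, gives the layered architecture the passage describes.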