2021
DOI: 10.3233/faia210348
Chapter 1. Neural-Symbolic Learning and Reasoning: A Survey and Interpretation

Abstract: The study and understanding of human behaviour is relevant to computer science, artificial intelligence, neural computation, cognitive science, philosophy, psychology, and several other areas. Presupposing cognition as basis of behaviour, among the most prominent tools in the modelling of behaviour are computational-logic systems, connectionist models of cognition, and models of uncertainty. Recent studies in cognitive science, artificial intelligence, and psychology have produced a number of cognitive models …

Cited by 54 publications (57 citation statements)
References: 0 publications
“…Finally, we note that disagreements between symbolic AI and connectionism and statistical learning in general have occurred recurrently throughout the history of AI, as shown by Minsky and Papert (1969) and Marcus (2018). Alongside those tensions, there have also been calls for the combination of elements of both fields (Fodor et al., 1988; Minsky, 1991; Besold et al., 2017; Lake et al., 2017; Marcus, 2020), and correspondingly, there have already been many attempts to do so (Sun and Bookman, 1994; d'Avila Garcez et al., 2009; Manhaeve et al., 2018; Yi et al., 2018). However, the message that we want to convey with this discussion, as summarized by the last sentences of section 4 and the first sentences of this section, is more general than a call for the integration of classical AI with connectionism.…”
Section: Improving Neural Network Representations
confidence: 99%
“…Several techniques have been proposed to augment Convolutional Neural Networks (CNNs) with relationship representation and reasoning capabilities, including Relational Networks [6], Graph Neural Networks [7], and Neural-Symbolic (NeSy) techniques [8,3,5]. For a more general introduction to NeSy techniques, the reader is referred to recent surveys [9,10].…”
Section: Related Work
confidence: 99%
“…Probabilistic modeling in discrete spaces, especially those with compositional structure, is important due to the universality of applications of discrete data structures, such as in natural language processing (Tai et al., 2015) or in symbolic reasoning (Besold et al., 2017). However, distributions in high-dimensional discrete spaces are generally hard to model, as they may feature rapid combinatorial growth of modes.…”
Section: Introduction
confidence: 99%