2021
DOI: 10.31234/osf.io/tbmcg
Preprint

On logical inference over brains, behaviour, and artificial neural networks

Abstract: In the cognitive, computational, and neurosciences, we often reason about what models (viz., formal and/or computational) represent, learn, or "know", as well as what algorithm they instantiate. The putative goal of such reasoning is to generalize claims about the model in question to claims about the mind and brain. This reasoning process typically presents as inference about the representations, processes, or algorithms the human mind and brain instantiate. Such inference is often based on a model's perfor…


Cited by 20 publications (25 citation statements)
References 42 publications
“…As such, RSA provides a benchmark for rejecting possible models. However, the above simulations show that RSA may be a misleading benchmark for selecting models – two systems may show similar representational geometries and yet work on very different transformations and features of input stimuli (for an in depth discussion about inferring similarity of causal mechanisms from similar outcomes see [38]).…”
Section: Discussion (mentioning)
confidence: 99%
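The point quoted above, that two systems can share a representational geometry while implementing different transformations, can be made concrete with a toy RSA comparison. The sketch below is illustrative only and is not taken from the cited simulations; the stimulus dimensions and the orthogonal-transform "System B" are assumptions chosen so that the geometric match is guaranteed.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
stimuli = rng.normal(size=(20, 50))   # 20 stimuli x 50 input features

# "System A": uses the raw stimulus features as its representation.
reps_a = stimuli

# "System B": re-codes each stimulus through a random orthogonal transform,
# so no single unit in B corresponds to any feature used by A.
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))
reps_b = stimuli @ Q

# RSA step: build each system's representational dissimilarity matrix (RDM)
# over the same stimulus set, then compare the two geometries.
rdm_a = pdist(reps_a, metric="euclidean")
rdm_b = pdist(reps_b, metric="euclidean")
rho, _ = spearmanr(rdm_a, rdm_b)

# An orthogonal transform preserves pairwise distances, so the two RDMs agree
# (rho ~ 1.0) even though the systems' features and transformations differ.
print(f"RDM similarity (Spearman rho): {rho:.3f}")

Because RDM agreement here is a geometric necessity rather than evidence of a shared mechanism, the example illustrates why RSA can reject candidate models but cannot, on its own, select among mechanistically distinct ones.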
“…While the goal for many ANN methods is to predict new data, in the cognitive and neural sciences, the goal is to explain the capacities of a system and to understand the mechanisms that lead to observed data. The only way to do this is to pose hypotheses about brain mechanisms, constrained through abductive inference based on what we know to an acceptable degree of verisimilitude about language and the brain, and test explicitly for these mechanisms [ 7 , 8 , 15 , 24 ]. Some forms of data science applications simply replace the thing we aim to understand (e.g., the brain, behavior, the capacity of a system for human language) with another black box (e.g., an ANN model; [ 25 , 26 ]).…”
Section: Examples Of Unfounded Inference (mentioning)
confidence: 99%
“…We do, however, show, that computations and encodings are similar. By focusing on interpretable comparison (rather than on correlations) and by focusing on internal representations rather than only on behavioral data (see discussion in Guest and Martin 2021), we argue that similarities in both representations and computations exist between brains and deep convolutional layers. These similarities open up possibilities for modeling work in order to gain insights both for how humans acquire and process speech as well as for how deep learning models learn internal representations.…”
Section: Introduction (mentioning)
confidence: 99%
“…Comparing representations and computations in the human brain and deep learning models is a complex task. The goal of this paper is not to argue that human speech processing operates exactly as in deep convolutional networks (for problems with such an approach, see Guest and Martin 2021). We do, however, argue that computations and encodings are similar in interpretable ways between the two signals and that they result from similar underlying mechanisms (Section 5.2).…”
Section: Introduction (mentioning)
confidence: 99%