2022
DOI: 10.1007/s11229-022-03694-y
Mapping representational mechanisms with deep neural networks

Abstract: The predominance of machine learning based techniques in cognitive neuroscience raises a host of philosophical and methodological concerns. Given the messiness of neural activity, modellers must make choices about how to structure their raw data to make inferences about encoded representations. This leads to a set of standard methodological assumptions about when abstraction is appropriate in neuroscientific practice. Yet, when made uncritically these choices threaten to bias conclusions about phenomena drawn …

Cited by 3 publications (1 citation statement)
References 70 publications
“…It is not necessary for downstream systems to read out whole neural spaces to consider neural pattern-similarity structures functional. We might understand state space structures as structural descriptions that provide a way to capture the functional similarities of a neural system’s underlying processes, while abstracting away from the specific properties of the mechanism that generates them ( Kieval 2022 ).…”
Section: A Defense of the Structural Approach
Confidence: 99%