2023
DOI: 10.1016/j.neuron.2023.06.007
Interpreting the retinal neural code for natural scenes: From computations to neurons

Citations: cited by 26 publications (35 citation statements)
References: 56 publications (78 reference statements)
“…This includes finding certain units tuned to the direction and orientation of moving gratings, as has similarly been reported in the retina [74]; units that exclusively fire to high-spatial-frequency gratings only if they move, like Y-type cells in the retina [20]; units tuned for the differential motion of objects versus their background [25]; units tuned for the anticipation of moving objects [24]; and units tuned for stimulus omission within a temporal sequence of flashing lights [22,23]. Certain complex retinal phenomena (like omission and anticipation responses) have been shown to emerge in non-spiking encoding models of the retina [75,76]. These phenomena emerge as a consequence of fitting models to retinal responses, whereas our normative approach rather examines whether these phenomena can be explained as a consequence of underlying principles like compression or prediction.…”
Section: Discussion (confidence: 99%)
“…Popular computational methods for subunit inference in recent years have been linear-nonlinear cascade models (Maheswaranathan et al., 2018; Real et al., 2017), convolutional neural network models (Maheswaranathan et al., 2023; McIntosh et al., 2016; Tanaka et al., 2019), and methods of statistical inference (Liu et al., 2017; Shah et al., 2020). Linear-nonlinear-linear-nonlinear (LNLN) models consist of a layer of linear spatial or spatiotemporal subunit filters, each followed by a nonlinear transfer function, whose outputs are summed and passed through a second nonlinear transfer function to produce a firing rate or spiking probability.…”
Section: Discussion (confidence: 99%)
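The LNLN cascade described in the excerpt above can be sketched in a few lines. This is a minimal illustrative model, not the fitted models from the cited papers; the filter shapes, the choice of rectifying nonlinearities, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def lnln_rate(stimulus, subunit_filters, output_gain=1.0):
    """Hypothetical LNLN cascade: linear subunit filters, a subunit
    nonlinearity, additive convergence, and an output nonlinearity."""
    # stimulus: (n_pixels,) flattened image patch
    # subunit_filters: (n_subunits, n_pixels)
    subunit_drive = subunit_filters @ stimulus   # first linear stage
    subunit_out = relu(subunit_drive)            # subunit nonlinearity
    pooled = subunit_out.sum()                   # convergence onto one cell
    return output_gain * relu(pooled)            # output nonlinearity -> rate

# toy example with random subunit filters and a random stimulus
n_subunits, n_pixels = 4, 64
filters = rng.standard_normal((n_subunits, n_pixels)) / np.sqrt(n_pixels)
stim = rng.standard_normal(n_pixels)
rate = lnln_rate(stim, filters)
```

In a real fit, the filters and the shapes of both nonlinearities would be estimated from recorded spike trains rather than drawn at random.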
“…Training can be done with natural images instead of artificial stimuli like white noise or flickering bars. For models trained with ganglion cell data from the salamander retina, the obtained convolutional filters shared properties with actual bipolar and amacrine cells (Maheswaranathan et al., 2023; Tanaka et al., 2019), although it remains unclear to what extent the model architecture resembles the actual neural circuitry. Spike-triggered clustering (Shah et al., 2020), on the other hand, recovers subunits from stimulus correlations, much like STNMF.…”
Section: Discussion (confidence: 99%)
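The matrix-factorization idea behind STNMF can be sketched as follows: factorize the spike-triggered stimulus ensemble into a small number of non-negative modules, whose spatial profiles act as candidate subunits. This sketch uses a toy ensemble and standard multiplicative NMF updates; the published method differs (it uses a semi-NMF variant with sparsity regularization), and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def nmf(S, n_modules, n_iter=200, eps=1e-9):
    """Factorize non-negative S (spikes x pixels) as W @ H using
    multiplicative updates that minimize ||S - W H||_F^2 (Lee & Seung)."""
    n_spikes, n_pixels = S.shape
    W = rng.random((n_spikes, n_modules))
    H = rng.random((n_modules, n_pixels))
    for _ in range(n_iter):
        H *= (W.T @ S) / (W.T @ W @ H + eps)
        W *= (S @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy spike-triggered ensemble built from two localized "subunits"
true_H = np.zeros((2, 16))
true_H[0, 2:5] = 1.0   # subunit 1: pixels 2-4
true_H[1, 9:12] = 1.0  # subunit 2: pixels 9-11
S = rng.random((300, 2)) @ true_H + 0.01 * rng.random((300, 16))
W, H = nmf(S, n_modules=2)  # rows of H approximate the subunit profiles
```

With real data, S would hold the stimulus patches preceding each spike, and the recovered modules would be compared against anatomically identified bipolar cell receptive fields.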
“…An additional useful aspect of these models is that they can be analyzed to determine the set of model interneurons that generated any particular response (Tanaka et al., 2019; Maheswaranathan et al., 2023). This analysis, termed an attribution analysis, derives from approaches in interpretable machine learning (Sundararajan et al., 2017), where the goal is to gain insight into how the properties of the model generated particular outputs.…”
Section: Introduction (confidence: 99%)
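The attribution method cited above, integrated gradients (Sundararajan et al., 2017), averages the model's gradient along a straight path from a baseline input to the actual input. The following is a minimal sketch on a toy differentiable model, not the cited retinal network models; the model, its gradient, and the step count are assumptions for illustration.

```python
import numpy as np

def model(x, w):
    # toy model: a single rectified linear readout of the input
    return max(float(w @ x), 0.0)

def grad_model(x, w):
    # gradient of model w.r.t. x (subgradient taken as 0 at the kink)
    return w if float(w @ x) > 0 else np.zeros_like(w)

def integrated_gradients(x, w, baseline=None, steps=50):
    """Average gradients along the path baseline -> x, scaled by (x - baseline)."""
    if baseline is None:
        baseline = np.zeros_like(x)
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule on (0, 1)
    grads = np.stack([grad_model(baseline + a * (x - baseline), w)
                      for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

w = np.array([1.0, -2.0, 0.5])
x = np.array([2.0, 1.0, 4.0])
attr = integrated_gradients(x, w)
```

A key property, completeness, is that the attributions sum to the difference between the model's output at the input and at the baseline; in a retinal network model, the same computation per model interneuron indicates which units drove a given response.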