2021 · Preprint
DOI: 10.48550/arxiv.2108.01548

Inference via Sparse Coding in a Hierarchical Vision Model

Abstract: Sparse coding has been incorporated into models of the visual cortex for its computational advantages and its connection to biology. But how the level of sparsity contributes to performance on visual tasks is not well understood. In this work, sparse coding is integrated into an existing hierarchical V2 model (Hosoya and Hyvärinen, 2015), replacing Independent Component Analysis (ICA) with explicit sparse coding in which the degree of sparsity can be controlled. After training, the sparse coding bas…
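The abstract describes replacing ICA with explicit sparse coding whose degree of sparsity can be controlled, which in practice usually means a regularization coefficient on an L1 penalty. As a rough illustration only (not the authors' implementation), the sketch below infers a sparse code for a single patch with ISTA (iterative soft-thresholding); the dictionary shape, patch size, and lambda value are hypothetical.

```python
import numpy as np

def sparse_inference(x, D, lam=0.1, n_iter=200):
    """Infer a sparse code a for a patch x under a learned dictionary D by
    minimizing 0.5*||x - D a||^2 + lam*||a||_1 with ISTA (soft-thresholding).

    x   : (d,) flattened, whitened image patch
    D   : (d, k) dictionary; columns are basis functions
    lam : regularization coefficient controlling the degree of sparsity
    """
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)             # gradient of the reconstruction term
        z = a - grad / L                     # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Example usage with a random unit-norm dictionary and a random 8x8 patch
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
a = sparse_inference(rng.standard_normal(64), D, lam=0.2)
```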

Cited by 1 publication (4 citation statements) · References 53 publications
“…While the model lacks convolution, it is trivial to add a convolution-like procedure where sparse inference replaces the dot product of a neural network (see section 4.3). Previous methods of convolutional sparse coding modified the loss function to reconstruct images as a sum of filters convolved with sparse feature maps (Bristow et al., 2013; Wohlberg, 2014), but this loses the original probabilistic interpretation of sparse coding and the associated inference capabilities described by Bowren et al. (2021). Instead, this work proposed gathering image patches via a sliding window (like in convolution), but performing sparse inference on each patch.…”
Section: Discussion
confidence: 99%
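To make the convolution-like procedure quoted above concrete, here is a minimal sketch (not taken from the cited work) that gathers patches with a sliding window and runs sparse inference on each one; the ISTA solver, patch size, stride, and dictionary shape are all illustrative assumptions.

```python
import numpy as np

def ista(x, D, lam, n_iter=100):
    """Minimal ISTA solver: argmin_a 0.5*||x - D a||^2 + lam*||a||_1."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        z = a - D.T @ (D @ a - x) / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return a

def sliding_window_sparse_map(image, D, patch=8, stride=4, lam=0.1):
    """Gather patches with a sliding window (as in convolution) and run
    sparse inference on each, returning a (rows, cols, k) feature map."""
    H, W = image.shape
    k = D.shape[1]
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    fmap = np.zeros((rows, cols, k))
    for i in range(rows):
        for j in range(cols):
            p = image[i * stride:i * stride + patch,
                      j * stride:j * stride + patch].ravel()
            fmap[i, j] = ista(p, D, lam)   # sparse inference replaces the dot product
    return fmap

# Example with random placeholders for the dictionary and input image
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm basis functions
img = rng.standard_normal((32, 32))
features = sliding_window_sparse_map(img, D, patch=8, stride=4, lam=0.2)
print(features.shape)                      # (7, 7, 128)
```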
“…The next question evident from this claim is: why would a sparse prior promote proper feature integration (and thus generalization)? The answer to this question is explained by the inductive inference mechanism of sparse coding described by Bowren et al. (2021). As the regularization coefficient of sparse coding increases, fewer basis functions must reconstruct the input image.…”
Section: Maintaining a Sparse Prior
confidence: 99%
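The relationship quoted above, that a larger regularization coefficient forces the reconstruction onto fewer basis functions, can be observed numerically with an off-the-shelf L1 solver; the dictionary and input below are random placeholders, so only the downward trend in the counts is meaningful.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
D = rng.standard_normal((64, 256))   # hypothetical dictionary: 64-dim patches, 256 basis functions
D /= np.linalg.norm(D, axis=0)       # unit-norm basis functions
x = rng.standard_normal(64)          # stand-in whitened input patch

# Sweep the regularization coefficient: larger values leave fewer
# basis functions with nonzero coefficients in the reconstruction.
for lam in (0.001, 0.01, 0.1, 0.5):
    code = Lasso(alpha=lam, max_iter=10000).fit(D, x).coef_
    print(f"lambda={lam:<6} active basis functions: {np.count_nonzero(code)}")
```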