2015
DOI: 10.1371/journal.pone.0124088

Nonlinear Spike-and-Slab Sparse Coding for Interpretable Image Encoding

Abstract: Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear c…
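To make the contrast in the abstract concrete, here is a minimal sketch in generic notation (illustrative symbols, not necessarily the paper's own). Classical sparse coding assumes a linear superposition of dictionary elements with a heavy-tailed prior on the coefficients:

y = \sum_h s_h \vec{w}_h + \epsilon, \qquad p(s_h) \propto \exp(-\lambda |s_h|)

A spike-and-slab prior instead mixes a point mass at zero with a continuous "slab", so each coefficient is exactly zero with probability 1 - \pi:

p(s_h) = (1 - \pi)\,\delta(s_h) + \pi\,\mathcal{N}(s_h;\, \mu, \sigma^2)

The "nonlinear" in the title refers to replacing the linear superposition itself; a pointwise maximum, y_d = \max_h s_h W_{dh} plus noise, is one such combination used in related maximal-causes work, though the truncated abstract does not state the exact form used here.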

Cited by 11 publications (15 citation statements)
References 28 publications
“…Alternatively, one could create features that exploit the stimulus statistics, for example features that are made statistically independent from each other (Bell and Sejnowski, 1995) or by exploiting the concept of sparsity of stimulus representation bases (Olshausen and Field, 1997, 2004; Shelton et al., 2015). Feature sparseness can improve the predictive power and interpretability of models because the representation of stimulus features in active neural populations may be inherently sparse (Olshausen and Field, 2004).…”
Section: Identifying Input/Output Features
confidence: 99%
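As background for the sparsity argument in this statement, the classical sparse coding objective of Olshausen and Field, in its standard L1 form (generic notation, not this paper's), trades reconstruction error against coefficient sparsity:

\min_{\Phi,\, \{s^{(n)}\}} \; \sum_n \big\| y^{(n)} - \Phi s^{(n)} \big\|_2^2 + \lambda \sum_n \big\| s^{(n)} \big\|_1

The L1 penalty drives most coefficients to zero, so each stimulus is represented by a few active dictionary elements, which is what makes the learned features easier to interpret.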
“…The spike-and-slab prior is the fundamental basis for most Bayesian variable selection approaches, and has proved remarkably successful (George and McCulloch 1993, 1997; Chipman 1996; Chipman et al. 2001; Ročková and George 2014, and unpublished results). Recently, Bayesian spike-and-slab priors have been applied to predictive modeling and variable selection in large-scale genomic studies (Yi et al. 2003; Ishwaran and Rao 2005; de los Campos et al. 2010; Zhou et al. 2013; Lu et al. 2015; Shankar et al. 2015; Shelton et al. 2015; Partovi Nia and Ghannad-Rezaie 2016). However, most previous spike-and-slab variable selection approaches use mixture normal priors on the coefficients and employ Markov chain Monte Carlo (MCMC) algorithms (e.g., stochastic search variable selection) to fit the model.…”
confidence: 99%
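The mixture normal prior referred to in this statement is conventionally written per coefficient as follows (illustrative notation; the cited papers vary in the details):

\beta_j \mid \gamma_j \sim (1 - \gamma_j)\,\mathcal{N}(0, \tau_0^2) + \gamma_j\,\mathcal{N}(0, \tau_1^2), \qquad \gamma_j \sim \mathrm{Bernoulli}(\pi), \qquad \tau_0^2 \ll \tau_1^2

The narrow "spike" component shrinks excluded coefficients toward zero while the wide "slab" leaves included ones nearly unpenalized; MCMC schemes such as stochastic search variable selection then sample the inclusion indicators \gamma_j alongside the coefficients.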
“…In previous work, the selection function S(y^(n)) was a deterministic function derived individually for each model (see e.g. Shelton et al., 2011, 2012; Dai and Lücke, 2012a,b; Bornschein et al., 2013; Sheikh et al., 2014; Shelton et al., 2015). We now generalize the selection approach: instead of predefining the form of S for variable selection, we want to learn it in a black-box and model-free way based on the data.…”
Section: GP-select
confidence: 99%
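A minimal Python sketch of the selection idea this statement generalizes: score each latent variable's relevance for a given data point with a learned regressor (a Gaussian process here, mirroring the GP-select name) and keep only the top K for subsequent inference. All function names, array shapes, and the choice of training signal are assumptions for illustration, not the authors' implementation:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical sketch: learn a selection function S(y) that scores each of
# H latent variables for a data point y, then truncate inference to the
# K highest-scoring variables.

def fit_selection_function(Y_train, relevance_train):
    """Fit one GP per latent variable, mapping data y -> relevance score.

    Y_train:         (N, D) array of observed data points.
    relevance_train: (N, H) array of training targets, e.g. approximate
                     posterior marginals from a previous EM iteration
                     (an assumption made for this sketch).
    """
    return [GaussianProcessRegressor().fit(Y_train, relevance_train[:, h])
            for h in range(relevance_train.shape[1])]

def select_latents(gps, y, K):
    """Return indices of the K latent variables predicted most relevant for y."""
    scores = np.array([gp.predict(y[None, :])[0] for gp in gps])
    return np.argsort(scores)[-K:]

Inference then runs only over the selected subset of latent variables, which is the truncation that the earlier model-specific selection functions implemented by hand.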
“…As hyperparameters of kernels are learned, the composition kernel (4) … (Shelton et al., 2011, 2012, 2015). We run all models until convergence.…”
Section: Sparse Coding Models
confidence: 99%