2017
DOI: 10.1109/tsp.2016.2645543

Bayesian Antisparse Coding

Abstract: Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications such as robust encoding in digital communications. Anti-sparse regularization can be naturally expressed through an ∞-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distri…

Cited by 11 publications (13 citation statements); References: 36 publications.
“…Capitalizing on the advantages of proximal splitting recently popularized to solve large-scale inference problems [13]- [18], the proximal Monte Carlo method allows high-dimensional log-concave distributions to be sampled. For instance, this algorithm has been successfully used to conduct antisparse coding [19] and has been significantly improved in [20].…”
Section: Introduction
confidence: 99%
“…• For sub-Gaussian distributions [21], we adopt p > 2, with p → ∞ for the uniform distribution. From these results we observe two dual potential scenarios for predictive deconvolution: 1) the case where the desired signal is super-Gaussian (a sparsity property to be explored), where an ℓp predictor may be used with 1 ≤ p ≤ 2, and 2) the case where the desired signal is sub-Gaussian (an antisparsity property [11] to be explored), where it is suitable to use p > 2.…”
Section: ℓp Norms and Maximum Likelihood Criterion
confidence: 96%
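The super- vs sub-Gaussian distinction drawn in this citation can be checked numerically: excess kurtosis is positive for heavy-tailed (super-Gaussian) samples, which motivates 1 ≤ p ≤ 2, and negative for uniform (sub-Gaussian) samples, which motivates p > 2. A minimal sketch, not taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    """Sample excess kurtosis: positive => super-Gaussian, negative => sub-Gaussian."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

# Laplacian samples are super-Gaussian; uniform samples are sub-Gaussian.
laplace = rng.laplace(size=100_000)
uniform = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=100_000)  # unit variance

print(excess_kurtosis(laplace))  # positive (theory: +3)  -> choose 1 <= p <= 2
print(excess_kurtosis(uniform))  # negative (theory: -1.2) -> choose p > 2
```

The sign of the estimated excess kurtosis is what would steer the choice of p in an ℓp predictor under this reasoning.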
“…where λ(n) is a weight factor that controls the degree of relevance of the error sample at time instant n, e(n) is the error signal, and w are the filter coefficients. Considering stationary and ergodic processes [18], the criteria given by (4) and (11) are equivalent for practical purposes, so minimizing the MSE criterion is equivalent to minimizing the least-squares (LS) criterion in (11).…”
Section: The ℓp PEF as an Alternative for Blind Deconvolution
confidence: 99%
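The MSE/LS equivalence invoked above rests on ergodicity: the time-averaged squared error converges to the ensemble MSE, so both criteria share the same minimizer. A minimal sketch with a hypothetical one-tap predictor on an AR(1) process (the process parameters here are illustrative, not from the cited work):

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) process x(n) = a*x(n-1) + v(n); the MSE-optimal one-tap predictor is w = a.
a = 0.8
n = 200_000
v = rng.normal(size=n)
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + v[k]

def ls_cost(w):
    """Time-averaged LS criterion with uniform weights lambda(n) = 1."""
    e = x[1:] - w * x[:-1]          # prediction error e(n)
    return np.mean(e ** 2)          # ergodicity: time average ~ MSE

# Grid-search the LS cost: its minimizer should sit near the MSE minimizer w = a.
ws = np.linspace(0.0, 1.0, 101)
w_star = ws[np.argmin([ls_cost(w) for w in ws])]
print(w_star)
```

With a long enough record, the LS minimizer lands on (or next to) w = 0.8, illustrating the practical equivalence of the two criteria.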
“…One of our interests in this work is to perform unsupervised deconvolution in order to recover telecommunication signals, which usually follow a uniform distribution [14] and hence present an antisparse structure [15]. Therefore, as we discussed in [7], the most suitable measure for this property is the ℓ∞ norm, which is equivalent to the maximum-likelihood estimator for a uniform distribution.…”
Section: Proposed Structure and Algorithms
confidence: 99%
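The ℓ∞/ML equivalence for uniform data quoted above has a simple closed form: for samples uniform on [θ − c, θ + c], minimizing the ℓ∞ residual max|xᵢ − θ| gives the midrange, which is also the ML location estimate. A minimal sketch of this fact (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Samples uniform on [theta - 1, theta + 1]; the l_inf minimizer of
# max_i |x_i - t| over t is the midrange, which coincides with the ML estimate.
theta = 0.7
x = rng.uniform(theta - 1.0, theta + 1.0, size=10_000)

midrange = 0.5 * (x.min() + x.max())  # closed-form l_inf / ML location estimate
print(midrange)
```

Because the extremes of a uniform sample converge quickly to the interval endpoints, the midrange recovers θ with error O(1/n), much faster than the sample mean's O(1/√n).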