2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
DOI: 10.1109/allerton.2012.6483364
Signal representations with minimum ℓ∞-norm

Abstract: Maximum (or ℓ∞-)norm minimization subject to an underdetermined system of linear equations finds use in a large number of practical applications, such as vector quantization, peak-to-average power ratio (PAPR) (or "crest factor") reduction in wireless communication systems, approximate neighbor search, robotics, and control. In this paper, we analyze the fundamental properties of signal representations with minimum ℓ∞-norm. In particular, we develop bounds on the maximum magnitude of such representations using …
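To make the problem in the abstract concrete, here is a minimal sketch (not the paper's code; matrix sizes and the random data are my own choices): the minimum ℓ∞-norm representation, min ‖x‖∞ subject to Ax = y, can be recast as a linear program over (x, t) with t bounding every |x_i|, and solved with an off-the-shelf LP solver.

```python
# Sketch: compute a minimum l-infinity-norm representation x of y under an
# underdetermined system A x = y, by recasting
#   min ||x||_inf  s.t.  A x = y
# as the LP:  min t  s.t.  A x = y,  -t <= x_i <= t.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
M, N = 8, 32                       # underdetermined: fewer equations than unknowns
A = rng.standard_normal((M, N))
y = rng.standard_normal(M)

# Variables z = [x (N entries), t (1 entry)]; minimize t.
c = np.r_[np.zeros(N), 1.0]
# |x_i| <= t  is encoded as  x_i - t <= 0  and  -x_i - t <= 0.
A_ub = np.block([[np.eye(N), -np.ones((N, 1))],
                 [-np.eye(N), -np.ones((N, 1))]])
b_ub = np.zeros(2 * N)
A_eq = np.c_[A, np.zeros(M)]       # the t variable does not enter A x = y
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)])
x_inf = res.x[:N]

# For comparison: the least-squares (minimum l2-norm) solution. The l-infinity
# solution spreads energy across coefficients, so its peak magnitude is no
# larger than the least-squares peak.
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.max(np.abs(x_inf)) <= np.max(np.abs(x_ls)) + 1e-9)
```

This spreading of energy across coefficients is exactly what makes minimum ℓ∞-norm representations attractive for PAPR reduction.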


Cited by 22 publications (22 citation statements) | References 21 publications
“…We also compare to the recovery guarantee provided for PhaseMax using classical machine learning methods in [3]. We see that PhaseMax requires the same sample complexity (number of required measurements) as PhaseLift, TWF, and TAW, when used together with the truncated spectral initializer [19]. While the constants c_0, c_1, and c_2 in the recovery guarantees for all of the other methods are generally very large, our recovery guarantees contain no unspecified constants, explicitly depend on the approximation factor α, and are surprisingly sharp.…”
Section: A. Comparison With Existing Recovery Guarantees (mentioning)
confidence: 99%
“…, 256. As has been empirically shown in [29] for randomly subsampled DCT matrices, SNR_y is expected to be an increasing function of N/M for a given PAPR level of anti-sparsity. The rationale is that the number of combinations of ±‖y‖_2/√N increases as N grows, making it more likely that a better representation exists for a given PAPR, owing to the increased redundancy.…”
Section: A. A Toy Example (mentioning)
confidence: 82%
“…Resorting to an uncertainty principle (UP), Lyubarskii et al. have also introduced several examples of frames yielding computable Kashin's representations, such as random orthogonal matrices, randomly subsampled discrete Fourier transform (DFT) matrices, and random sub-Gaussian matrices [24]. The properties of the alternative optimization problem, which minimizes the maximum magnitude of the representation coefficients subject to an upper-bounded ℓ2-reconstruction error, have been investigated in depth in [29], [30]. In these latest contributions, the optimal expansion is called the democratic representation, and bounds associated with archetypal matrices ensuring the UP are derived.…”
Section: Introduction (mentioning)
confidence: 99%
“…The ℓ∞ regularization has found applications in several fields [32,38,53]. Suppose that θ ≠ 0, and define the saturation support of θ as I_sat…”
Section: Anti-sparsity (mentioning)
confidence: 99%
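The excerpt above defines the saturation support I_sat of a nonzero θ: the set of indices whose magnitude attains the peak ‖θ‖∞. A minimal sketch (the function name and tolerance are my own, not from the cited work):

```python
# Sketch: saturation support of a nonzero vector theta, i.e. the indices i
# with |theta_i| = ||theta||_inf (up to a numerical tolerance).
import numpy as np

def saturation_support(theta, tol=1e-12):
    theta = np.asarray(theta, dtype=float)
    peak = np.max(np.abs(theta))            # ||theta||_inf
    return np.flatnonzero(np.abs(theta) >= peak - tol)

print(saturation_support([0.5, -2.0, 2.0, 1.0]))  # indices 1 and 2 saturate
```

Minimum ℓ∞-norm (anti-sparse) representations typically have a large saturation support: many coefficients sit exactly at the peak magnitude, which is the mirror image of sparsity.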