2017
DOI: 10.1109/tsp.2017.2706181
Fast Algorithms for Demixing Sparse Signals From Nonlinear Observations

Abstract: We study the problem of demixing a pair of sparse signals from nonlinear observations of their superposition. Mathematically, we consider a nonlinear signal observation model, y_i = g(a_i^T x) + e_i, i = 1, …, m, where x = Φw + Ψz denotes the superposition signal, Φ and Ψ are orthonormal bases in R^n, and w, z ∈ R^n are sparse coefficient vectors of the constituent signals. Further, we assume that the observations are corrupted by subgaussian additive noise. Within this model, g represents a nonlinear li…
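As a concrete illustration of this observation model, the following NumPy sketch generates synthetic data of the form y_i = g(a_i^T x) + e_i with x = Φw + Ψz. All numerical choices (dimensions, sparsity level, the random orthonormal basis Ψ, and the link g(t) = 2t + sin(t), whose derivative lies in [1, 3]) are hypothetical, chosen only to make the model executable, and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 128, 300, 5  # ambient dimension, measurements, sparsity (illustrative values)

# Two orthonormal bases: the identity (spike basis) and a random orthonormal basis.
Phi = np.eye(n)
Psi = np.linalg.qr(rng.standard_normal((n, n)))[0]

# s-sparse coefficient vectors w and z
w = np.zeros(n); w[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
z = np.zeros(n); z[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

x = Phi @ w + Psi @ z  # superposition signal

# Monotone differentiable link with 1 <= g'(t) = 2 + cos(t) <= 3
g = lambda t: 2.0 * t + np.sin(t)

A = rng.standard_normal((m, n))   # measurement vectors a_i stacked as rows
e = 0.01 * rng.standard_normal(m) # subgaussian (here Gaussian) additive noise
y = g(A @ x) + e                  # observations y_i = g(a_i^T x) + e_i
```

Recovering w and z from y under this model is the demixing problem the paper addresses; the sketch only sets up the forward model.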

Cited by 19 publications (31 citation statements)
References 59 publications (139 reference statements)
“…We assume that the link function g(x) is differentiable and monotonic, satisfying 0 < µ1 ≤ g'(x) ≤ µ2 for all x ∈ D(g) (the domain of g). This assumption is standard in statistical learning [19] and in nonlinear sparse recovery [52,59,60]. Also, as we will discuss below, this assumption will be helpful for verifying the RSC/RSS condition for the loss function that we define as follows.…”
Section: Nonlinear Affine Rank Minimization
confidence: 97%
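The derivative bound 0 < µ1 ≤ g'(x) ≤ µ2 quoted above can be checked numerically for any candidate link restricted to a bounded domain. The sketch below uses a hypothetical example, the logistic sigmoid on D(g) = [-2, 2], where the bound holds with µ1 ≈ 0.105 and µ2 = 0.25 (it would fail on all of R, since the sigmoid's derivative vanishes at infinity):

```python
import numpy as np

# Hypothetical link: logistic sigmoid restricted to the bounded domain [-2, 2].
g = lambda t: 1.0 / (1.0 + np.exp(-t))
t = np.linspace(-2.0, 2.0, 10_001)

# Numerical derivative; analytically, g'(t) = g(t) * (1 - g(t)).
dg = np.gradient(g(t), t)

mu1, mu2 = dg.min(), dg.max()
assert 0 < mu1 <= mu2  # the bound 0 < mu1 <= g'(t) <= mu2 holds on D(g)
```

Restricting the domain is what makes µ1 strictly positive here; links such as g(t) = 2t + sin(t) satisfy the bound globally.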
“…To overcome the inherent ambiguity issue in problem (1), many existing methods have assumed that the structures of the sets X and N (e.g., low-rank matrices, or sparse representations in some domain [McCoy and Tropp, 2014]) are a priori known, and also that the signals from X and N are "distinguishable" [Elad and Aharon, 2006, Soltani and Hegde, 2016, Soltani and Hegde, 2017, Druce et al., 2016, Elyaderani et al., 2017]. The assumption of such prior knowledge is a strong restriction in many real-world applications.…”
Section: Application and Prior Art
confidence: 99%
“…The goal is now to recover both G(z) and ν. This is reminiscent of the problem of source separation or signal demixing [20], and in our previous work [17], [21] we proposed greedy iterative algorithms for solving such demixing problems. We extend this work by providing a nonlinear extension, together with a new analysis, of the algorithm proposed in [17].…”
Section: Techniques
confidence: 99%