1997
DOI: 10.1109/78.575692

Deconvolution of sparse spike trains by iterated window maximization

Abstract: This research report is organized as two separate papers. The first paper describes a new deconvolution algorithm for sparse spike trains. The second paper compares the new algorithm to a number of existing alternatives.

Cited by 62 publications (40 citation statements) | References 21 publications
“…Usually, alternating minimization algorithms are used to estimate signal x and impulse response h by minimizing the cost function, which often comprises a data fidelity term and a regularization term (penalty term). The regularization term is adopted to exploit the prior information, such as the sparsity of the sources [36,37,40], as considered in [36] for seismic signals, in [10], [21] for spike signals, and in [22] for images. In blind source separation, however, apart from the sparsity that is often assumed for the underdetermined case [28], [43], statistical independence between the sources is also widely exploited for estimating the sources and the mixing channels [9,18,19,29,30,34,39].…”
Section: Introduction
confidence: 99%
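
The alternating scheme described in this excerpt is straightforward to sketch. The NumPy snippet below is a minimal illustration, not the method of the cited papers or of Kaaresen's algorithm: it pairs a quadratic data-fidelity term with an ℓ1 sparsity penalty and alternates a proximal-gradient update of the spike train x with a least-squares update of the impulse response h. The function names, lam, and the iteration counts are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def conv_matrix(g, n):
    """Dense (n + len(g) - 1) x n matrix C such that C @ x == np.convolve(g, x)."""
    C = np.zeros((n + len(g) - 1, n))
    for j in range(n):
        C[j:j + len(g), j] = g
    return C

def blind_sparse_deconv(y, n, k, lam=0.1, outer=100, inner=5, seed=0):
    """Minimize 0.5*||y - h*x||^2 + lam*||x||_1 by alternating in x and h.

    y: observed trace of length n + k - 1; n: spike-train length;
    k: impulse-response length. lam and the iteration counts are
    illustrative choices, not tuned values.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    h = rng.standard_normal(k)
    h /= np.linalg.norm(h)                        # pin down the scale ambiguity
    for _ in range(outer):
        # x-step: a few ISTA iterations on the data-fidelity + l1 cost.
        H = conv_matrix(h, n)
        L = max(np.linalg.norm(H, 2) ** 2, 1e-12) # gradient Lipschitz constant
        for _ in range(inner):
            grad = H.T @ (H @ x - y)
            x = soft_threshold(x - grad / L, lam / L)
        # h-step: convolution commutes, so this is ordinary least squares in h.
        X = conv_matrix(x, k)
        h = np.linalg.lstsq(X, y, rcond=None)[0]
        nrm = np.linalg.norm(h)
        if nrm > 0:
            h /= nrm                              # renormalize; scale moves into x
    return x, h
```

Only the x-step depends on the choice of sparsity prior; a different penalty would swap in a different proximal operator while the h-step stays ordinary least squares.
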
“…Early techniques implicitly assumed known, constant, Gaussian white noise: from Wiener filters, which often overblurred locally sharp features, to autocorrelation-based techniques and CLEAN algorithms (Kaaresen 1997), which worked for point sources. More robust maximum-likelihood, maximum-entropy, and general probabilistic techniques such as Richardson-Lucy (Richardson 1972; Lucy 1974) and the EM algorithm (Dempster et al. 1977) still often assumed a single uniform scale (usually the pixel size) for features in the "true" image.…”
Section: Introduction
confidence: 99%
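
Richardson-Lucy, named in the excerpt, is a multiplicative fixed-point iteration derived from a Poisson likelihood. The sketch below is a minimal 1-D version under stated assumptions: a nonnegative signal y and a normalized, odd-length point-spread function so that "same"-mode convolutions stay centered; the example data are invented for illustration.

```python
import numpy as np

def richardson_lucy(y, psf, iters=50, eps=1e-12):
    """Richardson-Lucy: x <- x * corr(psf, y / conv(psf, x)).

    Assumes y >= 0 and a normalized, odd-length psf. The multiplicative
    update keeps the estimate nonnegative at every iteration.
    """
    x = np.full(len(y), y.mean(), dtype=float)    # flat nonnegative start
    psf_flipped = psf[::-1]           # correlation = convolution with flipped psf
    for _ in range(iters):
        blurred = np.convolve(x, psf, mode="same")
        ratio = y / (blurred + eps)   # eps guards against divide-by-zero
        x *= np.convolve(ratio, psf_flipped, mode="same")
    return x

# Illustrative data: two spikes blurred by a short normalized kernel.
psf = np.array([0.05, 0.25, 0.40, 0.25, 0.05])    # sums to 1
truth = np.zeros(64)
truth[20], truth[40] = 3.0, 1.5
y = np.convolve(truth, psf, mode="same")
estimate = richardson_lucy(y, psf)
```
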
“…The ℓ1 norm penalty (i.e., φ_n(x) = |x|) has been proposed for sparse deconvolution [10], [16], [41], [66] and more generally for sparse signal processing [15] and statistics [67]. For the ℓ1 norm and other non-differentiable convex penalties, efficient algorithms for large scale problems of the form (1) and similar (including convex constraints) have been developed based on proximal splitting methods [18], [19], alternating direction method of multipliers (ADMM) [9], majorization-minimization (MM) [25], primal-dual gradient descent [22], and Bregman iterations [36].…”
Section: B. Related Work (Sparsity Penalized Least Squares)
confidence: 99%
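
Of the solver families listed in this excerpt, proximal splitting is the quickest to illustrate: for the ℓ1 penalty the proximal operator is elementwise soft-thresholding, which gives the classic ISTA iteration. The sketch below uses a generic ℓ1-penalized least-squares objective standing in for the quoted paper's problem (1); A, lam, and the iteration count are illustrative assumptions.

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Proximal-gradient (ISTA) for 0.5*||y - A @ x||^2 + lam*||x||_1.

    Each iteration takes a gradient step on the smooth data-fidelity
    term, then applies the l1 proximal operator (soft-thresholding).
    """
    L = max(np.linalg.norm(A, 2) ** 2, 1e-12)   # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = x - A.T @ (A @ x - y) / L           # gradient step on the smooth term
        x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # prox of lam*||.||_1
    return x
```

Swapping the soft-threshold for another penalty's proximal operator changes nothing else in the loop, which is what makes this family attractive for the non-differentiable penalties the excerpt mentions.
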