2013 IEEE International Conference on Acoustics, Speech and Signal Processing
DOI: 10.1109/icassp.2013.6639260
Real-time implementations of sparse linear prediction for speech processing

Cited by 12 publications (16 citation statements)
References 19 publications
“…[39, §11.8.2]. However, the reweighting destroys the Toeplitz structure and we need to resort to more general linear solvers like Cholesky factorization with time-complexity O(N³), as in [30].…”
Section: An ADMM Algorithm For The SLP Problem (mentioning, confidence: 99%)
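For context, a minimal sketch (NumPy/SciPy, with illustrative variable names) of the contrast this statement draws: the classical LP normal equations are symmetric Toeplitz and admit a fast Levinson-type solver, whereas the reweighted system loses that structure and falls back to a generic O(N³) Cholesky solve.

    import numpy as np
    from scipy.linalg import solve_toeplitz, cho_factor, cho_solve

    def lp_coeffs_toeplitz(r):
        # Classical LP normal equations R a = r(1..p): R is symmetric Toeplitz
        # (built from autocorrelation lags r(0..p)), so a Levinson-type solver
        # applies.
        return solve_toeplitz(r[:-1], r[1:])

    def lp_coeffs_reweighted(X, y, w):
        # One reweighted least-squares step: solve (X^T W X) a = X^T W y.
        # The diagonal weights W destroy the Toeplitz structure, so a generic
        # Cholesky factorization with O(N^3) complexity is used instead.
        A = X.T @ (w[:, None] * X)
        b = X.T @ (w * y)
        c, low = cho_factor(A)
        return cho_solve((c, low), b)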
“…This paper deals with understanding the trade-offs occurring in choosing a proper and fixed number of iterations for the ADMM algorithm and extends the analysis and algorithms presented in [30,38]. We note that we will apply an ADMM algorithm in its straightforward form but several variants and extensions may be useful for solving the sparse linear prediction problem efficiently and may be considered for further investigations.…”
Section: Introduction (mentioning, confidence: 99%)
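As a companion to this statement, here is a minimal sketch (NumPy only, with illustrative names and parameter values) of a straightforward ADMM scheme for a sparse LP problem of the form minimize ||y - Xa||_1 + gamma*||a||_1, run for a fixed number of iterations; it is not claimed to be the exact algorithm of [30,38].

    import numpy as np

    def soft(v, t):
        # Element-wise soft-thresholding: the proximal operator of the 1-norm.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def slp_admm(X, y, gamma=0.1, rho=1.0, n_iter=50):
        # Splitting: z = [X a - y; a], so the objective becomes
        # ||z[:N]||_1 + gamma*||z[N:]||_1 subject to F a - g = z.
        N, K = X.shape
        F = np.vstack([X, np.eye(K)])            # stacked operator [X; I]
        g = np.concatenate([y, np.zeros(K)])     # stacked target   [y; 0]
        # Cache the Cholesky factor of F^T F = X^T X + I used in every a-update.
        L = np.linalg.cholesky(X.T @ X + np.eye(K))
        z = np.zeros(N + K)
        u = np.zeros(N + K)
        for _ in range(n_iter):                  # fixed iteration budget
            rhs = F.T @ (g + z - u)              # a-update: least-squares step
            a = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
            v = F @ a - g + u
            # z-update: separate thresholds for residual and coefficient parts.
            z = np.concatenate([soft(v[:N], 1.0 / rho), soft(v[N:], gamma / rho)])
            u = v - z                            # scaled dual-variable update
        return a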
“…This optimisation problem can be solved by various methods (e.g. interior-point methods, the iterative re-weighted ℓ1-norm minimisation algorithm, etc.). Here, a primal-dual interior-point method is used to solve this optimisation problem [12], because it is very efficient for solving convex optimisation problems in real-time applications. A database is created using the sparse LP coefficient matrix obtained from the training speech samples.…”
Section: A. Feature Extraction (mentioning, confidence: 99%)
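For illustration, a minimal sketch of posing the 1-norm sparse LP problem as a convex program and handing it to an interior-point-based conic solver through CVXPY; the framing of the data, the prediction order, and the regularisation weight are assumptions for the example, not details taken from [12].

    import numpy as np
    import cvxpy as cp

    def sparse_lp_features(frame, order=20, gamma=0.1):
        # Sparse LP coefficients for one speech frame:
        #   minimize ||y - X a||_1 + gamma * ||a||_1.
        N = len(frame)
        # Row n of X holds the previous `order` samples used to predict y[n].
        X = np.column_stack(
            [frame[order - k - 1:N - k - 1] for k in range(order)])
        y = frame[order:]
        a = cp.Variable(order)
        cost = cp.norm1(y - X @ a) + gamma * cp.norm1(a)
        # CVXPY's default conic solvers (e.g. ECOS, Clarabel) are
        # interior-point methods.
        cp.Problem(cp.Minimize(cost)).solve()
        return a.value

Stacking the coefficient vectors returned for each training frame would then give the sparse LP coefficient matrix from which the database described above is built.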