2018
DOI: 10.1109/tsp.2018.2872886

Bilinear Matrix Factorization Methods for Time-Varying Narrowband Channel Estimation: Exploiting Sparsity and Rank

Cited by 16 publications (23 citation statements) | References 32 publications
“…To show the low-rank structure of the UWA channel, we first consider the case in which the delay and Doppler are separable in the spreading function; then $S_h(\tau,\nu)$ can be expressed as $S_h(\tau,\nu)=\sum_{p=1}^{P} f_p(\tau)\, g_p(\nu)$, where $f_p(\tau)$ and $g_p(\nu)$ describe the delay and Doppler profiles of the $p$-th dominant path, respectively. For the sparse channel models considered in the literature, $f_p(\tau)$ and $g_p(\nu)$ are usually assumed to be sharply peaked functions with a small spread, such as the Dirac delta function or the sinc function [10,11,16]. In this paper, we do not impose such a restriction, and the model in (9) generally accounts for large dispersion spreads.…”
Section: Proposed Methods (mentioning)
confidence: 99%
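The separability argument above can be checked numerically. The sketch below is only an illustration under assumed Gaussian per-path profiles, not code from the cited paper: it samples a separable spreading function $S_h(\tau,\nu)=\sum_{p=1}^{P} f_p(\tau)\,g_p(\nu)$ on a delay-Doppler grid and confirms that the resulting matrix has rank at most $P$, which is the low-rank structure being exploited.

```python
import numpy as np

# Numerical check (illustrative only): a spreading function that is separable
# over P dominant paths, sampled on any delay-Doppler grid, yields a matrix of
# rank at most P, even when the per-path profiles have large dispersion spreads.
rng = np.random.default_rng(0)
P = 3                                   # number of dominant paths (assumed)
taus = np.linspace(0.0, 1.0, 64)        # delay grid (arbitrary units)
nus = np.linspace(-0.5, 0.5, 48)        # Doppler grid (arbitrary units)

# Hypothetical smooth profiles (Gaussian bumps rather than Dirac/sinc shapes).
centers_t = rng.uniform(0.2, 0.8, P)
centers_v = rng.uniform(-0.3, 0.3, P)
F = np.exp(-((taus[:, None] - centers_t[None, :]) ** 2) / 0.01)   # 64 x P
G = np.exp(-((nus[:, None] - centers_v[None, :]) ** 2) / 0.005)   # 48 x P

S = F @ G.T                             # sampled spreading function, 64 x 48
print(np.linalg.matrix_rank(S))         # -> at most P (here 3)
```

In this picture, estimating the channel amounts to recovering the two thin factors F and G (up to scaling ambiguities) rather than every entry of S, which is the intuition behind low-rank and bilinear factorization formulations.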
“…One well-known scheme is the least-squares (LS) estimator. However, because it does not exploit any prior information about the channel, the LS estimator usually requires many training-signal measurements to achieve good estimation accuracy, especially when the number of unknown parameters is large [10]. Therefore, various channel estimation schemes have been proposed that exploit the sparsity of the channel, in which the technique of compressed sensing (CS) plays an important role [11].…”
Section: Introduction (mentioning)
confidence: 99%
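As a minimal numerical sketch of the contrast drawn above, assuming random pilots and a hypothetical sparse tap vector rather than the setup of any cited paper, the code below compares a plain least-squares estimate, which is underdetermined when measurements are scarce, with a simple sparsity-exploiting greedy estimator (orthogonal matching pursuit).

```python
import numpy as np

# With M << N measurements, the minimum-norm LS solution cannot identify an
# N-tap channel, whereas exploiting sparsity recovers the K active taps.
rng = np.random.default_rng(1)
N, M, K = 128, 32, 4                       # taps, measurements, nonzero taps (assumed)
h = np.zeros(N)
h[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
A = rng.standard_normal((M, N)) / np.sqrt(M)   # random pilot/measurement matrix (assumed)
y = A @ h

h_ls = np.linalg.lstsq(A, y, rcond=None)[0]    # minimum-norm LS solution

def omp(A, y, K):
    """Tiny orthogonal matching pursuit: greedily pick K columns of A."""
    support, r = [], y.copy()
    for _ in range(K):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        r = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

h_omp = omp(A, y, K)
print(np.linalg.norm(h - h_ls) / np.linalg.norm(h))    # large relative error
print(np.linalg.norm(h - h_omp) / np.linalg.norm(h))   # essentially zero (noiseless)
```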
“…Channel leakage has been addressed by enhancing classical sparse approximation [3,10,11]. Distinct from these approaches, [12] exploits the atomic norm heuristic [13,14] to promote structure while explicitly accounting for the leakage, leading to strong performance improvements over [10]. Extensions of [12] to the leaked orthogonal frequency-division multiplexing channel are presented in [15]–[17].…”
Section: Introduction (mentioning)
confidence: 99%
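To make the notion of channel leakage concrete, the toy example below (an assumption-level sketch, not the model of the works cited above) shows how a single path whose delay falls between grid points spreads its energy across many on-grid taps, so a strictly on-grid sparse model is no longer truly sparse.

```python
import numpy as np

# A unit-gain path with delay tau, represented on a uniform N-tap (DFT) grid.
N = 64
k = np.arange(N)

def grid_taps(tau):
    H = np.exp(-2j * np.pi * k * tau / N)   # frequency response of the path
    return np.fft.ifft(H)                   # back to the tap (delay) domain

for tau in (10.0, 10.5):                     # on-grid vs. half-a-tap off-grid delay
    h = grid_taps(tau)
    n_big = np.sum(np.abs(h) > 0.05)         # taps carrying non-negligible energy
    print(f"tau = {tau:4.1f}: {n_big} significant taps")
# on-grid delay -> 1 significant tap; off-grid delay -> many taps (leakage)
```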
“…Additionally, sparse Bayesian learning (SBL) based methods, which assume a sparse-signal prior, have also been proposed [43], such as the SBL and OGSBI methods [44]; these can achieve excellent performance at relatively high computational complexity. Moreover, unlike traditional sparse-reconstruction methods that use a discretized dictionary matrix, atomic-norm-based methods realize sparse reconstruction in the continuous domain [45–47]. In [48], a semidefinite programming-based method is proposed for ℓ1-norm optimization over infinite dictionaries.…”
Section: Introduction (mentioning)
confidence: 99%
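As background for the gridless, atomic-norm-based line of work cited above, the following is the standard semidefinite characterization of the atomic norm over the continuous dictionary of complex sinusoids, as popularized in the line-spectral-estimation literature; it is included only as context, not as the construction used in any specific reference here. For $x \in \mathbb{C}^n$ and atoms $a(f) = [1, e^{j2\pi f}, \ldots, e^{j2\pi f(n-1)}]^{\mathsf T}$:

```latex
\|x\|_{\mathcal{A}}
  = \inf\Big\{ \textstyle\sum_k c_k \;:\; x = \textstyle\sum_k c_k\, e^{j\phi_k} a(f_k),\ c_k \ge 0 \Big\}
  = \inf_{u,\,t}\Big\{ \tfrac{1}{2n}\operatorname{Tr}\big(\mathrm{Toep}(u)\big) + \tfrac{t}{2}
      \;:\; \begin{bmatrix} \mathrm{Toep}(u) & x \\ x^{\mathsf{H}} & t \end{bmatrix} \succeq 0 \Big\},
```

where $\mathrm{Toep}(u)$ is the Hermitian Toeplitz matrix with first column $u$. Minimizing this quantity subject to a data-fidelity constraint is the usual way such continuous-domain, ℓ1-style sparse reconstructions are made tractable as semidefinite programs.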