Digital Front-End in Wireless Communications and Broadcasting, 2011
DOI: 10.1017/cbo9780511744839.007

General principles and design overview of digital predistortion

Cited by 32 publications (18 citation statements)
References 23 publications
“…Following a closed-loop error minimization technique as in [18], the coefficients can be extracted iteratively by finding the LS solution. At the nth iteration (i.e., when considering buffers of N data samples), the coefficients are obtained from the buffered data, where G_0 determines the desired linear gain of the PA, and y and u are the N × 1 vectors of the PA output and the transmitted input, respectively.…”
Section: Digital Predistortion Adaptation Path
confidence: 99%
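As a rough illustration of the closed-loop LS extraction this statement refers to (the original update equation is not reproduced in the excerpt), the sketch below performs one damped LS update over an N-sample buffer. The memory-polynomial regressor `mp_basis`, the step size `mu`, and the chosen orders are illustrative assumptions, not the formulation of the cited work.

```python
import numpy as np

def mp_basis(u, K=5, M=3):
    """Illustrative memory-polynomial regressors: columns |u[n-m]|^(k-1) * u[n-m] for odd k."""
    N = len(u)
    cols = []
    for m in range(M):
        um = np.concatenate([np.zeros(m, dtype=complex), u[:N - m]])
        for k in range(1, K + 1, 2):                     # odd nonlinearity orders only
            cols.append((np.abs(um) ** (k - 1)) * um)
    return np.column_stack(cols)

def closed_loop_ls_update(w, u, y, G0=1.0, mu=0.5):
    """One closed-loop iteration over an N-sample buffer.
    u: transmitted input, y: measured PA output, G0: desired linear gain."""
    U = mp_basis(u)
    e = u - y / G0                                       # linearization error
    dw, *_ = np.linalg.lstsq(U, e, rcond=None)           # LS solution for the correction
    return w + mu * dw                                   # damped coefficient update
```

In a setup like this, the coefficient vector would typically start at zero (or at a pass-through predistorter) and be refined over successive captured buffers.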
“…This is another cross-term, which is similar to the envelope memory polynomial [3] or the gain polynomial model [30]. We define it as the 2nd-order type-2 term.…”
Section: Higher-order Extension and Variations
confidence: 99%
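For context, a minimal sketch of an envelope-memory-polynomial-style cross term (current sample times delayed envelope) is shown below; the function name `envelope_cross_terms` and the delay depth `M` are illustrative, and the exact 2nd-order type-2 term defined in the citing paper may differ.

```python
import numpy as np

def envelope_cross_terms(u, M=3):
    """Second-order cross terms u[n] * |u[n-m]|, m = 1..M-1."""
    N = len(u)
    cols = []
    for m in range(1, M):
        um = np.concatenate([np.zeros(m, dtype=complex), u[:N - m]])
        cols.append(u * np.abs(um))                      # current sample times delayed envelope
    return np.column_stack(cols)
```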
“…GMP has been chosen since it outperformed MP and DDR-Volterra in our scenario. For parameter estimation, our DPD scheme followed the direct learning (DL) approach [8], using the linear least squares (LS) solution as the estimation method because, while the DPD function is nonlinear, it is linear in the parameters. DL was chosen instead of the simpler and widely used indirect learning architecture because it performs better [9], [10].…”
Section: Preliminary Digital Linearization Architecture
confidence: 99%
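The sketch below builds a GMP regressor matrix in the standard aligned-plus-lagging-cross-term form; the orders `K`, `M`, `L` and the helper names are assumptions, not the configuration used in the citing work. Because the model is linear in its coefficients, each direct-learning iteration reduces to a linear LS fit, as in the closed-loop update sketched earlier.

```python
import numpy as np

def gmp_basis(u, K=5, M=2, L=1):
    """Generalized memory polynomial regressors: aligned terms u[n-m]*|u[n-m]|^k
    plus lagging envelope cross terms u[n-m]*|u[n-m-l]|^k."""
    N = len(u)

    def delay(x, d):
        return np.concatenate([np.zeros(d, dtype=complex), x[:N - d]]) if d else x

    cols = []
    for m in range(M + 1):
        um = delay(u, m)
        for k in range(K):
            cols.append(um * np.abs(um) ** k)            # aligned (MP-style) terms
        for l in range(1, L + 1):
            env = np.abs(delay(u, m + l))
            for k in range(1, K):
                cols.append(um * env ** k)               # lagging envelope cross terms
    return np.column_stack(cols)
```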