2002
DOI: 10.1109/tsp.2002.1011217

An error-entropy minimization algorithm for supervised training of nonlinear adaptive systems

Abstract: This paper investigates error-entropy minimization in adaptive system training. We prove the equivalence between minimization of the error's Renyi entropy of order α and minimization of a Csiszar distance measure between the densities of the desired and system outputs. A nonparametric estimator for Renyi's entropy is presented, and it is shown that the global minimum of this estimator is the same as that of the actual entropy. The performance of the error-entropy-minimization criterion is compared with mean-square-error-minimi…
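The nonparametric estimator mentioned in the abstract is commonly stated, in the quadratic (α = 2) case, as the negative log of an "information potential" built from pairwise Gaussian kernels over the error samples. A minimal sketch of that quadratic special case follows; the function name and the kernel width `sigma` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def renyi_quadratic_entropy(errors, sigma=0.5):
    """Estimate Renyi's quadratic entropy (alpha = 2) of the error samples
    with a Parzen window and a Gaussian kernel; `sigma` is an assumed width.

    H2(e) = -log V(e),  V(e) = (1/N^2) * sum_i sum_j G_{sigma*sqrt(2)}(e_i - e_j),
    where V is the information potential of the samples.
    """
    e = np.asarray(errors, dtype=float)
    n = e.size
    diff = e[:, None] - e[None, :]      # all pairwise error differences
    s2 = 2.0 * sigma**2                 # variance of the convolved kernel
    kernel = np.exp(-diff**2 / (2.0 * s2)) / np.sqrt(2.0 * np.pi * s2)
    return -np.log(kernel.sum() / n**2)

# A tighter error distribution yields a lower entropy estimate:
rng = np.random.default_rng(0)
h_narrow = renyi_quadratic_entropy(rng.normal(0, 0.1, 500))
h_wide = renyi_quadratic_entropy(rng.normal(0, 1.0, 500))
```

Because the estimator only needs pairwise differences, it requires no explicit density estimate of the error, which is what makes it usable as a training criterion.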

Cited by 318 publications (175 citation statements) | References 15 publications
“…In fact, the minimization of MSE takes only the second-order moment of the error distribution into consideration, which is optimal only for Gaussian distributed errors. In cases where the error distribution is not Gaussian, it makes sense to study alternate cost functions for adaptation [15]. Here we take a different approach using information-theoretical concepts, and propose the error-entropy criterion.…”
Section: Figure 4 Block Diagram of the Proposed Method (mentioning)
confidence: 99%
“…Parzen window method using a Gaussian kernel [15,16]. The study in [15] demonstrated that the error samples of ITL-trained systems exhibit a more concentrated density function than those of MSE-trained systems.…”
Section: Test Phase Readjustment for Sensor Drift Compensation (mentioning)
confidence: 99%
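The concentration claim in the excerpt above can be checked with a plain Parzen-window density estimate over error samples. In this hedged sketch, the function name and bandwidth are illustrative, and two synthetic Gaussian samples merely stand in for ITL- and MSE-trained residuals:

```python
import numpy as np

def parzen_density(samples, x, sigma=0.1):
    """Parzen-window density estimate of `samples` evaluated at points `x`,
    using a Gaussian kernel of width `sigma` (an illustrative choice)."""
    samples = np.asarray(samples, dtype=float)
    x = np.asarray(x, dtype=float)
    diff = x[:, None] - samples[None, :]
    k = np.exp(-diff**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    return k.mean(axis=1)

# Synthetic stand-ins: concentrated errors (ITL-like) vs. broader (MSE-like).
rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 201)
p_concentrated = parzen_density(rng.normal(0, 0.2, 1000), x)
p_broad = parzen_density(rng.normal(0, 0.6, 1000), x)
```

The more concentrated error sample produces a visibly higher density peak around zero, which is the qualitative behavior the citing study reports.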
“…We next consider the system identification of a moving-average model with a 9th-order transfer function, using the minimization of the error entropy [10]. Although the true advantage of MEE is for nonlinear system identification with nonlinear filters, here the goal is to compare adaptation accuracy and speed, so we elected to use a linear plant and an FIR adaptive filter with the same order as the plant (zero achievable error).…”
Section: System Identification (mentioning)
confidence: 99%
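The experiment described above can be sketched as stochastic gradient ascent on the information potential of the error, which is equivalent to minimizing its quadratic Renyi entropy. The plant coefficients, step size, kernel width, and window length below are illustrative assumptions, not the settings of the cited study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative (not the paper's) plant: an unknown FIR filter to identify.
plant = np.array([0.4, -0.3, 0.2])
M = plant.size

x = rng.normal(size=2000)                                    # white input
X = np.array([x[i:i + M][::-1] for i in range(len(x) - M)])  # regressor rows
d = X @ plant                                                # plant output

w = np.zeros(M)                       # adaptive FIR weights
sigma, mu, batch = 0.5, 0.1, 100      # kernel width, step size, window length
s2 = 2.0 * sigma**2                   # squared width of the pairwise kernel

for epoch in range(5):
    for start in range(0, len(X) - batch, batch):
        Xb = X[start:start + batch]
        e = d[start:start + batch] - Xb @ w     # error samples in the window
        diff = e[:, None] - e[None, :]          # e_i - e_j
        G = np.exp(-diff**2 / (2.0 * s2))       # Gaussian kernel values
        # Gradient of the information potential w.r.t. w
        # (d e_i / d w = -x_i, so each pair contributes (e_i - e_j)(x_i - x_j)):
        pair = Xb[:, None, :] - Xb[None, :, :]  # x_i - x_j
        grad = (G[:, :, None] * diff[:, :, None] * pair).sum(axis=(0, 1))
        grad /= s2 * batch**2
        w += mu * grad                          # ascend the potential
```

Because the plant and the adaptive filter have the same order, the error can be driven to zero, and under this sketch's settings the weights approach the plant coefficients.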