2002
DOI: 10.1016/s0165-0114(01)00241-x

Training fuzzy systems with the extended Kalman filter

Abstract: The generation of membership functions for fuzzy systems is a challenging problem. We show that for Mamdani-type fuzzy systems with correlation-product inference, centroid defuzzification, and triangular membership functions, optimizing the membership functions can be viewed as an identification problem for a nonlinear dynamic system. This identification problem can be solved with an extended Kalman filter. We describe the algorithm and compare it with gradient descent and with adaptive neuro-fuzzy inference system…
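The abstract casts membership-function tuning as nonlinear system identification solved by an extended Kalman filter. Below is a minimal Python sketch of that idea, not the paper's exact formulation: it assumes a toy one-input system with three fixed-width triangular memberships whose peaks are the parameters to estimate, fixed consequent centroids, and a finite-difference Jacobian; all names and constants are illustrative.

```python
import numpy as np

def tri(x, a, b, c):
    # Triangular membership: rises on [a, b], falls on [b, c].
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_out(theta, x, width=1.5, y_cents=(-1.0, 0.0, 1.0)):
    # Toy Mamdani-style system: theta holds the triangle peaks.
    # Correlation-product inference weights each consequent centroid by its
    # firing strength; centroid defuzzification is the weighted average.
    w = np.array([tri(x, b - width, b, b + width) for b in theta])
    s = w.sum()
    return float(w @ np.asarray(y_cents) / s) if s > 0 else 0.0

def ekf_train(data, theta, P0=1.0, R=0.05, Q=1e-6):
    # Treat theta as the state of a "dynamic system" with identity dynamics
    # (theta_{k+1} = theta_k) and measurement y_k = fuzzy_out(theta_k, x_k).
    n = len(theta)
    P = P0 * np.eye(n)                       # parameter covariance
    for x, y in data:
        eps = 1e-5                           # finite-difference Jacobian H
        h0 = fuzzy_out(theta, x)
        H = np.array([(fuzzy_out(theta + eps * e, x) - h0) / eps
                      for e in np.eye(n)]).reshape(1, n)
        S = float(H @ P @ H.T) + R           # innovation covariance
        K = (P @ H.T) / S                    # Kalman gain, shape (n, 1)
        theta = theta + (K * (y - h0)).ravel()
        P = (np.eye(n) - K @ H) @ P + Q * np.eye(n)
    return theta

# Toy usage: recover triangle peaks near (-1, 0, 1) from noisy samples.
rng = np.random.default_rng(0)
true_theta = np.array([-1.0, 0.0, 1.0])
xs = rng.uniform(-2.0, 2.0, 200)
data = [(x, fuzzy_out(true_theta, x) + 0.05 * rng.standard_normal()) for x in xs]
print(ekf_train(data, theta=np.array([-0.5, 0.2, 0.7])))
```

The paper derives analytic gradients for this setting; finite differences are used here only to keep the sketch short.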

Cited by 132 publications (76 citation statements)
References 35 publications
“…Gradient descent can be further improved by using an adaptive learning rate and momentum term (Nauck, Klawonn, & Kruse, 1997). Kalman filtering is a gradient-based method that can give better fuzzy system and neural network training results than gradient descent (Simon, 2002a, 2002b). Constrained Kalman filtering can further improve fuzzy system results by optimally constraining the network parameters (Simon, 2002c).…”
Section: Fine Tuning Using Gradient Information
mentioning
confidence: 99%
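The excerpt above pairs gradient descent with a momentum term and an adaptive learning rate. Here is a hedged sketch of that combination, assuming the common grow/shrink step heuristic (grow the rate while the error falls, cut it back after an increase); the grow/shrink factors and the quadratic stand-in loss are illustrative, not taken from the cited sources.

```python
import numpy as np

def gd_momentum_adaptive(loss, grad, theta, lr=0.05, mom=0.9,
                         grow=1.1, shrink=0.5, steps=200):
    # Momentum accumulates a velocity; the learning rate grows while the
    # loss keeps decreasing and is cut back after any increase.
    v = np.zeros_like(theta)
    prev = loss(theta)
    for _ in range(steps):
        v = mom * v - lr * grad(theta)
        theta = theta + v
        cur = loss(theta)
        lr *= grow if cur < prev else shrink   # adaptive learning rate
        prev = cur
    return theta

# Toy quadratic as a stand-in for a fuzzy system's training error.
loss = lambda t: float(np.sum((t - 3.0) ** 2))
grad = lambda t: 2.0 * (t - 3.0)
print(gd_momentum_adaptive(loss, grad, np.zeros(2)))  # approaches [3, 3]
```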
“…EKF improves over gradient descent because it introduces an adaptive learning gain (the Kalman gain) that depends on the uncertainty of the network prediction, thus ensuring smooth convergence. The EKF has also been applied to tuning the membership functions in Mamdani-type fuzzy systems with correlation inference [19]. There, estimating the optimal membership parameters is shown to be equivalent to a nonlinear dynamic system identification problem, which is solved by applying the EKF.…”
Section: KF as an Estimation Algorithm
mentioning
confidence: 99%
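To make the "adaptive learning gain" remark concrete: in the scalar case the Kalman gain K = PH/(HPH + R) shrinks as the parameter variance P shrinks, so the update step automatically decays as the estimate becomes more certain. The numbers below are made up for illustration.

```python
# Scalar EKF measurement update: the gain K acts as an adaptive step size.
P, R = 1.0, 0.1          # prior parameter variance, measurement noise
theta, H = 0.0, 1.0      # scalar parameter estimate and Jacobian
for y in (0.9, 1.1, 1.0, 0.95):       # noisy observations of the parameter
    K = P * H / (H * P * H + R)       # Kalman gain
    theta += K * (y - H * theta)      # uncertainty-weighted update
    P = (1.0 - K * H) * P             # posterior variance decreases
    print(f"K={K:.3f}  theta={theta:.3f}  P={P:.4f}")
```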
“…Overall, the existing training algorithms can be classified into derivative-based and derivative-free algorithms. The derivative-based methods include the backpropagation algorithm (Wang, 1992a), the simplex method (Egusa, 1995), least squares (Wang, 1992b), an adaptive learning algorithm (Lin, 1999), and the extended Kalman filter method (Simon, 2002). Derivative-based methods require the objective function to be differentiable and tend to converge to local minima.…”
mentioning
confidence: 99%
“…Derivative-based methods require the objective function to be differentiable and tend to converge to local minima. They are limited to specific objective functions, specific types of inference, and specific types of membership functions (Simon, 2002). The derivative-free methods that have been used include clustering algorithms (Chen, 1998), neural networks (Figueiredo, 1999), genetic algorithms (Chen, 2003), cell mapping (Smith, 1991), fuzzy equivalence relations (Wu, 1999), and inductive learning algorithms (Castro, 1997).…”
mentioning
confidence: 99%