2019
DOI: 10.1016/j.asoc.2019.01.005

Self-boosting first-order autonomous learning neuro-fuzzy systems

Abstract: In this paper, a detailed mathematical analysis of the optimality of the premise and consequent parts of the recently introduced first-order Autonomous Learning Multi-Model (ALMMo) neuro-fuzzy system is conducted. A novel self-boosting algorithm for structure- and parameter-optimization is, then, introduced to the ALMMo, which results in the self-boosting ALMMo (SBALMMo) neuro-fuzzy system. By minimizing the objective functions with the previously collected data, the SBALMMo is able to optimize its system struc…

Cited by 19 publications (20 citation statements). References 40 publications (104 reference statements).
“…However, the system performance will deteriorate significantly if η_o is set too large since the system forgets the learned knowledge from historical data rather rapidly. The influence of Ω_o and η_o has been analyzed and verified through numerical examples in [7], [28]. The recommended values of Ω_o and η_o are 10 and 0.1, respectively.…”
Section: Stage 7, Rule Base Updating
confidence: 99%
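The excerpt does not reproduce the update equations, but the behavior it describes (a larger η_o discounting historical data faster) matches a recursive least-squares consequent update with exponential forgetting. Below is a minimal sketch under that assumption; the names omega and eta, and the mapping lam = 1 - eta, are illustrative guesses, not taken from the paper:

```python
import numpy as np

class ForgettingRLS:
    """Recursive least squares with exponential forgetting.

    Hypothetical reading of the excerpt: eta acts as a forgetting
    rate, so the effective forgetting factor is lam = 1 - eta, and
    omega scales the initial covariance. A larger eta discounts
    historical data faster, matching the quoted warning that a
    too-large eta makes the system forget learned knowledge rapidly.
    """

    def __init__(self, dim, omega=10.0, eta=0.1):
        self.theta = np.zeros(dim)       # consequent parameters
        self.P = omega * np.eye(dim)     # initial covariance: omega * I
        self.lam = 1.0 - eta             # assumed forgetting factor

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        gain = Px / (self.lam + x @ Px)              # Kalman-style gain
        self.theta += gain * (y - x @ self.theta)    # innovation update
        self.P = (self.P - np.outer(gain, Px)) / self.lam
        return self.theta
```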
“…However, this is, again, a "one pass" approach and it lacks a theoretical proof of optimality. In [28], the optimality of EISs is systematically studied, and two algorithms are introduced for optimizing the antecedent and consequent parts, respectively. Nonetheless, the prediction accuracy is only improved marginally on certain problems after system optimization.…”
Section: Background, A. EISs
confidence: 99%
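For context, the abstract describes the self-boosting step as minimizing objective functions over previously collected data. A common way to make the consequent part optimal on a fixed data set is locally weighted least squares per rule; the sketch below illustrates that generic idea, not the paper's exact algorithm (all names and the weighting scheme are assumptions):

```python
import numpy as np

def optimize_consequents(X, y, firing):
    """Re-fit first-order TSK rule consequents on collected data.

    X      : (n, d) inputs;  y : (n,) targets
    firing : (n, R) normalized firing levels of the R rules

    For each rule r this solves the weighted least-squares problem
        min_a  sum_i firing[i, r] * (y[i] - [1, x_i] @ a)**2,
    the classical optimality condition for TSK consequents on a
    fixed data set.
    """
    n, d = X.shape
    Xe = np.hstack([np.ones((n, 1)), X])      # affine design: [1, x]
    A = []
    for r in range(firing.shape[1]):
        w = firing[:, r]
        G = Xe.T @ (Xe * w[:, None])          # X^T W X
        b = Xe.T @ (w * y)                    # X^T W y
        A.append(np.linalg.solve(G + 1e-8 * np.eye(d + 1), b))
    return np.stack(A)                        # (R, d+1) consequents
```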
“…• Layer 2: The firing level of each rule Rule_r is computed, by (3).
• Layer 3: The normalized firing levels of the rules are computed, by (5).
• Layer 4: Each normalized firing level is multiplied by its corresponding rule consequent.…”
Section: E. Comparison With ANFIS
confidence: 99%
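A minimal sketch of the layered first-order TSK forward pass that the quoted comparison walks through. Equations (3) and (5) are not shown in the excerpt, so Gaussian membership functions with a product t-norm stand in as common assumptions:

```python
import numpy as np

def tsk_forward(x, centers, sigmas, consequents):
    """Layered first-order TSK inference, following the quote above.

    x           : (d,) input
    centers     : (R, d) Gaussian MF centers, one row per rule
    sigmas      : (R, d) Gaussian MF widths
    consequents : (R, d+1) affine consequent parameters [bias, weights]
    """
    # Layer 2: firing level of each rule (product t-norm over Gaussian MFs)
    mu = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)    # (R, d)
    f = mu.prod(axis=1)                                   # (R,)
    # Layer 3: normalized firing levels
    f_norm = f / f.sum()
    # Layer 4: each normalized firing level times its rule consequent
    y_r = consequents[:, 0] + consequents[:, 1:] @ x      # (R,)
    # Output: sum of the weighted consequents
    return float(f_norm @ y_r)
```

The final output is the firing-level-weighted sum of the per-rule affine models, which is the standard ANFIS-style defuzzification.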
“…There are generally three different strategies for optimizing a TSK fuzzy system in supervised regression¹: 1) Evolutionary algorithms [6], in which each set of the parameters of the antecedent membership functions (MFs) and the consequents is encoded as an individual in a population, and genetic operators, such as selection, crossover, mutation, and reproduction, are used to produce the next generation.…”
(Footnote 1: Some novel approaches for optimizing evolving fuzzy systems have also been proposed recently [4], [5]; however, they are not the focus of this paper, so their details are not included.)
Section: Introduction
confidence: 99%
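As a rough illustration of strategy 1), the loop below evolves a flat parameter vector with truncation selection, uniform crossover, and Gaussian mutation. Every operator choice and name here is illustrative; the quote only fixes the general encode-and-evolve scheme:

```python
import numpy as np

def evolve_mf_parameters(fitness, dim, pop_size=40, gens=100,
                         elite=2, mut_sigma=0.1, rng=None):
    """Toy evolutionary loop in the spirit of strategy 1) above.

    Each individual is a flat vector encoding antecedent MF and
    consequent parameters; `fitness` maps such a vector to a score
    to maximize (e.g., negative regression error).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]             # best first
        parents = pop[order[:pop_size // 2]]         # truncation selection
        children = []
        while len(children) < pop_size - elite:
            i, j = rng.integers(len(parents), size=2)
            mask = rng.random(dim) < 0.5             # uniform crossover
            child = np.where(mask, parents[i], parents[j])
            child = child + rng.normal(0, mut_sigma, dim)  # mutation
            children.append(child)
        pop = np.vstack([pop[order[:elite]], children])    # elitism
    scores = np.array([fitness(ind) for ind in pop])
    return pop[int(np.argmax(scores))]
```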