2007 IEEE International Conference on Automation and Logistics
DOI: 10.1109/ical.2007.4338973
Adaptive Inverse Induction Machine Control Based on Variable Learning Rate BP Algorithm

Abstract: Adaptive inverse control technology is applied to induction machine (IM) control. Adaptive inverse control is in fact an open-loop control scheme, so it avoids the instability problems caused by feedback control while still achieving good dynamic performance. The linear LMS technique of adaptive inverse control is extended to the MIMO, nonlinear IM using a BP neural network, and the BP algorithm is improved with a variable learning rate. Simulation stu…
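The scheme the abstract describes can be made concrete with a short sketch: a neural network is trained to model the plant's inverse and is then placed in cascade ahead of the plant as an open-loop (feedforward) controller. The Python below is a minimal illustration under assumed conditions; the static nonlinear plant, the network size, and the fixed learning rate are all stand-ins, not the induction machine model or the variable-rate BP algorithm of the paper.

import numpy as np

rng = np.random.default_rng(0)

def plant(u):
    # Hypothetical static nonlinear plant y = f(u); monotonic, so an
    # inverse mapping y -> u exists and can be learned.
    return np.tanh(0.8 * u) + 0.1 * u

# One-hidden-layer BP network approximating the inverse mapping y -> u.
W1 = rng.normal(scale=0.5, size=(8, 1)); b1 = np.zeros((8, 1))
W2 = rng.normal(scale=0.5, size=(1, 8)); b2 = np.zeros((1, 1))

def inverse_net(y):
    h = np.tanh(W1 @ y + b1)
    return W2 @ h + b2, h

lr = 0.05                                    # fixed rate for simplicity
for _ in range(5000):
    u = rng.uniform(-2.0, 2.0, size=(1, 1))  # excitation input
    y = plant(u)                             # observed plant output
    u_hat, h = inverse_net(y)                # network's inverse estimate
    e = u_hat - u                            # inverse-modeling error
    gh = (W2.T @ e) * (1.0 - h**2)           # backprop through tanh
    W2 -= lr * (e @ h.T); b2 -= lr * e       # output-layer update
    W1 -= lr * (gh @ y.T); b1 -= lr * gh     # hidden-layer update

# Open-loop control: feed the desired output through the learned inverse.
y_desired = np.array([[0.5]])
u_cmd, _ = inverse_net(y_desired)
print("command:", u_cmd.item(), "-> plant output:", plant(u_cmd).item())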

Cited by 13 publications (11 citation statements) | References 9 publications
“…Some studies [20,69] have demonstrated that current optimization approaches, such as SGD [55], Adam [40], AdamW [51], and others [19,48], affect generalization. Some previous literature finds that Adam is more vulnerable to sharp minima than SGD [65], which results in worse generalization ability [22,28,68].…”
Section: Optimizer
confidence: 99%
“…Some previous literature finds that Adam is more vulnerable to sharp minima than SGD [65], which results in worse generalization ability [22,28,68]. Some follow-up works [10,52,69,76] propose generalizable optimizers to address this problem. However, there can be a trade-off between generalization ability and convergence speed [19,38,48,69,76].…”
Section: Optimizer
confidence: 99%
“…The variable learning rate BP algorithm [37,41] adaptively adjusts the gradient-descent learning rate according to the variation of the error. The learning rate is increased if the error decreases; otherwise the adjustment is judged wrong and the step size is reduced.…”
Section: Figure 1 BP Neural Network Structure
confidence: 99%
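The rule in this excerpt translates directly into a small modification of the gradient-descent loop. Below is a minimal Python sketch of one such variable-learning-rate scheme consistent with the description; the growth and decay factors (1.05 and 0.7) and the quadratic stand-in objective are illustrative assumptions, not values taken from [37,41].

import numpy as np

def f(w):
    # Stand-in objective playing the role of the network's error.
    return float(np.sum(w ** 2))

def grad(w):
    return 2.0 * w

w = np.array([2.0, -3.0])
lr, err = 0.01, f(w)
for _ in range(100):
    w_new = w - lr * grad(w)   # ordinary gradient-descent step
    err_new = f(w_new)
    if err_new < err:          # error decreased: accept, raise the rate
        w, err = w_new, err_new
        lr *= 1.05
    else:                      # error increased: the step was too large,
        lr *= 0.7              # so reject the update and shrink the rate
print(f"final error {err:.3e}, final learning rate {lr:.4f}")

Rejecting the bad step outright, rather than merely shrinking the rate for the next iteration, is one common variant; the excerpt is agnostic on this detail.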
“…However, the convergence of the BP neural network algorithm is somewhat slow because it relies only on the first derivative of the objective function. To improve the convergence speed, one class of improved heuristic algorithms exists, such as momentum Back Propagation [35][36] and variable learning rate Back Propagation [37]; another class builds on numerical optimization methods, such as conjugate gradient Back Propagation [38] and Levenberg-Marquardt Back Propagation (LMBP) [39][40].…”
Section: Introduction
confidence: 99%
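Of the heuristic improvements this excerpt names, momentum back propagation is the simplest to show in code. In the sketch below the weight update blends the current gradient with the previous update, which damps oscillations; the momentum coefficient 0.9, the learning rate, and the quadratic loss are assumed for illustration, not taken from [35][36].

import numpy as np

def grad(w):
    # Gradient of an illustrative quadratic loss f(w) = sum(w**2).
    return 2.0 * w

w = np.array([2.0, -3.0])
v = np.zeros_like(w)           # running update ("velocity") term
lr, mu = 0.05, 0.9             # assumed learning rate and momentum
for _ in range(200):
    v = mu * v - lr * grad(w)  # mix the previous step with the new gradient
    w = w + v
print("final weights:", w)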
“…It has drawn much attention in engineering applications with its advantages of a clear physical concept and being intuitive and easy to understand (Yuan & Guo, 1994; Han et al., 2001; Dai, He, Zhang, & Zhang, 2003; Dai, 2005; Xie, Zhang, & Xiao, 2007). But solving for the inverse system model of a complex multivariable system is a bottleneck.…”
Section: Introduction
confidence: 99%