System identification has become a highly relevant problem with the increasing demand for high-quality control and prediction in various branches of science and engineering. To make inferences about system operation, we need a model that runs in real time. If the dynamic model does not adequately fit the actual behavior of the system, the estimation errors build up, leading to divergence of the recursive estimation process.

The recursive least squares (RLS) method has a central role in the recursive identification of processes. This role is attributable to its simple implementation and the availability of numerically stable algorithms. Adaptive control relies on the construction of new algorithms that exploit the good properties of the least squares method and include an additional, qualitatively new property. Thus, we construct algorithms that are capable of tracking time-dependent parameters in the presence of unobservable noise and inexact model specification. Numerical aspects guaranteeing convergence are also important [1, 2].

Stability, convergence, and the dynamic properties of the algorithm essentially depend on the method of determination of the updated covariance matrix, which provides important information about the operation of the algorithm. The RLS method cannot be used to track time-dependent parameters, because the algorithm gains tend to zero. We accordingly use modified algorithms for the estimation of time-dependent parameters, which preserve global convergence in time (the time-invariant case) while ensuring a covariance matrix with nonzero elements.

The tracking of time-dependent parameters in the exponential estimation paradigm still requires attention to the specification of the forgetting factor, which should be correlated with the variation of the unknown parameters.
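To make the discussion concrete, the standard RLS recursion with an exponential forgetting factor can be sketched as follows. This is the textbook form of the update, not the specific algorithm derived later in the article; the function and variable names are illustrative.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One update of recursive least squares with forgetting factor lam.

    theta : current parameter estimate, shape (n,)
    P     : current covariance matrix, shape (n, n)
    phi   : regressor vector, shape (n,)
    y     : new scalar measurement
    lam   : forgetting factor; lam = 1.0 gives plain RLS,
            whose gain K tends to zero as P shrinks
    """
    denom = lam + phi @ P @ phi          # scalar normalization
    K = P @ phi / denom                  # gain vector
    e = y - phi @ theta                  # a priori prediction error
    theta = theta + K * e                # parameter update
    P = (P - np.outer(K, phi @ P)) / lam # covariance update
    return theta, P
```

With lam = 1 the covariance P decreases monotonically and the gain vanishes, which is exactly why plain RLS cannot track time-dependent parameters; lam < 1 keeps the gain alive but, as noted above, lets P grow when the regressors are not persistently exciting.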
Moreover, controls must be introduced to suppress exponential growth of the covariance matrix P.

We consider standard modifications of the least squares method for tracking time-dependent parameters. In the algorithm with constant forgetting, the eigenvalues of P vary from zero to infinity. In the algorithm with correction, the trace of the matrix P remains constant and the eigenvalues are upper bounded, but the lower bound for the eigenvalues is still zero. In the algorithm with resetting of the covariance matrix to a given matrix, or to a matrix that depends on the latest values, the main problem is to choose the resetting point. The algorithm with modification of the covariance matrix adds a positive definite matrix to the updated covariance matrix; this ensures a lower bound on the minimum eigenvalue of P, but the maximum eigenvalue may go to infinity. Thus, each standard modification of the least squares method achieves some desirable properties, while other properties are lost [3].

Let us consider an algorithm that keeps the trace constant and uses variable forgetting.
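The trace-correction idea mentioned above can be sketched by rescaling the covariance matrix after each forgetting update so that tr(P) stays fixed. This is an illustrative constant-trace variant under assumed values of the forgetting factor and target trace, not the particular algorithm the article goes on to derive; all names here are hypothetical.

```python
import numpy as np

def rls_const_trace(theta, P, phi, y, lam=0.98, trace_c=100.0):
    """RLS step with forgetting, followed by rescaling of P so that
    tr(P) = trace_c.  The rescaling upper-bounds the eigenvalues of P
    (they cannot exceed trace_c), but their lower bound is still zero."""
    denom = lam + phi @ P @ phi
    K = P @ phi / denom
    e = y - phi @ theta
    theta = theta + K * e
    P = (P - np.outer(K, phi @ P)) / lam
    P = P * (trace_c / np.trace(P))      # enforce constant trace
    return theta, P
```

Because tr(P) is held constant, the gain K no longer decays to zero, so the estimator retains the ability to follow slowly drifting parameters; the remaining weakness, as noted in the text, is that individual eigenvalues of P can still approach zero in poorly excited directions.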