2017
DOI: 10.1080/00207179.2017.1390256
Recursive estimation in piecewise affine systems using parameter identifiers and concurrent learning

Abstract: Piecewise affine systems constitute a popular framework for the approximation of non-linear systems and the modelling of hybrid systems. This paper addresses recursive subsystem estimation in continuous-time piecewise affine systems. Parameter identifiers are extended from continuous-time state-space models to piecewise linear and piecewise affine systems. The convergence rate of the presented identifiers is further improved using concurrent learning, which makes concurrent use of current and recorded measurements.
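The concurrent-learning idea mentioned in the abstract can be illustrated with a minimal sketch: a gradient-style parameter identifier for a linear-in-parameters model that adds an extra correction term built from a memory of recorded regressor/output pairs. The regressor, gain, and memory policy below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Hypothetical sketch of a concurrent-learning (CL) gradient identifier for
# y = theta^T phi(t); all names and tuning values are illustrative only.
theta_true = np.array([2.0, -1.0, 0.5])

def phi(t):
    # regressor vector (assumed known structure)
    return np.array([np.sin(t), np.cos(2 * t), 1.0])

theta_hat = np.zeros(3)
gamma = 0.5          # adaptation gain
dt = 0.01            # Euler step for the continuous-time update
memory = []          # recorded (phi, y) pairs reused by the CL term

for k in range(5000):
    t = k * dt
    p = phi(t)
    y = theta_true @ p
    # instantaneous gradient term driven by the current prediction error
    update = p * (y - theta_hat @ p)
    # concurrent-learning term: replay recorded measurements
    for p_j, y_j in memory:
        update += p_j * (y_j - theta_hat @ p_j)
    theta_hat += dt * gamma * update
    # store a few informative samples (simple fixed-size memory policy)
    if len(memory) < 10 and k % 100 == 0:
        memory.append((p, y))

print(np.round(theta_hat, 2))
```

The replayed memory term is what lets the estimate converge without persistent excitation of the instantaneous regressor: once the stored regressors span the parameter space, the summed term alone drives the error to zero.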

Cited by 23 publications (49 citation statements)
References 52 publications
“…Apparently, the bounded-input bounded-state stability of the switched reference model (3) can be ensured for the same switching sequences as the error switched system (40). Therefore, all the signals in (3) and (39) are bounded for the switching law σ(t) satisfying the MDADT (59).…”
Section: Part 3 (Global Practical Stability)
confidence: 99%
“…The authors of [38] designed switched CL adaptive laws under arbitrary switching signals to solve the MRAC tracking problem of a class of uncertain switched systems. In [39], parameter identifiers and CL were used to propose recursive subsystem estimation in piecewise affine systems.…”
Section: Introduction
confidence: 99%
“…In recent years, a growing effort has been observed in techniques for the identification of systems with switched affine models (Du et al, 2018; Zwart, 2019; Hojjatinia et al, 2019; Fey et al, 2020) and piecewise affine models (Barbosa et al, 2018; Schirrer et al, 2018; Lassoued and Abderrahim, 2019; Kersting and Buss, 2019; Du et al, 2020). This is because identifying a single global model able to cover many different situations can make it very complex.…”
Section: Introduction
“…Second, the optimisation problem is solved by a conventional EA and a least squares formulation, making it less susceptible to getting stuck in local minima. Since the problem studied here is a non‐convex combinatorial (NP‐hard) optimisation problem [13], solving it with an EA is much simpler than using mixed‐integer programming, as presented in [18]. This makes the proposed approach tractable for problems with large datasets, since the number of decision variables grows with the number of states and the submodel order rather than with the size of the dataset, hence avoiding problems due to initialisation and high dimensionality [26]. Finally, although the simulated examples consider only PWOE models, the proposed algorithm can also be applied to PWARX models, and is robust to output error noise.…”
Section: Introduction
confidence: 99%
“…The PWA identification procedures can be classified into five categories [13]: optimisation-based, clustering-based, algebraic, Bayesian and recursive methods. Among them, the following approaches stand out: (i) procedures based on mixed-integer programming [18]; (ii) methods based on clustering [12, 17, 34]; (iii) the Bayesian approach [20]; (iv) the bounded-error procedure [19]; (v) evolutionary approaches [28, 29] and (vi) recursive or on‐line algorithms [13, 24, 35, 36]. A simple and efficient clustering method based on GMMs, trained by the expectation‐maximisation algorithm – presented in [21] – will be used as a framework in what follows.…”
Section: Introduction
confidence: 99%