2016
DOI: 10.1016/j.arcontrol.2016.04.013

Regularization and Bayesian learning in dynamical systems: Past, present and future

Abstract: Regularization and Bayesian methods for system identification have been repopularized in recent years, and have proved to be competitive w.r.t. classical parametric approaches. In this paper we shall make an attempt to illustrate how the use of regularization in system identification has evolved over the years, starting from the early contributions in the Automatic Control as well as the Econometrics and Statistics literature. In particular we shall discuss some fundamental issues such as compound estimation p…

Cited by 55 publications (22 citation statements)
References 134 publications (291 reference statements)
“…If this idea is not taken, then we can design more general b(t) and even more general k_d. For instance, we could allow b(t) to be an arbitrary real-valued function and we could also allow k_d to be the more general exponentially convex (EC) kernel⁵ and design … (Footnote 5: for the LS kernel (12), if k_d is also a kernel, then k_d is called an exponentially convex (EC) kernel [32], and in this case k is called an exponentially convex locally stationary (ECLS) kernel [26].) Fig.…”
Section: Remark 3.3
confidence: 99%
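To make the amplitude-modulated kernel design in the quoted passage concrete, below is a minimal numerical sketch of a kernel of the form k(t, s) = b(t) b(s) k_d(t − s), with an assumed decaying amplitude b(t) and an assumed exponentially decaying stationary part k_d. These particular choices are illustrative, not the designs from [26] or [32]; the sketch simply checks that the resulting Gram matrix is (numerically) positive semidefinite, the defining property of a valid kernel.

```python
import numpy as np

def gram(ts, b, kd):
    """Gram matrix of k(t, s) = b(t) * b(s) * kd(t - s) on the grid ts."""
    B = b(ts)                                  # amplitude modulation b(t)
    T, S = np.meshgrid(ts, ts, indexing="ij")  # all pairs (t, s)
    return np.outer(B, B) * kd(T - S)

# Illustrative (assumed) choices: a decaying amplitude and an
# exponentially decaying stationary part.
b = lambda t: np.exp(-0.3 * t)           # real-valued amplitude b(t)
kd = lambda tau: np.exp(-np.abs(tau))    # stationary, exponentially decaying k_d

ts = np.linspace(0.0, 5.0, 40)
K = gram(ts, b, kd)

# A valid kernel must yield a symmetric positive semidefinite Gram matrix.
eigmin = np.linalg.eigvalsh(K).min()
print("smallest eigenvalue:", eigmin)
```

Since k_d here is itself a kernel and the amplitude enters as a diagonal scaling, positive semidefiniteness of the product is preserved, which the eigenvalue check confirms numerically.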
“…The route of adopting regularization is by no means new, see e.g., [10], [11], [1, pp. 504-505] and also [12] for a historic review, but no important progress along this route was reported until [5]. The major obstacle was that it was unclear whether or not it is possible to design the regularization to embed the prior knowledge of the LTI system to be identified.…”
confidence: 99%
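A hedged sketch of the idea in this passage: regularized least-squares estimation of a FIR model in which the penalty encodes the prior knowledge that a stable LTI impulse response decays. The decay rate, penalty weights, and simulated data below are illustrative assumptions, not the specific design of [5].

```python
import numpy as np

rng = np.random.default_rng(0)

# True FIR impulse response: exponentially decaying (a stable LTI system).
n = 20
g_true = 0.8 ** np.arange(n)

# Input/output data y(t) = sum_k g(k) u(t - k) + noise.
N = 60
u = rng.standard_normal(N + n)
Phi = np.column_stack([u[n - k - 1 : n - k - 1 + N] for k in range(n)])
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# Prior knowledge "the response decays" encoded in the regularizer:
# later taps are penalized more heavily (illustrative choice of weights).
gamma = 0.1
W = np.diag(0.8 ** (-np.arange(n)))

g_ls = np.linalg.solve(Phi.T @ Phi, Phi.T @ y)                # unregularized
g_reg = np.linalg.solve(Phi.T @ Phi + gamma * W, Phi.T @ y)   # regularized

print("LS  error:", np.linalg.norm(g_ls - g_true))
print("reg error:", np.linalg.norm(g_reg - g_true))
```

The point of the example is the mechanism, not the numbers: the weighted penalty g'Wg is how prior knowledge about the system (here, decay of the impulse response) enters the estimate.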
“…In recent years, the power of regularization has been well noted in the literature on optimization, machine learning, and system identification, e.g., Chiuso (2016); Goodfellow et al. (2016); Shalev-Shwartz (2012), for the purpose of avoiding overfitting in empirical learning. Noting that any quantum state can be represented as a trace-one positive Hermitian density matrix, which is of low rank if it is a combination of a small number of pure states, we establish the following results.…”
Section: Introduction
confidence: 99%
“…Over the past few years, the kernel-based regularization method (KRM), which was first introduced in Pillonetto & De Nicolao (2010) and then further developed in Chen et al. (2012, 2014) and Pillonetto et al. (2011), has received increasing attention in the system identification community, see e.g., Chiuso (2016); Pillonetto et al. (2014) and the references therein. It has become a complement to the classical maximum likelihood/prediction error methods (ML/PEM), Ljung (1999); Söderström & Stoica (1989), which can be justified in a couple of respects.…”
Section: Introduction
confidence: 99%
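As a rough sketch of the kernel-based regularization method (KRM) mentioned above: the regularized FIR estimate can be computed as the posterior mean under a Gaussian prior g ~ N(0, P), with P a TC ("tuned/correlated") kernel P[i, j] = c·λ^max(i, j). The hyperparameters c, λ, and the noise level are fixed by hand here for illustration, rather than estimated by marginal-likelihood maximization as in the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stable LTI system: a decaying FIR impulse response.
n, N = 25, 80
g_true = 0.7 ** np.arange(n) * np.cos(0.5 * np.arange(n))

u = rng.standard_normal(N + n)
Phi = np.column_stack([u[n - k - 1 : n - k - 1 + N] for k in range(n)])
sigma = 0.1
y = Phi @ g_true + sigma * rng.standard_normal(N)

# TC kernel: P[i, j] = c * lam ** max(i, j); hyperparameters fixed by hand.
c, lam = 1.0, 0.7
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
P = c * lam ** np.maximum(i, j)

# Regularized estimate, i.e. the posterior mean under g ~ N(0, P):
# g_hat = P Phi' (Phi P Phi' + sigma^2 I)^{-1} y
g_hat = P @ Phi.T @ np.linalg.solve(Phi @ P @ Phi.T + sigma**2 * np.eye(N), y)

print("estimation error:", np.linalg.norm(g_hat - g_true))
```

The TC kernel enforces both exponential decay (stability) and correlation between neighboring impulse-response coefficients (smoothness), which is what lets the regularized estimate behave well even when N is modest relative to the model order.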