52nd IEEE Conference on Decision and Control 2013
DOI: 10.1109/cdc.2013.6760734
Integrated pre-processing for Bayesian nonlinear system identification with Gaussian processes

Abstract: We introduce GP-FNARX: a new model for nonlinear system identification based on a nonlinear autoregressive exogenous model (NARX) with filtered regressors (F), where the nonlinear regression problem is tackled using sparse Gaussian processes (GP). We integrate data pre-processing with system identification into a fully automated procedure that goes from raw data to an identified model. Both pre-processing parameters and GP hyper-parameters are tuned by maximizing the marginal likelihood of the probabilistic m…
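The pipeline the abstract describes can be sketched in a few lines; this is only an illustrative reconstruction, not the authors' code — the filter order, cutoff, lag count, and kernel choice below are assumptions, and a standard (non-sparse) GP stands in for the sparse GP of the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def gp_fnarx_sketch(u, y, cutoff=0.1, lags=3):
    """Illustrative sketch: filtered NARX regressors + GP regression.

    cutoff and lags are hypothetical pre-processing parameters; in the
    paper they are tuned jointly with the GP hyper-parameters by
    maximizing the marginal likelihood.
    """
    # Pre-processing (F): low-pass filter the raw input/output signals.
    b, a = butter(2, cutoff)
    uf, yf = filtfilt(b, a, u), filtfilt(b, a, y)
    # NARX regressors: [y(t-1..t-lags), u(t-1..t-lags)] -> y(t).
    X = np.column_stack(
        [yf[lags - k - 1:len(yf) - k - 1] for k in range(lags)]
        + [uf[lags - k - 1:len(uf) - k - 1] for k in range(lags)])
    t = y[lags:]
    # GP regression; the log marginal likelihood is the tuning objective.
    gp = GaussianProcessRegressor(RBF() + WhiteKernel(),
                                  normalize_y=True).fit(X, t)
    return gp, gp.log_marginal_likelihood_value_
```

In the full method, one would wrap this in an optimizer over `(cutoff, lags)` as well as the kernel hyper-parameters, so the same marginal-likelihood objective drives both pre-processing and identification.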

Cited by 50 publications (30 citation statements)
References 9 publications
“…A main drawback of GP-NARX, however, is that it cannot account for the observation noise in x t , leading to the errors-in-variables problem. To address this issue, we could (i) conduct data pre-processing to remove the noise from data [221]; (ii) adopt GPs considering input noise [200]; and (iii) employ the more powerful state space models (SSM) [217] introduced below.…”
Section: E Scalable Recurrent GP
confidence: 99%
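The errors-in-variables problem this citation refers to is visible even in the linear special case: regressing on noisy observations of an autoregressive signal attenuates the estimated coefficient. A minimal toy demonstration (purely illustrative, not from the paper; all parameter values are arbitrary):

```python
import numpy as np

# Errors-in-variables toy demo: least-squares estimation of the AR(1)
# coefficient a from noisy observations x_t = s_t + e_t is biased
# toward zero, while estimation from the clean signal s_t is not.
rng = np.random.default_rng(1)
a, n = 0.9, 20000
s = np.zeros(n)
for t in range(1, n):
    s[t] = a * s[t - 1] + rng.standard_normal()
x = s + 0.5 * rng.standard_normal(n)   # observation noise on the regressor

a_clean = (s[1:] @ s[:-1]) / (s[:-1] @ s[:-1])   # ~ 0.9
a_noisy = (x[1:] @ x[:-1]) / (x[:-1] @ x[:-1])   # attenuated below 0.9
```

Pre-filtering the data (remedy (i) above) reduces the noise on the regressors before the model ever sees them, which is exactly the role of the integrated pre-processing step in GP-FNARX.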
“…A particular advantage of expressing both the regularization and the loss in the ℓ2 norm is that the solution of the corresponding optimization problem is obtained by solving a system of linear equations, and an attractive trade-off between regularization bias and variance of the estimates is present (Goethals et al, 2005). LS-SVMs are also related to Kriging (Krige, 1966) in geostatistics and Gaussian processes (GPs) in machine learning, e.g., Frigola and Rasmussen (2013) and Kocijan, Girard, Banko, and Murray-Smith (2005), whose approaches can be seen as different variants of the reproducing kernel Hilbert space (RKHS) theory based function estimators. The relation between these methods is analyzed in Pillonetto et al (2014) and Van Gestel et al (2002).…”
Section: Introduction
confidence: 99%
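The "solution via a system of linear equations" property mentioned in this citation can be sketched with kernel ridge regression, the simplest ℓ2-in-ℓ2 estimator of this family (kernel choice and parameter names below are assumptions for illustration):

```python
import numpy as np

def kernel_ridge_fit(X, y, lam=1e-2, gamma=1.0):
    """ℓ2-regularized kernel regression: the optimum solves the
    linear system (K + lam * I) alpha = y, no iterative optimization."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                     # RBF Gram matrix
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return alpha, K

# Predictions at the training inputs are then K @ alpha; lam trades
# regularization bias against variance, as the quoted passage notes.
```

GPs recover the same linear-system structure for the posterior mean, with `lam` playing the role of the noise variance, which is one way to see the RKHS connection the citation draws.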
“…Nevertheless, neither AR-based nor VAR-based models capture non-linearity. For that reason, substantial effort has been put into non-linear models for time series forecasting based on kernel methods (Chen, Wang, and Harris, 2008), ensembles (Bouchachia and Bouchachia, 2008), or Gaussian processes (Frigola and Rasmussen, 2014). Still, these approaches apply predetermined non-linearities and may fail to recognize different forms of non-linearity for different MTS.…”
Section: Introduction
confidence: 99%