2015
DOI: 10.1016/j.automatica.2015.05.012
Tuning complexity in regularized kernel-based regression and linear system identification: The robustness of the marginal likelihood estimator

Cited by 86 publications (46 citation statements)
References 29 publications
“…The parametric model, P, exhibits poor performance because it describes only crude idealizations of the actual dynamics. The algorithms based on Cross Validation (CV) perform significantly worse in the first 60 seconds than those based on Marginal Likelihood (ML) optimisation; this is not unexpected, as discussed in [53]. As expected, the nonparametric model, NP-ML, has worse generalization performance (the error is larger in the first few steps) but better adaptation capabilities than model P. The models with the best performance are SP-ML and SPK-ML because they combine the benefits of the parametric approach, i.e.…”
Section: A. Experimental Results Using Numerical Derivatives
confidence: 89%
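The quoted comparison turns on how the regularization level is tuned. As an illustration of the two criteria being contrasted, here is a minimal Python sketch, with hypothetical data and names not taken from the cited paper, that tunes the ridge weight of a regularized FIR estimate once by maximizing the marginal likelihood and once by leave-one-out cross validation:

```python
import numpy as np

# Illustrative comparison of the two tuning criteria contrasted in the
# quote: marginal likelihood (ML) maximization vs. cross validation (CV)
# for the regularization weight of a ridge-regularized FIR estimate.
# All data and names here are hypothetical.

rng = np.random.default_rng(0)
n, N = 30, 200                       # FIR order, number of output samples
g0 = 0.8 ** np.arange(n)             # true (exponentially decaying) impulse response
u = rng.standard_normal(N + n)       # input signal
# Toeplitz-style regressor matrix: row for y[t] holds u[t-1], ..., u[t-n]
Phi = np.column_stack([u[n - k - 1 : N + n - k - 1] for k in range(n)])
y = Phi @ g0 + 0.1 * rng.standard_normal(N)
sigma2 = 0.01                        # noise variance (assumed known here)

def neg_log_marglik(lam):
    """Negative log marginal likelihood of y when g ~ N(0, lam * I)."""
    S = lam * (Phi @ Phi.T) + sigma2 * np.eye(N)
    _, logdet = np.linalg.slogdet(S)
    return y @ np.linalg.solve(S, y) + logdet

def loo_press(lam):
    """Leave-one-out (PRESS) residual sum of squares for the ridge fit."""
    H = Phi @ np.linalg.solve(Phi.T @ Phi + (sigma2 / lam) * np.eye(n), Phi.T)
    r = (y - H @ y) / (1.0 - np.diag(H))
    return np.sum(r ** 2)

lams = np.logspace(-4, 2, 50)
lam_ml = lams[np.argmin([neg_log_marglik(l) for l in lams])]
lam_cv = lams[np.argmin([loo_press(l) for l in lams])]
print(f"ML picks lambda = {lam_ml:.3g}, CV picks lambda = {lam_cv:.3g}")
```

On short data records the CV choice tends to be more erratic than the ML choice, which is the robustness property the quote attributes to the discussion in [53].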
“…where det(·) is the determinant of a matrix. The EB estimator (18) has the advantage that it is robust (Pillonetto & Chiuso, 2015), but it is not asymptotically optimal in the sense of MSE (Mu et al., 2017).…”
Section: Hyperparameter Estimation
confidence: 99%
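A sketch of the empirical Bayes (EB) criterion the quote refers to: kernel hyperparameters are chosen by minimizing the negative log marginal likelihood of the output, whose expression contains the determinant term det(·) mentioned above. The TC kernel and the simulated data below are illustrative assumptions, not details taken from the cited papers:

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the empirical Bayes (EB) criterion: kernel hyperparameters
# are chosen by maximizing the marginal likelihood of the output, whose
# log contains the determinant term det(.) mentioned in the quote.
# The TC kernel and simulated data are assumptions for illustration.

rng = np.random.default_rng(1)
n, N = 30, 200
g0 = 0.8 ** np.arange(n)
u = rng.standard_normal(N + n)
Phi = np.column_stack([u[n - k - 1 : N + n - k - 1] for k in range(n)])
y = Phi @ g0 + 0.1 * rng.standard_normal(N)
sigma2 = 0.01

def tc_kernel(c, alpha):
    """TC ('tuned/correlated') kernel: K[i, j] = c * alpha ** max(i, j)."""
    idx = np.arange(n)
    return c * alpha ** np.maximum.outer(idx, idx)

def eb_cost(theta):
    """Negative log marginal likelihood: y' S^{-1} y + log det(S)."""
    c = np.exp(theta[0])                      # c > 0
    alpha = 1.0 / (1.0 + np.exp(-theta[1]))   # 0 < alpha < 1
    S = Phi @ tc_kernel(c, alpha) @ Phi.T + sigma2 * np.eye(N)
    _, logdet = np.linalg.slogdet(S)
    return y @ np.linalg.solve(S, y) + logdet

res = minimize(eb_cost, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
c_hat = np.exp(res.x[0])
alpha_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
print(f"EB estimates: c = {c_hat:.3g}, alpha = {alpha_hat:.3g}")
```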
“…We first use the method in Chen et al. (2012) and Pillonetto & Chiuso (2015) to generate 1000 30th-order LTI systems. For each system, we truncate its impulse response at order 50 and obtain an FIR model of order 50, which is treated as the test system.…”
Section: Test Systems
confidence: 99%
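A rough Python analogue of the test-system generation described in the quote: draw random stable 30th-order discrete-time LTI systems and truncate each impulse response at lag 50 to get an order-50 FIR test system. The pole/zero sampling scheme below is an assumption of this sketch; the cited papers use a MATLAB drss-based procedure.

```python
import numpy as np
from scipy import signal

# Rough analogue of the test-system generation in the quote. The
# pole/zero sampling is an assumption, not the method of Chen et al.
# (2012) or Pillonetto & Chiuso (2015).

rng = np.random.default_rng(2)

def random_stable_system(order=30, max_pole_radius=0.95):
    """Random discrete-time transfer function with poles inside the unit disk."""
    n_pairs = order // 2
    mag = max_pole_radius * rng.uniform(0.4, 1.0, n_pairs)
    ang = rng.uniform(0.0, np.pi, n_pairs)
    poles = np.concatenate([mag * np.exp(1j * ang), mag * np.exp(-1j * ang)])
    zeros = rng.uniform(-0.9, 0.9, order - 1)
    b, a = signal.zpk2tf(zeros, poles, 1.0)
    return signal.dlti(np.real(b), np.real(a), dt=1.0)

def fir_truncation(sys, n_fir=50):
    """Impulse response truncated at lag n_fir (the FIR 'test system')."""
    _, (h,) = signal.dimpulse(sys, n=n_fir)
    return np.squeeze(h)

systems = [random_stable_system() for _ in range(10)]   # 1000 in the paper
fir_models = [fir_truncation(s) for s in systems]
print(fir_models[0][:5])
```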
“…is the first-order impulse response coefficient at lag τ_i, E[·] denotes the expected value, and 0 ≤ α, β ≤ ∞. The parameters c, α, and β are called hyperparameters and are computed by maximizing the marginal likelihood of the observed output [11], [13], [8].…”
Section: Regularization for the First-Order Volterra Kernel
confidence: 99%
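The exact kernel expression is elided in the quote; a common choice consistent with hyperparameters c, α, β is a DC-type covariance with separate decay and smoothness factors. The exponential form below is therefore an assumption of this sketch, shown only to make the roles of the three hyperparameters concrete:

```python
import numpy as np

# Assumed DC-type prior covariance for the first-order Volterra
# (impulse response) kernel: c sets the scale, alpha the decay rate of
# the impulse response, beta the correlation (smoothness) across lags.
# The exact expression is elided in the quote; this form is one common
# choice, not necessarily the one used in [11], [13], [8].

def dc_kernel(taus, c, alpha, beta):
    """E[g(tau_i) g(tau_j)] =
    c * exp(-alpha * (tau_i + tau_j)) * exp(-beta * |tau_i - tau_j|)."""
    t = np.asarray(taus, dtype=float)
    decay = np.exp(-alpha * np.add.outer(t, t))
    smooth = np.exp(-beta * np.abs(np.subtract.outer(t, t)))
    return c * decay * smooth

taus = np.arange(20)
K = dc_kernel(taus, c=1.0, alpha=0.1, beta=0.05)
# c, alpha, beta would then be tuned by maximizing the marginal
# likelihood of the observed output, as the quote describes.
print(K.shape, bool(np.all(np.linalg.eigvalsh(K) > -1e-10)))
```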