2021
DOI: 10.1016/j.jcp.2021.110549

System identification through Lipschitz regularized deep neural networks

Cited by 13 publications (6 citation statements)
References 27 publications
“…Loss function: in our tests, the training effect of the L1 loss is slightly better than that of L2. Regularizer: although strict regularization may lead to a decline in end-to-end accuracy, in our experiments the Lipschitz regularizer [28] slightly improves the estimation of physical parameters. Training samples: the extreme values of the Coriolis and centrifugal forces of the manipulator appear only in a particular state, and the distribution of the samples must be specifically designed with this in mind.…”
Section: Methods
confidence: 72%
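The excerpt above credits a Lipschitz regularizer [28] with improving physical-parameter estimates. As a minimal sketch of the general idea (not the cited paper's exact formulation), one can penalize a finite-difference estimate of the network's local Lipschitz constant alongside the data-misfit term; the tiny NumPy network, the `eps` perturbation scale, and the weight `lam` below are all hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-hidden-layer network with random (untrained) weights,
# purely for illustration.
W1 = rng.normal(size=(16, 1)); b1 = np.zeros(16)
W2 = rng.normal(size=(1, 16)); b2 = np.zeros(1)

def net(x):
    """Forward pass; x has shape (n, 1)."""
    h = np.tanh(x @ W1.T + b1)
    return h @ W2.T + b2

def lipschitz_penalty(x, eps=1e-3):
    """Average finite-difference ratio ||f(x+dx)-f(x)|| / ||dx||,
    a common surrogate for the local Lipschitz constant."""
    dx = eps * rng.standard_normal(x.shape)
    num = np.linalg.norm(net(x + dx) - net(x), axis=1)
    den = np.linalg.norm(dx, axis=1)
    return np.mean(num / den)

x = rng.uniform(-1.0, 1.0, size=(64, 1))
y = np.sin(3.0 * x)            # synthetic targets
lam = 1e-2                     # hypothetical regularization weight
mse = np.mean((net(x) - y) ** 2)
loss = mse + lam * lipschitz_penalty(x)
```

In a real training loop the penalty would be differentiated through (e.g. in PyTorch or JAX) and added to each minibatch loss; here it only illustrates why the regularized loss upper-bounds the plain data misfit.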
“…Applying machine learning techniques to dynamical systems has become a recent trend, now that there is an abundance of data. Some researchers have taken to performing system identification by identifying the governing symbolic equations from data [2,19,4], while others have taken to representing the governing equations through neural networks [23,18,5,17,16]. Delay-embeddings for system identification have also been examined in [12,5].…”
Section: Related Work
confidence: 99%
“…These observations can be viewed as a 3-dimensional time series from which the empirical covariance Γ obs is estimated, as in [11]. The inverse problem involves learning the parameter u given these observations, also known as parameter identification [41]. Following [11], we endow a log-Normal prior on u: log u ∼ N (µ 0 , σ 2 0 ) with µ 0 = (2.0, 1.2, 3.3) and σ 0 = (0.2, 0.5, 0.15).…”
Section: Lorenz System
confidence: 99%
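The setup quoted above (a Lorenz trajectory summarized by its empirical covariance, with a log-Normal prior on the parameters) can be sketched as follows. This is an illustrative reconstruction, not the cited paper's code: the mapping u = (σ, β, ρ) is an assumption chosen because exp(µ0) ≈ (7.4, 3.3, 27) is close to the canonical Lorenz-63 values (10, 8/3, 28), and the integrator settings are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def lorenz_rhs(state, u):
    """Lorenz-63 vector field; u = (sigma, beta, rho) is an assumed ordering."""
    x, y, z = state
    sigma, beta, rho = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def simulate(u, x0=(1.0, 1.0, 1.0), dt=0.01, n_steps=2000):
    """Integrate with classical RK4 and return the (n_steps, 3) trajectory."""
    traj = np.empty((n_steps, 3))
    x = np.asarray(x0, dtype=float)
    for i in range(n_steps):
        k1 = lorenz_rhs(x, u)
        k2 = lorenz_rhs(x + 0.5 * dt * k1, u)
        k3 = lorenz_rhs(x + 0.5 * dt * k2, u)
        k4 = lorenz_rhs(x + dt * k3, u)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i] = x
    return traj

# Log-Normal prior from the excerpt: log u ~ N(mu0, sigma0^2).
mu0 = np.array([2.0, 1.2, 3.3])
sigma0 = np.array([0.2, 0.5, 0.15])
u_sample = np.exp(mu0 + sigma0 * rng.standard_normal(3))  # one prior draw

traj = simulate(u_sample)
Gamma_obs = np.cov(traj, rowvar=False)  # empirical 3x3 covariance of the observations
```

A prior draw is always positive (it is an exponential of a Gaussian), which is exactly why log-Normal priors are natural for physical parameters like σ, β, ρ.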