2020
DOI: 10.48550/arxiv.2010.13581
Preprint

Simplifying Hamiltonian and Lagrangian Neural Networks via Explicit Constraints

Abstract: Reasoning about the physical world requires models that are endowed with the right inductive biases to learn the underlying dynamics. Recent works improve generalization for predicting trajectories by learning the Hamiltonian or Lagrangian of a system rather than the differential equations directly. While these methods encode the constraints of the systems using generalized coordinates, we show that embedding the system into Cartesian coordinates and enforcing the constraints explicitly with Lagrange multiplie…
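The core idea in the abstract can be illustrated on a toy system. The sketch below is an assumption-laden illustration, not the paper's implementation: a planar pendulum is written directly in Cartesian coordinates, and the rod constraint is enforced by an explicitly computed Lagrange multiplier rather than by switching to an angle coordinate.

```python
import numpy as np

def constrained_accel(x, v, m=1.0, g=9.81):
    """Acceleration for a planar pendulum in Cartesian coordinates.

    The holonomic constraint phi(x) = (x.x - L^2)/2 = 0 is enforced by a
    Lagrange multiplier lam chosen so that d^2/dt^2 phi = v.v + x.a = 0.
    (Illustrative sketch only; names and setup are our own, not the paper's.)
    """
    F = np.array([0.0, -m * g])                 # external force: gravity
    lam = -(m * (v @ v) + x @ F) / (x @ x)      # multiplier from the constraint
    return (F + lam * x) / m                    # m*a = F + lam * grad(phi)

def simulate(x0, v0, dt=1e-3, steps=5000):
    """Semi-implicit Euler integration of the constrained dynamics."""
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(steps):
        v = v + dt * constrained_accel(x, v)
        x = x + dt * v
    return x, v
```

Starting from the horizontal position `x0 = (1, 0)` with zero velocity, the trajectory swings under gravity while `|x|` stays close to the rod length, since the multiplier cancels the constraint-violating component of the force at every step.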

Cited by 7 publications (8 citation statements) | References 12 publications
“…Using equation (19), this implies that for any δ > 0, with probability at least 1 − δ, the following holds for all θ ∈ Θ:…”
Section: A.12 Putting Results Together
confidence: 99%
“…Most relevant to the types of problems we focus on is roto-translational equivariance [50,54,20,39], applications of GNNs in physical settings [9,5,26,2,1,37,34,38,13] and the encoding of time-invariance in RNNs [17,24,23,10,48]. Recent works have encoded Hamiltonian and Lagrangian mechanics into neural models [22,30,11,19], with gains in data-efficiency in physical and robotics systems, including some modeling controlled or dissipative systems [60,15]. In contrast to these works, we propose to encode biases via prediction-time fine-tuning following the tailoring framework [3], instead of architectural constraints.…”
Section: Related Work
confidence: 99%
“…In contrast to policy learning methods that seek to adapt to environment dynamics, a parallel line of work has explored directly modeling these system dynamics as learnable ordinary differential equations (ODEs) with deep neural networks. While the majority of these works seek to model arbitrary physical dynamics [22,23,39,40], some have explored specific complex physical phenomena such as contact [41,42,43] and fluid dynamics [44,45]. These works leverage strong physical priors to enforce model plausibility.…”
Section: Related Work
confidence: 99%
“…Combining learning algorithms with principles from physics and numerical methods, such as auxiliary loss terms and rich inductive biases, can improve sample complexity, computational efficiency, and generalization (Wu et al., 2018; Karniadakis et al., 2021; Chen et al., 2018; Rubanova et al., 2019). Imposing Hamiltonian (Greydanus et al., 2019; Sanchez-Gonzalez et al., 2019) and Lagrangian (Lutter et al., 2019; Cranmer et al., 2020; Finzi et al., 2020) mechanics in learned simulators offers unique speed/accuracy tradeoffs and can preserve symmetries more effectively.…”
Section: Background and Related Work
confidence: 99%