2021
DOI: 10.48550/arXiv.2106.10163
Preprint

Steerable Partial Differential Operators for Equivariant Neural Networks

Abstract: Recent work in equivariant deep learning bears strong similarities to physics. Fields over a base space are fundamental entities in both subjects, as are equivariant maps between these fields. In deep learning, however, these maps are usually defined by convolutions with a kernel, whereas they are partial differential operators (PDOs) in physics. Developing the theory of equivariant PDOs in the context of deep learning could bring these subjects even closer together and lead to a stronger flow of ideas. In thi…
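
To make the abstract's contrast concrete, here is a minimal sketch (our own illustration, not code from the paper) of a PDO acting as an equivariant map between fields: the Laplacian is a rotation-invariant PDO, so its standard 5-point finite-difference stencil commutes with 90-degree image rotations.

```python
# Minimal sketch (ours, not the paper's code): the Laplacian is a
# rotation-invariant PDO, so applying its 5-point finite-difference
# stencil commutes with rotating the input field by 90 degrees.
import numpy as np
from scipy import ndimage

stencil = np.array([[0.,  1., 0.],
                    [1., -4., 1.],
                    [0.,  1., 0.]])  # discrete Laplacian

def laplace(f):
    # Periodic boundaries keep the equivariance check exact on the grid.
    return ndimage.convolve(f, stencil, mode='wrap')

f = np.random.default_rng(0).standard_normal((32, 32))
lhs = laplace(np.rot90(f))    # rotate the field, then apply the PDO
rhs = np.rot90(laplace(f))    # apply the PDO, then rotate the output
print(np.allclose(lhs, rhs))  # True: the PDO is rotation-equivariant
```

In the paper's setting, learnable PDOs are constrained analogously to steerable kernels so that this kind of commutation property holds for larger groups and general field types.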

Cited by 3 publications (5 citation statements)
References 11 publications
“…More recent work in this direction includes integrating equivariant partial differential operators in steerable CNNs [75], drawing a strong analogy between deep learning and physics.…”
Section: Neural Network and Differential Equations (mentioning)
confidence: 99%
“…That is, if F takes an image f rotated by g (RHS of equation (2)), then it is possible to recover the same output by evaluating F for the un-rotated image f and rotating its output (LHS of equation (2)). The most general equivariant mappings between spaces of feature fields are convolutions with G-steerable kernels (Weiler et al., 2018; Jenner and Weiler, 2021). Denote the input field type as $\rho_{\mathrm{in}} : G \to \mathbb{R}^{d_{\mathrm{in}} \times d_{\mathrm{in}}}$ and the output field type as $\rho_{\mathrm{out}} : G \to \mathbb{R}^{d_{\mathrm{out}} \times d_{\mathrm{out}}}$.…”
Section: Equivariant Mappings and Steerable Kernels (mentioning)
confidence: 99%
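
The G-steerable kernel constraint behind this statement, $k(g \cdot x) = \rho_{\mathrm{out}}(g)\, k(x)\, \rho_{\mathrm{in}}(g)^{-1}$, can be checked numerically. Below is a sketch of ours (the setup and names are illustrative, not from the cited papers) for $G = C_4$ with regular representations on both field types; an arbitrary kernel is projected onto the steerable subspace by group averaging.

```python
# A minimal sketch (ours, not from the cited papers) of the G-steerable
# kernel constraint  k(g.x) = rho_out(g) k(x) rho_in(g)^{-1}  for G = C4
# (90-degree rotations), with rho_in = rho_out = the regular representation.
import numpy as np

N = 4                                  # |C4|
P = np.roll(np.eye(N), 1, axis=0)      # regular repr of the generator

def rho(g):
    """Regular representation of the rotation r^g (a permutation matrix)."""
    return np.linalg.matrix_power(P, g % N)

def rotate_grid(kernel, g):
    """Rotate the spatial grid of a kernel of shape (S, S, d_out, d_in)."""
    return np.rot90(kernel, k=g, axes=(0, 1))

def project_steerable(kernel):
    """Group-average onto the steerable subspace:
    kbar(x) = 1/|G| * sum_g  rho(g)^{-1} kernel(g.x) rho(g)."""
    avg = np.zeros_like(kernel)
    for g in range(N):
        avg += np.einsum('ij,xyjk,kl->xyil',
                         rho(-g), rotate_grid(kernel, g), rho(g))
    return avg / N

kernel = project_steerable(
    np.random.default_rng(0).standard_normal((5, 5, N, N)))

# Verify the constraint for every group element.
for g in range(N):
    lhs = rotate_grid(kernel, g)                            # k(g.x)
    rhs = np.einsum('ij,xyjk,kl->xyil', rho(g), kernel, rho(-g))
    assert np.allclose(lhs, rhs)
print("kernel satisfies the C4 steerability constraint")
```

Convolving with a kernel satisfying this constraint yields a map that commutes with simultaneously rotating the input field and transforming its channels by the representation.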
“…Gaussian Discretization: We can also employ derivatives of the Gaussian function for estimation (Jenner & Weiler, 2021): given points…”
Section: Discretization (mentioning)
confidence: 99%
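
As a rough illustration of the Gaussian discretization idea (our own sketch, not the cited implementation): convolving with the derivative of a Gaussian estimates the derivative of the smoothed signal, which SciPy's `gaussian_filter` exposes through its `order` argument.

```python
# Estimating d/dx with a derivative-of-Gaussian filter (illustrative only).
import numpy as np
from scipy import ndimage

h = 0.1                          # grid spacing
x = np.arange(0, 2 * np.pi, h)
f = np.tile(np.sin(x), (64, 1))  # f(x, y) = sin(x), so df/dx = cos(x)

# order=(0, 1): plain smoothing along axis 0 (rows), first derivative of
# the Gaussian along axis 1 (columns). The output is per pixel, so divide
# by the grid spacing to express it per unit of x.
df_dx = ndimage.gaussian_filter(f, sigma=2.0, order=(0, 1)) / h

true = np.tile(np.cos(x), (64, 1))
err = np.abs(df_dx - true)[8:-8, 8:-8].max()  # ignore boundary effects
print(f"max interior error: {err:.3f}")  # small; reflects smoothing bias
```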
“…Besides, as proven in (Finzi et al., 2021), scale-like nonlinearities are not sufficient for universality, and can limit model performance. These disadvantages also account for why most related works (Weiler & Cesa, 2019; Jenner & Weiler, 2021) employ regular or quotient representations of discrete subgroups for implementation…”
Section: Common Deep Learning Techniques (mentioning)
confidence: 99%
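
To unpack why regular representations sit well with standard nonlinearities, here is a small sketch of ours (not from the cited works): a pointwise ReLU commutes with permutation matrices, which is exactly how a regular representation of a discrete subgroup acts on channels, but not with a generic rotation representation.

```python
# Pointwise ReLU is equivariant under permutation (regular) representations
# but not under a generic rotation representation (illustrative sketch).
import numpy as np

relu = lambda v: np.maximum(v, 0.0)

# Regular representation of C4 acts by cyclically permuting 4 channels.
perm = np.roll(np.eye(4), 1, axis=0)
v = np.array([1.5, -0.3, 0.8, -2.0])
print(np.allclose(relu(perm @ v), perm @ relu(v)))  # True

# Frequency-1 rotation representation of SO(2): equivariance fails.
t = 0.7
rot = np.array([[np.cos(t), -np.sin(t)],
                [np.sin(t),  np.cos(t)]])
w = np.array([1.0, -1.0])
print(np.allclose(relu(rot @ w), rot @ relu(w)))    # False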