2022
DOI: 10.48550/arxiv.2204.07178
Preprint

Relaxing Equivariance Constraints with Non-stationary Continuous Filters

Abstract: Equivariances provide useful inductive biases in neural network modeling, with the translation equivariance of convolutional neural networks being a canonical example. Equivariances can be embedded in architectures through weight-sharing and place symmetry constraints on the functions a neural network can represent. The type of symmetry is typically fixed and has to be chosen in advance. Although some tasks are inherently equivariant, many tasks do not strictly follow such symmetries. In such cases, equivarian…

Cited by 3 publications (6 citation statements)
References 9 publications

“…Symmetries in neural networks. Work has been done on studying group convolutional neural networks and using Lie groups to generalize beyond discrete and compact symmetry groups using localized kernels [7]. Although inspiring, this work is specialized to convolutions and kernel-based methods; in addition, the transformation groups to be sought are specified beforehand.…”
Section: Related Work (confidence: 99%)
“…Symmetry detection, the task of discovering invariant properties from raw data, has been of increasing interest to the machine learning community [1][2][3][4], in particular for science and engineering applications [5,6]. For one, detecting symmetries allows for better interpretability of data [7] or even the sparse identification of underlying mechanisms [8]. Furthermore, incorporating symmetries into the learning algorithm improves robustness and trustworthiness [9] by making sure that predictions obey properties that the underlying mechanism obeys as well (e.g.…”
Section: Introduction (confidence: 99%)
“…A perfectly equivariant model may have trouble learning the partial or approximate symmetries of real-world data. Thus, some recent work has explored building approximately equivariant models and empirically demonstrated their benefits in modeling real-world data (van der Ouderaa et al., 2022; Romero & Lohit, 2021; Finzi et al., 2021). For example, Wang et al. (2022) designed approximately equivariant models by relaxing the weight-sharing schemes of equivariant convolutional networks.…”
Section: Introduction (confidence: 99%)
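The relaxed weight-sharing idea quoted above can be illustrated with a toy 1D convolution whose filter gains a position-dependent component. This is a minimal NumPy sketch, not the implementation from any of the cited papers; the names `relaxed_conv1d`, `w_pos`, and `eps` are invented for this example. When `eps = 0` the filter is stationary and the map is exactly translation equivariant; `eps > 0` softly breaks the symmetry.

```python
import numpy as np

def relaxed_conv1d(x, w_shared, w_pos, eps):
    """1D correlation with a filter that varies per output position.

    eps = 0  -> only the shared (stationary) filter is used: exact
                translation equivariance, as in a standard convolution.
    eps > 0  -> the non-stationary component w_pos[i] is mixed in,
                relaxing the weight sharing and hence the symmetry.
    """
    n, k = len(x), len(w_shared)
    out = np.zeros(n - k + 1)
    for i in range(len(out)):
        w = w_shared + eps * w_pos[i]   # position-dependent filter
        out[i] = x[i:i + k] @ w
    return out

# With eps = 0, shifting the input shifts the output (up to edges);
# with eps > 0, the same shift no longer commutes with the map.
x = np.arange(10.0)
w_shared = np.array([1.0, -2.0, 0.5])
w_pos = np.random.default_rng(0).normal(size=(8, 3))
y_eq = relaxed_conv1d(x, w_shared, w_pos, eps=0.0)
y_rel = relaxed_conv1d(x, w_shared, w_pos, eps=0.5)
```

A learnable `eps` (per layer or per position) lets the degree of equivariance be fitted to the data rather than fixed in advance, which is the spirit of the relaxation these citing papers describe.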
“…Several recent works have designed approximately equivariant networks (Wang et al., 2022; van der Ouderaa et al., 2022; Finzi et al., 2021) to learn such approximately symmetric functions.…”
Section: Introduction (confidence: 99%)