2022
DOI: 10.1609/aaai.v36i8.20866

Exploiting Invariance in Training Deep Neural Networks

Abstract: Inspired by two basic mechanisms in animal visual systems, we introduce a feature transform technique that imposes invariance properties in the training of deep neural networks. The resulting algorithm requires less parameter tuning, trains well with an initial learning rate of 1.0, and generalizes easily to different tasks. We enforce scale invariance with local statistics in the data to align similar samples at diverse scales. To accelerate convergence, we enforce a GL(n)-invariance property with global statist…
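The two mechanisms the abstract names can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the paper's implementation: it assumes the scale-invariant step resembles per-sample normalization with local statistics, and the GL(n)-invariant step resembles ZCA-style whitening with covariance statistics gathered over the training set. The function names are hypothetical.

```python
import numpy as np

def local_scale_normalize(x, eps=1e-5):
    """Align samples at diverse scales using each sample's own (local) statistics."""
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True)
    return (x - mu) / (sigma + eps)

def global_whiten(X, eps=1e-5):
    """ZCA-style whitening with global statistics of the whole training set.

    Whitened features are unchanged (up to an orthogonal transform) under
    invertible linear re-parameterizations of the input basis, which is one
    way to realize a GL(n)-invariance property.
    """
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / len(X)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W

# Usage: rows are samples, columns are features.
X = np.random.randn(128, 16) * 5.0 + 3.0   # arbitrary scale and offset
X = global_whiten(local_scale_normalize(X))
```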


Cited by 1 publication (2 citation statements) · References 26 publications
“…However, their method only supports group convolutions with two elements, which limits its applicability to higher-dimensional groups. On the other hand, task-oriented equivariant works [8,37,9] show promising results in applications while rarely establishing the connection to the corresponding group. Benton et al. [3] learn invariance by parameterizing a distribution over augmentations and optimizing it jointly with the training loss.…”
Section: Introduction (mentioning; confidence: 99%)
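A minimal sketch of the augmentation-distribution idea attributed to Benton et al. [3]: a learnable rotation range is trained jointly with the network, and reparameterized sampling lets the loss gradient widen or narrow that range. All names, the toy data, and the regularizer weight are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class AugDist(nn.Module):
    """Learnable range of 2D rotations; a wider range means more invariance."""
    def __init__(self):
        super().__init__()
        self.log_width = nn.Parameter(torch.tensor(-2.0))  # illustrative init

    def sample(self, x):                       # x: (B, 2) points in the plane
        width = self.log_width.exp()
        theta = (torch.rand(len(x)) * 2 - 1) * width   # U(-width, width)
        c, s = torch.cos(theta), torch.sin(theta)
        R = torch.stack([torch.stack([c, -s], -1),
                         torch.stack([s,  c], -1)], -2)  # (B, 2, 2)
        return (R @ x.unsqueeze(-1)).squeeze(-1)

net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))
aug = AugDist()
opt = torch.optim.SGD(list(net.parameters()) + list(aug.parameters()), lr=0.1)

x, y = torch.randn(64, 2), torch.randint(0, 2, (64,))   # toy data
for _ in range(10):
    # Average predictions over sampled augmentations; a small negative
    # regularizer on log_width encourages a wide augmentation range.
    logits = torch.stack([net(aug.sample(x)) for _ in range(4)]).mean(0)
    loss = nn.functional.cross_entropy(logits, y) - 0.01 * aug.log_width
    opt.zero_grad(); loss.backward(); opt.step()
```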
“…Esteves et al. [8] first estimate the translation and then classify the image in log-polar coordinates [42], which is in fact a special case of the similarity group Sim(2). Ye et al. [37] enforce a GL(n)-invariance property with global statistics extracted from the training data, in which the gradient-descent optimization should maintain the group invariance under basis changes. Unfortunately, these methods are either capable of handling only a few subgroups of SL(3) and their corresponding transformations, or enforce equivariance learning purely on the image domain.…”
Section: Introduction (mentioning; confidence: 99%)
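For intuition on the log-polar approach attributed to Esteves et al. [8]: once translation is removed by centering, resampling the image onto a log-polar grid turns rotation into a shift along the angular axis and scaling into a shift along the log-radius axis, so an ordinary translation-equivariant CNN can absorb both. A minimal sketch with nearest-neighbor sampling; the grid sizes are illustrative assumptions.

```python
import numpy as np

def log_polar(img, n_r=64, n_theta=64):
    """Resample a (centered) grayscale image onto a log-polar grid."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = np.hypot(cy, cx)
    rs = np.exp(np.linspace(0.0, np.log(r_max), n_r))        # log-spaced radii
    ts = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)  # uniform angles
    ys = cy + rs[None, :] * np.sin(ts[:, None])
    xs = cx + rs[None, :] * np.cos(ts[:, None])
    # Nearest-neighbor sampling, clipped to the image bounds.
    ys = np.clip(np.rint(ys).astype(int), 0, h - 1)
    xs = np.clip(np.rint(xs).astype(int), 0, w - 1)
    return img[ys, xs]        # (n_theta, n_r): rows = angle, cols = log-radius

img = np.random.rand(65, 65)
lp = log_polar(img)   # a rotated input appears as a row-shifted version of lp
```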