2021
DOI: 10.1007/s43674-021-00023-7
An eigenvector approach for obtaining scale and orientation invariant classification in convolutional neural networks

Abstract: Convolutional neural networks are well known for their efficiency in detecting and classifying objects once adequately trained. Although they handle shift invariance up to a limit, appreciable rotation and scale invariance is not guaranteed by many existing CNN architectures, which makes them sensitive to rotation and scale variations of the input image or feature maps. Many attempts have been made in the past to achieve rotation and scale invariance in CNNs. In this paper, an efficient approach is propo…

Cited by 3 publications (2 citation statements)
References 33 publications

Citation statements (ordered by relevance):
“…The attention method in [75] combines the spatial attention mechanism and channel attention mechanism to reduce the spatial variance of the object. Alternatively, the eigenvector approach of [76] applies a scale and orientation correction for images based on eigenvectors and eigenvalues of the image covariance matrix. In adaptive Gabor convolutional networks [77], the convolutional kernels are adaptively multiplied by Gabor filters to achieve invariant information extracted from images.…”
Section: Introduction (mentioning)
Confidence: 99%
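For readers who want a concrete picture of the covariance-eigenvector correction described in the statement above, a minimal Python sketch follows. It is an illustrative reading of the cited description rather than the paper's implementation; the function name eigen_correct, the target_spread constant, and the use of SciPy for the final resampling are assumptions.

import numpy as np
from scipy import ndimage  # used only for the final rotation/zoom

def eigen_correct(img, target_spread=32.0):
    """Rotate and rescale a grayscale image so that its intensity
    distribution has a canonical orientation and spread (sketch only)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = img.astype(np.float64)
    total = weights.sum()

    # Intensity-weighted centroid of the image.
    cy = (ys * weights).sum() / total
    cx = (xs * weights).sum() / total

    # 2x2 covariance matrix of pixel coordinates, weighted by intensity.
    dy, dx = ys - cy, xs - cx
    cov = np.array([
        [(dx * dx * weights).sum(), (dx * dy * weights).sum()],
        [(dy * dx * weights).sum(), (dy * dy * weights).sum()],
    ]) / total

    # Eigenvectors give the principal axes of the intensity distribution;
    # eigenvalues give the variance (squared spread) along those axes.
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]          # dominant axis (x, y)
    angle = np.degrees(np.arctan2(major[1], major[0]))
    scale = target_spread / np.sqrt(eigvals.max())  # normalise object size

    # Undo the estimated orientation, then normalise the estimated scale.
    rotated = ndimage.rotate(img, angle, reshape=False, order=1)
    return ndimage.zoom(rotated, scale, order=1)

Undoing the angle and spread estimated this way yields an input whose orientation and scale are approximately normalised before it reaches the classifier, which is the effect the quoted statement attributes to [76].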
“…The spatial transformer network (STN) [46] is used in [60] to tackle the joint image alignment problem on larger datasets with higher variability. […] In summary, previous approaches for achieving invariance to affine transforms in images have three limitations: i) Some of these algorithms only contain a spatial invariance module [46, 47, 74–77] embedded in a neural network designed for classification, object recognition or other tasks. As this module has to be trained via the learning objectives associated with those learning tasks, it is unable to learn the image transform parameters independently.…”
Section: Introduction (mentioning)
Confidence: 99%
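As context for the STN reference [46] in the statement above, the sketch below shows a generic spatial-transformer-style module in PyTorch: a small localization network predicts a 2x3 affine transform that is then applied to the input through a sampling grid. The layer sizes, the assumed 1x28x28 input, and the class name SpatialTransformer are illustrative choices, not taken from either cited paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        # Localization network: regresses the 6 parameters of a 2x3 affine map
        # from a 1x28x28 input (assumed size for this sketch).
        self.loc = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=7), nn.MaxPool2d(2), nn.ReLU(),
            nn.Conv2d(8, 10, kernel_size=5), nn.MaxPool2d(2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(10 * 3 * 3, 32), nn.ReLU(),
            nn.Linear(32, 6),
        )
        # Start from the identity transform so early training is stable.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)                  # per-sample affine parameters
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)  # warped input

Because a module like this is trained only through whatever loss the surrounding network optimises, it receives no direct supervision for the transform parameters themselves, which is the limitation the quoted passage raises.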