Fourteenth International Conference on Digital Image Processing (ICDIP 2022)
DOI: 10.1117/12.2644390

Pre-rotation only at inference-time: a way to rotation invariance

Abstract: Weight sharing across different locations makes Convolutional Neural Networks (CNNs) shift invariant in space, i.e., the weights learned in one location can be applied to recognize objects in other locations. However, such a weight-sharing mechanism has been lacking in Rotated Pattern Recognition (RPR) tasks, so CNNs have to learn training samples in different orientations by rote. As this rote-learning strategy greatly increases the difficulty of training, a new solution for RPR tasks, Pre-Rotation Only At Inferenc…
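The inference-time strategy the abstract describes can be sketched as follows. Note that `classify_with_pre_rotation`, the 90-degree rotation grid, and `toy_model` are illustrative assumptions for this sketch, not the paper's actual implementation: the input is pre-rotated into several candidate orientations, every rotated copy is scored by the same shared-weight classifier, and the global maximum yields both the predicted class and the orientation that achieved it.

```python
import numpy as np

def classify_with_pre_rotation(model, image, n_orientations=4):
    """Pre-rotation only at inference time (sketch): rotate the input into
    several candidate orientations, score every rotated copy with the same
    classifier, and return the global best. `model` is any callable mapping
    an HxW array to a vector of class scores; rotations are 90-degree steps
    (np.rot90) for simplicity."""
    best_score, best_class, best_k = -np.inf, None, None
    for k in range(n_orientations):
        scores = model(np.rot90(image, k))
        c = int(np.argmax(scores))
        if scores[c] > best_score:
            best_score, best_class, best_k = float(scores[c]), c, k
    return best_class, best_k, best_score

# Hypothetical stand-in for a trained CNN: class 0 responds to a bright
# top-left corner, class 1 (more weakly) to a bright top-right corner.
def toy_model(img):
    return np.array([img[0, 0], 0.5 * img[0, -1]])

img = np.zeros((4, 4))
img[0, 0] = 1.0  # upright "class 0" pattern
cls, k, _ = classify_with_pre_rotation(toy_model, img)
cls2, k2, _ = classify_with_pre_rotation(toy_model, np.rot90(img, 2))
# The 180-degree-rotated pattern is still recognized as class 0, and the
# winning rotation index recovers its orientation.
```

Because only inference is changed, the classifier itself needs no rotated copies of the training data; the cost is one forward pass per candidate orientation.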

Cited by 1 publication (2 citation statements)
References 16 publications
“…The increase of parameters will rapidly increase the size of CNN deployment files, which is not conducive to the actual deployment of CNN. In order to overcome the challenges of data augmentation, this paper improves a novel rotation pattern recognition method RICNN [18] and applies it to the rotation target recognition task of actual sonar images.…”
Section: Introduction (mentioning, confidence: 99%)
“…The category and orientation of each target were simultaneously estimated by finding the max of these classification scores. The original RICNN only conducted experiments on MNIST and fashion MNIST datasets [18][19][20][21] , which have small image size, large sample number and balanced categories. However, underwater target sonar images usually have large image size, small sample number and unbalanced categories.…”
Section: Introduction (mentioning, confidence: 99%)