2021
DOI: 10.48550/arxiv.2109.06926
Preprint

A trainable monogenic ConvNet layer robust in front of large contrast changes in image classification

Abstract: Convolutional Neural Networks (ConvNets) at present achieve remarkable performance in image classification tasks. However, current ConvNets cannot guarantee the capabilities of the mammalian visual systems such as invariance to contrast and illumination changes. Some ideas to overcome the illumination and contrast variations usually have to be tuned manually and tend to fail when tested with other types of data degradation. In this context, we present a new bio-inspired entry layer, M6, which detects low-level…

Cited by 1 publication (1 citation statement)
References: 32 publications
“…Among deep learning-based algorithms, we compare the CNN-based and transformer-based methods, as well as other novel methods. CNN-based methods include A-ConvNet [19], CV-CNN [20], CV-FCNN [61], CVNLNet [10], and RVNLNet with real input; transformer-based methods include the ViT transformer [35], SpectralFormer [62], and CrossViT [63]; other methods include SymNet [64] and the monogenic ConvNet layer [65]. Furthermore, we also include the multi-feature-based Mono-CVNLNet [10] and FEC [8] in our comparison of methods.…”
Section: Classification Results and Analysis
Confidence: 99%