Learning Topographic Representations of Nature Images with Pairwise Cumulant (2011)
DOI: 10.1007/s11063-011-9189-6
Abstract: In this paper, we propose a model for natural images that learns topographic representations and complex-cell properties. Unlike the estimation of traditional models, e.g., pooling the outputs of filters in neighboring regions, our method maximizes a simple form of binary relation between two adjacent complex cells, the "pairwise cumulant", which contains a favorable nonlinearity as a high-order cumulant and can exploit the "sparseness" and "correlation" of cells in the primary visual cortex. By means of choosin…
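The abstract is truncated, but the core objective it describes, a fourth-order statistic coupling two topographically adjacent complex cells, can be sketched. Below is a minimal NumPy illustration assuming the standard energy model (a complex-cell response as the squared output of a linear filter) and taking the pairwise cumulant to be the covariance of two squared responses; the paper's exact formulation may differ.

```python
import numpy as np

def pairwise_cumulant(X, w_i, w_j):
    """Pairwise cumulant of two adjacent 'complex cells' (assumed form).

    X   : (n_samples, n_pixels) whitened image patches.
    w_i : (n_pixels,) linear filter of cell i.
    w_j : (n_pixels,) linear filter of the neighboring cell j.

    Each complex-cell response is modeled as the squared filter output;
    the pairwise statistic is the covariance of the two squared responses,
    a fourth-order cumulant-like quantity of the filter outputs.
    """
    c_i = (X @ w_i) ** 2                      # squared response of cell i
    c_j = (X @ w_j) ** 2                      # squared response of cell j
    return np.mean(c_i * c_j) - np.mean(c_i) * np.mean(c_j)

# Hypothetical usage: one would maximize the sum of this statistic over all
# topographically adjacent filter pairs, e.g. by gradient ascent under an
# orthonormality constraint on the filter matrix.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))           # stand-in for whitened patches
w_i, w_j = rng.standard_normal((2, 64))
print(pairwise_cumulant(X, w_i, w_j))
```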

Cited by 5 publications (3 citation statements) · References 23 publications
“…Each 7 × 7 × d convolution is succeeded by a normalization layer and a GELU unit. While the use of global attention or large convolutional kernels to expand the receptive field has been extensively studied in recent years [32][33][34][35], these approaches often introduce significant computational cost and memory overhead, especially in dense prediction tasks, due to the large size of the input image. In contrast, the focusing block proposed in this paper introduces only a minimal additional cost.…”
Section: Cascading Fusion Network (CFNet) · mentioning
confidence: 99%
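The block quoted above, a 7 × 7 depthwise convolution followed by a normalization layer and a GELU unit, matches a common ConvNeXt-style pattern. A plausible PyTorch sketch follows; the class name `FocusingBlock`, the channels-last `LayerNorm`, and the residual connection are assumptions, since the citing paper's exact design is not shown here.

```python
import torch
import torch.nn as nn

class FocusingBlock(nn.Module):
    """Hypothetical reading of the cited block: 7x7 depthwise conv,
    then a normalization layer and a GELU unit."""
    def __init__(self, dim: int):
        super().__init__()
        # groups=dim makes the 7x7 convolution depthwise, so its cost grows
        # linearly with the channel count instead of quadratically.
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)   # assumed channels-last LayerNorm
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.dwconv(x)                   # (N, C, H, W)
        y = y.permute(0, 2, 3, 1)            # to channels-last for LayerNorm
        y = self.act(self.norm(y))
        return x + y.permute(0, 3, 1, 2)     # residual connection (assumed)

x = torch.randn(1, 64, 56, 56)
print(FocusingBlock(64)(x).shape)            # torch.Size([1, 64, 56, 56])
```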
“…CondenseNet [14], inspired by DenseNet, utilizes learned group convolutions to reduce computations, resulting in smaller models and faster processing, but this comes at the cost of additional complexity. Similarly, Yulin et al. [15] introduced dynamic transformers for efficient image recognition, addressing the limitations of fixed-size image embeddings by dynamically adapting the number of tokens based on image complexity. On the other hand, this approach may increase computational overhead during inference due to dynamic grid resizing.…”
Section: Related Work · mentioning
confidence: 99%
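The computational saving that group convolution buys, the mechanism behind CondenseNet's efficiency, is easy to quantify with generic arithmetic (this is the textbook formula, not CondenseNet's learned grouping, which additionally prunes connections during training):

```python
def conv_params(c_in: int, c_out: int, k: int, groups: int = 1) -> int:
    """Weight count of a k x k convolution with `groups` groups:
    each output channel sees only c_in/groups input channels, so the
    total shrinks by a factor of `groups` (FLOPs scale the same way)."""
    assert c_in % groups == 0 and c_out % groups == 0
    return (c_in // groups) * k * k * c_out

print(conv_params(128, 128, 3, groups=1))  # 147456 weights, standard conv
print(conv_params(128, 128, 3, groups=4))  # 36864 weights, a 4x reduction
```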
“…Dynamic neural networks (DNNs) aim to reduce computational effort and increase model generalization capability, and multi-exit is one of the techniques of dynamic neural networks. The idea of multi-exit has been widely used in the fields of computer vision (CV) and natural language processing (NLP) (Liu et al., 2020; Wang et al., 2021), but to our knowledge, no multi-exit network has yet been designed specifically for the sleep stage classification task to reduce the computational cost of the network. In this section, we introduce the training and inference process of DynamicSleepNet.…”
Section: Model Training and Inference · mentioning
confidence: 99%
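For readers unfamiliar with multi-exit inference, the sketch below shows the generic idea the passage refers to: attach an early classification head partway through the network and stop when its softmax confidence clears a threshold. The two-exit topology, the confidence criterion, and the 0.9 threshold are illustrative assumptions, not DynamicSleepNet's actual specification.

```python
import torch
import torch.nn as nn

class TwoExitNet(nn.Module):
    """Illustrative multi-exit classifier: an early head after stage 1 and a
    final head after stage 2, so easy inputs can skip stage 2 entirely."""
    def __init__(self, in_dim: int, n_classes: int):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.exit1 = nn.Linear(64, n_classes)   # early exit head
        self.stage2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.exit2 = nn.Linear(64, n_classes)   # final exit head

    @torch.no_grad()
    def infer(self, x: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
        # Per-sample logic, shown here for a single input.
        h = self.stage1(x)
        p1 = self.exit1(h).softmax(dim=-1)
        if p1.max() >= threshold:               # confident: exit early
            return p1
        return self.exit2(self.stage2(h)).softmax(dim=-1)

net = TwoExitNet(in_dim=32, n_classes=5)        # e.g. five sleep stages
print(net.infer(torch.randn(1, 32)))
```

In training, such networks typically minimize the sum of the cross-entropy losses over all exits, so every head learns to classify; the confidence check applies only at inference time.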