2022
DOI: 10.1109/tip.2022.3175432
DO-Conv: Depthwise Over-Parameterized Convolutional Layer

Cited by 124 publications (48 citation statements) · References 21 publications
“…Five main modifications are considered in the network: (1) replace the residual blocks (written as ResBlocks) with residual dense blocks; (2) replace ReLU with LeakyReLU; (3) use depthwise over-parameterized convolution (DO-Conv) [33] instead of non-1 × 1 convolutional layers; (4) replace the original feature attention module (named FAM in MIMO-UNet) with the proposed EFAM; (5) replace the original asymmetric feature fusion (named AFF in MIMO-UNet) with MFFB in the shallow features.…”
Section: Methods
mentioning, confidence: 99%
“…Depthwise over-parameterized convolution (DO-Conv) has shown great potential in computer vision tasks [33]. In our network, DO-Conv is used to replace all non-1 × 1 convolutions.…”
Section: Methods
mentioning, confidence: 99%
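The composition behind that statement can be made concrete with a short PyTorch sketch: a conventional kernel W is paired with a trainable depthwise kernel D, and their product still has the shape of an ordinary convolution kernel, so the layer is a drop-in replacement for a standard convolution. The class name, initialization, and default depth multiplier below are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DOConv2dSketch(nn.Module):
    """Hedged sketch of a depthwise over-parameterized conv layer.

    A conventional kernel W (out_ch, in_ch, d_mul) is composed with a
    depthwise kernel D (in_ch, d_mul, k*k); the composed kernel has the
    shape of an ordinary (out_ch, in_ch, k, k) convolution kernel.
    """
    def __init__(self, in_ch, out_ch, k=3, stride=1, padding=1, d_mul=None):
        super().__init__()
        self.k, self.stride, self.padding = k, stride, padding
        d_mul = d_mul or k * k                # depth multiplier (assumed >= k*k here)
        self.W = nn.Parameter(torch.randn(out_ch, in_ch, d_mul) * 0.01)
        # Start D as an identity-like pattern so the composed kernel initially
        # behaves like a plain convolution parameterized by W.
        D = torch.zeros(in_ch, d_mul, k * k)
        D[:, :k * k, :] = torch.eye(k * k)
        self.D = nn.Parameter(D)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def composed_kernel(self):
        # W'[o, i, s] = sum_d W[o, i, d] * D[i, d, s], then reshape s -> (k, k)
        W_prime = torch.einsum('oid,ids->ois', self.W, self.D)
        return W_prime.reshape(-1, self.W.shape[1], self.k, self.k)

    def forward(self, x):
        return F.conv2d(x, self.composed_kernel(), self.bias,
                        stride=self.stride, padding=self.padding)
```

Swapping such a layer in for the non-1 × 1 convolutions, as the citing work describes, changes only the training-time parameterization: the forward pass is still a single convolution with the composed kernel.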
“…It is generally believed that increasing network depth improves a network's expressive ability. Inspired by the literature [51], we employ an over-parameterized convolutional layer, i.e. a conventional convolutional layer with an additional depthwise convolution.…”
Section: Proposed Methods
mentioning, confidence: 99%
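Because the extra depthwise kernel enters the layer linearly, the over-parameterization costs nothing at inference: once training is done, the composed kernel can be exported as a plain convolution. The folding helper below builds on the hypothetical DOConv2dSketch sketched above and is likewise an illustration under those assumptions, not library code.

```python
import torch
import torch.nn as nn

def fold_do_conv(layer: DOConv2dSketch) -> nn.Conv2d:
    """Collapse the sketch layer above into an equivalent ordinary nn.Conv2d."""
    W_prime = layer.composed_kernel().detach()
    out_ch, in_ch, k, _ = W_prime.shape
    conv = nn.Conv2d(in_ch, out_ch, k, stride=layer.stride,
                     padding=layer.padding, bias=True)
    with torch.no_grad():
        conv.weight.copy_(W_prime)
        conv.bias.copy_(layer.bias)
    return conv

# The folded layer reproduces the over-parameterized one up to float error.
x = torch.randn(1, 16, 32, 32)
do_layer = DOConv2dSketch(16, 32)
assert torch.allclose(do_layer(x), fold_do_conv(do_layer)(x), atol=1e-5)
```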
“…The benefit of linear overparameterization in accelerating the training of deep neural networks and improving accuracy has received considerable attention in recent works [3,11,14,15,22,57]. Several of these previous works propose overparameterizing a convolutional layer during training by using a series of linear convolutional layers.…”
Section: Related Work
mentioning, confidence: 99%
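The "series of linear convolutional layers" idea mentioned in that statement can be illustrated in a few lines: two activation-free convolutions applied back to back during training collapse into a single kernel for inference, which is what makes this family of over-parameterizations free at test time. The shapes and weights below are made up for illustration and do not come from any of the cited works.

```python
import torch
import torch.nn.functional as F

# Training-time form: a 1x1 "expand" convolution followed by a 3x3
# convolution, with no nonlinearity in between (illustrative shapes).
in_ch, mid_ch, out_ch, k = 8, 32, 16, 3
w1 = torch.randn(mid_ch, in_ch, 1, 1) * 0.1   # 1x1 expand
w2 = torch.randn(out_ch, mid_ch, k, k) * 0.1  # 3x3 conv

def overparam_forward(x):
    return F.conv2d(F.conv2d(x, w1), w2, padding=1)

# Inference-time form: because both steps are linear, they merge into one
# 3x3 kernel: w_merged[o, i] = sum_m w2[o, m] * w1[m, i].
w_merged = torch.einsum('omhw,mi->oihw', w2, w1[:, :, 0, 0])

x = torch.randn(2, in_ch, 24, 24)
assert torch.allclose(overparam_forward(x),
                      F.conv2d(x, w_merged, padding=1), atol=1e-4)
```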