Wiley Encyclopedia of Electrical and Electronics Engineering 2021
DOI: 10.1002/047134608x.w8424

Deep Convolutional Neural Networks

Abstract: Deep learning has been very successful in dealing with big data from various fields of science and engineering. It has brought breakthroughs using various deep neural network architectures and structures according to different learning tasks. An important family of deep neural networks is that of deep convolutional neural networks. We give a survey of deep convolutional neural networks induced by 1-D or 2-D convolutions. We demonstrate how these networks are derived from convolutional structures, and how they can be…
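
As a concrete illustration of the two families the abstract mentions, here is a minimal sketch of a small 1-D and a small 2-D convolutional classifier. The framework (PyTorch) and all layer sizes are our own illustrative choices, not taken from the article:

```python
import torch
import torch.nn as nn

# 1-D convolutions act on sequences/signals of shape (batch, channels, length).
conv1d_net = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, 10),
)

# 2-D convolutions act on images of shape (batch, channels, height, width).
conv2d_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)

print(conv1d_net(torch.randn(4, 1, 128)).shape)     # torch.Size([4, 10])
print(conv2d_net(torch.randn(4, 3, 64, 64)).shape)  # torch.Size([4, 10])
```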

Cited by 5 publications (4 citation statements: 0 supporting, 4 mentioning, 0 contrasting; citing works published in 2022 and 2024) · References 35 publications

Citation statements, ordered by relevance:
“…First, the initial convolutional layer is now followed by spatial dropout [91] (with p = 0.1). Second, we attach the ECA gate [69] (with k = 3) before the Localization Classification Regression (LCR) layer.…”
Section: Methods · mentioning · confidence: 99%
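
A minimal sketch of the change this statement describes, assuming an ordinary PyTorch convolutional stem; layer names, channel counts, and the input size are illustrative, not from the citing paper. `nn.Dropout2d` implements spatial dropout, zeroing entire feature maps rather than individual activations:

```python
import torch
import torch.nn as nn

# Hypothetical stem: the initial convolution followed by spatial dropout,
# as in the quoted modification (p = 0.1).
stem = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.Dropout2d(p=0.1),  # during training, zeroes each feature map with prob. 0.1
)

x = torch.randn(2, 3, 224, 224)
print(stem(x).shape)  # torch.Size([2, 64, 112, 112])
```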
“…On the contrary, most of the attention modules inevitably increase model complexity. Efficient Channel Attention (ECA) gate [69] is a soft attention mechanism that addresses this issue. It avoids dimensionality reduction and captures cross-channel interaction efficiently.…”
Section: Attention Mechanisms · mentioning · confidence: 99%
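
The quoted statement matches how ECA works: instead of a bottlenecked squeeze-and-excitation MLP, it gates channels with a single k-tap 1-D convolution over globally pooled channel descriptors, so no dimensionality reduction is needed. A minimal PyTorch sketch; the class name and shapes are illustrative:

```python
import torch
import torch.nn as nn

class ECAGate(nn.Module):
    """Efficient Channel Attention: local cross-channel interaction via a
    k-tap 1-D convolution, with no dimensionality reduction."""
    def __init__(self, k: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> one descriptor per channel by global avg pooling
        y = x.mean(dim=(2, 3)).unsqueeze(1)         # (N, 1, C)
        y = torch.sigmoid(self.conv(y))             # per-channel gate in (0, 1)
        return x * y.transpose(1, 2).unsqueeze(-1)  # rescale each feature map

x = torch.randn(2, 64, 32, 32)
print(ECAGate(k=3)(x).shape)  # torch.Size([2, 64, 32, 32])
```

With k = 3 the gate adds only three learnable weights per use, which is why, unlike most attention modules, it barely increases model complexity.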
“…First, the initial convolutional layer is now followed by spatial dropout [94] (with p = 0.1). Second, we attach the ECA gate [72] (with k = 3) before the localization classification regression (LCR) layer.…”
Section: Methods · mentioning · confidence: 99%
“…On the contrary, most of the attention modules inevitably increase model complexity. Efficient channel attention (ECA) gate [72] is a soft attention mechanism that addresses this issue. It avoids dimensionality reduction and captures cross-channel interaction efficiently.…”
Section: Attention Mechanisms · mentioning · confidence: 99%