2019 International Applied Computational Electromagnetics Society Symposium - China (ACES)
DOI: 10.23919/aces48530.2019.9060588
Influence of Batch Normalization on Convolutional Neural Networks in HRRP Target Recognition

Cited by 3 publications (1 citation statement)
References 4 publications
“…In order to ensure that the generated feature map size is 200×200, we add corresponding padding to each layer of the residual network. We further add a batch normalization [32] and ReLU layer after each convolution layer of each unit, except the last convolution layer. The structure and parameters of the CanNet unit are given in Table I, where CONV stands for convolution layer, BN for batch normalization, and ReLU for the activation function.…”
Section: Refinement Based on Deep Residual Network
confidence: 99%
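The citation statement describes a residual unit whose convolutions preserve the 200×200 spatial size and where every convolution except the last in the unit is followed by BN and ReLU. Below is a minimal PyTorch sketch of such a unit; the channel count, kernel size, and number of convolution layers per unit are assumptions for illustration (the actual CanNet parameters are in Table I of the citing paper and are not reproduced here).

```python
# Hypothetical sketch of the residual unit described above; CanNet's
# exact layer counts and channel widths are assumptions, not the source.
import torch
import torch.nn as nn


class ResidualUnit(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # padding=1 with 3x3 kernels keeps the spatial size fixed,
        # e.g. 200x200 feature maps throughout the unit.
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),   # BN + ReLU follow every conv ...
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            # ... except the last conv layer of the unit, per the quote.
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut characteristic of a residual unit.
        return x + self.block(x)


if __name__ == "__main__":
    unit = ResidualUnit(64)
    y = unit(torch.randn(1, 64, 200, 200))
    print(y.shape)  # torch.Size([1, 64, 200, 200]) -- size preserved
```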