2022 2nd International Conference on Computer Science, Electronic Information Engineering and Intelligent Control Technology (CEI 2022)
DOI: 10.1109/cei57409.2022.9950075
Modulation signal recognition based on lightweight complex residual attention neural network

Cited by 2 publications (4 citation statements)
References 8 publications
“…In addition, considering that the distribution of signal features differs from that of image pixel values, and that the feature values can be both positive and negative, the SeLU function is used after the normalization layer in place of the traditional ReLU activation; this increases the nonlinearity of the model while retaining the negative features of the signal as far as possible [1]. Each Residual Stack unit includes a convolution kernel of size (1, 1), which performs computation along the channel dimension. In the first Residual Stack unit, because the input format is (2, 128) two-way IQ data, the kernels inside the residual module of the Residual Unit are set to size (2, 5), with 32 kernels. After a max-pool of size (2, 2) the dimensions become (1, 64); at this point the kernels inside the Residual Unit of the next Residual Stack unit must not be larger than the input data, so the kernels inside the second Residual Stack unit are both of size (1, 5), again with 32 kernels, which suits the (2, 128) two-way IQ input format.…”
Section: ResNet-based Modulation Recognition Algorithm 2.1 Principles (mentioning)
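The SeLU-over-ReLU choice in the excerpt can be illustrated with a minimal pure-Python sketch (the constants are the standard SELU values; the comparison function names are mine, not the paper's):

```python
import math

# Standard SELU constants (self-normalizing networks).
SELU_ALPHA = 1.6732632423543772
SELU_SCALE = 1.0507009873554805

def relu(x: float) -> float:
    """ReLU zeroes every negative input, discarding negative signal features."""
    return max(0.0, x)

def selu(x: float) -> float:
    """SELU gives negative inputs a scaled exponential response,
    so negative IQ-feature values are attenuated rather than erased."""
    if x > 0:
        return SELU_SCALE * x
    return SELU_SCALE * SELU_ALPHA * (math.exp(x) - 1.0)

# A negative feature value survives SELU but is lost under ReLU.
x = -0.5
print(relu(x))  # 0.0
print(selu(x))  # a negative value, so the sign information is kept
```

This is why the excerpt stresses "retaining the negative features of the signal": unlike image pixels, IQ samples carry meaning in their sign.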
“…Each Residual Stack unit includes a convolution kernel of size (1, 1), which performs computation along the channel dimension. In the first Residual Stack unit, because the input format is (2, 128) two-way IQ data, the kernels inside the residual module of the Residual Unit are set to size (2, 5), with 32 kernels. After a max-pool of size (2, 2) the dimensions become (1, 64); at this point the kernels inside the Residual Unit of the next Residual Stack unit must not be larger than the input data, so the kernels inside the second Residual Stack unit are both of size (1, 5), again with 32 kernels, which suits the (2, 128) two-way IQ input format. Using 32 convolutional kernels also helps reduce the number of neural network parameters.…”
Section: ResNet-based Modulation Recognition Algorithm 2.1 Principles (mentioning)
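The dimension flow and parameter count the excerpt describes can be traced with simple arithmetic. This is a sketch under assumptions the excerpt does not state: each residual module is taken to contain two stacked convolutions with 'same' padding, and biases and BatchNorm parameters are ignored.

```python
# Trace feature-map shapes and convolution weight counts for the two
# Residual Stack units described above (assumptions: two convs per
# residual module, 'same' padding, no bias/BatchNorm terms).

def conv_params(in_ch: int, out_ch: int, kh: int, kw: int) -> int:
    """Weight count of a 2-D convolution, biases omitted."""
    return in_ch * out_ch * kh * kw

# Input: two-way IQ data of shape (2, 128), single channel.
h, w, ch = 2, 128, 1

# Stack 1: a (1,1) conv for channel mixing, then (2,5) kernels, 32 filters.
p1 = conv_params(ch, 32, 1, 1) + 2 * conv_params(32, 32, 2, 5)
ch = 32

# The (2,2) max-pool halves both dimensions: (2, 128) -> (1, 64).
h, w = h // 2, w // 2

# Stack 2: kernels must fit the (1,64) map, hence size (1,5).
p2 = conv_params(ch, 32, 1, 1) + 2 * conv_params(32, 32, 1, 5)

print((h, w))    # (1, 64)
print(p1 + p2)   # total conv weights under these assumptions
```

The point of the kernel-size constraint is visible here: a (2, 5) kernel could no longer slide over a (1, 64) map, so the second stack shrinks its kernels to (1, 5).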
“…In addition, attention mechanisms and deformable convolution (DC) have attracted interest for improving local feature extraction in images. In the literature [17], [18], the attention mechanism is fused into the convolutional structure to strengthen the model's extraction of features at local target regions, further improving recognition and classification accuracy. DAI [19] proposed deformable convolution, which lets the convolution kernel shape adapt to the input and improves image feature extraction.…”
Section: Introduction (mentioning)
confidence: 99%
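As one concrete instance of "fusing an attention mechanism into the convolutional structure", here is a minimal pure-Python sketch of squeeze-and-excitation-style channel attention. The function and weight names are hypothetical, and the cited works may use a different attention variant:

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention (illustrative).
    feature_maps: list of C channels, each a flat list of values.
    w1: C x H weights (squeeze FC), w2: H x C weights (excite FC).
    Returns the feature maps rescaled by per-channel attention scores."""
    # Squeeze: global average pool per channel.
    z = [sum(fm) / len(fm) for fm in feature_maps]
    # Excite: small FC layer with ReLU, then FC with sigmoid gating.
    hidden = [max(0.0, sum(z[i] * w1[i][j] for i in range(len(z))))
              for j in range(len(w1[0]))]
    scores = [sigmoid(sum(hidden[j] * w2[j][k] for j in range(len(hidden))))
              for k in range(len(w2[0]))]
    # Reweight each channel by its attention score.
    return [[v * s for v in fm] for fm, s in zip(feature_maps, scores)]

# Toy example: 2 channels, reduced to 1 hidden unit.
fmaps = [[1.0, 2.0], [3.0, 4.0]]
w1 = [[1.0], [1.0]]       # 2x1 squeeze weights (hypothetical values)
w2 = [[2.0, -2.0]]        # 1x2 excite weights (hypothetical values)
out = channel_attention(fmaps, w1, w2)
```

With these toy weights, the first channel is passed through almost unchanged while the second is suppressed toward zero, which is the "emphasize informative channels" behavior the cited works exploit.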