“…In addition, considering that the distribution of signal features differs from that of image pixel values and that the feature values can be both positive and negative, the SeLU function is used after the normalization layer instead of the conventional ReLU activation; this increases the nonlinearity of the model while preserving the negative-valued signal features as much as possible [1]. Each Residual Stack unit includes a convolution kernel of size (1, 1), which performs the computation along the channel dimension. In the first Residual Stack unit, because the input is the two-way IQ data in the format (2, 128), the convolution kernels inside the residual modules of its Residual Units are set to (2, 5), with 32 kernels. After a max-pooling of size (2, 2), the feature dimension becomes (1, 64); since the convolution kernels of the Residual Units in the next Residual Stack unit must not be larger than this input, the kernels inside the second Residual Stack unit are set to (1, 5), again with 32 kernels, so that the network remains matched to the (2, 128) two-way IQ input format.…”
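The passage above fixes only a few architectural details: a (1, 1) channel-mixing convolution per Residual Stack, (2, 5) kernels with 32 filters in the first stack, a (2, 2) max-pooling that reduces (2, 128) to (1, 64), (1, 5) kernels with 32 filters in the second stack, and SeLU placed after the normalization layer. The sketch below is one way to assemble those pieces in Keras; the choice of BatchNormalization as the normalization layer, "same" padding, two Residual Units per stack, and a (1, 2) pooling after the second stack are assumptions not stated in the text.

```python
# Minimal sketch of the two Residual Stack units described above (assumed layout).
import tensorflow as tf
from tensorflow.keras import layers


def residual_unit(x, filters, kernel_size):
    """Residual module: two conv layers with BatchNorm + SeLU and a skip connection (assumed)."""
    shortcut = x
    y = layers.Conv2D(filters, kernel_size, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("selu")(y)   # SeLU keeps negative-valued signal features
    y = layers.Conv2D(filters, kernel_size, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])
    return layers.Activation("selu")(y)


def residual_stack(x, filters, kernel_size, pool_size):
    """(1, 1) conv for channel mixing, then Residual Units, then max-pooling."""
    x = layers.Conv2D(filters, (1, 1), padding="same")(x)
    x = residual_unit(x, filters, kernel_size)
    x = residual_unit(x, filters, kernel_size)   # two units per stack is an assumption
    return layers.MaxPooling2D(pool_size=pool_size)(x)


inputs = layers.Input(shape=(2, 128, 1))                       # two-way IQ data, (2, 128)
x = residual_stack(inputs, 32, kernel_size=(2, 5), pool_size=(2, 2))   # -> (1, 64, 32)
x = residual_stack(x, 32, kernel_size=(1, 5), pool_size=(1, 2))        # pool size assumed
model = tf.keras.Model(inputs, x)
model.summary()
```

Note that after the first (2, 2) pooling the height dimension is already 1, which is why the second stack's kernels are (1, 5) and why any further pooling can only act along the time axis, as reflected in the assumed (1, 2) pool size.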