“…; however, classification networks typically have large parameter counts and high computational cost. Although this improves detection accuracy to a certain extent, it makes the models difficult to deploy on mobile devices with limited resources, so for such scenarios the classification network needs to be made lightweight. There are two mainstream approaches to lightweighting. The first is to design lightweight base networks, such as SqueezeNet [3], Xception [4], the MobileNet series [5], the ShuffleNet series [8], and GhostNet [10], which reduce the number of model parameters through depthwise-separable convolution, adjustable hyperparameters, and similar techniques; even more compact lightweight networks such as YOLO Nano [11] and NanoDet [12] follow the same idea (a sketch of depthwise-separable convolution is given below). The second is to compress the parameters of an existing network, mainly through model-compression methods such as network weight pruning [13], quantization [14], and knowledge distillation [15]. …”
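To make the parameter-reduction point concrete, the following is a minimal sketch (PyTorch assumed, with hypothetical channel sizes of 64 and 128) contrasting a standard 3×3 convolution with the depthwise-separable convolution used by MobileNet-style lightweight networks; it is illustrative only and not taken from any of the cited models.

```python
import torch
import torch.nn as nn

in_ch, out_ch = 64, 128  # hypothetical channel counts for illustration

# Standard convolution: every output channel mixes all input channels with a 3x3 kernel.
standard = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

# Depthwise-separable convolution: a per-channel (depthwise) 3x3 filter
# followed by a 1x1 pointwise convolution that mixes channels.
separable = nn.Sequential(
    nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),  # depthwise
    nn.Conv2d(in_ch, out_ch, kernel_size=1),                          # pointwise
)

def count_params(module: nn.Module) -> int:
    return sum(p.numel() for p in module.parameters())

x = torch.randn(1, in_ch, 32, 32)
assert standard(x).shape == separable(x).shape  # same output shape
print(count_params(standard), count_params(separable))
# 73,856 vs. 8,960 parameters for this layer -- roughly an 8x reduction.
```

The same factorization applied throughout a backbone is what lets the networks cited above shrink their parameter counts while keeping the receptive field of a standard convolution.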