Small-object detection is a fundamental and challenging problem in computer vision, with wide applications in pedestrian detection, traffic sign detection, and other fields. This paper proposes a deep learning small-object detection method based on image super-resolution to improve the speed and accuracy of small-object detection. First, we add a feature texture transfer (FTT) module at the input end to increase the image resolution and remove noise from the image. Next, in the backbone network, based on the Darknet53 framework, we replace residual blocks with dense blocks to reduce the number of network parameters and avoid unnecessary computation. Then, to make full use of the features of small objects in the image, the neck combines SPPnet and PANnet to perform multi-scale feature fusion. Finally, the imbalance between image background and foreground is addressed by adding a foreground and background balance loss function to the YOLOv4 loss. Experiments conducted on our self-built dataset show that the proposed method achieves higher accuracy and speed than currently available small-object detection methods.

INDEX TERMS Small-object detection, image super-resolution, dense block, foreground and background balance loss function, multi-scale feature fusion
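The abstract's exact formulation of the foreground and background balance loss is not given; as an illustration only, the idea of re-weighting the binary cross-entropy so that the many easy background anchors do not swamp the few foreground ones can be sketched as follows (the function name and the weight `alpha` are hypothetical, not from the paper):

```python
import math

def balanced_bce(preds, labels, alpha=0.75):
    """Illustrative balanced binary cross-entropy: foreground terms are
    weighted by `alpha`, background terms by `1 - alpha`, so the loss is
    not dominated by the far more numerous background anchors."""
    eps = 1e-7
    total = 0.0
    for p, y in zip(preds, labels):
        p = min(max(p, eps), 1.0 - eps)  # clamp for numerical stability
        if y == 1:   # foreground anchor
            total += -alpha * math.log(p)
        else:        # background anchor
            total += -(1.0 - alpha) * math.log(1.0 - p)
    return total / len(preds)
```

With `alpha = 0.5` this reduces to ordinary (mean) binary cross-entropy; raising `alpha` shifts the gradient budget toward foreground objects.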
Chlorophyll-a (Chl-a) concentration is a measure of phytoplankton biomass and has been used to identify ‘red tide’ events. However, nearshore waters are optically complex, making accurate determination of the Chl-a concentration challenging. Therefore, in this study, Qinzhou Bay, an area typically affected by Phaeocystis ‘red tide’ blooms, was selected as the study area. Based on Gaofen-1 remote sensing satellite imagery and water quality monitoring data, the bands and band combinations sensitive to the nearshore Chl-a concentration of Qinzhou Bay were screened, and a Qinzhou Bay Chl-a retrieval model was constructed through stepwise regression analysis. The main conclusions of this work are as follows: (1) The Chl-a concentration retrieval regression model based on 1/B4 (the near-infrared (NIR) band) has the best accuracy (R2 = 0.67, root-mean-square error = 0.70 μg/L, and mean absolute percentage error = 0.23) for the remote sensing of Chl-a concentration in Qinzhou Bay. (2) The spatiotemporal distribution of Chl-a in Qinzhou Bay varies, with lower concentrations (0.50 μg/L) observed near the shore and higher concentrations (6.70 μg/L) observed offshore, and a gradual decreasing trend over time (−0.8).
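The abstract reports that the best retrieval model uses 1/B4 as the predictor but does not give the fitted coefficients. A minimal sketch of fitting such a single-predictor model by ordinary least squares is shown below; the function names and the sample values are hypothetical, not the paper's calibration data:

```python
def fit_inverse_band_model(b4_values, chla_values):
    """Ordinary least squares fit of Chl-a = a * (1 / B4) + b,
    where the predictor is the reciprocal of the NIR band value."""
    x = [1.0 / b for b in b4_values]
    y = list(chla_values)
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx          # slope on 1/B4
    b = my - a * mx        # intercept
    return a, b

def predict_chla(b4, a, b):
    """Apply the fitted model to a single B4 reflectance value."""
    return a * (1.0 / b4) + b
```

In practice the full study uses stepwise regression over many candidate bands and band combinations; this sketch covers only the final one-variable form reported as most accurate.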
Along with social distancing, wearing masks is an effective method of preventing the transmission of COVID-19 in the ongoing pandemic. However, masks occlude a large number of facial features, hindering facial recognition: the recognition rate of existing methods may be significantly reduced by the presence of masks. In this paper, we propose a method that effectively addresses the lack of facial feature information available for recognizing people wearing masks. The proposed approach uses image super-resolution technology for image preprocessing, together with a deep bilinear module that improves EfficientNet. It also combines feature enhancement with frequency domain broadening, fuses the spatial and frequency domain features of the unoccluded areas of the face, and classifies the fused features. Enriching the features of the unoccluded area improves the accuracy of masked face recognition. The results of a cross-validation show that the proposed approach achieved an accuracy of 98% on the RMFRD dataset, as well as a higher recognition rate and faster speed than previous methods. In addition, we performed an experimental evaluation in an actual facial recognition system and achieved an accuracy of 99%, which demonstrates the effectiveness and practicability of the proposed method.

INDEX TERMS Face recognition with mask, convolutional neural network, frequency domain widening, bilinear module, RMFRD dataset.
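The abstract describes fusing spatial features of the unoccluded face region with their frequency domain counterparts, but does not specify the transform or fusion operator. One simple reading, sketched here purely for illustration (the function names are hypothetical), is to concatenate a feature vector with its discrete Fourier transform magnitudes:

```python
import cmath

def dft_magnitudes(signal):
    """Naive discrete Fourier transform of a real-valued feature
    vector; returns the magnitude spectrum."""
    n = len(signal)
    mags = []
    for k in range(n):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags

def fuse_spatial_frequency(spatial_features):
    """Concatenate spatial features with their frequency-domain
    magnitudes, broadening the descriptor before classification."""
    return list(spatial_features) + dft_magnitudes(spatial_features)
```

The fused vector is twice the original length; in a real system the frequency branch would typically operate on 2-D feature maps and the two branches would be learned jointly rather than fixed.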