Plant diseases have received widespread attention, and deep learning has been applied to their diagnosis: deep neural networks (DNNs) have achieved outstanding results in plant disease classification. However, DNNs are fragile, and adversarial attacks on image classifiers deserve close attention; probing DNNs with adversarial attacks is an important way to evaluate their robustness. This paper first improves EfficientNet by adding the SimAM attention module, yielding the proposed SimAM-EfficientNet. Experimental results show that the improved model reaches 99.31% accuracy on PlantVillage, compared with 98.33% for ResNet50, 98.31% for ResNet18, and 98.90% for DenseNet. In addition, the GP-MI-FGSM adversarial attack algorithm proposed in this paper, which augments MI-FGSM with gamma correction and an image pyramid, increases the attack success rate: the proposed model has an error rate of 87.6% when attacked by GP-MI-FGSM, a higher success rate than that of other adversarial attack algorithms, including FGSM, I-FGSM, and MI-FGSM.
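The baseline attacks named above (FGSM, I-FGSM, MI-FGSM) share one sign-gradient update rule. Below is a minimal NumPy sketch of MI-FGSM, the method GP-MI-FGSM builds on; `grad_fn` is a caller-supplied loss-gradient function, and the paper's gamma-correction and image-pyramid additions are not detailed in the abstract, so they are omitted here.

```python
import numpy as np

def fgsm_step(x, grad, epsilon=0.03):
    """One FGSM step: move the input in the sign direction of the loss gradient."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

def mi_fgsm(x, grad_fn, epsilon=0.1, steps=10, mu=1.0):
    """Momentum Iterative FGSM: accumulate an L1-normalized gradient with
    momentum mu and step by epsilon/steps per iteration, staying inside the
    epsilon-ball around the clean input x (values assumed in [0, 1])."""
    alpha = epsilon / steps
    g = np.zeros_like(x)
    x_adv = x.copy()
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)  # momentum accumulation
        x_adv = np.clip(x_adv + alpha * np.sign(g), 0.0, 1.0)
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)  # project into epsilon-ball
    return x_adv
```

With a real model, `grad_fn` would backpropagate the classification loss with respect to the input image; the momentum term is what distinguishes MI-FGSM from plain I-FGSM and stabilizes the update direction across iterations.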
The aiming accuracy of an Unmanned Aerial Vehicle (UAV) Steadicam head can be affected by many factors, such as the state of the UAV during actual flight and installation errors in the system's hardware. To eliminate the influence of these objective factors, a Kalman-filter aiming algorithm based on coordinate transformation is proposed to remove the attitude error of the UAV Steadicam and improve system accuracy. The algorithm uses coordinate transformation to eliminate mounting errors, and combines coordinate transformation with Kalman filtering to eliminate objective errors arising during flight. Simulation results show that the method accurately yields the azimuth and pitch angle error compensation during UAV flight, improving the accuracy of the UAV Steadicam head. The method has since been applied in the development of a real product.
INDEX TERMS: Attitude error, coordinate transformation, error compensation, Kalman filtering, Steadicam head.
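The abstract does not specify the paper's state model, but the core smoothing step can be illustrated with a minimal scalar Kalman filter on a noisy angle reading; the process noise `q`, measurement noise `r`, and the constant-angle prediction model are assumptions for this sketch, not the paper's actual design.

```python
import numpy as np

def kalman_angle(measurements, q=1e-4, r=1e-2):
    """Minimal scalar Kalman filter for a noisy angle measurement sequence.
    q: assumed process noise variance; r: assumed measurement noise variance.
    Prediction uses a constant-angle model (the paper's true dynamics are
    not given in the abstract)."""
    x, p = measurements[0], 1.0   # initial state estimate and covariance
    filtered = []
    for z in measurements:
        p = p + q                 # predict: covariance grows by process noise
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update: blend prediction with measurement z
        p = (1.0 - k) * p
        filtered.append(x)
    return np.array(filtered)
```

In the paper's setting, the filter state would be the gimbal's azimuth/pitch error after the coordinate-transformation step has removed the fixed mounting error, with the filter suppressing the remaining in-flight noise.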
Video object detection is an important research direction in computer vision. Its task is to detect and classify moving objects in a sequence of images. Building on static-image object detectors, most existing video object detection methods exploit the temporal correlation unique to video to address the missed and false detections caused by occlusion and motion blur of moving objects. Another widely used family of models is guided by an optical-flow network: features of adjacent frames are aggregated by estimating the optical flow field. However, such adjacent-frame feature aggregation involves many redundant computations. This paper first improves Faster R-CNN with a Feature Pyramid and Dynamic Region-Aware Convolution. It then proposes the S-SELSA module from the perspective of semantic and feature similarity, where feature similarity is measured by a modified SSIM algorithm; the module aggregates frame features globally to avoid redundancy. Finally, experiments on the ImageNet VID and DET datasets show that the proposed method reaches 83.55% mAP, higher than existing methods.
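The abstract says frame features are compared with a modified SSIM, without detailing the modification; as a reference point, the standard global SSIM between two feature maps can be sketched as follows (the constants `c1` and `c2` are the usual stabilizers, chosen here as assumed values).

```python
import numpy as np

def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Standard global SSIM between two arrays (e.g. flattened feature maps).
    The paper uses a *modified* SSIM whose details are not in the abstract;
    this is the unmodified textbook form for illustration."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A similarity score like this lets the aggregation module pick globally similar frames rather than always pooling over temporal neighbors, which is how redundant adjacent-frame computation is avoided.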