A key issue in wireless sensor network applications is how to accurately detect anomalies in an unstable environment and determine whether an event has occurred. This instability arises from harsh environments, insufficient node energy, hardware and software breakdowns, and so on. In this paper, a fault-tolerant anomaly detection method (FTAD) is proposed based on the spatial-temporal correlation of sensor networks. The method divides the sensor network into fault neighborhoods, event-and-fault mixed neighborhoods, event boundary neighborhoods, and other regions, and performs anomaly detection in each to achieve fault tolerance. Experimental results show that even when 45% of the sensor nodes are faulty, the hit rate of event detection remains at about 97% and the false negative rate of events stays above 92%. (Information 2018, 9, 236)
The algorithm, based on spatial-temporal correlation, consists of two parts. In the temporal dimension, the probabilities of an event and of a fault are obtained from each sensor node's time-series data, and the node's state is determined from them. In the spatial dimension, the sensor network is divided, according to the neighborhood definitions, into fault neighborhoods, event-and-fault mixed neighborhoods, event boundary neighborhoods, and other regions, and the minimum Bayes risk decision method is used to distinguish event nodes from faulty nodes. Fault tolerance is achieved by applying anomaly detection separately to the different neighborhoods; experimental results and analysis show that the method detects events well even under high fault rates.
The contributions of this paper are summarized as follows:
(i) In the temporal correlation of the sensor network, we propose the PCM and interval methods;
(ii) In the spatial correlation, we divide the sensor network into fault neighborhoods, event-and-fault mixed neighborhoods, event boundary neighborhoods, and other regions for separate anomaly detection, achieving fault tolerance;
(iii) We conduct extensive simulations to evaluate the performance of the proposed algorithms, and the results demonstrate their effectiveness.
The second section introduces related work and research results. The third section introduces the symbol definitions and the network model used in this paper. The fourth section introduces the fault-tolerant detection method for wireless sensor networks. The fifth section presents the experimental results and analysis. The final section concludes the paper.
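The minimum Bayes risk decision used to distinguish event nodes from faulty nodes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the posterior probabilities and the loss matrix below are made-up values, chosen only to show that an asymmetric loss (missing a real event assumed costlier than a false alarm) can flip the decision away from the most probable state.

```python
# Hedged sketch of a minimum Bayes risk decision between the "event" and
# "fault" states of a sensor node. All numeric values are illustrative
# assumptions, not parameters from the paper.

def bayes_risk_decision(posteriors, loss):
    """Return the action with minimum expected risk, plus all risks.

    posteriors: dict mapping state -> P(state | observations)
    loss: dict mapping (action, true_state) -> cost
    """
    actions = {a for a, _ in loss}
    risks = {
        a: sum(loss[(a, s)] * p for s, p in posteriors.items())
        for a in actions
    }
    return min(risks, key=risks.get), risks

# Hypothetical posteriors for one node, e.g. derived from its time series.
posteriors = {"event": 0.6, "fault": 0.4}
# Assumed loss matrix: declaring "fault" when a real event occurred costs 5,
# declaring "event" when the node is merely faulty costs 1.
loss = {
    ("event", "event"): 0.0, ("event", "fault"): 1.0,
    ("fault", "event"): 5.0, ("fault", "fault"): 0.0,
}
decision, risks = bayes_risk_decision(posteriors, loss)
# Expected risks: "event" -> 0.4, "fault" -> 3.0, so "event" is chosen.
```

Under a symmetric (0/1) loss this reduces to picking the most probable state; the asymmetric loss encodes that suppressing a true event report is the more serious mistake.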
Due to the arbitrariness of the drone's shooting angle and camera movement, and the limited computing power of the drone platform, pedestrian detection in drone scenes poses a great challenge. This paper proposes a new convolutional neural network structure, SMYOLO, which balances accuracy and speed in three ways: (1) by combining depthwise separable convolution with pointwise convolution and replacing the activation function, the computation and parameter count of the original network are reduced; (2) by adding a batch normalization (BN) layer, SMYOLO accelerates convergence and improves generalization; and (3) through scale matching, it reduces the feature loss of the original network. Compared with the original network model, SMYOLO reduces accuracy by only 4.36% while the model size is reduced by 76.90%, inference speed is increased by 43.29%, and target detection is accelerated by 33.33%, minimizing the network model's volume while preserving detection accuracy.
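The parameter savings from replacing a standard convolution with a depthwise separable one (a depthwise convolution followed by a 1x1 pointwise convolution) can be checked with simple counting. The layer sizes below are illustrative assumptions, not SMYOLO's actual configuration; bias terms are ignored.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Weight count of a depthwise k x k conv plus a 1 x 1 pointwise conv."""
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 convolution mapping 128 to 256 channels.
std = conv_params(128, 256, 3)                  # 9 * 128 * 256 = 294,912
sep = depthwise_separable_params(128, 256, 3)   # 1,152 + 32,768 = 33,920
ratio = sep / std                               # roughly 0.115
```

In general the ratio is 1/c_out + 1/k^2, so for 3x3 kernels the separable form needs only a little more than one-ninth of the weights, which is the kind of reduction that makes the 76.90% model-size shrinkage plausible.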
Due to the large amount of video data from UAV aerial photography and the small target size from the aerial perspective, pedestrian detection in drone videos remains a challenge. To detect objects in UAV images quickly and accurately, a small-sized pedestrian detection algorithm based on the weighted fusion of static and dynamic bounding boxes is proposed. First, a weighted filtering algorithm for redundant frames was applied, using the inter-frame pixel difference algorithm cascaded with structural similarity, which resolved the redundancy of the UAV video data and thereby reduced the delay. Second, the pre-training and detector learning datasets were scale-matched to address the feature representation loss caused by the scale mismatch between datasets. Finally, the static bounding boxes extracted by YOLOv4 and the motion bounding boxes extracted by LiteFlowNet were combined by the weighted fusion algorithm to enhance the semantic information and address missed and multiple detections in UAV object detection. The experimental results showed that the small-object recognition method proposed in this paper reached an mAP of 70.91% and an IoU of 57.53%, which were 3.51% and 2.05% higher than those of the mainstream target detection algorithm.
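The final fusion step can be illustrated with a minimal sketch: a coordinate-wise weighted average of a static (detector) box and a motion (optical-flow) box, with IoU as the overlap measure. The 0.7/0.3 weights and the box coordinates are illustrative assumptions; the paper's actual fusion weights and matching criteria are not reproduced here.

```python
def fuse_boxes(static_box, motion_box, w_static=0.7, w_motion=0.3):
    """Coordinate-wise weighted average of two (x1, y1, x2, y2) boxes.

    The 0.7 / 0.3 split is an assumed weighting, not the paper's values.
    """
    return tuple(
        w_static * s + w_motion * m
        for s, m in zip(static_box, motion_box)
    )

def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

# Hypothetical boxes: one from the static detector, one from motion cues.
static = (10.0, 10.0, 50.0, 50.0)
motion = (12.0, 8.0, 54.0, 48.0)
fused = fuse_boxes(static, motion)  # lies between the two inputs
```

In practice a fusion like this would only be applied to box pairs whose IoU exceeds a matching threshold; unmatched motion boxes are what allow the method to recover detections the static detector missed.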