Smoke is often more observable than open flames. For large-scale surface smoke monitoring, optical satellite video offers a wide monitoring range, fast response, and low cost, and can support wide-area forest wildfire monitoring, battlefield dynamic monitoring, and disaster relief decision-making. Smoke segmentation methods based on traditional handcrafted features are easily limited by scene and data conditions, so this paper applies deep learning to smoke segmentation in optical satellite video. However, because real smoke images are scarce and smoke edges are blurred, few labeled datasets exist for smoke segmentation in high-resolution optical satellite imagery, which leaves deep learning models without sufficient training data. Smoke viewed from the satellite perspective also exhibits multi-scale features and suffers from ground-object background interference. To address these problems, we construct a set of high-resolution optical satellite smoke synthesis datasets based on the optical imaging process of smoke targets, avoiding the cost of manual labeling. In addition, we design an attention-guided optical satellite video smoke segmentation network (AOSVSSNet), which effectively suppresses false alarms from the ground-object background and extracts the multi-scale features of smoke. Because synthetic data faces a transferability problem in real-world applications, the physical constraints of the smoke imaging process are introduced into the loss function to improve the model's generalization to real smoke data. Comprehensive evaluation results show that the method outperforms representative semantic segmentation networks.
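The abstract does not specify the form of AOSVSSNet's physics-constrained loss; as a rough sketch of how a smoke-imaging constraint could be attached to a segmentation loss, the PyTorch-style function below combines a pixel-wise term with an assumed alpha-compositing consistency term. The tensors `smoke_layer` and `background` and the weight `lambda_phys` are illustrative assumptions, not quantities defined in the paper.

```python
# Illustrative sketch only: the compositing model and `lambda_phys` are assumptions.
import torch
import torch.nn.functional as F

def physics_constrained_loss(pred_alpha, gt_mask, frame, smoke_layer, background,
                             lambda_phys=0.1):
    """Segmentation loss plus an assumed smoke-imaging consistency term.

    pred_alpha  : (B,1,H,W) predicted smoke opacity in [0,1]
    gt_mask     : (B,1,H,W) binary smoke labels (synthetic data)
    frame       : (B,3,H,W) observed satellite frame
    smoke_layer : (B,3,H,W) assumed smoke radiance layer
    background  : (B,3,H,W) smoke-free background estimate
    """
    # Standard pixel-wise segmentation term.
    seg = F.binary_cross_entropy(pred_alpha, gt_mask)

    # Hypothetical physical constraint: the observed frame should be
    # explainable as an alpha composite of smoke over the ground background.
    recon = pred_alpha * smoke_layer + (1.0 - pred_alpha) * background
    phys = F.l1_loss(recon, frame)

    return seg + lambda_phys * phys
```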
Vehicle tracking in satellite video challenges existing object tracking algorithms because targets have few distinctive features, are frequently occluded, and resemble nearby objects. To improve tracking performance, this study proposes a historical model-based tracker for satellite videos. It updates the tracker using the historical model of each frame in the video, which contains abundant object and background information, thereby improving tracking of few-feature objects. Furthermore, a historical model evaluation scheme is designed to retain only reliable historical models, keeping the tracker sensitive to the object in the current frame and mitigating the impact of changes in object appearance and background. In addition, an anti-drift tracker correction scheme is proposed to address tracker drift caused by object occlusion and the appearance of similar objects. In comparative experiments on the satellite video dataset SatSOT, our tracker achieves excellent performance. Moreover, sensitivity analysis, comparative experiments under varying criteria, and ablation experiments demonstrate that the proposed schemes effectively improve the Precision and Success Rate of the tracker.
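The abstract gives only the outline of the historical model mechanism; the sketch below shows one plausible realization under stated assumptions: per-frame templates are admitted to a pool only when a reliability score is high enough (here a peak-to-sidelobe ratio, an assumption), and the current model is fused with the reliable history. The class, method names, and thresholds are hypothetical, not the paper's implementation.

```python
# Minimal sketch, not the paper's exact algorithm: the PSR reliability test and
# the fixed fusion weight are assumptions made for illustration.
import numpy as np

class HistoricalModelPool:
    def __init__(self, max_models=10, psr_threshold=5.0):
        self.models = []                  # reliable per-frame templates
        self.max_models = max_models
        self.psr_threshold = psr_threshold

    @staticmethod
    def peak_to_sidelobe(response):
        """Reliability proxy: peak-to-sidelobe ratio of a correlation response."""
        peak = response.max()
        sidelobe = np.delete(response.ravel(), response.argmax())
        return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-6)

    def maybe_add(self, template, response):
        """Keep a frame's model only if its tracking response looks reliable."""
        if self.peak_to_sidelobe(response) >= self.psr_threshold:
            self.models.append(template)
            if len(self.models) > self.max_models:
                self.models.pop(0)        # drop the oldest historical model

    def fused_model(self, current_template, history_weight=0.3):
        """Fuse the current model with the averaged reliable history."""
        if not self.models:
            return current_template
        history = np.mean(self.models, axis=0)
        return (1 - history_weight) * current_template + history_weight * history
```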
Video image stabilization is a crucial foundation for applications such as target identification, monitoring, and tracking in video imagery. Satellite video covers wide areas containing complex and visually similar ground objects and comes in diverse imaging types, yet there is currently no general, high-precision satellite video stabilization method (VSM) applicable across land cover types and imaging modes. This paper proposes a high-precision VSM based on ED-RANSAC, an error elimination operator constrained by Euclidean distance, and assembles a set of accuracy evaluation methods to ensure the reliability of video stabilization. Video stabilization experiments were conducted using optical video data from the Jilin-01 satellite and airborne SAR video data. Under the precision evaluation criteria proposed in this paper, the optical satellite video achieved inter-frame and overall stabilization accuracy better than 0.15 pixels across the different test areas, and the SAR video achieved inter-frame and overall stabilization accuracy better than 0.3 pixels. These experimental results demonstrate the reliability and effectiveness of the proposed method for multi-modal satellite video stabilization.
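The exact form of the ED-RANSAC operator is not given in the abstract; the OpenCV-based sketch below illustrates the general idea under assumptions: a RANSAC homography estimate between matched keypoints, removal of matches whose Euclidean reprojection residual exceeds a distance threshold, and a least-squares re-estimation on the surviving matches. The threshold values and the choice of a homography model are illustrative.

```python
# Illustrative sketch of Euclidean-distance-constrained outlier removal for
# inter-frame registration; the two-stage filtering below is an assumption.
import numpy as np
import cv2

def ed_ransac_register(pts_ref, pts_cur, ransac_thresh=3.0, ed_thresh=1.0):
    """Estimate a frame-to-frame homography from matched keypoints
    (float32 arrays of shape (N,1,2)), then drop matches whose Euclidean
    reprojection residual exceeds `ed_thresh` and re-estimate."""
    # Stage 1: robust RANSAC estimate.
    H, inlier_mask = cv2.findHomography(pts_cur, pts_ref, cv2.RANSAC, ransac_thresh)
    inliers = inlier_mask.ravel().astype(bool)

    # Stage 2: Euclidean-distance constraint on the surviving matches.
    proj = cv2.perspectiveTransform(pts_cur[inliers].reshape(-1, 1, 2), H).reshape(-1, 2)
    residual = np.linalg.norm(proj - pts_ref[inliers].reshape(-1, 2), axis=1)
    keep = residual < ed_thresh

    # Refine the transform on the distance-constrained matches (least squares).
    H_refined, _ = cv2.findHomography(pts_cur[inliers][keep], pts_ref[inliers][keep], 0)
    return H_refined
```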