Approaches for visualizing and explaining the decision process of convolutional neural networks (CNNs) have recently received increasing attention. Particularly popular are so-called saliency methods, which aim to assign each input pixel a value reflecting its importance and influence on the classification, visualized as saliency maps. In our paper, we contribute a novel analysis approach built on adversarial examples to investigate the explanatory power of saliency methods, exemplified by layer-wise relevance propagation (LRP). Based on the hypothesis that distinct decisions, such as the classification of an image and the classification of its corresponding adversarial examples, should yield dissimilar saliency maps in order to provide transparent rationales, we break down the relevance scores of images and their corresponding adversarial examples and analyze them using a comprehensive statistical evaluation. It turns out that the different relevance decomposition rules of LRP do not lead to clearly distinguishable saliency maps for images and their corresponding adversarial examples, neither in terms of their contour lines nor in terms of the statistical analysis.
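To make the notion of a relevance decomposition rule concrete, the following is a minimal NumPy sketch of the widely used LRP-epsilon rule for a single linear layer. It is an illustrative example of the general technique, not the specific implementation or network used in the paper; the function name and the epsilon value are our own choices.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """Redistribute the relevance R_out of a linear layer's outputs
    back onto its inputs using the LRP-epsilon rule.

    a     : input activations, shape (n_in,)
    W     : weight matrix, shape (n_in, n_out)
    R_out : relevance assigned to the outputs, shape (n_out,)
    """
    z = a @ W                                    # pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)    # epsilon-stabilize against division by zero
    s = R_out / z                                # relevance per unit of pre-activation
    c = s @ W.T                                  # back-propagated contribution of each input
    return a * c                                 # relevance of the inputs
```

Applying this rule layer by layer from the output to the input yields a pixel-wise saliency map; for small `eps`, the total relevance is approximately conserved across the layer.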
The detection of drones or unmanned aerial vehicles is a crucial component in protecting safety-critical infrastructures and maintaining privacy for individuals and organizations. The widespread use of optical sensors for perimeter surveillance has made them a popular choice for data collection in the context of drone detection. However, efficiently processing the obtained sensor data poses a significant challenge. Even though deep learning-based object detection models have shown promising results, their effectiveness depends on large amounts of annotated training data, which is time-consuming and resource-intensive to acquire. Therefore, this work investigates the applicability of synthetically generated data obtained through physically realistic simulations based on three-dimensional environments for deep learning-based drone detection. Specifically, we introduce a novel three-dimensional simulation approach built on Unreal Engine and Microsoft AirSim for generating synthetic drone data. Furthermore, we quantify the respective simulation–reality gap and evaluate established techniques for mitigating this gap by systematically exploring different compositions of real and synthetic data. Additionally, we analyze the adaptation of the simulation setup as part of a feedback loop-based training strategy and highlight the benefits of a simulation-based training setup for image-based drone detection, compared to a training strategy relying exclusively on real-world data.
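The systematic exploration of real/synthetic data compositions mentioned above can be organized as a sweep over mixing fractions. The following is a hypothetical sketch of such a helper; the function name, parameters, and sampling scheme are illustrative assumptions, not taken from the paper.

```python
import random

def compose_training_set(real, synthetic, synthetic_fraction, total_size, seed=0):
    """Draw a fixed-size training set containing a given fraction of
    synthetic samples, with the remainder taken from real data.

    real, synthetic    : lists of samples (e.g. annotated image paths)
    synthetic_fraction : fraction of the mix drawn from synthetic data, in [0, 1]
    total_size         : total number of samples in the composed set
    """
    rng = random.Random(seed)                       # fixed seed for reproducible sweeps
    n_syn = int(round(synthetic_fraction * total_size))
    n_real = total_size - n_syn
    mix = rng.sample(synthetic, n_syn) + rng.sample(real, n_real)
    rng.shuffle(mix)                                # avoid ordering bias during training
    return mix
```

Training one detector per fraction (e.g. 0.0, 0.25, 0.5, 0.75, 1.0) and comparing validation performance on real imagery is one common way to quantify a simulation-to-reality gap.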