Swarm intelligence replicates the collective behavior of natural processes and relatively simple species, and has achieved excellent performance in a variety of disciplines. This study presents an autonomous approach to swarm navigation based on deep reinforcement learning, in which complex 3D environments with static and dynamic obstacles and resistive forces such as linear drag, angular drag, and gravity are modeled to track multiple dynamic targets. A novel island policy optimization model is introduced to tackle multiple dynamic targets simultaneously and thus make the swarm more dynamic. Moreover, new reward functions for robust swarm formation and target tracking are devised to learn complex swarm behaviors. Since the number of agents is not fixed and each agent has only a partial observation of the environment, swarm formation and navigation become challenging. The proposed strategy therefore consists of four main components: 1) an island policy-based optimization framework for multiple-target tracking; 2) novel reward functions for tracking multiple dynamic targets; 3) an improved policy- and critic-based framework for dynamic swarm management; and 4) memory. The dynamic swarm management phase translates basic sensory input into high-level commands, enhancing swarm navigation and the decentralized setup while accommodating fluctuations in the swarm's size. In the island model, the swarm can split into individual sub-swarms according to the number of targets, allowing it to track multiple targets that are far apart; when multiple targets come close together, these sub-swarms can rejoin into a single swarm surrounding all the targets. Customized state-of-the-art policy-based deep reinforcement learning neuro-architectures are employed for policy optimization. The results show that the proposed strategy enhances swarm navigation and can track multiple static and dynamic targets in complex environments.
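To make the reward design above concrete, here is a minimal sketch of a composite per-agent reward combining a formation-keeping term with a target-tracking term. The weights, spacing value, and function signature are illustrative assumptions, not the paper's actual formulation:

```python
# Hypothetical sketch of a composite swarm reward (not the paper's exact
# reward functions): a formation term keeps a desired spacing to neighbors,
# and a tracking term drives each agent toward its assigned target.
import numpy as np

def swarm_reward(agent_pos, neighbor_pos, target_pos,
                 desired_spacing=2.0, w_form=0.5, w_track=0.5):
    """Reward for one agent.

    agent_pos:    (3,) position of this agent
    neighbor_pos: (N, 3) positions of its visible neighbors
    target_pos:   (3,) position of the target assigned to its sub-swarm
    """
    # Formation term: penalize deviation from the desired neighbor spacing.
    spacing_err = np.abs(
        np.linalg.norm(neighbor_pos - agent_pos, axis=1) - desired_spacing
    ).mean()
    r_form = -spacing_err

    # Tracking term: penalize the remaining distance to the assigned target.
    r_track = -np.linalg.norm(target_pos - agent_pos)

    return w_form * r_form + w_track * r_track
```

In an island setup of this kind, each sub-swarm would evaluate the tracking term against its own target, so splitting and rejoining only changes the target assignment, not the reward structure.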
Drones are unmanned aerial vehicles (UAVs) used for a broad range of functions, including delivery, aerial surveillance, traffic monitoring, architecture monitoring, and even warfare. Drones confront significant obstacles when navigating independently in complex and highly dynamic environments. Moreover, targeted objects within a dynamic environment exhibit irregular morphology, occlusion, and minor contrast variation with the background. A novel deep Convolutional Neural Network (CNN)-based data-driven strategy is therefore proposed for drone navigation in complex and dynamic environments. The proposed Drone Split-Transform-and-Merge Region-and-Edge (Drone-STM-RENet) CNN comprises convolutional blocks, each of which methodically applies region and edge operations to preserve a diverse set of target properties at multiple levels, especially in congested environments. In each block, the systematic use of average- and max-pooling operations captures region homogeneity and edge properties, respectively. Additionally, these convolutional blocks are merged at multiple levels to learn texture variation, which efficiently discriminates the target from the background and aids obstacle avoidance. Finally, Drone-STM-RENet generates a steering angle and a collision probability for each input image: the former controls the drone's motion while avoiding hindrances, and the latter allows the UAV to spot risky situations and respond quickly. Drone-STM-RENet has been validated on two urban car and bicycle datasets, Udacity and collision-sequence, achieving considerable performance in terms of explained variance (0.99), recall (95.47%), accuracy (96.26%), and F-score (91.95%). The promising performance of Drone-STM-RENet on urban road datasets suggests that the proposed model is generalizable and can be deployed for real-time autonomous drone navigation and real-world flights.
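The block structure described above can be sketched in PyTorch as follows. The layer sizes, names, and two-head wiring are assumptions for illustration, not the published architecture: an average-pooling branch summarizes region homogeneity, a max-pooling branch emphasizes edge-like responses, the branches are merged channel-wise, and two output heads produce the steering angle and collision probability.

```python
# Minimal sketch of a region-and-edge block and a two-head output, assuming
# hypothetical layer sizes; not the actual Drone-STM-RENet implementation.
import torch
import torch.nn as nn

class RegionEdgeBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.region = nn.Sequential(                 # region branch
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.AvgPool2d(2),                         # region homogeneity
        )
        self.edge = nn.Sequential(                   # edge branch
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                         # edge/texture emphasis
        )
        self.merge = nn.Conv2d(2 * out_ch, out_ch, 1)  # merge both branches

    def forward(self, x):
        return self.merge(torch.cat([self.region(x), self.edge(x)], dim=1))

class DroneNetSketch(nn.Module):
    """Two heads: steering angle (regression) and collision probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            RegionEdgeBlock(3, 32),
            RegionEdgeBlock(32, 64),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.steer = nn.Linear(64, 1)                # steering angle
        self.collide = nn.Linear(64, 1)              # collision logit

    def forward(self, x):
        h = self.features(x)
        return self.steer(h), torch.sigmoid(self.collide(h))
```

A separate regression head and classification head trained jointly is the usual way to obtain both continuous control and a risk signal from one backbone, which matches the dual output the abstract describes.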
Substantial research has been done in saliency modeling to build intelligent machines that can perceive and interpret their surroundings and focus only on the salient regions of a visual scene. However, existing spatio-temporal saliency models either treat videos as mere image sequences, excluding any audio information, or cannot cope with inherently varying content. Based on the hypothesis that an audiovisual saliency model will perform better than traditional spatio-temporal saliency models, this work provides a generic preliminary audio/video saliency model. This is achieved by augmenting the visual saliency map with an audio saliency map computed by synchronizing low-level audio and visual features. The proposed model was evaluated under several criteria against eye-fixation data for the publicly available video dataset DIEM. The results show that the model outperforms two state-of-the-art visual spatio-temporal saliency models, supporting our hypothesis that an audiovisual model performs better than a purely visual model on natural, uncategorized videos. Interestingly, the effect of audio stimuli has been shown to be relevant to human eye movements: in [16], the authors found eye movements to be spatially biased toward the source of audio in an eye-tracking experiment on images with spatially localized sound sources under three conditions: auditory (A), visual (V), and audio-visual (AV). Moreover, another study [17] analyzed the effects of different types of sound on human gaze in an experiment with 36 participants and thirteen sound classes under audio-visual and visual conditions. The sound classes are further clustered into on-screen with one sound source,
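One simple way to realize the "synchronizing low-level audio and visual features" step is to correlate per-pixel temporal change with the audio energy envelope over a short window, then blend the result with the visual saliency map. The sketch below illustrates that idea; the correlation scheme, helper names, and fusion weight are assumptions, not the paper's implementation:

```python
# Hypothetical sketch of audiovisual saliency fusion: pixels whose motion
# tracks the audio envelope get high audio saliency, which is then blended
# with the visual saliency map. Not the paper's actual algorithm.
import numpy as np

def audio_saliency_map(frames, audio_env):
    """frames:    float array (T, H, W), grayscale video frames
    audio_env: float array of length >= T-1, per-frame audio energy"""
    diffs = np.abs(np.diff(frames, axis=0))            # (T-1, H, W) motion
    env = audio_env[: diffs.shape[0]]
    env = (env - env.mean()) / (env.std() + 1e-8)      # normalize envelope
    m = (diffs - diffs.mean(0)) / (diffs.std(0) + 1e-8)
    corr = (m * env[:, None, None]).mean(0)            # Pearson-like score
    return np.clip(corr, 0, None)                      # keep positive sync

def fuse(visual_map, audio_map, w_audio=0.3):
    """Blend the visual and audio saliency maps into one normalized map."""
    fused = (1 - w_audio) * visual_map + w_audio * audio_map
    return fused / (fused.max() + 1e-8)
```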