Because the sea covers a very extensive area, a pre-emptive and long-lasting search for shipwreck survivors is difficult. The operational cost of deploying manned teams in such a proactive strategy is high; consequently, these teams are deployed only reactively, once a disaster such as a shipwreck has been reported. To reduce the financial costs involved, unmanned robotic systems could instead be used as background surveillance teams patrolling the seas. In this sense, a robotic team for search and rescue (SAR) operations at sea is presented in this work. Composed of an Unmanned Surface Vehicle (USV) piggybacking a watertight Unmanned Aerial Vehicle (UAV) with vertical takeoff and landing capabilities, the proposed cooperative system is capable of searching for, tracking, and providing basic life support to human survivors, while reporting their position to better-prepared manned rescue teams. The USV provides long-range transportation of the UAV and basic survival kits for victims. The UAV assures an augmented perception of the environment due to its high vantage point.
This paper proposes a hybridization of two well-known stereo-based obstacle detection techniques for all-terrain environments. While one of the techniques is employed for the detection of large obstacles, the other is used for the detection of small ones. This combination opportunistically exploits their complementary properties to reduce computation and improve detection accuracy. Being particularly computation intensive and prone to generating a high false-positive rate in the face of noisy three-dimensional point clouds, the technique for small obstacle detection is further extended in two directions. The first extension mitigates both problems by focusing the detection on those regions of the visual field that detach most from the background and, consequently, are more likely to contain an obstacle. This is attained by spatially varying the data density of the input images according to their visual saliency. The second extension is a novel voting mechanism, which further improves robustness. Extensive experimental results confirm the ability of the proposed method to robustly detect obstacles up to a range of 20 m on uneven terrain. Moreover, the model runs at 5 Hz on 640 × 480 stereo images.
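The small-obstacle stage described above can be pictured with the following minimal sketch. It is a hypothetical illustration, not the authors' implementation: the stereo point cloud is subsampled with a keep probability proportional to per-pixel saliency, and a point is flagged as an obstacle when it receives enough "compatibility" votes from nearby points. All names and thresholds (saliency_subsample, base_keep, h_min, h_max, votes_needed) are assumptions, and the z axis is assumed to point upward.

```python
# Hypothetical sketch: saliency-driven subsampling of a stereo point cloud,
# followed by a pairwise compatibility test with a simple voting rule.
import numpy as np

def saliency_subsample(points_xyz, saliency, base_keep=0.1, rng=None):
    """Keep each 3-D point with probability proportional to its visual saliency.

    points_xyz : (N, 3) array of stereo-reconstructed points.
    saliency   : (N,) array in [0, 1], one value per point (from its pixel).
    base_keep  : keep probability assigned to a zero-saliency point.
    """
    rng = rng or np.random.default_rng(0)
    keep_prob = base_keep + (1.0 - base_keep) * saliency
    mask = rng.random(len(points_xyz)) < keep_prob
    return points_xyz[mask]

def compatible(p, q, h_min=0.05, h_max=0.5, max_slope_deg=70.0):
    """Two points are 'compatible' (likely on the same small obstacle) if the
    segment joining them is steep enough and of plausible obstacle height."""
    dh = abs(p[2] - q[2])                      # vertical separation (z up)
    dr = np.linalg.norm(p[:2] - q[:2]) + 1e-9  # horizontal separation
    slope = np.degrees(np.arctan2(dh, dr))
    return h_min < dh < h_max and slope > max_slope_deg

def detect_small_obstacles(points_xyz, votes_needed=3):
    """Label a point as an obstacle when enough other points 'vote' for it,
    i.e. are compatible with it (a simple stand-in for the voting scheme)."""
    n = len(points_xyz)
    votes = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if compatible(points_xyz[i], points_xyz[j]):
                votes[i] += 1
                votes[j] += 1
    return votes >= votes_needed
```

The subsampling step concentrates the expensive pairwise test on salient regions, which is the intuition behind the first extension; the vote threshold plays the role of the robustness-improving voting mechanism.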
This paper proposes a model for trail detection and tracking that builds upon the observation that trails are salient structures in the robot's visual field. Due to the complexity of natural environments, the straightforward application of bottom-up visual saliency models is not sufficiently robust to predict the location of trails. As for other detection tasks, robustness can be increased by modulating the saliency computation based on a priori knowledge about which pixel-wise visual features are most representative of the object being sought. This paper proposes the use of the object's overall layout as the primary cue instead, as it is more stable and predictable in natural trails. Bearing in mind computational parsimony and detection robustness, this knowledge is specified in terms of perception-action rules, which control the behavior of simple agents performing as a swarm to compute the saliency map of the input image. For the purpose of tracking, multiframe evidence about the trail location is obtained with a motion-compensated dynamic neural field. In addition, to reduce ambiguity between the trail and trail-like distractors, a simple appearance model is learned online and used to influence the agents' activity. Experimental results on a large data set reveal the ability of the model to produce a success rate on the order of 97% at 20 Hz. The model is shown to be robust in situations where previous models would fail, such as when the trail does not emerge from the lower part of the image or when it is considerably interrupted.
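As a rough illustration of how multiframe evidence about the trail location could be accumulated, the sketch below implements a generic 1-D dynamic neural field over image columns, with a simple column shift standing in for motion compensation. The class name TrailNeuralField, the kernel shape, and all gains are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of a 1-D dynamic neural field that integrates per-column
# trail evidence (e.g., column sums of the saliency map) across frames.
import numpy as np

class TrailNeuralField:
    def __init__(self, width, tau=10.0, h=-1.0,
                 sigma_exc=15.0, c_exc=2.0, c_inh=0.5):
        self.u = np.full(width, h)   # field activity over image columns
        self.h = h                   # resting level
        self.tau = tau               # time constant of the field dynamics
        x = np.arange(width) - width // 2
        # local excitation minus constant global inhibition
        self.kernel = c_exc * np.exp(-x**2 / (2 * sigma_exc**2)) - c_inh

    def step(self, evidence, shift_px=0):
        """One update: shift the field to compensate camera motion, then
        integrate the new per-column evidence (length-width array)."""
        u = np.roll(self.u, shift_px)                    # motion compensation
        f = 1.0 / (1.0 + np.exp(-u))                     # sigmoid firing rate
        lateral = np.convolve(f, self.kernel, mode="same") / len(u)
        du = -u + self.h + evidence + lateral
        self.u = u + du / self.tau
        return int(np.argmax(self.u))                    # estimated trail column
```

The lateral excitation keeps a single peak of activity locked onto the most consistent trail hypothesis across frames, while the global inhibition suppresses trail-like distractors that appear only briefly.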
This paper presents RIVERWATCH, an autonomous surface-aerial marsupial robotic team for riverine environmental monitoring. The robotic system is composed of an Autonomous Surface Vehicle (ASV) piggybacking a multirotor Unmanned Aerial Vehicle (UAV) with vertical takeoff and landing capabilities. The ASV provides the team with long-range transportation in all-weather conditions, whereas the UAV assures an augmented perception of the environment. The coordinated aerial, underwater, and surface-level perception allows the team to assess navigation cost from the near field to the far field, which is key for safe navigation and for gathering environmental monitoring data. The robotic system is validated in a set of field trials.