Autonomous vehicles can obtain navigation information by observing a source with a camera or an acoustic system mounted on the vehicle's frame. Properly fused, these observations can compensate for the lack of other positioning sources. However, such systems often have a limited angular field of view (FOV). Due to this restriction, motion along some paths makes it impossible to obtain the necessary navigation information, as the source leaves the vehicle's FOV. This paper proposes both a path-planning approach and a guidance control law that allow the vehicle to keep a given object or feature inside the FOV while simultaneously converging to the proposed path.
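The FOV restriction described above can be illustrated with a minimal planar bearing check (a generic sketch; the function name, frame conventions, and the 2D simplification are assumptions for illustration, not the paper's formulation):

```python
import math

def source_in_fov(vehicle_xy, heading, source_xy, fov_half_angle):
    """Check whether a source lies inside the vehicle's angular FOV.

    vehicle_xy, source_xy: (x, y) positions in a common inertial frame.
    heading: vehicle heading in radians.
    fov_half_angle: half of the sensor's angular FOV, in radians.
    """
    dx = source_xy[0] - vehicle_xy[0]
    dy = source_xy[1] - vehicle_xy[1]
    bearing = math.atan2(dy, dx) - heading               # bearing in body frame
    bearing = math.atan2(math.sin(bearing), math.cos(bearing))  # wrap to [-pi, pi]
    return abs(bearing) <= fov_half_angle
```

A path planner would evaluate a constraint like this along each candidate path and reject segments where the check fails.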
This paper proposes a docking maneuver for an underactuated autonomous underwater vehicle (AUV) docking into a funnel-shaped docking station. The novelty of the proposed approach is that it enables an underactuated AUV, which cannot control its sway motion, to dock with no crab angle in the presence of cross-currents. Docking without a crab angle is beneficial when the geometry of the docking station entrance does not allow entering with a crab angle. To dock successfully under these restrictions, a path planner and two guidance laws are proposed. By properly switching between the two guidance laws, the vehicle can slide cross-current into the docking station.
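To see why a cross-current normally forces a crab angle (the problem this maneuver avoids), note that a vehicle moving at speed U must offset its heading by an angle beta satisfying U*sin(beta) = v_c to cancel a lateral current v_c. A small sketch of this standard kinematic relation (not the paper's guidance laws):

```python
import math

def crab_angle(current_speed, vehicle_speed):
    """Crab angle (rad) needed to cancel a cross-current.

    Solves U * sin(beta) = v_c for beta, the standard kinematic
    relation for holding a straight ground track in a lateral current.
    """
    if abs(current_speed) >= vehicle_speed:
        raise ValueError("current exceeds vehicle speed; track cannot be held")
    return math.asin(current_speed / vehicle_speed)
```

For example, a 0.5 m/s cross-current against a 1.0 m/s vehicle requires a 30-degree crab angle, which a narrow funnel entrance may not accommodate.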
This paper proposes a novel approach for constructing a docking path for underwater vehicles, using a new spiral obtained by combining the Fermat and logarithmic spirals. The proposed spiral path has two properties that help solve some of the challenges of docking underactuated autonomous underwater vehicles (AUVs). The first property is that the spiral path reaches the entrance of the docking station with zero curvature, allowing a smooth transition when entering the docking station. The second is that the AUV never exceeds a certain bearing angle with respect to the docking station. This feature allows AUVs equipped with navigation sensors that have a reduced field of view (FOV), such as cameras or acoustic positioning systems, to always keep the docking station inside the FOV. Furthermore, the paper presents an interpolation of the spiral using waypoints connected by segments of logarithmic spirals. This makes it possible to apply existing guidance laws to follow the docking spiral. The proposed spiral docking path has been experimentally tested on an autonomous underwater vehicle.
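The bearing-bounding behavior comes from a known property of the logarithmic spiral r = a*exp(k*theta): it maintains a constant angle between its tangent and the radial direction toward the focus. A minimal sketch that samples such a spiral segment (an illustration of the spiral family only, not the paper's combined Fermat-logarithmic construction):

```python
import math

def log_spiral_points(a, k, theta_start, theta_end, n):
    """Sample n points on the logarithmic spiral r = a * exp(k * theta).

    The spiral keeps a constant angle between its tangent and the
    radial direction (the cotangent of that angle equals k), which is
    why segments of it bound the bearing to the spiral's focus.
    """
    pts = []
    for i in range(n):
        theta = theta_start + (theta_end - theta_start) * i / (n - 1)
        r = a * math.exp(k * theta)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Waypoints on the docking path could then be joined by segments of this form, so existing path-following guidance laws apply directly.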
This paper proposes and implements a convolutional neural network (CNN) that maps images from a camera to an error signal used to guide and control an autonomous underwater vehicle into the entrance of a docking station. The paper proposes using an external positioning system synchronized with the vehicle to obtain a dataset of images matched with the vehicle's position and orientation. Using a guidance map, the positions are converted into desired directions that guide the vehicle to the docking station. The network is then trained to estimate, for each frame, the error between the desired direction and the orientation. After training, the CNN can estimate the error without the external positioning system, creating an end-to-end solution from image to control signal.
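The training labels described above reduce to a wrapped angular difference between the guidance map's desired direction at the vehicle's position and its measured orientation. A minimal sketch of that labeling step (the callable guidance map and the point-toward-dock example are illustrative stand-ins, not the paper's actual map):

```python
import math

def heading_error(position, orientation, guidance_map):
    """Error between the desired direction from a guidance map and the
    vehicle's orientation, wrapped to [-pi, pi].

    guidance_map: callable (x, y) -> desired heading in radians; here a
    stand-in for the map built from the external positioning data.
    """
    desired = guidance_map(*position)
    err = desired - orientation
    return math.atan2(math.sin(err), math.cos(err))

# Illustrative map: always point toward a dock at the origin.
point_to_origin = lambda x, y: math.atan2(-y, -x)
```

Each camera frame would be paired with this scalar error as its regression target; after training, the CNN predicts the error from the image alone.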
Reinforcement learning (RL) is a form of motor learning that robotic therapy devices could potentially manipulate to promote neurorehabilitation. We developed a system that requires trainees to use RL to learn a predefined target movement. The system provides higher rewards for movements that are more similar to the target movement. We also developed a novel algorithm that rewards trainees of different abilities with comparable reward sizes. This algorithm measures a trainee's performance relative to their best performance, rather than relative to an absolute target performance, to determine reward. We hypothesized this algorithm would permit subjects who cannot normally achieve high reward levels to do so while still learning. In an experiment with 21 unimpaired human subjects, we found that all subjects quickly learned to make a first target movement with and without the reward equalization. However, artificially increasing reward decreased the subjects' tendency to engage in exploration and therefore slowed learning, particularly when we changed the target movement. An anti-slacking watchdog algorithm further slowed learning. These results suggest that robotic algorithms that assist trainees in achieving rewards or in preventing slacking might, over time, discourage the exploration needed for reinforcement learning.
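The idea of rewarding performance relative to a trainee's own best, rather than an absolute target, can be sketched as follows (the function, its exponential decay, and the `scale` parameter are hypothetical illustrations; the paper's actual reward-equalization algorithm is not specified in the abstract):

```python
import math

def relative_reward(error, best_error, scale=1.0):
    """Reward a movement relative to the trainee's best performance so far.

    Returns a value in (0, 1]: 1.0 when the current movement error matches
    or beats the trainee's best error, decaying as the error grows beyond
    it. `scale` (hypothetical) sets how quickly the reward falls off, so
    trainees of different abilities receive comparable reward sizes.
    """
    if error <= best_error:
        return 1.0
    return math.exp(-(error - best_error) / scale)
```

Under such a scheme, a low-ability trainee whose best error is large still earns near-maximal reward for movements close to their personal best, which is the equalization property the abstract describes.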