2015 International Conference on Cognitive Computing and Information Processing (CCIP)
DOI: 10.1109/ccip.2015.7100710
Simulation for path planning of autonomous underwater vehicle using Flower Pollination Algorithm, Genetic Algorithm and Q-Learning

Abstract: The motivation behind this paper is to address the necessity for exploration of the near-bottom ocean environment employing Autonomous Underwater Vehicles. This paper presents a simulation of optimized path planning for an autonomous underwater vehicle in benthic ocean zones. The statistical data pertaining to the near-bottom ocean currents have been sourced from the Bedford Institute of Oceanography, Canada. A cost function is developed which incorporates the interaction of the underwater vehicle with the ocea…
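The abstract refers to a cost function coupling the vehicle to near-bottom currents but does not reproduce it here. The following Python sketch shows one plausible form of such a cost, assuming a simple travel-time model; the names segment_cost, path_cost, and current_field, and the way the current is projected onto the heading, are illustrative assumptions rather than the paper's actual formulation.

```python
import numpy as np

def segment_cost(p0, p1, current, speed=1.0):
    """Illustrative travel-time cost for one path segment (assumed model).

    p0, p1  : 3-D waypoints (numpy arrays, metres)
    current : local ocean-current vector (m/s) sampled near the segment
    speed   : vehicle speed through the water (m/s)

    The effective ground speed is the through-water speed plus the component
    of the current along the direction of travel, so opposing currents raise
    the cost and favourable currents lower it.
    """
    d = p1 - p0
    dist = np.linalg.norm(d)
    if dist == 0.0:
        return 0.0
    heading = d / dist
    ground_speed = max(speed + np.dot(current, heading), 1e-3)  # guard against strong opposing currents
    return dist / ground_speed

def path_cost(waypoints, current_field, speed=1.0):
    """Sum of segment costs; current_field maps a 3-D point to a current vector."""
    return sum(segment_cost(p0, p1, current_field(0.5 * (p0 + p1)), speed)
               for p0, p1 in zip(waypoints[:-1], waypoints[1:]))
```

A path cost of this kind can then be minimised by any of the three planners named in the title (Flower Pollination Algorithm, Genetic Algorithm, or Q-Learning), with lower values indicating routes that exploit favourable currents.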

Cited by 11 publications (5 citation statements). References 6 publications.
“…For route planning in a dynamic environment using genetic algorithms, see [5]. The combination of genetic algorithms and Q-learning is described in [6]. Another example of using Q-learning for a dynamic environment is in [7,8].…”
Section: Path Planning In Unknown Environment (mentioning, confidence: 99%)
“…This approach is not sufficient. Equations (5) and (6) show that whether a given terrain is passable depends not only on the slope of the elevation but also on the type of surface, the type of wheels, and the engine power. In our approach, each map grid contains two values: the terrain height and the semantic label.…”
Section: Maximum Descent Slope (mentioning, confidence: 99%)
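The quoted passage describes map cells that hold both a terrain height and a semantic surface label, with passability decided by the paper's Eqs. (5) and (6). Those equations are not reproduced in the snippet, so the sketch below only illustrates the idea; the per-surface slope limits, the GridCell structure, and the is_passable function are hypothetical stand-ins.

```python
import math
from dataclasses import dataclass

# Hypothetical per-surface slope limits (radians); in the cited work the real
# limits also depend on wheel type and engine power (its Eqs. (5)-(6)).
MAX_SLOPE_BY_SURFACE = {"asphalt": 0.45, "grass": 0.35, "gravel": 0.30, "sand": 0.20}

@dataclass
class GridCell:
    height: float  # terrain elevation in metres
    label: str     # semantic surface label, e.g. "grass"

def is_passable(cell_a: GridCell, cell_b: GridCell, cell_size: float) -> bool:
    """Decide whether the robot can move between two adjacent cells.

    The elevation slope between the cells is compared against the stricter of
    the two cells' surface-dependent slope limits.
    """
    slope = math.atan2(abs(cell_b.height - cell_a.height), cell_size)
    limit = min(MAX_SLOPE_BY_SURFACE.get(cell_a.label, 0.0),
                MAX_SLOPE_BY_SURFACE.get(cell_b.label, 0.0))
    return slope <= limit
```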
“…Then, the robot’s action with the maximum Q value at each current state can be chosen until reaching the goal position. Based on the Q-learning algorithm designed in [32], when the robot performs an action $a_t$ in state $s_t$, the corresponding action value function $Q_t(s_t, a_t)$ can be updated as $Q_{t+1}(s_t, a_t) = Q_t(s_t, a_t) + \alpha \big[ r_t + \gamma \max_a Q_t(s_{t+1}, a) - Q_t(s_t, a_t) \big]$, where $\gamma$ is the attenuation rate representing the attenuation of future rewards, and $\alpha$ is the learning rate. The attenuation rate affects the ratio that the robot replaces the original Q value with a new value.…”
Section: Path Planning Algorithm For Mobile Robot (mentioning, confidence: 99%)
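As a concrete illustration of the update quoted above, here is a minimal tabular Q-learning step in Python. The state/action encoding, the epsilon-greedy policy, and the default values of alpha and gamma are assumptions made for the sketch, not details taken from the cited design in [32].

```python
import random
from collections import defaultdict

def q_learning_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
    """One tabular update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)),
    where gamma is the attenuation (discount) rate and alpha is the learning rate."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])

def choose_action(Q, s, actions, epsilon=0.1):
    """Epsilon-greedy choice: usually the action with the maximum Q value at state s."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

# Usage sketch: Q = defaultdict(float); a = choose_action(Q, s, ACTIONS)
# ...take action, observe reward and next state s_next...
# q_learning_update(Q, s, a, reward, s_next, ACTIONS)
```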
“…Then, the robot's action with the maximum Q value at each current state can be chosen until reaching the goal position. Based on the Q-learning algorithm designed in [32], when the robot performs an action $a_t$ in state $s_t$, the corresponding action value function $Q_t(s_t, a_t)$ can be updated as
Section: Q-learning Based Local Path Planning Algorithm (mentioning, confidence: 99%)
“…Reinforcement learning is a kind of unsupervised machine learning, and its learning can be regarded as a trial evaluation process [28]. Considering the impact of ocean current on the energy consumption of underwater gliders, the design considers the cost function of ocean current to optimize the path planning in the 3D environment [29]. Compared with RL, which focuses on solving learning problems, deep learning networks can extract abstract features from large-scale data to cope with increasingly complex task environments.…”
Section: Introduction (mentioning, confidence: 99%)