2019
DOI: 10.1108/ir-01-2019-0002

NAO robot obstacle avoidance based on fuzzy Q-learning

Abstract: Purpose This paper aims to propose a novel active SLAM framework that enables the robot to avoid obstacles and complete autonomous navigation in an indoor environment. Design/methodology/approach The improved fuzzy optimized Q-learning (FOQL) algorithm is used to solve the robot's obstacle avoidance problem in the environment. To reduce the motion deviation of the robot, a fractional controller is designed. The localization of the robot is based on the FastSLAM algorithm. Findings Simulation results of avoiding obstacles …
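The abstract describes a fuzzy optimized Q-learning (FOQL) approach for obstacle avoidance. As a minimal sketch only, assuming a discretised state and action space, the following shows a tabular Q-learning update in which a simple fuzzy obstacle membership biases action selection; the membership function, constants and action set are illustrative assumptions, not the paper's actual FOQL design.

```python
import numpy as np

# Minimal tabular Q-learning sketch with a fuzzy obstacle weighting.
# Illustration only; it does not reproduce the paper's FOQL algorithm.

N_STATES, N_ACTIONS = 16, 4          # hypothetical discretised states / actions
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))

def fuzzy_obstacle_weight(distance, limit=1.0):
    """Triangular membership: 1 when an obstacle is very close, 0 when far."""
    return float(np.clip(1.0 - distance / limit, 0.0, 1.0))

def select_action(state, obstacle_distance):
    """Epsilon-greedy choice, biased away from 'forward' when obstacles are near."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    scores = Q[state].copy()
    scores[0] -= fuzzy_obstacle_weight(obstacle_distance)   # action 0 = move forward
    return int(np.argmax(scores))

def update(state, action, reward, next_state):
    """Standard Q-learning temporal-difference update."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

a = select_action(state=0, obstacle_distance=0.4)
update(state=0, action=a, reward=-1.0, next_state=1)
```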


Cited by 10 publications (7 citation statements)
References 30 publications
“…Fuzzy logic control rationalizes rules from prior information for the obstacle avoidance method. It has been proven to show good performance and less reliance on the environment [16]. As shown in Fig.…”
Section: Fuzzy Logic Control Methods (citation type: mentioning)
confidence: 99%
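The statement above refers to rule-based fuzzy control for obstacle avoidance. As a rough illustration only, assuming a two-rule base with triangular memberships and weighted-average defuzzification (none of which is taken from the cited work), such a controller might look like:

```python
import numpy as np

# Illustrative fuzzy obstacle-avoidance rule: the closer an obstacle on the
# left, the harder the robot turns right, and vice versa. Membership
# functions and output values are assumptions, not the cited controller.

def near(d, limit=1.0):
    """Triangular 'near' membership over a distance in metres."""
    return float(np.clip(1.0 - d / limit, 0.0, 1.0))

def steering_command(left_dist, right_dist):
    """Two rules with weighted-average defuzzification.
    Positive output = turn right, negative = turn left (rad/s)."""
    w_left, w_right = near(left_dist), near(right_dist)
    if w_left + w_right == 0.0:
        return 0.0                       # no obstacle near: go straight
    # Rule 1: obstacle near on the left  -> turn right (+0.5)
    # Rule 2: obstacle near on the right -> turn left  (-0.5)
    return (w_left * 0.5 + w_right * -0.5) / (w_left + w_right)

print(steering_command(0.3, 2.0))        # obstacle on the left -> turn right
```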
“…Therefore, the selection of each action plays an extremely important role for the agent to take the optimal path in the end (Yao et al , 2010). However, according to our previous research results (Wen et al , 2019a), in the action space, if all possible actions are selected with equal probability, the optimal path will not be selected in path planning. In some states, other actions may be selected instead of the optimal action.…”
Section: Active FastSLAM Based on Probability Dueling DQN (citation type: mentioning)
confidence: 99%
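The statement above notes that selecting all actions with equal probability prevents the agent from converging on the optimal path. One common alternative, sketched below purely as an assumption and not as the specific scheme used in the cited work, is a softmax (Boltzmann) policy that chooses higher-valued actions more often.

```python
import numpy as np

# Contrast between uniform random action selection and a softmax policy.
# Function names and the temperature value are illustrative only.

def uniform_policy(q_values):
    """Every action equally likely: ignores the learned Q-values."""
    return np.random.randint(len(q_values))

def softmax_policy(q_values, temperature=0.5):
    """Higher-valued actions are chosen more often; temperature sets greediness."""
    prefs = np.asarray(q_values) / temperature
    probs = np.exp(prefs - prefs.max())
    probs /= probs.sum()
    return int(np.random.choice(len(q_values), p=probs))

q = [0.2, 0.8, 0.1, 0.4]
print(softmax_policy(q))   # usually returns 1, the highest-valued action
```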
“…The algorithm focuses on value-based reinforcement learning, in which the Q-value function is updated as the environment is explored [57][58][59][60]. Recently, combinations of intelligent control and Q-learning have also been applied [61][62][63][64][65]. However, these studies have only been carried out in simulation, or in experiments with simple static objects.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
“…This means that the training is difficult to perform in a real environment and tends to be done mostly in simulation. From [61][62][63][64][65] it can be seen that Q-learning is trainable in virtual environments and afterwards transferable to the real world in robot applications. To address these challenges, in this research a virtual training environment for the RL agent is developed.…”
Section: Introduction (citation type: mentioning)
confidence: 99%
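The quoted passage describes training in a virtual environment and then transferring the learned policy to a real robot. For tabular Q-learning, one simple way this transfer can work, sketched here under assumed file names rather than as the cited authors' pipeline, is to persist the Q-table trained in simulation and load it for greedy execution on the hardware.

```python
import numpy as np

# Sketch: save a Q-table trained in simulation, then reuse it on the real
# robot by acting greedily. File name and table shape are illustrative only.

def save_policy(q_table, path="q_table_sim.npy"):
    np.save(path, q_table)              # written after training in simulation

def load_policy(path="q_table_sim.npy"):
    return np.load(path)                # read on the physical robot

def act_greedily(q_table, state):
    """On the real robot: no exploration, just the best learned action."""
    return int(np.argmax(q_table[state]))

q_sim = np.random.rand(16, 4)           # stand-in for a table trained in simulation
save_policy(q_sim)
q_real = load_policy()
print(act_greedily(q_real, state=3))
```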