1999
DOI: 10.1109/3477.752807

An intelligent mobile vehicle navigator based on fuzzy logic and reinforcement learning

Abstract: In this paper, an alternative training approach to the EEM-based training method is presented and a fuzzy reactive navigation architecture is described. The new training method is 270 times faster in learning speed and incurs only 4% of the learning cost of the EEM method. It also has very reliable convergence of learning, a very high percentage of learned rules (98.8%), and high adaptability. Using the rule base learned from the new method, the proposed fuzzy reactive navigator fuses the obstacle avoidance behaviour a…

Cited by 78 publications (51 citation statements)
References 17 publications
“…Global path planning methods are usually conducted off-line in a completely known environment [27]. In the global path planning approach, an exact model of the environment has to be used to plan the path.…”
Section: Methods Used To Develop Robotic Agents, 4.1 Path Planning
confidence: 99%
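As context for this statement, here is a minimal sketch of what global path planning over an exact, fully known model of the environment looks like, using A* search on an occupancy grid. The grid, start, and goal are illustrative assumptions, not taken from the cited papers.

```python
# Minimal sketch of global path planning on a fully known map, assuming
# a 2D occupancy grid (0 = free, 1 = obstacle). Grid and endpoints are
# illustrative, not from the paper.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, g, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no path exists in the known map

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # detours around the obstacle row
```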
“…However, real environments are never simple enough. On the other hand, local path planning techniques, also known as obstacle avoidance methods, are potentially more efficient in robot navigation when the environment is unknown (which is our case) or only partially known [27].…”
Section: Methods Used To Develop Robotic Agents, 4.1 Path Planning
confidence: 99%
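By contrast, a local (reactive) planner of the kind this paper's navigator implements acts directly on current sensor readings, with no world model. The sketch below is a hypothetical fuzzy-style reactive step assuming three range readings; the membership function, thresholds, and rule weights are illustrative, not the paper's learned rule base.

```python
# Minimal sketch of a local, reactive obstacle-avoidance step, assuming
# three range readings (left, front, right) in metres; all constants
# are illustrative.

def near(d, safe=1.0):
    """Fuzzy 'near' membership: 1 at contact, 0 beyond the safe distance."""
    return max(0.0, min(1.0, (safe - d) / safe))

def reactive_steering(left, front, right):
    # Rule 1: obstacle near on the left  -> steer right (negative turn)
    # Rule 2: obstacle near on the right -> steer left  (positive turn)
    # Rule 3: obstacle near in front     -> turn toward the freer side
    w_left, w_front, w_right = near(left), near(front), near(right)
    turn = -w_left + w_right
    if w_front > 0:
        turn += w_front * (1.0 if left > right else -1.0)
    speed = 1.0 - w_front          # slow down as the front closes in
    return turn, speed

print(reactive_steering(left=0.4, front=2.0, right=1.5))  # (-0.6, 1.0): steer right at full speed
```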
“…In the primary learning stage, we use a simple channel environment, as in Fig. 6, to train the model [11]. Because the channel environment is regular, the robot's track remains unchanged in the learning process, so we can set a stopping criterion for it.…”
Section: B. The Twice Learning by Learning Automata
confidence: 99%
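A stopping criterion of the kind this statement describes could be as simple as halting once the robot's track stops changing between episodes. The sketch below assumes a hypothetical run_episode() training loop; the names and constants are illustrative, not from the cited paper.

```python
# Minimal sketch of a track-stability stopping criterion: in a regular
# channel environment the learned trajectory stabilises, so learning can
# halt once the track is unchanged for k consecutive episodes.
# run_episode() is a hypothetical stand-in for one training episode.

def train_until_stable(run_episode, k=5, max_episodes=1000):
    stable, prev_track = 0, None
    for episode in range(max_episodes):
        track = run_episode()             # e.g. list of visited cells/poses
        stable = stable + 1 if track == prev_track else 0
        if stable >= k:                   # unchanged k episodes in a row
            return episode
        prev_track = track
    return max_episodes
```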
“…Based on a fuzzy inference system, S. Kermiche combined artificial potential field theory with supervised learning to adjust the fuzzy controller, successfully realizing the robot's obstacle avoidance and target navigation with a trajectory closer to the optimal path [4]. Tan and Lee applied genetic algorithms to the regular fuzzy controller and also achieved good results [5,6]. Meanwhile, many scholars have combined reinforcement learning with fuzzy inference systems to accomplish different navigation tasks [7][8][9][10][11]. Meng improved reinforcement learning by proposing a dynamic fuzzy Q-learning, which greatly improved operation speed and control accuracy [12,13]. In 2012, Gao, building on fuzzy inference, introduced a bionics control mechanism that, through continuous interaction with the external environment, gives the robot self-learning and adaptive capabilities [14]. However, a fuzzy rule base established from expert knowledge increases the uncertainty and imprecision of the model.…”
Section: Introduction
confidence: 99%
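Among the approaches surveyed in this statement, fuzzy Q-learning is the one most directly comparable to the reviewed paper. Below is a minimal sketch of the general idea (not Meng's dynamic variant, and not this paper's EEM alternative): each fuzzy rule holds q-values over discrete actions, the global action is a firing-strength-weighted mix, and the TD error is shared among rules by firing strength. All names and constants are assumptions.

```python
# Minimal sketch of fuzzy Q-learning: per-rule q-values over discrete
# actions, defuzzified global action, TD error credited by firing
# strength. All constants are illustrative.
import random

N_RULES, ACTIONS = 4, [-1.0, 0.0, 1.0]   # e.g. steer left / straight / right
q = [[0.0] * len(ACTIONS) for _ in range(N_RULES)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(firing, reward, next_firing):
    """One update. `firing`/`next_firing` are the normalised rule
    firing strengths for the current and next state."""
    # Per-rule epsilon-greedy action choice
    chosen = [random.randrange(len(ACTIONS)) if random.random() < eps
              else max(range(len(ACTIONS)), key=lambda a: q[i][a])
              for i in range(N_RULES)]
    # Global (defuzzified) action and its Q-value
    action = sum(f * ACTIONS[c] for f, c in zip(firing, chosen))
    q_now = sum(f * q[i][c] for i, (f, c) in enumerate(zip(firing, chosen)))
    # Greedy value of the next fuzzy state
    q_next = sum(f * max(q[i]) for i, f in enumerate(next_firing))
    td = reward + gamma * q_next - q_now
    for i, (f, c) in enumerate(zip(firing, chosen)):
        q[i][c] += alpha * td * f      # credit shared by firing strength
    return action

print(step([0.5, 0.3, 0.2, 0.0], reward=1.0, next_firing=[0.4, 0.4, 0.2, 0.0]))
```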
“…While it is possible to use reinforcement learning [3,17,18] or supervised learning [19] to automatically learn the parameters of membership functions or behavior modules, both techniques may be impractical for use on many real robots. Reinforcement learning can require a prohibitively long learning period, and the success of supervised learning is strongly dependent on having sufficient and appropriate input/output training data.…”
Section: Mobile Robot Planning and Control
confidence: 99%
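The supervised alternative mentioned here amounts to fitting membership-function parameters to labelled input/output data, which is exactly why it depends on sufficient and appropriate training pairs. A minimal sketch, assuming a single Gaussian membership function tuned by gradient descent on illustrative distance/label pairs:

```python
# Minimal sketch of supervised membership-function tuning: fit the
# centre c and width s of a Gaussian membership function to labelled
# (input, degree) pairs by gradient descent. Data and learning rate are
# illustrative.
import math

def gauss(x, c, s):
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def tune(data, c=0.0, s=1.0, lr=0.05, epochs=200):
    for _ in range(epochs):
        for x, target in data:
            y = gauss(x, c, s)
            err = y - target
            # Chain-rule gradients of the squared error w.r.t. c and s
            c -= lr * err * y * (x - c) / s ** 2
            s -= lr * err * y * (x - c) ** 2 / s ** 3
    return c, s

# Fit a "near an obstacle" membership from labelled distance readings
data = [(0.0, 1.0), (0.5, 0.8), (1.0, 0.4), (2.0, 0.05)]
print(tune(data))
```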