2013
DOI: 10.3390/robotics2030122

Reinforcement Learning in Robotics: Applications and Real-World Challenges

Abstract: In robotics, the ultimate goal of reinforcement learning is to endow robots with the ability to learn, improve, adapt and reproduce tasks with dynamically changing constraints based on exploration and autonomous learning. We give a summary of the state-of-the-art of reinforcement learning in the context of robotics, in terms of both algorithms and policy representations. Numerous challenges faced by the policy representation in robotics are identified. Three recent examples for the application of reinforcement…

Cited by 204 publications (110 citation statements)
References 41 publications
“…Reinforcement learning is a promising approach for dealing with the control of physical robots of ever-increasing hardware complexity [22], [23] through experience and observations. Q-learning is a popular model-free reinforcement learning algorithm that has been demonstrated to give good results for several robot tasks over the years.…”
Section: Q-learning Algorithm (mentioning)
confidence: 99%
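The Q-learning algorithm referred to in this citation can be summarized by its temporal-difference update rule. The sketch below is a minimal tabular illustration; the state labels, action names, and hyperparameters are assumptions chosen for the example, not values from the cited papers.

```python
# Minimal tabular Q-learning sketch (illustrative; the actions, states, and
# hyperparameters below are assumptions, not taken from the cited papers).
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1            # learning rate, discount, exploration
ACTIONS = ["forward", "turn_left", "turn_right"]  # hypothetical robot actions

Q = defaultdict(float)                            # Q[(state, action)] -> value estimate

def choose_action(state):
    """Epsilon-greedy selection over the tabular Q-function."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Q-learning temporal-difference update: model-free and off-policy."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Example of a single learning step with hypothetical values:
a = choose_action("near_obstacle")
update("near_obstacle", a, reward=-1.0, next_state="clear")
```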
“…It has been widely applied to the design of robot speed and orientation steering controllers for the following reasons: 1) the control rules are flexible and can simplify a complex system; 2) the controller can emulate human decision making; 3) it does not need a detailed model of the plant, replacing precise mathematical values in the description of the control system with ambiguous linguistic labels to design robust controllers. On the other hand, reinforcement learning, in particular Q-learning, shows good learning results in designing control inputs for robots performing constrained tasks without knowledge of the system dynamics [22], [23]. Approaches that combine type-1 fuzzy logic and Q-learning to optimize the consequent parts of fuzzy rules are promising because of their ease of implementation for mobile robot navigation [12]-[17], in which the Q-value serves as a cost for each navigation behavior.…”
Section: Introduction (mentioning)
confidence: 99%
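As a rough illustration of the combination described in this citation, where Q-learning tunes the consequent parts of type-1 fuzzy rules and Q-values act as costs/values for navigation behaviors, here is a hedged sketch of one common fuzzy Q-learning formulation. The membership functions, candidate actions, and parameters are hypothetical and not drawn from the cited works.

```python
# Sketch of fuzzy Q-learning for tuning rule consequents. Membership functions,
# candidate actions, and all parameters are illustrative assumptions.
import random

ACTIONS = [-0.5, 0.0, 0.5]            # candidate steering rates (rad/s), hypothetical
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def mu_near(d):                        # type-1 membership: "obstacle near"
    return max(0.0, min(1.0, 1.5 - d))

def mu_far(d):                         # type-1 membership: "obstacle far"
    return max(0.0, min(1.0, d - 0.5))

RULES = [
    {"mf": mu_near, "q": [0.0] * len(ACTIONS)},
    {"mf": mu_far,  "q": [0.0] * len(ACTIONS)},
]

def act(d):
    """Epsilon-greedily pick one consequent per rule, blend by firing strength."""
    weights, picks = [], []
    for rule in RULES:
        weights.append(rule["mf"](d))
        if random.random() < EPSILON:
            picks.append(random.randrange(len(ACTIONS)))
        else:
            picks.append(max(range(len(ACTIONS)), key=lambda k: rule["q"][k]))
    total = sum(weights) or 1.0
    command = sum(w * ACTIONS[i] for w, i in zip(weights, picks)) / total
    return command, weights, picks

def update(weights, picks, reward, next_d):
    """Share the temporal-difference error among the rules that fired."""
    total = sum(weights) or 1.0
    q_taken = sum(w * RULES[j]["q"][i] for j, (w, i) in enumerate(zip(weights, picks))) / total
    next_w = [rule["mf"](next_d) for rule in RULES]
    next_total = sum(next_w) or 1.0
    v_next = sum(w * max(rule["q"]) for w, rule in zip(next_w, RULES)) / next_total
    delta = reward + GAMMA * v_next - q_taken
    for j, (w, i) in enumerate(zip(weights, picks)):
        RULES[j]["q"][i] += ALPHA * (w / total) * delta
```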
“…EnRoCo is not the first approach to use model learning for controlling a robot. For example, approaches such as body-schema learning [13], learning forward models [14], motor babbling [15], and reinforcement learning of robot skills [16] employ machine learning techniques to help control a robot with unknown or uncertain kinematic/dynamic properties. However, unlike EnRoCo, all existing approaches ultimately rely on encoder (or joint-angle) feedback for estimating the robot state (e.g.…”
Section: Related Work (mentioning)
confidence: 99%
“…Such an approach to RL, which is called cooperative RL, is increasingly used by research labs around the world to solve real-world problems, such as robot control and autonomous navigation [4], [5]. This is because cooperative reinforcement learners can learn and converge faster than independent reinforcement learners by sharing information (e.g., Q-values, episodes, policies) [3], [6]-[8].…”
Section: Introduction (mentioning)
confidence: 99%
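One simple way to realize the information sharing mentioned in this citation is for independent tabular Q-learners to periodically merge their Q-values. The sketch below assumes an averaging rule and a toy agent class; both are illustrative choices, not the specific methods of the cited papers.

```python
# Sketch of a simple cooperative-RL scheme: independent tabular Q-learners that
# periodically share (average) their Q-values. The agent class, merge rule, and
# example states/actions are assumptions for illustration only.
from collections import defaultdict

class QAgent:
    def __init__(self, alpha=0.1, gamma=0.95):
        self.alpha, self.gamma = alpha, gamma
        self.Q = defaultdict(float)              # Q[(state, action)] -> value

    def update(self, state, action, reward, next_state, actions):
        """Standard independent Q-learning update for this agent."""
        best_next = max(self.Q[(next_state, a)] for a in actions)
        td_error = reward + self.gamma * best_next - self.Q[(state, action)]
        self.Q[(state, action)] += self.alpha * td_error

def share_q_values(agents):
    """Merge experience by averaging Q-values across agents (one possible rule)."""
    keys = set().union(*(agent.Q.keys() for agent in agents))
    for key in keys:
        avg = sum(agent.Q[key] for agent in agents) / len(agents)
        for agent in agents:
            agent.Q[key] = avg

# Usage: after some independent learning, agents pool their knowledge.
agents = [QAgent() for _ in range(3)]
agents[0].update("doorway", "forward", reward=1.0, next_state="hall",
                 actions=["forward", "turn_left"])
share_q_values(agents)
```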