Position control of a mobile robot using reinforcement learning (2020)
DOI: 10.1016/j.ifacol.2020.12.2093

Cited by 6 publications (3 citation statements) · References 13 publications
“…However, it has limited computational power to carry out computer vision tasks (it is not fitted with a GPU). To take advantage of the Khepera's potential mentioned above, an embedded system can be integrated that adds the ability to detect objects using ML models or even Deep Learning (DL), for example [36].…”
Section: Introduction
confidence: 99%
“…This is a challenging task due to the complexity of the spherical robot model. The developed model is controlled under different scenarios with several control algorithms implemented by the authors in previous studies, including Villela [24], IPC (integral proportional controller) [25], and reinforcement learning (RL) [26,27]. The experiments undertaken to test the robot model included investigation of position control, path following, and formation control.…”
Section: Introduction
confidence: 99%
“…This is a challenging task due to the complexity involved in the construction and implementation of control algorithms on a spherical robot. The developed model [23] is controlled in different scenarios with several control algorithms tested by the authors in previous studies [24], including Villela [25], IPC (integral proportional controller) [26], and reinforcement learning (RL) [27,28]. The experiments conducted to test the mobility of the real robot covered the robot design (both electronic and 3D), tuning of the position control, trajectory tracking, and a comparative analysis between the different scenarios.…”
confidence: 99%
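The citing works above apply RL-based position control alongside classical controllers. As a rough illustration of the idea (not the paper's actual formulation), the sketch below uses tabular Q-learning to drive an agent from a start cell to a target corner on a small grid; the grid size, reward values, and hyperparameters are all hypothetical choices for this toy example.

```python
import random

def train_q_position(episodes=500, grid=5, seed=0):
    """Toy tabular Q-learning position controller: learn to move an
    agent from (0, 0) to the opposite corner of a grid world.
    All parameters here are illustrative, not from the paper."""
    rng = random.Random(seed)
    target = (grid - 1, grid - 1)
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # E, W, N, S
    q = {}                                        # (state, action) -> value
    alpha, gamma, eps = 0.5, 0.95, 0.2            # learning rate, discount, exploration

    def step(s, m):
        # clamp the move to the grid; reward +10 at the target, -1 per step
        s2 = (min(max(s[0] + m[0], 0), grid - 1),
              min(max(s[1] + m[1], 0), grid - 1))
        return s2, (10.0 if s2 == target else -1.0), s2 == target

    for _ in range(episodes):
        s = (0, 0)
        for _ in range(4 * grid):                 # cap episode length
            a = (rng.randrange(4) if rng.random() < eps
                 else max(range(4), key=lambda i: q.get((s, i), 0.0)))
            s2, r, done = step(s, moves[a])
            best = max(q.get((s2, i), 0.0) for i in range(4))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best - old)
            s = s2
            if done:
                break

    # greedy rollout with the learned table
    s, path = (0, 0), [(0, 0)]
    while s != target and len(path) <= 4 * grid:
        a = max(range(4), key=lambda i: q.get((s, i), 0.0))
        s, _, _ = step(s, moves[a])
        path.append(s)
    return path

if __name__ == "__main__":
    print(train_q_position()[-1])   # final cell of the greedy rollout
```

In the cited works the state is the robot's continuous pose and the actions are wheel or pendulum commands, so function approximation replaces the table, but the update rule is the same temporal-difference step shown above.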