2021
DOI: 10.1016/j.ifacol.2021.08.586
Experimental Assessment of Deep Reinforcement Learning for Robot Obstacle Avoidance: A LPV Control Perspective

Cited by 6 publications (2 citation statements)
References 15 publications
“…When it comes to knowledge-based systems, artificial neural networks can be adapted in a suitable form. Reference 7 presented an experimental assessment of robot obstacle avoidance with deep reinforcement learning, relying on an equivalent linear parameter-varying (LPV) state-space representation of the system. Based on the measured distance between the robot and the obstacle, one of two operating modes is activated: one driven by joint positions and velocities, the other by velocity inputs alone. When an obstacle comes close to the robot, a switching mechanism engages the Deep Reinforcement Learning (DRL) algorithm, yielding a self-configuring architecture that handles objects moving randomly in the workspace.…”
Section: Review of References (mentioning)
confidence: 99%
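The switching architecture the citing papers describe can be summarized as: measure the robot–obstacle distance, and hand control to the DRL planner only when that distance falls below a threshold. The following is a minimal sketch of that logic; the names (`THRESHOLD`, `drl_policy`, `pv_controller`, the state keys) are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of the distance-based mode switch described in the
# citation statements. All names and the threshold value are assumptions.

THRESHOLD = 0.3  # assumed activation distance, in metres


def pv_controller(state):
    # Nominal mode: command computed from joint positions and velocities.
    return {"mode": "position-velocity", "command": state["q"]}


def drl_policy(state):
    # DRL mode: velocity-only input, used when an obstacle is near.
    return {"mode": "velocity", "command": state["dq"]}


def select_action(state, obstacle_distance):
    """Engage the DRL planner when the obstacle is within THRESHOLD."""
    if obstacle_distance < THRESHOLD:
        return drl_policy(state)
    return pv_controller(state)
```

The point of the sketch is only the self-configuring behavior: the same state is routed to a different controller depending on proximity, so no single policy has to cover both free-space motion and close-range avoidance.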
“…More specifically, a switching mechanism is embedded within a dual architecture, where the mode is determined based on the distance between the robot and the obstacles. The two modes may differ from each other in terms of, e.g., whether or not the joint positions are directly controlled, whereby the RL planner takes control when obstacles are perceived to have become too close to the robot [109].…”
Section: Reinforcement Learning (mentioning)
confidence: 99%