2023
DOI: 10.3390/robotics12010012
Simulated and Real Robotic Reach, Grasp, and Pick-and-Place Using Combined Reinforcement Learning and Traditional Controls

Abstract: The majority of robots in factories today are operated with conventional control strategies that require individual programming on a task-by-task basis, with no margin for error. As an alternative to the rudimentary operation planning and task-programming techniques, machine learning has shown significant promise for higher-level task planning, with the development of reinforcement learning (RL)-based control strategies. This paper reviews the implementation of combined traditional and RL control for simulated…

Cited by 16 publications (7 citation statements)
References 22 publications
“…The latter typically relies on implicit prediction of the same mechanics to generate accurate value estimates. These findings emphasize the superiority of explicitly integrating physical mechanics into the policy formulation process [7]. To address the issue of initial categorization with many groups, the researchers of [8] suggested a deep reinforcement learning (DRL) method based on the deep Q-network (DQN) technique.…”
Section: Literature Review
confidence: 92%
“…In [77], the authors introduce a novel pipeline that combines traditional control and reinforcement learning (RL) techniques for both simulated and real-world environments to validate RL methods across various scenarios, including reach, grasp, and pick-and-place tasks. Two algorithms, Soft Actor-Critic (SAC) and Proximal Policy Optimization (PPO), are employed in this study.…”
Section: RL Application
confidence: 99%
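The hybrid idea described in the statement above — a classical controller handling coarse motion while a learned policy refines the rest — can be illustrated with a deliberately minimal 1-D toy. This is an illustrative sketch only, not the paper's pipeline: the actual work uses SAC/PPO on simulated and real manipulators, while here the "policy" is a hand-coded stand-in and the switching radius is an assumed parameter.

```python
# Toy 1-D reach task: move an end-effector coordinate onto a target.
# Far from the goal, a traditional proportional controller acts;
# near the goal, control hands off to a placeholder "learned" policy.

def proportional_step(pos, target, gain=0.5):
    """Traditional control: a P-controller closes a fraction of the error."""
    return pos + gain * (target - pos)

def rl_policy(error):
    """Stand-in for a trained SAC/PPO policy: a small clipped corrective step."""
    return max(-0.05, min(0.05, error))

def hybrid_reach(start, target, switch_radius=0.2, steps=50):
    """Classical controller outside switch_radius, learned policy inside it."""
    pos = start
    for _ in range(steps):
        error = target - pos
        if abs(error) > switch_radius:
            pos = proportional_step(pos, target)
        else:
            pos += rl_policy(error)
    return pos
```

The switching structure is the point: the classical loop guarantees fast, predictable coarse motion, while the learned component only has to master the last few centimetres, which is where task-specific contact behaviour lives.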
“…For safety concerns, which could involve objects or the robot itself, avoiding damage is crucial, limiting the number of samples that can be collected. Moreover, the disparity between the simulation and the real-world environment remains a challenge, and most recent studies have faced this issue [75,76,77]. The samples used in the simulation environment could lead to a good policy that may not transfer well to the real world due to variations in the samples, necessitating the collection of more samples for fine-tuning.…”
Section: Challenges, Conclusion and Future Directions
confidence: 99%
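One common mitigation for the sim-to-real gap described above is domain randomization: simulator dynamics are resampled every episode so the policy cannot overfit to one inaccurate parameter set. The sketch below is a hedged illustration under assumed parameter names and ranges — none of them come from the cited papers.

```python
import random

def randomized_sim_params(rng):
    """Sample a plausible band around nominal (illustrative) dynamics values."""
    return {
        "object_mass": rng.uniform(0.08, 0.12),    # kg, nominal 0.10
        "friction": rng.uniform(0.4, 0.8),         # nominal 0.6
        "joint_damping": rng.uniform(0.9, 1.1),    # multiplier on nominal
    }

# Each training episode would reconfigure the simulator with a fresh draw.
rng = random.Random(0)
episodes = [randomized_sim_params(rng) for _ in range(3)]
```

A policy trained across such draws tends to transfer with less real-world fine-tuning, at the cost of slower convergence in simulation.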
“…Combining artificial intelligence and experience sharing in LFD is also one of the research hotspots in this field in recent years [32][33][34][35][36][37][38][39]. Kehoe et al. [40] proposed an algorithm based on deep neural networks.…”
Section: Related Work
confidence: 99%