2023
DOI: 10.3390/s23031513

Prehensile and Non-Prehensile Robotic Pick-and-Place of Objects in Clutter Using Deep Reinforcement Learning

Abstract: In this study, we develop a framework for an intelligent and self-supervised industrial pick-and-place operation for cluttered environments. Our target is to have the agent learn to perform prehensile and non-prehensile robotic manipulations to improve the efficiency and throughput of the pick-and-place task. To achieve this target, we specify the problem as a Markov decision process (MDP) and deploy a deep reinforcement learning (RL) temporal difference model-free algorithm known as the deep Q-network (DQN). …

Cited by 9 publications (8 citation statements); References 54 publications.
“…In [76], the authors introduce self-supervised deep reinforcement learning (DRL) for performing pick-and-place operations on objects of various shapes. In this framework, the agent learns how to perform a series of prehensile (grasping) and non-prehensile (left-right sliding) robotic manipulations using a model-free, off-policy DRL algorithm, specifically Q-learning.…”
Section: RL Application
confidence: 99%
“…For safety concerns, which could involve the objects or the robot itself, avoiding damage is crucial, which limits the number of samples that can be collected. Moreover, the disparity between simulation and the real-world environment remains a challenge that most recent studies have faced [75, 76, 77]. The samples used in the simulation environment could lead to a good policy that may not transfer well to the real world due to variations in the samples, necessitating the collection of more samples for fine-tuning.…”
Section: Challenges, Conclusion and Future Directions
confidence: 99%
“…A deep reinforcement learning (RL) framework for pick-and-place tasks in crowded industrial contexts is proposed by Imtiaz et al. [134]. The problem is formulated as a Markov decision process (MDP) and handled with a deep Q-network (DQN) composed of three fully convolutional networks (FCNs) based on the DenseNet-121 architecture.…”
Section: Deep RL for Robotic Manipulation
confidence: 99%
“…To improve efficiency and throughput, it may be beneficial to explore Double Q-learning and Dueling Q-learning variants in the future. In [82], the authors introduced a loss function named the RL-scene consistency loss, which is used to ensure that image translation is invariant with respect to the associated Q-values [91]. Ho et al. proposed RetinaGAN, a GAN-based approach to achieve consistency in object detection when adapting simulated images to realistic ones [92].…”
Section: Simulation-to-Real-World Transfer
confidence: 99%