Active Affordance Exploration for Robot Grasping
2019 · DOI: 10.1007/978-3-030-27541-9_35

Cited by 11 publications (5 citation statements)
References 17 publications
“…The robot only focused on grasping objects aligned with the bin wall or boundary, resulting in poor success rates. Separately, to grasp objects placed in well-organized shapes, Chen et al. [13] employed a Deep Q-Network (DQN) to guide the robot in actively exploring highly randomly placed objects until a suitable grasp affordance map was generated. This data-driven deep reinforcement learning method selects many improper grasping points due to insufficient training cases, and suffers from time-consuming training iterations and low grasping efficiency and success rates.…”
Section: Related Work
Confidence: 99%
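The DQN-driven active exploration described in the statement above can be sketched as a simple loop: the robot keeps selecting exploratory push actions until the affordance map yields a confident grasp. This is a toy illustration, not the cited authors' implementation; `affordance_confidence`, the `q_values` stub, and the simulated push dynamics are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def affordance_confidence(scene):
    """Toy stand-in for a grasp affordance map: returns the best
    pixel-wise grasp score for the current scene state."""
    return float(scene.max())

def q_values(scene, n_actions=4):
    """Stub Q-network scoring each candidate push direction.
    A real DQN would be a convolutional network over the depth image."""
    return np.array([scene.mean() + 0.1 * a for a in range(n_actions)])

def active_exploration(scene, threshold=0.8, max_steps=20, eps=0.1):
    """Push objects apart (epsilon-greedily) until the affordance map
    yields a grasp score above `threshold`; returns the step count."""
    for step in range(max_steps):
        if affordance_confidence(scene) >= threshold:
            return step  # a suitable grasp point was found
        q = q_values(scene)
        action = int(rng.integers(len(q))) if rng.random() < eps else int(q.argmax())
        # Simulated push: separating objects raises the best grasp score.
        scene = scene + 0.05 * (action + 1) / len(q)
    return max_steps
```

The loop mirrors the criticism in the quote: when the learned `q_values` are poorly trained, many selected pushes improve the scene only marginally, so reaching a confident affordance map takes many iterations.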
“…Their approach was aimed at assisting the robotic hand in actively exploring the environment until the optimal affordance map was obtained. The study [61] was adapted from their previous works [62, 63]; however, in [61] they implemented tactile sensing to enhance grasping performance and improve grasp efficiency, and provided more experimental tests.…”
Section: Critical Review
Confidence: 99%
“…In this approach, the object region was estimated by the density-based spatial clustering of applications with noise (DBSCAN) algorithm, and a depth difference image (DDI), representing the depth difference between adjacent areas, was defined. Different frameworks have been presented for grasping objects in clutter, such as an active affordance exploration framework that leverages the affordance map and active exploration [129], integrating perception, action selection, and manipulation policies to address a version of the Mechanical Search problem [130], an actor model with a neural network combining Gaussian mixtures and normalizing flows [131], joint learning of instance and semantic segmentation for robotic pick-and-place with heavy occlusions in clutter [132], and predicting grasp quality and pose pixel-wise from depth images with the U-Grasping fully convolutional neural network (UG-Net) [133].…”
Section: B. Suction and Multifunctional Grasping
Confidence: 99%
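The perception step described above (DBSCAN clustering of object regions plus a depth difference image) can be illustrated with a minimal sketch. Both helpers here are simplified assumptions: the `dbscan` routine is a bare-bones version of the algorithm (a real pipeline would use `sklearn.cluster.DBSCAN`, and this sketch skips the core-point check during cluster expansion), and `depth_difference_image` is one plausible reading of the DDI definition in the quote.

```python
import numpy as np

def depth_difference_image(depth):
    """Depth difference image (DDI): magnitude of depth change between
    adjacent pixels, which highlights object boundaries in clutter."""
    dy = np.abs(np.diff(depth, axis=0, prepend=depth[:1]))
    dx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    return np.maximum(dx, dy)

def dbscan(points, eps=1.5, min_pts=2):
    """Minimal DBSCAN over 2-D points; label -1 marks noise.
    Illustrative only: border points are absorbed during expansion
    without re-checking the core-point condition."""
    n = len(points)
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        neighbors = [j for j in range(n)
                     if np.linalg.norm(points[i] - points[j]) <= eps]
        if len(neighbors) < min_pts:
            continue  # not a core point; stays noise unless reached later
        labels[neighbors] = cluster
        frontier = list(neighbors)
        while frontier:  # expand the cluster from the core point
            k = frontier.pop()
            more = [j for j in range(n)
                    if labels[j] == -1
                    and np.linalg.norm(points[k] - points[j]) <= eps]
            labels[more] = cluster
            frontier.extend(more)
        cluster += 1
    return labels
```

In a clutter-grasping pipeline of this kind, the DDI would expose object boundaries in the depth map, and clustering the resulting foreground points would yield one region per object candidate.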