2019 International Conference on Robotics and Automation (ICRA)
DOI: 10.1109/icra.2019.8793779
Using Geometric Features to Represent Near-Contact Behavior in Robotic Grasping

Cited by 5 publications (3 citation statements)
References 14 publications
“…Newbury et al. [49] used two Convolutional Neural Networks (CNNs) to estimate both placement rotations and stabilities and to obtain human-preferred object placements and orientations. CNNs have also been used to estimate grasp configurations [50] and to predict grasp qualities [51]. Feng et al. [52] used a Support Vector Machine (SVM) and a Long Short-Term Memory (LSTM) model to analyze the features of tactile sensors to detect slip and unstable grasps.…”
Section: B. Placement Estimation (mentioning)
confidence: 99%
“…In light of this, we propose a supervised learning method to learn grasping movements according to the shape of the object. Previous studies have shown that the size of the object along the grasped dimension is closely related to grasping postures (Dessalene et al., 2019; Starke et al., 2020). Therefore, this was used as a shape descriptor to establish the relationship with grasping movements.…”
Section: Learning Postural Synergies (mentioning)
confidence: 99%