2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids)
DOI: 10.1109/humanoids.2017.8246880
The complexities of grasping in the wild

Abstract: The recent ubiquity of high-framerate (120 fps and higher) handheld cameras creates the opportunity to study human grasping at a greater level of detail than normal-speed cameras allow. We first collected 91 slow-motion interactions with objects in a convenience-store setting. We then annotated the actions through the lenses of several existing manipulation taxonomies. We found that manipulation, particularly the process of forming a grasp, is complex and proceeds quickly. Our dataset shows that there are many …
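The abstract describes annotating slow-motion clips with taxonomy labels over specific frame ranges. The paper does not publish its annotation schema, so the record format below is purely a hypothetical sketch of what one entry in such a dataset might look like; all field names are assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical annotation record; the paper does not specify its actual schema.
@dataclass
class GraspAnnotation:
    clip_id: str      # identifier of one recorded interaction (name is illustrative)
    fps: int          # capture frame rate (the paper uses 120 fps and higher)
    start_frame: int  # first frame of the annotated grasp-formation phase
    end_frame: int    # last frame of the annotated grasp-formation phase
    labels: list = field(default_factory=list)  # taxonomy labels applied to the span

    def duration_s(self) -> float:
        """Duration of the annotated span in seconds."""
        return (self.end_frame - self.start_frame) / self.fps

# Example: a 60-frame grasp-formation phase captured at 120 fps lasts 0.5 s,
# which is the kind of fast event normal-speed cameras blur over.
ann = GraspAnnotation("store_clip_007", 120, 36, 96, ["precision grasp"])
print(ann.duration_s())  # -> 0.5
```

High frame rates matter here precisely because grasp formation "proceeds quickly": at 120 fps, a half-second event still yields 60 frames to annotate.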

Cited by 22 publications (14 citation statements) · References 25 publications
“…Human grasp and contact. There is a large body of work on capturing and recognizing human grasps [13,22,31,56,64,92]. Recently, [22] introduced a stretch-sensing soft glove to capture accurate hand pose without extra optical sensors.…”
Section: Related Work
confidence: 99%
“…Feix's GRASP taxonomy is well structured and has been widely used, for example to help determine anthropomorphic hand capabilities (Feix et al, 2012), to guide robotic hand design (Xiong et al, 2016), and in experiments analyzing human hand functionality (Juravle et al, 2011; Tessitore et al, 2013). Pollard attempted to refine the earlier taxonomies (Abbasi et al, 2016) and used high-frame-rate handheld cameras to encode shelf picking-and-placing actions (Nakamura et al, 2017). Compared with these extensive investigations of static human grasp functionality, relatively little effort has been devoted to understanding human manipulative functionality.…”
Section: Side (C)
confidence: 99%
“…This is the first hand prehensile taxonomy describing human prehensile functionality, covering both stable holds and within-hand manipulation from a comprehensive view. To ensure comprehensiveness, we compared 29 investigations of human hand prehension, covering neuroscience (Jakobson and Goodale, 1991; Rosenbaum et al, 1996; Santello et al, 1998; Smeets and Brenner, 1999; Cohen and Rosenbaum, 2004; Schieber and Santello, 2004; Ansuini et al, 2008; Bullock et al, 2012; Touvet et al, 2014), hand surgery and rehabilitation (Schlesinger, 1919; Napier, 1956; Kamakura et al, 1980; Light et al, 2002), anatomy (Preuschoft and Chivers, 2012; Santello et al, 2013), biomechanics (Kamper et al, 2003), and robotics (Elliott and Connolly, 1984; Iberall, 1987, 1997; Cutkosky, 1989; Juravle et al, 2011; Feix et al, 2012, 2014a, b, 2015, 2020; Bullock et al, 2013a, b; Tessitore et al, 2013; Zhan and Liu, 2013; Abbasi et al, 2016; Nakamura et al, 2017). For understanding human static grasp ability, Feix provided a strong foundation (Feix et al, 2015), building a grasp taxonomy that covers human static grasping functionality and has been widely used across multiple domains.…”
Section: Classifying Hand Movement Functionality
confidence: 99%
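The citation statements above treat grasp taxonomies as classification schemes for hand actions. As a minimal sketch of how such a taxonomy can be applied programmatically during annotation, the lookup below maps fine-grained grasp labels to a coarse category. The top-level split into power / intermediate / precision follows Feix's GRASP taxonomy; the specific grasp names listed are only illustrative examples, not the full 33-type taxonomy.

```python
# Coarse-category lookup sketched from Feix's GRASP taxonomy top-level split.
# Only a few example entries are shown; a real annotation tool would list
# every grasp type in the taxonomy it implements.
TAXONOMY = {
    "large diameter": "power",
    "lateral": "intermediate",
    "tripod": "precision",
}

def coarse_category(grasp_label: str) -> str:
    """Map a fine-grained grasp label to its coarse taxonomy category."""
    return TAXONOMY.get(grasp_label.lower(), "unlabeled")

print(coarse_category("Tripod"))  # -> precision
```

Keeping the taxonomy as data rather than code makes it easy to swap in a different taxonomy (e.g. one extended with within-hand manipulation categories) without changing the annotation pipeline.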
“…Straightforward grasp planning may involve high-level motion control and high-resolution sensor information. For humans, contact-guided placing is standard, and error recovery, when needed at all, is quick [16]. For robots, precise prediction is difficult because of the limited precision of the camera, the limited object information gained from sensors, and the complexities of robot control.…”
Section: Rigid-Soft Interactive Learning
confidence: 99%