2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00989

Understanding Human Hands in Contact at Internet Scale

Cited by 199 publications (301 citation statements). References 23 publications.

“…without hands, we eliminated frames in which hands appeared by using a hand detector [50] and then used the remaining frames for rendering.…”
Section: Discussion
confidence: 99%
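
As a rough illustration of the filtering step described in this quote, the sketch below drops every frame in which a hand is detected and keeps the rest for rendering. The `detect_hands` callable and the score threshold are assumptions for illustration; they are not the interface of the detector cited as [50].

```python
# Minimal sketch: drop frames that contain hands before rendering.
# `detect_hands` is any callable wrapping a hand detector; it is assumed
# to return (box, score) pairs for a given frame.
from typing import Callable, Iterable, List, Tuple

Box = Tuple[float, float, float, float]              # x1, y1, x2, y2
HandDetector = Callable[[object], List[Tuple[Box, float]]]

def frames_without_hands(frames: Iterable,
                         detect_hands: HandDetector,
                         score_thresh: float = 0.5) -> List:
    """Keep only frames in which no hand is detected above the threshold."""
    kept = []
    for frame in frames:
        detections = detect_hands(frame)
        if not any(score >= score_thresh for _, score in detections):
            kept.append(frame)
    return kept
```
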
“…Hand-object interaction. Jointly reconstructing the hand and object has been studied with both RGB and RGB-D input [62,71,72,73,78,83,86,87,88,89]. Recently, Hasson et al. [27,29] achieved promising results in explicitly modeling contact by combining the parametric hand model MANO [74] with a mesh-based representation of the object.…”
Section: Related Work
confidence: 99%
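
As a loose sketch of what explicit contact modeling between a MANO hand mesh and an object mesh can involve, the snippet below computes a toy contact term that pulls hand vertices already near the object surface onto it. The vertex shapes, threshold, and loss form are illustrative assumptions and do not reproduce the losses used by Hasson et al. [27,29].

```python
import numpy as np

def contact_loss(hand_verts: np.ndarray, obj_verts: np.ndarray,
                 contact_thresh: float = 0.005) -> float:
    """Toy contact term between a hand mesh and an object mesh.

    hand_verts: (Nh, 3) vertices, e.g. produced by a MANO layer (assumed).
    obj_verts:  (No, 3) vertices of the object mesh.
    Hand vertices within `contact_thresh` (in mesh units) of the object
    contribute their remaining distance to the loss.
    """
    # Pairwise distances between every hand vertex and every object vertex.
    diffs = hand_verts[:, None, :] - obj_verts[None, :, :]   # (Nh, No, 3)
    dists = np.linalg.norm(diffs, axis=-1)                   # (Nh, No)
    nearest = dists.min(axis=1)                               # (Nh,)
    # Only vertices already close to the object contribute to the term.
    in_contact = nearest < contact_thresh
    return float(nearest[in_contact].sum())
```
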
“…Lee et al. [19,20] proposed using hands as a guide to identify an object of interest in a photo taken by a person with visual impairment. Shan et al. [31] collected a large-scale dataset of hand-object interaction with annotated bounding boxes of hands and the objects they are in contact with. Their system detects, from a single image, hands and the objects in contact with them.…”
Section: Objects and Hands in First-person Videos
confidence: 99%
“…We use the state-of-the-art hand-held object detection algorithm [31], trained on a large-scale image dataset of hand-object interaction collected from first-person video datasets [7,23,33]. Given a video frame, it produces bounding boxes of the hands, their contact state (self-contact, other person, portable object, or static object), and the objects they are manipulating (see Figure 5 (a)).…”
Section: Hand-held Object Detection
confidence: 99%
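
The per-frame predictions described in this quote (hand box, contact state, and the manipulated object's box) can be held in a small record; the hypothetical container below simply mirrors that description. Field and enum names are assumptions, not the detector's actual output format.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2 in image coordinates

class ContactState(Enum):
    """Contact states listed in the quote above, plus an assumed no-contact state."""
    NO_CONTACT = 0
    SELF_CONTACT = 1
    OTHER_PERSON = 2
    PORTABLE_OBJECT = 3
    STATIC_OBJECT = 4

@dataclass
class HandDetection:
    hand_box: Box                        # bounding box of the detected hand
    contact_state: ContactState          # predicted contact state
    object_box: Optional[Box] = None     # box of the manipulated object, if any
    score: float = 0.0                   # detection confidence
```
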