2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr52688.2022.01292

Capturing and Inferring Dense Full-Body Human-Scene Contact

Cited by 77 publications (29 citation statements)
References 65 publications
“…Recent approaches have begun to tackle modeling and synthesizing human interactions within 3D scenes or with objects. Most research focuses on statically posing humans within a given 3D environment [16,24,69,71], generating human-scene interaction poses from various types of input, including object semantics [17], images [21,23,64,65,68], and text descriptions [49,72].…”
Section: Related Work (mentioning)
confidence: 99%
“…Most related to our work are two recent datasets. The RICH dataset [21] fits SMPL-X bodies to multi-view RGB videos taken both indoors and outdoors. The method uses a detailed 3D scan of the scene and models the contact between the body and the world.…”
Section: Related Work (mentioning)
confidence: 99%
“…This makes these datasets ill-suited as evaluation benchmarks. The recent RICH dataset [21] addresses many of these issues with indoor and outdoor scenes, accurate multi-view capture of SMPL-X, 3D scene scans, and human-scene contact. It is not appropriate for our task, however, as it does not include object manipulation.…”
Section: Joint Modeling of Humans and Scenes (mentioning)
confidence: 99%
“…However, the contact sequence is usually planned or annotated manually based on experience. On the other hand, especially in the computer vision and graphics fields, many datasets incorporating contacts are available, for example, BEHAVE (Bhatnagar et al., 2022), RICH (Huang et al., 2022), HuMoD (Wojtusch and von Stryk, 2015), and MMM (Mandery et al., 2016). While they are very useful for broadening the range of motion creation for animation or digital humans, data with contact forces still need to be reinforced, and their physical plausibility has yet to be improved for robotic motion generation.…”
Section: Challenges for Contact-Rich Motions Through ICPH (mentioning)
confidence: 99%