2022
DOI: 10.1007/978-3-031-20071-7_33
HuMMan: Multi-modal 4D Human Dataset for Versatile Sensing and Modeling

Cited by 44 publications (11 citation statements)
References 107 publications
“…This is due to the large amount of data necessary to sufficiently sample the model space. However, recent progress in dataset acquisition [HYH*20, CRZ*22] may now enable the building of such a model. Another unsolved fundamental problem is the tracking of topological changes (e.g.…”
Section: State-of-the-art Methods
confidence: 99%
“…In fact, using the skill label as a condition brings a lot of flexibility, which makes our method compatible with various input sources. For instance, methods that directly regress the skeleton rotation [Shi et al.; Cai et al. 2022; Dou et al. 2022; Li et al. 2021b, 2023; Wan et al. 2021] of the parametric model, e.g., SMPL [Loper et al. 2015], from video can be directly applied to our labeling. Moreover, skill labels are typically aligned with the language of skill descriptions, which could open the door to language-guided motion control.…”
Section: A2 Skill Label
confidence: 99%
“…Currently there exist many datasets providing RGB images and 3D pose annotations in both single-person and multi-person scenarios. Given the ease of data collection, there are numerous single-person 2D [Joo et al. 2021; Kolotouros et al. 2019; Lin et al. 2014] and 3D datasets [Cai et al. 2022; Chatzitofis et al. 2020; Fieraru et al. 2021a,b; Ionescu et al. 2014; Mehta et al. 2017; Ofli et al. 2013; Sigal et al. 2010; Trumble et al. 2017; Von Marcard et al. 2018; Yoon et al. 2021], featuring diverse actors, actions, and modalities. However, training and validation data are relatively scarce in multi-person scenarios.…”
Section: Related Work
confidence: 99%