From a 2D video of a person in action, human mesh recovery aims to infer the 3D human pose and shape frame by frame. Despite progress in video-based human pose and shape estimation, it remains challenging to guarantee high accuracy and smoothness simultaneously. To tackle this problem, we propose Video2mesh, a temporal network based on a temporal convolutional transformer (TConvTransformer) that recovers accurate and smooth human meshes from 2D video. The temporal convolution block achieves sequence-level smoothness by aggregating image features from adjacent frames. The subsequent multi-attention transformer improves accuracy by attending over multiple subspaces to produce a better middle-frame feature representation. In addition, we introduce a TConvTransformer discriminator that is trained jointly with our 3D human mesh temporal encoder; it further improves accuracy and smoothness by constraining pose and shape to a more reliable space learned from the AMASS dataset. Extensive experiments on three standard benchmark datasets show that Video2mesh outperforms other state-of-the-art methods in both accuracy and smoothness.
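The core idea of the temporal convolution block, aggregating each frame's features with those of its neighbors to suppress frame-to-frame jitter, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the window size, feature dimension, and simple averaging (in place of learned convolution weights) are all assumptions for illustration.

```python
import numpy as np

def temporal_aggregate(features, window=3):
    """Average each frame's feature with its neighbors: a hand-rolled
    stand-in for a learned temporal convolution (window size assumed)."""
    T, D = features.shape
    pad = window // 2
    # Edge-pad so the first and last frames still get a full window.
    padded = np.pad(features, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([padded[t:t + window].mean(axis=0) for t in range(T)])

# Synthetic per-frame features: a smooth trajectory plus per-frame noise.
rng = np.random.default_rng(0)
feats = np.cumsum(rng.normal(size=(10, 4)), axis=0) \
        + rng.normal(scale=0.5, size=(10, 4))
smooth = temporal_aggregate(feats)

# "Jitter" as the mean absolute frame-to-frame change drops after aggregation.
jitter_before = np.abs(np.diff(feats, axis=0)).mean()
jitter_after = np.abs(np.diff(smooth, axis=0)).mean()
```

Averaging adjacent frames attenuates independent per-frame noise while preserving the slow trajectory, which is why temporal aggregation yields sequence-level smoothness.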
Multi-person novel view synthesis aims to generate free-viewpoint videos of dynamic scenes containing multiple people. However, current methods require numerous views to reconstruct a dynamic person and perform well only when a single person is present in the video. This paper aims to reconstruct a multi-person scene from fewer views, in particular addressing the occlusion and interaction problems that arise in multi-person scenes. We propose MP-NeRF, a practical method for multi-person novel view synthesis from sparse cameras without pre-scanned human template models. We apply a multi-person SMPL template as the identity and human-motion prior, and then build a global latent code to integrate the relative observations among multiple people, allowing us to represent multiple dynamic people as separate neural radiance representations from sparse views. Experiments on the multi-person dataset MVMP show that our method outperforms other state-of-the-art methods.
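A central difficulty the abstract mentions is occlusion between people: when several radiance representations overlap along a camera ray, their contributions must be composited in depth order. The sketch below shows the standard NeRF volume-rendering integral applied to samples merged from two per-person fields; summing densities and density-weighting colors at shared sample points is one common heuristic, and MP-NeRF's exact compositing may differ. All names and the toy two-person setup are assumptions.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Standard NeRF alpha compositing along one ray.
    densities: (N,), colors: (N, 3), deltas: (N,) sample spacings."""
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0), weights

def merge_people(per_person_density, per_person_color):
    """Merge P per-person fields at N shared samples by summing densities
    and density-weighting colors (a common heuristic, not MP-NeRF's exact rule).
    per_person_density: (P, N), per_person_color: (P, N, 3)."""
    total = per_person_density.sum(axis=0)                      # (N,)
    w = per_person_density / np.clip(total, 1e-8, None)[None]   # (P, N)
    color = (w[:, :, None] * per_person_color).sum(axis=0)      # (N, 3)
    return total, color

# Toy example: person 0 (red) occupies the nearer samples, person 1 (blue)
# the farther ones, so the composited pixel should be dominated by red.
d = np.array([[2.0, 2.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 2.0]])
c = np.zeros((2, 4, 3)); c[0, :, 0] = 1.0; c[1, :, 2] = 1.0
dens, col = merge_people(d, c)
pixel, weights = composite_ray(dens, col, np.full(4, 0.5))
```

Because transmittance decays through the nearer person's density, the farther person receives little compositing weight, which is how the rendering model itself resolves inter-person occlusion.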