Proceedings of the 29th ACM International Conference on Multimedia 2021
DOI: 10.1145/3474085.3475412
iButter: Neural Interactive Bullet Time Generator for Human Free-viewpoint Rendering

Abstract (Figure 1 caption): Our neural interactive bullet-time generator (iButter) enables convenient, flexible, and interactive design of human bullet-time visual effects from dense RGB streams, and achieves high-quality, photo-realistic human performance rendering along the designed trajectory.
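For illustration only, the sketch below shows one generic way a bullet-time camera trajectory around a performer could be parameterized: virtual viewpoints are sampled along a circular arc and each camera is aimed at a fixed look-at target. This is not the paper's trajectory-design tool; the functions and parameters (look_at, bullet_time_path, radius, height) are hypothetical.

```python
# Illustrative sketch only (not the paper's implementation): building a smooth
# "bullet-time" camera path around a subject by sampling viewpoints on a
# circular arc and aiming every camera at a fixed look-at target.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a world-to-camera rotation and translation from a camera position."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    # Rows are the camera axes expressed in world coordinates (camera looks down -z).
    rot = np.stack([right, true_up, -forward])
    return rot, -rot @ eye

def bullet_time_path(center, radius, height, start_deg, end_deg, n_frames):
    """Sample camera poses along an arc around `center` (the performer)."""
    angles = np.radians(np.linspace(start_deg, end_deg, n_frames))
    poses = []
    for a in angles:
        eye = center + np.array([radius * np.cos(a), height, radius * np.sin(a)])
        poses.append(look_at(eye, center))
    return poses

# Example: a 90-degree sweep of 60 virtual cameras around the subject.
trajectory = bullet_time_path(center=np.zeros(3), radius=2.5, height=1.6,
                              start_deg=0.0, end_deg=90.0, n_frames=60)
```

A renderer would then be queried at each pose along such a path; a user-designed trajectory could instead spline-interpolate a few hand-placed keyframe cameras rather than use a fixed arc.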

Cited by 17 publications (5 citation statements)
References 129 publications (212 reference statements)
“…Some methods leveraged the additional temporal information to perform novel-view synthesis from a single video of a moving camera instead of large collections of multi-view images [12,16,28,44,46,56,65,69]. Among these, the reconstruction of humans also gained increasing interest where morphable [16] and implicit generative models [69], pre-trained features [59], or deformation fields [44,65] were employed to regularize the reconstruction. Furthermore, TöRF [1] used time-of-flight sensor measurements as an additional source of information and DyNeRF [27] learned time-dependent latent codes to constrain the radiance field.…”
Section: Dynamic Scene Representations (mentioning, confidence: 99%)
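As a rough illustration of the "time-dependent latent codes" mentioned in the excerpt above, the sketch below conditions a radiance-field MLP on a learned per-frame embedding. It is a minimal conceptual example, not DyNeRF's actual architecture; the class name, layer widths, and latent dimension are assumptions.

```python
# Minimal conceptual sketch: each captured frame owns a learned latent code
# that is concatenated with the spatial input of a radiance-field MLP, so the
# same network can represent a time-varying scene.
import torch
import torch.nn as nn

class TimeConditionedRadianceField(nn.Module):
    def __init__(self, n_frames, latent_dim=32, hidden=128):
        super().__init__()
        # One learnable latent code per captured frame.
        self.frame_codes = nn.Embedding(n_frames, latent_dim)
        self.trunk = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        # Color additionally depends on the viewing direction.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, view_dir, frame_idx):
        z = self.frame_codes(frame_idx)              # (B, latent_dim)
        h = self.trunk(torch.cat([xyz, z], dim=-1))  # (B, hidden)
        sigma = torch.relu(self.density_head(h))     # non-negative density
        rgb = self.color_head(torch.cat([h, view_dir], dim=-1))
        return sigma, rgb

# Querying 1024 sample points from frame 5 of the sequence.
model = TimeConditionedRadianceField(n_frames=300)
xyz = torch.rand(1024, 3)
view_dir = torch.nn.functional.normalize(torch.rand(1024, 3), dim=-1)
sigma, rgb = model(xyz, view_dir, torch.full((1024,), 5, dtype=torch.long))
```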
“…One track of literature [38,101] leveraged neural rendering techniques [83]. Meshes [71,101,4,3] or point clouds [86] are commonly chosen as explicit representations. Moreover, fine-grained geometry and textures are learned by neural networks.…”
Section: Related Work (mentioning, confidence: 99%)
“…Moreover, fine-grained geometry and textures are learned by neural networks. However, these methods are either only applicable to novel view synthesis [86] or restricted to self-rotation video captures [4,3]. Besides, the neural renderers have limitations, e.g., texture stitching [36,37] and textures baked into the renderer.…”
Section: Related Work (mentioning, confidence: 99%)
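To make the "textures baked into the renderer" limitation above concrete, the following sketch follows the generic neural-texture idea used by many mesh-based neural renderers: a learnable feature texture is sampled at rasterized UV coordinates and decoded to RGB by a small network, so appearance is tied to the learned texture and decoder rather than to an ordinary editable texture map. This is a hypothetical example, not code from any cited method.

```python
# Conceptual sketch of a neural-texture renderer: per-texel learned features
# are sampled at rasterized UV coordinates and decoded to pixel colors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeuralTextureRenderer(nn.Module):
    def __init__(self, tex_res=512, feat_dim=16):
        super().__init__()
        # Learnable feature texture, analogous to a classical RGB texture map.
        self.neural_texture = nn.Parameter(torch.randn(1, feat_dim, tex_res, tex_res) * 0.01)
        # Tiny decoder turning sampled features into pixel colors.
        self.decoder = nn.Sequential(
            nn.Conv2d(feat_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 1), nn.Sigmoid(),
        )

    def forward(self, uv_map):
        # uv_map: (B, H, W, 2) rasterized UV coordinates in [-1, 1].
        feats = F.grid_sample(self.neural_texture.expand(uv_map.shape[0], -1, -1, -1),
                              uv_map, align_corners=True)
        return self.decoder(feats)  # (B, 3, H, W) rendered image

# Example: decode a 256x256 image from a (dummy) rasterized UV map.
renderer = NeuralTextureRenderer()
uv = torch.rand(1, 256, 256, 2) * 2 - 1
image = renderer(uv)
```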
“…Our approach is, to the best of our knowledge, the first neural representation that enables real-time dynamic rendering and editing. To demonstrate the overall performance of our approach, we compare to existing free-viewpoint video methods based on neural rendering, including the implicit methods NeuS [Wang et al. 2021a], iButter [Wang et al. 2021b], ST-NeRF [Zhang et al. 2021b], and Neural Body [Peng et al. 2021] based on neural radiance fields. Note that NeuS only supports static scenes, so we compare single-frame performance with it; the remaining methods support dynamic scenes, so we compare over the whole sequence.…”
Section: Rendering Comparisons (mentioning, confidence: 99%)